Lambda: Serverless Computing
What if you could run code without thinking about servers at all? AWS Lambda lets you do exactly that. Just upload your code, and AWS handles everything else - provisioning, scaling, patching, and maintaining servers. In this lesson, we'll explore serverless computing with Lambda.
What You'll Learn
By the end of this lesson, you'll understand how Lambda works, when to use it, how to create and test functions, and the pricing model that makes serverless cost-effective.
What is Serverless?
Serverless doesn't mean there are no servers - it means you don't manage them. The cloud provider handles all infrastructure concerns:
- Server provisioning and maintenance
- Operating system patching
- Capacity planning and scaling
- Availability and fault tolerance
You focus entirely on your code.
Serverless vs. Traditional
| Aspect | EC2 (Traditional) | Lambda (Serverless) |
|---|---|---|
| Server management | You manage | AWS manages |
| Scaling | You configure | Automatic |
| Billing | Per hour/second running | Per request + duration |
| Idle costs | Pay while running | Zero when idle |
| Cold starts | None (always running) | Possible delay on first request |
What is AWS Lambda?
AWS Lambda is an event-driven compute service. You upload your code (a function), and Lambda runs it in response to events:
- HTTP requests via API Gateway
- File uploads to S3
- Messages in an SQS queue
- Database changes in DynamoDB
- Scheduled events (cron jobs)
- And many more
How Lambda Works
1. You upload your code - Package and upload your function code
2. Configure a trigger - Choose which event should run your function
3. Event occurs - Something triggers your function
4. Lambda runs your code - Lambda spins up an execution environment
5. Function returns - Output goes to the configured destination
6. You're charged - Only for the actual execution time
Lambda Execution Model
```
Event → Lambda Service → Execution Environment → Your Code → Response
                                   ↑
                    (created on demand, reused when possible)
```
Supported Runtimes
Lambda supports many programming languages:
| Runtime | Languages |
|---|---|
| Node.js | JavaScript, TypeScript |
| Python | Python 3.8, 3.9, 3.10, 3.11, 3.12 |
| Java | Java 8, 11, 17, 21 |
| .NET | C#, PowerShell |
| Go | Go |
| Ruby | Ruby |
| Custom | Any language via custom runtime |
Creating Your First Lambda Function
Let's create a simple Lambda function using the AWS Console.
Step 1: Open Lambda Console
- Go to AWS Console
- Search for "Lambda"
- Click "Create function"
Step 2: Configure Function
- Choose "Author from scratch"
- Function name: hello-world
- Runtime: Python 3.12
- Architecture: x86_64 (or arm64 for better price/performance)
- Click "Create function"
Step 3: Write Your Code
Replace the default code with:
```python
import json

def lambda_handler(event, context):
    # Get name from event, default to "World"
    name = event.get('name', 'World')
    message = f"Hello, {name}! Welcome to AWS Lambda."

    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': message
        })
    }
```
Click "Deploy" to save your changes.
Step 4: Test Your Function
- Click "Test"
- Configure test event:
```json
{
  "name": "Lambda Learner"
}
```
- Click "Test"
You should see output like:
```json
{
  "statusCode": 200,
  "body": "{\"message\": \"Hello, Lambda Learner! Welcome to AWS Lambda.\"}"
}
```
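You can also invoke the function outside the console. Here's a minimal sketch using boto3, assuming your AWS credentials are configured locally and the function is named hello-world as above:

```python
import json
import boto3

client = boto3.client('lambda')

response = client.invoke(
    FunctionName='hello-world',
    Payload=json.dumps({'name': 'Lambda Learner'}),
)

# The response Payload is a streaming body containing the function's return value
result = json.loads(response['Payload'].read())
print(result['body'])
```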
Understanding the Handler
Every Lambda function has a handler - the entry point for your code.
Python Handler
```python
def lambda_handler(event, context):
    # event: Input data (JSON converted to dict)
    # context: Runtime information (function name, memory, time remaining)
    response = {'statusCode': 200, 'body': 'OK'}
    return response
```
Node.js Handler
```javascript
exports.handler = async (event, context) => {
    // event: Input data
    // context: Runtime information
    const response = { statusCode: 200, body: 'OK' };
    return response;
};
```
The Event Object
The event contains input data. Its structure depends on the trigger:
API Gateway event (HTTP request):
```json
{
  "httpMethod": "POST",
  "path": "/users",
  "body": "{\"name\": \"John\"}",
  "headers": { "Content-Type": "application/json" }
}
```
S3 event (file upload):
```json
{
  "Records": [{
    "s3": {
      "bucket": { "name": "my-bucket" },
      "object": { "key": "uploaded-file.txt" }
    }
  }]
}
```
The Context Object
The context provides runtime information:
- `function_name` - Name of the function
- `memory_limit_in_mb` - Configured memory
- `aws_request_id` - Unique request ID
- `get_remaining_time_in_millis()` - Time before timeout
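For example, a handler can log these values directly (a minimal sketch):

```python
def lambda_handler(event, context):
    # All of these come from the context object Lambda passes in
    print(f"Function:  {context.function_name}")
    print(f"Memory:    {context.memory_limit_in_mb} MB")
    print(f"Request:   {context.aws_request_id}")
    print(f"Time left: {context.get_remaining_time_in_millis()} ms")
    return {'statusCode': 200}
```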
Lambda Configuration
Memory and CPU
Lambda allocates CPU proportionally to memory:
| Memory | Approximate CPU |
|---|---|
| 128 MB | Minimal |
| 1,792 MB | 1 vCPU |
| 3,008 MB | 2 vCPUs |
| 10,240 MB | 6 vCPUs |
More memory = more CPU = faster execution (but higher cost per ms).
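Memory can also be changed programmatically. A minimal sketch using boto3, assuming the hello-world function from earlier and credentials that allow lambda:UpdateFunctionConfiguration:

```python
import boto3

client = boto3.client('lambda')

# Raise memory to 1,024 MB; CPU allocation scales up proportionally
client.update_function_configuration(
    FunctionName='hello-world',
    MemorySize=1024,
)
```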
Timeout
Maximum execution time for a function (1 second to 15 minutes).
- Default: 3 seconds
- Set the timeout based on your workload: API handlers usually need only a few seconds, while batch jobs may need minutes
- Functions are terminated if they exceed the timeout
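For longer-running work, you can use the context object to stop gracefully before Lambda terminates the function. A sketch (the item-processing loop is hypothetical):

```python
def lambda_handler(event, context):
    items = event.get('items', [])
    processed = 0

    for item in items:
        # Stop early if fewer than ~5 seconds remain before the timeout
        if context.get_remaining_time_in_millis() < 5000:
            break
        processed += 1  # placeholder for real per-item work

    return {'processed': processed, 'total': len(items)}
```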
Environment Variables
Store configuration without hardcoding:
```python
import os

def lambda_handler(event, context):
    database_url = os.environ['DATABASE_URL']
    api_key = os.environ['API_KEY']
    # Use these values...
```
Set environment variables in the Lambda console under "Configuration" → "Environment variables".
Security note: For sensitive values, use AWS Secrets Manager or Parameter Store instead.
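For example, a handler might fetch a database password at runtime instead of storing it in an environment variable. A sketch using boto3, assuming a secret named my-app/db-password exists and the function's role is allowed to read it:

```python
import boto3

# Initialized outside the handler so the client is reused across invocations
secrets = boto3.client('secretsmanager')

def lambda_handler(event, context):
    # 'my-app/db-password' is a hypothetical secret name
    secret = secrets.get_secret_value(SecretId='my-app/db-password')
    db_password = secret['SecretString']
    # Use db_password to connect to the database...
    return {'statusCode': 200}
```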
Common Lambda Triggers
API Gateway
Expose your Lambda function as an HTTP endpoint:
- In Lambda console, click "Add trigger"
- Select "API Gateway"
- Create a new REST API or HTTP API
- Lambda creates the API and connects it to your function
Your function is now available at a URL like:
https://abc123.execute-api.us-east-1.amazonaws.com/default/hello-world
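A handler behind API Gateway typically reads the request from the event and returns an HTTP-style response. Here's a sketch assuming the proxy integration format shown earlier:

```python
import json

def lambda_handler(event, context):
    # With proxy integration, the request body arrives as a JSON string
    name = 'World'
    if event.get('body'):
        name = json.loads(event['body']).get('name', name)

    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': f"Hello, {name}!"})
    }
```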
S3 Events
Run your function when files are uploaded to S3:
```python
def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        print(f"Processing file: s3://{bucket}/{key}")
        # Process the file...
```
Use cases: Image resizing, file validation, data processing.
Scheduled Events (EventBridge)
Run your function on a schedule:
- `rate(5 minutes)` - Every 5 minutes
- `rate(1 day)` - Once per day
- `cron(0 12 * * ? *)` - Every day at 12:00 UTC
Use cases: Cleanup jobs, reports, data sync.
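A scheduled handler looks like any other; the event simply describes the scheduled trigger. A minimal sketch for a cleanup-style job:

```python
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Scheduled EventBridge events include the trigger time under 'time'
    logger.info("Scheduled run triggered at %s", event.get('time'))
    # ...run the cleanup, build the report, sync the data, etc.
    return {'status': 'done'}
```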
Lambda Pricing
Lambda pricing has two components:
Request Charges
- First 1 million requests/month: Free
- After that: $0.20 per million requests
Duration Charges
- Based on memory allocated and execution time
- Billed per millisecond (1 ms minimum)
- Free Tier: 400,000 GB-seconds/month
Pricing Example
Function with 512 MB memory, 200 ms average duration, and 1 million invocations/month (Free Tier not applied):

```
Requests: 1M requests × $0.20/million                     = $0.20
Duration: 512 MB = 0.5 GB
          200 ms × 1M invocations = 200,000 seconds
          200,000 seconds × 0.5 GB = 100,000 GB-seconds
          100,000 GB-seconds × $0.0000166667/GB-second    = $1.67

Total: $1.87/month for 1 million invocations
```
Compare this to running an EC2 t3.micro 24/7: ~$8.50/month
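If you want to play with the numbers, here's a rough calculator that reproduces the example above (it excludes the Free Tier, and the rates are the commonly quoted on-demand prices, which vary by region):

```python
# Inputs for the worked example
invocations = 1_000_000
memory_gb = 512 / 1024      # 0.5 GB
duration_s = 200 / 1000     # 0.2 s

# Request charge: $0.20 per million requests
request_cost = invocations / 1_000_000 * 0.20

# Duration charge: GB-seconds × price per GB-second
gb_seconds = invocations * duration_s * memory_gb
duration_cost = gb_seconds * 0.0000166667

print(f"Requests: ${request_cost:.2f}")                   # $0.20
print(f"Duration: ${duration_cost:.2f}")                  # $1.67
print(f"Total:    ${request_cost + duration_cost:.2f}")   # $1.87
```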
Cold Starts
A cold start occurs when Lambda needs to create a new execution environment. This adds latency (100ms to several seconds depending on runtime).
What Causes Cold Starts
- First request to a new function
- All existing environments are busy
- Function hasn't been invoked recently
- Code or configuration changes
Minimizing Cold Starts
- Use smaller deployment packages
- Use provisioned concurrency for consistent latency
- Keep functions warm with scheduled pings
- Use lighter runtimes (Python and Node.js typically start faster than Java or .NET)
- Minimize initialization code outside the handler
Provisioned Concurrency
For latency-sensitive applications, provisioned concurrency keeps environments pre-initialized:
Always-ready environments → No cold starts → Consistent latency
This adds cost but eliminates cold start latency.
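Provisioned concurrency is configured on a published version or alias rather than on $LATEST. A sketch using boto3, assuming an alias named live already exists for the function:

```python
import boto3

client = boto3.client('lambda')

# Keep 5 execution environments initialized for the 'live' alias
client.put_provisioned_concurrency_config(
    FunctionName='hello-world',
    Qualifier='live',
    ProvisionedConcurrentExecutions=5,
)
```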
Best Practices
Code Organization
- Keep handlers small - delegate to other modules
- Initialize resources outside the handler (reused across invocations)
- Use environment variables for configuration
```python
import boto3

# Initialize outside the handler (reused across invocations)
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my-table')

def lambda_handler(event, context):
    # Use pre-initialized resources
    response = table.get_item(Key={'id': event['id']})
    return response
```
Error Handling
- Use try/except blocks
- Return meaningful error responses
- Use structured logging
```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    try:
        # Your logic here
        return {'statusCode': 200, 'body': json.dumps('Success')}
    except Exception as e:
        logger.error(f"Error: {str(e)}")
        return {'statusCode': 500, 'body': json.dumps('Error occurred')}
```
Security
- Use IAM roles with least privilege
- Never hardcode credentials
- Use VPC for private resource access
- Encrypt environment variables
Key Takeaways
- Serverless means you don't manage servers - focus on code
- Lambda runs code in response to events without provisioning servers
- Pay only for actual execution time - no charges when idle
- Triggers include API Gateway, S3, SQS, schedules, and more
- Cold starts add latency on first invocation; use provisioned concurrency for consistent latency
- Free Tier includes 1M requests and 400,000 GB-seconds/month
- Memory setting affects both memory and CPU allocation
What's Next
Now that you understand compute services (EC2 and Lambda), let's explore storage. In the next lesson, we'll dive into Amazon S3 - the most popular and versatile storage service on AWS.

