# Your First Serverless App with AWS Lambda — No Servers, No Kidding
Imagine deploying code without provisioning a single server, without patching an OS, without worrying about scaling, and paying nothing when nobody's using it. That's Lambda. And once you build your first function, you'll wonder why you ever managed EC2 instances for simple workloads.
## What Serverless Actually Means
"Serverless" doesn't mean there are no servers. It means you don't manage them. AWS handles provisioning, scaling, patching, and high availability. You just write a function, upload it, and AWS runs it when triggered.
The Lambda execution model works like this:
- An event triggers your function (HTTP request, S3 upload, schedule, etc.)
- AWS spins up a container with your code
- Your handler function executes
- AWS returns the response and freezes the container
- If another request comes, AWS reuses the warm container (or spins up a new one)
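The freeze/reuse behavior in the last two steps is observable from code: anything initialized at module scope survives across warm invocations. A minimal sketch, runnable locally with no AWS account:

```python
import time

# Module scope runs once per execution environment (the cold start);
# warm invocations reuse the environment, so these globals persist.
BOOT_TIME = time.time()
invocation_count = 0

def lambda_handler(event, context=None):
    global invocation_count
    invocation_count += 1
    return {
        "container_age_s": round(time.time() - BOOT_TIME, 3),
        "invocations_in_this_container": invocation_count,
    }

print(lambda_handler({}))  # invocations_in_this_container: 1
print(lambda_handler({}))  # invocations_in_this_container: 2
```

This is also why expensive setup (SDK clients, database connections) belongs at module scope: you pay for it once per container, not once per request.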
## Creating Your First Lambda Function (Python)

```python
# lambda_function.py
import json
import datetime

def lambda_handler(event, context):
    """Process incoming API requests."""
    # Get the HTTP method and path
    method = event.get('httpMethod', 'UNKNOWN')
    path = event.get('path', '/')
    body = json.loads(event.get('body', '{}')) if event.get('body') else {}

    # Simple routing
    if path == '/health':
        response_body = {
            'status': 'healthy',
            'timestamp': datetime.datetime.utcnow().isoformat(),
            'region': context.invoked_function_arn.split(':')[3]
        }
    elif path == '/process' and method == 'POST':
        name = body.get('name', 'World')
        response_body = {
            'message': f'Hello, {name}!',
            'request_id': context.aws_request_id,
            'memory_limit': context.memory_limit_in_mb
        }
    else:
        return {
            'statusCode': 404,
            'body': json.dumps({'error': 'Not found'})
        }

    return {
        'statusCode': 200,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*'
        },
        'body': json.dumps(response_body)
    }
```
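Because a handler is just a function, you can exercise it locally before deploying. A sketch with a stub context object — `FakeContext` is an illustrative helper (not an AWS API), and the handler below is a trimmed stand-in for the one above so the example is self-contained:

```python
import json

class FakeContext:
    """Stub exposing only the context attributes the handler reads."""
    aws_request_id = "test-request-id"
    memory_limit_in_mb = 256
    invoked_function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-api-handler"

def lambda_handler(event, context):
    # Trimmed version of the /process branch above
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "World")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!",
                            "request_id": context.aws_request_id}),
    }

event = {"httpMethod": "POST", "path": "/process",
         "body": json.dumps({"name": "Lambda"})}
response = lambda_handler(event, FakeContext())
print(response["statusCode"])                    # 200
print(json.loads(response["body"])["message"])   # Hello, Lambda!
```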
Deploy it with the CLI:
```bash
# Zip the function
zip function.zip lambda_function.py

# Create the execution role
aws iam create-role \
  --role-name lambda-basic-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "lambda.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach basic execution policy (CloudWatch Logs)
aws iam attach-role-policy \
  --role-name lambda-basic-role \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole

# Create the function
aws lambda create-function \
  --function-name my-api-handler \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789012:role/lambda-basic-role \
  --zip-file fileb://function.zip \
  --timeout 30 \
  --memory-size 256 \
  --environment 'Variables={ENVIRONMENT=production,LOG_LEVEL=INFO}'
```
## The Same Pattern in Node.js

```javascript
// index.mjs
export const handler = async (event, context) => {
  const method = event.httpMethod || 'UNKNOWN';
  const path = event.path || '/';
  const body = event.body ? JSON.parse(event.body) : {};

  if (path === '/thumbnail' && method === 'POST') {
    const { imageUrl, width = 200, height = 200 } = body;
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        message: 'Thumbnail job queued',
        imageUrl,
        dimensions: `${width}x${height}`,
        requestId: context.awsRequestId,
      }),
    };
  }

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      message: 'Lambda is running',
      memoryAllocated: `${context.memoryLimitInMB}MB`,
      remainingTime: `${context.getRemainingTimeInMillis()}ms`,
    }),
  };
};
```
## API Gateway Integration

A Lambda function has no public HTTP endpoint of its own (Function URLs aside), so API Gateway serves as the front door:
```bash
# Create a REST API
API_ID=$(aws apigateway create-rest-api \
  --name "my-serverless-api" \
  --description "Lambda-powered API" \
  --endpoint-configuration types=REGIONAL \
  --query 'id' --output text)

# Get the root resource ID
ROOT_ID=$(aws apigateway get-resources \
  --rest-api-id $API_ID \
  --query 'items[?path==`/`].id' --output text)

# Create a resource
RESOURCE_ID=$(aws apigateway create-resource \
  --rest-api-id $API_ID \
  --parent-id $ROOT_ID \
  --path-part "process" \
  --query 'id' --output text)

# Create POST method
aws apigateway put-method \
  --rest-api-id $API_ID \
  --resource-id $RESOURCE_ID \
  --http-method POST \
  --authorization-type NONE

# Integrate with Lambda
aws apigateway put-integration \
  --rest-api-id $API_ID \
  --resource-id $RESOURCE_ID \
  --http-method POST \
  --type AWS_PROXY \
  --integration-http-method POST \
  --uri "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:my-api-handler/invocations"

# Grant API Gateway permission to invoke the function
# (without this resource policy, requests fail with a 500)
aws lambda add-permission \
  --function-name my-api-handler \
  --statement-id apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:${API_ID}/*/POST/process"

# Deploy the API
aws apigateway create-deployment \
  --rest-api-id $API_ID \
  --stage-name prod
```
## Lambda Layers — Share Code Across Functions
Layers let you package libraries, custom runtimes, and shared utilities separately from your function code:
```bash
# Create a layer with shared dependencies
# (boto3 already ships with the Python runtime; bundle it only to pin a version)
mkdir -p python/lib/python3.12/site-packages
pip install requests boto3 -t python/lib/python3.12/site-packages/
zip -r layer.zip python/

aws lambda publish-layer-version \
  --layer-name shared-dependencies \
  --description "Common Python packages" \
  --zip-file fileb://layer.zip \
  --compatible-runtimes python3.12

# Attach the layer to a function
aws lambda update-function-configuration \
  --function-name my-api-handler \
  --layers arn:aws:lambda:us-east-1:123456789012:layer:shared-dependencies:1
```
## Cold Starts Explained
A cold start happens when AWS needs to spin up a new execution environment for your function. Here's what affects it:
| Factor | Impact on Cold Start | Mitigation |
|---|---|---|
| Runtime | Python/Node.js: ~200-500ms, Java/C#: ~1-3s | Use Python or Node.js for latency-sensitive functions |
| Memory | More memory = more CPU = faster init | Allocate 512MB+ for better performance |
| Package size | Larger deployment = slower load | Minimize dependencies, use layers |
| VPC | Adds 1-2s for ENI attachment | Use VPC only when needed |
| Provisioned Concurrency | Eliminates cold starts | Pre-warms N instances (costs money) |
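Before paying for mitigations, it helps to measure how often cold starts actually happen. One cheap way is a module-level flag that is true only on the first invocation in a given execution environment; log it and count occurrences in CloudWatch. A sketch:

```python
# True only until the first invocation in this execution environment;
# emitting it in a log line lets you count cold starts per function.
_cold_start = True

def lambda_handler(event, context=None):
    global _cold_start
    was_cold, _cold_start = _cold_start, False
    return {"cold_start": was_cold}

print(lambda_handler({}))  # {'cold_start': True}
print(lambda_handler({}))  # {'cold_start': False}
```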
```bash
# Set provisioned concurrency to eliminate cold starts
# (the qualifier must be a published version or an alias, not $LATEST)
aws lambda put-provisioned-concurrency-config \
  --function-name my-api-handler \
  --qualifier prod \
  --provisioned-concurrent-executions 5
```
## Lambda Pricing — Why It's Usually Cheaper
Lambda pricing has two components: requests and duration.
| Component | Price | Free Tier (monthly) |
|---|---|---|
| Requests | $0.20 per 1 million | 1 million requests |
| Duration | $0.0000166667 per GB-second | 400,000 GB-seconds |
Example: A function with 256MB memory running for 500ms, invoked 1 million times/month:
- Requests: 1M x $0.20/M = $0.20
- Duration: 1M x 0.5s x 0.25GB x $0.0000166667 = $2.08
- Total: $2.28/month
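The arithmetic above generalizes into a quick estimator. A sketch using the rates quoted in the table (free tier ignored):

```python
# Back-of-envelope Lambda cost estimator; rates are the on-demand
# prices quoted above, and the free tier is deliberately ignored.
REQUEST_PRICE_PER_MILLION = 0.20
GB_SECOND_PRICE = 0.0000166667

def lambda_monthly_cost(invocations, duration_s, memory_mb):
    request_cost = invocations / 1_000_000 * REQUEST_PRICE_PER_MILLION
    gb_seconds = invocations * duration_s * (memory_mb / 1024)
    return round(request_cost + gb_seconds * GB_SECOND_PRICE, 2)

# The worked example: 1M invocations, 500ms, 256MB
print(lambda_monthly_cost(1_000_000, 0.5, 256))  # 2.28
```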
The same workload on a t3.small EC2 instance would cost ~$15/month running 24/7. Lambda wins for spiky, event-driven workloads.
## SAM Template — Infrastructure as Code
AWS SAM (Serverless Application Model) lets you define your entire serverless app in a template:
```yaml
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: My Serverless API

Globals:
  Function:
    Timeout: 30
    Runtime: python3.12
    MemorySize: 256
    Environment:
      Variables:
        ENVIRONMENT: production

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: my-api-handler
      Handler: lambda_function.lambda_handler
      CodeUri: ./src/
      Events:
        ProcessPost:
          Type: Api
          Properties:
            Path: /process
            Method: POST
        HealthGet:
          Type: Api
          Properties:
            Path: /health
            Method: GET
      Policies:
        - DynamoDBCrudPolicy:
            TableName: !Ref SessionsTable

  SessionsTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: user-sessions
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: UserId
          AttributeType: S
      KeySchema:
        - AttributeName: UserId
          KeyType: HASH

Outputs:
  ApiEndpoint:
    Description: API Gateway endpoint URL
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/"
```

```bash
# Build and deploy with SAM
sam build
sam deploy --guided --stack-name my-serverless-app
```
## Common Lambda Use Cases
- Thumbnail generation: S3 upload triggers Lambda to resize images
- Webhook processing: Receive Stripe/GitHub webhooks, process asynchronously
- Scheduled tasks: EventBridge (formerly CloudWatch Events) triggers Lambda every hour for cleanup
- Data transformation: Kinesis stream triggers Lambda to process and load to DynamoDB
- API backends: API Gateway + Lambda for REST/GraphQL APIs
- ChatOps: SNS/SQS triggers Lambda to post alerts to Slack
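The first use case on that list starts with parsing the S3 event Lambda receives. A sketch of that step, runnable locally — the actual resize via boto3 is stubbed out with a comment:

```python
import urllib.parse

def lambda_handler(event, context=None):
    """Extract (bucket, key) pairs from an S3 event notification."""
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 keys in event payloads are URL-encoded (spaces arrive as '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        results.append((bucket, key))
        # real code would fetch the object with boto3 and resize it here
    return results

# Shape of a real S3 notification, trimmed to the fields used above
event = {"Records": [{"s3": {"bucket": {"name": "uploads"},
                             "object": {"key": "photos/cat+1.jpg"}}}]}
print(lambda_handler(event))  # [('uploads', 'photos/cat 1.jpg')]
```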
## Monitoring with CloudWatch
Every Lambda invocation automatically logs to CloudWatch:
```bash
# Tail logs in real-time
aws logs tail /aws/lambda/my-api-handler --follow --since 5m

# Get function metrics (GNU date syntax; on macOS use `date -v-1H`)
# Note: percentiles like p99 require --extended-statistics, which
# cannot be combined with --statistics in the same call
aws cloudwatch get-metric-statistics \
  --namespace AWS/Lambda \
  --metric-name Duration \
  --dimensions Name=FunctionName,Value=my-api-handler \
  --start-time $(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%S) \
  --end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
  --period 300 \
  --statistics Average Maximum \
  --output table
```
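Free-form prints end up in CloudWatch Logs too, but JSON log lines are far easier to filter and aggregate with CloudWatch Logs Insights. A minimal sketch — `structured_line` is an illustrative helper, not an AWS API:

```python
import json

def structured_line(message, **fields):
    """Render one JSON log line for easy querying in Logs Insights."""
    return json.dumps({"message": message, **fields})

# Inside a handler you might emit:
line = structured_line("processed request", path="/process", status=200)
print(line)  # {"message": "processed request", "path": "/process", "status": 200}
```

Anything written to stdout lands in the function's log group, so `print(line)` is all it takes; Logs Insights can then run queries like `filter status = 200`.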
## What's Next?
You've now covered the core AWS services: IAM, S3, EC2, VPC, databases, and Lambda. The next step is tying them together with infrastructure as code. Stay tuned for our Terraform on AWS series where we'll automate everything you've learned in this series.
This is Part 8 of our AWS series. Serverless isn't the answer to everything, but for event-driven workloads, it's hard to beat.
