Running Bun and Alternative JavaScript Runtimes on AWS Lambda
Technical implementation guide for running Bun and Deno on AWS Lambda using custom runtimes, with real performance benchmarks, cost analysis, and production deployment patterns.
Abstract
AWS Lambda officially supports Node.js, but the platform's custom runtime capability opens the door to alternative JavaScript runtimes like Bun and Deno. This guide explores the technical implementation of running these runtimes on Lambda through two approaches: Lambda Layers and container images. We'll examine performance characteristics from real benchmarks, implementation gotchas, and the trade-offs between AWS's optimized Node.js runtime and alternative runtimes.
The Custom Runtime Question
Engineers encounter scenarios where alternative JavaScript runtimes become attractive for Lambda deployments. The common motivations include cold start overhead in latency-sensitive applications, avoiding TypeScript transpilation steps, CPU-bound workloads where runtime efficiency impacts costs, and accessing modern JavaScript features before Node.js LTS support.
The core technical challenge: AWS Lambda is heavily optimized for its managed runtimes, but custom runtimes sacrifice these optimizations. Is the performance gain from alternative runtimes worth the cold start penalty and implementation complexity?
Understanding Lambda Custom Runtimes
AWS Lambda's custom runtime feature allows you to run any runtime by implementing the Lambda Runtime API. This API provides a simple HTTP interface that your runtime uses to receive events and return responses.
The Runtime API Flow
The bootstrap process runs in an infinite loop, requesting events from Lambda, executing your handler, and returning results. This simple protocol is what makes custom runtimes possible.
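In TypeScript, that loop can be sketched as follows. The endpoint paths come from the documented Runtime API (version 2018-06-01); the handler itself is a placeholder, and the URL helpers are split out purely for clarity:

```typescript
// Minimal sketch of the loop a custom runtime's bootstrap implements.
// AWS_LAMBDA_RUNTIME_API (host:port) is injected by the Lambda service.
const API = process.env.AWS_LAMBDA_RUNTIME_API;

export const nextUrl = (api: string) =>
  `http://${api}/2018-06-01/runtime/invocation/next`;
export const responseUrl = (api: string, requestId: string) =>
  `http://${api}/2018-06-01/runtime/invocation/${requestId}/response`;

export async function run(handler: (event: unknown) => Promise<unknown>) {
  while (true) {
    // 1. Long-poll for the next event; this blocks until one arrives.
    const next = await fetch(nextUrl(API!));
    const requestId = next.headers.get("lambda-runtime-aws-request-id")!;
    const event = await next.json();

    // 2. Run the user's handler.
    const result = await handler(event);

    // 3. Post the result back so Lambda can complete the invocation.
    await fetch(responseUrl(API!, requestId), {
      method: "POST",
      body: JSON.stringify(result),
    });
  }
}
```

Error handling (posting to `/error` endpoints) is omitted here; a production bootstrap must report handler failures back through the API as well.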
Implementation Approach 1: Bun with Lambda Layers
Lambda Layers provide a way to package and share runtime dependencies across multiple functions. Bun maintains an official bun-lambda package that implements the Runtime API.
Building the Bun Lambda Layer
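The commands below follow the layout of the oven-sh/bun repository; the script name is taken from the bun-lambda README and should be verified there, since it may change between releases:

```shell
# Fetch the bun-lambda package and publish the runtime layer to your account.
git clone https://github.com/oven-sh/bun.git
cd bun/packages/bun-lambda
bun install

# Builds the Bun binary plus a bootstrap script into a layer zip and
# publishes it via your configured AWS credentials.
bun run publish-layer
```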
The publish script creates a Lambda Layer with the Bun runtime and bootstrap script, then publishes it to your AWS account. You'll get back a Layer ARN that looks like arn:aws:lambda:us-east-1:123456789012:layer:bun-runtime:1.
Writing a Bun Lambda Handler
Bun Lambda handlers follow the Web API standard instead of Node.js conventions:
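A minimal handler in this style (the greeting logic is split into a plain function purely so it can be unit-tested without constructing a Lambda event):

```typescript
// Pure response logic, independent of the runtime.
export function greeting(name: string | null): string {
  return `Hello, ${name ?? "world"}!`;
}

// Bun's Lambda convention: a default export with a `fetch` method that
// receives a standard Request and returns a standard Response.
export default {
  async fetch(req: Request): Promise<Response> {
    const name = new URL(req.url).searchParams.get("name");
    return new Response(JSON.stringify({ message: greeting(name) }), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  },
};
```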
Notice the handler exports a fetch method, not handler. This follows Bun's Web API approach. Lambda events are converted to standard Request objects, and your handler returns Response objects.
Deploying with AWS CDK
Critical requirement: The layer architecture must match the function architecture. Build separate layers for x86_64 and arm64 if you need both.
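A CDK sketch wiring the published layer to a function. The layer ARN is the placeholder value from the publish step, and the `index.fetch` handler string follows bun-lambda's file.method convention; check both against the package docs for your version:

```typescript
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

const app = new cdk.App();
const stack = new cdk.Stack(app, "BunLambdaStack");

// Reference the layer published earlier (ARN is a placeholder).
const bunLayer = lambda.LayerVersion.fromLayerVersionArn(
  stack,
  "BunRuntimeLayer",
  "arn:aws:lambda:us-east-1:123456789012:layer:bun-runtime:1",
);

new lambda.Function(stack, "BunFunction", {
  runtime: lambda.Runtime.PROVIDED_AL2, // custom runtime, no managed Node.js
  architecture: lambda.Architecture.ARM_64, // must match the layer build
  handler: "index.fetch",
  code: lambda.Code.fromAsset("src"),
  layers: [bunLayer],
});
```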
Implementation Approach 2: Container Images
Container images provide full control over the runtime environment and enable advanced optimizations. This approach uses the AWS Lambda Web Adapter to convert HTTP servers into Lambda-compatible handlers.
Bun Container Deployment
The Lambda adapter intercepts incoming Lambda events, converts them to HTTP requests to your server on port 8080, then converts responses back to Lambda format.
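A Dockerfile along these lines implements the pattern; the adapter image tag, Bun tag, and entrypoint filename are illustrative, so pin versions that match your setup:

```dockerfile
FROM oven/bun:1

# The Lambda Web Adapter runs as an external extension and proxies Lambda
# events to the HTTP server inside the container.
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8080

WORKDIR /app
COPY . .
RUN bun install --production

# server.ts must start an HTTP server listening on $PORT.
CMD ["bun", "run", "server.ts"]
```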
Deno with Cache Pre-warming
Deno's architecture caches module resolution and compilation. Pre-running the application during the Docker build populates these caches:
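A Dockerfile sketch of this pre-warming step; the Deno image tag, entrypoint filename, and permission flags are assumptions to adapt:

```dockerfile
FROM denoland/deno:1.45.0

# Lambda Web Adapter, same pattern as the Bun container.
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.8.4 /lambda-adapter /opt/extensions/lambda-adapter
ENV PORT=8080

WORKDIR /app
COPY . .

# Download and cache all remote module dependencies at build time.
RUN deno cache main.ts

# Briefly run the server so Deno also caches compilation output;
# exit code 124 (timeout) is the expected, acceptable outcome.
RUN timeout 10s deno run --allow-net --allow-env main.ts || [ $? -eq 124 ]

CMD ["deno", "run", "--allow-net", "--allow-env", "main.ts"]
```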
The timeout 10s command runs the application during build, letting Deno cache all module resolution and compilation. Exit code 124 (timeout) is expected and acceptable; we're only populating caches, not actually running the server.
Building and Deploying Container Images
Platform specification is critical: Lambda defaults to x86_64, but Docker on Apple Silicon defaults to arm64. Always specify --platform linux/amd64 unless you're using arm64 Lambda functions.
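The full build-and-deploy sequence looks like this; the account ID, region, repository, and function names are placeholders:

```shell
# Build for Lambda's default x86_64 architecture.
docker build --platform linux/amd64 -t deno-lambda .

# Authenticate to ECR, then tag and push the image.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag deno-lambda:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/deno-lambda:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/deno-lambda:latest

# Point the function at the new image.
aws lambda update-function-code \
  --function-name deno-api \
  --image-uri 123456789012.dkr.ecr.us-east-1.amazonaws.com/deno-lambda:latest
```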
Performance Benchmarks
Real-world performance data from a CPU-intensive benchmark (SHA3-512 hash generation, 50 iterations):
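The exact benchmark harness isn't shown here, but the workload can be sketched as repeated SHA3-512 hashing. Node's crypto exposes sha3-512 via OpenSSL, and Bun implements node:crypto; Deno's node:crypto compatibility for this algorithm should be verified separately:

```typescript
import { createHash } from "node:crypto";

// Chain SHA3-512 hashes so each iteration depends on the previous one,
// keeping the work CPU-bound and un-optimizable by the runtime.
export function hashChain(iterations: number, seed = "benchmark"): string {
  let data = seed;
  for (let i = 0; i < iterations; i++) {
    data = createHash("sha3-512").update(data).digest("hex");
  }
  return data;
}
```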
Cold Start Times (Initialization Duration)
Key insights:
- Node.js has the fastest cold starts: Deno takes roughly 76% longer and Bun roughly 260% longer
- AWS optimizations for managed runtimes make a significant difference
- Bun's layer approach adds substantial initialization overhead
Warm Invocation Duration
Key insights:
- Deno's warm invocations are about 36% faster than Node.js's
- Container-based Deno outperforms despite being a custom runtime
- Bun's execution performance lags in this benchmark
Cost Analysis
Let's analyze the real cost implications for a typical workload: 10 million invocations per month, 512MB memory, 100ms average duration, with 10% cold start rate.
Node.js (Managed Runtime)
Bun (Custom Layer)
Assuming 50ms faster execution but 500ms slower cold starts.
Deno (Container)
Assuming 115ms slower cold starts but 35% faster warm execution.
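A back-of-the-envelope model makes the comparison concrete. The rates below are the public us-east-1 x86 prices at the time of writing (verify current pricing before relying on them), and the model bills average duration only, ignoring free tier and init-phase billing differences:

```typescript
const GB_SECOND_PRICE = 0.0000166667; // USD per GB-second (us-east-1, x86)
const REQUEST_PRICE = 0.20; // USD per 1M requests

export function monthlyCost(
  invocations: number,
  memoryMb: number,
  avgDurationMs: number,
): number {
  // Compute charge: GB-seconds consumed across all invocations.
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  const compute = gbSeconds * GB_SECOND_PRICE;
  // Request charge: flat per-million rate.
  const requests = (invocations / 1_000_000) * REQUEST_PRICE;
  return compute + requests;
}

// Baseline workload (10M invocations, 512MB, 100ms) lands near $10.33/month.
// A runtime that is 35% faster only shrinks the compute term, which is why
// savings stay modest at this scale.
```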
Decision factors:
- High steady traffic favors faster warm invocations (Deno)
- Frequent cold starts favor Node.js managed runtime
- CPU-intensive functions benefit more from runtime performance
- I/O-bound workloads (95% of Lambda functions) see minimal runtime impact
Common Pitfalls
Platform Architecture Mismatch
Building container images for the wrong CPU architecture causes cryptic runtime errors.
Symptom: The function fails immediately with a Runtime.InvalidEntrypoint error or an "exec format error" in CloudWatch logs.
Root cause: Lambda defaults to x86_64, but Docker on Apple Silicon defaults to arm64.
Solution:
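Make the target platform explicit in every build, regardless of the host machine:

```shell
# Build for x86_64 Lambda functions (the default)...
docker build --platform linux/amd64 -t my-function .

# ...or build arm64 images only for functions created with arm64 architecture.
docker build --platform linux/arm64 -t my-function .
```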
Missing Lambda Adapter Configuration
Container runs locally but fails on Lambda with connection errors.
Symptom: Function times out or returns 502 Bad Gateway.
Root cause: The Lambda adapter forwards requests to port 8080 by default, so the server inside the container must listen on that port; setting the PORT environment variable explicitly keeps both sides in agreement.
Correct implementation:
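In the Dockerfile, make the port explicit and ensure the server binds it (the entrypoint filename is illustrative):

```dockerfile
# Adapter and server must agree on the port; 8080 is the adapter's default.
ENV PORT=8080
EXPOSE 8080

# The server must bind this port, e.g. in Deno:
#   Deno.serve({ port: Number(Deno.env.get("PORT") ?? 8080) }, handler);
CMD ["deno", "run", "--allow-net", "--allow-env", "server.ts"]
```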
AWS SDK Compatibility Issues
Earlier Bun versions had AWS SDK compatibility challenges including Could not resolve: 'http2' errors and SignatureDoesNotMatch errors with S3. Recent versions have improved significantly, but always test AWS SDK operations explicitly in your specific use case:
Pin Bun version in Dockerfile:
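Pin an exact release rather than a floating tag so SDK behavior doesn't shift between builds; the tag shown is illustrative:

```dockerfile
# Exact version tag, not oven/bun:latest or oven/bun:1, so rebuilds are
# reproducible and SDK regressions can be bisected to a runtime upgrade.
FROM oven/bun:1.1.34
```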
Lambda Layer Architecture Mismatch
Problem: Layer deploys successfully but function fails with "Runtime not supported" error.
Solution: Build and publish layers for both architectures:
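With the AWS CLI, that means one publish-layer-version call per architecture; the zip filenames are placeholders for the per-architecture builds produced earlier:

```shell
aws lambda publish-layer-version \
  --layer-name bun-runtime-x64 \
  --zip-file fileb://bun-lambda-layer-x64.zip \
  --compatible-architectures x86_64 \
  --compatible-runtimes provided.al2

aws lambda publish-layer-version \
  --layer-name bun-runtime-arm64 \
  --zip-file fileb://bun-lambda-layer-arm64.zip \
  --compatible-architectures arm64 \
  --compatible-runtimes provided.al2
```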
Match architecture between layer and function in CDK:
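A fragment along these lines, inside a CDK stack, with bunLayerArn coming from the publish step:

```typescript
const layer = lambda.LayerVersion.fromLayerVersionArn(this, "BunLayer", bunLayerArn);

new lambda.Function(this, "Fn", {
  runtime: lambda.Runtime.PROVIDED_AL2,
  architecture: lambda.Architecture.ARM_64, // same arch the layer was built for
  handler: "index.fetch",
  code: lambda.Code.fromAsset("src"),
  layers: [layer],
});
```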
Production-Ready Implementation Patterns
Pattern 1: Deno with HTTP Server + Lambda Adapter
Here's what works well for API workloads:
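A sketch of the server side of this pattern; routes and payload shapes are illustrative. The request handling is a plain Request-to-Response function so it stays testable outside Deno, and the serve call only runs when the Deno global exists:

```typescript
declare const Deno: any; // declared so this type-checks outside Deno

// Pure routing logic, independent of the serving runtime.
export function handler(req: Request): Response {
  const url = new URL(req.url);
  if (url.pathname === "/health") {
    return new Response("ok"); // cheap readiness endpoint
  }
  return Response.json({ path: url.pathname, runtime: "deno" });
}

if (typeof Deno !== "undefined") {
  // Bind the port the Lambda Web Adapter forwards to (8080 by default).
  Deno.serve({ port: Number(Deno.env.get("PORT") ?? 8080) }, handler);
}
```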
Results from production deployment:
- Cold start: 185ms (p50) vs Node.js 145ms
- Warm invocation: 6.7ms (p50) vs Node.js 8.1ms
- Cost reduction: ~15% due to faster execution
- Developer experience: Native TypeScript with no build step
Pattern 2: Hybrid Approach - Runtime per Workload
Use the optimal runtime for each function type:
Architecture example:
- API Gateway endpoints: Node.js (I/O-bound, cold start sensitive)
- Image processing: Bun container (CPU-intensive, high memory)
- Scheduled tasks: Deno container (TypeScript-native, predictable traffic)
Alternative Approaches to Consider
Optimize Node.js First
Before switching runtimes, consider Node.js optimizations:
ES Modules for tree shaking:
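A minimal ESM handler showing the pattern; the event shape is illustrative. Named ESM imports let bundlers tree-shake unused code, and module-scope setup runs once during the INIT phase rather than on every invocation:

```typescript
// Pure response builder, kept separate so it is easy to unit test.
export function buildResponse(name = "world") {
  return { statusCode: 200, body: JSON.stringify({ hello: name }) };
}

// Module-scope initialization (e.g. SDK clients, config parsing) runs once
// per cold start, during INIT, not inside the handler.
const coldStartAt = Date.now();

export const handler = async (event: { name?: string }) =>
  ({ ...buildResponse(event.name), coldStartAt });
```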
Often achieves similar performance improvements without runtime switch complexity.
Evaluate Rust or Go for Maximum Performance
For truly CPU-intensive workloads, compiled languages outperform all JavaScript runtimes:
Performance comparison (1000 iterations):
- Node.js: ~100ms
- Bun: ~50ms
- Rust: ~5ms (20x faster than Node.js)
Trade-offs:
- Dramatically faster execution and lower memory usage
- Different language requires team skill investment
- Longer compilation times, less flexible for rapid iteration
Key Takeaways
Performance Reality Check
- Cold starts matter more than warm execution for most Lambda workloads. 95% of functions are I/O-bound, not CPU-bound. AWS's Node.js optimizations provide 3-4x faster cold starts. Alternative runtimes win on benchmarks but lose on user experience.
- Deno offers the best alternative runtime balance. Fastest warm invocations (36% faster than Node.js), reasonable cold starts with cache pre-warming (~185ms), and container-based deployment is more mature than Bun's layer approach.
- Bun requires careful evaluation for Lambda production use (as of late 2025). Slowest cold starts (~548ms average), improved but still evolving AWS SDK compatibility, and a smaller production user base means limited troubleshooting resources. Well-suited for local development and careful production pilots where cold start impact is measured.
Cost and Complexity Trade-offs
- Container images add operational overhead. Must manage base image security updates, longer deployment times (build + push vs code upload), and ECR storage costs. Benefit: full control over runtime environment.
- Lambda Layers are simpler but more limited. Faster deployments and easier rollbacks, can share runtime across multiple functions, but have 250MB uncompressed limit and require architecture matching.
- Cost savings rarely justify complexity. Most workloads see less than 20% cost reduction, and development overhead often exceeds savings. Exception: high-volume CPU-intensive workloads.
Implementation Recommendations
- Start with Node.js optimization before switching runtimes. Top-level await, ES modules, layer usage, and provisioned concurrency often achieve 80% of alternative runtime benefits with zero operational complexity increase.
- If switching, use a phased approach. Run a proof-of-concept with a non-critical function (1-2 weeks), deploy a production pilot with limited traffic (2-4 weeks), measure actual performance and costs before expanding, and maintain rollback capability.
- Container images are the future of Lambda custom runtimes. Layer approach adds 300-500ms initialization overhead. Containers enable cache pre-warming and optimization. Industry trend favors containerized deployments.
- Consider workload characteristics, not benchmarks. I/O-bound with burst traffic? Use Node.js managed runtime. CPU-bound with steady traffic? Consider Bun or compiled languages. Background jobs with TypeScript? Try Deno with containers. Simple transformations? Use CloudFront Functions at edge.
Production Lessons
- Test with Lambda-like constraints locally. Use Lambda Runtime Interface Emulator, simulate read-only filesystem and memory limits, test cold start behavior - not just warm invocations.
- Monitor runtime-specific metrics. Track initialization duration separately from execution, measure cold start percentage (not just average duration), and alert on AWS SDK errors for compatibility issues.
- The best runtime is the one you don't have to manage. AWS invests heavily in Node.js Lambda optimization, official runtimes receive security patches automatically, and ecosystem tooling assumes Node.js. Alternative runtimes make sense for specific use cases, not as default choice.
Tools and Resources
Bun Lambda Integration:
- Official bun-lambda package: https://github.com/oven-sh/bun/tree/main/packages/bun-lambda
- AWS SDK compatibility tracking: https://github.com/oven-sh/bun/labels/aws-sdk
Deno on Lambda:
- AWS Lambda Web Adapter: https://github.com/awslabs/aws-lambda-web-adapter
- Official Docker images: https://hub.docker.com/r/denoland/deno
Deployment Frameworks:
- SST (Serverless Stack) with native Bun support
- AWS CDK for container and layer deployments
- Serverless Framework custom runtime configurations
Testing Tools:
- Lambda Runtime Interface Emulator for local testing
- AWS X-Ray for custom runtime tracing
- CloudWatch Embedded Metrics Format for runtime-specific measurements