
Running Bun and Alternative JavaScript Runtimes on AWS Lambda

Technical implementation guide for running Bun and Deno on AWS Lambda using custom runtimes, with real performance benchmarks, cost analysis, and production deployment patterns.

Abstract

AWS Lambda officially supports Node.js, but the platform's custom runtime capability opens the door to alternative JavaScript runtimes like Bun and Deno. This guide explores the technical implementation of running these runtimes on Lambda through two approaches: Lambda Layers and container images. We'll examine performance characteristics from real benchmarks, implementation gotchas, and the trade-offs between AWS's optimized Node.js runtime and alternative runtimes.

The Custom Runtime Question

Engineers encounter scenarios where alternative JavaScript runtimes become attractive for Lambda deployments. The common motivations include cold start overhead in latency-sensitive applications, avoiding TypeScript transpilation steps, CPU-bound workloads where runtime efficiency impacts costs, and accessing modern JavaScript features before Node.js LTS support.

The core technical challenge: AWS Lambda is heavily optimized for its managed runtimes, but custom runtimes sacrifice these optimizations. Is the performance gain from alternative runtimes worth the cold start penalty and implementation complexity?

Understanding Lambda Custom Runtimes

AWS Lambda's custom runtime feature allows you to run any runtime by implementing the Lambda Runtime API. This API provides a simple HTTP interface that your runtime uses to receive events and return responses.

The Runtime API Flow

typescript
// Simplified Lambda Runtime API implementation
const RUNTIME_API = `http://${process.env.AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime`;

while (true) {
  // 1. Get next invocation
  const eventResponse = await fetch(`${RUNTIME_API}/invocation/next`);
  const requestId = eventResponse.headers.get('Lambda-Runtime-Aws-Request-Id');
  const event = await eventResponse.json();

  try {
    // 2. Invoke handler
    const result = await handler(event);

    // 3. Return response
    await fetch(`${RUNTIME_API}/invocation/${requestId}/response`, {
      method: 'POST',
      body: JSON.stringify(result),
    });
  } catch (error) {
    // 4. Report error
    await fetch(`${RUNTIME_API}/invocation/${requestId}/error`, {
      method: 'POST',
      body: JSON.stringify({
        errorMessage: error.message,
        errorType: error.constructor.name,
      }),
    });
  }
}

The bootstrap process runs in an infinite loop, requesting events from Lambda, executing your handler, and returning results. This simple protocol is what makes custom runtimes possible.

Implementation Approach 1: Bun with Lambda Layers

Lambda Layers provide a way to package and share runtime dependencies across multiple functions. Bun maintains an official bun-lambda package that implements the Runtime API.

Building the Bun Lambda Layer

bash
# Clone Bun repository
git clone https://github.com/oven-sh/bun.git
cd bun/packages/bun-lambda

# Build and publish layer (defaults to arm64)
bun run publish-layer

# Build for x86_64 (recommended for compatibility)
ARCH=x64 bun run publish-layer

The publish script creates a Lambda Layer with the Bun runtime and bootstrap script, then publishes it to your AWS account. You'll get back a Layer ARN that looks like arn:aws:lambda:us-east-1:123456789012:layer:bun-runtime:1.

Writing a Bun Lambda Handler

Bun Lambda handlers follow the Web API standard instead of Node.js conventions:

typescript
// handler.ts - Bun Lambda handler
export default {
  async fetch(request: Request): Promise<Response> {
    const event = await request.json();

    // Process Lambda event
    const result = {
      message: 'Hello from Bun on Lambda!',
      timestamp: Date.now(),
      input: event,
    };

    return new Response(JSON.stringify(result), {
      headers: { 'Content-Type': 'application/json' },
    });
  },
};

Notice the handler exports a fetch method, not handler. This follows Bun's Web API approach. Lambda events are converted to standard Request objects, and your handler returns Response objects.

Deploying with AWS CDK

typescript
import { Function, Runtime, Code, LayerVersion, Architecture } from 'aws-cdk-lib/aws-lambda';

// Reference the published Bun layer
const bunRuntimeLayer = LayerVersion.fromLayerVersionArn(
  this,
  'BunRuntime',
  'arn:aws:lambda:us-east-1:123456789012:layer:bun-runtime:1'
);

const bunFunction = new Function(this, 'BunFunction', {
  runtime: Runtime.PROVIDED_AL2023,
  handler: 'index.fetch',
  code: Code.fromAsset('dist'),
  layers: [bunRuntimeLayer],
  architecture: Architecture.X86_64, // Must match layer architecture
});

Critical requirement: The layer architecture must match the function architecture. Build separate layers for x86_64 and arm64 if you need both.

Implementation Approach 2: Container Images

Container images provide full control over the runtime environment and enable advanced optimizations. This approach uses the AWS Lambda Web Adapter to convert HTTP servers into Lambda-compatible handlers.

Bun Container Deployment

dockerfile
# Multi-stage build for Bun Lambda deployment
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 AS aws-lambda-adapter
FROM oven/bun:1-debian AS runtime

# Copy Lambda adapter
COPY --from=aws-lambda-adapter /lambda-adapter /opt/extensions/lambda-adapter

WORKDIR /var/task

# Install dependencies
COPY package.json bun.lock ./
RUN bun install --production --frozen-lockfile

# Copy application
COPY . .

# Required Lambda adapter configuration
ENV PORT=8080

CMD ["bun", "run", "index.ts"]

The Lambda adapter intercepts incoming Lambda events, converts them to HTTP requests to your server on port 8080, then converts responses back to Lambda format.
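The Dockerfile's CMD expects an HTTP entry point that isn't shown above. A minimal sketch of a hypothetical index.ts is below - the filename, port handling, and response shape are assumptions, not the article's original code. Bun serves a default export with a `fetch` method when started via `bun run index.ts`:

```typescript
// index.ts - hypothetical HTTP entry point for the container above.
// The Lambda adapter converts events into HTTP requests against this server.
const server = {
  // The adapter forwards converted events to this port (PORT=8080 from the Dockerfile)
  port: Number(process.env.PORT ?? 8080),
  async fetch(request: Request): Promise<Response> {
    // Respond over HTTP; the adapter converts this back into a Lambda response
    return Response.json({
      message: "Hello from Bun in a Lambda container",
      path: new URL(request.url).pathname,
    });
  },
};

export default server;
```

Because the handler speaks plain HTTP, you can run the same file locally with `bun run index.ts` and curl it on port 8080 - no Lambda emulation required.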

Deno with Cache Pre-warming

Deno's architecture caches module resolution and compilation. Pre-running the application during the Docker build populates these caches:

dockerfile
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 AS adapter
FROM denoland/deno:bin-2.6.3 AS deno-bin
FROM debian:bookworm-slim

# Install Deno
COPY --from=deno-bin /deno /usr/local/bin/deno
COPY --from=adapter /lambda-adapter /opt/extensions/lambda-adapter

WORKDIR /var/task
ENV DENO_DIR=/var/deno_dir

# Copy application
COPY . .

# Critical: Pre-warm Deno caches
# This runs the app once during build to populate runtime caches
RUN timeout 10s deno run --allow-net main.ts || [ $? -eq 124 ] || exit 1

ENV PORT=8080
CMD ["deno", "run", "--allow-net", "main.ts"]

The timeout 10s command runs the application during build, letting Deno cache all module resolution and compilation. Exit code 124 (timeout) is expected and acceptable - we're only populating caches, not actually running the server.
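The exit-code handling can be sanity-checked outside Docker. This sketch mimics the guard with `sleep` standing in for the long-running server process:

```shell
# Simulate a server that runs past the build-time budget
timeout 0.2s sleep 5
code=$?

# coreutils `timeout` exits with 124 when it had to kill the command,
# so the Dockerfile's `|| [ $? -eq 124 ]` treats that as success
[ "$code" -eq 124 ] && echo "pre-warm guard would pass"
```

Any other non-zero exit code (a crash during startup, a missing module) falls through to `exit 1` and fails the build, which is exactly what you want.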

Building and Deploying Container Images

bash
# Build for correct architecture (critical on Apple Silicon)
docker build \
  --platform linux/amd64 \
  --provenance=false \
  -t bun-lambda:latest .

# Authenticate to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin ${ECR_URI}

# Tag and push
docker tag bun-lambda:latest ${ECR_URI}:latest
docker push ${ECR_URI}:latest

# Create Lambda function
aws lambda create-function \
  --function-name bun-container-function \
  --package-type Image \
  --code ImageUri=${ECR_URI}:latest \
  --role arn:aws:iam::123456789012:role/lambda-role

Platform specification is critical: Lambda defaults to x86_64, but Docker on Apple Silicon defaults to arm64. Always specify --platform linux/amd64 unless you're using arm64 Lambda functions.

Performance Benchmarks

Real-world performance data from CPU-intensive benchmark (SHA3-512 hash generation, 50 iterations):
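For reference, the workload is roughly the following. The exact benchmark code isn't published here, so this is an approximation using Node's built-in crypto (Bun and Deno also expose node:crypto, so the same file runs on all three runtimes):

```typescript
// Approximation of the benchmark workload: 50 rounds of SHA3-512 hashing
import { createHash } from "node:crypto";

function benchmark(iterations = 50): string {
  let digest = "seed";
  for (let i = 0; i < iterations; i++) {
    // Each round hashes the previous digest, keeping the work CPU-bound
    digest = createHash("sha3-512").update(digest).digest("hex");
  }
  return digest;
}

const start = performance.now();
benchmark();
console.log(`50 iterations took ${(performance.now() - start).toFixed(2)}ms`);
```

Chained hashing prevents the runtime from optimizing the loop away, so the measured duration reflects raw hashing throughput rather than dead-code elimination.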

Cold Start Times (Initialization Duration)

Runtime             Average   p10     p90     Range
Node.js (managed)   152ms     146ms   160ms   ~14ms
Deno (container)    267ms     185ms   297ms   ~30ms
Bun (layer)         548ms     500ms   603ms   ~56ms

Key insights:

  • Node.js has the fastest cold starts: Deno's are 76% slower and Bun's 260% slower
  • AWS optimizations for managed runtimes make a significant difference
  • Bun's layer approach adds substantial initialization overhead

Warm Invocation Duration

Runtime     Average   p50      p90
Deno        13.7ms    6.7ms    19.8ms
Node.js     21.3ms    8.1ms    56.7ms
Bun         50.5ms    15.2ms   68.2ms

Key insights:

  • Deno wins warm invocations by 36% vs Node.js
  • Container-based Deno outperforms despite being a custom runtime
  • Bun's execution performance lags in this benchmark

Cost Analysis

Let's analyze the real cost implications for a typical workload: 10 million invocations per month, 512MB memory, 100ms average duration, with 10% cold start rate.

Node.js (Managed Runtime)

Compute:  10M × 100ms × $0.0000000083/ms = $8.30
Requests: 10M × $0.0000002 = $2.00
Total:    $10.30/month

Bun (Custom Layer)

Assuming 50ms faster execution but 500ms slower cold starts:

Cold start overhead: 1M × 500ms × $0.0000000083/ms = $4.15
Compute savings:     10M × 50ms × $0.0000000083/ms = $4.15 saved
Net effect: approximately equal to Node.js
Trade-off: worse user experience during cold starts

Deno (Container)

Assuming 115ms slower cold starts but 35% faster warm execution:

Cold start overhead: 1M × 115ms × $0.0000000083/ms = $0.95
Compute savings:     9M × 35ms × $0.0000000083/ms = $2.62 saved
Net savings: ~$1.67/month
Best balance of performance and cost

Decision factors:

  • High steady traffic favors faster warm invocations (Deno)
  • Frequent cold starts favor Node.js managed runtime
  • CPU-intensive functions benefit more from runtime performance
  • I/O-bound workloads (95% of Lambda functions) see minimal runtime impact
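The arithmetic above can be wrapped in a small model to plug in your own workload numbers. This is a sketch: the per-ms and per-request prices are the us-east-1 figures used above (512MB memory) and may change, and it bills cold-start init time the way custom runtimes are billed:

```typescript
// Cost model matching the analysis above: 512MB functions in us-east-1
const COMPUTE_PER_MS = 0.0000000083; // GB-second rate scaled to 512MB, per ms
const PER_REQUEST = 0.0000002;

function monthlyCost(opts: {
  invocations: number;        // total invocations per month
  warmMs: number;             // average billed duration per invocation
  coldStartRate: number;      // fraction of invocations that cold start
  coldStartPenaltyMs: number; // extra billed init time per cold start
}): number {
  const compute = opts.invocations * opts.warmMs * COMPUTE_PER_MS;
  const coldOverhead =
    opts.invocations * opts.coldStartRate * opts.coldStartPenaltyMs * COMPUTE_PER_MS;
  const requests = opts.invocations * PER_REQUEST;
  return compute + coldOverhead + requests;
}

// Node.js baseline from the analysis: 10M invocations/month at 100ms average
const nodeCost = monthlyCost({
  invocations: 10_000_000, warmMs: 100, coldStartRate: 0, coldStartPenaltyMs: 0,
});
console.log(`Node.js baseline: $${nodeCost.toFixed(2)}/month`); // ≈ $10.30
```

Varying `warmMs` and `coldStartPenaltyMs` reproduces the Bun and Deno scenarios above and makes it easy to test your own traffic assumptions.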

Common Pitfalls

Platform Architecture Mismatch

Building container images for the wrong CPU architecture causes cryptic runtime errors.

Symptom:

Error: Runtime exited with error: exit status 1
Runtime.InvalidEntrypoint

Root cause: Lambda defaults to x86_64, but Docker on Apple Silicon defaults to arm64.

Solution:

bash
# Always specify platform in build
docker build --platform linux/amd64 -t myfunction .

# Verify built image
docker inspect myfunction:latest | grep Architecture
# Should output: "Architecture": "amd64"

Missing Lambda Adapter Configuration

Container runs locally but fails on Lambda with connection errors.

Symptom: Function times out or returns 502 Bad Gateway.

Root cause: Lambda adapter requires PORT environment variable set to 8080.

Correct implementation:

dockerfile
ENV PORT=8080
CMD ["bun", "run", "server.ts"]
typescript
// Use environment variable in application
const port = process.env.PORT || 3000;

Bun.serve({
  port: Number(port),
  fetch(request) {
    return new Response('Hello World');
  }
});

AWS SDK Compatibility Issues

Earlier Bun versions had AWS SDK compatibility challenges including Could not resolve: 'http2' errors and SignatureDoesNotMatch errors with S3. Recent versions have improved significantly, but always test AWS SDK operations explicitly in your specific use case:

typescript
// test/aws-sdk.test.ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { describe, test, expect } from 'bun:test';

describe('AWS SDK Compatibility', () => {
  test('S3 PutObject works', async () => {
    const client = new S3Client({ region: 'us-east-1' });
    const result = await client.send(new PutObjectCommand({
      Bucket: 'test-bucket',
      Key: 'test.txt',
      Body: 'test content'
    }));
    expect(result.$metadata.httpStatusCode).toBe(200);
  });
});

Pin Bun version in Dockerfile:

dockerfile
# Use a specific version tag for stability
# (1-debian tracks the latest 1.x release; prefer an exact patch tag in production)
FROM oven/bun:1-debian

Lambda Layer Architecture Mismatch

Problem: Layer deploys successfully but function fails with "Runtime not supported" error.

Solution: Build and publish layers for both architectures:

bash
# Build for x86_64
ARCH=x64 bun run publish-layer
# Output: arn:aws:lambda:us-east-1:123:layer:bun-x64:1

# Build for arm64
ARCH=arm64 bun run publish-layer
# Output: arn:aws:lambda:us-east-1:123:layer:bun-arm64:1

Match architecture between layer and function in CDK:

typescript
import { Architecture } from 'aws-cdk-lib/aws-lambda';

const bunLayerX64 = LayerVersion.fromLayerVersionArn(
  this, 'BunLayerX64',
  'arn:aws:lambda:us-east-1:123:layer:bun-x64:1'
);

new Function(this, 'MyFunction', {
  architecture: Architecture.X86_64,
  layers: [bunLayerX64], // Must match
});

Production-Ready Implementation Patterns

Pattern 1: Deno with HTTP Server + Lambda Adapter

Here's what works well for API workloads:

typescript
// main.ts - Deno with oak framework
import { Application } from "https://deno.land/x/oak/mod.ts"; // pin a version in production

const app = new Application();

app.use((ctx) => {
  ctx.response.body = { message: "Hello from Deno on Lambda!" };
});

const port = parseInt(Deno.env.get("PORT") || "8080");
console.log(`Server running on port ${port}`);
await app.listen({ port });
dockerfile
# Optimized Dockerfile
FROM public.ecr.aws/awsguru/aws-lambda-adapter:0.9.1 AS adapter
FROM denoland/deno:bin-2.6.3 AS deno-bin
FROM debian:bookworm-slim

# Minimal dependencies
RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*

# Copy binaries
COPY --from=deno-bin /deno /usr/local/bin/deno
COPY --from=adapter /lambda-adapter /opt/extensions/lambda-adapter

WORKDIR /var/task
ENV DENO_DIR=/var/deno_dir PORT=8080

# Application
COPY . .

# Pre-warm cache (critical optimization)
RUN timeout 10s deno run -A main.ts || [ $? -eq 124 ] || exit 1

CMD ["deno", "run", "-A", "main.ts"]

Results from production deployment:

  • Cold start: 185ms (p50) vs Node.js 145ms
  • Warm invocation: 6.7ms (p50) vs Node.js 8.1ms
  • Cost reduction: ~15% due to faster execution
  • Developer experience: Native TypeScript with no build step

Pattern 2: Hybrid Approach - Runtime per Workload

Use the optimal runtime for each function type:

typescript
// runtime-selector.ts - Decision logic
type RuntimeSelection = 'nodejs' | 'bun' | 'deno';

function selectRuntime(characteristics: {
  cpuIntensive: boolean;
  ioIntensive: boolean;
  coldStartSensitive: boolean;
  requiresAWSSDK: boolean;
  typescriptNative: boolean;
}): RuntimeSelection {
  // Cold start sensitive + AWS SDK heavy = Node.js
  if (characteristics.coldStartSensitive && characteristics.requiresAWSSDK) {
    return 'nodejs';
  }

  // CPU intensive + warm traffic = Bun
  if (characteristics.cpuIntensive && !characteristics.coldStartSensitive) {
    return 'bun';
  }

  // Background jobs + TypeScript = Deno
  if (!characteristics.coldStartSensitive && characteristics.typescriptNative) {
    return 'deno';
  }

  // Default: Node.js for production safety
  return 'nodejs';
}

Architecture example:

  • API Gateway endpoints: Node.js (I/O-bound, cold start sensitive)
  • Image processing: Bun container (CPU-intensive, high memory)
  • Scheduled tasks: Deno container (TypeScript-native, predictable traffic)

Alternative Approaches to Consider

Optimize Node.js First

Before switching runtimes, consider Node.js optimizations:

typescript
// Bad: initialization in handler
export async function handler(event: APIGatewayEvent) {
  const db = await createDatabaseConnection(); // Cold start penalty
  // ...
}

// Good: initialization at module level
const db = await createDatabaseConnection(); // Outside handler

export async function handler(event: APIGatewayEvent) {
  // Use pre-initialized db
}

ES Modules for tree shaking:

typescript
// Old: CommonJS imports entire module
const AWS = require('aws-sdk');

// New: ES Modules import only needed code
import { S3Client } from '@aws-sdk/client-s3';

Often achieves similar performance improvements without runtime switch complexity.

Evaluate Rust or Go for Maximum Performance

For truly CPU-intensive workloads, compiled languages outperform all JavaScript runtimes:

Performance comparison (1000 iterations):

  • Node.js: ~100ms
  • Bun: ~50ms
  • Rust: ~5ms (20x faster than Node.js)

Trade-offs:

  • Dramatically faster execution and lower memory usage
  • Different language requires team skill investment
  • Longer compilation times, less flexible for rapid iteration

Key Takeaways

Performance Reality Check

  1. Cold starts matter more than warm execution for most Lambda workloads. 95% of functions are I/O-bound, not CPU-bound. AWS's Node.js optimizations delivered cold starts roughly 1.8x faster than Deno and 3.6x faster than Bun in these benchmarks. Alternative runtimes win on benchmarks but lose on user experience.

  2. Deno offers the best alternative runtime balance. Fastest warm invocations (36% faster than Node.js), reasonable cold starts with cache pre-warming (~185ms), and container-based deployment is more mature than Bun's layer approach.

  3. Bun requires careful evaluation for Lambda production use (as of late 2025). Slowest cold starts (~548ms average), improved but still evolving AWS SDK compatibility, and smaller production user base means limited troubleshooting resources. Well-suited for local development and careful production pilots where cold start impact is measured.

Cost and Complexity Trade-offs

  1. Container images add operational overhead. Must manage base image security updates, longer deployment times (build + push vs code upload), and ECR storage costs. Benefit: full control over runtime environment.

  2. Lambda Layers are simpler but more limited. Faster deployments and easier rollbacks, can share runtime across multiple functions, but have 250MB uncompressed limit and require architecture matching.

  3. Cost savings rarely justify complexity. Most workloads see less than 20% cost reduction, and development overhead often exceeds savings. Exception: high-volume CPU-intensive workloads.

Implementation Recommendations

  1. Start with Node.js optimization before switching runtimes. Top-level await, ES modules, layer usage, and provisioned concurrency often achieve 80% of alternative runtime benefits with zero operational complexity increase.

  2. If switching, use a phased approach. Run a proof-of-concept with a non-critical function (1-2 weeks), deploy a production pilot with limited traffic (2-4 weeks), measure actual performance and costs before expanding, and maintain rollback capability.

  3. Container images are the future of Lambda custom runtimes. Layer approach adds 300-500ms initialization overhead. Containers enable cache pre-warming and optimization. Industry trend favors containerized deployments.

  4. Consider workload characteristics, not benchmarks. I/O-bound with burst traffic? Use Node.js managed runtime. CPU-bound with steady traffic? Consider Bun or compiled languages. Background jobs with TypeScript? Try Deno with containers. Simple transformations? Use CloudFront Functions at edge.

Production Lessons

  1. Test with Lambda-like constraints locally. Use Lambda Runtime Interface Emulator, simulate read-only filesystem and memory limits, test cold start behavior - not just warm invocations.

  2. Monitor runtime-specific metrics. Track initialization duration separately from execution, measure cold start percentage (not just average duration), and alert on AWS SDK errors for compatibility issues.

  3. The best runtime is the one you don't have to manage. AWS invests heavily in Node.js Lambda optimization, official runtimes receive security patches automatically, and ecosystem tooling assumes Node.js. Alternative runtimes make sense for specific use cases, not as default choice.
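For the runtime-specific metrics in point 2, one option is CloudWatch's Embedded Metric Format, which the Tools section below also mentions: a structured JSON log line that CloudWatch extracts metrics from automatically. A sketch follows - the namespace, dimension, and metric names are made up for illustration:

```typescript
// Hypothetical helper: emit an EMF record tracking custom-runtime init duration
function emfRecord(runtime: string, initMs: number): string {
  return JSON.stringify({
    _aws: {
      Timestamp: Date.now(),
      CloudWatchMetrics: [{
        Namespace: "CustomRuntime",            // illustrative namespace
        Dimensions: [["Runtime"]],             // slice metrics per runtime
        Metrics: [{ Name: "InitDuration", Unit: "Milliseconds" }],
      }],
    },
    Runtime: runtime,
    InitDuration: initMs,
  });
}

// Lambda forwards stdout lines to CloudWatch Logs, where EMF records become metrics
console.log(emfRecord("bun", 548));
```

Because EMF rides on ordinary log output, it works identically for managed and custom runtimes and needs no extra SDK calls on the hot path.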

Tools and Resources

Deployment Frameworks:

  • SST (Serverless Stack) with native Bun support
  • AWS CDK for container and layer deployments
  • Serverless Framework custom runtime configurations

Testing Tools:

  • Lambda Runtime Interface Emulator for local testing
  • AWS X-Ray for custom runtime tracing
  • CloudWatch Embedded Metrics Format for runtime-specific measurements
