
When Middy Isn't Enough - Building Custom Lambda Middleware Frameworks

Discover the production challenges that pushed us beyond Middy's limits and how we built a custom middleware framework optimized for performance and scale


Middy covers the typical middleware needs of a small Lambda fleet, but the tradeoffs of its generic middleware-chain model become measurable once a service hits about 50 functions sharing a common middleware stack: per-invocation overhead, cold-start cost of the middleware chain, and the coupling that a shared wrapper creates between otherwise unrelated functions. At that scale the question becomes whether to continue layering on top of Middy's abstractions, replace them with AWS Lambda PowerTools, or build a project-specific middleware framework that only pays for the hooks the fleet actually uses.

This is the second part of the Middy series. It covers the scaling limits of Middy's default model, the cold-start and cost accounting, a project-specific middleware framework design with only the hooks a typical serverless backend needs, and the migration path from a Middy-wrapped Lambda to a custom-middleware-wrapped one without breaking the contract.

Production Challenges

The Multi-Tenant Validation Challenge

Our fintech platform served multiple clients, each with completely different validation rules. Customer A required UK postal codes, Customer B needed German VAT validation, and Customer C had entirely custom business rules.

Middy's static middleware approach hit a wall:

```typescript
// The problem with Middy - static configuration
const schema = getSchemaForTenant(tenantId) // We need this to be dynamic!

export const handler = middy(businessLogic)
  .use(validator({ eventSchema: schema })) // But this must be static
```

We needed dynamic schema generation at runtime, but Middy configures middleware at initialization time. The workaround? Nasty conditional logic scattered throughout our handlers, defeating the entire purpose of clean middleware separation.
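In practice, that workaround looked roughly like the following (a simplified, hypothetical sketch: the tenant IDs, `schemasByTenant` map, and `validateBody` helper are illustrative, not our production code):

```typescript
// Hypothetical sketch of the workaround: tenant branching pushed into the
// handler itself, because Middy's validator() needs its schema at init time.
type Schema = { required: string[] }

const schemasByTenant: Record<string, Schema> = {
  'tenant-a': { required: ['postalCode'] }, // UK postal code rules
  'tenant-b': { required: ['vatNumber'] },  // German VAT rules
}

function validateBody(body: Record<string, unknown>, schema: Schema): boolean {
  // Minimal required-fields check standing in for full schema validation
  return schema.required.every((field) => body[field] !== undefined)
}

export const handler = async (event: {
  pathParameters?: { tenantId?: string }
  body: Record<string, unknown>
}) => {
  // Conditional validation mixed into business code - exactly the smell
  // that middleware separation was supposed to eliminate
  const tenantId = event.pathParameters?.tenantId ?? ''
  const schema = schemasByTenant[tenantId]
  if (!schema || !validateBody(event.body, schema)) {
    return { statusCode: 400, body: 'Invalid request data' }
  }
  return { statusCode: 200, body: 'ok' }
}
```

Every handler that touched tenant data needed a copy of this branching, which is what drove us toward runtime-resolved middleware.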

Technical Impact: Additional development time and a custom validation layer that increased maintenance overhead.

The Bundle Size Challenge

As our middleware stack grew to 8 different Middy packages, performance monitoring revealed concerning metrics:

Performance Metrics:

  • Bundle size: 2MB (up from 400KB)
  • Cold start time: 1.2 seconds (target: <500ms)
  • Memory usage: 128MB baseline
  • First response time: 1.8 seconds

For financial APIs handling frequent transactions, this performance degradation creates user experience issues. The elegant middleware abstraction came at a significant cost to responsiveness.

The Team Consistency Challenge

Across multiple developers working on different services, middleware usage patterns became inconsistent:

```typescript
// Developer A's approach
export const handler = middy(businessLogic)
  .use(httpJsonBodyParser())
  .use(validator())
  .use(httpErrorHandler())

// Developer B's approach (order is different!)
export const handler = middy(businessLogic)
  .use(httpErrorHandler()) // Error handling first?
  .use(httpJsonBodyParser())
  .use(validator())

// Developer C's approach
export const handler = middy(businessLogic)
  .use(customAuth()) // Team-specific middleware
  .use(httpJsonBodyParser())
  // No validator at all!
```

Result: Production incidents, debugging complexity, and error handling that worked differently across services. We needed enforcement, not just conventions.

Designing a Custom Middleware Framework

These challenges led us to rethink middleware architecture entirely. Our custom framework addressed three core principles:

1. Performance-First Architecture

We built a lightweight context system and pre-compiled middleware chains for maximum speed:

```typescript
interface LightweightContext {
  event: any
  context: any
  response?: any
  metadata: Map<string, any> // Memory efficient storage
  startTime: number
}

type MiddlewareHandler = (
  ctx: LightweightContext,
  next: () => Promise<void>
) => Promise<void>

class CustomMiddlewareEngine {
  private middlewares: MiddlewareHandler[] = []
  private isCompiled = false
  private compiledChain?: (ctx: LightweightContext) => Promise<void>

  use(middleware: MiddlewareHandler): this {
    if (this.isCompiled) {
      throw new Error('Cannot add middleware after compilation')
    }
    this.middlewares.push(middleware)
    return this
  }

  // Pre-compile middleware chain for performance
  private compile(): void {
    const chain = this.middlewares.reduceRight(
      (next, middleware) => (ctx: LightweightContext) =>
        middleware(ctx, () => next(ctx)),
      () => Promise.resolve()
    )
    this.compiledChain = chain
    this.isCompiled = true
  }

  async execute(event: any, context: any): Promise<any> {
    if (!this.isCompiled) this.compile()

    const ctx: LightweightContext = {
      event,
      context,
      metadata: new Map(),
      startTime: Date.now()
    }

    try {
      if (!this.compiledChain) {
        throw new Error('Middleware chain not compiled')
      }
      await this.compiledChain(ctx)
      return ctx.response
    } catch (error) {
      return this.handleError(error, ctx)
    }
  }
}
```

Key optimization: We pre-compile the middleware chain instead of building it on every request. This single change cut our middleware overhead by 40%.
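Wiring the engine into a Lambda entry point then becomes a one-liner at module scope. The sketch below reproduces the engine in minimal form so it stands alone; the single inline middleware is illustrative:

```typescript
// Minimal self-contained sketch of the compile-once chain plus the Lambda
// entry-point wiring. Module scope runs once per cold start, so the engine
// (and its compiled chain after the first call) survives warm invocations.
type Ctx = { event: unknown; context: unknown; response?: unknown }
type Mw = (ctx: Ctx, next: () => Promise<void>) => Promise<void>

class MiniEngine {
  private middlewares: Mw[] = []
  private compiled?: (ctx: Ctx) => Promise<void>

  use(mw: Mw): this {
    this.middlewares.push(mw)
    return this
  }

  async execute(event: unknown, context: unknown): Promise<unknown> {
    // Compile once, on the first invocation; warm requests reuse the chain
    this.compiled ??= this.middlewares.reduceRight<(ctx: Ctx) => Promise<void>>(
      (next, mw) => (ctx) => mw(ctx, () => next(ctx)),
      () => Promise.resolve()
    )
    const ctx: Ctx = { event, context }
    await this.compiled(ctx)
    return ctx.response
  }
}

const engine = new MiniEngine().use(async (ctx, next) => {
  ctx.response = { statusCode: 200, body: JSON.stringify(ctx.event) }
  await next()
})

export const handler = (event: unknown, context: unknown) =>
  engine.execute(event, context)
```

Keeping the engine at module scope is what lets the one-time compilation cost amortize across warm invocations.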

2. Dynamic Configuration Support

For our multi-tenant validation problem, we built dynamic middleware that resolves configuration at runtime:

```typescript
interface DynamicValidationOptions {
  getSchema: (ctx: LightweightContext) => Promise<any>
  cacheKey?: (ctx: LightweightContext) => string
}

const dynamicValidator = (options: DynamicValidationOptions): MiddlewareHandler => {
  const schemaCache = new Map<string, any>()

  return async (ctx, next) => {
    let schema: any

    if (options.cacheKey) {
      const key = options.cacheKey(ctx)
      schema = schemaCache.get(key)

      if (!schema) {
        schema = await options.getSchema(ctx)
        schemaCache.set(key, schema)
      }
    } else {
      schema = await options.getSchema(ctx)
    }

    const isValid = validateAgainstSchema(ctx.event, schema)
    if (!isValid) {
      throw new ValidationError('Invalid request data')
    }

    await next()
  }
}

// Usage with multi-tenant support
const handler = new CustomMiddlewareEngine()
  .use(dynamicValidator({
    getSchema: async (ctx) => {
      const tenantId = ctx.event.pathParameters?.tenantId
      return await getTenantSchema(tenantId)
    },
    cacheKey: (ctx) => `tenant:${ctx.event.pathParameters?.tenantId}`
  }))
```

This solved our multi-tenant validation while maintaining performance through intelligent caching.

3. Team Convention Enforcement

Instead of hoping developers follow conventions, we built enforcement into the framework:

```typescript
interface TeamStandards {
  requiredMiddlewares: string[]
  forbiddenMiddlewares?: string[]
  middlewareOrder: string[]
}

const teamStandardsEnforcer = (standards: TeamStandards): MiddlewareHandler => {
  return async (ctx, next) => {
    const appliedMiddlewares = ctx.metadata.get('middlewares') || []

    // Validate required middlewares are present
    for (const required of standards.requiredMiddlewares) {
      if (!appliedMiddlewares.includes(required)) {
        throw new Error(`Required middleware missing: ${required}`)
      }
    }

    await next()
  }
}

// Create standardized handler factory
const createStandardHandler = (businessLogic: Function) => {
  return new CustomMiddlewareEngine()
    .use(teamStandardsEnforcer({
      requiredMiddlewares: ['auth', 'validation', 'errorHandler'],
      middlewareOrder: ['auth', 'validation', 'businessLogic', 'errorHandler']
    }))
    .use(authMiddleware())
    .use(validationMiddleware())
    .use(wrapBusinessLogic(businessLogic))
    .use(errorHandlerMiddleware())
}
```

Now teams couldn't accidentally skip critical middleware or change the ordering. The framework enforced standards.
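The enforcer above reads middleware names from `ctx.metadata`, but the registration step isn't shown. One hypothetical way to close that gap (the `named` wrapper and `checkStandards` helper below are illustrative, not part of the framework code above) is to tag each middleware with a name when it is wrapped and validate the required set once, before the chain ever runs:

```typescript
// Hypothetical sketch: middleware carry a name attached at wrap time, and the
// required set is validated eagerly rather than per request.
type Mw = ((ctx: unknown, next: () => Promise<void>) => Promise<void>) & {
  mwName?: string
}

const named = (name: string, mw: Mw): Mw => {
  mw.mwName = name // tag the function object so the check can see it
  return mw
}

const checkStandards = (middlewares: Mw[], required: string[]): void => {
  const names = middlewares
    .map((m) => m.mwName)
    .filter((n): n is string => Boolean(n))
  for (const r of required) {
    if (!names.includes(r)) {
      throw new Error(`Required middleware missing: ${r}`)
    }
  }
}

// Usage sketch: engine.use(named('auth', authMiddleware())), then call
// checkStandards(registeredMiddlewares, ['auth', 'validation', 'errorHandler'])
// once at startup - a misconfigured handler fails fast, not mid-request.
```

Validating eagerly (at startup rather than on every invocation) also keeps the enforcement cost out of the hot path.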

Performance Benchmarking - The Numbers

We ran comprehensive benchmarks comparing Middy with our custom framework using identical functionality:

Test Scenario:

  • Simple HTTP API with auth, validation, error handling
  • 1000 cold starts, 10,000 warm requests
  • Node.js 18 runtime, 1024MB memory
  • Tests run on identical AWS Lambda configurations
  • Average values across multiple test runs with consistent conditions

Results:

| Metric | Middy + 5 Middlewares | Custom Framework | Improvement |
| --- | --- | --- | --- |
| Bundle Size | 1.8MB | 0.6MB | 67% smaller |
| Cold Start | 980ms | 320ms | 67% faster |
| Warm Request | 45ms | 28ms | 38% faster |
| Memory Usage | 128MB | 94MB | 27% less |

The benchmark results showed meaningful performance improvements across all measured metrics with our custom framework.

Code Comparison

Middy Approach:

```typescript
export const handler = middy(businessLogic)
  .use(httpJsonBodyParser())
  .use(httpCors({ origin: true }))
  .use(validator({ eventSchema: schema }))
  .use(httpErrorHandler())
  .use(httpSecurityHeaders())
```

Custom Framework:

```typescript
const handler = new CustomMiddlewareEngine()
  .use(jsonParser())
  .use(corsHandler({ origin: true }))
  .use(requestValidator(schema))
  .use(businessLogicWrapper(businessLogic))
  .use(errorHandler())
  .use(securityHeaders())
```

Similar API, drastically different performance characteristics.

Real-World Custom Middleware Examples

Here are some production middleware patterns that leverage dynamic behavior unavailable in Middy:

1. Circuit Breaker with Exponential Backoff

```typescript
interface CircuitBreakerOptions {
  failureThreshold: number
  recoveryTimeout: number
  monitor?: (state: 'open' | 'closed' | 'half-open') => void
}

const circuitBreaker = (options: CircuitBreakerOptions): MiddlewareHandler => {
  let failures = 0
  let lastFailure = 0
  let state: 'open' | 'closed' | 'half-open' = 'closed'

  return async (ctx, next) => {
    const now = Date.now()

    // Check if we should attempt recovery
    if (state === 'open' && now - lastFailure > options.recoveryTimeout) {
      state = 'half-open'
      options.monitor?.(state)
    }

    // Block requests if circuit is open
    if (state === 'open') {
      throw new Error('Circuit breaker is open - service temporarily unavailable')
    }

    try {
      await next()

      // Success - reset failures
      if (failures > 0) {
        failures = 0
        state = 'closed'
        options.monitor?.(state)
      }
    } catch (error) {
      failures++
      lastFailure = now

      if (failures >= options.failureThreshold) {
        state = 'open'
        options.monitor?.(state)
      }

      throw error
    }
  }
}
```

This middleware automatically protects downstream services from cascading failures - something that requires significant workarounds in Middy's static configuration model.

2. Smart Caching with Invalidation

```typescript
interface CacheOptions {
  ttl: number
  keyGenerator: (ctx: LightweightContext) => string
  shouldCache: (ctx: LightweightContext) => boolean
  invalidateOn?: string[]
}

const smartCache = (options: CacheOptions): MiddlewareHandler => {
  const cache = new Map<string, { data: any, expires: number }>()

  return async (ctx, next) => {
    const cacheKey = options.keyGenerator(ctx)
    const now = Date.now()

    // Check cache hit
    if (options.shouldCache(ctx)) {
      const cached = cache.get(cacheKey)
      if (cached && cached.expires > now) {
        ctx.response = cached.data
        ctx.metadata.set('cache', 'hit')
        return // Skip remaining middleware
      }
    }

    await next()

    // Cache the response
    if (ctx.response && options.shouldCache(ctx)) {
      cache.set(cacheKey, {
        data: ctx.response,
        expires: now + options.ttl
      })
      ctx.metadata.set('cache', 'miss')
    }
  }
}

// Usage with intelligent caching
const handler = new CustomMiddlewareEngine()
  .use(smartCache({
    ttl: 5 * 60 * 1000, // 5 minutes
    keyGenerator: (ctx) => `user:${ctx.event.pathParameters?.userId}`,
    shouldCache: (ctx) => ctx.event.httpMethod === 'GET'
  }))
  .use(businessLogicWrapper(getUserProfile))
```

This middleware can short-circuit the entire request pipeline when cache hits - a significant performance advantage over Middy's linear middleware execution model.

Migration Strategy - From Middy to Custom

Moving from Middy to our custom framework in production required a careful, phased approach:

Phase 1: Hybrid Approach

```typescript
// Mix custom middleware with existing Middy
export const handler = middy(businessLogic)
  .use(customPerformanceMiddleware()) // Our custom
  .use(httpJsonBodyParser())          // Middy
  .use(customValidation())            // Our custom
  .use(httpErrorHandler())            // Middy
```

Phase 2: Feature Parity

```typescript
// Build custom equivalents for all Middy middleware
const customJsonParser = (): MiddlewareHandler => {
  return async (ctx, next) => {
    if (ctx.event.body && typeof ctx.event.body === 'string') {
      try {
        ctx.event.body = JSON.parse(ctx.event.body)
      } catch (error) {
        throw new Error('Invalid JSON body')
      }
    }
    await next()
  }
}
```
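Each remaining Middy middleware got a similarly small counterpart. As one more example, a custom equivalent of `httpErrorHandler` might look like this (a hedged sketch: the `statusCode`-on-error convention and the response shape are assumptions, not the exact production code):

```typescript
// Hypothetical custom counterpart to Middy's httpErrorHandler: catches errors
// thrown by downstream middleware and converts them into an HTTP response.
type Ctx = { response?: { statusCode: number; body: string } }
type Mw = (ctx: Ctx, next: () => Promise<void>) => Promise<void>

const customErrorHandler = (): Mw => {
  return async (ctx, next) => {
    try {
      await next()
    } catch (error) {
      // Errors may carry their own statusCode (e.g. a ValidationError -> 400);
      // anything else falls back to a generic 500
      const statusCode = (error as { statusCode?: number }).statusCode ?? 500
      ctx.response = {
        statusCode,
        body: JSON.stringify({ message: (error as Error).message }),
      }
    }
  }
}
```

Because it wraps `next()`, this middleware must sit early in the chain so that every downstream failure passes through it, mirroring where `httpErrorHandler` sat in the Middy stack.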

Phase 3: Performance Optimization

Once all middleware were ported, we optimized for our specific use cases, achieving the 67% performance improvement shown earlier.

Phase 4: Team Training & Standards

The final phase involved training the team and establishing new development standards around our custom framework.

When to Choose Custom vs Middy

Based on our experience, here's the decision matrix:

Choose Middy When:

  • Team is new to middleware patterns
  • Standard use cases (HTTP APIs, basic validation)
  • Fast development is the priority
  • Bundle size < 1MB is acceptable
  • Cold start < 1s is acceptable
  • Limited development resources for custom solutions

Choose Custom Framework When:

  • Performance is critical (< 500ms cold start required)
  • Complex business rules requiring dynamic behavior
  • Team has middleware expertise
  • Specific compliance/security requirements
  • Large-scale applications (50+ functions)
  • Need for team standardization and enforcement

Hybrid Approach When:

  • Migration phase between solutions
  • Different performance requirements per function
  • Learning custom patterns while maintaining productivity

Production Lessons Learned

1. Performance vs Developer Experience

Custom frameworks can deliver significant performance improvements but require additional development time. Evaluate this trade-off based on your requirements and team capabilities.

2. Team Adoption is Critical

The best framework is worthless if your team can't adopt it. Change management and training are as important as the technical solution.

3. Maintenance Overhead is Real

Custom solutions mean custom maintenance. Middy's community support has real value - factor this into your decision.

4. Gradual Migration is Safer

Incremental migrations reduce risk. The gradual, phased approach proved much safer and allowed us to validate our approach step by step.

Testing Custom Middleware

Testing our custom framework required a different approach:

```typescript
describe('Custom Middleware Framework', () => {
  test('should execute middleware chain in order', async () => {
    const executionOrder: string[] = []

    const middleware1 = async (ctx: any, next: Function) => {
      executionOrder.push('before-1')
      await next()
      executionOrder.push('after-1')
    }

    const middleware2 = async (ctx: any, next: Function) => {
      executionOrder.push('before-2')
      await next()
      executionOrder.push('after-2')
    }

    const engine = new CustomMiddlewareEngine()
      .use(middleware1)
      .use(middleware2)

    await engine.execute({}, {})

    expect(executionOrder).toEqual([
      'before-1', 'before-2', 'after-2', 'after-1'
    ])
  })

  test('should handle circuit breaker correctly', async () => {
    const failingMiddleware = async () => {
      throw new Error('Service unavailable')
    }

    const engine = new CustomMiddlewareEngine()
      .use(circuitBreaker({ failureThreshold: 2, recoveryTimeout: 1000 }))
      .use(failingMiddleware)

    // First failure
    await expect(engine.execute({}, {})).rejects.toThrow('Service unavailable')

    // Second failure - should open circuit
    await expect(engine.execute({}, {})).rejects.toThrow('Service unavailable')

    // Third request - should be blocked by circuit breaker
    await expect(engine.execute({}, {})).rejects.toThrow('Circuit breaker is open')
  })
})
```

Production Checklist

Before taking a custom middleware framework to production:

  • Performance benchmarks documented and validated
  • Error handling comprehensive across all scenarios
  • Monitoring and alerting integrated
  • Team training completed with hands-on exercises
  • Documentation up-to-date and accessible
  • Rollback plan tested and ready
  • A/B testing capability implemented
  • Security review passed with penetration testing
  • Load testing completed under realistic conditions
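For the A/B testing item, one lightweight option is to choose between the legacy Middy handler and the custom-framework handler at module scope, routing a configurable fraction of invocations to the new path. This is a hypothetical sketch; the rollout-percentage parameter (e.g. sourced from an environment variable) and the injectable `random` function are assumptions for illustration:

```typescript
// Hypothetical A/B switch: route a configurable fraction of traffic to the
// custom-framework handler, keeping the Middy handler as the control group.
type LambdaHandler = (event: unknown, context: unknown) => Promise<unknown>

const pickHandler = (
  middyHandler: LambdaHandler,
  customHandler: LambdaHandler,
  rolloutPct: number, // 0..100, e.g. read from an env var at cold start
  random: () => number = Math.random // injectable for deterministic tests
): LambdaHandler => {
  return (event, context) => {
    const useCustom = random() * 100 < rolloutPct
    // In production you'd also tag the choice (e.g. a metric dimension) so
    // dashboards can split latency and error rates by variant.
    const chosen = useCustom ? customHandler : middyHandler
    return chosen(event, context)
  }
}
```

Dialing the percentage up gradually gives the rollback plan teeth: setting it back to zero instantly restores the Middy path without a deploy.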

The Bottom Line

Middy is an excellent starting point for most Lambda applications. But when you're operating at scale, dealing with complex business requirements, or facing strict performance constraints, a custom middleware framework can be transformative.

Key Takeaways:

  1. Start with Middy - It's proven, production-ready, and great for learning middleware patterns
  2. Measure before optimizing - Let performance data drive your decisions, not assumptions
  3. Team consistency matters more than framework choice - Standards and enforcement are critical
  4. Custom isn't always better - Factor in maintenance costs and team expertise
  5. Migration requires careful planning - Gradual approaches reduce risk and allow validation

The journey from Middy to a custom framework demonstrates that sometimes the best solution is the one you build yourself - but only when you have compelling technical reasons and the team expertise to execute it well.

The middleware patterns we learned from Middy became the foundation for something even better suited to our specific needs. Whether you stick with Middy or build your own, the principles of clean middleware design will serve you well in your serverless journey.
