We shipped a bug to production last year that TypeScript should have caught. A partner integration started sending age as a string ("29" instead of 29) in their user creation payloads. Our TypeScript types said age was number | undefined, so every handler downstream assumed it was safe to do arithmetic on it. The result: "29" + 1 evaluated to "291", which the ORM interpolated into an insert that Postgres silently accepted. We didn't find the problem until three weeks later, when a billing report divided lifetime value by customer age and produced nonsensical results. By then, roughly 14,000 records had corrupted age data. The cleanup involved a migration script, an audit of every downstream system that consumed the field, and an uncomfortable conversation with the partner about their API client.

The frustrating part: our types were perfect. The compiler was happy. The code review looked clean. But TypeScript’s type annotations are a compile-time fiction. They disappear completely at runtime, which is exactly when the bad data showed up.

This experience made us rethink how we handle validation in our Express APIs. Not just request body validation, but validation at every boundary: incoming requests, outgoing responses, environment variables, all of it. Here’s the pattern we landed on after a year of iteration.

The problem: TypeScript lies at the boundary

TypeScript’s type system is powerful, but it has a basic limitation: types are erased at runtime. When a request hits your API, TypeScript has no say in what’s actually in the payload:

interface CreateUserInput {
  username: string;
  email: string;
  age?: number;
}

// TypeScript is happy, but req.body could be literally anything
const input: CreateUserInput = req.body;

This compiles. The types look correct. But if someone sends { "foo": 123 }, your application accepts it as a valid CreateUserInput. The type annotation is a lie, a comfortable one that lulls you into trusting data you shouldn’t.

The problem compounds quickly. That unvalidated input flows into your service layer, gets persisted to a database, gets serialized into events, gets consumed by other services. By the time you discover the data was malformed, it’s everywhere.

The cost of a validation bug is never the bug itself. It’s the blast radius, how far bad data travels before someone notices.

We needed validation that worked at runtime and kept our types in sync, without maintaining two parallel definitions that inevitably drift apart.

The foundation: Zod as the single source of truth

Zod bridges the gap between compile-time types and runtime validation. Instead of defining types and validation separately (and hoping they stay in sync), we define a schema once and derive everything from it:

import { z } from 'zod';

const CreateUserSchema = z.object({
  username: z.string().min(3).max(20),
  email: z.string().email(),
  age: z.number().int().min(18).optional()
});

// The type is inferred from the schema — always in sync
type CreateUserInput = z.infer<typeof CreateUserSchema>;

One schema. One source of truth. The type and validation can never drift apart because the type is the schema. If you add a field to the schema, the type updates. If you change a constraint, the type still matches. There’s no second definition to forget about.

Building validation middleware

Our first middleware only validated req.body. That was a mistake. Query parameters, path parameters, and headers are just as untrusted. We ended up with a middleware that validates all of them:

import { Request, Response, NextFunction } from 'express';
import { ZodSchema, ZodError } from 'zod';

interface ValidationSchemas {
  body?: ZodSchema;
  query?: ZodSchema;
  params?: ZodSchema;
  headers?: ZodSchema;
}

function validate(schemas: ValidationSchemas) {
  return (req: Request, res: Response, next: NextFunction) => {
    try {
      if (schemas.body) {
        req.body = schemas.body.parse(req.body);
      }
      if (schemas.query) {
        req.query = schemas.query.parse(req.query);
      }
      if (schemas.params) {
        req.params = schemas.params.parse(req.params);
      }
      if (schemas.headers) {
        // Only validate expected headers, don't clobber the rest
        schemas.headers.parse(req.headers);
      }
      next();
    } catch (error) {
      if (error instanceof ZodError) {
        res.status(400).json({
          error: 'Validation failed',
          details: error.errors.map((e) => ({
            path: e.path.join('.'),
            message: e.message
          }))
        });
        return;
      }
      next(error);
    }
  };
}

The middleware either passes validated (and parsed) data to the next handler or returns a structured error response. No invalid data gets through.

Validating query parameters: the coercion problem

Query parameters are always strings. When Express parses ?page=2&limit=50, both page and limit arrive as "2" and "50", strings, not numbers. This is where Zod’s coercion becomes important:

const PaginationSchema = z.object({
  page: z.coerce.number().int().min(1).default(1),
  limit: z.coerce.number().int().min(1).max(100).default(20),
  sort: z.enum(['asc', 'desc']).default('desc'),
  search: z.string().trim().optional()
});

type PaginationQuery = z.infer<typeof PaginationSchema>;

app.get(
  '/users',
  validate({ query: PaginationSchema }),
  (req, res) => {
    // req.query.page is now a number, not a string
    // req.query.limit is now a number, not a string
    // Defaults are applied if parameters were omitted
    const { page, limit, sort, search } = req.query as PaginationQuery;

    const offset = (page - 1) * limit;
    // ... query the database with confidence
  }
);

Without coercion, z.number() would reject the string "2" because it isn’t a number. z.coerce.number() runs Number(value) first, converting "2" to 2 before validating. Subtle distinction, but it matters for anything that comes from the URL.

Every value from a query string, path parameter, or HTTP header is a string. If your schema expects a number or boolean, you need to coerce, not just validate.

Validating path parameters

Path parameters have the same string-coercion problem as query parameters, but they tend to be simpler, usually just an ID:

const UserParamsSchema = z.object({
  id: z.coerce.number().int().positive()
});

// For UUID-based IDs:
const ResourceParamsSchema = z.object({
  id: z.string().uuid()
});

app.get(
  '/users/:id',
  validate({ params: UserParamsSchema }),
  (req, res) => {
    const { id } = req.params as z.infer<typeof UserParamsSchema>;
    // id is a number, not the string "42"
  }
);

This seems minor, but it prevents bugs where you accidentally pass a string ID into a function expecting a number and JavaScript’s loose comparison rules silently let it slide.

Validating headers

Sometimes you need to validate specific headers: API keys, content types, custom correlation IDs for tracing:

const ApiKeyHeadersSchema = z.object({
  'x-api-key': z.string().min(1, 'API key is required'),
  'x-request-id': z.string().uuid().optional()
}).passthrough(); // Don't strip other headers

app.post(
  '/webhooks/payment',
  validate({
    headers: ApiKeyHeadersSchema,
    body: PaymentWebhookSchema
  }),
  (req, res) => {
    // Both headers and body are validated
  }
);

The .passthrough() matters here. A Zod object schema strips unknown keys from its parsed output by default, so if you ever assigned the parsed result back onto req.headers, every header not in the schema (content-type, host, and so on) would vanish and break downstream middleware. Our middleware only parses headers without reassigning them, but .passthrough() makes the intent explicit and keeps the schema safe to reuse anywhere the result is assigned back.

Response validation: catching bugs before your consumers do

This is the one most teams skip, and we think it’s the most valuable. Validating what your API sends, not just what it receives, catches a whole category of bugs before they reach consumers.

import { z } from 'zod';

const UserResponseSchema = z.object({
  id: z.number(),
  username: z.string(),
  email: z.string().email(),
  createdAt: z.string().datetime()
});

const UserListResponseSchema = z.object({
  users: z.array(UserResponseSchema),
  total: z.number(),
  page: z.number(),
  limit: z.number()
});

type UserResponse = z.infer<typeof UserResponseSchema>;
type UserListResponse = z.infer<typeof UserListResponseSchema>;

We use a helper that validates and sends in one call:

function validatedJson<T>(
  res: Response,
  schema: ZodSchema<T>,
  data: unknown,
  statusCode = 200
): void {
  const result = schema.safeParse(data);
  if (!result.success) {
    // In production, log the error and send a generic 500
    // In development, send the full Zod error for debugging
    console.error('Response validation failed:', result.error.errors);

    if (process.env.NODE_ENV === 'development') {
      res.status(500).json({
        error: 'Response validation failed',
        details: result.error.errors
      });
      return;
    }

    res.status(500).json({ error: 'Internal server error' });
    return;
  }

  res.status(statusCode).json(result.data);
}

app.get(
  '/users',
  validate({ query: PaginationSchema }),
  async (req, res) => {
    const { page, limit } = req.query as PaginationQuery;
    const result = await userService.list(page, limit);

    validatedJson(res, UserListResponseSchema, result);
  }
);

Why does this matter? Say someone refactors userService.list and accidentally stops including createdAt in the response. Without response validation, the API silently sends responses missing a field that clients depend on. The bug surfaces as a cryptic undefined error in someone else’s frontend, maybe days later. With response validation, the deployment catches it. Your service throws a 500 with a clear log message the moment the schema stops matching.

Request validation protects your system from the outside world. Response validation protects the outside world from your system. You need both.

We run validatedJson in blocking mode in development and staging. In production, we log mismatches but send the response anyway, a small variation on the helper above. The development-time catches alone are worth the setup cost.

Environment variable validation: fail fast on misconfiguration

One of the best uses of Zod has nothing to do with HTTP. We validate environment variables at startup:

const EnvSchema = z.object({
  NODE_ENV: z.enum(['development', 'staging', 'production']).default('development'),
  PORT: z.coerce.number().int().min(1).max(65535).default(3000),
  DATABASE_URL: z.string().url(),
  REDIS_URL: z.string().url(),
  JWT_SECRET: z.string().min(32, 'JWT secret must be at least 32 characters'),
  API_RATE_LIMIT: z.coerce.number().int().positive().default(100),
  ALLOWED_ORIGINS: z
    .string()
    .transform((val) => val.split(',').map((s) => s.trim()))
    .pipe(z.array(z.string().url()))
});

// Parse at startup — crash immediately if config is invalid
export const env = EnvSchema.parse(process.env);

// Now fully typed throughout your application:
// env.PORT is number
// env.DATABASE_URL is string
// env.ALLOWED_ORIGINS is string[]

This replaces dozens of scattered process.env.WHATEVER || 'default' calls with a single, validated, typed configuration object. If someone deploys without setting DATABASE_URL, the app crashes on startup with a clear error instead of crashing minutes later when the first database query runs and produces a baffling connection error.

The ALLOWED_ORIGINS example shows a useful pattern: the raw environment variable is a comma-separated string ("https://app.example.com, https://staging.example.com"), and the schema transforms it into a validated array of URLs. One definition handles parsing, transformation, and validation.

Schema composition: reuse without repetition

Real APIs don’t have one schema per resource. You need variations: one for creation, one for updates (where most fields are optional), one for responses (which include server-generated fields like id and createdAt). Zod’s composition methods keep these DRY:

// Base schema: the core fields of a user
const UserBaseSchema = z.object({
  username: z.string().min(3).max(20),
  email: z.string().email(),
  age: z.number().int().min(18).optional(),
  bio: z.string().max(500).optional(),
  role: z.enum(['user', 'admin', 'moderator']).default('user')
});

// Create: all required fields must be present
const CreateUserSchema = UserBaseSchema;

// Update: everything is optional (PATCH semantics)
const UpdateUserSchema = UserBaseSchema.partial();

// Admin create: extend with admin-only fields
const AdminCreateUserSchema = UserBaseSchema.extend({
  role: z.enum(['user', 'admin', 'moderator']),  // Required, no default
  verified: z.boolean().default(false)
});

// Response: includes server-generated fields
const UserResponseSchema = UserBaseSchema.extend({
  id: z.number(),
  createdAt: z.string().datetime(),
  updatedAt: z.string().datetime()
});

// Summary: for list endpoints, omit heavy fields
const UserSummarySchema = UserResponseSchema.omit({ bio: true });

// Derive types from all of them
type CreateUserInput = z.infer<typeof CreateUserSchema>;
type UpdateUserInput = z.infer<typeof UpdateUserSchema>;
type AdminCreateUserInput = z.infer<typeof AdminCreateUserSchema>;
type UserResponse = z.infer<typeof UserResponseSchema>;
type UserSummary = z.infer<typeof UserSummarySchema>;

The methods: .partial() makes all fields optional, good for PATCH endpoints. .extend() adds or overrides fields, useful for admin variants or response schemas. .omit() removes fields, good for summary views. .pick() keeps only specific fields. .merge() combines two schemas.

Adding a field to UserBaseSchema automatically propagates to every derived schema. No manual synchronization across five type definitions.

app.post('/users', validate({ body: CreateUserSchema }), createUserHandler);
app.patch('/users/:id', validate({ body: UpdateUserSchema, params: UserParamsSchema }), updateUserHandler);
app.post('/admin/users', validate({ body: AdminCreateUserSchema }), adminCreateUserHandler);

Transforms and preprocessing: normalizing dirty data

Real-world input is messy. Users paste emails with trailing whitespace, send dates in various formats, capitalize things randomly. Zod transforms let you normalize data during validation, so your handlers always receive clean input:

const CreateUserSchema = z.object({
  username: z
    .string()
    .trim()
    .toLowerCase()
    .min(3)
    .max(20)
    .regex(/^[a-z0-9_]+$/, 'Username must be alphanumeric with underscores'),

  email: z
    .string()
    .trim()
    .toLowerCase()
    .email(),

  displayName: z
    .string()
    .trim()
    .min(1)
    .max(50)
    .transform((name) => {
      // Normalize multiple spaces to single space
      return name.replace(/\s+/g, ' ');
    }),

  birthDate: z
    .string()
    .transform((val) => new Date(val))
    .pipe(z.date().max(new Date(), 'Birth date cannot be in the future')),

  website: z
    .string()
    .trim()
    .url()
    .optional()
    .or(z.literal(''))  // Accept empty string as "no website"
    .transform((val) => val || undefined)  // Normalize empty string to undefined
});

The birthDate example shows a useful pattern: .transform() converts the string to a Date object, and .pipe() feeds the result into a new z.date() validator. The inferred type of birthDate is Date, not string. The transform changes the output type.

Another common pattern, preprocessing for values that arrive in multiple formats:

// Accept boolean as true/false, "true"/"false", 1/0
const flexibleBoolean = z.preprocess((val) => {
  if (typeof val === 'string') {
    return val.toLowerCase() === 'true' || val === '1';
  }
  if (typeof val === 'number') {
    return val !== 0;
  }
  return val;
}, z.boolean());

// Accept tags as either an array or a comma-separated string
const tagsField = z.preprocess((val) => {
  if (typeof val === 'string') {
    return val.split(',').map((s) => s.trim()).filter(Boolean);
  }
  return val;
}, z.array(z.string().min(1)).max(10));

Transforms move data normalization out of your business logic and into your schema. The handler never sees the mess, only the cleaned, typed result.

This is especially useful when multiple endpoints accept the same field. Without transforms, you’d duplicate normalization logic in every handler. With transforms, it lives in the schema once.

Testing validated endpoints

Validation middleware changes how you write tests. You need to test the happy path and the rejection paths, and the rejections are often more interesting because they document your API’s contract:

import request from 'supertest';
import express from 'express';

// Setup: create an app instance for testing
function createTestApp() {
  const app = express();
  app.use(express.json());

  app.post(
    '/users',
    validate({ body: CreateUserSchema }),
    (req, res) => {
      res.status(201).json({ id: 1, ...req.body });
    }
  );

  app.get(
    '/users',
    validate({ query: PaginationSchema }),
    (req, res) => {
      res.json({ users: [], total: 0, ...req.query });
    }
  );

  return app;
}

describe('POST /users', () => {
  const app = createTestApp();

  it('accepts valid input', async () => {
    const response = await request(app)
      .post('/users')
      .send({ username: 'alice', email: 'alice@example.com', age: 25 });

    expect(response.status).toBe(201);
    expect(response.body.username).toBe('alice');
  });

  it('rejects missing required fields', async () => {
    const response = await request(app)
      .post('/users')
      .send({ username: 'alice' }); // Missing email

    expect(response.status).toBe(400);
    expect(response.body.error).toBe('Validation failed');
    expect(response.body.details).toEqual(
      expect.arrayContaining([
        expect.objectContaining({ path: 'email' })
      ])
    );
  });

  it('rejects invalid email format', async () => {
    const response = await request(app)
      .post('/users')
      .send({ username: 'alice', email: 'not-an-email' });

    expect(response.status).toBe(400);
    expect(response.body.details).toEqual(
      expect.arrayContaining([
        expect.objectContaining({ path: 'email', message: expect.stringContaining('email') })
      ])
    );
  });

  it('rejects username that is too short', async () => {
    const response = await request(app)
      .post('/users')
      .send({ username: 'ab', email: 'alice@example.com' });

    expect(response.status).toBe(400);
    expect(response.body.details).toEqual(
      expect.arrayContaining([
        expect.objectContaining({ path: 'username' })
      ])
    );
  });

  it('strips unknown fields', async () => {
    const response = await request(app)
      .post('/users')
      .send({
        username: 'alice',
        email: 'alice@example.com',
        isAdmin: true  // This field isn't in the schema
      });

    expect(response.status).toBe(201);
    expect(response.body.isAdmin).toBeUndefined();
  });
});

describe('GET /users (query validation)', () => {
  const app = createTestApp();

  it('applies default pagination values', async () => {
    const response = await request(app).get('/users');

    expect(response.status).toBe(200);
    expect(response.body.page).toBe(1);
    expect(response.body.limit).toBe(20);
  });

  it('coerces string query params to numbers', async () => {
    const response = await request(app).get('/users?page=3&limit=50');

    expect(response.status).toBe(200);
    expect(response.body.page).toBe(3);
    expect(response.body.limit).toBe(50);
  });

  it('rejects limit above maximum', async () => {
    const response = await request(app).get('/users?limit=500');

    expect(response.status).toBe(400);
  });
});

Notice the “strips unknown fields” test. By default, Zod strips properties that aren’t in the schema. This is a security feature: it prevents clients from injecting fields like isAdmin: true that your handler might accidentally spread into a database query. Test for it explicitly so it doesn’t change without someone noticing.

We also test schemas in isolation, separate from the HTTP layer, for quick feedback:

describe('CreateUserSchema', () => {
  it('transforms email to lowercase', () => {
    const result = CreateUserSchema.parse({
      username: 'alice',
      email: 'Alice@Example.COM'
    });
    expect(result.email).toBe('alice@example.com');
  });

  it('trims whitespace from username', () => {
    const result = CreateUserSchema.parse({
      username: '  alice  ',
      email: 'alice@example.com'
    });
    expect(result.username).toBe('alice');
  });
});

These schema-level tests are fast (no HTTP overhead) and precise. They’re the best way to verify transforms and coercions work correctly.

When to reach for tRPC instead

Everything above assumes a traditional REST (or REST-ish) Express API. But there’s an alternative worth knowing about: tRPC, which gives you end-to-end type safety between a TypeScript backend and a TypeScript frontend, without code generation, without a hand-maintained API client, and without manually keeping request and response types in sync. You still write Zod schemas for inputs; what disappears is the gap between server and client types.

With tRPC, you define procedures on the server:

import { initTRPC } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

const appRouter = t.router({
  getUser: t.procedure
    .input(z.object({ id: z.number() }))
    .query(async ({ input }) => {
      const user = await db.user.findUnique({ where: { id: input.id } });
      return user;
    }),

  createUser: t.procedure
    .input(CreateUserSchema)
    .mutation(async ({ input }) => {
      return await db.user.create({ data: input });
    })
});

export type AppRouter = typeof appRouter;

And on the client, you call them with full type inference. The client knows the input and output types of every procedure without any manual type definitions:

import { createTRPCClient, httpBatchLink } from '@trpc/client';
import type { AppRouter } from '../server/router';

const client = createTRPCClient<AppRouter>({
  links: [httpBatchLink({ url: 'http://localhost:3000/trpc' })],
});

// Fully typed — TypeScript knows this returns a User
const user = await client.getUser.query({ id: 1 });

We use the Zod middleware approach from this post when the API serves non-TypeScript clients (mobile apps, third-party integrations, other languages), when we need standard REST semantics (status codes, content negotiation, caching headers), when we’re adding validation to an existing Express codebase incrementally, or when the API is public-facing.

We use tRPC when both client and server are TypeScript (typically a Next.js or similar full-stack setup), when we control both ends and they’re in the same repo, when we want the fastest iteration speed (change a procedure’s return type and the client gets an immediate type error), and when we don’t need the API to be consumable by non-TypeScript clients.

The tradeoff is flexibility versus safety. Zod middleware gives you a validated REST API that anything can consume. tRPC gives you tighter type safety but ties you to a TypeScript client. In practice, we use both: tRPC for internal full-stack apps where we control the frontend, Zod-validated Express for anything external-facing.

Putting it all together

Here’s what a fully validated endpoint looks like with everything we’ve discussed:

import express from 'express';
import { z } from 'zod';
import { env } from './config'; // Zod-validated env vars

const app = express();
app.use(express.json());

// Schemas
const CreateUserSchema = z.object({
  username: z.string().trim().toLowerCase().min(3).max(20),
  email: z.string().trim().toLowerCase().email(),
  age: z.number().int().min(18).optional()
});

const UserParamsSchema = z.object({
  id: z.coerce.number().int().positive()
});

const PaginationSchema = z.object({
  page: z.coerce.number().int().min(1).default(1),
  limit: z.coerce.number().int().min(1).max(100).default(20)
});

const UserResponseSchema = z.object({
  id: z.number(),
  username: z.string(),
  email: z.string(),
  age: z.number().optional(),
  createdAt: z.string().datetime()
});

const UserListResponseSchema = z.object({
  users: z.array(UserResponseSchema),
  total: z.number(),
  page: z.number(),
  limit: z.number()
});

// Routes
app.post(
  '/users',
  validate({ body: CreateUserSchema }),
  async (req, res) => {
    const user = await userService.create(req.body);
    validatedJson(res, UserResponseSchema, user, 201);
  }
);

app.get(
  '/users',
  validate({ query: PaginationSchema }),
  async (req, res) => {
    const { page, limit } = req.query as z.infer<typeof PaginationSchema>;
    const result = await userService.list(page, limit);
    validatedJson(res, UserListResponseSchema, result);
  }
);

app.get(
  '/users/:id',
  validate({ params: UserParamsSchema }),
  async (req, res) => {
    const { id } = req.params as z.infer<typeof UserParamsSchema>;
    const user = await userService.findById(id);

    if (!user) {
      res.status(404).json({ error: 'User not found' });
      return;
    }

    validatedJson(res, UserResponseSchema, user);
  }
);

app.listen(env.PORT, () => {
  console.log(`Server running on port ${env.PORT}`);
});

By the time any handler runs, we know the input is valid, typed, normalized, and coerced. When the response goes out, we know it matches what our API documentation promises. The entire data flow is validated.

What we gained

After adopting this pattern across our APIs over the past year, we saw concrete improvements:

We haven’t had a type-mismatch bug at an API boundary since adoption. The kind of bug that started this whole post, a string where a number should be, is structurally impossible now.

Handlers got cleaner because there’s no inline validation logic. They read like business logic because that’s all they contain.

API consumers get better error messages. Zod errors include the exact path and reason for every validation failure, which means fewer support tickets about mysterious 400 errors.

New engineers onboard faster. The schemas are self-documenting: you can read one and understand exactly what an endpoint accepts and returns.

Response validation caught at least a dozen cases in staging where a refactor changed a response shape without updating the API contract.

Environment variable validation means misconfigured deployments fail at startup with a clear message, not at 3 AM when a code path finally hits the missing variable.

The tradeoff is a small runtime cost for parsing, but for most APIs it’s negligible compared to the database calls and network I/O that follow. We’ve never seen Zod validation show up as a bottleneck in profiling.

The takeaway

TypeScript gives you confidence inside your application. Zod extends that confidence to every boundary where your code touches the outside world: incoming requests, outgoing responses, environment variables, configuration files. Together, they give you APIs that are actually type-safe, not just at compile time but when real data flows through.

The pattern is straightforward: define schemas, derive types, validate at boundaries, compose schemas for variations, test both valid and invalid paths. The initial investment pays for itself the first time you catch a bug that would have slipped into production and corrupted data for weeks.

Type safety isn’t a feature you add once. It’s a discipline you apply at every boundary where trusted meets untrusted.

Start with request body validation. Then add query parameter coercion. Then response validation in development. Then environment variables. Each layer catches bugs the previous ones miss, and together they let you move fast without worrying that bad data is quietly corrupting your system.