Fix Prisma Migrate PgBouncer 's0' Error (2026)

Akshit Ahuja

Co-Founder & Lead Engineer

February 6, 2026
#Prisma #PgBouncer #Postgres #Neon #Vercel #Migrations #Serverless

If you are shipping a Postgres app on Vercel, Cloudflare Workers + a DB, AWS Lambda, or any other serverless-ish setup, you probably got told to use a pooler. Neon, Supabase, and a bunch of hosted Postgres vendors run PgBouncer for you. Great. Then you run prisma migrate deploy and it explodes with: ERROR: prepared statement "s0" already exists.

This post is the playbook we use when we rescue apps that hit this exact wall. It is not about generic connection pooling. It is about why Prisma Migrate and transaction pooling fight, what the error actually means, and the least painful fix in 2026.

The bug report you are really looking at (2026)

That "prepared statement s0" error is usually not your schema. It is your connection mode.

Prisma Migrate (the CLI) uses the Prisma Schema Engine. It expects one stable database session while it checks migrations, reads the migrations table, and runs transactional SQL. PgBouncer in transaction pooling mode does not promise a stable session. It hands you a backend connection only for the life of a transaction, then it can swap it out.

Some Postgres features are session-scoped, and PgBouncer's docs call these out explicitly. In transaction pooling mode it breaks a bunch of session-based behavior by design. It can support protocol-level prepared statements when max_prepared_statements is set above 0, but the details matter.

Why this happens with PgBouncer transaction pooling

Transaction pooling swaps sessions

In transaction pooling mode, PgBouncer gives you a server connection only while a transaction is running. When the transaction ends, PgBouncer can return that server connection to the pool. Next transaction, you might get a different server connection.

If your client assumes it is talking to one session, it can get confused fast.

Prepared statements collide

Prepared statements live on the server connection, and Prisma's engine names them s0, s1, and so on per client session. Under transaction pooling, a transaction can land on a backend connection where a statement named "s0" already exists from another client (or an earlier run), and Postgres rejects the duplicate with "already exists".

This shows up a lot during migrations because the engine tends to do repeated prepares and replays while also using transactions.
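A toy simulation (ours, not PgBouncer's actual code) makes the bookkeeping concrete: statement names live on the server connection, so two clients multiplexed onto the same backend can fight over the name s0:

```typescript
// Each "backend" tracks its own prepared-statement names, the way a
// real Postgres server connection does.
type Backend = { id: number; prepared: Set<string> };

function prepare(backend: Backend, name: string): string {
  // Postgres rejects a duplicate statement name on the same server connection.
  if (backend.prepared.has(name)) {
    return `ERROR: prepared statement "${name}" already exists`;
  }
  backend.prepared.add(name);
  return "PREPARE ok";
}

// Two backend connections sitting in the pool.
const pool: Backend[] = [
  { id: 1, prepared: new Set() },
  { id: 2, prepared: new Set() },
];

// Client A's first transaction lands on backend 1 and prepares "s0".
const first = prepare(pool[0], "s0");

// Under transaction pooling, a later transaction from another client can
// also be routed to backend 1 and try to prepare its own "s0" -- collision.
const second = prepare(pool[0], "s0");

console.log(first);  // PREPARE ok
console.log(second); // ERROR: prepared statement "s0" already exists
```

Real PgBouncer does more (it can rewrite statement names when max_prepared_statements is on), but when that tracking is off or misconfigured, this is the collision you see.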

The most common real world setup that triggers it

Here is the pattern we see in US and UK startup codebases:

1) App runtime uses a pooled DATABASE_URL (Neon pooled host, Supabase pooler on port 6543, or a managed PgBouncer).

2) Someone points the Prisma CLI at that same URL.

3) prisma migrate deploy runs in CI.

4) It fails at random, often only in CI, because of pooling behavior.

The fix is boring but works: stop running Prisma CLI through the pooler.

The fix that actually works: split DATABASE_URL and DIRECT_URL

You want two URLs:

- DATABASE_URL: goes through PgBouncer (pooled). Use this for runtime traffic.

- DIRECT_URL: connects straight to Postgres (not pooled). Use this for Prisma CLI and migrations.

Prisma documents this workaround for PgBouncer, because Prisma Migrate does not support connection pooling through PgBouncer. The Schema Engine wants a direct connection.

Example .env

DATABASE_URL=postgres://USER:PASSWORD@POOLED_HOST:PORT/DATABASE

DIRECT_URL=postgres://USER:PASSWORD@DIRECT_HOST:5432/DATABASE

Example prisma.config.ts (Prisma ORM v7 style)

In Prisma ORM v7, the docs show using prisma.config.ts to point CLI to DIRECT_URL, while Prisma Client uses the pooled URL via a driver adapter.

If you are still on older Prisma, the same idea applies. The key is: migrations use DIRECT_URL.
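The v7 config file layout shifts between releases, so treat any snippet as version-specific. The long-documented classic form of the same split lives in schema.prisma, via the directUrl field on the datasource:

```prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")  // pooled, used by Prisma Client at runtime
  directUrl = env("DIRECT_URL")    // direct, used by the Prisma CLI and Migrate
}
```

With this in place, prisma migrate deploy automatically uses DIRECT_URL while your app keeps using the pooled URL.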

Neon numbers you should know (because they change your plan)

Neon uses PgBouncer to allow up to 10,000 client connections, but that is not 10,000 queries at once. The actual Postgres max_connections depends on compute size. Neon documents 0.25 CU (1 GB RAM) at 104 max_connections, 1 CU (4 GB) at 419, and 2 CU (8 GB) at 839.

Neon also reserves 7 connections for the superuser, so your app gets less than the headline number. On the tiny 0.25 CU tier, that means 97 connections left for you.

Why does this matter? Because it explains why founders reach for pooling early, then accidentally route migrations through the pooler and hit the s0 error.

PgBouncer settings that trip people up

Transaction mode is required for most serverless pooling

Prisma says PgBouncer must run in transaction mode for Prisma Client to work reliably. That is true for runtime.

max_prepared_statements should not be zero

PgBouncer can support protocol-level prepared plans in transaction pooling mode if max_prepared_statements is set to a non-zero value. Some hosted poolers set it for you, some do not. If it is zero, you can get weird prepared statement behavior.

The pgbouncer=true flag is version sensitive

Prisma also has a pgbouncer=true connection string flag. But the docs warn not to set pgbouncer=true if you are on PgBouncer 1.21.0 or later. For older PgBouncer, you may need it.
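If your PgBouncer is older than 1.21.0 and the docs point you at the flag, it goes on the pooled URL only, using the same placeholders as the .env example earlier:

DATABASE_URL=postgres://USER:PASSWORD@POOLED_HOST:PORT/DATABASE?pgbouncer=true

DIRECT_URL stays unchanged; the flag only affects how Prisma Client talks through the pooler.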

How to debug it in 30 minutes

When a team pings us with this error, we ask three questions:

1) Are you running prisma migrate through a pooled host or a pooler port? If yes, stop.

2) What pooling mode are you in? Transaction pooling is the usual culprit.

3) Are you creating a new PrismaClient per request in a serverless function? If yes, you are also wasting connections and making failures more likely.

Fast checks

- Look at your DATABASE_URL host and port. Supabase pooler is commonly on 6543. If your CI uses that for migrate, that is your bug.

- If you use Neon, check if you are using the pooled connection string vs the direct one.

- In CI logs, print which URL the Prisma CLI is actually reading (with secrets masked). Teams often assume it uses DIRECT_URL when the config really points at the pooled URL.
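For the masked-logging check, a tiny helper (the function name is ours) using Node's built-in WHATWG URL parser:

```typescript
// Print which database host/port a connection string points at,
// without leaking credentials.
function describeDbUrl(raw: string): string {
  const url = new URL(raw);
  // Non-special schemes like postgres:// still get authority parsing,
  // so hostname and port come out cleanly. Default Postgres port is 5432.
  return `db host=${url.hostname} port=${url.port || "5432"}`;
}

console.log(describeDbUrl("postgres://user:secret@pooled.example.com:6543/app"));
// db host=pooled.example.com port=6543
```

Drop that into your CI step before prisma migrate deploy and the logs will tell you instantly whether you are on the pooler.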

What to do on Vercel in 2026

Vercel moved Vercel Postgres to Neon back in 2024, so a lot of Vercel apps are really Neon apps now. That usually means you have both a pooled and a direct string available.

In Vercel, set DATABASE_URL to the pooled string for runtime. Set DIRECT_URL to the direct Postgres connection. Then make your migration step explicitly use DIRECT_URL.

If you are using Vercel Serverless Functions, do not new PrismaClient inside the handler for every request. Make it a module level singleton. Yes, even in serverless. The runtime reuses the module between invocations when it can, and you will open fewer pools.
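A minimal sketch of that singleton, with a stand-in class so it runs anywhere; in a real app the stand-in is PrismaClient:

```typescript
// Stand-in for PrismaClient so this sketch is self-contained.
class FakeClient {
  readonly createdAt = Date.now();
}

// Cache the client on globalThis so warm serverless invocations
// (which reuse the module) share one client and one pool.
const globalForPrisma = globalThis as unknown as { prismaClient?: FakeClient };

function getClient(): FakeClient {
  if (!globalForPrisma.prismaClient) {
    globalForPrisma.prismaClient = new FakeClient();
  }
  return globalForPrisma.prismaClient;
}
```

Because the client hangs off globalThis, dev-server hot reloads and warm lambda invocations both reuse the same instance instead of opening a fresh pool each time.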

Prisma Client connection pooling gotcha in v7

Prisma ORM v7 changed some defaults because it now uses driver adapters by default for relational databases. That means pool defaults come from the underlying driver. Prisma documents that the pg driver adapter defaults to max 10 connections and idle timeout 10 seconds.

If you scaled a function fleet and assumed Prisma would keep a huge pool, it will not. You get lots of small pools, one per instance, unless you put a pooler in front.

Costs and timelines (real rescue numbers)

Here is what this usually costs when a founder hires us to fix it:

- Simple fix (CI uses pooled URL): 1 to 2 hours. $250 to $600 if you hire a contractor, $0 if you do it yourself.

- Mixed setup (Supabase pooler plus local dev plus multiple envs): 4 to 8 hours. $800 to $2,400.

- Full serverless cleanup (PrismaClient lifecycle bugs, connection storms, pool sizing, plus observability): 2 to 4 days. $3,500 to $9,000 depending on scope.

Hot take: if your startup is blocked on migrations because of pooling, your infra is too clever for your stage. Keep pooling for runtime, but keep migrations dumb and direct.

A checklist you can paste into your repo

- Create DIRECT_URL and use it only for prisma migrate and prisma db push.

- Keep DATABASE_URL pooled for runtime only.

- If you run PgBouncer yourself, confirm transaction pooling and max_prepared_statements > 0.

- Watch for pgbouncer=true. Use it only when you know your PgBouncer version and Prisma docs suggest it.

- In serverless, keep PrismaClient as a singleton and avoid creating a new instance per request.

- Add a CI step that prints which host Prisma CLI is using (mask secrets) so this never regresses.

When you should not use PgBouncer

If you are on a single VM or a simple container setup with a handful of instances, you probably do not need PgBouncer at all. Prisma Client already pools. The pooler is for when you have lots of app instances, lots of short-lived connections, or a hard max_connections ceiling.

Poolers add failure modes. This error is one of them.

If you want this to never happen again (CI guardrails)

Make it a rule: migrations never use the pooled host. Add a tiny script in CI that parses your DATABASE_URL host and fails if it matches your pooler host or a known pooler port like 6543. It sounds silly, but it saves you from mystery failures during a hotfix deploy.
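A sketch of that guard in TypeScript (the pooler host fragment and port here are example values; swap in your provider's):

```typescript
// Fail CI if the URL the migrate step will use points at a pooler.
// POOLER_HINTS is an example list -- fill in your provider's pooled
// host fragment (Neon pooled hosts contain "-pooler.") and port.
const POOLER_HINTS = { hostFragments: ["-pooler."], ports: ["6543"] };

function isPooledUrl(raw: string): boolean {
  const url = new URL(raw);
  return (
    POOLER_HINTS.hostFragments.some((frag) => url.hostname.includes(frag)) ||
    POOLER_HINTS.ports.includes(url.port)
  );
}

// In CI you would read the real URL from the environment and exit
// non-zero when this returns true.
console.log(isPooledUrl("postgres://u:p@ep-shy-sun-pooler.neon.tech:5432/app")); // true
console.log(isPooledUrl("postgres://u:p@db.internal.example.com:6543/app"));     // true
console.log(isPooledUrl("postgres://u:p@db.internal.example.com:5432/app"));     // false
```

Wire it up as a one-line CI step right before prisma migrate deploy and the mystery failures stop regressing.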

Also, log the Prisma version and PgBouncer mode in your runbooks. People upgrade Prisma or switch providers, then keep stale flags like pgbouncer=true and wonder why behavior changes. Keep it written down.

Wrap up

The s0 prepared statement error is a symptom of running Prisma Migrate through a pooler. Fix it by splitting runtime and migration connections. Your future self will thank you the next time you do an emergency deploy at 2 AM.
