Prisma Advisory Lock CI Fixes (2026)

Akshit Ahuja
Co-Founder & Lead Engineer
If you ship a Next.js app with Prisma, this error will eventually ruin a deploy:
Context: Timed out trying to acquire a postgres advisory lock (SELECT pg_advisory_lock(...)). Timeout: 10000ms.
It shows up right after you get real traffic. A US-based customer imports a CSV. Your worker queue is busy. Someone merges two PRs back-to-back. Boom, pipeline red.
This post is the playbook we use in 2026 to stop it. Not theory. The real causes, the fastest fixes, and when you should actually kill a Postgres backend.
Why Prisma uses advisory locks (and why the 10s timeout is brutal)
Prisma Migrate uses advisory locking so only one migration command runs at a time. That guardrail matters because two concurrent schema changes can corrupt your migration history and leave your schema half-applied.
The catch is the timeout: Prisma’s docs say advisory locking has a 10 second timeout and you cannot tune it.
So your options are:
- make sure only one thing tries to migrate at a time
- make sure migrations run on a direct database connection
- make sure the database is not blocked on long DDL locks
If you treat it like a random flake and just retry, you are going to keep paying for it.
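If your pipeline must react to the failure automatically, make it fail loudly with a checklist instead of retrying. A minimal Node sketch; the helper name and checklist wording are ours, and the match is a heuristic on the error text shown above:

```javascript
// Classify `prisma migrate deploy` stderr so CI fails with a useful
// message instead of blindly retrying. The regex is a heuristic on the
// error text Prisma prints today; adjust it if the wording changes.
const ADVISORY_LOCK_PATTERN =
  /Timed out trying to acquire a postgres advisory lock/i;

function explainMigrateFailure(stderr) {
  if (ADVISORY_LOCK_PATTERN.test(stderr)) {
    return [
      "Advisory lock timeout (10s, not tunable). Check, in order:",
      "1. Is another deploy or migration running right now?",
      "2. Are migrations going through a pooler instead of a direct URL?",
      "3. Is a dead backend still holding the lock? (check pg_locks)",
    ].join("\n");
  }
  return null; // not the advisory lock error; surface stderr as-is
}
```

Print the returned checklist and exit non-zero; a red build with the right three questions beats three green retries that hide the race.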
The long-tail keyword people actually search
Most folks type some version of:
- prisma advisory lock timeout 10000ms
- timed out trying to acquire a postgres advisory lock prisma migrate deploy
- prisma migrate deploy stuck pg_advisory_lock
That is what we are fixing.
The 6 failure modes we keep seeing in 2026
1) Two production deploys are racing
Classic setup:
- GitHub Actions runs on push to main
- you merge PR A
- you merge PR B a minute later
- both runs reach prisma migrate deploy
- one wins the lock
- the other dies at 10 seconds
Fix: enforce one deploy lane.
In GitHub Actions, this is the minimum viable fix:
concurrency:
  group: prod-deploy
  cancel-in-progress: false
That single stanza saves teams weeks of pain.
If you are on Vercel, Render, Fly, Railway, or a self-hosted runner, you still need the same idea: only one migration runner hits prod at a time.
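If your host has no concurrency primitive, you can build the lane yourself with a Postgres advisory lock of your own, using a key you pick (separate from Prisma's). A sketch, assuming `sql` runs queries on a single dedicated direct connection; session-level advisory locks must be released on the same session that took them:

```javascript
// Portable "one deploy lane": take an app-chosen advisory lock around the
// whole deploy. DEPLOY_LOCK_KEY is an arbitrary key of our own invention,
// distinct from Prisma's internal lock key.
const DEPLOY_LOCK_KEY = 555001;

async function withDeployLane(sql, deploy) {
  // pg_try_advisory_lock returns immediately with true/false instead of
  // queueing, so a second deploy aborts fast with a clear message.
  const [{ locked }] = await sql(
    "SELECT pg_try_advisory_lock($1) AS locked", [DEPLOY_LOCK_KEY]
  );
  if (!locked) throw new Error("Another deploy holds the lane; aborting.");
  try {
    return await deploy();
  } finally {
    await sql("SELECT pg_advisory_unlock($1)", [DEPLOY_LOCK_KEY]);
  }
}
```

Wrap the migrate-plus-release steps in `withDeployLane` and the second deploy fails in milliseconds instead of dying at the 10 second Prisma timeout.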
2) You are running migrations through PgBouncer (transaction pooling)
This is the big one for managed Postgres.
Many providers give you a pooler endpoint (often PgBouncer). It is great for app traffic. It is risky for migrations when it uses transaction pooling.
Why it hurts:
- advisory locks are session-scoped in Postgres
- transaction pooling hands each transaction to whatever backend session is free, so consecutive statements may not share a session
- the lock can be taken on one session while later statements run on another, so locks misbehave and migrations cannot reliably finish
Fix: run migrations on a direct connection.
In Prisma, that means using directUrl:
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")
  directUrl = env("DATABASE_DIRECT_URL")
}
Then set:
- DATABASE_URL: your pooler URL (what your app uses)
- DATABASE_DIRECT_URL: your direct Postgres URL (what migrations use)
If your provider only gives you one URL, ask support how to get a non-pooled endpoint. For founders in the US or UK, this is usually a ticket worth filing.
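It is also worth catching a pooled URL in the deploy script before it ever reaches prisma migrate deploy. This is a heuristic sketch: port 6543, a pgbouncer=true param, and a "pooler" hostname are common signals on Supabase-style setups, not universal rules, so extend the list for your provider:

```javascript
// Heuristic check that the URL handed to `prisma migrate deploy` is a
// direct connection, not a pooler endpoint. Signals are provider-specific
// assumptions (common on Supabase-style setups); add your own.
function looksPooled(connectionString) {
  const url = new URL(connectionString);
  if (url.port === "6543") return true;                 // common pooler port
  if (url.searchParams.get("pgbouncer") === "true") return true;
  if (url.hostname.includes("pooler")) return true;     // e.g. *.pooler.* hosts
  return false;
}
```

Run it against `DATABASE_DIRECT_URL` right before the migrate step and fail the build if it returns true; a false negative here costs you nothing, a true positive saves a 2 AM page.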
3) A previous migration process crashed and left a backend holding the lock
In a perfect world, the lock releases when the session ends.
In the real world:
- CI runner gets killed
- node process hangs
- network drops mid-migration
- you end up with a backend still alive and holding the advisory lock
You do not need to restart the whole database to fix this.
Find the lock holder:
SELECT *
FROM pg_locks
WHERE locktype = 'advisory'
  AND objid = 72707369;
Then inspect that pid:
SELECT pid, usename, application_name, state, query_start, query
FROM pg_stat_activity
WHERE pid = <pid>;
If it is clearly stale (old query_start, idle, or a dead app name), terminate it:
SELECT pg_terminate_backend(<pid>);
That drops the lock.
Yes, it feels scary the first time. But it is safer than restarting a shared Postgres cluster.
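Deciding "clearly stale" at 2 AM is easier with the rule written down ahead of time. Here is a sketch of the heuristic we apply to that pg_stat_activity row; the 10-minute threshold and the auto-kill-only-if-idle rule are our own conventions, not Postgres rules:

```javascript
// Decide whether the backend holding the advisory lock looks safe to
// terminate. `row` mirrors the pg_stat_activity columns queried above.
// Thresholds are our own convention; tune them for your workloads.
const STALE_AFTER_MS = 10 * 60 * 1000; // 10 minutes

function isLikelyStale(row, now = Date.now()) {
  const age = now - new Date(row.query_start).getTime();
  const idle = row.state === "idle" || row.state === "idle in transaction";
  // Old AND idle: almost certainly a dead migration runner.
  if (idle && age > STALE_AFTER_MS) return true;
  // Active, or recent: leave it for a human. Never auto-kill these.
  return false;
}
```

Only when this returns true do we reach for pg_terminate_backend; anything active gets a human looking at its query first.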
4) Your migration is blocked on real DDL locks, not Prisma locks
Here is the part most blog posts skip.
Prisma’s advisory lock is just a guardrail. The real reason migrations take forever is usually Postgres DDL locks.
Examples:
- adding a column with a default on a huge table
- changing a column type
- building a non-concurrent index
- dropping a column that is still referenced
If a migration waits on an ACCESS EXCLUSIVE lock while your EU and US traffic keeps queries open, prisma migrate deploy can sit there holding the advisory lock.
Now the next deploy starts, hits the 10 second advisory timeout, and fails.
Fixes that work:
- prefer expand, backfill, contract migrations
- use CREATE INDEX CONCURRENTLY when you can
- split a risky migration into two deploys
- run heavy schema changes off-peak (and yes, pick a window that respects US and EU users)
If you are consistently doing big schema changes at peak traffic, you are gambling with downtime.
5) Your pipeline runs prisma migrate deploy twice
This one is embarrassingly common.
We see teams run migrations:
- once during build (CI)
- again in the container entrypoint on release
- sometimes again in a post-start hook
So your own deploy is fighting itself.
Fix: pick one.
Our stance: run prisma migrate deploy once in CI, right before you switch traffic to the new release. Do not run it in every app pod.
If you have 8 pods starting at the same time and each tries to migrate, you do not have a migration system. You have a thundering herd.
6) Someone disabled advisory locking and now you have silent damage
Prisma supports PRISMA_SCHEMA_DISABLE_ADVISORY_LOCK.
It can get you unblocked in a bad moment.
But leaving it on in production is how you end up with:
- two schema changes applied in the wrong order
- migration history drift
- a weekend spent manually fixing _prisma_migrations
If you are thinking of disabling it because your pipeline flakes, fix the deploy lane and direct connection first.
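One cheap guardrail: have the deploy script refuse to run while the escape hatch is still set. The env var name is Prisma's; the guard itself is our sketch:

```javascript
// Fail fast if someone left Prisma's advisory-lock escape hatch enabled.
// Call this at the top of the deploy script, before `prisma migrate deploy`.
function assertAdvisoryLockEnabled(env = process.env) {
  const value = env.PRISMA_SCHEMA_DISABLE_ADVISORY_LOCK;
  if (value && value !== "0" && value.toLowerCase() !== "false") {
    throw new Error(
      "PRISMA_SCHEMA_DISABLE_ADVISORY_LOCK is set. Remove it: " +
      "concurrent migrations can corrupt your migration history."
    );
  }
}
```

It turns a silent footgun into a loud, greppable build failure the next time the variable leaks from an incident into normal config.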
A production-safe CI setup (the one we keep shipping)
For a typical Next.js + Prisma + Postgres app, here is a boring setup that works:
1) One prod deploy at a time (concurrency)
2) Migrations run on a direct DB URL
3) Exactly one migration step, not repeated elsewhere
4) Smoke test after deploy
Here is a concrete GitHub Actions sketch:
name: Deploy (prod)

on:
  push:
    branches: [main]

concurrency:
  group: prod-deploy
  cancel-in-progress: false

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npm ci
      - run: npm run build
      - name: Migrate (direct DB)
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL_POOLER }}
          DATABASE_DIRECT_URL: ${{ secrets.DATABASE_URL_DIRECT }}
        run: npx prisma migrate deploy
      - name: Deploy app
        run: ./scripts/deploy.sh
You can swap the deploy step for whatever your host uses.
The key parts are:
- concurrency group
- direct DB env vars set for the migration step
A fast precheck to fail with useful logs
Retries hide the real problem. A precheck makes it visible.
Before running prisma migrate deploy, query for sessions that are trying to take advisory locks:
SELECT pid, usename, application_name, state, query_start
FROM pg_stat_activity
WHERE query ILIKE '%pg_advisory_lock%'
ORDER BY query_start DESC;
If you always see a pid sitting there for 5 minutes, you do not have a Prisma problem. You have a stuck process problem.
Also check for long running transactions that can block DDL:
SELECT pid, state, now() - xact_start AS xact_age, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC
LIMIT 10;
That query has saved more deployments than I can count.
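We wire both queries into a precheck step that prints warnings before prisma migrate deploy runs. Here is a sketch of the evaluation side, assuming the rows arrive as plain objects from whatever Postgres client you use, and that xact_age has been converted to seconds (e.g. with EXTRACT(EPOCH FROM ...)); the 300-second threshold is our own default:

```javascript
// Turn precheck query results into warnings CI can print before running
// `prisma migrate deploy`. `lockWaiters` and `oldTransactions` are the
// row sets from the two queries above; row shapes are assumptions.
function evaluatePrecheck(lockWaiters, oldTransactions, maxXactSeconds = 300) {
  const warnings = [];
  for (const row of lockWaiters) {
    warnings.push(
      `pid ${row.pid} (${row.application_name}) waiting on pg_advisory_lock since ${row.query_start}`
    );
  }
  for (const row of oldTransactions) {
    if (row.xact_age_seconds > maxXactSeconds) {
      warnings.push(
        `pid ${row.pid} has held a transaction open for ${row.xact_age_seconds}s; it can block DDL`
      );
    }
  }
  return warnings; // empty array means the precheck passes
}
```

Print every warning and fail the step if the array is non-empty; the log then names the exact pid to investigate instead of a bare 10000ms timeout.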
What to do when it fails at 2 AM (runbook)
If you are on-call and the deploy just failed:
1) Check if another deploy is running right now. If yes, stop and wait.
2) Verify migrations are using a direct DB URL, not the pooler.
3) Look for a stuck advisory lock holder (pg_locks, pg_stat_activity).
4) If it is stale and safe to kill, run pg_terminate_backend(pid).
5) Re-run prisma migrate deploy once.
Do not spam retries. You will just create more overlap.
Cost and time: what this fix usually takes
Real numbers, assuming a small SaaS team:
- Add CI concurrency control: 20 to 40 minutes
- Split DATABASE_URL vs DATABASE_DIRECT_URL: 30 to 90 minutes
- Remove duplicate migration runners: 1 to 3 hours
- Add precheck + better logs: 1 to 2 hours
If your schema changes are risky (type changes, huge tables), budget 1 day to refactor migrations into safer steps.
If you are seeing this weekly, you are already burning more than a day. It just shows up as stress and broken deploys.
The blunt take
If production can run prisma migrate deploy from multiple pods at the same time, your deploy is a dice roll.
Fix the lane. Use a direct DB connection for migrations. Keep migrations small. Then shipping feels normal again.
---
Akshit Ahuja
Co-Founder & Lead Engineer
Backend systems specialist who thrives on building reliable, scalable infrastructure. Akshit handles everything from API design to third-party integrations, ensuring every product HeyDev ships is production-ready.