Supabase RLS + OFFSET Pagination Trap (2026)

Akshit Ahuja
Co-Founder & Lead Engineer
If you are building a multi-tenant SaaS on Supabase, you will eventually hit a weird phase: the database is not on fire, CPU is not pinned, but your app feels slow and inconsistent. Page 1 loads fast. Page 8 takes forever. Your API logs look fine. Your Postgres metrics look fine. Then you turn off RLS in a staging copy and suddenly everything is fast. Magic.
It is not magic. It is a combo of (1) OFFSET pagination, (2) RLS policies that get evaluated per row, and (3) queries that do not filter hard enough. The punchline: OFFSET makes Postgres walk past rows it still has to validate, and RLS makes that validation expensive.
This post is a practical guide to fixing that trap in 2026. Not a generic RLS tutorial. The specific problem is: your Supabase app is using OFFSET or range pagination, and your policies are written in a way that seems correct but turns into row-by-row pain at scale.
The symptom: page 1 is fast, page 20 is cursed
Typical timeline: you ship with something like .range(0, 49). Then later you ship infinite scroll with .range(page*50, page*50+49). It works in dev. It works with 200 rows. It works in prod until it does not.
Why it breaks: OFFSET pagination is not "skip N rows for free". Postgres often has to scan and sort a lot of rows to know which 50 come after the first 950. Even if your query returns 50 rows, the database may still evaluate the policy for thousands.
Supabase's own RLS troubleshooting guide calls this out: queries that use LIMIT and OFFSET usually still need to look at all rows to determine order, so they get hit by RLS overhead too.
Quick gut-check: is it your query, or is it RLS?
Do this in a non-production clone. Run the same query twice: once through your normal client (RLS on), once as service_role (RLS bypass). If the service_role version is much faster, you do not have a Postgres problem. You have an RLS evaluation problem.
Even easier: in a staging DB, temporarily disable RLS on the one table you are testing and rerun EXPLAIN ANALYZE. If timings are similar, fix the query. If timings are wildly different, fix the policy.
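One common pattern for the with/without comparison, from Supabase's own troubleshooting advice, is to impersonate a role directly in SQL. A sketch (table name and UUID are placeholders):

```sql
-- Run 1: impersonate an authenticated user so RLS is enforced
set session role authenticated;
set request.jwt.claims to '{"sub": "11111111-1111-1111-1111-111111111111", "role": "authenticated"}';
explain analyze
  select * from projects order by created_at desc limit 50 offset 950;

-- Run 2: switch to postgres, which owns the tables and is not subject to RLS
set session role postgres;
explain analyze
  select * from projects order by created_at desc limit 50 offset 950;
```

Compare the two plans and total times. If the gap is large, the policy evaluation is your bottleneck, not the base query.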
The 3 RLS mistakes that hurt OFFSET the most
1) No index on the columns used inside policies
The most common policy is ownership: auth.uid() = user_id. If user_id is not indexed, Postgres can end up doing a sequential scan and evaluating the policy as it goes. Supabase docs mention seeing 100x improvements on large tables just by indexing the column used in the policy.
Fix: create an index on every column you compare to auth.uid() (or tenant_id, org_id, team_id). If it is a foreign key, index it anyway. If you are in a multi-tenant schema, index (tenant_id, created_at desc) for the feed queries you actually run.
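As a sketch, assuming a `projects` table with `user_id` and `tenant_id` columns (names are placeholders for your schema):

```sql
-- Index the column the ownership policy compares against auth.uid()
create index if not exists projects_user_id_idx
  on projects (user_id);

-- Composite index matching the tenant feed query: filter on tenant_id,
-- order by recency, so the planner can stop early
create index if not exists projects_tenant_feed_idx
  on projects (tenant_id, created_at desc);
```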
2) Calling auth.uid() (or auth.jwt()) per row
This one is sneaky because it looks harmless. Policy:
using (auth.uid() = user_id)
On big tables, that can behave like: call auth.uid() for every row you touch. Supabase docs recommend wrapping JWT functions inside a SELECT so the planner can treat it as an initPlan and cache it for the query:
using ((select auth.uid()) = user_id)
In their examples, a 100K-row table went from ~171ms to <0.1ms when the policy column was indexed. The bigger point is not the exact number. The point is: avoid doing work per row when the value is constant for the whole query.
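As a minimal sketch on a hypothetical `projects` table, the full policy looks like this; the only change from the slow version is the `(select ...)` wrapper:

```sql
-- The subselect is evaluated once per query as an initPlan,
-- instead of calling auth.uid() for every row the scan touches
create policy "owner can select" on projects
  for select
  using ((select auth.uid()) = user_id);
```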
3) Relying on RLS as your filter
A lot of Supabase code looks like this:
supabase.from('projects').select('*').order('created_at', { ascending: false }).range(950, 999)
And then the policy is: user can only see rows where user_id = auth.uid().
That is secure, but it is a bad habit. RLS is your lock. Your WHERE clause is your steering wheel. Add the filter in the query too:
...eq('user_id', userId)
Supabase docs say this directly: do not rely on RLS for filtering, only for security. An explicit filter lets Postgres use an index and stop scanning far sooner.
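To see why this matters, here is roughly the SQL shape of the two variants (hypothetical table, placeholder UUID). The second shape mirrors the policy in the WHERE clause, so an index can satisfy the filter and the sort together:

```sql
-- Shape 1: RLS is the only thing narrowing the scan; Postgres walks the
-- ordered rows and checks the policy on each one until it has skipped 950
select * from projects
order by created_at desc
limit 50 offset 950;

-- Shape 2: the explicit filter duplicates the policy check, so an index
-- on (user_id, created_at desc) can drive the scan directly
select * from projects
where user_id = '11111111-1111-1111-1111-111111111111'
order by created_at desc
limit 50 offset 950;
```

In supabase-js, shape 2 is just the extra `.eq('user_id', userId)` call shown above.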
Why OFFSET is extra painful in multi-tenant SaaS
In a single-tenant app, your biggest table is still one user's data. In multi-tenant, your biggest table often mixes data from many tenants. OFFSET is basically asking the database to walk past other tenants' rows too, even though they will never be returned. With RLS on, those rows still get checked.
It gets worse when your policy uses EXISTS against a membership table. Example: a user can see rows where row.team_id is in teams they belong to. If that policy is written as a correlated subquery (referencing row.team_id inside the subquery), Postgres can end up re-running that membership lookup for every candidate row.
Supabase docs recommend reversing the join so you compare row columns to a fixed list of allowed IDs, or moving the membership check into a security definer function when it is safe to do so.
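A sketch of the security definer variant, assuming hypothetical `documents` and `team_members` tables and a `private` schema that is not exposed through the API:

```sql
-- The function runs as its owner, so it skips RLS on team_members
-- instead of cascading policy checks onto the join table
create or replace function private.user_team_ids()
returns setof uuid
language sql
security definer
stable
set search_path = ''
as $$
  select team_id
  from public.team_members
  where user_id = (select auth.uid())
$$;

create policy "team members can select" on public.documents
  for select
  using (team_id in (select private.user_team_ids()));
```

The row column (`team_id`) is now compared against a fixed set of allowed IDs, computed once, rather than driving a per-row correlated lookup.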
The fix that actually sticks: switch to keyset pagination
If you only change one thing after reading this, change pagination. OFFSET is fine for admin dashboards and jump-to-page tables with small data. It is not fine for activity feeds, logs, chat messages, or anything that grows.
Keyset pagination (also called cursor pagination) means you page by a stable sort key. Usually created_at + id. Instead of "skip 950", you ask for "the next 50 rows before this timestamp".
Example pattern:
Page 1: order by created_at desc, id desc, limit 50
Next page: add lt('created_at', lastCreatedAt) OR (created_at = lastCreatedAt AND id < lastId)
That keeps the scan tight. Postgres can use an index like (tenant_id, created_at desc, id desc) and stop early. RLS still applies, but it applies to far fewer candidate rows.
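The cursor logic above is easy to get wrong at the tie-breaking step, so here is a minimal sketch of the keyset predicate in plain TypeScript that you can unit-test without a database. Field names are hypothetical; rows are assumed pre-sorted by (created_at desc, id desc), as the query would return them:

```typescript
type Row = { id: number; created_at: string };
type Cursor = { created_at: string; id: number };

// Mirrors the SQL predicate:
//   created_at < $lastCreatedAt
//   OR (created_at = $lastCreatedAt AND id < $lastId)
// ISO-8601 timestamps compare lexicographically, so string < works here.
function nextPage(rows: Row[], cursor: Cursor | null, limit: number): Row[] {
  const candidates = cursor
    ? rows.filter(
        (r) =>
          r.created_at < cursor.created_at ||
          (r.created_at === cursor.created_at && r.id < cursor.id)
      )
    : rows;
  return candidates.slice(0, limit);
}
```

With supabase-js, the same predicate can be expressed with an `or` filter, roughly `.or(`created_at.lt.${c.created_at},and(created_at.eq.${c.created_at},id.lt.${c.id})`)`, plus the same `order` and a plain `limit` instead of `range`.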
A concrete rescue plan (what we do on client codebases)
Here is the playbook we use when a Supabase app hits the page-12-is-slow phase. For US and UK founders, this is usually a 1 to 3 day fix, not a rewrite.
Step 1: Identify the slow query path
Pick one endpoint. Capture the exact SQL shape. If you are using PostgREST via the client, log the filters, order, and range. Make a minimal reproduction query.
Step 2: Measure with and without RLS
Run EXPLAIN ANALYZE as authenticated (RLS enforced) and as postgres or service_role (RLS bypass). If the RLS version is 5x slower, you know where to focus.
Step 3: Fix policy hotspots
Checklist:
- Index every column referenced in USING and WITH CHECK
- Wrap auth.uid() and auth.jwt() calls: (select auth.uid())
- If you do membership checks, avoid correlated subqueries where possible
- Consider a security definer helper function for membership checks so you do not cascade RLS onto join tables
Step 4: Add explicit filters in queries
Even with perfect RLS, do not do broad selects and hope the policy filters. If you know the tenant_id, filter tenant_id. If you know the owner_id, filter owner_id. Make the query selective.
Step 5: Replace OFFSET with keyset pagination
This is where most teams hesitate because it touches frontend UX. But it is usually painless. Infinite scroll becomes simpler. Load more becomes stable. And the DB cost becomes predictable.
How much does this kind of fix cost in 2026?
Rough numbers from projects we see (US and Europe):
- Small app (2 to 4 tables affected, simple ownership policies): 8 to 16 engineer-hours
- Multi-tenant app (membership tables, feeds, lots of joins): 20 to 40 engineer-hours
- Big refactor (fix schema + pagination + add tests): 40 to 80 engineer-hours
At US rates ($120 to $200/hr), that works out to roughly $1,000 to $3,200 for the small case and up to about $16,000 for the big refactor. It is still cheaper than scaling your database tier to hide a policy problem. Throwing hardware at OFFSET + RLS is the most expensive kind of denial.
Testing: stop shipping policy regressions
RLS bugs are special because they ship quietly. One policy change can turn a 30ms query into a 3s query, and nobody notices until traffic is up. Add tests that run key queries as an authenticated role and assert timing budgets in CI for a representative dataset.
At minimum, add regression tests for the policy shape: make sure auth.uid() is wrapped and indexes exist. Supabase's advisors in the dashboard can catch missing indexes and common mistakes, but you should still treat RLS changes like you treat migrations: review them like you mean it.
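One lightweight way to enforce a timing budget in CI is a small assertion helper like the sketch below (helper name and budgets are made up). In CI you would run key queries as an authenticated role against a seeded staging database and fail the build when a policy change blows the budget:

```typescript
// Runs fn, measures wall-clock time, and throws if it exceeds the budget.
async function assertWithinBudget<T>(
  label: string,
  budgetMs: number,
  fn: () => Promise<T>
): Promise<T> {
  const start = performance.now();
  const result = await fn();
  const elapsedMs = performance.now() - start;
  if (elapsedMs > budgetMs) {
    throw new Error(`${label}: ${elapsedMs.toFixed(1)}ms exceeded ${budgetMs}ms budget`);
  }
  return result;
}
```

In a test you would pass the real query, e.g. `await assertWithinBudget("projects feed p1", 100, () => fetchProjectsPage(cursor))`, with `fetchProjectsPage` being whatever wrapper your app uses (client setup omitted here).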
If you only remember 4 rules
1) OFFSET pagination is a tax that grows with your table.
2) RLS is not your WHERE clause. Add real filters.
3) Index policy columns and cache auth.uid() with (select auth.uid()).
4) Use keyset pagination for anything that grows.
If you are stuck in the trap right now, you are not alone. This is one of the most common vibe-coded production issues we see in Supabase apps: everything works until it gets real users. The fix is boring SQL and a small pagination change. Boring is good.
If you want a second pair of eyes on your RLS policies and the query plan, that is exactly the kind of rescue we do at HeyDev.
---
Backend systems specialist who thrives on building reliable, scalable infrastructure. Akshit handles everything from API design to third-party integrations, ensuring every product HeyDev ships is production-ready.