R2 Presigned Upload 403 Fix (Next.js 2026)

Akshit Ahuja
Co-Founder & Lead Engineer
You generated a Cloudflare R2 presigned URL. You hit it with curl. It works. Then your Next.js app tries the same upload from the browser and you get a 403. Welcome to the most annoying class of bugs: the URL is valid, but your request is not the same request you signed.
This post is a field guide for fixing R2 presigned upload 403s in a Next.js App Router app deployed on Vercel in 2026. Not theory. The boring stuff is what breaks you: CORS, signed headers, and tiny header mismatches.
If you are a US-based founder or a dev in the UK/EU shipping a consumer app with avatars, receipts, or video clips, this is the pattern you want: your API mints a short-lived upload URL, the browser uploads straight to R2, your app never sees the file bytes.
The symptom: 403 from R2, but curl works
Typical reports look like this:
- Browser upload: 403 Forbidden, sometimes with SignatureDoesNotMatch
- curl upload to the same URL: 200 OK
- Everything works locally, but breaks on Vercel
- Or it works for small files, then randomly breaks
R2 is S3-compatible, so the same root causes from S3 show up here. AWS even calls out the big one: if you sign a header (like Content-Type), your request must send that exact header value, byte-for-byte, or the signature check fails and you get a 403.
First: confirm what kind of 403 you are getting
Before you change code, figure out whether you are fighting CORS or signature validation.
A. It is CORS (browser blocks you)
You will see an error in DevTools that mentions CORS, preflight, or something like: No 'Access-Control-Allow-Origin'. Even if the presigned URL is perfect, the browser will not send the upload unless the bucket allows it.
B. It is signature validation (R2 rejects you)
The request is actually sent, you get a 403 response, and the response body often includes an S3-ish error like SignatureDoesNotMatch or AccessDenied.
These two are different fixes. People mix them up and lose hours.
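To tell the two apart programmatically, you can pull the S3-style error code out of the XML body that R2 returns on a rejected request. A minimal sketch; the helper name and regex-based parse are mine, not an official API:

```typescript
// Extract the S3-style <Code> element from an R2/S3 error body for logging.
// Illustrative helper, not an official SDK function.
function s3ErrorCode(xmlBody: string): string | null {
  const match = xmlBody.match(/<Code>([^<]+)<\/Code>/)
  return match ? match[1] : null
}
```

If this returns SignatureDoesNotMatch or AccessDenied, you are in case B. If the request never produced a response body at all because the browser blocked it, you are in case A.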
The mental model that stops the pain
A presigned URL does not say 'upload this object'. It says 'upload this object with these exact request details'.
Those request details include:
- HTTP method (PUT vs POST vs GET)
- Path (bucket + key)
- Query params (the signature itself)
- And the killer: any headers that were part of the signature
If your browser request differs from what you signed, R2 is right to reject it.
The 2026 trap: Content-Type mismatch
Cloudflare's own docs mention a best practice that can backfire if you copy it blindly: signing ContentType so uploads are restricted to a file type. That is fine in server-to-server. In browsers, it's a minefield.
Here is the most common failure pattern:
1) Your API signs a PUT URL with ContentType = image/png
2) The frontend uploads a File that the browser labels as image/png; charset=utf-8 (or similar)
3) R2 compares the signed header with the actual header and returns 403
AWS describes the same rule for S3: if you used a ContentType parameter while signing, you must send a matching Content-Type header with the upload. Mismatch equals SignatureDoesNotMatch and 403.
My stance
If you are doing browser uploads to R2, do not sign Content-Type unless you are 100% sure the client will send the exact same value every time. Most teams are not that sure.
Instead, validate file type and size in your own API before minting the URL. Then store the expected mime in your DB and validate later.
Fix #1: get CORS right (or nothing else matters)
R2 bucket CORS has to allow your app origin and the method you use (usually PUT). If you run a separate marketing domain, remember that https://app.example.com and https://example.com are different origins.
A pragmatic policy for production looks like:
AllowedOrigins: ["https://app.yourdomain.com"]
AllowedMethods: ["PUT","GET","HEAD"]
AllowedHeaders: ["*"]
ExposeHeaders: ["ETag"]
MaxAgeSeconds: 3600
During dev you can temporarily use AllowedOrigins: ['*'], but lock it down before you ship.
One more gotcha: browsers send a preflight OPTIONS for many PUT requests. Your CORS config has to make that succeed too.
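As a sketch, here is that same policy shaped as the CORSRules payload the S3-compatible PutBucketCors API expects. The origin is a placeholder; rule names follow the standard S3 CORS schema:

```typescript
// The production policy above, expressed as S3-style CORS rules.
// Origin is a placeholder for your real app domain.
const corsRules = [
  {
    AllowedOrigins: ['https://app.yourdomain.com'],
    AllowedMethods: ['PUT', 'GET', 'HEAD'],
    AllowedHeaders: ['*'],
    ExposeHeaders: ['ETag'],
    MaxAgeSeconds: 3600,
  },
]
// Apply with @aws-sdk/client-s3's PutBucketCorsCommand
// ({ Bucket, CORSConfiguration: { CORSRules: corsRules } }) against your R2
// endpoint, or set the equivalent rules in the Cloudflare dashboard.
```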
Fix #2: stop manually setting headers in fetch
A lot of example code does this:
await fetch(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': file.type },
  body: file,
})

If your presigned URL was created with that header in the signed headers list, you now need an exact match. And exact is brutal.
Try the boring version first:
await fetch(uploadUrl, {
  method: 'PUT',
  body: file,
})
Let the browser handle the headers. Less surface area for signature mismatch.
Fix #3: decide between S3 SDK vs Workers signing
In 2026, teams mix runtimes. Your signing code might run in:
- Node.js (Next.js route handler on Vercel Serverless)
- Edge runtime (Vercel Edge, Cloudflare Workers)
The AWS SDK v3 works fine in Node, but can break in Workers-like runtimes. People switch to aws4fetch or similar libs when they generate URLs in a Worker.
Pick one and be consistent. Mixing two signing implementations across environments is a great way to ship 'works on my machine' bugs.
Node (Vercel Serverless) signing: stable
Use @aws-sdk/client-s3 + @aws-sdk/s3-request-presigner against the R2 endpoint.
Workers signing: different rules
If you are signing inside a Worker using aws4fetch with signQuery: true, pay attention to what headers are being signed. Some setups only sign host, so adding headers from the browser can make R2 reject the request.
A solid Next.js App Router implementation (2026)
This is a pattern we ship for client apps. It avoids the Content-Type trap while still keeping control.
1) Your API route mints a URL
// app/api/uploads/presign/route.ts
import { NextResponse } from 'next/server'
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
import { randomUUID } from 'crypto'

export const runtime = 'nodejs'

const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
})

export async function POST(req: Request) {
  const { filename, size, mime } = await req.json()

  if (!filename || typeof filename !== 'string') {
    return NextResponse.json({ error: 'bad filename' }, { status: 400 })
  }
  if (typeof size !== 'number' || size <= 0 || size > 10 * 1024 * 1024) {
    return NextResponse.json({ error: 'bad size' }, { status: 400 })
  }
  const allowed = new Set(['image/png', 'image/jpeg', 'application/pdf'])
  if (!allowed.has(mime)) {
    return NextResponse.json({ error: 'file type not allowed' }, { status: 400 })
  }

  const key = `uploads/${randomUUID()}-${filename}`
  const cmd = new PutObjectCommand({
    Bucket: process.env.R2_BUCKET_NAME!,
    Key: key,
    // NOTE: do NOT set ContentType here unless your browser will match it exactly.
  })
  const uploadUrl = await getSignedUrl(s3, cmd, { expiresIn: 60 * 10 })

  return NextResponse.json({ key, uploadUrl })
}
2) Your client uploads the bytes
// client-side
const res = await fetch('/api/uploads/presign', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    filename: file.name,
    size: file.size,
    mime: file.type,
  }),
})
const { uploadUrl, key } = await res.json()

const put = await fetch(uploadUrl, {
  method: 'PUT',
  body: file,
})
if (!put.ok) {
  const text = await put.text().catch(() => '')
  throw new Error(`R2 upload failed ${put.status}: ${text}`)
}
// persist key in your DB
Debug checklist when 403 still happens
When this still fails, do not guess. Check these in order:
1) Method: are you signing PUT but sending POST?
2) URL: did you accidentally encode the key twice?
3) Expiration: is the client clock way off?
4) Headers: are you sending any custom headers (Content-Type, x-amz-meta-*, cache-control)? If yes, are they signed?
5) CORS: does the preflight succeed?
6) Vercel rewrites: are you proxying the upload through your domain by mistake? You should hit R2 directly.
AWS's own guidance for SignatureDoesNotMatch is basically this list: validate the HTTP action, validate headers, verify bucket/key, and make sure system time is sane.
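A quick way to run check #4 is to inspect the URL itself: SigV4 presigned URLs carry an X-Amz-SignedHeaders query param listing exactly which headers were signed. A small helper (the function name is mine):

```typescript
// Return the headers a SigV4 presigned URL was signed with.
// X-Amz-SignedHeaders is a semicolon-separated, lowercase list.
function signedHeadersOf(presignedUrl: string): string[] {
  const url = new URL(presignedUrl)
  const raw = url.searchParams.get('X-Amz-SignedHeaders') ?? ''
  return raw.split(';').filter(Boolean)
}
```

If 'content-type' shows up in the list, the browser has to send the exact signed value. If only 'host' is there, stop setting extra headers in fetch.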
What we do in real client rescues
When a startup comes to us with 'R2 uploads are flaky', 8 times out of 10 it's one of these:
- They signed Content-Type then set a different Content-Type in fetch
- CORS only allows GET, not PUT
- The frontend sends x-amz-meta-* headers that were never signed
- They generate URLs in an edge runtime that signs only host, then they add headers later
The fix is almost always to reduce variables: stop signing headers you do not fully control, stop setting headers you do not need, and lock CORS to your real origin.
Costs and why this pattern is worth it
This is why founders in the US/UK like direct-to-R2 uploads:
- Your serverless function stops being a bottleneck. Upload bandwidth does not go through your app.
- Your API stays cheap. In 2026, that can be the difference between a $40/month and a $400/month baseline bill.
- No egress fees from R2 helps when users download a lot.
The tradeoff is complexity. But it's manageable if you treat signing as a strict contract.
Bottom line
A presigned URL is a signed recipe for one exact request.
Make the browser request match the signed request. Keep headers boring. Configure CORS like you mean it. Your 403s will disappear.

Akshit Ahuja
Co-Founder & Lead Engineer
Backend systems specialist who thrives on building reliable, scalable infrastructure. Akshit handles everything from API design to third-party integrations, ensuring every product HeyDev ships is production-ready.