R2 Presigned PUT 403: Hidden Header Traps (2026)

Akshit Ahuja
Co-Founder & Lead Engineer
If you have ever generated a Cloudflare R2 presigned PUT URL, pasted it into your app, hit upload, and got a 403, you are in the club.
The annoying part is how inconsistent it feels. It works with curl, then fails in Chrome. It works in Postman, then fails in your Next.js frontend. Same URL. Same file. Different outcome.
In 2026 most teams ship the same upload flow: API route mints a presigned URL, browser uploads straight to object storage, app stores the key. It is a good pattern. It also has sharp edges.
This post is not a basic presigned URL tutorial. It is a field guide for the specific, boring reasons R2 returns 403 for presigned PUT uploads, plus the exact checks I run so I do not waste half a day.
Context: we have rescued a few apps where file uploads were 20 to 40 percent of support tickets. It was never the storage provider. It was always tiny request drift.
The core rule: the signature is a receipt
A presigned URL is not a password. It is a signed receipt for one exact request.
If the request you send is not the request you signed, R2 should reject it. That is the whole point.
Cloudflare’s docs spell out the most common footgun: if you set ContentType when signing, the Content-Type header becomes part of the signature, so the upload fails with a 403 (often SignatureDoesNotMatch) when the client sends a different Content-Type.
Once you internalize this, the mystery goes away. Your job becomes simple: make the request match the receipt.
Why this bites Next.js teams in particular
Next.js apps often have a split brain:
- Server route runs in Node (or Edge), signs the URL.
- Client runs in the browser, uploads the file.
Every time you cross that boundary, something tries to be helpful. A fetch wrapper adds headers. A reverse proxy normalizes them. A CDN injects something. A library swaps the body encoding.
So you can do everything right in the server code and still ship a broken client request.
Symptom-driven debugging (what I check in order)
When I see a 403 on a presigned PUT, I do not guess. I run this checklist.
1) Compare signed headers vs headers your client actually sends
Open the presigned URL and find X-Amz-SignedHeaders. That list is the contract.
If it only says host, then only Host is part of the signature. If it includes content-type, then your Content-Type must match exactly.
Exact means exact. Not close. Not the same MIME type with a charset tacked on. Not whatever the browser picked. Exact.
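A quick way to read the contract is to parse it out of the URL itself. A minimal sketch using the standard URL API (the function name and sample URL are illustrative):

```typescript
// Extract the signed-header contract from a presigned URL.
// X-Amz-SignedHeaders is a semicolon-separated, lowercase list.
function signedHeaders(presignedUrl: string): string[] {
  const params = new URL(presignedUrl).searchParams;
  const raw = params.get("X-Amz-SignedHeaders") ?? "";
  return raw.split(";").filter(Boolean);
}

const sampleUrl =
  "https://bucket.account.r2.cloudflarestorage.com/key" +
  "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-SignedHeaders=host%3Bcontent-type";
console.log(signedHeaders(sampleUrl)); // → [ 'host', 'content-type' ]
```

Run this against the URL your server mints; every name in the returned list must appear in your client request with exactly the signed value.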
2) Content-Type mismatch (the most common facepalm)
There are three versions of this bug I see constantly:
- You sign ContentType as image/jpeg but the browser sends image/jpeg; charset=utf-8 (some libs do this).
- You sign ContentType as application/pdf but your code sends multipart/form-data because you used FormData.
- You do not sign ContentType, but your bucket CORS policy does not allow the Content-Type header, so the browser preflight fails or the PUT is blocked.
My stance: either sign ContentType and force the client to send that exact value, or do not sign it and validate the file another way. Half-doing it creates production mystery bugs.
3) CORS: the URL can be valid and still fail in the browser
A presigned URL handles auth. It does not bypass the browser’s CORS rules.
Cloudflare’s R2 CORS docs say it plainly: without a CORS policy, browser-based uploads and downloads using presigned URLs will fail even though the presigned URL itself is valid.
Minimum CORS for browser PUT uploads:
- AllowedOrigins: your app origin (not *)
- AllowedMethods: PUT
- AllowedHeaders: Content-Type (and anything else you send)
- ExposeHeaders: ETag (handy for confirming uploads)
One detail people miss: preflight checks the exact headers you plan to send. If your fetch includes x-amz-checksum-sha256, you must allow it. If your client includes x-amz-meta-foo, you must allow it. CORS is strict. That is good.
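Concretely, a minimal R2 CORS policy for browser PUT uploads might look like the following (the origin is a placeholder; add any x-amz-* headers your client actually sends to AllowedHeaders):

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]
```

MaxAgeSeconds just caches the preflight result in the browser; it does not loosen anything.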
4) Checksums and integrity headers can break signatures
AWS’s presigned URL docs mention checksums for upload integrity. Modern SDKs can add checksum headers automatically.
That sounds great until your browser upload path does not send the same checksum header that the server signed, or your server signs none but your client sends one anyway.
If you see Content-MD5 or x-amz-checksum-* anywhere, treat it like Content-Type: either sign it and send it, or do not send it.
5) Your HTTP client is helping you in the worst way
Axios, fetch wrappers, and some S3 helpers love adding headers.
Real examples I have seen cause R2 403s:
- Adding x-amz-acl even though you did not sign it
- Switching the body to multipart/form-data
- Adding Cache-Control or Content-Disposition with a different value than what you signed
- Using a proxy that rewrites Host
If you are stuck, do one thing: drop down to plain fetch and send the file bytes directly.
A boring, reliable Next.js + R2 upload contract
Here is the pattern I ship for US-based startups that want direct-to-R2 uploads without waking up to a 403 pager.
Server: mint presigned PUT with a strict contract
Pick a key. Decide Content-Type. Decide expiry. Validate auth. Then sign.
I keep expiries short. Five minutes is fine for normal uploads. If you need 30 minutes, ask why. Long expiries are how URLs leak and turn into weird abuse.
I also return the contentType I signed so the client does not guess.
TypeScript sketch (Node runtime):
- Create S3 client with endpoint https://<ACCOUNT_ID>.r2.cloudflarestorage.com and region auto.
- Call getSignedUrl(S3, new PutObjectCommand({ Bucket, Key, ContentType }), { expiresIn: 300 }).
- Return { url, key, contentType }.
Client: upload with fetch and match the contract
Do not use FormData for presigned PUT uploads. FormData belongs to the presigned POST policy flow, which is a different mechanism with a different signature.
Use fetch with the File or Blob body:
fetch(url, { method: 'PUT', headers: { 'Content-Type': contentType }, body: file })
If you signed image/png, do not send image/png; charset=utf-8. Do not omit it. Do not guess.
Bucket: CORS policy that matches reality
Cloudflare’s CORS example for presigned URLs shows allowing PUT from a specific origin with Content-Type allowed.
If you are in the US or EU dealing with user uploads, split dev and prod buckets. Dev can allow localhost. Prod should not.
The fast triage loop (three requests)
When an upload fails, I run these in order:
1) curl -X PUT with --data-binary and the Content-Type header cleared (curl defaults --data-binary to application/x-www-form-urlencoded, so pass -H 'Content-Type:' to strip it). If this fails, the URL is wrong or expired.
2) curl -X PUT with the exact Content-Type you expect. If this fails but (1) works, you signed Content-Type and you are mismatching.
3) Browser upload with DevTools Network open. Compare Request Headers to X-Amz-SignedHeaders.
If step (1) works and step (3) fails, it is almost always CORS or header drift.
The hidden header list (print this)
These are the headers that silently show up and ruin your day:
- Content-Type (exact string, including charset)
- Content-MD5 / x-amz-checksum-*
- x-amz-meta-* (metadata)
- x-amz-acl
- Cache-Control and Content-Disposition (if you sign them)
- Any custom header your proxy adds
Rule of thumb: if a header is in X-Amz-SignedHeaders, you must send it exactly. If it is not in that list, do not send it unless you are sure it is safe.
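To automate that rule of thumb, here is a small hypothetical helper that diffs the headers you are about to send against the signed list (the function name and sample values are illustrative):

```typescript
// Compare outgoing request headers against the presigned URL's contract.
// Returns signed headers you forgot and unsigned headers you are adding.
function headerDrift(presignedUrl: string, headers: Record<string, string>) {
  const signed = (new URL(presignedUrl).searchParams.get("X-Amz-SignedHeaders") ?? "")
    .split(";")
    .filter(Boolean);
  const sending = Object.keys(headers).map((h) => h.toLowerCase());
  return {
    // host is sent by the HTTP client itself, so don't flag it as missing
    missing: signed.filter((h) => h !== "host" && !sending.includes(h)),
    extra: sending.filter((h) => !signed.includes(h)),
  };
}

const driftUrl =
  "https://bucket.acct.r2.cloudflarestorage.com/key?X-Amz-SignedHeaders=host%3Bcontent-type";
console.log(headerDrift(driftUrl, { "x-amz-acl": "private" }));
// → { missing: [ 'content-type' ], extra: [ 'x-amz-acl' ] }
```

Dropping something like this into a dev-only log line catches drift before the 403 does.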
Real cost of getting this wrong (numbers founders care about)
This bug class looks small. It is not.
A typical early-stage SaaS with user uploads will see a 1 to 3 percent upload failure rate if the contract is loose. That is enough to create constant ‘my file is missing’ tickets.
We have seen teams spend 2 to 4 engineer-days chasing it because it only happens on certain browsers, certain file types, or certain networks.
At US agency rates, that is often $2k to $8k of burn on a bug that should be a 45 minute fix.
Worse: some teams “fix” it by making the bucket public or opening CORS wide. That trades a debugging issue for a security one.
Security stance (slightly provocative)
I see teams solve this by removing ContentType from the signature and setting AllowedHeaders to * and AllowedOrigins to *. It fixes the 403, sure.
It also turns your upload URL into a public dumping ground if the link leaks. In 2026, links leak. Logs, analytics, support screenshots, browser extensions, you name it.
My rule: short expiry (5 minutes), signed Content-Type when it matters, CORS locked to your real origins, and server checks before you mint the URL.
If you are a founder in the US, UK, or EU, this is the difference between a normal week and a compliance headache.
Recap: what to do when R2 says 403
- Read X-Amz-SignedHeaders. That is your contract.
- Make your client send exactly those headers, no more, no less.
- Configure R2 CORS to allow your real origin, your real method, and your real headers.
- Keep presigned URLs short-lived and generate them only after auth and file validation.
Once you treat the presigned URL like a receipt instead of a password, these bugs stop being spooky.
---
Akshit Ahuja
Co-Founder & Lead Engineer
Backend systems specialist who thrives on building reliable, scalable infrastructure. Akshit handles everything from API design to third-party integrations, ensuring every product HeyDev ships is production-ready.
