R2 Multipart Upload InvalidPart Fix (2026)

Akshit Ahuja
Co-Founder & Lead Engineer
If you have ever had a multipart upload work fine locally and then explode in prod with InvalidPart, you are not alone. I see this one a lot with Cloudflare R2 when teams wire it up through the AWS S3 SDK and call it a day.
The annoying part is that the error looks like data corruption, but most of the time you are just sending the wrong ETag back to CompleteMultipartUpload, or the SDK is adding checksum behavior you did not ask for.
This post is a field guide. It is opinionated. It is written for people shipping Next.js apps (App Router or Pages) with direct-to-R2 uploads, presigned URLs, or server-side multipart.
The failure mode: CompleteMultipartUpload returns InvalidPart
The AWS S3 API expects you to upload parts, capture each part's ETag, then call CompleteMultipartUpload with a list of {PartNumber, ETag} pairs. AWS is picky here. R2 is also picky, and it does not always forgive small mistakes.
If the ETag in your CompleteMultipartUpload request does not match what R2 recorded for that part number, you get InvalidPart. Sometimes you will also see InvalidPartOrder, but InvalidPart is the classic.
What I see teams do wrong (in the wild)
1) Stripping quotes from ETag
S3-compatible APIs often return ETag values wrapped in quotes in headers or XML. People strip the quotes because it looks messy. Then they pass the naked value into CompleteMultipartUpload. Sometimes that still works on AWS. On R2, it can fail.
Rule: store the ETag exactly as the SDK gives it to you. If the SDK returns "abc", keep the quotes. If it returns abc, keep it unquoted. Do not be clever.
2) Normalizing case or trimming whitespace
I have seen code that lowercases ETags, trims, or runs them through JSON stringify and parse cycles. Multipart completion is not the place to clean strings.
3) Mixing SDK-managed multipart with your own presigned UploadPart
This is the big one. If you use the AWS SDK's high-level multipart helper (like lib-storage Upload) it may choose part size, concurrency, checksums, and retry behavior. If you also generate presigned UploadPart URLs on the side, you end up with a franken-upload. The SDK thinks it owns the uploadId state. You think you own it. R2 owns the truth.
Pick one model per upload: either client uploads parts via presigned URLs and your server only does CreateMultipartUpload and CompleteMultipartUpload, or your server does the whole multipart upload with the SDK.
4) Forgetting that retries can overwrite a part
Part numbers are not append-only. Uploading a part with the same part number overwrites the previous part. If your client retries part 7 and you keep the old ETag from the first attempt, you will complete with a stale ETag and get InvalidPart.
Rule: treat the last successful UploadPart response as the only truth. If you retry, overwrite the stored ETag for that part number.
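A minimal sketch of that bookkeeping (a hypothetical helper, not from any SDK): keep exactly one slot per part number and let the latest successful response win.

```typescript
// Hypothetical bookkeeping for uploaded parts. The only rule:
// the latest successful UploadPart response wins for a part number.
class PartTracker {
  private etags = new Map<number, string>();

  // Call this after EVERY successful UploadPart, including retries.
  record(partNumber: number, etag: string): void {
    this.etags.set(partNumber, etag); // overwrites any stale ETag
  }

  // Parts list for CompleteMultipartUpload, ascending by PartNumber.
  toParts(): { PartNumber: number; ETag: string }[] {
    return [...this.etags.entries()]
      .sort(([a], [b]) => a - b)
      .map(([PartNumber, ETag]) => ({ PartNumber, ETag }));
  }
}
```

If part 7 is retried, the second record(7, ...) call replaces the first ETag, which is exactly the state R2 expects at completion time.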
5) Sending the wrong content length or using chunked encoding
For presigned UploadPart, some HTTP clients will try to stream with chunked transfer encoding. That can break signature validation depending on how the URL was signed. Even when the signature passes, a mismatched Content-Length means the stored part may not contain the bytes you think it does, and the recorded ETag will not match what you saved.
My take: for browser uploads, use PUT with a Blob and let fetch set Content-Length. For Node, avoid chunked streaming for presigned requests unless you really know what you are signing.
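In the browser, that is a plain PUT of a Blob slice. This sketch uses hypothetical names; note that your R2 bucket's CORS policy must expose the ETag header or the client will read null.

```typescript
// Hypothetical browser-side part upload. `fetch` derives Content-Length
// from the Blob, so no chunked transfer encoding is involved.
async function uploadPart(presignedUrl: string, part: Blob): Promise<string> {
  const res = await fetch(presignedUrl, { method: "PUT", body: part });
  if (!res.ok) throw new Error(`UploadPart failed: HTTP ${res.status}`);
  const etag = res.headers.get("ETag");
  // A null ETag here usually means the CORS policy does not expose it.
  if (!etag) throw new Error("ETag header missing (check CORS ExposeHeaders)");
  return etag; // store verbatim, quotes and all
}
```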
R2-specific context you should know in 2026
Cloudflare R2 is S3 API compatible, but not identical. The docs spell out that some checksum features differ from AWS. For example, R2 has a table of supported checksum algorithms and whether it supports FULL_OBJECT vs COMPOSITE types.
That matters because the AWS SDK has been getting more aggressive about checksums over the last couple years. If your SDK decides to send checksum headers you did not plan for, you can hit odd edge cases.
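If you are on a recent AWS SDK v3 (the versions that calculate CRC32 checksums by default), Cloudflare's docs suggest telling the client to only send checksums when an operation requires them. Treat this as a sketch and check the option names against the SDK version you actually run:

```typescript
import { S3Client } from "@aws-sdk/client-s3";

// Keep the SDK from injecting default checksum headers R2 may not expect.
const r2 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
  requestChecksumCalculation: "WHEN_REQUIRED",
  responseChecksumValidation: "WHEN_REQUIRED",
});
```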
Also, multipart operations are Class A operations on R2. As of the current R2 pricing page, Standard storage is $0.015 per GB-month, Class A is $4.50 per million requests, and Class B is $0.36 per million. Egress is free. Those numbers matter when a buggy client retries parts 10 times.
A boring but reliable architecture for Next.js + R2 multipart
If you are a founder in the US or UK and you want the least drama, do this:
1) Server: CreateMultipartUpload (store uploadId, key, userId, intended size)
2) Server: Generate presigned URLs for UploadPart for part numbers 1..N (or generate on demand)
3) Client: Upload parts directly to R2 using those URLs
4) Client: Send back an array of {partNumber, etag} to your server
5) Server: CompleteMultipartUpload using exactly those ETags
No SDK-managed multipart on the client. No server streaming the whole file through Next.js. Keep the data path simple.
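Steps 1 and 2 look roughly like this with AWS SDK v3. This is a sketch: the function names and env vars are mine, and auth, persistence, and error handling are omitted.

```typescript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const r2 = new S3Client({ region: "auto", endpoint: process.env.R2_ENDPOINT });

// Step 1: create the multipart upload; persist the uploadId server-side.
export async function startUpload(bucket: string, key: string): Promise<string> {
  const { UploadId } = await r2.send(
    new CreateMultipartUploadCommand({ Bucket: bucket, Key: key })
  );
  return UploadId!;
}

// Step 2: mint a short-lived presigned URL for one part.
export async function presignPart(
  bucket: string,
  key: string,
  uploadId: string,
  partNumber: number
): Promise<string> {
  return getSignedUrl(
    r2,
    new UploadPartCommand({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    }),
    { expiresIn: 15 * 60 } // 15 minutes is plenty for one part
  );
}
```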
Debug checklist: how to catch the real bug fast
Step 1: Log the raw ETag per part
When a part upload succeeds, log the part number and the exact ETag string you store. Include whether it has quotes. In prod, I usually log the first 8 chars only plus a flag like quoted:true to avoid noise.
Step 2: Verify you are not double JSON-encoding
A classic bug: you store the header value with its surrounding quotes, then JSON-serialize it a second time, so what lands in your database is \"abc\" with escaped quotes. When you later parse and send it, you end up with extra escaping.
If you ever see backslashes in your ETag strings, stop. That is your bug.
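You can reproduce the double-encoding bug in two lines, and guard against it cheaply (the guard helper here is hypothetical):

```typescript
// Double-encoding demo: a quoted ETag serialized again grows backslashes.
const etag = '"9b2cf535f27731c974343645a3985328"';
const doubleEncoded = JSON.stringify(etag); // contains \" escapes now

// Cheap guard before CompleteMultipartUpload (hypothetical helper):
function assertSaneEtag(value: string): string {
  if (value.includes("\\")) {
    throw new Error(`ETag looks double-encoded: ${value}`);
  }
  return value;
}
```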
Step 3: Confirm part numbers start at 1
Multipart part numbers run from 1 to 10000. I still see people start at 0 because arrays are zero-indexed. AWS rejects part number 0 outright; some clients let the upload through and completion fails instead. Start at 1.
Step 4: Check for overwritten parts
If you allow retries, your UI should track per-part attempt. If part 4 was uploaded twice, only keep the newest ETag.
Step 5: Ensure the final list is sorted
CompleteMultipartUpload expects parts in ascending order by PartNumber. Many SDKs sort for you, but if you hand-roll the XML payload or use a thin client, sort it yourself.
Concrete fixes (code-level) that have saved real incidents
Fix 1: Keep the ETag untouched
If you use the AWS SDK v3 in Node, the UploadPartCommand response typically includes an ETag. Store it as-is. Do not strip quotes.
Fix 2: Do not mix two multipart implementations
If you are using @aws-sdk/lib-storage Upload, do not also issue CreateMultipartUpload and UploadPart yourself. Either use the helper end-to-end or do it manually end-to-end.
Fix 3: Pin part size and concurrency
When you let the client choose part size dynamically, you get messy edge cases near the end of the file, especially when a user pauses and resumes on flaky hotel Wi-Fi.
I like 8 MiB or 16 MiB parts for browser uploads. Concurrency 3 to 5. Past that, you just create self-inflicted throttling.
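Pinning the plan up front is a few lines of arithmetic (hypothetical helper, 16 MiB parts):

```typescript
const PART_SIZE = 16 * 1024 * 1024; // 16 MiB, fixed for every upload

// Compute the byte range for each part of a file of `totalBytes`.
function planParts(
  totalBytes: number
): { partNumber: number; start: number; end: number }[] {
  const count = Math.max(1, Math.ceil(totalBytes / PART_SIZE));
  return Array.from({ length: count }, (_, i) => ({
    partNumber: i + 1, // part numbers are 1-based
    start: i * PART_SIZE,
    end: Math.min((i + 1) * PART_SIZE, totalBytes), // exclusive, for Blob.slice
  }));
}
```

Because the boundaries are a pure function of file size, a paused-and-resumed upload re-slices byte-identical parts instead of drifting.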
Fix 4: Treat completion as a server-only privilege
Do not let the browser call CompleteMultipartUpload directly with your credentials. Make completion a server route that validates the part list length, part numbers, and the total expected byte size.
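The server-side gate can be a pure function that runs before you ever touch the SDK. This is hypothetical; the expected part count comes from the size the client declared at CreateMultipartUpload time.

```typescript
type PartInput = { partNumber: number; etag: string };

// Validate the client-supplied part list before CompleteMultipartUpload.
function validateParts(parts: PartInput[], expectedCount: number): PartInput[] {
  if (parts.length !== expectedCount) {
    throw new Error(`expected ${expectedCount} parts, got ${parts.length}`);
  }
  const sorted = [...parts].sort((a, b) => a.partNumber - b.partNumber);
  sorted.forEach((p, i) => {
    // Part numbers must be exactly 1..N, no gaps, no duplicates.
    if (p.partNumber !== i + 1) {
      throw new Error(`missing or duplicate part ${i + 1}`);
    }
    if (!p.etag || p.etag.includes("\\")) {
      throw new Error(`suspect ETag for part ${p.partNumber}`);
    }
  });
  return sorted; // ascending order, ready for the Parts field
}
```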
What this costs when it goes wrong
Let us put numbers on it, because founders care.
Example: 1 GB video, 16 MiB parts. That is 64 parts. Each UploadPart is a Class A operation. CreateMultipartUpload and CompleteMultipartUpload are also Class A.
So one clean upload is 66 Class A ops. If a buggy client retries each part 3 extra times on average, you are at 66 + (64 * 3) = 258 Class A ops.
At $4.50 per million Class A requests, that is still cheap per upload. But at scale, retries add up fast, and they also hammer your users. The bigger cost is support time and the angry customer email.
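As back-of-envelope code (same arithmetic as above; the Class A price is the current list price and may change):

```typescript
const CLASS_A_PER_MILLION = 4.5; // USD, R2 Class A list price

// Class A ops for one multipart upload:
// create + complete + one per part attempt (including retries).
function classAOps(parts: number, avgExtraRetriesPerPart: number): number {
  return 2 + parts * (1 + avgExtraRetriesPerPart);
}

function costUsd(ops: number): number {
  return (ops / 1_000_000) * CLASS_A_PER_MILLION;
}
```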
My take: stop chasing perfect S3 compatibility
R2 is good. It is fast. The pricing is friendly, especially since egress is free. But it is not AWS. Pretending it is AWS leads to sloppy engineering.
Write your upload path like you expect failure: retries, overwritten parts, mismatched ETags, timeouts. Add logs that you can read at 2 AM. And keep the completion logic on the server.
Quick ‘ship it’ checklist
- Part numbers start at 1, and you sort them before completion
- You store ETag strings exactly as returned
- Retries overwrite the stored ETag for that part number
- You do not mix SDK helper multipart with manual multipart
- Completion is server-only and validates inputs
If you follow that list, InvalidPart goes from a weekly incident to a story you tell once.
Common questions I get from US and EU teams
Can I do multipart from Vercel or Netlify functions?
Yes, but I would not stream the file through your function. Use the function to mint presigned URLs, then let the browser talk to R2. Serverless timeouts and memory caps make full proxy uploads a self-own.
Do I need to use checksums?
If you are using presigned URLs from your server, start without custom checksum headers. Get the happy path stable first. Then add checksums if you have a real integrity issue, not because it feels enterprise.
What about resumable uploads?
Resumable uploads are just "retry part N" plus storing uploadId state. The trap is keeping stale ETags. If you support resume across sessions, persist the latest ETag per part and expire uploadIds that never complete.
When to stop and switch approaches
If your product is mostly large uploads (videos, CAD files, raw datasets) and you are spending more than a couple hours a month on upload weirdness, stop hacking it. Use a battle-tested client protocol like tus or Uppy with a known-good multipart plugin, or pay for a managed upload service. Your time is more expensive than storage.
---
Akshit Ahuja
Co-Founder & Lead Engineer
Backend systems specialist who thrives on building reliable, scalable infrastructure. Akshit handles everything from API design to third-party integrations, ensuring every product HeyDev ships is production-ready.
