Commit b69e682

feat(objects): add multipart upload documentation
Signed-off-by: Xe Iaso <[email protected]>
1 parent 07110e6 commit b69e682

File tree

1 file changed: +135 −0

docs/objects/multipart-uploads.mdx

Lines changed: 135 additions & 0 deletions

@@ -0,0 +1,135 @@

# Multipart Upload

Multipart Upload allows you to upload a large object in parts: you can upload
parts in parallel to improve throughput, and if the upload of one part fails,
you can retry just that part without affecting the others, making large
transfers more resilient to network errors.

Tigris is S3-compatible, so you can use the same SDKs and patterns for multipart
uploads. Tigris also routes traffic to the nearest region by default via its
global endpoint, providing accelerated, low-latency ingress without any extra
configuration. Use `https://t3.storage.dev` (outside Fly.io) or
`https://fly.storage.tigris.dev` (from Fly.io).

## Prerequisites

- A Tigris account and access keys.
- A bucket.
- An SDK that supports S3 Multipart Upload.

Tigris implements the standard S3 Multipart Upload operations
(`CreateMultipartUpload`, `UploadPart`, `CompleteMultipartUpload`, etc.), so any
modern S3 client will work.
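
If you want to drive the protocol yourself rather than use a managed helper,
the flow is three calls: create the upload, upload each part, then complete it.
A minimal sketch with the AWS SDK v3 (the function name and in-memory `parts`
array are illustrative; credentials are read from the environment):

```ts
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://t3.storage.dev" });

// Upload `parts` as one object; every part except the last must be >= 5 MiB.
export async function rawMultipartUpload(
  bucket: string,
  key: string,
  parts: Buffer[],
) {
  // 1. Start the upload; the UploadId ties all the parts together.
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({ Bucket: bucket, Key: key }),
  );

  // 2. Upload each part (PartNumber is 1-based; parts may be sent in parallel).
  const completed: { ETag?: string; PartNumber: number }[] = [];
  for (let i = 0; i < parts.length; i++) {
    const { ETag } = await s3.send(
      new UploadPartCommand({
        Bucket: bucket,
        Key: key,
        UploadId,
        PartNumber: i + 1,
        Body: parts[i],
      }),
    );
    completed.push({ ETag, PartNumber: i + 1 });
  }

  // 3. Tell S3 which parts, in order, make up the finished object.
  await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      UploadId,
      MultipartUpload: { Parts: completed },
    }),
  );
}
```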

When using an S3-compatible tool or SDK, use the global endpoint
`https://t3.storage.dev` and virtual-hosted-style addressing, where the bucket
name is part of the hostname.

## Example: Node.js (AWS SDK v3) — Managed Multipart Upload

The following is an example of a managed multipart upload using the AWS SDK v3
for Node.js.

```ts
import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { createReadStream } from "node:fs";

const s3 = new S3Client({
  region: "auto",
  endpoint: "https://t3.storage.dev",
  forcePathStyle: false, // use virtual-hosted-style addressing
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  },
});

export async function putLargeObject(
  bucket: string,
  key: string,
  filePath: string,
) {
  const upload = new Upload({
    client: s3,
    params: { Bucket: bucket, Key: key, Body: createReadStream(filePath) },
    queueSize: 8, // number of parts uploaded concurrently
    partSize: 32 * 1024 * 1024, // 32 MiB parts
  });
  await upload.done();
}
```
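
The `Upload` helper also emits progress events, which is useful for showing
upload status in a UI: `upload.on("httpUploadProgress", (progress) => ...)`.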

## Example: Python (boto3) — Tuned Transfer Config

The following is an example of a multipart upload with a tuned transfer
configuration using boto3 for Python.

```py
import os

import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

s3 = boto3.client(
    "s3",
    endpoint_url="https://t3.storage.dev",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    config=Config(s3={"addressing_style": "virtual"}),
)

# 32 MiB parts, multipart threshold 64 MiB
tconfig = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,
    multipart_chunksize=32 * 1024 * 1024,
    max_concurrency=8,
    use_threads=True,
)

def put_large_object(bucket, key, path):
    s3.upload_file(path, bucket, key, Config=tconfig)
```

The `upload_file` method transparently switches to a multipart upload for files
larger than the specified threshold.

## Cleaning Up In-Progress Uploads

It is good practice to occasionally list and abort stale multipart uploads to
reclaim storage:

- `ListMultipartUploads` to discover in-progress multipart uploads
- `AbortMultipartUpload` to cancel stale multipart uploads

Each SDK exposes these as standard S3 operations, as shown in the sketch below.
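
As an illustration, here is a sketch with the AWS SDK v3 that aborts uploads
started more than a day ago (the cutoff and function name are arbitrary, and
pagination of truncated listings is omitted for brevity):

```ts
import {
  S3Client,
  ListMultipartUploadsCommand,
  AbortMultipartUploadCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "auto", endpoint: "https://t3.storage.dev" });

// Abort every multipart upload in `bucket` started more than 24 hours ago.
export async function abortStaleUploads(bucket: string) {
  const cutoff = Date.now() - 24 * 60 * 60 * 1000;
  const { Uploads } = await s3.send(
    new ListMultipartUploadsCommand({ Bucket: bucket }),
  );
  for (const u of Uploads ?? []) {
    if (u.Initiated && u.Initiated.getTime() < cutoff) {
      await s3.send(
        new AbortMultipartUploadCommand({
          Bucket: bucket,
          Key: u.Key,
          UploadId: u.UploadId,
        }),
      );
    }
  }
}
```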

## Browser & Mobile Uploads

For browser and mobile applications, it is best not to proxy large payloads
through your own servers. Two common approaches are:

- **[Presigned URLs](presigned.md):** Generate a time-limited URL on your server
  and upload directly from the browser or mobile app (see the sketch after this
  list).
- **[HTML Form POST](upload-via-html-form.md):** Use a policy-based POST from
  the browser to constrain headers like `Content-Type` and object key patterns.
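
To combine presigned URLs with multipart uploads, the server creates the upload
and presigns each `UploadPart` request, and the client PUTs the parts directly.
A sketch using `@aws-sdk/s3-request-presigner` (the helper name and one-hour
expiry are illustrative):

```ts
import { S3Client, UploadPartCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "auto", endpoint: "https://t3.storage.dev" });

// Presign a URL that lets the browser PUT one part of an existing upload.
export async function presignPart(
  bucket: string,
  key: string,
  uploadId: string,
  partNumber: number, // 1-based
): Promise<string> {
  return getSignedUrl(
    s3,
    new UploadPartCommand({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    }),
    { expiresIn: 3600 }, // URL valid for one hour
  );
}
```

The client returns each part's `ETag` response header to the server, which then
calls `CompleteMultipartUpload`.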

## CLI & Tools

The following tools can be used for multipart uploads.

- **AWS CLI**:

  ```bash
  aws s3 cp bigfile.bin s3://my-bucket/bigfile.bin \
    --endpoint-url https://t3.storage.dev
  ```

  The AWS CLI automatically switches to multipart for large files.

- **rclone**: Set the endpoint to `https://t3.storage.dev` (a minimal config
  sketch follows this list).
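
A minimal rclone sketch, assuming a remote named `tigris` and credentials
exported as `AWS_ACCESS_KEY_ID`/`AWS_SECRET_ACCESS_KEY`; the chunk size and
concurrency values are illustrative:

```bash
# ~/.config/rclone/rclone.conf:
#
#   [tigris]
#   type = s3
#   provider = Other
#   endpoint = https://t3.storage.dev
#   env_auth = true

# Copy with 32 MiB parts and 8 concurrent part uploads.
rclone copy bigfile.bin tigris:my-bucket \
  --s3-chunk-size 32M --s3-upload-concurrency 8
```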

## Limits

- Standard S3 multipart semantics apply (e.g., large objects up to 5 TB).
- Tigris implements the S3 MPU API surface
  (create/upload parts/complete/list/abort).
