Table of Contents#
- Prerequisites
- Understanding Multipart Uploads in S3
- Why Presigned URLs for Browser Uploads?
- Step-by-Step Implementation
- Troubleshooting Common Issues
- Conclusion
- References
Prerequisites#
Before starting, ensure you have:
- An AWS account with permissions to manage S3 buckets and IAM roles.
- An S3 bucket (create one via AWS Console or AWS CLI).
- Basic knowledge of:
  - JavaScript/TypeScript (frontend and backend).
  - Node.js (for the backend server; we’ll use Express).
  - REST APIs (to communicate between frontend and backend).
- AWS SDK for JavaScript v3 (installed via npm).
Understanding Multipart Uploads in S3#
Multipart uploads break a large file into smaller parts (5MB–5GB each; every part except the last must be at least 5MB) and upload them independently. This offers several advantages:
- Resumable uploads: Retry failed parts without reuploading the entire file.
- Parallel uploads: Upload parts simultaneously to speed up transfer.
- No single large request: Avoids timeouts for files >5GB (S3’s single-part limit).
The workflow has three main steps:
- Initiate: Start the multipart upload and get an `UploadId`.
- Upload parts: Upload each part with its `PartNumber` (1–10,000) and track the `ETag` (entity tag) returned for each part.
- Complete: Combine the parts using the `UploadId` and the list of `ETag`s.
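To make the part-numbering rules concrete, here is a small hypothetical helper (not required by the tutorial) that computes the byte range and `PartNumber` for each part:

```javascript
// Hypothetical helper: compute the byte range for each part of a multipart upload.
// Assumes partSize is at least 5MB (S3's minimum for every part except the last).
function computePartRanges(fileSize, partSize) {
  const ranges = [];
  for (let start = 0, partNumber = 1; start < fileSize; start += partSize, partNumber++) {
    ranges.push({ partNumber, start, end: Math.min(start + partSize, fileSize) });
  }
  if (ranges.length > 10000) {
    throw new Error('S3 allows at most 10,000 parts per upload; increase partSize');
  }
  return ranges;
}

// A 12MB file with 5MB parts yields three parts; the last one is smaller.
const ranges = computePartRanges(12 * 1024 * 1024, 5 * 1024 * 1024);
```

Note that only the final range is allowed to be shorter than `partSize`; S3 rejects undersized parts anywhere else in the sequence.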
Why Presigned URLs for Browser Uploads?#
Direct browser-to-S3 uploads require temporary, secure access. Presigned URLs solve this by:
- Avoiding credential exposure: No AWS keys in frontend code.
- Temporary access: URLs expire after a set time (e.g., 15 minutes).
- Granular permissions: Restrict access to specific objects/actions (e.g., only upload a specific part).
Presigned URLs are generated server-side using AWS credentials, then sent to the browser. The browser uses these URLs to interact directly with S3.
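For context, a presigned URL is just the object URL plus SigV4 signing information carried in `X-Amz-*` query parameters; the expiry, for instance, travels in `X-Amz-Expires`. A sketch with a made-up URL (the bucket, upload ID, and signature are placeholders):

```javascript
// Hypothetical presigned part-upload URL (all values are placeholders).
// The X-Amz-* query parameters are added by the signer; the browser never sees AWS keys.
const presigned = new URL(
  'https://your-bucket-name.s3.us-east-1.amazonaws.com/uploads/video.mp4' +
  '?partNumber=1&uploadId=EXAMPLE_UPLOAD_ID' +
  '&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=900&X-Amz-Signature=EXAMPLE'
);

// How long the URL stays valid, in seconds (here: 15 minutes).
const expiresIn = Number(presigned.searchParams.get('X-Amz-Expires'));
```

Because the signature covers the method, key, and parameters, a URL signed for uploading part 1 cannot be reused to upload part 2 or to read other objects.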
Step-by-Step Implementation#
4.1 Configure S3 Bucket and CORS#
Browser uploads require CORS (Cross-Origin Resource Sharing) configuration on your S3 bucket to allow requests from your frontend domain.
Step 1: Enable CORS on the S3 Bucket#
- Go to the S3 Console, select your bucket, and navigate to the Permissions tab.
- Under Cross-origin resource sharing (CORS), click Edit and paste the following configuration (replace `https://your-frontend-domain.com` with your actual frontend origin):
```json
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["https://your-frontend-domain.com"],
    "ExposeHeaders": ["ETag"],
    "MaxAge": 3000
  }
]
```
- `AllowedOrigins`: Restrict to your frontend domain for security.
- `ExposeHeaders`: Include `ETag` so the browser can read each part’s ETag after upload.
- `MaxAge`: Caches preflight request results for 3000 seconds (50 minutes).
4.2 Backend: Generate Presigned URLs with AWS SDK v3#
We’ll build a Node.js/Express backend to generate presigned URLs for initiating, uploading parts, and completing the multipart upload.
Step 1: Set Up Dependencies#
Create a new Node.js project and install dependencies:
```shell
mkdir s3-multipart-upload-backend && cd $_
npm init -y
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner express cors dotenv
```
- `@aws-sdk/client-s3`: S3 client for AWS SDK v3.
- `@aws-sdk/s3-request-presigner`: Generates presigned URLs.
- `express`: Web framework for the backend endpoints.
- `cors`: Handles CORS for frontend-backend communication.
- `dotenv`: Loads environment variables from a `.env` file.
Step 2: Configure AWS SDK#
Create a .env file to store AWS credentials (use IAM roles with s3:CreateMultipartUpload, s3:UploadPart, s3:CompleteMultipartUpload permissions):
```
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
S3_BUCKET_NAME=your-bucket-name
```
Step 3: Implement Backend Endpoints#
Create index.js with the following code:
```javascript
require('dotenv').config();
const express = require('express');
const cors = require('cors');
const {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
  CompleteMultipartUploadCommand,
} = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');

const app = express();
app.use(cors());
app.use(express.json());

// Initialize S3 client
const s3Client = new S3Client({
  region: process.env.AWS_REGION,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Endpoint 1: Initiate multipart upload
app.post('/initiate-upload', async (req, res) => {
  const { fileName, contentType } = req.body;
  const key = `uploads/${Date.now()}-${fileName}`; // Unique object key
  try {
    const command = new CreateMultipartUploadCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: key,
      ContentType: contentType,
    });
    const { UploadId } = await s3Client.send(command);
    res.json({ uploadId: UploadId, key });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Endpoint 2: Get presigned URL for a part upload
app.post('/get-part-url', async (req, res) => {
  const { key, uploadId, partNumber } = req.body;
  try {
    const command = new UploadPartCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber,
    });
    const uploadUrl = await getSignedUrl(s3Client, command, { expiresIn: 3600 }); // URL expires in 1 hour
    res.json({ uploadUrl });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

// Endpoint 3: Complete multipart upload
app.post('/complete-upload', async (req, res) => {
  const { key, uploadId, parts } = req.body; // parts = [{ PartNumber: 1, ETag: "..." }, ...]
  try {
    const command = new CompleteMultipartUploadCommand({
      Bucket: process.env.S3_BUCKET_NAME,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: { Parts: parts },
    });
    const result = await s3Client.send(command);
    res.json({ success: true, location: result.Location });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

const PORT = 3001;
app.listen(PORT, () => console.log(`Backend running on port ${PORT}`));
```
4.3 Frontend: Split File, Upload Parts, and Complete Upload#
The frontend handles file selection, splitting into parts, uploading via presigned URLs, and finalizing the upload.
Step 1: Create HTML/JavaScript Frontend#
Create a frontend folder with index.html:
```html
<!DOCTYPE html>
<html>
<body>
  <input type="file" id="fileInput" />
  <div id="progress"></div>
  <script>
    const fileInput = document.getElementById('fileInput');
    const progressDiv = document.getElementById('progress');
    const PART_SIZE = 5 * 1024 * 1024; // 5MB parts
    const backendUrl = 'http://localhost:3001'; // Your backend URL

    fileInput.addEventListener('change', async (e) => {
      const file = e.target.files[0];
      if (!file) return;
      try {
        // Step 1: Initiate multipart upload
        const { uploadId, key } = await initiateUpload(file);
        console.log('Initiated upload. UploadId:', uploadId, 'Key:', key);
        // Step 2: Split file into parts and upload
        const parts = await uploadParts(file, uploadId, key);
        // Step 3: Complete upload
        const result = await completeUpload(key, uploadId, parts);
        progressDiv.textContent = `Upload complete! File URL: ${result.location}`;
      } catch (error) {
        progressDiv.textContent = `Error: ${error.message}`;
      }
    });

    // Initiate multipart upload (call backend)
    async function initiateUpload(file) {
      const response = await fetch(`${backendUrl}/initiate-upload`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          fileName: file.name,
          contentType: file.type,
        }),
      });
      if (!response.ok) throw new Error('Failed to initiate upload');
      return response.json();
    }

    // Split file into parts and upload each
    async function uploadParts(file, uploadId, key) {
      const partCount = Math.ceil(file.size / PART_SIZE);
      const parts = [];
      for (let partNumber = 1; partNumber <= partCount; partNumber++) {
        // Get presigned URL for this part
        const { uploadUrl } = await fetch(`${backendUrl}/get-part-url`, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify({ key, uploadId, partNumber }),
        }).then(res => res.json());
        // Slice this part's chunk out of the file
        const start = (partNumber - 1) * PART_SIZE;
        const end = Math.min(start + PART_SIZE, file.size);
        const chunk = file.slice(start, end);
        // Upload chunk via presigned URL (PUT request)
        const response = await fetch(uploadUrl, {
          method: 'PUT',
          body: chunk,
          headers: { 'Content-Type': file.type },
        });
        if (!response.ok) throw new Error(`Failed to upload part ${partNumber}`);
        // Store ETag (required to complete upload)
        const etag = response.headers.get('ETag');
        parts.push({ PartNumber: partNumber, ETag: etag });
        // Update progress
        progressDiv.textContent = `Uploaded part ${partNumber}/${partCount}`;
      }
      return parts;
    }

    // Complete multipart upload (call backend)
    async function completeUpload(key, uploadId, parts) {
      const response = await fetch(`${backendUrl}/complete-upload`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ key, uploadId, parts }),
      });
      if (!response.ok) throw new Error('Failed to complete upload');
      return response.json();
    }
  </script>
</body>
</html>
```
Key Frontend Logic:#
- File splitting: Uses `File.slice()` to split the file into 5MB chunks.
- Part upload: For each chunk, fetches a presigned URL from the backend, then uploads the chunk via a `PUT` request to S3.
- ETag collection: Stores the `ETag` header from each part upload (required to complete the multipart upload).
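The walkthrough above uploads parts one at a time; to get the parallel-upload benefit from section 2 you need a concurrency cap so the browser isn't saturated. A sketch of one way to do that, with a hypothetical `mapWithConcurrency` helper (`uploadOnePart` in the comment is an assumed refactor of the per-part logic above, not a function from the tutorial):

```javascript
// Hypothetical helper: run an async function over items with at most `limit` in flight.
async function mapWithConcurrency(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;
  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: JS runs one worker at a time between awaits)
      results[i] = await fn(items[i], i);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// In uploadParts, the sequential for-loop could then become (sketch):
// const partNumbers = Array.from({ length: partCount }, (_, i) => i + 1);
// const parts = await mapWithConcurrency(partNumbers, 4, n => uploadOnePart(file, uploadId, key, n));
```

Results are stored by index, so `parts` stays ordered by `PartNumber` even though parts finish out of order, which is what `CompleteMultipartUpload` expects.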
Troubleshooting Common Issues#
1. CORS Errors#
Issue: Browser blocks uploads with "No 'Access-Control-Allow-Origin' header".
Fix: Ensure S3 bucket CORS allows your frontend origin and exposes ETag (see 4.1).
2. Part Size Mismatch#
Issue: S3 rejects parts with "The part size is too small" or "Part number out of range".
Fix: Use part sizes between 5MB and 5GB (every part except the last must be at least 5MB, and an upload can have at most 10,000 parts). Ensure `PART_SIZE` in the frontend matches the chunk size actually uploaded.
3. Expired Presigned URLs#
Issue: Upload fails with "Request has expired".
Fix: Increase `expiresIn` (e.g., 3600 seconds = 1 hour) when generating presigned URLs.
4. Incomplete Uploads#
Issue: Orphaned multipart uploads consume storage.
Fix: Implement an abort endpoint (use `AbortMultipartUploadCommand`) and call it on frontend errors. A bucket lifecycle rule (`AbortIncompleteMultipartUpload`) can also clean up stale uploads automatically.
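A sketch of that abort wiring, assuming the same Express app and `s3Client` as `index.js`; the request-shaping is split into a plain function so it can be shown (and tested) without AWS credentials. The endpoint path and helper name are assumptions, not part of the tutorial's code:

```javascript
// Hypothetical helper: shape the AbortMultipartUploadCommand input from a request body.
// It uses the same Bucket/Key/UploadId triple as the other endpoints.
function buildAbortParams(bucket, { key, uploadId }) {
  return { Bucket: bucket, Key: key, UploadId: uploadId };
}

// Wired into index.js, this might look like (requires the s3:AbortMultipartUpload permission):
// app.post('/abort-upload', async (req, res) => {
//   try {
//     await s3Client.send(
//       new AbortMultipartUploadCommand(buildAbortParams(process.env.S3_BUCKET_NAME, req.body))
//     );
//     res.json({ aborted: true });
//   } catch (error) {
//     res.status(500).json({ error: error.message });
//   }
// });
```

The frontend's `catch` block would then POST `{ key, uploadId }` to this endpoint so failed uploads don't leave billable orphaned parts behind.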
Conclusion#
Browser-based multipart uploads with AWS SDK presigned URLs enable secure, efficient large-file uploads directly to S3. By splitting files into parts, leveraging parallel uploads, and using presigned URLs for temporary access, you avoid server bottlenecks and credential exposure.
Follow the steps above to implement your own solution, and refer to the troubleshooting section for common pitfalls.