A Comprehensive Guide to Securely Uploading and Reading Files from Amazon S3 Using Next.js and NestJS

In modern web applications, file uploads and downloads are common requirements—profile pictures, documents, images, and so on. Amazon S3 (Simple Storage Service) is a popular and reliable solution for storing and serving files in a scalable manner. However, ensuring that uploads and reads are performed securely is critical to protect sensitive data and maintain compliance with best practices.
In this guide, we will cover:
- Setting up an S3 bucket (private by default)
- Creating a dedicated AWS IAM User and assigning proper permissions
- Blocking public access & ACL management
- CORS configuration
- Generating and using Pre-signed URLs (for secure uploads/downloads)
- Implementation in Next.js (frontend)
- Implementation in NestJS (backend)
- Preventing bucket listing
- Common security threats and best practices
By the end, you’ll have a production-ready approach to integrate secure file handling into your web application.
1. Setting Up an S3 Bucket
Step 1: Create an S3 Bucket
- Sign in to the AWS Management Console.
- Navigate to S3.
- Click on Create Bucket.
- Provide a globally unique Bucket name (e.g., my-secure-bucket-12345).
- Choose a region close to your user base to minimize latency (e.g., us-east-1).
- Block Public Access settings for this bucket: keep the defaults (which should block all public access).
- Bucket Versioning: optional, but recommended for better data safety and file version control.
- Click Create bucket.
Step 2: Configure Bucket Settings
- Encryption: Enable server-side encryption with AWS KMS if you want an extra layer of security for sensitive data.
- Bucket Policy: By default, if the bucket is private, no one can access the bucket content without proper credentials. We will refine this with an IAM policy.
At this point, you have a bucket that is private and does not allow public access.
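If you prefer scripting this setup over console clicks, here is a minimal sketch using the AWS SDK for JavaScript v3. The bucket name is a placeholder, and note that for any region other than us-east-1 you must also pass a CreateBucketConfiguration with a LocationConstraint.
// create-bucket.ts: a sketch of Section 1 using the AWS SDK v3 (bucket name is a placeholder)
import {
  S3Client,
  CreateBucketCommand,
  PutBucketEncryptionCommand,
  PutBucketVersioningCommand,
} from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });
const Bucket = 'my-secure-bucket-12345';

async function createSecureBucket() {
  // Create the bucket (new buckets are private by default)
  await s3.send(new CreateBucketCommand({ Bucket }));

  // Optional: default server-side encryption with AWS KMS
  await s3.send(
    new PutBucketEncryptionCommand({
      Bucket,
      ServerSideEncryptionConfiguration: {
        Rules: [{ ApplyServerSideEncryptionByDefault: { SSEAlgorithm: 'aws:kms' } }],
      },
    }),
  );

  // Optional: versioning for better data safety
  await s3.send(
    new PutBucketVersioningCommand({
      Bucket,
      VersioningConfiguration: { Status: 'Enabled' },
    }),
  );
}

createSecureBucket().catch(console.error);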
2. Creating a Dedicated AWS IAM User with Specific Permissions
Why a Dedicated IAM User or Role?
Using a dedicated IAM user or role to access S3 ensures that you follow the principle of least privilege—the user or role has only the permissions they need and nothing more. This is far more secure than using your AWS root account or giving excessive privileges.
Step 1: Create the IAM User
- Go to the AWS IAM console.
- Click Users -> Add users.
- Enter a username (e.g., s3-upload-user).
- Select Access key - Programmatic access (since we’ll be using this user in code).
- Click Next: Permissions.
Step 2: Create a Custom Policy
- On the permissions screen, select Attach existing policies directly or Create policy.
- Click Create policy and add a custom JSON policy to grant read/write access to the specific bucket only.
Below is an example policy granting s3:PutObject, s3:GetObject, and s3:ListBucket on the specified bucket, while preventing public listing. You can fine-tune this based on your needs.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-secure-bucket-12345"]
    },
    {
      "Sid": "AllowGetPutObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": ["arn:aws:s3:::my-secure-bucket-12345/*"]
    }
  ]
}
Important: Replace my-secure-bucket-12345 with your actual bucket name. If you only want to allow uploads, remove the s3:GetObject permission. Also note that the s3:ListBucket permission is used to list objects; you can remove or restrict it if you don’t need it, but some applications rely on listing to confirm object existence.
- Save the policy.
- Attach the new policy to your new IAM user.
Step 3: Generate Access Keys
- On the last step, AWS will give you the Access key ID and Secret access key.
- Store these securely (e.g., in an environment variable, or a secure secrets manager). Do not commit them to source control.
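For local development, one common approach is a .env file that is excluded from version control. The variable names below are the ones the NestJS service later reads from process.env; the values shown are placeholders.
# .env (add this file to .gitignore; values are placeholders, never real keys)
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY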
3. Blocking Public Access & Managing ACLs
Amazon S3 supports ACLs (Access Control Lists), which define access at the object and bucket level. For most modern applications:
- Use Bucket Policies and IAM Policies rather than ACLs to manage access.
- Disabling public ACLs is strongly recommended to prevent public read or write.
Disabling Public Access
While creating your bucket or afterwards in the Permissions tab of your S3 bucket, ensure that the following is set to On:
- Block Public Access (ACLs)
- Block Public Access (bucket policies)
(Adjust them depending on your scenario, but typically you want to block all public access unless absolutely needed.)
Why Avoid Public ACLs?
- Public ACLs can accidentally expose your files to anyone on the internet.
- It’s easy to misconfigure and inadvertently allow open access.
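The Block Public Access settings can also be applied programmatically. Here is a minimal sketch with the SDK v3; the bucket name is a placeholder.
// block-public-access.ts: a sketch of turning on all four Block Public Access settings
import { S3Client, PutPublicAccessBlockCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

async function blockPublicAccess() {
  await s3.send(
    new PutPublicAccessBlockCommand({
      Bucket: 'my-secure-bucket-12345',
      PublicAccessBlockConfiguration: {
        BlockPublicAcls: true,       // reject requests that set public ACLs
        IgnorePublicAcls: true,      // ignore any public ACLs already present
        BlockPublicPolicy: true,     // reject bucket policies that grant public access
        RestrictPublicBuckets: true, // restrict access even if a public policy slips through
      },
    }),
  );
}

blockPublicAccess().catch(console.error);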
4. Setting Up CORS for S3
If you are uploading files directly from the browser (client) to S3, you need to configure Cross-Origin Resource Sharing (CORS). This allows your web application domain (e.g., https://example.com) to securely upload to the S3 bucket.
In the Permissions tab of your S3 bucket:
- Under CORS configuration, click Edit.
- Add a CORS rule. A typical minimal configuration might look like this:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST"],
    "AllowedOrigins": ["https://your-frontend-domain.com"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3000
  }
]
- AllowedHeaders: Specifies which headers are allowed in a preflight request. * allows all.
- AllowedMethods: The HTTP methods you allow (e.g., GET, PUT, POST).
- AllowedOrigins: The domains allowed to make cross-origin requests (replace with your actual domain).
- ExposeHeaders: Response headers that are safe to expose to the browser (e.g., ETag).
- MaxAgeSeconds: The number of seconds browsers should cache the preflight response.
This ensures that browsers permit cross-origin requests from your web app to S3.
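The same rule can also be applied with the SDK instead of the console. A minimal sketch follows; the bucket name and origin are placeholders.
// apply-cors.ts: a sketch of applying the CORS rule above via the SDK v3
import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

async function applyCors() {
  await s3.send(
    new PutBucketCorsCommand({
      Bucket: 'my-secure-bucket-12345',
      CORSConfiguration: {
        CORSRules: [
          {
            AllowedHeaders: ['*'],
            AllowedMethods: ['GET', 'PUT', 'POST'],
            AllowedOrigins: ['https://your-frontend-domain.com'],
            ExposeHeaders: ['ETag'],
            MaxAgeSeconds: 3000,
          },
        ],
      },
    }),
  );
}

applyCors().catch(console.error);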
5. Generating and Using Pre-signed URLs
Why Pre-signed URLs?
Pre-signed URLs let you create a URL that users can use to upload or download an object. The URL is valid only for a specified time, and it ensures that:
- You don’t expose your AWS credentials.
- The holder of the URL gets only temporary, limited permission to upload or download a specific object.
- You can track or log usage more easily in your application or CloudTrail.
How to Generate Pre-signed URLs
You can generate pre-signed URLs either server-side (using Node.js, Python, etc.) or in a serverless function (AWS Lambda). In this guide, we’ll use NestJS to generate pre-signed URLs, which our Next.js frontend can consume.
6. Implementation in NestJS (Backend)
NestJS Controller to Generate Pre-signed URLs
Assume you have a NestJS application. We’ll install the AWS SDK for JavaScript (v3):
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
In a dedicated service, say s3.service.ts, we set up our S3 client and functions to generate the pre-signed URLs:
// s3.service.ts
import { Injectable } from '@nestjs/common';
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

@Injectable()
export class S3Service {
  private readonly s3Client: S3Client;
  private readonly bucketName = 'my-secure-bucket-12345';

  constructor() {
    this.s3Client = new S3Client({
      region: 'us-east-1',
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
      },
    });
  }

  async getUploadUrl(key: string): Promise<string> {
    const command = new PutObjectCommand({
      Bucket: this.bucketName,
      Key: key,
      // Optionally set ContentType, ACL, etc.
      // ACL is typically not needed if the bucket is private (recommended).
    });
    // Expires in 60 seconds
    const url = await getSignedUrl(this.s3Client, command, { expiresIn: 60 });
    return url;
  }

  async getDownloadUrl(key: string): Promise<string> {
    const command = new GetObjectCommand({
      Bucket: this.bucketName,
      Key: key,
    });
    // Expires in 60 seconds
    const url = await getSignedUrl(this.s3Client, command, { expiresIn: 60 });
    return url;
  }
}
Note: We set the URL to expire in 60 seconds (1 minute). Adjust to your app’s needs, but the shorter the better for security.
Next, create a controller, say s3.controller.ts, to handle routes:
// s3.controller.ts
import { Controller, Get, Query, Post, Body } from '@nestjs/common';
import { S3Service } from './s3.service';

@Controller('s3')
export class S3Controller {
  constructor(private readonly s3Service: S3Service) {}

  @Get('download-url')
  async getDownloadUrl(@Query('key') key: string) {
    // Validate the file key or implement authorization checks
    const url = await this.s3Service.getDownloadUrl(key);
    return { url };
  }

  @Post('upload-url')
  async getUploadUrl(@Body('key') key: string) {
    // Validate the file key or implement authorization checks
    const url = await this.s3Service.getUploadUrl(key);
    return { url };
  }
}
Register the controller and service in your NestJS module (e.g., app.module.ts or a dedicated s3.module.ts).
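A minimal module might look like this (a sketch; the file name and layout are up to you):
// s3.module.ts: a minimal sketch; import this module from app.module.ts
import { Module } from '@nestjs/common';
import { S3Controller } from './s3.controller';
import { S3Service } from './s3.service';

@Module({
  controllers: [S3Controller],
  providers: [S3Service],
  exports: [S3Service], // optional: lets other modules reuse the service
})
export class S3Module {}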
Now you have two endpoints:
- GET /s3/download-url?key=FILENAME -> returns JSON containing {"url": "https://..."}
- POST /s3/upload-url with body { "key": "FILENAME" } -> returns JSON containing {"url": "https://..."}
7. Implementation in Next.js (Frontend)
We’ll create a simple Next.js page that:
- Requests a pre-signed URL from our NestJS backend.
- Uploads the file using the pre-signed URL.
- Retrieves a download URL if needed.
Step 1: Create an API route (Optional)
If you want to proxy everything through Next.js, you can create an API route under pages/api/s3.ts (a minimal sketch of that option follows below). However, we already have a NestJS backend that handles generating pre-signed URLs, and often you can call its endpoints directly from the Next.js frontend. For clarity, that is what we’ll show in Step 2.
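If you do choose the proxy approach, a hypothetical route could simply forward requests to the NestJS backend; this sketch assumes Node 18+ (for the global fetch) and an example environment variable API_BASE_URL.
// pages/api/s3.ts: a hypothetical proxy route (sketch), not required for the direct-call approach below
import type { NextApiRequest, NextApiResponse } from 'next';

// Example variable name; point it at your NestJS backend
const API_BASE = process.env.API_BASE_URL ?? 'http://localhost:3000';

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  if (req.method === 'POST') {
    // Forward upload-url requests to NestJS and relay the JSON response
    const upstream = await fetch(`${API_BASE}/s3/upload-url`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ key: req.body?.key }),
    });
    return res.status(upstream.status).json(await upstream.json());
  }

  if (req.method === 'GET') {
    // Forward download-url requests, preserving the key query parameter
    const key = encodeURIComponent(String(req.query.key ?? ''));
    const upstream = await fetch(`${API_BASE}/s3/download-url?key=${key}`);
    return res.status(upstream.status).json(await upstream.json());
  }

  res.setHeader('Allow', ['GET', 'POST']);
  return res.status(405).end();
}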
Step 2: Upload File Using Pre-signed URL
// pages/index.tsx
import React, { useState } from 'react';

function HomePage() {
  const [selectedFile, setSelectedFile] = useState<File | null>(null);
  const [message, setMessage] = useState('');

  const handleFileSelect = (event: React.ChangeEvent<HTMLInputElement>) => {
    if (!event.target.files) return;
    setSelectedFile(event.target.files[0]);
  };

  const uploadFile = async () => {
    if (!selectedFile) {
      setMessage('No file selected');
      return;
    }
    try {
      // 1. Get the upload URL from NestJS
      const response = await fetch('http://localhost:3000/s3/upload-url', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ key: selectedFile.name }),
      });
      const data = await response.json();
      const { url } = data;

      // 2. Upload the file to S3 using the pre-signed URL
      const upload = await fetch(url, {
        method: 'PUT',
        body: selectedFile,
        // Optionally add the correct Content-Type header
        // headers: { 'Content-Type': selectedFile.type },
      });

      if (upload.ok) {
        setMessage(`File uploaded successfully: ${selectedFile.name}`);
      } else {
        setMessage('Upload failed');
      }
    } catch (error: any) {
      console.error(error);
      setMessage('An error occurred during upload');
    }
  };

  return (
    <div>
      <h1>Secure S3 Upload with Next.js</h1>
      <input type="file" onChange={handleFileSelect} />
      <button onClick={uploadFile}>Upload</button>
      <p>{message}</p>
    </div>
  );
}

export default HomePage;
Note: We assume our NestJS backend is running on http://localhost:3000. Adjust as per your configuration (if your Next.js dev server also uses port 3000, run the NestJS app on another port, e.g., 3001). In production, you’d point to your NestJS production domain/endpoint.
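To avoid hard-coding the backend URL, you can read it from a Next.js environment variable; the NEXT_PUBLIC_ prefix makes it available in the browser. The variable name here is just an example.
// lib/api.ts: a small helper (sketch); NEXT_PUBLIC_API_URL is an example variable name
export const API_BASE = process.env.NEXT_PUBLIC_API_URL ?? 'http://localhost:3000';

// Usage in the upload handler:
// const response = await fetch(`${API_BASE}/s3/upload-url`, { ... });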
Step 3: Download File Using Pre-signed URL
Similarly, if you want to download a file:
- Request the download pre-signed URL from GET /s3/download-url?key=filename.jpg.
- Redirect or fetch that URL to get the file.
For example:
// pages/download.tsx
import React, { useState } from 'react';

function DownloadPage() {
  const [filename, setFilename] = useState('');
  const [downloadUrl, setDownloadUrl] = useState('');

  const getDownloadUrl = async () => {
    try {
      // Encode the filename so keys with spaces or special characters survive the query string
      const response = await fetch(
        `http://localhost:3000/s3/download-url?key=${encodeURIComponent(filename)}`
      );
      const data = await response.json();
      setDownloadUrl(data.url);
    } catch (error) {
      console.error(error);
    }
  };

  return (
    <div>
      <h1>Download File from S3</h1>
      <input
        type="text"
        placeholder="Enter file name"
        value={filename}
        onChange={(e) => setFilename(e.target.value)}
      />
      <button onClick={getDownloadUrl}>Get Download URL</button>
      {downloadUrl && (
        <a href={downloadUrl} target="_blank" rel="noopener noreferrer">
          Download {filename}
        </a>
      )}
    </div>
  );
}

export default DownloadPage;
8. Preventing Bucket Listing
Preventing bucket listing by unauthorized users is crucial. If you remove the s3:ListBucket permission from your policy, or only allow it for your dedicated IAM user, unauthorized callers cannot list your bucket contents. Additionally, if someone tries to access your bucket URL directly (e.g., https://my-secure-bucket-12345.s3.amazonaws.com/), they will not see your file listing if:
- Public access is blocked.
- They do not have the correct credentials.
- Your bucket ACL is private.
Key points:
- By default, if you block public access and have no open bucket policies, the bucket is not listable publicly.
- Keep s3:ListBucket only for your IAM user or your application role if you need it internally; if you do, consider scoping it to a prefix as shown in the sketch below.
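If your application only needs to list a particular "folder", the ListBucket statement from Section 2 can be narrowed with the s3:prefix condition key. This is a sketch; the bucket name and prefix are placeholders.
{
  "Sid": "AllowListUploadsPrefixOnly",
  "Effect": "Allow",
  "Action": ["s3:ListBucket"],
  "Resource": ["arn:aws:s3:::my-secure-bucket-12345"],
  "Condition": {
    "StringLike": { "s3:prefix": ["uploads/*"] }
  }
}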
9. Common Security Threats and Best Practices
1. Overly Permissive Bucket Policies
- Threat: Accidentally making the bucket public.
- Prevention: Block public access, explicitly deny public ACLs, and check your policies carefully.
2. Leaking Access Keys
- Threat: Your AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY is publicly exposed (e.g., pushed to GitHub).
- Prevention: Use environment variables, AWS Secrets Manager, or Parameter Store, and never commit secrets.
3. Unrestricted CORS
- Threat: If you set AllowedOrigins to *, any domain can initiate requests.
- Prevention: Restrict to specific origins. In production, use your domain specifically.
4. Long-lived Pre-signed URLs
- Threat: Pre-signed URLs valid for too long can be reused maliciously.
- Prevention: Keep expiresIn short (e.g., seconds to minutes). If a URL must be invalidated before it expires, rotate the signing credentials.
5. Missing Server-side Validation
- Threat: Attackers can manipulate the file key or parameters.
- Prevention: Validate the filename, path, size, and user authorization in your NestJS code (see the sketch after this list).
6. Direct Upload from Frontend Without Authorization
- Threat: If you generate pre-signed URLs on the frontend directly, you may leak credentials or allow unauthorized uploads.
- Prevention: Always generate pre-signed URLs from a secure server-side environment (NestJS in our example).
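As a concrete illustration of point 5, here is one way the controller could validate a key before signing. The allowed extensions, length limit, and the per-user prefixing scheme are assumptions to adapt to your application.
// s3.controller.ts (excerpt): a validation sketch; extensions, limits, and the userId prefix are assumptions
import { BadRequestException } from '@nestjs/common';

const ALLOWED_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.pdf'];

function validateKey(key: string): void {
  // Reject empty keys, path traversal attempts, and overly long names
  if (!key || key.includes('..') || key.includes('/') || key.length > 200) {
    throw new BadRequestException('Invalid file key');
  }
  // Only allow an explicit whitelist of file extensions
  if (!ALLOWED_EXTENSIONS.some((ext) => key.toLowerCase().endsWith(ext))) {
    throw new BadRequestException('File type not allowed');
  }
}

// Inside the controller handler, scope the object under the authenticated user's id:
//   validateKey(key);
//   const url = await this.s3Service.getUploadUrl(`uploads/${userId}/${key}`);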
Additional Best Practices
- Use TLS/HTTPS for all requests to your NestJS or Next.js application—never send keys over plaintext HTTP.
- Enable S3 server-side encryption with SSE-S3 or SSE-KMS.
- Monitor bucket access logs using AWS CloudTrail or S3 server access logs.
- Rotate IAM keys regularly.
- Use versioning in S3 to protect against accidental overwrites/deletions.
Conclusion
By following these steps and best practices, you will have a robust, secure, and scalable solution for handling file uploads and downloads using Amazon S3, Next.js, and NestJS. Key takeaways include:
- Always block public access by default.
- Use IAM policies (principle of least privilege) and pre-signed URLs for secure access.
- Configure CORS specifically to your application domain.
- Validate and sanitize inputs on your server side (NestJS).
- Keep your credentials safe and never store them in client code or version control.