Tiny, runtime-agnostic S3 client.
A lightweight, dependency-free S3 client that works across Node, Deno, Bun and modern browsers. Compatible with AWS S3 and S3-compatible providers (Cloudflare R2, Hetzner, Backblaze B2, Garage, etc.). Focused on a small, ergonomic API for streaming downloads, uploads, multipart uploads, presigned URLs and common object operations.
> [!WARNING]
> This package is in active development. It is not recommended for production use yet unless you are willing to help with testing and feedback.
> Expect breaking changes, as I prioritize usability and correctness over stability at this stage.
Install the package:
```sh
# ✨ Auto-detect (supports npm, yarn, pnpm, deno and bun)
npx nypm install uns3
```
Import:

**ESM** (Node.js, Bun, Deno)

```js
import { S3Client, S3Error } from "uns3";
```

**CDN** (Deno, Bun and Browsers)

```js
import { S3Client, S3Error } from "https://esm.sh/uns3";
```
First, create an instance of `S3Client`. You need to provide your S3-compatible service’s region, endpoint, and your credentials.
```js
import { S3Client } from "uns3";

const client = new S3Client({
  // e.g. "us-east-1" or "auto" for R2
  region: "auto",
  // e.g. "https://s3.amazonaws.com" or your custom endpoint
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
  credentials: {
    accessKeyId: "<ACCESS_KEY_ID>",
    secretAccessKey: "<SECRET_ACCESS_KEY>",
  },
  // Optional default bucket
  defaultBucket: "my-bucket",
});
```
All methods return a Promise.
### `get()`

Retrieves an object from an S3 bucket. It returns a standard `Response` object, allowing you to stream the body.
```js
// Get a full object
const response = await client.get({ key: "my-file.txt" });
const text = await response.text();
console.log(text);

// Get a partial object (range request)
const partialResponse = await client.get({
  key: "my-large-file.zip",
  range: { start: 0, end: 1023 }, // first 1KB
});
const chunk = await partialResponse.arrayBuffer();
```
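The `range` option maps to an HTTP `Range` header with inclusive byte offsets, which is why `{ start: 0, end: 1023 }` yields the first 1024 bytes. A minimal sketch of that mapping (the helper name is illustrative, not part of the library):

```js
// Convert an inclusive byte range to an HTTP Range header value.
// { start: 0, end: 1023 } → "bytes=0-1023" (1024 bytes).
// An open-ended range omits `end`: { start: 1024 } → "bytes=1024-".
function toRangeHeader({ start, end }) {
  return end === undefined ? `bytes=${start}-` : `bytes=${start}-${end}`;
}

console.log(toRangeHeader({ start: 0, end: 1023 })); // "bytes=0-1023"
```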
#### Conditional Requests & Caching
The `get()` and `head()` methods support conditional request headers (`ifMatch`, `ifNoneMatch`, `ifModifiedSince`, `ifUnmodifiedSince`). When the object hasn’t changed, S3 returns a `304 Not Modified` response, which is treated as a success.
```js
// Conditional GET using ETag
const response = await client.get({
  key: "cached-file.txt",
  ifNoneMatch: '"abc123"', // ETag from previous request
});

if (response.status === 304) {
  console.log("Content hasn't changed, use cached version");
} else {
  // Status is 200, process new content
  const content = await response.text();
}
```
This is especially useful when serving S3 responses through a server framework (e.g., Nitro, Nuxt) to browsers, as the library correctly handles browser cache validation.
### `head()`

Retrieves metadata from an object without returning the object itself.
```js
const response = await client.head({ key: "my-file.txt" });
console.log("Content-Type:", response.headers.get("content-type"));
console.log("ETag:", response.headers.get("etag"));
console.log("Size:", response.headers.get("content-length"));
```
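Since `head()` returns a standard `Response`, its headers can be read with the WHATWG `Headers` API. A small sketch that collects the fields above into one object (the helper is illustrative and is demonstrated on a hand-built `Headers` rather than a live response):

```js
// Extract common object metadata from a head() response's headers.
function objectMetadata(headers) {
  return {
    contentType: headers.get("content-type"),
    etag: headers.get("etag"),
    size: Number(headers.get("content-length")),
  };
}

// Demonstrate with a hand-built Headers object.
const sample = new Headers({
  "content-type": "text/plain",
  "etag": '"abc123"',
  "content-length": "1024",
});
console.log(objectMetadata(sample));
// { contentType: "text/plain", etag: '"abc123"', size: 1024 }
```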
### `put()`

Uploads an object to an S3 bucket. The body can be a string, `Blob`, `ArrayBuffer`, `Uint8Array`, or a `ReadableStream`.
```js
// Upload from a string
await client.put({
  key: "hello.txt",
  body: "Hello, World!",
  contentType: "text/plain", // also inferred from key extension
});

// Upload from a plain object (automatically stringified)
await client.put({
  key: "hello.json",
  body: {
    message: "Hello, World!",
  },
  // contentType is automatically set to application/json
});

// Upload from a Blob
const blob = new Blob(["<h1>Hello</h1>"], { type: "text/html" });
await client.put({
  key: "index.html",
  body: blob,
});
```
#### Conditional Overwrites (Advanced)
The `put()` method supports optional conditional headers (`ifMatch`, `ifNoneMatch`) for preventing accidental overwrites. Note that not all S3-compatible providers support these headers.
```js
// Only overwrite if the current ETag matches
const response = await client.put({
  key: "document.txt",
  body: "Updated content",
  ifMatch: '"abc123"', // Current object's ETag
});

if (response.status === 412) {
  console.log("Precondition failed - object was modified by someone else");
} else {
  console.log("Upload successful");
}
```
When conditional headers are used and the condition fails, S3 returns `412 Precondition Failed` (not `304 Not Modified` as with `GET`/`HEAD` operations).
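The 304/412 distinction can be folded into a small helper when one code path handles both reads and writes; the function name and return values here are illustrative, not part of the library:

```js
// Classify the outcome of a conditional S3 request by status code.
// GET/HEAD signal "unchanged" with 304; PUT signals a failed
// precondition with 412; any 2xx means the operation went through.
function conditionalOutcome(status) {
  if (status === 304) return "not-modified";
  if (status === 412) return "precondition-failed";
  if (status >= 200 && status < 300) return "ok";
  return "error";
}

console.log(conditionalOutcome(304)); // "not-modified"
console.log(conditionalOutcome(412)); // "precondition-failed"
```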
### `del()`

Deletes an object from a bucket. Note: `DELETE` operations do not support conditional headers.
```js
await client.del({ key: "my-file-to-delete.txt" });
```
### `list()`

Lists objects in a bucket.
```js
const result = await client.list({
  prefix: "documents/",
  delimiter: "/", // To group objects by folder
});

console.log("Files:", result.contents);
// [ { key: 'documents/file1.txt', ... }, ... ]
console.log("Subdirectories:", result.commonPrefixes);
// [ 'documents/images/', ... ]
```
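The `contents` / `commonPrefixes` split mirrors files vs. sub-folders. A sketch that merges the two into a single directory listing (it relies only on the `key` strings and prefix strings shown above; everything else is illustrative):

```js
// Merge a list() result into one sorted directory listing.
function toDirectoryListing(result) {
  const folders = result.commonPrefixes.map((prefix) => ({ name: prefix, type: "folder" }));
  const files = result.contents.map((obj) => ({ name: obj.key, type: "file" }));
  return [...folders, ...files].sort((a, b) => a.name.localeCompare(b.name));
}

// Sample data shaped like the example above.
const listing = toDirectoryListing({
  contents: [{ key: "documents/file1.txt" }],
  commonPrefixes: ["documents/images/"],
});
console.log(listing);
// [ { name: "documents/file1.txt", type: "file" },
//   { name: "documents/images/", type: "folder" } ]
```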
### `getSignedUrl()`

Generates a presigned URL that can be used to grant temporary access to an S3 object.
```js
// Get a presigned URL for downloading an object (expires in 1 hour)
const downloadUrl = await client.getSignedUrl({
  method: "GET",
  key: "private-document.pdf",
  expiresInSeconds: 3600,
});
console.log("Download URL:", downloadUrl);

// Get a presigned URL for uploading an object
const uploadUrl = await client.getSignedUrl({
  method: "PUT",
  key: "new-upload.zip",
  expiresInSeconds: 600, // 10 minutes
});
console.log("Upload URL:", uploadUrl);
```
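Assuming standard SigV4 query-string signing (which AWS-style presigned URLs use), the validity window is carried in the URL itself as `X-Amz-Expires`, so it can be inspected client-side. A sketch, using a hand-written sample URL:

```js
// Read the validity window (in seconds) from a SigV4 presigned URL.
function presignedExpirySeconds(url) {
  const expires = new URL(url).searchParams.get("X-Amz-Expires");
  return expires === null ? null : Number(expires);
}

// Illustrative URL; a real presigned URL carries more X-Amz-* parameters.
const sampleUrl =
  "https://example.r2.cloudflarestorage.com/my-bucket/private-document.pdf" +
  "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=3600";
console.log(presignedExpirySeconds(sampleUrl)); // 3600
```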
For large files, you can use multipart uploads.
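The part bookkeeping in a multipart upload is pure arithmetic and can be sketched on its own. Per the constraints documented here (part numbers 1 to 10,000) and standard S3 behavior, every part except the last must be at least 5 MiB:

```js
// Compute part boundaries for a multipart upload.
// Parts are numbered 1..10000; every part except the last
// must be at least 5 MiB (the minimum S3 part size).
const MIN_PART_SIZE = 5 * 1024 * 1024;

function partBoundaries(totalSize, partSize = MIN_PART_SIZE) {
  const parts = [];
  for (let i = 0; i * partSize < totalSize; i++) {
    parts.push({
      partNumber: i + 1,
      start: i * partSize,
      end: Math.min((i + 1) * partSize, totalSize), // exclusive
    });
  }
  return parts;
}

console.log(partBoundaries(12 * 1024 * 1024).length); // 3 parts for a 12 MiB file
```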
### `initiateMultipart()`

Start a new multipart upload and get an `uploadId`.
```js
const { uploadId } = await client.initiateMultipart({
  key: "large-video.mp4",
  contentType: "video/mp4",
});
```
### `uploadPart()`

Upload a part of the file. You need to provide the `uploadId` and a `partNumber` (from 1 to 10,000).
```js
const parts = [];
const file = new Blob([
  /* ... large content ... */
]);
const chunkSize = 5 * 1024 * 1024; // 5MB

for (let i = 0; i * chunkSize < file.size; i++) {
  const partNumber = i + 1;
  const chunk = file.slice(i * chunkSize, (i + 1) * chunkSize);
  const { etag } = await client.uploadPart({
    uploadId,
    key: "large-video.mp4",
    partNumber,
    body: chunk,
  });
  parts.push({ partNumber, etag });
}
```
### `completeMultipart()`

Finish the multipart upload after all parts have been uploaded.
```js
await client.completeMultipart({
  uploadId,
  key: "large-video.mp4",
  parts: parts,
});
```
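S3's completion request requires the parts list in ascending `partNumber` order. The sequential loop above produces them in order already, but if parts are uploaded concurrently and collected as they finish, sorting first is a safe habit (a sketch; the library may or may not sort for you):

```js
// Parts must be listed in ascending partNumber order when
// completing a multipart upload; sort in case they were
// collected concurrently and arrived out of order.
function sortParts(parts) {
  return [...parts].sort((a, b) => a.partNumber - b.partNumber);
}

console.log(sortParts([
  { partNumber: 2, etag: '"b"' },
  { partNumber: 1, etag: '"a"' },
]));
// [ { partNumber: 1, etag: '"a"' }, { partNumber: 2, etag: '"b"' } ]
```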
#### Conditional Overwrites (Advanced)
The `completeMultipart()` method supports optional conditional headers (`ifMatch`, `ifNoneMatch`) for preventing accidental overwrites. Note that not all S3-compatible providers support these headers.
```js
// Only overwrite if the current ETag matches
const response = await client.completeMultipart({
  uploadId,
  key: "large-video.mp4",
  parts: parts,
  ifMatch: '"abc123"', // Current object's ETag
});

if (response.status === 412) {
  console.log("Precondition failed - object was modified by someone else");
} else {
  console.log("Upload successful");
}
```
When conditional headers are used and the condition fails, S3 returns `412 Precondition Failed` (not `304 Not Modified` as with `GET`/`HEAD` operations).
### `abortMultipart()`

If something goes wrong, you can abort the multipart upload to clean up the parts that have already been uploaded.
```js
await client.abortMultipart({
  uploadId,
  key: "large-video.mp4",
});
```
mrmime by Luke Edwards. Published under the MIT license.
Made by community 💛
🤖 auto updated with automd