Sairo v3.2 · Cost Intelligence · Petabyte Scale

Object storage,
beautifully engineered.

A lightning-fast, self-hosted web client for your S3-compatible endpoints. Stop wrestling with clunky cloud consoles: run single-digit-millisecond searches across hundreds of thousands of objects, right from your browser.

Deploy via Docker · Read the Docs
Sairo — localhost:8000
production-assets
data-warehouse
backups-2026
ml-training-sets
Name                          Size     Modified
uploads/user_avatar.png       2.4 MB   2 hours ago
assets/hero_avatar.png        1.1 MB   3 days ago
cache/user_avatar_thumb.png   84 KB    1 week ago
3 results in 2.4ms · FTS5 index
Works with
AWS S3
Cloudflare R2
MinIO
Wasabi
Backblaze B2
Ceph

Machined for performance.

Every feature built into a single, self-contained Docker image. No plugins. No extensions.

45+ Format File Preview

Inspect Parquet schemas, browse CSV tables, render PDFs, view images, and syntax-highlight 30+ code formats — all inline, without downloading.

See all formats →

Instant Directory Browsing

Background crawling means you never wait for a folder to load. The FastAPI backend streams data to the React UI, achieving sub-3ms listing speeds on production buckets.

How it works →
134,707 Objects Indexed
1,300+/s Index Speed
4 threads · WAL-mode SQLite

100% Local · Secure Sharing

Your keys never leave your infrastructure. Share any object via password-protected, expirable URLs — no third-party services involved.

Security docs →
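
One common way to implement expirable share links (illustrative only, not necessarily Sairo's actual URL scheme) is to embed an expiry timestamp in the URL and sign it with a server-side HMAC key; the password check would be layered on top of this. The secret and path below are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; never leaves the server

def sign(path, ttl=3600, now=None):
    """Return a share URL path carrying an expiry and an HMAC signature."""
    exp = int(time.time() if now is None else now) + ttl
    sig = hmac.new(SECRET, f"{path}:{exp}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?exp={exp}&sig={sig}"

def verify(path, exp, sig, now=None):
    """Accept only unexpired links whose signature matches."""
    expected = hmac.new(SECRET, f"{path}:{exp}".encode(), hashlib.sha256).hexdigest()
    unexpired = (time.time() if now is None else now) < exp
    return unexpired and hmac.compare_digest(expected, sig)
```

Because the signature covers both the path and the expiry, a client cannot extend a link's lifetime or point it at a different object without invalidating it.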
RBAC · 2FA/TOTP · OAuth · LDAP · Encrypted at Rest · Audit Log

High-Throughput I/O

Handle massive assets without the browser freezing. Sairo sustains 114 MB/s upload throughput and handles 528 requests/second under concurrent load.

Upload docs →
Uploads 3 of 5 complete
production-export-2026.csv 50 MB
68% · 114 MB/s
ml-dataset-v3.parquet 10 MB
34% · 76 MB/s
db-backup-jan.sql.gz ✓ 436ms
config.yaml ✓ 45ms
model-weights.bin ✓ 131ms
Virtualized object browser · Single-digit ms search · Cost heatmaps (13 providers) · Optimization recommendations · Multipart cleanup (stale detection) · AI storage intelligence (MCP) · Petabyte-scale performance · Storage analytics dashboard · Version management · 8-tab bucket settings · Password-protected share URLs · Multi-endpoint S3 · 37-action audit trail · RBAC with per-bucket perms · 2FA / TOTP · OAuth & LDAP · API tokens · CLI tool (sairo)

Built for speed.

Sairo indexes every object key with SQLite FTS5. No pagination. No loading spinners. Results in milliseconds.
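
The core mechanism can be sketched with Python's bundled sqlite3 module: an FTS5 virtual table over object keys, queried with MATCH so the prebuilt index answers instead of a scan. Table and column names here are illustrative, not Sairo's actual schema (which per the benchmarks below uses a trigram tokenizer; the sketch uses the default tokenizer for portability):

```python
import sqlite3

# Build an FTS5 full-text index over object keys (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE objects USING fts5(key)")
db.executemany(
    "INSERT INTO objects(key) VALUES (?)",
    [
        ("uploads/user_avatar.png",),
        ("assets/hero_avatar.png",),
        ("events/2026-03/hour=12/data.parquet",),
    ],
)
# MATCH consults the prebuilt index rather than scanning every key.
hits = [row[0] for row in db.execute(
    "SELECT key FROM objects WHERE objects MATCH 'avatar'"
)]
```

The default unicode61 tokenizer splits keys on `/`, `_`, and `.`, so a query for `avatar` finds both avatar files without any prefix pagination.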

Search across 134,707 objects (38 TB production bucket)
AWS Console
~14s
Sairo Web
2.4ms

Paginated prefix listing vs. pre-built FTS5 full-text index · Full benchmark results

CLI: List 134K objects in a production bucket
mc ls
3m 2s
sairo ls
0.05s

Live S3 enumeration vs. indexed query · 3,400x faster · Install the CLI

2.4ms Search p50
528 Req/s
114 MB/s Upload
sub-5ms API p50

Validated performance.

Benchmarked via Docker on Apple Silicon. Production deployments on NVMe Linux hardware yield even higher throughput.

Search Latency

FTS5 Trigram · 134,707 objects · 38 TB

Query     | p50   | p95
parquet   | 2.3ms | 4.1ms
events    | 2.4ms | 16.2ms
analytics | 2.5ms | 3.4ms
metadata  | 2.4ms | 4.3ms
2026      | 2.2ms | 22.0ms

Fastest observed: 1.7ms

API Response Times

Single Uvicorn worker · Docker container

Endpoint       | p50   | p95
/healthz       | 2.1ms | 3.6ms
/api/buckets   | 4.3ms | 5.8ms
/api/auth/me   | 2.6ms | 5.3ms
Object listing | 2.2ms | 5.1ms
Presigned URL  | 3.1ms | 5.6ms

Throughput

Upload + concurrent load benchmarks

Metric        | Result
50 MB upload  | 436ms · 114 MB/s
10 MB upload  | 131ms · 76 MB/s
5 concurrent  | 236 req/s
10 concurrent | 333 req/s
25 concurrent | 528 req/s

Full methodology and raw data: benchmark/results/LATEST-RESULTS.md

Engineered for scale and security.

No microservices. No message queues. No external databases.

01

Single Container

FastAPI backend serves the React SPA as static files. No reverse proxy needed. Just docker run and go.

Docker guide →
02

SQLite FTS5 Index

Every bucket gets a WAL-mode SQLite database. Object listings are instant, even at 134K+ objects per bucket.

Search docs →
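
WAL journaling is what lets the background crawler keep writing while reads proceed concurrently; it's a one-pragma switch in SQLite. A minimal sketch (the per-bucket file name is hypothetical):

```python
import os
import sqlite3
import tempfile

# Illustrative: enable WAL mode on a per-bucket index database.
# WAL requires a file-backed database, not :memory:.
path = os.path.join(tempfile.mkdtemp(), "bucket-index.db")
db = sqlite3.connect(path)
mode = db.execute("PRAGMA journal_mode=WAL").fetchone()[0]

# A second connection can now read while the first one writes.
reader = sqlite3.connect(path)
```

In WAL mode, writers append to a write-ahead log instead of locking the main database file, so listing queries stay fast even mid-crawl.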
03

Prefix-Parallel Crawling

Background crawler indexes your storage using 4 threads per bucket. Configurable RECRAWL_INTERVAL keeps your index synced.

All config vars →
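
The prefix-parallel idea can be sketched with a thread pool fanning out over top-level prefixes; the real crawler pages through S3 ListObjectsV2 per prefix, but `list_keys` below is a pluggable stand-in, not Sairo's API:

```python
from concurrent.futures import ThreadPoolExecutor

def crawl(list_keys, prefixes, workers=4):
    """Index keys by listing each prefix on its own worker thread."""
    index = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # pool.map preserves prefix order while listing runs in parallel.
        for keys in pool.map(list_keys, prefixes):
            index.extend(keys)
    return index

# Usage with an in-memory stand-in for an S3 listing:
FAKE_BUCKET = {
    "events/": ["events/a.parquet", "events/b.parquet"],
    "analytics/": ["analytics/part-0001.parquet"],
}
index = crawl(FAKE_BUCKET.get, ["events/", "analytics/"])
```

Because S3 listing is I/O-bound, four threads per bucket roughly quarter the wall-clock time of a full crawl.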
04

Enterprise Auth

Secure JWT sessions with configurable durations, LDAP integration, OAuth providers, and TOTP two-factor authentication.

OAuth & LDAP →
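
The shape of a stateless HS256-signed session token can be sketched with the standard library alone; this is a generic JWT illustration, not Sairo's actual claims, key handling, or expiry policy:

```python
import base64
import hashlib
import hmac
import json

KEY = b"jwt-signing-key"  # hypothetical server-side secret

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue(claims):
    """Produce header.payload.signature, each base64url-encoded."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(KEY, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token):
    """Return the claims if the signature checks out, else None."""
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(KEY, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig):
        return None
    pad = "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload + pad))
```

The server never stores sessions: any request bearing a token whose HMAC verifies is trusted, and tampering with the payload invalidates the signature.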
05

Rate Limiting

Built-in configurable API rate limiting protects your endpoints. Global throttle with per-route overrides.

Configure limits →
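
A global throttle with per-route overrides is typically a set of token buckets, one per scope. This is a sketch of the technique, not Sairo's actual limiter; the class and parameter names are illustrative:

```python
import time

class TokenBucket:
    """Allow up to `burst` requests at once, refilling `rate` tokens/sec."""

    def __init__(self, rate, burst, now=None):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A global bucket plus a dict of route-specific buckets mirrors the "global throttle with per-route overrides" pattern: check the route's bucket first if one exists, otherwise fall back to the global one.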
06

Rigorous Testing

Validated via pytest and npm test before every release. 25 E2E spec files covering every feature.

View source →

Terminal-native. Index-powered.

The Sairo CLI queries the same indexed backend as the web UI. Stop waiting for mc ls to enumerate millions of objects.

Terminal
$ sairo search production-data "parquet" --limit 5
KEY                                   SIZE     MODIFIED
events/2026-03/hour=12/data.parquet   892 MB   2h ago
events/2026-03/hour=11/data.parquet   847 MB   3h ago
events/2026-03/hour=10/data.parquet   824 MB   4h ago
analytics/2026-03/part-0001.parquet   641 MB   5h ago
analytics/2026-03/part-0002.parquet   618 MB   5h ago
5 results in 2.1ms · 134,707 objects indexed
$ sairo du production-data -d 1
PREFIX       OBJECTS   SIZE      %
events/      74,201    14.8 TB   38.0%
analytics/   48,330    12.1 TB   31.1%
warehouse/   12,176    11.3 TB   29.0%
38.2 TB total · 0.03s
3,400x faster than mc ls
24 commands
--json on every command
OS keyring credential storage
brew install ashwathstephen/sairo/sairo · CLI docs →

Up and running in 30 seconds.

One command. One container. Every S3 provider.

docker run -d --name sairo -p 8000:8000 \
  -e S3_ENDPOINT=https://s3.us-east-1.amazonaws.com \
  -e S3_ACCESS_KEY=AKIA... \
  -e S3_SECRET_KEY=wJal... \
  -e ADMIN_PASS=choose-a-strong-password \
  -v sairo-data:/data \
  stephenjr002/sairo:latest
docker run -d --name sairo -p 8000:8000 \
  -e S3_ENDPOINT=http://minio:9000 \
  -e S3_ACCESS_KEY=minioadmin \
  -e S3_SECRET_KEY=minioadmin \
  -e S3_PATH_STYLE=true \
  -e ADMIN_PASS=choose-a-strong-password \
  -v sairo-data:/data \
  stephenjr002/sairo:latest
docker run -d --name sairo -p 8000:8000 \
  -e S3_ENDPOINT=https://<account-id>.r2.cloudflarestorage.com \
  -e S3_ACCESS_KEY=your-r2-access-key \
  -e S3_SECRET_KEY=your-r2-secret-key \
  -e S3_PATH_STYLE=true \
  -e ADMIN_PASS=choose-a-strong-password \
  -v sairo-data:/data \
  stephenjr002/sairo:latest
docker run -d --name sairo -p 8000:8000 \
  -e S3_ENDPOINT=https://s3.us-east-1.wasabisys.com \
  -e S3_ACCESS_KEY=your-wasabi-key \
  -e S3_SECRET_KEY=your-wasabi-secret \
  -e ADMIN_PASS=choose-a-strong-password \
  -v sairo-data:/data \
  stephenjr002/sairo:latest

Then open http://localhost:8000 and log in. That's it.

Ready to upgrade your workflow?

Deploy the container and connect your first bucket today.

View on GitHub · Read the Docs