S3 Object Storage

S3-compatible object storage backed by Rook-Ceph on Kubernetes. Store and serve any data, at any scale.

Use any S3-compatible tool or SDK — boto3, awscli, rclone, MinIO client. Per-GB monthly pricing with no egress fees.

  • S3-compatible API endpoint at s3.pidginhost.cloud
  • Backed by Rook-Ceph with triple replication
  • Per-bucket access keys, public/private visibility
  • No egress fees — pay only for storage used
  • 24/7 support in Romanian and English

What you get

Every bucket includes a dedicated endpoint, per-bucket access keys, and configurable public or private visibility.

  • Dedicated endpoint: https://s3.pidginhost.cloud
  • Unique Access Key ID and Secret per bucket
  • Quota-based sizing — resize at any time
  • Public buckets for static website hosting or CDN origin
  • Private buckets for backups, logs, and application data

Provision a bucket from the dashboard and start uploading in seconds.
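
For private buckets, you can still share individual objects without handing out your keys: a presigned URL grants time-limited read access. A minimal boto3 sketch, assuming a private bucket named my-bucket and a placeholder object key:

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.pidginhost.cloud",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Generate a link to "report.pdf" that expires after one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.pdf"},
    ExpiresIn=3600,
)
print(url)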

Use cases

Backups & Archives

  • Database dumps, VM snapshots
  • rclone, restic, Velero compatible
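
As a sketch of the boto3 route, a nightly job can push a timestamped database dump; this assumes the dump file already exists and a private bucket named backups (all names are placeholders):

from datetime import date

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.pidginhost.cloud",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Store each dump under a dated key, e.g. db/2025-01-15.sql.gz.
s3.upload_file("dump.sql.gz", "backups", f"db/{date.today()}.sql.gz")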

Static Assets

  • Images, videos, files served via S3 URL
  • Django Storages / S3 media backend
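
For the Django Storages route, a minimal settings.py sketch using the classic django-storages settings names (bucket name and keys are placeholders):

# settings.py excerpt; requires "storages" in INSTALLED_APPS.
AWS_S3_ENDPOINT_URL = "https://s3.pidginhost.cloud"
AWS_STORAGE_BUCKET_NAME = "my-bucket"
AWS_ACCESS_KEY_ID = "YOUR_ACCESS_KEY"
AWS_SECRET_ACCESS_KEY = "YOUR_SECRET_KEY"

# Route Django media uploads through the S3 backend.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"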

Application Data

  • Logs, ML datasets, artifact storage
  • GitLab, Loki, Tempo S3 backend
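
When scanning logs or artifacts, note that S3 listings return at most 1,000 keys per request, so iterate with a paginator. A minimal boto3 sketch (bucket and prefix are placeholders):

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.pidginhost.cloud",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Walk every object under a prefix, page by page.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-bucket", Prefix="logs/2024/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])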

Pricing

Simple per-GB monthly pricing. No egress fees, no request fees, no surprises.

Object Storage

1.0000 € / GB / month

  • Billed monthly based on quota allocated
  • Resize up or down at any time
  • No egress fees
  • No per-request fees

How pricing works

You set a quota in GB when creating a bucket. You are billed for the quota allocated, not the data actually stored.

Example

  • 100 GB bucket = 100 × 1.0000 € = 100.00 € / month
  • Resize to 200 GB and billing updates to 200.00 € / month at the next cycle
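
The arithmetic is simple enough to sanity-check in a few lines of Python; the constant mirrors the list price above:

# Monthly cost is the allocated quota (GB) times the per-GB price.
PRICE_PER_GB_EUR = 1.0000

def monthly_cost_eur(quota_gb: int) -> float:
    return quota_gb * PRICE_PER_GB_EUR

print(monthly_cost_eur(100))  # 100.0 EUR / month
print(monthly_cost_eur(200))  # 200.0 EUR / month after resizing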

All prices in EUR, VAT excluded where applicable.

Quick start

Create a bucket in the dashboard, then connect with any S3-compatible tool.

boto3 (Python)

import boto3

# Point the client at the PidginHost endpoint instead of AWS.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.pidginhost.cloud",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a local file to the bucket under the same key name.
s3.upload_file("file.txt", "my-bucket", "file.txt")
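
Downloads and listings work the same way; continuing with the client above:

# Fetch the object back to disk and list the bucket contents.
s3.download_file("my-bucket", "file.txt", "file-local.txt")

for obj in s3.list_objects_v2(Bucket="my-bucket").get("Contents", []):
    print(obj["Key"])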

AWS CLI

# Store the per-bucket credentials in your default profile.
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY

# List the bucket's contents.
aws s3 ls s3://my-bucket \
  --endpoint-url https://s3.pidginhost.cloud

# Upload a file.
aws s3 cp file.txt s3://my-bucket/ \
  --endpoint-url https://s3.pidginhost.cloud

FAQ

Is the API compatible with standard S3 tools and SDKs?

Yes. The endpoint implements the AWS S3 API via Rook-Ceph RGW. Use any S3-compatible client: boto3, awscli, rclone, the MinIO client, or Django Storages.

Are there egress or per-request fees?

No. You only pay for the GB quota you allocate. There are no egress fees and no per-request charges.

Can I resize a bucket after creating it?

Yes. You can resize your bucket quota at any time from the control panel. Billing adjusts on the next cycle.

Where do I find my access keys?

After creating a bucket, reveal your Access Key ID and Secret Key from the bucket detail page in the control panel. Keys are unique per bucket.

How is my data protected against hardware failure?

Data is stored on Rook-Ceph with triple replication across nodes. Objects are protected against single-node failure by default.

Need more storage or custom SLAs?

Large datasets, enterprise retention policies, or on-premise Ceph — contact us.

Get in touch

Pair with Kubernetes?

Use S3 storage as a backend for your Kubernetes workloads — Loki, Velero, GitLab, and more.

Kubernetes Clusters