Open source end-to-end encrypted file transfer app for humans and agents, keeping plaintext file names, contents, and keys off the server
> [!TIP]
> Welcome to join the "Xget Open Source & AI Chat Group" to discuss open source projects, AI applications, engineering practice, productivity tools, and independent development. If you are also building products, writing code, tinkering with projects, or interested in open source and AI, join the group to meet more people who do serious work and love to share.
English | 汉语
Xdrop is an open source end-to-end encrypted file transfer app for humans and agents, keeping
plaintext file names, contents, and keys off the server.
Agents can directly encrypt and upload local files or folders to generate a share link, or download and decrypt files locally using an existing link.
Humans can also open the share link directly in a browser to decrypt and download the files locally.
Agents can use Xdrop to upload files and return end-to-end encrypted share links, or to download
and decrypt existing Xdrop links locally.
Install the companion skill:

```shell
bunx skills add xixu-me/skills -s xdrop
```
After that, the agent can use Xdrop from the terminal to:

- encrypt and upload local files or folders and return a full share link, and
- take an existing share link with #k=..., and decrypt it locally.

Example prompts:

- Upload ./dist to https://xdrop.example.com and give me a 1-hour Xdrop link.
- On this VM, send /var/log/myapp through Xdrop so I can inspect it locally.
- Download this Xdrop link into ~/downloads and keep the original folder structure.

For browser-based sharing, the core lifecycle works like this. Agents use the same encrypted
transfer format and full share links for terminal uploads and local decryption.
Share links have the form /t/:transferId#k=.... The #k=... fragment stays in the browser, because
browsers never send URL fragments to the server.

Xdrop keeps plaintext file names, paths, contents, and decryption keys off the server. The server
still sees operational metadata such as transfer timestamps, file counts, chunk counts, file
sizes, and rate-limit identifiers.
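The client-side crypto described above (AES-256-GCM with HKDF-SHA-256, master key carried only in the #k=... fragment) can be sketched roughly as follows. This is an illustrative model, not Xdrop's actual wire format: the HKDF info layout, chunk framing (iv + ciphertext + tag), and link shape are assumptions.

```typescript
// Hypothetical sketch of an Xdrop-style client-side crypto flow.
import { randomBytes, hkdfSync, createCipheriv, createDecipheriv } from "node:crypto";

// The master key travels only in the URL fragment (#k=...),
// which browsers never send to the server.
const masterKey = randomBytes(32);

// Derive a per-chunk key with HKDF-SHA-256 (assumed info layout).
function chunkKey(master: Buffer, fileId: string, chunkIndex: number): Buffer {
  const info = Buffer.from(`xdrop:${fileId}:${chunkIndex}`);
  return Buffer.from(hkdfSync("sha256", master, Buffer.alloc(0), info, 32));
}

// Encrypt one chunk with AES-256-GCM; the stored blob is iv + ciphertext + auth tag.
function encryptChunk(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, ct, cipher.getAuthTag()]);
}

function decryptChunk(key: Buffer, blob: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(blob.length - 16);
  const ct = blob.subarray(12, blob.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]);
}

// The server only ever stores the opaque encrypted blob.
const key = chunkKey(masterKey, "file-1", 0);
const blob = encryptChunk(key, Buffer.from("secret report"));
const shareLink = `https://xdrop.example.com/t/abc123#k=${masterKey.toString("base64url")}`;
console.log(decryptChunk(key, blob).toString()); // prints "secret report"
```

Because GCM is authenticated, any tampering with a stored chunk makes decryption fail rather than yield corrupted plaintext.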
Key technical details:

- The server works with opaque identifiers such as transferId, fileId, and chunkIndex.

The default deployment below shows the human browser flow. Agents interact with the same API and
share-link format for terminal uploads and local decryption.
```mermaid
flowchart LR
    subgraph Sender["Human sender browser"]
        Select["Choose files or a folder"]
        Worker["Crypto worker<br/>AES-256-GCM + HKDF-SHA-256"]
        Local["OPFS / IndexedDB<br/>resume state and local controls"]
        Browser["Browser app<br/>React + upload/download runtime"]
        Select --> Worker
        Worker <--> Local
        Browser <--> Worker
        Browser <--> Local
    end
    subgraph Edge["Default Xdrop deployment"]
        nginx["nginx<br/>serves SPA and proxies /api + /xdrop"]
        API["Go API<br/>transfer lifecycle, presigning, cleanup"]
        nginx --> API
    end
    Postgres["PostgreSQL<br/>transfers, files, chunks, hashed manage tokens"]
    Redis["Redis<br/>rate limiting"]
    Storage["S3-compatible storage<br/>encrypted manifest and chunk objects"]
    Receiver["Human receiver browser<br/>opens /t/:id#k=..."]
    Browser -->|create/register/finalize| nginx
    Browser -->|presigned PUT uploads| nginx
    API --> Postgres
    API --> Redis
    API -->|presigned PUT/GET URLs| Storage
    nginx -->|/xdrop proxy| Storage
    nginx -->|web app + public API| Receiver
    Receiver -->|presigned GET downloads| nginx
    Receiver -->|decrypts locally with #k fragment| Receiver
```
In the default Docker deployment, nginx serves the built frontend and proxies both /api and
/xdrop. If S3_PUBLIC_ENDPOINT points at a different public object-storage endpoint, presigned
upload and download requests can bypass the nginx proxy while the rest of the architecture stays
the same.
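As a rough sketch, the nginx role described above might look like the following. This is an illustrative config, not the one shipped in the image; the upstream names `api` and `minio` and the SPA root path are assumptions.

```nginx
server {
    listen 80;

    # Built SPA
    root /usr/share/nginx/html;
    location / { try_files $uri /index.html; }

    # Go API
    location /api/ { proxy_pass http://api:8080; }

    # Presigned PUT/GET traffic proxied to object storage
    location /xdrop/ { proxy_pass http://minio:9000; }
}
```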
For a public deployment, run Xdrop behind a reverse proxy such as Caddy or nginx:
- Keep the xdrop container listening on a loopback-only host port such as 127.0.0.1:8080.
- Bind the other service ports to 127.0.0.1 only unless you have a specific reason to expose them.
- Set S3_PUBLIC_ENDPOINT and ALLOWED_ORIGINS to your public site URL, for example
  https://xdrop.example.com.

If you only want to run the published image, you do not need to clone the whole repository on the
server.
Download the required deployment files:

```shell
mkdir -p xdrop/infra/minio
cd xdrop
curl -fsSL -o docker-compose.yml \
  https://github.com/xixu-me/xdrop/raw/refs/heads/main/docker-compose.yml
curl -fsSL -o infra/minio/init.sh \
  https://github.com/xixu-me/xdrop/raw/refs/heads/main/infra/minio/init.sh
chmod +x infra/minio/init.sh
```
Optionally, download .env.example as a reference for supported settings:

```shell
curl -fsSL -o .env.example \
  https://github.com/xixu-me/xdrop/raw/refs/heads/main/.env.example
```
If you want to build your own image, clone the repository instead so Docker can use the full build
context. In most cases, it is better to build in CI or on a separate machine and only pull the
final image onto the server.
Install Docker and Docker Compose on the server, then review the xdrop service environment in
docker-compose.yml.
At minimum, update these values for your real deployment:
- S3_PUBLIC_ENDPOINT
- ALLOWED_ORIGINS

Typical production values look like this:
```yaml
services:
  minio:
    ports:
      - '127.0.0.1:9000:9000'
      - '127.0.0.1:9001:9001'
  xdrop:
    ports:
      - '127.0.0.1:8080:80'
    environment:
      S3_PUBLIC_ENDPOINT: https://xdrop.example.com
      ALLOWED_ORIGINS: https://xdrop.example.com
```
Treat .env.example as the reference list of supported settings. Changing .env.example alone
does not affect the running stack because the provided Compose file uses inline environment values.
Start the stack:

```shell
docker compose up -d
```
This uses ghcr.io/xixu-me/xdrop:latest.
This is enough when the published image already matches the frontend settings you want.
Important caveats:

- Frontend build-time settings such as VITE_API_BASE_URL and VITE_SITE_URL are baked into the
  image.
- To deploy a custom image, point XDROP_IMAGE at it:

```shell
XDROP_IMAGE=ghcr.io/your-org/xdrop:latest docker compose up -d
```
Build your own image when you need different frontend build-time settings:
```shell
git clone https://github.com/xixu-me/xdrop.git
cd xdrop
docker compose -f docker-compose.yml -f docker-compose.build.yml up -d --build
```
Edit the build args in docker-compose.build.yml before you run that command.
Example:
```yaml
services:
  xdrop:
    build:
      args:
        VITE_SITE_URL: https://xdrop.example.com
        VITE_API_BASE_URL: /api/v1
```
On low-memory servers, building directly on the host may be slow or fail. In that case, build
elsewhere, push the image to a registry, and deploy it with XDROP_IMAGE.
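For example, an assumed build-elsewhere flow could look like this (the registry, tag, and build context are placeholders, not project defaults):

```shell
# On a build machine or CI runner:
docker build -t ghcr.io/your-org/xdrop:latest .
docker push ghcr.io/your-org/xdrop:latest

# On the server, deploy the prebuilt image:
XDROP_IMAGE=ghcr.io/your-org/xdrop:latest docker compose up -d
```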
Example Caddyfile:

```caddyfile
xdrop.example.com {
    encode gzip zstd
    reverse_proxy 127.0.0.1:8080
}
```
Then reload Caddy:

```shell
systemctl reload caddy
```
After the stack starts, open https://xdrop.example.com.
The Compose stack runs xdrop, postgres, redis, minio, and the bucket bootstrap container. Only the
reverse proxy should listen publicly on ports 80 and 443.

Install dependencies from the repo root:

```shell
bun install --frozen-lockfile
```
For local development, start PostgreSQL, Redis, and MinIO with Docker:

```shell
docker compose up -d postgres redis minio minio-setup
```

Then run the Go API:

```shell
cd apps/api
go run ./cmd/api
```
From the repo root in a second terminal:

```shell
bun run dev:web
```
Open http://localhost:5173. During local development, the Vite dev server proxies:

- /api to http://localhost:8080
- /xdrop to http://localhost:9000

This keeps frontend hot reload while talking to the local Go API and MinIO.
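The proxy setup above would correspond to a vite.config.ts along these lines. This is a hedged sketch of the two proxy rules, not the repository's actual config, which may set additional options.

```typescript
// Illustrative vite.config.ts dev-server proxy (assumed ports from the text above).
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": "http://localhost:8080",   // local Go API
      "/xdrop": "http://localhost:9000", // local MinIO
    },
  },
});
```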
```shell
bun run lint:web
bun run typecheck:web
bun run test:web
bun run test:web:coverage
bun run build:web
```
Install Playwright browsers once if needed:

```shell
bun run test:e2e:install
```
The E2E suite expects Xdrop at http://localhost:8080 by default and uses the local postgres
and redis Compose services during the tests. Start the full stack first:
```shell
docker compose -f docker-compose.yml -f docker-compose.build.yml up -d --build
```
Then run the suite:

```shell
bun run test:e2e
```
Set E2E_BASE_URL and E2E_API_URL if you want to target a different environment.
From apps/api:

```shell
go test ./... -coverprofile=coverage.out -covermode=atomic
```
Some API integration tests use Docker-backed testcontainers. If Docker is unavailable, those tests
are skipped and coverage will be lower than CI.
```shell
bun run format
bun run format:check
```
```text
apps/
  api/            Go API
    cmd/api/      API entrypoint
    internal/     Domain packages
  web/            React frontend
    public/       Static assets
    src/          App, components, features, and utilities
packages/
  shared/         Shared TypeScript constants and helpers
    src/          Shared source files
tests/
  e2e/            Playwright end-to-end tests
infra/            Deployment and container configuration
scripts/          Repository automation and helper scripts
```
See .env.example for the full list. The most important settings are:

- API_ADDR
- DATABASE_URL
- REDIS_ADDR
- S3_ENDPOINT
- S3_PUBLIC_ENDPOINT
- S3_BUCKET
- ALLOWED_ORIGINS
- VITE_API_BASE_URL
- VITE_SITE_URL

AGPL-3.0-only. See LICENSE.