Run anything, anywhere.
Batch execution for developers and AI agents. Any Docker container, with or without GPU. Self-host on your infra, bare metal, or air-gapped. Or let us handle it.
Any Docker container, GPU optional
Cloud, bare metal, offline, air-gapped
Self-host for HIPAA/SOC2 compliance
OpenAPI spec, structured API →
Queued, running, and completed runs stay visible in one place.
Download structured outputs and plug them into the next stage.
- Provision and maintain servers
- Install and update dependencies
- Handle failures, retries, and timeouts
- Scale workers up and down
- Manage job queues and state
$ curl -X POST https://api.bsub.io/jobs \
-H "Authorization: Bearer $TOKEN" \
-F "file=@input.mp4" \
-F "processor=transcode"
# Done. Get results via webhook.
One API call. No infrastructure. Results delivered to your webhook, or poll for them when ready.
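If you prefer polling over webhooks, a status check might look like the sketch below. The `/jobs/$JOB_ID` path, the `status` field, and the job ID itself are assumptions for illustration; the OpenAPI spec is the source of truth.

```shell
# Hedged sketch: poll a submitted job until it completes.
# Endpoint path and response fields are assumptions, not the documented API.
JOB_ID="abc123"   # hypothetical ID returned by the submit call above
for attempt in 1 2 3 4 5; do
  status=$(curl -s --max-time 10 \
    -H "Authorization: Bearer $TOKEN" \
    "https://api.bsub.io/jobs/$JOB_ID" | jq -r '.status')
  [ "$status" = "completed" ] && break
  sleep 5
done
```

The loop is bounded on purpose: webhooks are the intended delivery path, so polling here is only a fallback.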
Submit. Process. Done.
Submit your job
Upload files via CLI, REST API, or SDK. Use our processors or bring your own Docker container.
We run it
Your job runs on managed workers or on your own infrastructure. No timeouts, no cold starts.
Get your results
Artifacts delivered via webhook or ready for download. Structured outputs for the next stage.
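Fetching a finished job's artifact might look like this. The artifacts path and filename are assumptions for illustration; check the OpenAPI spec for the actual endpoint.

```shell
# Hedged sketch: download an artifact from a completed job.
# The /artifacts path is an assumption, not the documented endpoint.
curl -s --max-time 30 \
  -H "Authorization: Bearer $TOKEN" \
  -o result.bin \
  "https://api.bsub.io/jobs/$JOB_ID/artifacts/result.bin"
```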
Process any file, any way you need.
Use our built-in processors or bring your own Docker containers. GPU acceleration available for ML workloads.
Bring any container.
Your images, your ML models, your custom tools. Same simple API.
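Submitting to a custom container might look like the transcode call above, with your image named in place of a built-in processor. The `image` form field and registry path are assumptions; the OpenAPI spec defines the actual parameter.

```shell
# Hedged sketch: run your own Docker image instead of a built-in processor.
# The "image" field name is an assumption, not the documented API.
curl -X POST https://api.bsub.io/jobs \
  -H "Authorization: Bearer $TOKEN" \
  -F "file=@input.bin" \
  -F "image=registry.example.com/yourteam/your-tool:latest"
```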
Accelerate when needed.
Run Whisper, Stable Diffusion, or your own models with GPU support.
Run as long as you need.
No 15-minute timeouts. Process large files for hours if needed.
Results delivered to you.
Get notified when jobs complete. No polling required.
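A completion webhook might carry a payload like the one below. The field names are illustrative assumptions, not the documented schema; extracting the status with jq:

```shell
# Hypothetical completion payload -- field names are assumptions.
payload='{"job_id":"abc123","status":"completed","artifacts":["output.mp4"]}'
echo "$payload" | jq -r '.status'
```

Your webhook handler would branch on that status and fetch the listed artifacts.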
Run anything, anywhere.
Free tier includes 100 jobs/month. All processors. No credit card required.