An intelligent workspace where AI agents, code execution, and human collaboration converge.
binG is not just another chat interface—it's a full-stack agentic workspace that combines AI conversation with real code execution, voice interaction, and multi-agent orchestration. Build, test, and deploy applications with AI assistance in an isolated, secure sandbox environment.
| Traditional Chat | binG Workspace |
|---|---|
| Text-only responses | Executable code + live terminal |
| Static conversations | Persistent sandbox sessions |
| No environment access | Full Linux sandbox (Daytona/Runloop) |
| Single AI model | Multi-provider orchestration |
| Browser TTS only | Livekit + Neural TTS (ElevenLabs/Cartesia) |
- Multi-Agent Orchestration: Coordinate multiple AI agents for complex tasks
- Vercel AI SDK Integration: Native tool calling with streaming support
- Self-Healing Agents: Automatic error recovery with intelligent retry logic
- Tool Integration: 800+ tools via Composio + Nango (GitHub, Slack, Notion, etc.)
- Code Execution: Run generated code in isolated sandboxes
- Terminal Access: Full xterm.js terminal with fish-like autocomplete
- Persistent Sessions: Sandboxes persist across page reloads
- Plan-Act-Verify Workflow: Structured agent execution with validation
- Multi-Provider Fallback: Automatic failover (OpenAI → Anthropic → Google)
- Plan-Act-Verify Workflow: Discovery → Planning → Editing → Verification phases
- Self-Healing: Automatic retry on errors (syntax, logic, transient failures)
- Syntax Verification: Real-time validation for TypeScript, JSON, YAML, Python, Shell
- Streaming Responses: Real-time token streaming with tool call visibility
- Human-in-the-Loop: Approval workflow for sensitive operations
- Checkpointing: Save/restore agent state (Redis or in-memory)
- Tool Executor: Centralized tool execution with metrics and logging
- Nango Integrations: GitHub, Slack, Notion tools with rate limiting
- Multi-Provider Fallback: OpenAI → Anthropic → Google (automatic failover)
- Human-in-the-Loop (HITL): Approval required for sensitive operations
- Checkpoint/Resume: Pause and resume long-running tasks
- Type-Safe Tools: Zod-validated AI SDK tools with surgical ApplyDiff
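The multi-provider fallback described above can be sketched roughly like this (an illustrative shape, not binG's actual implementation — `Provider` and `completeWithFallback` are hypothetical names): try each provider in priority order and fall through to the next on failure.

```typescript
// Sketch of automatic provider failover (OpenAI → Anthropic → Google).
// All names here are illustrative assumptions, not binG's real API.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.complete(prompt); // first success wins
    } catch (err) {
      errors.push(`${p.name}: ${(err as Error).message}`); // record and fall through
    }
  }
  throw new Error(`All providers failed:\n${errors.join("\n")}`);
}

// Demo with stub providers: the first fails, the second answers.
const providers: Provider[] = [
  { name: "openai", complete: async () => { throw new Error("rate limited"); } },
  { name: "anthropic", complete: async (p) => `anthropic:${p}` },
  { name: "google", complete: async (p) => `google:${p}` },
];

completeWithFallback(providers, "hello").then((r) => console.log(r)); // prints "anthropic:hello"
```

The same loop generalizes to the self-healing retry behavior: wrap each `complete` call in a bounded retry before falling through to the next provider.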
- Isolated Sandboxes: Each user gets a dedicated Linux environment
- Multiple Sandbox Providers: Daytona, Runloop, Blaxel (ultra-fast), Fly.io Sprites (persistent VMs)
- Pre-installed Packages: Node.js, Python, Git, build tools ready to use
- Persistent Cache: Shared package cache (2-3x faster sandbox creation)
- Split Terminal View: Multiple terminals side-by-side
- Command History: Intelligent autocomplete and history navigation
- Tar-Pipe Sync: 10x faster file sync for large projects (Sprites)
- SSHFS Mount: Mount remote sandbox filesystem locally (Sprites)
- Neural TTS: ElevenLabs & Cartesia integration (human-quality voices)
- Livekit Rooms: Multi-user voice channels for collaboration
- Speech Recognition: Real-time transcription with Web Speech API
- Auto-Speak: AI responses automatically spoken when enabled
- Voice Commands: Hands-free operation support
- Per-User Sandboxes: Complete isolation between users
- Ephemeral Environments: Sandboxes auto-destroy after inactivity
- No Host Access: Sandboxes cannot access host filesystem
- Resource Limits: CPU/memory quotas prevent abuse
- Rate Limiting: Configurable rate limits prevent abuse
- Audit Logging: All commands logged for compliance
- Checkpoint System: Save/restore sandbox state (Sprites)
- Instant Terminal UI: Terminal opens instantly, sandbox connects lazily
- Friendly Loading: Progressive disclosure hides initialization time
- Smart Fallbacks: Graceful degradation when services unavailable
- Responsive Design: Works on desktop, tablet, and mobile
- Dark Theme: Easy on the eyes for extended sessions
- Multi-Provider Support: Mistral AI (FLUX1.1 Ultra), Google Imagen (free + paid), Replicate (SDXL, Flux)
- ComfyUI-Style Controls: Aspect ratio, quality presets, style selection
- Virtual Filesystem: Save generated images to the workspace
- Fallback Chain: Automatic provider failover (Mistral → Google Free → Google Paid → Replicate)
- Quota Management: Daily usage tracking for free tier (500 images/day limit)
- Free Tier: `gemini-2.5-flash-image-preview` (500 images/day with `GEMINI_API_KEY`)
- Paid Models: All other Google Imagen models require paid Gemini API access
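The free-tier daily quota mentioned above can be tracked with a simple per-day counter. This is a minimal sketch under assumed names (`QuotaTracker`, `tryConsume` are hypothetical, not binG's actual code):

```typescript
// Count image generations per UTC day and refuse once the daily limit is hit.
// Hypothetical helper, illustrating the 500 images/day free-tier tracking.
class QuotaTracker {
  private day = "";
  private used = 0;
  constructor(private readonly dailyLimit: number) {}

  tryConsume(now: Date = new Date()): boolean {
    const today = now.toISOString().slice(0, 10); // e.g. "2024-12-01"
    if (today !== this.day) {
      // New UTC day: reset the counter.
      this.day = today;
      this.used = 0;
    }
    if (this.used >= this.dailyLimit) return false;
    this.used += 1;
    return true;
  }
}

const quota = new QuotaTracker(500);
console.log(quota.tryConsume()); // prints "true" on the first call of the day
```

When `tryConsume` returns `false`, the fallback chain above would skip the free tier and move on to the next paid provider.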
- Text-to-Video: Generate videos from text prompts using Alibaba WAN, Google Veo, Kling AI
- Image-to-Video: Animate still images with motion and effects
- Multi-Provider Support: Vercel AI and Google Veo video models with automatic failover
- Advanced Controls: Duration, motion strength, camera movement, style presets
- Quality Presets: Low (2s) to Ultra (16s) with resolution options up to 4K
- Experimental Feature: Enable with `NEXT_PUBLIC_VIDEO_GENERATION_ENABLED=true`
- Paid Models: All video models require paid API access (Veo 3.0/3.1 via `GEMINI_API_KEY`)
- E2E Tests: 80+ Playwright tests for all major workflows
- Component Tests: 20+ React component tests
- Contract Tests: 27+ API schema validation tests
- Visual Regression: 15+ screenshot baseline tests
- Performance Tests: 25+ benchmark tests with optimization recommendations
- Total Coverage: 349+ tests across 43+ test files
┌─────────────────────────────────────────────────────────────┐
│ binG Workspace │
├─────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Chat UI │ │ Terminal │ │ Code Panel │ │
│ │ (React) │ │ (xterm.js) │ │ (Monaco) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ └─────────────────┴─────────────────┘ │
│ │ │
│ ┌────────▼────────┐ │
│ │ API Routes │ │
│ │ (Next.js) │ │
│ └────────┬────────┘ │
│ │ │
│ ┌─────────────────┼─────────────────┐ │
│ │ │ │ │
│ ┌──────▼──────┐ ┌──────▼──────┐ ┌──────▼──────┐ │
│ │ LLM Providers│ │ Sandboxes │ │ Livekit │ │
│ │ (OpenRouter,│ │ (Daytona, │ │ (Voice │ │
│ │ Google, │ │ Runloop) │ │ Rooms) │ │
│ │ Mistral) │ │ │ │ │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────┘
# Clone repository
git clone https://github.com/quazfenton/binG.git
cd binG
# Install dependencies
pnpm install
# Copy environment template
cp env.example .env.local
# Edit .env.local with your API keys (see Configuration section)
nano .env.local
# Optional: Install advanced sandbox providers
pnpm add -O @blaxel/sdk @blaxel/core @fly/sprites @modelcontextprotocol/sdk
# Optional: Install image generation providers
pnpm add -O @mistralai/mistralai replicate
# Optional: Install SSHFS for local filesystem mount (macOS)
brew install macfuse sshfs
# Optional: Install Playwright for E2E testing
pnpm add -D @playwright/test @axe-core/playwright
npx playwright install
# Start development server
pnpm dev
# Run tests (recommended before committing)
pnpm test
# Open browser
open http://localhost:3000
# Build and run with Docker Compose
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down
# At least ONE LLM provider must be configured
OPENROUTER_API_KEY=sk-or-... # Recommended (access to 100+ models)
GOOGLE_API_KEY=... # Google Gemini (LLM language models)
GEMINI_API_KEY=... # Google Gemini (Imagen/Veo image/video generation)
ANTHROPIC_API_KEY=sk-ant-... # Claude
MISTRAL_API_KEY=... # Mistral AI
GITHUB_MODELS_API_KEY=... # GitHub Models (via Azure)
# Sandbox Provider (for code execution)
SANDBOX_PROVIDER=daytona # or 'runloop', 'blaxel', 'sprites'
DAYTONA_API_KEY=... # Get from https://daytona.io
# RUNLOOP_API_KEY=... # Alternative to Daytona
# Blaxel Sandbox (Optional - Ultra-fast resume <25ms)
BLAXEL_API_KEY=... # Get from https://console.blaxel.ai
BLAXEL_WORKSPACE=...
# Fly.io Sprites (Optional - Persistent VMs with checkpoints)
SPRITES_TOKEN=... # Get from https://sprites.dev/account
# Voice Features (Optional)
LIVEKIT_API_KEY=...
LIVEKIT_API_SECRET=...
NEXT_PUBLIC_LIVEKIT_URL=wss://...
# Neural TTS (Optional - enhances voice quality)
ELEVENLABS_API_KEY=... # Human-quality voices
CARTESIA_API_KEY=... # Ultra-low latency TTS
# Tool Integration (Optional)
COMPOSIO_API_KEY=...             # 800+ tool integrations
# Persistent Cache (2-3x faster sandbox creation)
SANDBOX_PERSISTENT_CACHE=true
SANDBOX_CACHE_VOLUME_NAME=global-package-cache
SANDBOX_CACHE_SIZE=2GB
# Warm Pool (instant sandbox availability)
SANDBOX_WARM_POOL=true
SANDBOX_WARM_POOL_SIZE=2
# Rate Limiting (prevent abuse)
SANDBOX_RATE_LIMITING_ENABLED=true
SANDBOX_RATE_LIMIT_COMMANDS_MAX=100
SANDBOX_RATE_LIMIT_FILE_OPS_MAX=50
# Sprites Advanced Features
SPRITES_ENABLE_TAR_PIPE_SYNC=true # 10x faster file sync
SPRITES_ENABLE_SSHFS=true # Mount filesystem locally
SPRITES_CHECKPOINT_AUTO_CREATE=true # Auto-save before dangerous ops
# Video Generation (Experimental)
NEXT_PUBLIC_VIDEO_GENERATION_ENABLED=false # Set to true to enable video generation
VIDEO_GENERATION_ALLOWED_MODELS=vercel # Supported: vercel
# Blaxel MCP Server (for AI assistants)
BLAXEL_MCP_ENABLED=true
# Logging
LOG_LEVEL=info                   # silent | error | warn | info | debug
- Docker 20.10+
- Docker Compose 2.0+
- 4GB RAM minimum (8GB recommended)
- 20GB disk space
git clone https://github.com/quazfenton/binG.git
cd binG
cp .env.example .env
Edit .env with your API keys (see Configuration section above).
# Build and start all services
docker-compose up -d
# Check status
docker-compose ps
# View logs
docker-compose logs -f app
Open http://localhost:3000 in your browser.
For production deployments:
- Change default ports:
# docker-compose.yml
ports:
  - "8080:3000"  # Change to your preferred port
- Add SSL/TLS:
# Use a reverse proxy like Caddy or Nginx
docker run -d \
  -p 443:443 \
  -v /path/to/certs:/certs \
  caddy caddy reverse-proxy --from your-domain.com --to binG:3000
- Set up monitoring:
# Add Prometheus/Grafana for metrics
docker-compose -f docker-compose.monitoring.yml up -d
- Configure backups:
# Backup persistent volumes
docker run --rm \
  -v bing_database:/data \
  -v $(pwd)/backups:/backups \
  alpine tar czf /backups/database-$(date +%Y%m%d).tar.gz /data
Issue: Container won't start
# Check logs
docker-compose logs app
# Rebuild container
docker-compose build --no-cache app
docker-compose up -d
Issue: Sandbox creation fails
# Verify Daytona API key
docker-compose exec app curl -H "Authorization: Bearer $DAYTONA_API_KEY" \
https://api.daytona.io/health
# Check sandbox provider status
docker-compose logs | grep -i sandbox
Issue: High memory usage
# Limit container memory
# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          memory: 2G
| Scenario | Without Cache | With Persistent Cache |
|---|---|---|
| First sandbox | 10 min | 10 min |
| Subsequent | 10 min | 2-3 min |
| Bandwidth/user | 1.2 GB | 100 MB |
| Storage | 1.5 GB/sandbox | 2 GB shared |
- Enable persistent cache for teams >5 users
- Use warm pool for instant availability
- Choose regional sandbox provider for lower latency
- Set LOG_LEVEL=warn in production (reduces I/O)
- Change default JWT_SECRET to cryptographically secure value
- Enable HTTPS/TLS for all traffic
- Set up firewall rules (only expose necessary ports)
- Configure rate limiting (prevent abuse)
- Enable audit logging (compliance)
- Set up monitoring/alerting (detect anomalies)
- Regular security updates (patch dependencies)
- Backup database daily (disaster recovery)
Never commit API keys to version control!
# Use environment variables or secrets manager
export OPENROUTER_API_KEY="sk-or-..."
# Or use Docker secrets
docker secret create openrouter_key .env_openrouter
Create a custom Daytona image with pre-installed packages:
# Dockerfile.sandbox
FROM daytona/typescript:latest
RUN npm install -g typescript ts-node prettier eslint
RUN pip install requests flask fastapi numpy pandas
LABEL com.daytona.image="custom-typescript-full"
Build and push:
docker build -t your-registry/custom-typescript -f Dockerfile.sandbox .
docker push your-registry/custom-typescript
Configure in .env:
SANDBOX_CUSTOM_IMAGE=your-registry/custom-typescript
Coordinate multiple AI agents for complex tasks:
// Example: Code review workflow
const agents = [
{ role: 'reviewer', model: 'claude-3-5-sonnet' },
{ role: 'tester', model: 'gpt-4o' },
{ role: 'documenter', model: 'gemini-2.5-pro' },
];
// Each agent handles their specialty
Configure neural TTS voices:
# ElevenLabs voices
ELEVENLABS_VOICE_ID=EXAVITQu4vr4xnSDxMaL # "Sarah" - Professional
ELEVENLABS_STABILITY=0.5
ELEVENLABS_SIMILARITY_BOOST=0.75
# Cartesia voices
CARTESIA_VOICE_ID=692530db-220c-4789-9917-79a844212011
CARTESIA_MODEL=sonic-english
Blaxel - Ultra-fast cloud sandboxes
- Resume time: <25ms from standby
- Auto scale-to-zero (free when idle)
- Persistent volumes support
- VPC networking for enterprise
- Best for: Fast iteration, stateless batch processing
Fly.io Sprites - Persistent VMs with full Linux environment
- True persistence (ext4 filesystem)
- Hardware isolation (dedicated microVM)
- Checkpoint system (save/restore state)
- Auto-hibernation (<500ms wake)
- SSHFS mount (local filesystem access)
- Best for: Long-lived dev environments, CI/CD runners
Tar-Pipe Sync - 10x faster file synchronization
- Compressed tar stream to sandbox
- Ideal for large projects (10+ files)
- Reduces data transfer by 60%
- Available for Sprites provider
SSHFS Mount - Mount sandbox filesystem locally
- Real-time sync between local and remote
- Edit with your favorite local IDE
- Available for Sprites provider
- Requires: `brew install macfuse sshfs` (macOS) or `apt-get install sshfs` (Linux)
Checkpoint System - Save and restore sandbox state
- Auto-create before dangerous operations
- Manual checkpoints on demand
- Retention policies (max count, max age)
- Available for Sprites provider
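The retention policies mentioned above (max count, max age) amount to a small pruning rule. A minimal sketch, with hypothetical types and field names:

```typescript
// Keep only checkpoints that are young enough, newest-first, up to maxCount.
// Illustrative shape only; the real Sprites checkpoint records differ.
type Checkpoint = { id: string; createdAt: number }; // createdAt in ms epoch

function pruneCheckpoints(
  checkpoints: Checkpoint[],
  maxCount: number,
  maxAgeMs: number,
  now: number = Date.now(),
): Checkpoint[] {
  return checkpoints
    .filter((c) => now - c.createdAt <= maxAgeMs) // drop checkpoints past max age
    .sort((a, b) => b.createdAt - a.createdAt)    // newest first
    .slice(0, maxCount);                          // keep at most maxCount
}
```

Anything returned by this function survives; everything else would be deleted by the retention job.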
MCP Server - Expose sandbox to AI assistants
- Model Context Protocol integration
- Works with Cursor, Claude Desktop, etc.
- Tools: execute_command, write_file, read_file, list_directory
- Available for Blaxel provider
Rate Limiting - Prevent abuse and manage resources
- Per-user or per-IP limits
- Configurable per operation type
- Automatic cleanup of expired entries
- Express middleware integration
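A fixed-window limiter covering the points above (per-key limits, automatic cleanup of expired entries) can be sketched like this; the class name and window sizes are assumptions, not binG's actual configuration:

```typescript
// Per-user (or per-IP) fixed-window rate limiter with expired-entry cleanup.
// Hypothetical sketch of the rate limiting described above.
type Window = { count: number; resetAt: number };

class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private readonly max: number, private readonly windowMs: number) {}

  // Returns true if the request is allowed, false if the key is over its limit.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // No window yet, or the old one expired: start a fresh window.
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count >= this.max) return false;
    w.count += 1;
    return true;
  }

  // Periodic cleanup of expired windows ("automatic cleanup" above).
  cleanup(now: number = Date.now()): void {
    for (const [k, w] of this.windows) {
      if (now >= w.resetAt) this.windows.delete(k);
    }
  }
}

const limiter = new RateLimiter(100, 60_000); // e.g. 100 commands/minute per user
```

As Express-style middleware, the handler would call `limiter.allow(req.ip)` and respond with HTTP 429 when it returns false.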
- Vercel AI SDK Features - Complete guide to AI agent, self-healing, and tool integration
- Implementation Review - Detailed code review and architecture
- Test Report - Test coverage documentation (209 tests)
- Sandbox Caching Guide - Optimize sandbox creation speed
- Hiding Creation Time - UX improvements for perceived performance
- Voice Service Improvements - Neural TTS integration guide
- Database Migrations - Schema management and migrations
- Technical Improvements - Recent enhancements summary
- Advanced Features Guide - SSHFS, MCP Server, Rate Limiting
- Sprites Integration - Blaxel & Sprites usage guide
- Environment Variables Audit - Complete env vars reference
- Missing Packages Report - Package dependencies guide
- API Endpoints - All 95+ API endpoints
- Implementation Status - Current implementation status
- Test Summary - Test coverage report (124 tests)
We welcome contributions!
# Fork and clone
git clone https://github.com/YOUR_USERNAME/binG.git
cd binG
# Install dependencies
pnpm install
# Create feature branch
git checkout -b feature/your-feature
# Make changes and test
pnpm dev
pnpm test
# Commit and push
git commit -m "feat: add your feature"
git push origin feature/your-feature
For major changes, please open an issue first to discuss what you would like to change.
# Run all tests
pnpm test
# Run E2E tests (Playwright)
npx playwright test
# Run unit tests (Vitest)
npx vitest run
# Run component tests
npx vitest run __tests__/components/
# Run visual regression tests
npx playwright test tests/e2e/visual-regression.test.ts
# Run performance tests with recommendations
npx playwright test tests/e2e/performance-advanced.test.ts
# View HTML report
npx playwright show-report
- E2E Tests: 80+ tests for all major workflows
- Component Tests: 20+ React component tests
- Contract Tests: 27+ API schema validation tests
- Visual Regression: 15+ screenshot baseline tests
- Performance Tests: 25+ benchmark tests
- Total: 349+ tests across 43+ test files
See Test Coverage Report for details.
- API Endpoints - Complete API reference (100+ endpoints)
- New API Features - Latest API additions from this session
- E2E Testing Guide - Playwright test setup and usage
- Implementation Plans - Technical implementation details
- Mistral Agent Guide - Mistral integration
- Sprites Enhancement - Sprites provider features
- Images Tab Guide - Image generation setup
MIT License - See LICENSE file for details.
- Daytona - Sandbox infrastructure
- Livekit - Voice/video infrastructure
- ElevenLabs - Neural TTS
- Cartesia - Ultra-low latency TTS
- Composio - Tool integrations
- OpenRouter - Multi-model access
- Issues: https://github.com/quazfenton/binG/issues
- Discussions: https://github.com/quazfenton/binG/discussions
Built with ❤️ by the binG Team
Last Updated: December 2024
Version: 2.0.0