
🚀 binG - Agentic Compute Workspace

An intelligent workspace where AI agents, code execution, and human collaboration converge.

binG is not just another chat interface—it's a full-stack agentic workspace that combines AI conversation with real code execution, voice interaction, and multi-agent orchestration. Build, test, and deploy applications with AI assistance in an isolated, secure sandbox environment.

binG Workspace


🎯 What Makes binG Different?

| Traditional Chat | binG Workspace |
| --- | --- |
| Text-only responses | Executable code + live terminal |
| Static conversations | Persistent sandbox sessions |
| No environment access | Full Linux sandbox (Daytona/Runloop) |
| Single AI model | Multi-provider orchestration |
| Browser TTS only | Livekit + Neural TTS (ElevenLabs/Cartesia) |

✨ Core Features

🤖 Agentic Capabilities

  • Multi-Agent Orchestration: Coordinate multiple AI agents for complex tasks
  • Vercel AI SDK Integration: Native tool calling with streaming support
  • Self-Healing Agents: Automatic error recovery with intelligent retry logic
  • Tool Integration: 800+ tools via Composio + Nango (GitHub, Slack, Notion, etc.)
  • Code Execution: Run generated code in isolated sandboxes
  • Terminal Access: Full xterm.js terminal with fish-like autocomplete
  • Persistent Sessions: Sandboxes persist across page reloads
  • Plan-Act-Verify Workflow: Structured agent execution with validation
  • Multi-Provider Fallback: Automatic failover (OpenAI → Anthropic → Google)
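The fallback chain above (OpenAI → Anthropic → Google) can be sketched as a simple loop over providers. The `Provider` shape and `call` signature here are illustrative assumptions, not binG's actual interfaces:

```typescript
// Hypothetical sketch of the multi-provider fallback described above.
// `Provider.call` stands in for any provider SDK invocation.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

async function withFallback(providers: Provider[], prompt: string): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.call(prompt); // first provider to succeed wins
    } catch (err) {
      lastError = err; // record the failure and try the next provider
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```

A chain like `withFallback([openai, anthropic, google], prompt)` then fails over automatically in order.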

🧠 Advanced AI Agent (NEW - Vercel AI SDK Integration)

  • Plan-Act-Verify Workflow: Discovery → Planning → Editing → Verification phases
  • Self-Healing: Automatic retry on errors (syntax, logic, transient failures)
  • Syntax Verification: Real-time validation for TypeScript, JSON, YAML, Python, Shell
  • Streaming Responses: Real-time token streaming with tool call visibility
  • Human-in-the-Loop (HITL): Approval workflow required for sensitive operations
  • Checkpointing: Save/restore agent state (Redis or in-memory) to pause and resume long-running tasks
  • Tool Executor: Centralized tool execution with metrics and logging
  • Nango Integrations: GitHub, Slack, Notion tools with rate limiting
  • Multi-Provider Fallback: OpenAI → Anthropic → Google (automatic failover)
  • Type-Safe Tools: Zod-validated AI SDK tools with surgical ApplyDiff
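As a minimal sketch of the verification phase, the JSON case reduces to a parse check. The real verifier also covers TypeScript, YAML, Python, and Shell; `verifyJson` is an illustrative name, not binG's actual API:

```typescript
// Illustrative "verify" step for JSON output: try to parse, report the error
// if parsing fails so the self-healing loop can retry with that feedback.
function verifyJson(source: string): { ok: true } | { ok: false; error: string } {
  try {
    JSON.parse(source);
    return { ok: true };
  } catch (err) {
    return { ok: false, error: (err as Error).message };
  }
}
```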

💻 Development Environment

  • Isolated Sandboxes: Each user gets a dedicated Linux environment
  • Multiple Sandbox Providers: Daytona, Runloop, Blaxel (ultra-fast), Fly.io Sprites (persistent VMs)
  • Pre-installed Packages: Node.js, Python, Git, build tools ready to use
  • Persistent Cache: Shared package cache (2-3x faster sandbox creation)
  • Split Terminal View: Multiple terminals side-by-side
  • Command History: Intelligent autocomplete and history navigation
  • Tar-Pipe Sync: 10x faster file sync for large projects (Sprites)
  • SSHFS Mount: Mount remote sandbox filesystem locally (Sprites)

🎙️ Voice & Audio

  • Neural TTS: ElevenLabs & Cartesia integration (human-quality voices)
  • Livekit Rooms: Multi-user voice channels for collaboration
  • Speech Recognition: Real-time transcription with Web Speech API
  • Auto-Speak: AI responses automatically spoken when enabled
  • Voice Commands: Hands-free operation support

🔒 Security & Isolation

  • Per-User Sandboxes: Complete isolation between users
  • Ephemeral Environments: Sandboxes auto-destroy after inactivity
  • No Host Access: Sandboxes cannot access host filesystem
  • Resource Limits: CPU/memory quotas prevent abuse
  • Rate Limiting: Configurable rate limits prevent abuse
  • Audit Logging: All commands logged for compliance
  • Checkpoint System: Save/restore sandbox state (Sprites)

🎨 User Experience

  • Instant Terminal UI: Terminal opens instantly, sandbox connects lazily
  • Friendly Loading: Progressive disclosure hides initialization time
  • Smart Fallbacks: Graceful degradation when services unavailable
  • Responsive Design: Works on desktop, tablet, and mobile
  • Dark Theme: Easy on the eyes for extended sessions

🖼️ Image Generation (NEW)

  • Multi-Provider Support: Mistral AI (FLUX1.1 Ultra), Google Imagen (free + paid), Replicate (SDXL, Flux)
  • ComfyUI-Style Controls: Aspect ratio, quality presets, style selection
  • Virtual Filesystem: Save generated images to the workspace
  • Fallback Chain: Automatic provider failover (Mistral → Google Free → Google Paid → Replicate)
  • Quota Management: Daily usage tracking for free tier (500 images/day limit)
  • Free Tier: gemini-2.5-flash-image-preview (500 images/day with GEMINI_API_KEY)
  • Paid Models: All other Google Imagen models require paid Gemini API access
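The daily quota tracking for the free tier can be sketched as a per-user counter that resets each calendar day. This is an illustration of the idea; binG's actual tracking may persist counts differently (e.g. in Redis):

```typescript
// Hypothetical daily-quota counter for the free image tier (500 images/day).
const DAILY_LIMIT = 500;
const usage = new Map<string, { day: string; count: number }>();

function tryConsume(userId: string, now = new Date()): boolean {
  const day = now.toISOString().slice(0, 10); // e.g. "2024-12-01"
  const entry = usage.get(userId);
  if (!entry || entry.day !== day) {
    usage.set(userId, { day, count: 1 }); // a new day resets the counter
    return true;
  }
  if (entry.count >= DAILY_LIMIT) return false; // quota exhausted for today
  entry.count += 1;
  return true;
}
```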

🎥 Video Generation (Experimental - Coming Soon)

  • Text-to-Video: Generate videos from text prompts using Alibaba WAN, Google Veo, Kling AI
  • Image-to-Video: Animate still images with motion and effects
  • Multi-Provider Support: Vercel AI and Google Veo video models with automatic failover
  • Advanced Controls: Duration, motion strength, camera movement, style presets
  • Quality Presets: Low (2s) to Ultra (16s) with resolution options up to 4K
  • Experimental Feature: Enable with NEXT_PUBLIC_VIDEO_GENERATION_ENABLED=true
  • Paid Models: All video models require paid API access (Veo 3.0/3.1 via GEMINI_API_KEY)

🧪 Comprehensive Testing (NEW)

  • E2E Tests: 80+ Playwright tests for all major workflows
  • Component Tests: 20+ React component tests
  • Contract Tests: 27+ API schema validation tests
  • Visual Regression: 15+ screenshot baseline tests
  • Performance Tests: 25+ benchmark tests with optimization recommendations
  • Total Coverage: 349+ tests across 43+ test files

🏗️ Architecture Overview

┌─────────────────────────────────────────────────────────────┐
│                     binG Workspace                          │
├─────────────────────────────────────────────────────────────┤
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐     │
│  │   Chat UI    │  │   Terminal   │  │  Code Panel  │     │
│  │  (React)     │  │  (xterm.js)  │  │  (Monaco)    │     │
│  └──────┬───────┘  └──────┬───────┘  └──────┬───────┘     │
│         │                 │                 │              │
│         └─────────────────┴─────────────────┘              │
│                           │                                │
│                  ┌────────▼────────┐                       │
│                  │  API Routes     │                       │
│                  │  (Next.js)      │                       │
│                  └────────┬────────┘                       │
│                           │                                │
│         ┌─────────────────┼─────────────────┐             │
│         │                 │                 │              │
│  ┌──────▼──────┐  ┌──────▼──────┐  ┌──────▼──────┐       │
│  │ LLM Providers│  │  Sandboxes  │  │   Livekit   │       │
│  │ (OpenRouter,│  │  (Daytona,  │  │   (Voice    │       │
│  │  Google,    │  │   Runloop)  │  │   Rooms)    │       │
│  │  Mistral)   │  │             │  │             │       │
│  └─────────────┘  └─────────────┘  └─────────────┘       │
└─────────────────────────────────────────────────────────────┘

🚀 Quick Start

Option 1: Local Development

# Clone repository
git clone https://github.com/quazfenton/binG.git
cd binG

# Install dependencies
pnpm install

# Copy environment template
cp env.example .env.local

# Edit .env.local with your API keys (see Configuration section)
nano .env.local

# Optional: Install advanced sandbox providers
pnpm add -O @blaxel/sdk @blaxel/core @fly/sprites @modelcontextprotocol/sdk

# Optional: Install image generation providers
pnpm add -O @mistralai/mistralai replicate

# Optional: Install SSHFS for local filesystem mount (macOS)
brew install macfuse sshfs

# Optional: Install Playwright for E2E testing
pnpm add -D @playwright/test @axe-core/playwright
npx playwright install

# Start development server
pnpm dev

# Run tests (recommended before committing)
pnpm test

# Open browser
open http://localhost:3000

Option 2: Docker Deployment (Recommended for Production)

# Build and run with Docker Compose
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down

Option 3: One-Click Deploy

Deploy with Vercel · Deploy to Railway


⚙️ Configuration

Required Environment Variables

# At least ONE LLM provider must be configured
OPENROUTER_API_KEY=sk-or-...        # Recommended (access to 100+ models)
GOOGLE_API_KEY=...                   # Google Gemini (LLM language models)
GEMINI_API_KEY=...                   # Google Gemini (Imagen/Veo image/video generation)
ANTHROPIC_API_KEY=sk-ant-...         # Claude
MISTRAL_API_KEY=...                  # Mistral AI
GITHUB_MODELS_API_KEY=...            # GitHub Models (via Azure)

# Sandbox Provider (for code execution)
SANDBOX_PROVIDER=daytona             # or 'runloop', 'blaxel', 'sprites'
DAYTONA_API_KEY=...                  # Get from https://daytona.io
# RUNLOOP_API_KEY=...               # Alternative to Daytona

# Blaxel Sandbox (Optional - Ultra-fast resume <25ms)
BLAXEL_API_KEY=...                   # Get from https://console.blaxel.ai
BLAXEL_WORKSPACE=...

# Fly.io Sprites (Optional - Persistent VMs with checkpoints)
SPRITES_TOKEN=...                    # Get from https://sprites.dev/account

# Voice Features (Optional)
LIVEKIT_API_KEY=...
LIVEKIT_API_SECRET=...
NEXT_PUBLIC_LIVEKIT_URL=wss://...

# Neural TTS (Optional - enhances voice quality)
ELEVENLABS_API_KEY=...              # Human-quality voices
CARTESIA_API_KEY=...                 # Ultra-low latency TTS

# Tool Integration (Optional)
COMPOSIO_API_KEY=...                 # 800+ tool integrations

Optional Optimizations

# Persistent Cache (2-3x faster sandbox creation)
SANDBOX_PERSISTENT_CACHE=true
SANDBOX_CACHE_VOLUME_NAME=global-package-cache
SANDBOX_CACHE_SIZE=2GB

# Warm Pool (instant sandbox availability)
SANDBOX_WARM_POOL=true
SANDBOX_WARM_POOL_SIZE=2

# Rate Limiting (prevent abuse)
SANDBOX_RATE_LIMITING_ENABLED=true
SANDBOX_RATE_LIMIT_COMMANDS_MAX=100
SANDBOX_RATE_LIMIT_FILE_OPS_MAX=50

# Sprites Advanced Features
SPRITES_ENABLE_TAR_PIPE_SYNC=true     # 10x faster file sync
SPRITES_ENABLE_SSHFS=true             # Mount filesystem locally
SPRITES_CHECKPOINT_AUTO_CREATE=true   # Auto-save before dangerous ops

# Video Generation (Experimental)
NEXT_PUBLIC_VIDEO_GENERATION_ENABLED=false  # Set to true to enable video generation
VIDEO_GENERATION_ALLOWED_MODELS=vercel    # Supported: vercel

# Blaxel MCP Server (for AI assistants)
BLAXEL_MCP_ENABLED=true

# Logging
LOG_LEVEL=info                       # silent | error | warn | info | debug
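The "at least ONE LLM provider" rule above can be enforced with a small startup check. The variable names come from this Configuration section; the check itself is an illustrative sketch, not binG's actual boot code:

```typescript
// Startup guard: require at least one LLM provider key to be set.
const LLM_KEYS = [
  "OPENROUTER_API_KEY",
  "GOOGLE_API_KEY",
  "ANTHROPIC_API_KEY",
  "MISTRAL_API_KEY",
  "GITHUB_MODELS_API_KEY",
];

function hasLlmProvider(env: Record<string, string | undefined>): boolean {
  // A key counts only if it is set and non-blank.
  return LLM_KEYS.some((k) => Boolean(env[k]?.trim()));
}

// e.g. at startup: if (!hasLlmProvider(process.env)) { warn and exit }
```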

🐳 Docker Deployment Guide

Prerequisites

  • Docker 20.10+
  • Docker Compose 2.0+
  • 4GB RAM minimum (8GB recommended)
  • 20GB disk space

Step 1: Clone and Configure

git clone https://github.com/quazfenton/binG.git
cd binG
cp .env.example .env

Edit .env with your API keys (see Configuration section above).

Step 2: Start Services

# Build and start all services
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f app

Step 3: Access Application

Open http://localhost:3000 in your browser.

Step 4: Production Hardening

For production deployments:

  1. Change default ports:

    # docker-compose.yml
    ports:
      - "8080:3000"  # Change to your preferred port
  2. Add SSL/TLS:

    # Use a reverse proxy like Caddy or Nginx
    docker run -d \
      -p 443:443 \
      -v /path/to/certs:/certs \
      caddy caddy reverse-proxy --from your-domain.com --to binG:3000
  3. Set up monitoring:

    # Add Prometheus/Grafana for metrics
    docker-compose -f docker-compose.monitoring.yml up -d
  4. Configure backups:

    # Backup persistent volumes
    docker run --rm \
      -v bing_database:/data \
      -v $(pwd)/backups:/backups \
      alpine tar czf /backups/database-$(date +%Y%m%d).tar.gz /data

Docker Troubleshooting

Issue: Container won't start

# Check logs
docker-compose logs app

# Rebuild container
docker-compose build --no-cache app
docker-compose up -d

Issue: Sandbox creation fails

# Verify Daytona API key
docker-compose exec app curl -H "Authorization: Bearer $DAYTONA_API_KEY" \
  https://api.daytona.io/health

# Check sandbox provider status
docker-compose logs | grep -i sandbox

Issue: High memory usage

# Limit container memory
# docker-compose.yml
services:
  app:
    deploy:
      resources:
        limits:
          memory: 2G

📊 Performance Benchmarks

| Scenario | Without Cache | With Persistent Cache |
| --- | --- | --- |
| First sandbox | 10 min | 10 min |
| Subsequent sandboxes | 10 min | 2-3 min |
| Bandwidth per user | 1.2 GB | 100 MB |
| Storage | 1.5 GB/sandbox | 2 GB shared |

Optimization Tips

  1. Enable persistent cache for teams >5 users
  2. Use warm pool for instant availability
  3. Choose regional sandbox provider for lower latency
  4. Set LOG_LEVEL=warn in production (reduces I/O)

🔐 Security Best Practices

Production Checklist

  • Change default JWT_SECRET to cryptographically secure value
  • Enable HTTPS/TLS for all traffic
  • Set up firewall rules (only expose necessary ports)
  • Configure rate limiting (prevent abuse)
  • Enable audit logging (compliance)
  • Set up monitoring/alerting (detect anomalies)
  • Regular security updates (patch dependencies)
  • Backup database daily (disaster recovery)
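One way to produce the cryptographically secure JWT_SECRET mentioned in the checklist (any secure random source works; `openssl` is just a common choice):

```shell
# Generate a random 256-bit secret for JWT_SECRET (64 hex characters).
JWT_SECRET="$(openssl rand -hex 32)"
echo "Generated a ${#JWT_SECRET}-character secret"
```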

API Key Management

Never commit API keys to version control!

# Use environment variables or secrets manager
export OPENROUTER_API_KEY="sk-or-..."

# Or use Docker secrets
docker secret create openrouter_key .env_openrouter

🛠️ Advanced Usage

Custom Sandbox Images

Create a custom Daytona image with pre-installed packages:

# Dockerfile.sandbox
FROM daytona/typescript:latest

RUN npm install -g typescript ts-node prettier eslint
RUN pip install requests flask fastapi numpy pandas

LABEL com.daytona.image="custom-typescript-full"

Build and push:

docker build -t your-registry/custom-typescript -f Dockerfile.sandbox .
docker push your-registry/custom-typescript

Configure in .env:

SANDBOX_CUSTOM_IMAGE=your-registry/custom-typescript

Multi-Agent Orchestration

Coordinate multiple AI agents for complex tasks:

// Example: code review workflow
const agents = [
  { role: 'reviewer', model: 'claude-3-5-sonnet' },  // finds bugs and style issues
  { role: 'tester', model: 'gpt-4o' },               // writes and runs tests
  { role: 'documenter', model: 'gemini-2.5-pro' },   // updates docs and comments
];

// Each agent handles its specialty

Voice Customization

Configure neural TTS voices:

# ElevenLabs voices
ELEVENLABS_VOICE_ID=EXAVITQu4vr4xnSDxMaL  # "Sarah" - Professional
ELEVENLABS_STABILITY=0.5
ELEVENLABS_SIMILARITY_BOOST=0.75

# Cartesia voices
CARTESIA_VOICE_ID=692530db-220c-4789-9917-79a844212011
CARTESIA_MODEL=sonic-english

🆕 New Features (Latest Release)

Sandbox Providers

Blaxel - Ultra-fast cloud sandboxes

  • Resume time: <25ms from standby
  • Auto scale-to-zero (free when idle)
  • Persistent volumes support
  • VPC networking for enterprise
  • Best for: Fast iteration, stateless batch processing

Fly.io Sprites - Persistent VMs with full Linux environment

  • True persistence (ext4 filesystem)
  • Hardware isolation (dedicated microVM)
  • Checkpoint system (save/restore state)
  • Auto-hibernation (<500ms wake)
  • SSHFS mount (local filesystem access)
  • Best for: Long-lived dev environments, CI/CD runners

Advanced Features

Tar-Pipe Sync - 10x faster file synchronization

  • Compressed tar stream to sandbox
  • Ideal for large projects (10+ files)
  • Reduces data transfer by 60%
  • Available for Sprites provider
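The tar-pipe idea can be demonstrated locally: stream a compressed tar of the source tree straight into the destination instead of copying files one by one. In binG the receiving side would run inside the sandbox (e.g. over an SSH channel); here both ends are local directories for illustration:

```shell
# Local demonstration of tar-pipe sync between two directories.
mkdir -p /tmp/tarpipe-src /tmp/tarpipe-dst
echo "hello" > /tmp/tarpipe-src/file.txt
# One compressed stream carries the whole tree; no per-file round trips.
tar -C /tmp/tarpipe-src -czf - . | tar -C /tmp/tarpipe-dst -xzf -
cat /tmp/tarpipe-dst/file.txt
```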

SSHFS Mount - Mount sandbox filesystem locally

  • Real-time sync between local and remote
  • Edit with your favorite local IDE
  • Available for Sprites provider
  • Requires: brew install macfuse sshfs (macOS) or apt-get install sshfs (Linux)

Checkpoint System - Save and restore sandbox state

  • Auto-create before dangerous operations
  • Manual checkpoints on demand
  • Retention policies (max count, max age)
  • Available for Sprites provider
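The retention policy above (max count, max age) amounts to filtering out expired checkpoints and capping the newest survivors. Field names here are assumptions for illustration; Sprites' actual checkpoint API may differ:

```typescript
// Illustrative retention policy: drop checkpoints older than maxAgeMs,
// then keep only the maxCount newest of what remains.
type Checkpoint = { id: string; createdAt: number };

function prune(
  checkpoints: Checkpoint[],
  maxCount: number,
  maxAgeMs: number,
  now: number,
): Checkpoint[] {
  return checkpoints
    .filter((c) => now - c.createdAt <= maxAgeMs) // enforce max age
    .sort((a, b) => b.createdAt - a.createdAt)    // newest first
    .slice(0, maxCount);                          // enforce max count
}
```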

MCP Server - Expose sandbox to AI assistants

  • Model Context Protocol integration
  • Works with Cursor, Claude Desktop, etc.
  • Tools: execute_command, write_file, read_file, list_directory
  • Available for Blaxel provider

Rate Limiting - Prevent abuse and manage resources

  • Per-user or per-IP limits
  • Configurable per operation type
  • Automatic cleanup of expired entries
  • Express middleware integration
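A sliding-window limiter in the spirit of the per-user limits above can be sketched in a few lines; the window bookkeeping here is illustrative, not binG's middleware:

```typescript
// Hypothetical sliding-window rate limiter keyed per user or IP.
const windows = new Map<string, number[]>();

function allow(key: string, max: number, windowMs: number, now = Date.now()): boolean {
  // Keep only hits that are still inside the window (automatic cleanup).
  const hits = (windows.get(key) ?? []).filter((t) => now - t < windowMs);
  if (hits.length >= max) {
    windows.set(key, hits);
    return false; // over the limit for this window
  }
  hits.push(now);
  windows.set(key, hits);
  return true;
}
```

Wrapping `allow` in an Express middleware is then a matter of deriving `key` from the request (user ID or IP) and responding 429 when it returns false.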

📚 Documentation

AI Agent & Vercel AI SDK

Core Features

Advanced Features


🤝 Contributing

We welcome contributions!

Development Setup

# Fork and clone
git clone https://github.com/YOUR_USERNAME/binG.git
cd binG

# Install dependencies
pnpm install

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
pnpm dev
pnpm test

# Commit and push
git commit -m "feat: add your feature"
git push origin feature/your-feature

For major changes, please open an issue first to discuss what you would like to change.


🧪 Testing

Run Tests

# Run all tests
pnpm test

# Run E2E tests (Playwright)
npx playwright test

# Run unit tests (Vitest)
npx vitest run

# Run component tests
npx vitest run __tests__/components/

# Run visual regression tests
npx playwright test tests/e2e/visual-regression.test.ts

# Run performance tests with recommendations
npx playwright test tests/e2e/performance-advanced.test.ts

# View HTML report
npx playwright show-report

Test Coverage

  • E2E Tests: 80+ tests for all major workflows
  • Component Tests: 20+ React component tests
  • Contract Tests: 27+ API schema validation tests
  • Visual Regression: 15+ screenshot baseline tests
  • Performance Tests: 25+ benchmark tests
  • Total: 349+ tests across 43+ test files

See Test Coverage Report for details.



📄 License

MIT License - See LICENSE file for details.


🙏 Acknowledgments


📬 Support


Built with ❤️ by the binG Team

Last Updated: December 2024
Version: 2.0.0

About

An agent workspace for web, CLI, and desktop with coding/IDE functionality and a live preview system; cloud workers and task automation; sandboxed execution; third-party platform integration and MCP tools; free LLM chat and voice across a wide selection of providers and models, with customizable orchestration; a frontend for Codex/Claude; and a virtual filesystem with cloud storage.
