
Decepticon Logo

Decepticon — Autonomous Red Team Agent

"Another AI hacker? Let us guess — it runs nmap and writes a report."


Demo video: v1demo.mp4

Install

Prerequisites: Docker and Docker Compose v2. Supported on macOS (Apple Silicon + Intel), Linux (amd64 + arm64), and Windows via WSL2 (Ubuntu or Kali). Native Windows is not supported — install WSL2 first, then run the commands below from inside the WSL shell.

```bash
curl -fsSL https://decepticon.red/install | bash
decepticon onboard   # Interactive setup wizard (provider, API key, model profile)
decepticon           # Start everything: terminal CLI + web dashboard at http://localhost:3000
```

Quick start · Full setup walkthrough


💖 Support Decepticon

Sponsor

We're building Decepticon toward an Offensive Vaccine for the AI-driven threat landscape. If you believe in autonomous red teaming as a path to stronger defense, consider supporting the project.


Benchmark Results

| Benchmark | Difficulty | Pass Rate |
| --- | --- | --- |
| XBOW validation-benchmarks | Hard (Level 3) | 7 / 8 |

Full benchmark index


What is Decepticon?

The "AI + hacking" space is full of demos that run nmap and print a report. That's not what this is.

Decepticon is a professional autonomous Red Team agent. It executes realistic attack chains — reconnaissance, exploitation, privilege escalation, lateral movement, C2 — the way a real adversary would, not the way a scanner does.

But more importantly: it operates under the discipline that separates red teamers from script kiddies. Before a single packet leaves the wire, Decepticon generates a complete engagement package — RoE, ConOps, Deconfliction Plan, and OPPLAN with MITRE ATT&CK mapping — and every action runs inside those defined rules.

Engagement workflow deep dive


Why Decepticon?

Real kill chains, not checkbox scans. Decepticon reads an OPPLAN and pursues objectives through whatever path opens up — pivoting, adapting, chaining techniques.

Interactive shells, actually. Real offensive tools are interactive (msfconsole, sliver-client, evil-winrm). Decepticon runs every command inside persistent tmux sessions with automatic prompt detection — so when a tool drops into an interactive prompt, the agent sends follow-up commands without workarounds.
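The core of that loop is deciding, from captured pane output, whether the tool has settled at an interactive prompt. A minimal sketch of that idea is below; the prompt patterns are illustrative assumptions, not Decepticon's actual detection rules:

```python
import re

# Hypothetical prompt patterns for common interactive offensive tools.
# Real detection would be configurable per tool; these are assumptions.
PROMPT_PATTERNS = [
    re.compile(r"msf6?\s*(exploit\([^)]*\))?\s*>\s*$"),   # msfconsole
    re.compile(r"sliver\s*>\s*$"),                        # sliver-client
    re.compile(r"\*Evil-WinRM\*\s+PS\s+[^>]*>\s*$"),      # evil-winrm
    re.compile(r"[$#]\s*$"),                              # generic shell
]

def detect_prompt(pane_text: str) -> bool:
    """True if the last non-empty line of a captured tmux pane looks like a prompt."""
    lines = [l for l in pane_text.splitlines() if l.strip()]
    if not lines:
        return False
    return any(p.search(lines[-1]) for p in PROMPT_PATTERNS)
```

When the check returns True, the driver knows it can `send-keys` the next command into the session instead of waiting on a process exit that will never come.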

Hardened sandbox isolation. All commands run inside a Kali Linux sandbox on a dedicated operational network (sandbox-net), separate from the management plane (decepticon-net). LangGraph drives the sandbox via the Docker socket. → Architecture

Offense serves defense. The planned Offensive Vaccine loop will turn findings into defense improvements through an attack → defend → verify cycle.


Architecture

Decepticon Infrastructure

Two-network design — management services (LiteLLM, PostgreSQL, LangGraph, Web) on decepticon-net; sandbox, C2 server, and targets on sandbox-net. Neo4j is dual-homed so that findings written from inside the sandbox are available to the agent on the management plane.
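The isolation property falls out of network membership: two services can talk only if they share a network. A toy model of the topology described above (service names are assumptions drawn from the text, not the project's actual compose file):

```python
# Illustrative model of the two-network design. Membership sets follow the
# README's description; exact service names are assumptions.
NETWORKS = {
    "decepticon-net": {"litellm", "postgres", "langgraph", "web", "neo4j"},
    "sandbox-net": {"sandbox", "c2", "targets", "neo4j"},  # neo4j is dual-homed
}

def reachable(a: str, b: str) -> bool:
    """Two services can communicate only if they share at least one network."""
    return any(a in members and b in members for members in NETWORKS.values())
```

So LangGraph reaches Neo4j over the management plane, the sandbox reaches Neo4j over the operational network, but LangGraph never reaches the sandbox directly except via the Docker socket.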

Architecture deep dive · Knowledge graph


Agents

16 specialist agents organized by kill chain phase, with a fresh context window per objective — no accumulated noise.

Orchestration · Reconnaissance · Exploitation · Post-Exploitation · Vulnerability Research · Domain Specialists (AD, Cloud, Smart Contracts, Reversing, Analyst).
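"Fresh context window per objective" means the message list is rebuilt from scratch on every dispatch rather than accumulating across objectives. A hypothetical sketch of that dispatch shape (the `Agent` type and field names are illustrative, not Decepticon's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    prompt: str                          # specialist system prompt
    llm: Callable[[list[dict]], str]     # model call for this agent's tier

def run_objective(agent: Agent, objective: str, findings: str) -> str:
    # Context is rebuilt on every call: only the system prompt plus the
    # current objective and relevant findings -- no carried-over history.
    context = [
        {"role": "system", "content": agent.prompt},
        {"role": "user", "content": f"Objective: {objective}\nFindings: {findings}"},
    ]
    return agent.llm(context)
```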

Full agent roster and middleware stack


Models & Providers

Tier-based, credentials-aware fallback chain. You declare which credentials you have in priority order; Decepticon builds the primary→fallback chain at every tier from there.

| Profile | Tier per agent | Use case |
| --- | --- | --- |
| eco (default) | Per-agent (HIGH for orchestrator/exploiter/patcher/analyst, MID for execution, LOW for recon/soundwave) | Production |
| max | Every agent on HIGH | High-value targets |
| test | Every agent on LOW | Development / CI |
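The chain construction itself is simple: for each tier, walk the declared credentials in priority order and keep the providers that serve that tier. A hedged sketch, with placeholder provider and model names rather than Decepticon's real configuration:

```python
# Hypothetical tier-to-model table; names are placeholders for illustration.
TIER_MODELS = {
    "HIGH": {"anthropic": "claude-high", "openai": "gpt-high", "ollama": "local-high"},
    "MID":  {"anthropic": "claude-mid",  "openai": "gpt-mid",  "ollama": "local-mid"},
    "LOW":  {"anthropic": "claude-low",  "openai": "gpt-low",  "ollama": "local-low"},
}

def build_chain(tier: str, credentials: list[str]) -> list[str]:
    """Primary-then-fallback model list for one tier, in declared credential priority."""
    return [TIER_MODELS[tier][p] for p in credentials if p in TIER_MODELS[tier]]
```

The first entry is the primary model for that tier; everything after it is the fallback order.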

Tier-mapped providers: Anthropic, OpenAI, Google Gemini, MiniMax, DeepSeek, xAI, Mistral, OpenRouter, Nvidia NIM, Ollama (local). Subscription OAuth: Claude Max/Pro/Team, ChatGPT Pro/Plus/Team, Gemini Advanced, Copilot Pro, SuperGrok, Perplexity Pro.

Configure via decepticon onboard. → Full model reference & fallback examples


Documentation

| Topic | Doc |
| --- | --- |
| Installation and first engagement | Getting Started |
| Complete setup, OAuth, providers, dashboard | Setup Guide |
| All CLI commands and keyboard shortcuts | CLI Reference |
| All make targets | Makefile Reference |
| Agent roster and middleware | Agents |
| Model profiles and fallback chain | Models |
| Skill system and format spec | Skills |
| Web dashboard features and setup | Web Dashboard |
| System architecture and network isolation | Architecture |
| Neo4j knowledge graph | Knowledge Graph |
| End-to-end engagement workflow | Engagement Workflow |
| Offensive Vaccine loop | Offensive Vaccine |
| Contributing to Decepticon | Contributing |

Contributing

```bash
git clone https://github.com/PurpleAILAB/Decepticon.git
cd Decepticon
make dogfood  # Full OSS UX (launcher → onboard → CLI) on local code
make dev      # Backend hot-reload (compose watch) — daily dev loop
```

Contributing guide


Community

Join the Discord — ask questions, share engagement logs, discuss techniques.


Disclaimer

Do not use this project on any system or network without explicit written authorization from the system owner. Unauthorized access to computer systems is illegal. You are solely responsible for your actions. The authors and contributors assume no liability for misuse.


License

Apache-2.0

