AI coding agents can read your code, reason about changes, and act on your behalf. To choose the right one, it helps to understand the four common workflow types: integrated development environment (IDE), terminal, pull request (PR), and cloud.
In this tutorial, you’ll:
- Identify the four common agent interaction modes
- Understand what makes each workflow distinct
- Recognize which mode fits common development scenarios
- Weigh the risks and tradeoffs of each workflow
Before exploring the four workflow types, it’s worth looking at what makes a coding tool agentic in the first place.
Take the Quiz: Test your knowledge with our interactive “AI Coding Agents Guide: A Map of the Four Workflow Types” quiz. You’ll receive a score upon completion to help you track your learning progress:
Interactive Quiz
AI Coding Agents Guide: A Map of the Four Workflow Types. Check your understanding of how AI coding agents fit into your workflow through four interaction modes: IDE, terminal, pull request, and cloud.
Get Your Cheat Sheet: Click here to download your free AI coding agents cheat sheet and keep the four workflow types at your fingertips when choosing the right agent for the job.
Understanding AI Coding Agents
While standard chatbots provide one-off answers, coding agents are designed for autonomy, operating through a continuous execution loop to solve complex tasks. This loop typically follows four distinct steps:
- Read: They read relevant files from your codebase to form their context.
- Reason: They determine the logical steps needed to achieve your goal.
- Act: They execute those steps by editing files, running terminal commands, or using external tools.
- Evaluate: They check the results of their actions to see if more work is needed.
This loop repeats until the task is completed or the agent hands control back to you. Unlike simple predictive text or one-off prompts, agents bridge the gap between suggestion and execution by autonomously navigating the development workflow.
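The four steps above can be sketched as a simple loop. Every function name below is a hypothetical placeholder for illustration, not any real agent's API; a real agent would read files from disk, call a language model, and run tools at each step:

```python
def run_agent(task, max_iterations=5):
    """Toy sketch of the read-reason-act-evaluate loop."""
    for _ in range(max_iterations):
        context = read_context(task)        # Read: gather relevant files
        plan = reason_about(task, context)  # Reason: decide the next steps
        result = act_on(plan)               # Act: edit files, run commands
        if evaluate(task, result):          # Evaluate: is the task done?
            return result
    return None  # Give up and hand control back to the user

# Minimal stand-ins so the sketch runs end to end:
def read_context(task):
    return f"files related to {task!r}"

def reason_about(task, context):
    return ["edit file", "run tests"]

def act_on(plan):
    return {"steps_done": len(plan)}

def evaluate(task, result):
    return result["steps_done"] > 0
```

The key point is the `if evaluate(...)` check: the agent decides for itself whether to keep iterating, which is what separates it from a single prompt-and-response exchange.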
The core agent loop will generally stay the same, but where an agent runs will shape how you interact with it:
- In an editor, it works alongside you.
- In a terminal, you guide it step by step.
- In pull requests, it reviews changes asynchronously.
- In the cloud, it works in a managed environment and reports back later.
These environments define four primary agent types, each enabling a distinct workflow: IDE agents, terminal agents, PR agents, and cloud agents.
Exploring the Four Workflow Types
The four workflow types describe interaction modes and don’t always map cleanly to product categories. The same tool often spans multiple workflows. For example, Claude Code runs in your terminal, in your editor, and in the cloud with Claude Code on the web. It can also review pull requests with Code Review.
The goal is to match the workflow to the task. The diagram below summarizes the four types at a glance:

This chart gives you a quick reference for comparing the four types. The sections below dig into each one.
IDE Agents
IDE agents live inside your code editor and work alongside you in real time. They suggest edits inline, show visual diffs, and let you accept or reject changes without leaving your editing environment.
This category has two common forms. AI-native IDEs such as Cursor, Windsurf, and Kiro are built from the ground up around AI capabilities. Some AI-native IDEs, especially tools like Kiro, support a more spec-driven workflow where you describe the task upfront and let the agent work through it.
IDE integrations like the GitHub Copilot extension, the Claude Code extension for VS Code, and Gemini Code Assist add agent features to editors you already use. Compared to AI-native IDEs, IDE integrations usually fit a more file-targeted workflow centered on interactive editing and refactoring. That said, the choice comes down to personal preference, so try a few and see which style suits your workflow.
Keep in mind that cloud-backed IDE agents send your code to external servers for processing. Often, teams require approved tooling or local-only models for privacy reasons. Tools like Continue let you run models locally if your code can’t leave your machine.
Terminal Agents
Terminal agents run in your shell. You describe a task, and the agent reads files, proposes edits, and runs commands. You generally approve or reject each step before the agent moves on.
The terminal workflow works well for complex changes and for navigating large codebases. You can point the agent at your whole project, let it trace through imports and related files, and have it propose coordinated changes across many modules. Terminal agents are also helpful when you’re jumping into a new codebase and need to get up to speed quickly. This category includes tools such as Claude Code, Aider, Gemini CLI, OpenCode, and Codex CLI.
For a direct comparison of two of the tools in this list, Real Python’s tutorial on Gemini CLI vs Claude Code walks through how each one handles common Python tasks.
Learn Claude Code Live: Want to see Claude Code in action? Join our live 2-day course, Claude Code: AI-Assisted Development in the Terminal, for a hands-on walkthrough with live Q&A.
Because terminal agents operate in the shell, they integrate seamlessly with your existing development workflow. The most common way to interact with them is to launch them in your terminal and use their built-in interface. More advanced workflows include steps such as piping logs into them, chaining them with other CLI tools, and running them inside automation scripts.
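The piping pattern can be sketched in a few lines of shell. Here, `agent` is a hypothetical stand-in defined as a shell function for the demo; real tools like Claude Code and Aider have their own flags for non-interactive use, so check their documentation for the exact invocation:

```shell
# Hypothetical stand-in for a terminal agent's non-interactive mode.
# A real agent would send the piped text to a model; this demo just
# counts what it received.
agent() {
    lines=$(( $(wc -l) ))  # Read the piped input from stdin
    echo "agent received $lines lines with prompt: $2"
}

# Pipe a build log into the agent, the same way you'd pipe it
# into grep or less:
printf 'error: missing import\nerror: bad type\n' > build.log
agent -p "Summarize these build errors" < build.log
```

Because the agent behaves like any other shell command, it composes with your existing tools: you can feed it the output of a failing test run, or chain it after a `git diff`.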
When you run the agent interactively, the step-by-step approval model gives you high control while still letting the agent handle the heavy lifting. If you want longer sessions without continuous approval prompts, you can explore auto mode in Claude Code.
Some terminal agents can connect to local models through tools like Ollama. If you’re unable to send code to external services due to company policies on proprietary code, a local model setup can be a good option to explore.
Pull Request Agents
Pull request (PR) agents are structurally different from the other three types. They’re asynchronous, meaning you don’t watch the agent work in real time. Instead, the agent often triggers automatically when a pull request is opened or updated. It runs on its own schedule, flags issues, suggests fixes, and leaves comments for you to review.
This workflow generally operates on shared branches visible to everyone on the team, not on your local workspace. The verification process involves human code review. The agent flags potential bugs, style violations, and logic issues, but a human reviewer makes the final call on whether to merge. In other words, PR agents usually act as a safety net before merging rather than as a tool you steer live while coding.
PR-agent workflows typically center on version control platforms like GitHub, GitLab, and Bitbucket. Tools like CodeRabbit and GitHub Copilot code review support this workflow. Even so, they don’t have to be triggered only from your version control platform. For example, GitHub Copilot code review can also be requested from places like VS Code, the GitHub Copilot CLI, a mobile device, and more.
In practice, the workflow looks like this: You open a pull request, and after some time, the agent posts a review with comments about your code. It might catch an unhandled edge case, flag a missing test, or suggest a cleaner approach. You respond to its comments just as you would to a teammate’s review by accepting the suggestion, pushing a fix, or dismissing it.
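For teams wiring a PR agent into their own CI, the setup is usually a short workflow file. The sketch below assumes Anthropic’s `claude-code-action`; treat the action name, version, and inputs as assumptions to verify against the project’s current README before using it:

```yaml
# .github/workflows/agent-review.yml (hypothetical sketch)
name: Agent code review
on:
  pull_request:  # Trigger when a PR is opened or updated

jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write  # Needed so the agent can leave comments
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          prompt: "Review this pull request for bugs and style issues"
```

The `pull_request` trigger is what makes the workflow asynchronous: the agent runs whenever a PR changes, with no action needed from you.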
Keep in mind that on many teams, repository-level AI tools are approved or blocked centrally, so privacy decisions often happen at the organizational level rather than the individual level.
Cloud Agents
Cloud agents generally offer the most autonomy. You describe a task, the agent works in a remote or managed environment, and later reports back with a branch, pull request, or prototype.
This makes cloud agents a good fit for greenfield prototyping or work that takes longer than you’d want to sit and supervise. This category includes tools such as Devin, Claude Code on the web, Codex web, and Cursor’s Cloud Agents.
You can often access cloud agents through Slack, issue trackers, or a web browser. For example, you can mention Claude with @Claude in Slack and ask it to complete a specific task for you, and it will spin up a Claude Code session on the web.
There’s a tradeoff, though. You get more autonomy but often give up real-time control, because your code runs on infrastructure outside your local machine. That remote execution makes cloud agents most useful when the task is clearly scoped and the output is easy to review, such as a branch, pull request, or prototype.
Not every cloud agent uses the same execution model. Claude Code on the web runs on Anthropic-managed cloud infrastructure. GitHub’s Copilot cloud agent runs in an ephemeral development environment powered by GitHub Actions. Cursor’s Cloud Agents can also run on machines you control through My Machines. In short, where a cloud agent actually executes depends on the vendor, so factor that into your privacy compliance checks.
As with all AI-generated code, human review remains essential. That matters even more with cloud agents because they operate with a high level of autonomy. Every team has its own guidelines on working with AI-generated code, but a good rule of thumb is to never push or ship code that you haven’t laid eyes on.
Many cloud agents rely on vendor-managed infrastructure, and some organizations block them due to security or compliance requirements. Other cloud agents can run against machines you control. Either way, check your company’s policy before using them with proprietary code.
Navigating Category Overlap
Tool overlap is common. Three tools make that especially clear because each one shows up across all four workflow types.
- Claude Code spans all four workflows. In the terminal, it works as a shell-based agent. In an IDE, it has native integrations for editors like VS Code and JetBrains. In the cloud, it runs as Claude Code on the web. For PR workflows, Anthropic offers Code Review as a managed service and Claude Code GitHub Actions for teams running their own CI pipelines.
- Cursor covers all four workflows, too. Its main editor experience is its IDE. Cursor CLI handles terminal use, Cloud Agents manages cloud execution, and Bugbot automates pull request reviews.
- GitHub Copilot also spans all four workflows. You can run it in your IDE, take it to the terminal with GitHub Copilot CLI, request PR feedback with GitHub Copilot code review, and hand off background work to a GitHub Copilot cloud agent.
Note: Agent frameworks like LangGraph and CrewAI are related but distinct. They’re libraries for building agents, not tools for writing code. If you’re interested in building your own AI agents, Real Python’s LLM Application Development learning path is a good starting point. You can also explore how to Build an LLM RAG Chatbot With LangChain for a hands-on project that uses agent-like patterns.
The takeaway is that the taxonomy in this tutorial describes workflows, meaning the ways you work with an agent rather than the products themselves. The product might stay the same, but the interaction mode changes depending on where and how you use it. Most agentic coding tools now span more than one category, and that overlap will likely keep growing.
Avoiding Common Pitfalls
With agentic coding becoming increasingly powerful, it’s easier than ever to fall into common traps. To get the full benefit of coding agents without compromising quality, privacy, or control, watch out for these mistakes:
- Assuming one agent type handles everything: IDE agents excel at interactive editing, terminal agents handle complex multi-file changes, PR agents catch issues asynchronously, and cloud agents tackle brand-new features and prototyping. Matching the workflow to the task matters more than picking a single tool.
- Ignoring privacy and compliance constraints: Many cloud agents run on remote infrastructure. PR agents operate in shared repositories. IDE and terminal agents with cloud backends send code to external APIs for inference. Before adopting any agent, check whether your code is allowed to leave your local machine and review your company’s policies on AI tool usage. Some teams can only use local models or self-managed environments.
- Over-automating without review: Assume AI-generated code contains mistakes. It may have subtle bugs, weak exception handling, or patterns that don’t match your team’s conventions. Review all generated code carefully before merging. The more autonomous the agent, the more important developer oversight becomes. A careful review is much cheaper than a production failure.
Coding agents pay off when you treat them as collaborators, not replacements. Pick the right agent for each task, respect your team’s privacy boundaries, and keep a human reviewer in the loop.
Conclusion
Agentic coding is already reshaping how developers write, review, and ship code. Not all agents work the same way, though. The interaction mode matters as much as the tool itself.
In this tutorial, you’ve learned how to:
- Identify the four coding agent workflow types: IDE, terminal, PR, and cloud
- Understand what makes each workflow distinct, from real-time inline editing to highly autonomous cloud tasks
- Recognize which mode fits common development scenarios
- Weigh the risks and tradeoffs associated with each workflow
The boundaries between these categories will keep blurring as tools evolve. Rather than chasing the “best” tool, focus on the interaction mode that matches your current task. Use IDE agents for interactive editing, terminal agents for complex multi-file work, PR agents for automated review, and cloud agents for well-scoped tasks that can run in the background and be reviewed later.
To continue building your skills with AI-assisted development, explore Real Python’s Python Coding With AI learning path and the AI Coding Tools reference page for a comprehensive look at the tools available today.
Frequently Asked Questions
Now that you have some experience with AI coding agents, you can use the questions and answers below to check your understanding and recap what you’ve learned.
These FAQs are related to the most important concepts you’ve covered in this tutorial. Click the Show/Hide toggle beside each question to reveal the answer.
An AI coding agent is a tool that reads your code, reasons about changes, and acts on your behalf through a continuous loop of reading, reasoning, acting, and evaluating. That loop is what separates an agent from autocomplete or a one-shot chatbot exchange.
The four workflow types are IDE agents, terminal agents, pull request (PR) agents, and cloud agents. They differ by where the agent runs, how you interact with it, and how much autonomy it has.
Chatbots respond with one-off answers, while coding agents operate through a continuous execution loop and can edit files, run commands, and use external tools. Agents keep working until the task is done or they hand control back to you.
Some agents can connect to local models through tools like Ollama or Continue, which keeps your code on your machine. This option matters when company policies prevent sending proprietary code to external services.
Yes, human review remains essential for all AI-generated code. The more autonomous the agent, the more important careful review becomes, so a good rule of thumb is to never push or ship code that you haven’t laid eyes on.