<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aditya Agarwal</title>
    <description>The latest articles on DEV Community by Aditya Agarwal (@adioof).</description>
    <link>https://dev.to/adioof</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2760047%2F17358ceb-daca-46e9-9a88-1904b8402d3f.jpg</url>
      <title>DEV Community: Aditya Agarwal</title>
      <link>https://dev.to/adioof</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adioof"/>
    <language>en</language>
    <item>
      <title>AI killed the junior dev ladder. Nobody has a plan for what's next.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Mon, 20 Apr 2026 19:26:04 +0000</pubDate>
      <link>https://dev.to/adioof/ai-killed-the-junior-dev-ladder-nobody-has-a-plan-for-whats-next-4p28</link>
      <guid>https://dev.to/adioof/ai-killed-the-junior-dev-ladder-nobody-has-a-plan-for-whats-next-4p28</guid>
<description>&lt;p&gt;We are building an industry that runs on senior engineers while actively eliminating the very process that produces them. This is not a hot take. This is the math.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Ladder Is Disappearing
&lt;/h2&gt;

&lt;p&gt;Junior dev work has always been the training ground. You build a button component. You wire up a form. You fix a CSS bug and seriously reevaluate your life decisions. Those reps are how you learn to think about systems.&lt;/p&gt;

&lt;p&gt;Now those tasks are increasingly AI-generated. Simple UI components. Boilerplate endpoints. Basic CRUD. The exact work that used to be a junior's entire job for the first year or two.&lt;/p&gt;

&lt;p&gt;Reports are already surfacing of frontend devs being cut from teams because AI now handles their workload. Not senior devs. Not architects. The juniors. The ones who were supposed to become the next generation of senior talent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Nobody Is Talking About the Pipeline
&lt;/h2&gt;

&lt;p&gt;Here's the thing that gets me. Every major tech company is out here racing to ship AI coding tools. They're all hyping "10x developer productivity" and "do more with fewer engineers."&lt;/p&gt;

&lt;p&gt;Not a single one of them has publicly grappled with the obvious follow-up question: &lt;strong&gt;if AI is handling junior-level work, how does anyone become senior?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It's like removing the minor leagues and wondering why MLB talent dried up five years later. 🤷&lt;/p&gt;

&lt;p&gt;Senior engineers didn't emerge from a vacuum. They got there by doing junior work badly, getting feedback, and slowly building intuition. That process takes years of hands-on reps.&lt;/p&gt;

&lt;p&gt;→ You don't learn system design by reading about it.&lt;br&gt;
→ You learn it by building something that breaks at scale.&lt;br&gt;
→ You learn debugging by staring at a bug for three hours, not by asking a model to fix it.&lt;br&gt;
→ You learn code review by having your own code torn apart.&lt;/p&gt;

&lt;p&gt;Remove those reps and you don't get "AI-augmented juniors." You get people who never develop the instincts that make senior engineers valuable.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Structural Paradox
&lt;/h2&gt;

&lt;p&gt;The industry desperately needs senior engineers. Every company out there is crying about the talent shortage. Every hiring manager will tell you they can't find enough experienced people.&lt;/p&gt;

&lt;p&gt;And yet the collective response is to automate away the exact experience that produces those people. &lt;strong&gt;We're eating the seed corn.&lt;/strong&gt; 🌽&lt;/p&gt;

&lt;p&gt;Some will argue that juniors should just "adapt" — learn to prompt better, focus on architecture earlier, skip the grunt work. I'm skeptical. That's like saying medical students should skip residency and just supervise robots. The grunt work &lt;em&gt;is&lt;/em&gt; the education.&lt;/p&gt;

&lt;p&gt;Others will say AI creates new types of junior work. Maybe. But nobody has defined what that work looks like. Nobody has built the new ladder. We're just pulling up the old one and hoping something appears.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Isn't a Tech Problem
&lt;/h2&gt;

&lt;p&gt;This is a &lt;strong&gt;people problem&lt;/strong&gt; disguised as a tech story. The question isn't whether AI can write a React component. Obviously it can.&lt;/p&gt;

&lt;p&gt;The question is what happens to the industry in five to ten years when the current batch of seniors burns out, retires, or moves into management — and there's no one behind them with real experience.&lt;/p&gt;

&lt;p&gt;We're optimizing for short-term productivity while creating a long-term talent crisis. Every team shipping faster today with fewer juniors is borrowing against a future they haven't thought about.&lt;/p&gt;

&lt;p&gt;I don't have a clean answer here. But I know that "just let the market figure it out" isn't a plan. It's a cop-out. And the ones who'll pay the price are the 22-year-olds graduating right now into an industry that automated their first rung and called it progress. 😐&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's your take — is there a realistic path for juniors to build real skills in an AI-first world, or are we sleepwalking into a talent collapse?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>ai</category>
      <category>hiring</category>
      <category>opinion</category>
    </item>
    <item>
      <title>Vibe coding produces the silhouette of software, not software</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Mon, 20 Apr 2026 13:34:52 +0000</pubDate>
      <link>https://dev.to/adioof/vibe-coding-produces-the-silhouette-of-software-not-software-nhg</link>
      <guid>https://dev.to/adioof/vibe-coding-produces-the-silhouette-of-software-not-software-nhg</guid>
      <description>&lt;p&gt;It seems like every week there's a new "Here's a full-stack app I made in 48 hours with AI!" screen recording that wows you visually. Then you click the repo link and you're just… staring at a corpse in a suit.&lt;/p&gt;

&lt;p&gt;The term "vibe coding" has entered the lexicon. It describes shipping AI-generated code you haven't actually reviewed. You prompt, you accept, you deploy. Vibes.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Silhouette Problem
&lt;/h2&gt;

&lt;p&gt;There's a metaphor floating around developer forums that nails this perfectly: AI-generated apps are &lt;strong&gt;silhouettes&lt;/strong&gt; of software. From a distance, the shape is right. The outline matches. But there's nothing behind it.&lt;/p&gt;

&lt;p&gt;A silhouette of a chair looks like a chair. You still can't sit in it.&lt;/p&gt;

&lt;p&gt;These demo apps pass what I'd call the &lt;strong&gt;screenshot test&lt;/strong&gt;. They have auth screens, dashboards, CRUD operations, maybe even a dark mode toggle. But try to do anything slightly off the happy path and the whole thing folds like wet cardboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Cargo Cult Engineering
&lt;/h2&gt;

&lt;p&gt;The "I built a Reddit clone in a weekend" genre is cargo cult engineering at scale. It mimics the &lt;strong&gt;artifacts&lt;/strong&gt; of real software — routes, components, database tables — without any of the &lt;strong&gt;decisions&lt;/strong&gt; that make software survive contact with users.&lt;/p&gt;

&lt;p&gt;→ No input validation beyond what the framework gives you for free.&lt;br&gt;
→ No error handling strategy. Just silent failures everywhere.&lt;br&gt;
→ No concept of edge cases, race conditions, or concurrent users.&lt;br&gt;
→ No tests. Obviously no tests.&lt;/p&gt;

&lt;p&gt;Real software is boring. It's the 200 lines of retry logic around a flaky third-party API. It's the migration script that handles the column rename without dropping production data. It's the argument your team had for three days about whether that field should be nullable.&lt;/p&gt;

&lt;p&gt;Vibe coding skips all of that. That's the point. That's also the problem.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Deletion That Says Everything
&lt;/h2&gt;

&lt;p&gt;A public online thread where veteran developers criticized vibe coding got memory-holed by the mods, apparently. I find that telling.&lt;/p&gt;

&lt;p&gt;This is an uncomfortable conversation because it pokes at something the industry wants to believe right now. We &lt;strong&gt;want&lt;/strong&gt; AI to make us 10x faster. But pointing out that "10x faster to a demo" and "10x faster to production" are wildly different claims is a buzzkill. It also happens to be true.&lt;/p&gt;

&lt;p&gt;Junior developers are especially vulnerable here. If you've never built software that had to survive a normal Tuesday afternoon with real users, a vibe-coded app genuinely looks complete to you. You don't know what you're not seeing. That's not a character flaw — it's an experience gap. But the gap is real 🔥&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Matters
&lt;/h2&gt;

&lt;p&gt;I'm not anti-AI in coding. I use it daily. The difference is I treat AI output the way I'd treat code from an enthusiastic intern — potentially useful, definitely needs review, occasionally dangerous.&lt;/p&gt;

&lt;p&gt;→ AI is great at &lt;strong&gt;generating boilerplate&lt;/strong&gt; I was going to write anyway.&lt;br&gt;
→ AI is terrible at &lt;strong&gt;making architectural decisions&lt;/strong&gt; it doesn't understand the context for.&lt;br&gt;
→ The value of a senior engineer was never typing speed. It was judgment.&lt;/p&gt;

&lt;p&gt;Vibe coding removes judgment from the loop and calls it productivity. That's not a workflow improvement. That's a regression dressed up in a time-lapse video 😅&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Cost
&lt;/h2&gt;

&lt;p&gt;The silhouette ships fast. Then someone has to maintain it. That someone opens the codebase, sees thousands of lines of AI-generated code with no clear intent behind any of it, and starts over from scratch.&lt;/p&gt;

&lt;p&gt;I've seen this happen. The rewrite takes longer than building it right would have in the first place. Every single time.&lt;/p&gt;

&lt;p&gt;The demo impressed people. The deploy didn't. &lt;strong&gt;Software isn't a screenshot.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;I'm curious to know — where do you draw the line between using AI as a tool and letting it drive? Have you ever inherited a vibe-coded project? I'm here for the war stories 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>2000 modules to render a button. Web dev earned this reputation.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Mon, 20 Apr 2026 10:23:25 +0000</pubDate>
      <link>https://dev.to/adioof/2000-modules-to-render-a-button-web-dev-earned-this-reputation-3fe8</link>
      <guid>https://dev.to/adioof/2000-modules-to-render-a-button-web-dev-earned-this-reputation-3fe8</guid>
      <description>&lt;p&gt;A viral thread just asked developers what feels over-engineered in modern web dev. The answers hit like a group therapy session nobody knew they needed.&lt;/p&gt;

&lt;p&gt;Thousands of comments. The loudest ones all circled the same absurdity: we're compiling thousands of modules to ship a button that says "Submit." And honestly? Nobody could argue back.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Space Shuttle Problem
&lt;/h2&gt;

&lt;p&gt;Someone compared modern front-end tooling to building a space shuttle for a marketing site. That metaphor is painfully accurate.&lt;/p&gt;

&lt;p&gt;You need a static page with a contact form. By the time you're done, you've got a bundler, a transpiler, a CSS-in-JS library, a state manager, a routing layer, and a deployment pipeline that would make NASA nervous. The button works, though. So that's nice.&lt;/p&gt;

&lt;p&gt;The thing is, none of these tools are &lt;em&gt;bad&lt;/em&gt;. Webpack solved real problems. React solved real problems. TypeScript solved real problems. But stacking all of them on a five-page site isn't solving a problem — it's performing competence.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why We Keep Doing This
&lt;/h2&gt;

&lt;p&gt;Here's the part nobody wants to say out loud: &lt;strong&gt;simple solutions don't get you promoted.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Nobody writes a blog post about the time they shipped a project with plain HTML and a single CSS file. Nobody gets a senior title for choosing the boring option. The incentive structure in our industry rewards complexity, even when that complexity serves the toolchain more than the user.&lt;/p&gt;

&lt;p&gt;→ Complex architectures look impressive in system design interviews&lt;br&gt;
→ "I built it with zero dependencies" doesn't make your résumé pop&lt;br&gt;
→ Conference talks about simple solutions don't fill rooms&lt;br&gt;
→ Pull requests with 47 new packages &lt;em&gt;feel&lt;/em&gt; like progress&lt;/p&gt;

&lt;p&gt;We cargo-cult complexity because the industry tells us complexity equals skill. Then we wonder why onboarding a junior dev takes three weeks before they can touch a single component.&lt;/p&gt;

&lt;h2&gt;
  
  
  It Wasn't Always Like This (But Also It Was)
&lt;/h2&gt;

&lt;p&gt;I'm not here to romanticize the past. jQuery spaghetti was its own nightmare. PHP templates with inline SQL left applications so vulnerable you could drive a truck through the holes. The old days had real problems of their own.&lt;/p&gt;

&lt;p&gt;However, somewhere between hand-writing everything and needing 2000 modules to render a button, we overshot. We went from solving user problems to solving developer-experience problems to solving problems that only exist because of the tools we chose to solve the previous problems. It's turtles all the way down. 🐢&lt;/p&gt;

&lt;h2&gt;
  
  
  The Uncomfortable Fix
&lt;/h2&gt;

&lt;p&gt;The solution doesn't lie in a new tool. It comes down to restraint.&lt;/p&gt;

&lt;p&gt;Before you add a dependency, ask yourself one question: &lt;strong&gt;who does this serve?&lt;/strong&gt; If the answer is "it makes the DX nicer for our eight-person team," that might be fine. If the answer is "everyone uses it," that's not a reason — that's peer pressure.&lt;/p&gt;

&lt;p&gt;The most effective engineers I've worked with have one thing in common. They're happy being boring. They choose the smallest tool that accomplishes the task at hand, and then they get on with their lives. They don't need the architecture to be interesting. They need the product to ship. 🚀&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Start with what solves the user's problem, and work backwards.&lt;/strong&gt; Not with whatever new hotness the latest blog post introduced to your ecosystem.&lt;/p&gt;

&lt;h2&gt;
  
  
  So Where Does That Leave Us?
&lt;/h2&gt;

&lt;p&gt;The thread wasn't just people complaining. It was a signal. A lot of developers are quietly exhausted by tooling that exists to justify itself.&lt;/p&gt;

&lt;p&gt;We earned this. Every time we &lt;code&gt;npm install&lt;/code&gt;'d our way to a 400MB &lt;code&gt;node_modules&lt;/code&gt; folder for a freakin' landing page, we earned it. The good news is we can also un-earn it — one boring, right-sized decision at a time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What's the most over-engineered setup you've ever seen on a project that definitely didn't need it?&lt;/strong&gt; Give me your horror stories. 👇&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>opinion</category>
    </item>
    <item>
      <title>Stop picking Cursor or Claude Code. Pay for both, you cheapskate.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 19 Apr 2026 16:14:39 +0000</pubDate>
      <link>https://dev.to/adioof/stop-picking-cursor-or-claude-code-pay-for-both-you-cheapskate-3f80</link>
      <guid>https://dev.to/adioof/stop-picking-cursor-or-claude-code-pay-for-both-you-cheapskate-3f80</guid>
      <description>&lt;p&gt;Each week I see a new post asking "Cursor vs Claude Code — which do you choose?" And each week I keep on scrolling. The comparison is false, and deep down, you know it is.&lt;/p&gt;

&lt;p&gt;You're not making a choice between two competitors. You're making a choice between a hammer and a screwdriver, and then boasting that you only needed one.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Debate Exists
&lt;/h2&gt;

&lt;p&gt;Cursor was $20 a month (before it flipped to a metered model in mid-2025), and Claude Code is $20 a month. That's the whole issue. People will sign up for $20 of forgotten streaming services a month, but $40 for the tool they rely on eight hours a day? Break out the spreadsheet, we need a pro-con list.&lt;/p&gt;

&lt;p&gt;The "vs" sets the premise. It grants you the excuse to only try to get away with one.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Claude Code Actually Does Well
&lt;/h2&gt;

&lt;p&gt;Claude Code lives in your terminal. Its 200k-token context window isn't marketing hyperbole; it's what lets Claude hold an entire codebase in its head at once.&lt;/p&gt;

&lt;p&gt;I reach for it whenever I need to think through the contents of dozens of files. Refactoring a shared type that touches forty components? Claude Code doesn't lose track of the change halfway through. It's the tool for &lt;strong&gt;big-picture work&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;→ Architectural decisions across a monorepo&lt;br&gt;
→ Multi-file refactors that need full context&lt;br&gt;
→ Asking "where does this data actually flow?" and getting a real answer&lt;/p&gt;

&lt;h2&gt;
  
  
  What Cursor Actually Does Well
&lt;/h2&gt;

&lt;p&gt;Cursor is right there in your editor. It's watching your keystrokes. It's autocompleting the line you're halfway through writing before you've had a chance to finish thinking it through.&lt;/p&gt;

&lt;p&gt;That inline autocomplete is ridiculously good for tiny, precise, single-file edits. Writing a new function, fixing a test, tweaking a component. Cursor is faster than anything else I've used.&lt;/p&gt;

&lt;p&gt;One thing developers don't mention enough: Cursor struggles in large codebases. As a project grows, indexing slows down, memory usage climbs, and the AI can only see a slice of your code at a time. Suggestions drift. You start getting completions that reference patterns from the wrong part of your project.&lt;/p&gt;

&lt;p&gt;That's not a bug. It's a tradeoff. Cursor is optimized for &lt;strong&gt;speed and precision in a small radius&lt;/strong&gt;. Claude Code is optimized for &lt;strong&gt;depth across a large surface area&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Actual Workflow
&lt;/h2&gt;

&lt;p&gt;Here's what my day looks like now:&lt;/p&gt;

&lt;p&gt;→ Morning planning session in Claude Code — I describe what I want to build, let it reason across the full codebase, and sketch out the approach&lt;br&gt;
→ Implementation in Cursor — I write the actual code with fast autocomplete and targeted edits&lt;br&gt;
→ Back to Claude Code when something breaks across boundaries — when a change in one service ripples into three others&lt;/p&gt;

&lt;p&gt;It's not complicated. One tool thinks wide. The other tool types fast. 🛠️&lt;/p&gt;

&lt;p&gt;The $40/mo combined cost is less than most developers spend on coffee in a week. If these tools save you even thirty minutes a day — and they save me way more than that — the ROI isn't even a conversation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Stop Optimizing for the Wrong Thing
&lt;/h2&gt;

&lt;p&gt;We're in a weird moment where AI tools are genuinely changing how code gets written. Spending energy on "which single tool is best" is like arguing over vim vs emacs while the building is on fire 🔥&lt;/p&gt;

&lt;p&gt;The answer is boring. Pay for both. Use each one where it's strong. Move on and ship something.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;So here's my question: if you're using both already, what's your split look like — and if you're only using one, what's actually stopping you from trying the other?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>cursor</category>
      <category>claudecode</category>
      <category>devtools</category>
    </item>
    <item>
      <title>Cloudflare and GitHub are building identity systems for AI agents. We're not ready for this.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 19 Apr 2026 13:19:44 +0000</pubDate>
      <link>https://dev.to/adioof/cloudflare-and-github-are-building-identity-systems-for-ai-agents-were-not-ready-for-this-7ff</link>
      <guid>https://dev.to/adioof/cloudflare-and-github-are-building-identity-systems-for-ai-agents-were-not-ready-for-this-7ff</guid>
      <description>&lt;p&gt;AI agents are getting their own credentials and nobody is asking who's accountable when they leak. That sentence should terrify you more than it does.&lt;/p&gt;

&lt;p&gt;I've been managing secrets at a 15-person startup for a few years now. We can barely keep &lt;em&gt;human&lt;/em&gt; API keys out of Git history. The idea of every AI agent running around with its own identity makes me want to close my laptop and go farm goats.&lt;/p&gt;

&lt;p&gt;But here we are.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Happened
&lt;/h2&gt;

&lt;p&gt;Cloudflare just launched a new scannable API token format with prefixes like &lt;code&gt;cfat_&lt;/code&gt;. This is smart — it means tokens are instantly recognizable by pattern-matching tools. GitHub Secret Scanning can detect leaked Cloudflare tokens when they show up in a commit, though the revocation process may require manual remediation rather than being fully automatic.&lt;/p&gt;
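&lt;p&gt;To make the mechanism concrete, here's a minimal sketch of why a fixed prefix matters for scanners. The &lt;code&gt;cfat_&lt;/code&gt; prefix comes from the announcement; the character class and minimum length in this pattern are my illustrative assumptions, not Cloudflare's published format.&lt;/p&gt;

```python
import re

# Hypothetical scanner rule. The `cfat_` prefix is the point: it makes
# tokens pattern-matchable. The character class and minimum length below
# are illustrative assumptions, not Cloudflare's published format.
CFAT_PATTERN = re.compile(r"\bcfat_[A-Za-z0-9_-]{20,}\b")

def find_candidate_tokens(blob: str) -> list[str]:
    """Return substrings of `blob` that look like prefixed API tokens."""
    return CFAT_PATTERN.findall(blob)
```

&lt;p&gt;A distinctive prefix turns "is this string a secret?" from a statistics problem into a grep, which is exactly what makes commit-time scanning cheap enough to run everywhere.&lt;/p&gt;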

&lt;p&gt;That's genuinely good engineering. Two major platforms cooperating to shrink the window between "oops" and "revoked." I respect it.&lt;/p&gt;

&lt;p&gt;But zoom out for a second. &lt;strong&gt;Why does this need to exist at all?&lt;/strong&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Real Problem Nobody Wants to Say Out Loud
&lt;/h2&gt;

&lt;p&gt;Non-human identities already outnumber human ones in most organizations. Read that again. Service accounts, CI/CD tokens, bot credentials, API keys — they've been quietly multiplying for years. Now add AI agents to the pile.&lt;/p&gt;

&lt;p&gt;Each agent requires credentials to do anything useful. Call an API. Read a database. Deploy a service. Each one becomes a new secret to rotate, scope, monitor, and eventually lose track of.&lt;/p&gt;

&lt;p&gt;Here's what I've seen firsthand:&lt;/p&gt;

&lt;p&gt;→ Secrets get copy-pasted into &lt;code&gt;.env&lt;/code&gt; files that end up in repos&lt;br&gt;
→ Service accounts get created for a "quick test" and never get deleted&lt;br&gt;
→ Nobody owns the rotation schedule because nobody owns the bot&lt;br&gt;
→ When something leaks, the first question is always "wait, what even uses this?"&lt;/p&gt;

&lt;p&gt;That's the state of things &lt;em&gt;today&lt;/em&gt;. With humans mostly in the loop. 🫠&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Agents Make This Exponentially Worse
&lt;/h2&gt;

&lt;p&gt;When a human leaks a key, you yell at the human. You do a postmortem. You add a pre-commit hook. There's a feedback loop.&lt;/p&gt;

&lt;p&gt;When an AI agent leaks a key — or gets prompt-injected into exposing one — who's accountable? The developer who deployed it? The platform that hosted it? The agent framework that didn't sandbox credentials properly?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nobody has a good answer yet.&lt;/strong&gt; And startups are already shipping agents with broad API access because speed wins over security every single time at that stage. I know because I've been that person choosing speed.&lt;/p&gt;

&lt;p&gt;The Cloudflare + GitHub integration is a safety net. But safety nets work best when you're not actively trying to juggle chainsaws on a tightrope. At startup scale, with a two-person platform team, you're absolutely juggling chainsaws.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Think We Should Be Doing
&lt;/h2&gt;

&lt;p&gt;I don't have a complete answer. But I have opinions:&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Agents should get short-lived credentials by default.&lt;/strong&gt; Not long-lived API keys. Tokens that expire in minutes, not months.&lt;br&gt;
→ &lt;strong&gt;Every non-human identity needs an owner.&lt;/strong&gt; A real human on the hook. No orphan service accounts.&lt;br&gt;
→ &lt;strong&gt;Scope should be laughably narrow.&lt;/strong&gt; If an agent only needs to read from one endpoint, it gets access to one endpoint. Period.&lt;br&gt;
→ &lt;strong&gt;Audit logs for agent actions should be first-class.&lt;/strong&gt; Not an afterthought bolted on after the first incident.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;cfat_&lt;/code&gt; prefix and auto-revocation are steps in the right direction. But they're band-aids on a wound we haven't even fully discovered yet. 🩹&lt;/p&gt;

&lt;h2&gt;
  
  
  Here's the Thing
&lt;/h2&gt;

&lt;p&gt;We built identity management for humans over decades and we're still bad at it. Now we're handing credentials to autonomous software that can act at machine speed, make unpredictable decisions, and get tricked by a well-crafted prompt.&lt;/p&gt;

&lt;p&gt;The infrastructure isn't ready. The policies aren't ready. The org charts definitely aren't ready. And yet the agents are already shipping.&lt;/p&gt;

&lt;p&gt;I'm not saying stop building agents. I'm saying &lt;strong&gt;treat agent identity as a first-class security problem right now&lt;/strong&gt;, not after the first big breach makes it obvious.&lt;/p&gt;

&lt;p&gt;So here's my question: &lt;strong&gt;who owns non-human identity at your company?&lt;/strong&gt; Is it security? Platform? DevOps? Or is it the terrifying answer — nobody? 🔐&lt;/p&gt;

</description>
      <category>ai</category>
      <category>security</category>
      <category>cloudflare</category>
      <category>devops</category>
    </item>
    <item>
      <title>Cloudflare wants agents to write and deploy their own code. That should terrify you.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 19 Apr 2026 10:13:47 +0000</pubDate>
      <link>https://dev.to/adioof/cloudflare-wants-agents-to-write-and-deploy-their-own-code-that-should-terrify-you-2jaa</link>
      <guid>https://dev.to/adioof/cloudflare-wants-agents-to-write-and-deploy-their-own-code-that-should-terrify-you-2jaa</guid>
      <description>&lt;p&gt;We're giving AI agents access to production infrastructure and behaving as if we're simply releasing a new feature. I need to talk about this.&lt;/p&gt;

&lt;p&gt;Cloudflare recently introduced a set of tools that let AI agents write code, run it, and deploy it, all on their own. No human in the loop. The announcement landed and the developer community seems... excited? 🤔&lt;/p&gt;

&lt;h2&gt;
  
  
  Why This Is Different
&lt;/h2&gt;

&lt;p&gt;We have been using AI code helpers for some time now. Copilot recommends a line of code. ChatGPT writes a function. You then inspect it, test it, and deploy it on your own.&lt;/p&gt;

&lt;p&gt;This is different. Here, the agent not only writes the code but also runs it on the production server. You're not the pilot anymore; you're a passenger who occasionally glances at the flight path through the window.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Cloudflare Actually Built
&lt;/h2&gt;

&lt;p&gt;The announcement covers three pieces:&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Project Think&lt;/strong&gt; — long-running stateful AI agents that persist across sessions and maintain context over time. Not a one-shot prompt-response. A thinking entity that remembers what it's doing.&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Dynamic Workers&lt;/strong&gt; — AI-generated code gets executed inside sandboxed isolates. The agent writes something, and it runs. In Cloudflare's infrastructure. At the edge.&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Codemode&lt;/strong&gt; — instead of making individual sequential tool calls, models are encouraged to &lt;em&gt;write and run code that orchestrates those predefined tools&lt;/em&gt; as their primary way of interacting with the world. The agent doesn't pick items from the menu one at a time. It writes a script that combines them.&lt;/p&gt;

&lt;p&gt;Each component individually? Neat engineering. All three together? That's an autopilot deployment pipeline for autonomous software agents.&lt;/p&gt;
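&lt;p&gt;To make the Codemode idea concrete, here's a toy contrast, entirely my own sketch and not Cloudflare's API: instead of the model picking tools one call at a time, it emits a script that composes the predefined tools directly.&lt;/p&gt;

```python
# Toy sketch, not Cloudflare's actual API. A conventional agent makes one
# tool call per model turn; a Codemode-style agent emits a script that
# orchestrates the predefined tools itself.
TOOLS = {
    "fetch_users": lambda: ["ada", "grace"],
    "greet": lambda name: f"hello, {name}",
}

# What the model might generate: code combining several tools at once.
generated_script = 'results = [tools["greet"](u) for u in tools["fetch_users"]()]'

def run_generated(script: str) -> list:
    """Execute agent-written code with only the tool table in scope.

    (A real sandbox, like Cloudflare's isolates, is far stricter than
    stripping builtins, which is trivially escapable; this only shows the shape.)
    """
    scope = {"tools": TOOLS, "__builtins__": {}}
    exec(script, scope)
    return scope["results"]
```

&lt;p&gt;One model turn, arbitrary composition. That's the productivity pitch, and also precisely why the blast radius of a bad generation is so much larger than a single wrong tool call.&lt;/p&gt;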

&lt;h2&gt;
  
  
  The Sandboxing Argument Doesn't Comfort Me
&lt;/h2&gt;

&lt;p&gt;I can already hear the arguments: "It's all compartmentalized! Isolates are secure!"&lt;/p&gt;

&lt;p&gt;Sure. Sandboxes work until they don't. Over the history of computing, just about every sandbox has eventually been escaped, bypassed, or misconfigured by an exhausted engineer at 2am.&lt;/p&gt;

&lt;p&gt;Even assuming the sandbox remains intact forever — that's not the real problem. I'm worried about &lt;em&gt;what the agent decides to deploy&lt;/em&gt; in the first place. A sandboxed isolate that runs horrendous business logic is still horrendous business logic. It's just isolated horrendous business logic. 💀&lt;/p&gt;

&lt;h2&gt;
  
  
  We're Normalizing Without Discussing
&lt;/h2&gt;

&lt;p&gt;What bugs me isn't the technology itself. It's how casually we've normalized "AI writes and ships its own code," this quickly.&lt;/p&gt;

&lt;p&gt;We spent decades building deployment guardrails. Code review. Staging environments. Feature flags. Canary releases. All because &lt;em&gt;humans&lt;/em&gt; make mistakes when shipping code.&lt;/p&gt;

&lt;p&gt;And now we're skipping most of that for a system that hallucinates confidently, calling it "developer productivity."&lt;/p&gt;

&lt;p&gt;I'm not anti-AI. I use AI tools daily. But there's a meaningful difference between "AI helps me write code faster" and "AI writes and deploys code without me." We're blurring that line and pretending it's fine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where This Goes
&lt;/h2&gt;

&lt;p&gt;I think we end up in one of two places:&lt;/p&gt;

&lt;p&gt;→ Agents get real guardrails — approval workflows, automated testing gates, human checkpoints — and this becomes genuinely useful infrastructure.&lt;/p&gt;

&lt;p&gt;→ Or we speedrun past the safety conversations because shipping fast feels too good, and we learn the hard way why those deployment ceremonies existed.&lt;/p&gt;

&lt;p&gt;Right now, the industry seems to be sprinting toward option two. 🚀&lt;/p&gt;

&lt;p&gt;The tooling is impressive. Cloudflare's engineering here is legitimately clever. But clever infrastructure serving an unexamined workflow is how you get elegant disasters.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Here's my question for you:&lt;/strong&gt; At what point does "AI-assisted development" become "AI-autonomous development," and who should be drawing that line — platform providers, engineering teams, or regulators?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>webdev</category>
      <category>ai</category>
      <category>opinion</category>
    </item>
    <item>
      <title>Most webhook security guides protect the wrong side. The scary part is delivery.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 18 Apr 2026 19:09:42 +0000</pubDate>
      <link>https://dev.to/adioof/most-webhook-security-guides-protect-the-wrong-side-the-scary-part-is-delivery-6pm</link>
      <guid>https://dev.to/adioof/most-webhook-security-guides-protect-the-wrong-side-the-scary-part-is-delivery-6pm</guid>
      <description>&lt;p&gt;Everyone secures webhook ingestion. Almost nobody talks about SSRF via the delivery worker.&lt;/p&gt;

&lt;p&gt;I've been staring at webhook architectures for years, and the security conversation is almost always backwards. We obsess over verifying inbound payloads while leaving the outbound side wide open.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Your HMAC Doesn't Save You Here
&lt;/h2&gt;

&lt;p&gt;HMAC verification only protects ingestion, not outbound delivery to tenant URLs. That signature proves the payload came from who it claims. Great.&lt;/p&gt;

&lt;p&gt;But your delivery worker — the thing that POSTs events to customer-provided URLs — has a completely different threat model. HMAC doesn't even enter the picture on that side.&lt;/p&gt;

&lt;p&gt;Think about it. A tenant registers &lt;code&gt;https://totally-legit-domain.com/webhook&lt;/code&gt; as their endpoint. You validate that the URL looks fine. Maybe you even check that it doesn't resolve to a private IP. Then you move on with your life.&lt;/p&gt;

&lt;h2&gt;
  
  
  DNS Rebinding: The Actual Scary Part
&lt;/h2&gt;

&lt;p&gt;Here's where it gets ugly. DNS rebinding can redirect webhook deliveries to internal IPs like &lt;code&gt;169.254.169.254&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The attack works like this:&lt;/p&gt;

&lt;p&gt;→ Tenant registers a domain they control&lt;br&gt;
→ At registration time, it resolves to a perfectly normal public IP&lt;br&gt;
→ Your validation passes&lt;br&gt;
→ Later, the DNS record flips to &lt;code&gt;169.254.169.254&lt;/code&gt; (the cloud metadata endpoint)&lt;br&gt;
→ Your delivery worker happily POSTs to it, potentially leaking cloud credentials&lt;/p&gt;

&lt;p&gt;Your worker just became a proxy into your own infrastructure. The tenant didn't hack anything. They just gave you a URL and waited. 🎯&lt;/p&gt;

&lt;p&gt;This isn't theoretical. Cloud metadata endpoints are the crown jewels. One leaked IAM credential from that &lt;code&gt;169.254&lt;/code&gt; address and it's game over.&lt;/p&gt;

&lt;h2&gt;
  
  
  Validate at Delivery Time, Every Time
&lt;/h2&gt;

&lt;p&gt;Private IP blocklists must be checked at delivery time, not just registration time. I can't stress this enough.&lt;/p&gt;

&lt;p&gt;Checking the URL once when the tenant sets it up is not sufficient. DNS records change. That's literally what DNS rebinding exploits.&lt;/p&gt;

&lt;p&gt;Every single outbound request from your delivery worker needs to:&lt;/p&gt;

&lt;p&gt;→ Resolve the hostname fresh&lt;br&gt;
→ Check the resolved IP against a private range blocklist &lt;em&gt;before&lt;/em&gt; opening the connection&lt;br&gt;
→ Reject anything pointing to &lt;code&gt;10.x&lt;/code&gt;, &lt;code&gt;172.16-31.x&lt;/code&gt;, &lt;code&gt;192.168.x&lt;/code&gt;, &lt;code&gt;169.254.x&lt;/code&gt;, or localhost&lt;/p&gt;

&lt;p&gt;Some HTTP libraries will follow redirects automatically too. A 302 hop to an internal IP is just as dangerous. You need to validate at every step of the chain, not just the initial resolution.&lt;/p&gt;
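&lt;p&gt;Here's a minimal Python sketch of that delivery-time check (the function names are mine, not from any particular library). It resolves fresh, rejects anything non-public, and returns the validated IPs for pinning:&lt;/p&gt;

```python
import ipaddress
import socket

def is_forbidden(ip_str):
    """True for private, loopback, link-local (incl. 169.254.169.254),
    and other non-public addresses."""
    ip = ipaddress.ip_address(ip_str)
    return (ip.is_private or ip.is_loopback or ip.is_link_local
            or ip.is_reserved or ip.is_multicast or ip.is_unspecified)

def resolve_and_check(hostname):
    """Resolve fresh at delivery time; validate every A/AAAA record
    before any connection is opened."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    ips = {info[4][0] for info in infos}
    for ip in ips:
        if is_forbidden(ip):
            raise ValueError(f"refusing delivery: {hostname} resolves to {ip}")
    # Connect to one of these validated IPs directly (pin it), and either
    # disable redirects or re-run this check on every redirect hop.
    return ips
```

&lt;p&gt;Connecting to the returned IP directly, rather than letting the HTTP client re-resolve, matters: a second resolution between check and connect reopens the rebinding window.&lt;/p&gt;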

&lt;h2&gt;
  
  
  This Is an Architecture Problem, Not a Config Problem
&lt;/h2&gt;

&lt;p&gt;The frustrating part is that most webhook guides treat security as "add HMAC and you're done." That's security theater for the delivery path. 🔒&lt;/p&gt;

&lt;p&gt;If you're building a webhook system, the delivery worker is the most dangerous component you own. It makes outbound HTTP requests to attacker-controlled URLs. Read that sentence again.&lt;/p&gt;

&lt;p&gt;You're essentially running an HTTP client that takes instructions from your tenants. That deserves the same paranoia you'd give to user-uploaded code execution.&lt;/p&gt;

&lt;p&gt;At a high level, the decisions that actually matter:&lt;/p&gt;

&lt;p&gt;→ Pin DNS resolution to the moment of delivery and validate the IP&lt;br&gt;
→ Disable HTTP redirects or re-validate after each hop&lt;br&gt;
→ Run delivery workers in a network segment with no access to internal services or metadata endpoints&lt;br&gt;
→ Treat every tenant URL as hostile, every time, forever&lt;/p&gt;

&lt;h2&gt;
  
  
  The Takeaway
&lt;/h2&gt;

&lt;p&gt;Your inbound webhook security is probably fine. Your outbound delivery worker is probably a loaded footgun pointed at your cloud metadata endpoint. The fix isn't complicated — validate DNS resolution at delivery time, block private IPs, isolate the worker network. But you have to actually do it, and most teams don't because every tutorial stops at HMAC. 😅&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What does your webhook delivery pipeline look like — are you validating resolved IPs on every outbound request, or just at registration time?&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>security</category>
      <category>webhooks</category>
      <category>backend</category>
      <category>infrastructure</category>
    </item>
    <item>
      <title>Pinning GitHub Actions to a tag is mass negligence and we all just watched it happen</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 18 Apr 2026 13:14:31 +0000</pubDate>
      <link>https://dev.to/adioof/pinning-github-actions-to-a-tag-is-mass-negligence-and-we-all-just-watched-it-happen-51p0</link>
      <guid>https://dev.to/adioof/pinning-github-actions-to-a-tag-is-mass-negligence-and-we-all-just-watched-it-happen-51p0</guid>
      <description>&lt;p&gt;Many of your CI pipelines can easily be manipulated to execute any code with a single force-push. And you likely unwittingly enabled this yourself.&lt;/p&gt;

&lt;p&gt;I certainly did.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Happened
&lt;/h2&gt;

&lt;p&gt;In March 2026, LiteLLM was breached using a poisoned Trivy GitHub Action. The threat actor didn't publish a new, obviously-malicious action under a typo-squatted name. They force-pushed malicious code to &lt;strong&gt;existing release tags&lt;/strong&gt; that teams were already using.&lt;/p&gt;

&lt;p&gt;In other words, the &lt;code&gt;@v1&lt;/code&gt; or &lt;code&gt;@v2&lt;/code&gt; that you pinned to? It's mutable. Anyone with write access to that repo can point it at completely different code whenever they want.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Tag Pinning Is a Trust-Me Handshake
&lt;/h2&gt;

&lt;p&gt;Here's what most workflows you'll see in the wild look like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;some-org/some-action@v1&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;See that &lt;code&gt;@v1&lt;/code&gt;? It feels nice and pinned, right? Looks like a version. Your brain pattern-matches it to npm semver or Docker tags and moves on.&lt;/p&gt;

&lt;p&gt;However, it's just a Git tag. &lt;strong&gt;Git tags are not immutable.&lt;/strong&gt; A maintainer — or an attacker who has compromised a maintainer — could delete and recreate that tag pointing at any commit they want. Your next workflow run pulls the new code silently.&lt;/p&gt;

&lt;p&gt;No diff. No notification. No PR review. Nothing.&lt;/p&gt;

&lt;p&gt;→ Tag pinning gives you the &lt;strong&gt;illusion&lt;/strong&gt; of reproducibility without actual reproducibility.&lt;br&gt;
→ You're trusting every maintainer of every action, forever, with access to your CI secrets.&lt;br&gt;
→ A single compromised token upstream means your &lt;code&gt;GITHUB_TOKEN&lt;/code&gt;, cloud credentials, and deploy keys are exposed.&lt;/p&gt;

&lt;p&gt;Every startup I've worked at has pinned to tags. Every template repo on GitHub has pinned to tags. Every "getting started" tutorial ever has told you to pin to a tag. We all collectively normalized this. 🤷&lt;/p&gt;
&lt;h2&gt;
  
  
  The Fix Is Boring and That's the Problem
&lt;/h2&gt;

&lt;p&gt;In fact, GitHub themselves recommend pinning actions to &lt;strong&gt;full commit SHAs&lt;/strong&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;uses&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;some-org/some-action@a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;A commit SHA is immutable. You can't force-push over it. If someone pushes malicious code, it gets a new SHA, and your workflow will keep running the old, safe commit.&lt;/p&gt;

&lt;p&gt;→ SHA pinning is the only pinning that actually pins anything.&lt;br&gt;
→ Tools like Dependabot and Renovate can auto-update SHA pins with readable diffs.&lt;br&gt;
→ You can add a comment with the tag for readability: &lt;code&gt;@a1b2c3... # v2.1.0&lt;/code&gt;&lt;/p&gt;
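&lt;p&gt;If you want to script the migration, the rewrite itself is trivial. A hedged sketch (the helper names are hypothetical; you'd build the tag-to-SHA map yourself, e.g. from &lt;code&gt;git ls-remote&lt;/code&gt; output):&lt;/p&gt;

```python
import re

# Matches "uses: org/action@vX.Y.Z" style tag pins in a workflow file.
PIN = re.compile(r"uses:\s*([\w./-]+)@(v[\w.-]+)")

def pin_to_sha(workflow_text, resolved):
    """Rewrite tag pins to SHA pins with a trailing tag comment.
    `resolved` maps "org/action@tag" to the full commit SHA, which you
    would populate ahead of time (e.g. via "git ls-remote REPO TAG")."""
    def repl(match):
        action, tag = match.group(1), match.group(2)
        sha = resolved.get(f"{action}@{tag}")
        if sha is None:
            return match.group(0)  # leave unresolved refs untouched
        return f"uses: {action}@{sha} # {tag}"
    return PIN.sub(repl, workflow_text)
```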

&lt;p&gt;Yes, it's ugly. Yes, it's annoying. But "annoying" beats "compromised" every single time.&lt;/p&gt;

&lt;h2&gt;
  
  
  This Is a Supply Chain Problem We Keep Ignoring
&lt;/h2&gt;

&lt;p&gt;We dedicated years to studying &lt;code&gt;left-pad&lt;/code&gt;, &lt;code&gt;event-stream&lt;/code&gt;, and &lt;code&gt;colors.js&lt;/code&gt;. We created lockfiles, SBOMs, and signed packages. Then we turned around and gave our CI pipelines — the things with &lt;strong&gt;write access to production&lt;/strong&gt; — zero supply chain discipline.&lt;/p&gt;

&lt;p&gt;Your CI runner has secrets that your application code doesn't. Cloud provider keys. Package registry tokens. Deploy credentials. For most organizations, it's the single highest-value target, and we're protecting it with vibes. 🔓&lt;/p&gt;

&lt;p&gt;The LiteLLM incident wasn't sophisticated. It was embarrassingly simple, and that's what makes it terrifying.&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Changed
&lt;/h2&gt;

&lt;p&gt;After reading about this, I spent an afternoon auditing our workflows at the startup where I work. Every single third-party action was pinned to a tag. &lt;strong&gt;Every one.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I replaced all of those with SHA pins + tag comments, and added Renovate to automatically open PRs with the new SHAs. The whole thing took maybe two hours. &lt;strong&gt;Two hours&lt;/strong&gt; to close a door that was wide open to any upstream compromise.&lt;/p&gt;

&lt;p&gt;If you haven't done this yet, maybe today's the day.&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;Here's my question for you:&lt;/strong&gt; Do you pin to SHAs already, and if not, what's actually stopping you? Is it tooling, awareness, or just inertia?&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Vercel Bill Conversation Every Startup Avoids (Until It's Too Late)</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sun, 12 Apr 2026 21:04:00 +0000</pubDate>
      <link>https://dev.to/adioof/the-vercel-bill-conversation-every-startup-avoids-until-its-too-late-5bj6</link>
      <guid>https://dev.to/adioof/the-vercel-bill-conversation-every-startup-avoids-until-its-too-late-5bj6</guid>
      <description>&lt;p&gt;Our team was shocked when we received a $4,700 Vercel bill. The architecture we had set up was pretty awesome! But then the bill arrived. We quickly realized three things were crippling our budget.&lt;/p&gt;

&lt;p&gt;Nobody saw it coming.&lt;/p&gt;

&lt;p&gt;We built a Next.js monorepo with ISR, edge functions, and image optimization.&lt;/p&gt;

&lt;p&gt;The architecture was beautiful.&lt;/p&gt;

&lt;p&gt;Then the bill arrived.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Architecture That Broke The Bank
&lt;/h2&gt;

&lt;p&gt;We went all-in on Vercel's magic.&lt;/p&gt;

&lt;p&gt;ISR for 50,000 product pages.&lt;/p&gt;

&lt;p&gt;Edge functions for personalization.&lt;/p&gt;

&lt;p&gt;Image optimization for 10,000 user uploads.&lt;/p&gt;

&lt;p&gt;It was fast. Really fast.&lt;/p&gt;

&lt;p&gt;Our Lighthouse scores were 98+ across the board.&lt;/p&gt;

&lt;p&gt;Users loved it.&lt;/p&gt;

&lt;p&gt;VCs loved it.&lt;/p&gt;

&lt;p&gt;The bill? Not so much.&lt;/p&gt;




&lt;h2&gt;
  
  
  Where The Money Went
&lt;/h2&gt;

&lt;p&gt;Three things burned 90% of our spend:&lt;/p&gt;

&lt;p&gt;1️⃣ &lt;strong&gt;ISR revalidation storms&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Every product update triggered a cascade.&lt;/p&gt;

&lt;p&gt;50,000 pages × 3 ISR calls each.&lt;/p&gt;

&lt;p&gt;Vercel charges per function invocation.&lt;/p&gt;

&lt;p&gt;Our $200/month estimate became $2,800.&lt;/p&gt;

&lt;p&gt;2️⃣ &lt;strong&gt;Edge function fan-out&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Personalization meant checking 8 microservices.&lt;/p&gt;

&lt;p&gt;Each request spawned 8 parallel edge functions.&lt;/p&gt;

&lt;p&gt;Multiply that by concurrent users and invocations explode.&lt;/p&gt;

&lt;p&gt;3️⃣ &lt;strong&gt;Image optimization at scale&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Vercel's Image Optimization is brilliant.&lt;/p&gt;

&lt;p&gt;It's also $20 per 1,000 transformations.&lt;/p&gt;

&lt;p&gt;10,000 user images × multiple sizes = ouch.&lt;/p&gt;
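&lt;p&gt;The arithmetic, using the $20 per 1,000 rate (the four-sizes-per-image figure is my assumption for illustration):&lt;/p&gt;

```python
# Back-of-envelope with the rate above; 4 rendered sizes per image
# is an assumption, not a Vercel default.
RATE_PER_1000 = 20.0          # dollars per 1,000 transformations
images = 10_000
sizes_per_image = 4           # e.g. thumbnail, card, detail, retina

transformations = images * sizes_per_image
cost = transformations / 1000 * RATE_PER_1000
print(f"{transformations} transformations ≈ ${cost:,.0f}/month")
```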




&lt;h2&gt;
  
  
  The Fix Nobody Wants To Admit
&lt;/h2&gt;

&lt;p&gt;We moved three things off Vercel:&lt;/p&gt;

&lt;p&gt;→ ISR to Cloudflare Pages + KV ($20/month)&lt;/p&gt;

&lt;p&gt;→ Edge functions to Cloudflare Workers ($5)&lt;/p&gt;

&lt;p&gt;→ Image optimization to Cloudinary (pay-per-GB)&lt;/p&gt;

&lt;p&gt;The result?&lt;/p&gt;

&lt;p&gt;Same performance.&lt;/p&gt;

&lt;p&gt;Bill: $287.&lt;/p&gt;

&lt;p&gt;The team spent 3 weeks migrating.&lt;/p&gt;

&lt;p&gt;The CFO asked why we didn't do this earlier.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Lesson
&lt;/h2&gt;

&lt;p&gt;Vercel's pricing model rewards simplicity.&lt;/p&gt;

&lt;p&gt;Complex architectures punish you.&lt;/p&gt;

&lt;p&gt;Every ISR page is a function call.&lt;/p&gt;

&lt;p&gt;Every edge function is concurrent execution.&lt;/p&gt;

&lt;p&gt;Every image transformation is a transaction.&lt;/p&gt;

&lt;p&gt;Startups copy Vercel's marketing examples.&lt;/p&gt;

&lt;p&gt;Then get the bill.&lt;/p&gt;




&lt;h2&gt;
  
  
  Your Turn
&lt;/h2&gt;

&lt;p&gt;Has your team had the Vercel bill conversation yet?&lt;/p&gt;

&lt;p&gt;Or are you waiting for the $5,000 surprise?&lt;/p&gt;

&lt;p&gt;What's your breaking point?&lt;/p&gt;

&lt;p&gt;👇&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>javascript</category>
      <category>webdev</category>
      <category>career</category>
    </item>
    <item>
      <title>My Team Tracks AI-Generated Code. The Number Shocked Us.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 11 Apr 2026 15:03:55 +0000</pubDate>
      <link>https://dev.to/adioof/my-team-tracks-ai-generated-code-the-number-shocked-us-25a2</link>
      <guid>https://dev.to/adioof/my-team-tracks-ai-generated-code-the-number-shocked-us-25a2</guid>
      <description>&lt;p&gt;My team tracks how much of our codebase is AI-generated. The number shocked us.&lt;/p&gt;

&lt;p&gt;We deployed Buildermark last week. It's an open-source tool that scans Git history and flags AI-written lines.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why We Started Measuring
&lt;/h2&gt;

&lt;p&gt;Every startup has that moment.&lt;/p&gt;

&lt;p&gt;You're reviewing a PR and realize you can't tell who wrote it. The human or the AI.&lt;/p&gt;

&lt;p&gt;We hit 40% AI-generated code by volume. Some files were 90%.&lt;/p&gt;

&lt;p&gt;The CTO asked for the report. Then asked what it meant.&lt;/p&gt;

&lt;p&gt;Nobody had an answer.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Three Problems Nobody Talks About
&lt;/h2&gt;

&lt;p&gt;→ &lt;strong&gt;Problem 1: Ownership blur&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When AI writes the fix, who owns the bug?&lt;/p&gt;

&lt;p&gt;We found junior devs treating Claude output as gospel. They'd copy-paste without understanding.&lt;/p&gt;

&lt;p&gt;Senior engineers would approve because "it looks fine."&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Problem 2: The review gap&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Human-written code gets scrutinized. AI-written code gets rubber-stamped.&lt;/p&gt;

&lt;p&gt;We caught security issues in AI-generated config files. Stuff a human would never write.&lt;/p&gt;

&lt;p&gt;→ &lt;strong&gt;Problem 3: The bus factor&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If your AI provider degrades (like Claude did last month), your velocity tanks overnight.&lt;/p&gt;

&lt;p&gt;We're now vendor-locked to Codeium's style. Claude's patterns. GitHub Copilot's idioms.&lt;/p&gt;




&lt;h2&gt;
  
  
  What We Changed This Week
&lt;/h2&gt;

&lt;p&gt;We added a pre‑commit hook that tags AI‑generated lines.&lt;/p&gt;

&lt;p&gt;Every PR shows the percentage in the description.&lt;/p&gt;

&lt;p&gt;If it's over 50%, it needs extra review. No shortcuts.&lt;/p&gt;
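&lt;p&gt;The percentage check itself is simple. A sketch of the idea (the marker string is hypothetical, not Buildermark's actual format):&lt;/p&gt;

```python
AI_MARKER = "# ai-gen"  # hypothetical tag the pre-commit hook appends

def ai_percentage(diff_lines):
    """Share of added lines in a unified diff that carry the AI marker."""
    added = [ln for ln in diff_lines
             if ln.startswith("+") and not ln.startswith("+++")]
    if not added:
        return 0.0
    tagged = [ln for ln in added if AI_MARKER in ln]
    return 100.0 * len(tagged) / len(added)

def needs_extra_review(diff_lines, threshold=50.0):
    """Over-threshold PRs get the extra review pass."""
    return ai_percentage(diff_lines) > threshold
```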

&lt;p&gt;We also started tracking "AI debt" – lines that only one person understands because they came from a prompt nobody wrote down.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Real Metric That Matters
&lt;/h2&gt;

&lt;p&gt;Lines of AI code is a vanity metric.&lt;/p&gt;

&lt;p&gt;The real metric is: &lt;strong&gt;How many AI‑generated lines survive to production without a human understanding them?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We're at 12%.&lt;/p&gt;

&lt;p&gt;That's 12% of our codebase that could break and nobody would know why.&lt;/p&gt;




&lt;p&gt;Is your team measuring AI code?&lt;/p&gt;

&lt;p&gt;What percentage would surprise you?&lt;/p&gt;

&lt;p&gt;👇&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>ai</category>
      <category>webdev</category>
      <category>career</category>
    </item>
    <item>
      <title>My team reviews 15 PRs a day at our startup. Nobody burns out.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 11 Apr 2026 09:05:26 +0000</pubDate>
      <link>https://dev.to/adioof/my-team-reviews-15-prs-a-day-at-our-startup-nobody-burns-out-h49</link>
      <guid>https://dev.to/adioof/my-team-reviews-15-prs-a-day-at-our-startup-nobody-burns-out-h49</guid>
      <description>&lt;p&gt;My team reviews 15 PRs a day at our startup.&lt;/p&gt;

&lt;p&gt;Nobody burns out.&lt;/p&gt;

&lt;p&gt;Here's what actually happened.&lt;/p&gt;




&lt;h2&gt;
  
  
  Before
&lt;/h2&gt;

&lt;p&gt;When we were 5 engineers, reviewing PRs was easy.&lt;/p&gt;

&lt;p&gt;You'd glance, comment, merge.&lt;/p&gt;

&lt;p&gt;Then we hit 15 people.&lt;/p&gt;

&lt;p&gt;PRs piled up. Developers waited 2 days for feedback. Product managers got anxious. The CTO asked why velocity dropped.&lt;/p&gt;

&lt;p&gt;We tried everything.&lt;/p&gt;

&lt;p&gt;→ GitHub's default review requests&lt;br&gt;
→ Slack reminders&lt;br&gt;
→ Even a Discord bot that pinged people&lt;/p&gt;

&lt;p&gt;Nothing worked.&lt;/p&gt;

&lt;p&gt;The problem wasn't tools. It was culture.&lt;/p&gt;

&lt;p&gt;We were treating code review as a courtesy. Not a requirement.&lt;/p&gt;




&lt;h2&gt;
  
  
  What changed: 3 rules
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Rule 1: Every PR gets a review within 4 hours. Or it auto-merges.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Yes, really.&lt;/p&gt;

&lt;p&gt;We use a GitHub Action that checks each PR's age. If 4 hours pass with no review, it merges.&lt;/p&gt;

&lt;p&gt;This sounds terrifying. But it works.&lt;/p&gt;

&lt;p&gt;Because nobody wants broken code in production. So they review.&lt;/p&gt;
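&lt;p&gt;The core check behind an action like that fits in a few lines. A sketch of the logic (not our actual workflow code):&lt;/p&gt;

```python
from datetime import datetime, timedelta, timezone

REVIEW_SLA = timedelta(hours=4)  # the 4-hour window

def should_auto_merge(created_at, review_count, now=None):
    """Merge only when the full SLA has elapsed with zero reviews."""
    now = now or datetime.now(timezone.utc)
    return review_count == 0 and (now - created_at) >= REVIEW_SLA
```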

&lt;p&gt;&lt;strong&gt;Rule 2: Review comments must be actionable.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;No "maybe consider this." No "what if we tried…"&lt;/p&gt;

&lt;p&gt;If you comment, you must suggest a concrete change. Or approve.&lt;/p&gt;

&lt;p&gt;This cut review cycles by 70%.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rule 3: The author owns the fix. Not the reviewer.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If you suggest a change, the PR author implements it. You don't take over their keyboard.&lt;/p&gt;

&lt;p&gt;This was the hardest shift. Senior engineers hated it. They wanted to "just fix it quickly."&lt;/p&gt;

&lt;p&gt;But that created dependency. Now juniors learn faster — because they have to understand the feedback, not just accept a magic fix.&lt;/p&gt;




&lt;h2&gt;
  
  
  The weird part?
&lt;/h2&gt;

&lt;p&gt;Our bug rate dropped.&lt;/p&gt;

&lt;p&gt;Not because code got perfect. Because reviews got focused.&lt;/p&gt;

&lt;p&gt;When you know you have 4 hours, you're ruthless. You skip nitpicks. You focus on what matters.&lt;/p&gt;

&lt;p&gt;Architecture. Security. Performance. Not formatting — we use Biome for that.&lt;/p&gt;




&lt;h2&gt;
  
  
  The real lesson
&lt;/h2&gt;

&lt;p&gt;We trusted automation over people. We trusted rules over goodwill. And it worked.&lt;/p&gt;

&lt;p&gt;Most teams do the opposite. More process. More meetings. More approval layers.&lt;/p&gt;

&lt;p&gt;We removed them.&lt;/p&gt;

&lt;p&gt;What's stopping you? Probably fear.&lt;/p&gt;

&lt;p&gt;Fear of broken code. Fear of junior mistakes. Fear of losing control.&lt;/p&gt;

&lt;p&gt;But control is an illusion. Code will break anyway. Mistakes will happen.&lt;/p&gt;

&lt;p&gt;The question is: do you learn from them fast — or hide them slow?&lt;/p&gt;

&lt;p&gt;Our system surfaces problems fast. Fast feedback. Fast fixes. Fast learning.&lt;/p&gt;

&lt;p&gt;That's the real velocity boost. Not more lines of code. Better lines of code.&lt;/p&gt;

&lt;p&gt;What would happen if your team had a 4-hour review SLA?&lt;/p&gt;

&lt;p&gt;👇&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>career</category>
      <category>programming</category>
      <category>webdev</category>
    </item>
    <item>
      <title>The Linux Kernel Just Published AI Coding Guidelines. The Rest of Us Should Pay Attention.</title>
      <dc:creator>Aditya Agarwal</dc:creator>
      <pubDate>Sat, 11 Apr 2026 08:35:12 +0000</pubDate>
      <link>https://dev.to/adioof/the-linux-kernel-just-published-ai-coding-guidelines-the-rest-of-us-should-pay-attention-4h7d</link>
      <guid>https://dev.to/adioof/the-linux-kernel-just-published-ai-coding-guidelines-the-rest-of-us-should-pay-attention-4h7d</guid>
      <description>&lt;p&gt;The Linux kernel just published official guidelines for using AI coding assistants.&lt;/p&gt;

&lt;p&gt;It's a two-page doc. And it says more about where we're at than any hot take I've seen this week.&lt;/p&gt;




&lt;h2&gt;
  
  
  What it actually says
&lt;/h2&gt;

&lt;p&gt;You can use AI tools to contribute to the kernel. But you own everything the AI writes.&lt;/p&gt;

&lt;p&gt;Every line. Every bug. Every security flaw.&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;Signed-off-by&lt;/code&gt; tag? Only humans can add that. AI agents are explicitly banned from signing off on commits.&lt;/p&gt;

&lt;p&gt;Instead, there's a new tag: &lt;code&gt;Assisted-by: AGENT_NAME:MODEL_VERSION&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If AI played a meaningful role in your code, you disclose it. That's the deal.&lt;/p&gt;
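&lt;p&gt;If you wanted to enforce that disclosure in your own CI, a trailer check is a few lines. A sketch against the documented format (the kernel's actual accepted grammar may differ):&lt;/p&gt;

```python
import re

# Matches the documented shape "Assisted-by: AGENT_NAME:MODEL_VERSION".
ASSISTED_BY = re.compile(r"^Assisted-by:\s*[\w .-]+:[\w .-]+$", re.MULTILINE)

def discloses_ai(commit_message):
    """True if the commit message carries an Assisted-by trailer."""
    return bool(ASSISTED_BY.search(commit_message))
```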




&lt;h2&gt;
  
  
  What Linus actually said
&lt;/h2&gt;

&lt;p&gt;He doesn't want the documentation to become a "political battlefield" over AI.&lt;/p&gt;

&lt;p&gt;His exact take: there's "zero point in talking about AI slop" in the docs, because bad actors who submit garbage AI code won't disclose it anyway.&lt;/p&gt;

&lt;p&gt;The guidelines are for good actors. Everyone else is already a problem.&lt;/p&gt;

&lt;p&gt;That's a pragmatic take you don't hear often.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why the rest of us should care
&lt;/h2&gt;

&lt;p&gt;Most of us aren't contributing to the Linux kernel. But the kernel's process is where software engineering norms get formalized first.&lt;/p&gt;

&lt;p&gt;They invented the patch-based workflow. The DCO. The code review culture the entire open source ecosystem copied.&lt;/p&gt;

&lt;p&gt;This is them saying: AI assistance is real, it's here, and we're going to treat it like any other tool — not ban it, not blindly embrace it, just hold contributors accountable for what they ship.&lt;/p&gt;

&lt;p&gt;That accountability model is worth stealing.&lt;/p&gt;




&lt;h2&gt;
  
  
  The &lt;code&gt;Assisted-by&lt;/code&gt; tag is a disclosure mechanism, not a judgment
&lt;/h2&gt;

&lt;p&gt;It doesn't say "AI wrote this, be suspicious."&lt;/p&gt;

&lt;p&gt;It says "a tool helped, here's which one, now the human owns it."&lt;/p&gt;

&lt;p&gt;Compare that to how most companies handle AI-generated code right now.&lt;/p&gt;

&lt;p&gt;No disclosure. No accountability. Just commits that look human until something breaks.&lt;/p&gt;

&lt;p&gt;The Linux kernel just modeled what responsible AI contribution looks like.&lt;/p&gt;

&lt;p&gt;Whether the rest of the industry follows is a different question.&lt;/p&gt;

&lt;p&gt;Are you disclosing AI assistance in your commits? And do you think your team should?&lt;/p&gt;

&lt;p&gt;👇&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>career</category>
      <category>webdev</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
