<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Benjamin Cane on Medium]]></title>
        <description><![CDATA[Stories by Benjamin Cane on Medium]]></description>
        <link>https://medium.com/@madflojo?source=rss-96013faddf78------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*mu9eLLugJ68QrlRwLPBmwA@2x.jpeg</url>
            <title>Stories by Benjamin Cane on Medium</title>
            <link>https://medium.com/@madflojo?source=rss-96013faddf78------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sat, 16 May 2026 06:04:07 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@madflojo/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[YOLO Is a Terrible Strategy for Validating Production Changes]]></title>
            <link>https://itnext.io/yolo-is-a-terrible-strategy-for-validating-production-changes-a157369a0382?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/a157369a0382</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 07 May 2026 00:00:17 GMT</pubDate>
            <atom:updated>2026-05-15T22:06:58.775Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OESeq5QAldRa4xuf" /><figcaption>Photo by <a href="https://unsplash.com/@bijesh33?utm_source=medium&amp;utm_medium=referral">bijesh regmi</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>YOLO is a terrible strategy for validating production changes.</p><p>How many times have you seen it?</p><p>Your platform is running smoothly. No alerts, no issues. Then suddenly, something breaks.</p><p>After digging in, you discover the cause: another system you depend on made a change, and that change broke your platform.</p><p>They didn’t notice it broke. You did, much too late…</p><p>How many times have you been the cause of another platform breaking?</p><h3>🥶 Cold Reality</h3><p>I wish the above scenario were rare, but it happens constantly across the technology industry.</p><p>It happens between internal teams, third-party integrations, and shared infrastructure teams.</p><p>These scenarios make you wonder, “How was that change validated?”</p><p>Maybe they tested it, and their validation had gaps. Maybe they did little validation at all. If any.</p><p>Either way, the result is the same: <strong>they validated their change with 100% of production traffic.</strong> Bad plan.</p><h3>💡 Better Ways to Validate Changes</h3><p>There are many ways teams can reduce production risk when rolling out changes, and the best teams combine the following approaches.</p><h3>Canary Releases 🐤</h3><p>I talk about canary deployments often.</p><p>Instead of moving 100% of traffic at once, move small percentages gradually and observe behavior closely.</p><p><strong>That observed part matters.</strong> Look at error rates, latency changes (beyond normal platform warmup), resource spikes, and unexpected retries. 
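As a rough sketch of that observe-then-proceed loop (every name, step size, and threshold below is illustrative, not a real rollout API):

```python
# Illustrative canary gate: shift traffic in small steps, check the
# canary's error rate after each step, and roll back on the first bad
# signal. Step sizes and the threshold are made-up examples.
STEPS = [1, 5, 25, 50, 100]   # percent of traffic routed to the canary
ERROR_THRESHOLD = 0.01        # abort if the canary error rate exceeds 1%

def roll_out(error_rate_at):
    """error_rate_at(percent) stands in for a real metrics query."""
    canary_percent = 0
    for percent in STEPS:
        canary_percent = percent                 # route percent% to canary
        if error_rate_at(percent) > ERROR_THRESHOLD:
            canary_percent = 0                   # roll back to old version
            return False, canary_percent
    return True, canary_percent

# A healthy canary completes the rollout; an error spike triggers rollback.
ok, pct = roll_out(lambda p: 0.001)                          # (True, 100)
bad, pct = roll_out(lambda p: 0.05 if p >= 25 else 0.001)    # (False, 0)
```

In practice the error-rate callable would query your metrics system, and the traffic split would be applied through your load balancer or service mesh.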
All of these indicate customer impact.</p><p>Canary deployments are one of the best ways to reduce the blast radius of changes, identify problems quickly, and self-correct.</p><h3>Shadow Traffic 🪞</h3><p>Traffic mirroring sends production traffic to a new version before routing live traffic there.</p><p>Responses are ignored, but you observe behavior and monitor the same signals you would with a canary release without sacrificing a customer request.</p><h3>Synthetic Traffic 🤖</h3><p>Synthetic traffic simulates user behavior continuously. It’s great for monitoring customer experience, but also a great way to validate new deployments.</p><p>Route synthetic traffic to upgraded instances first and verify behavior before moving real traffic. If it fails with synthetic traffic, it likely won’t survive real traffic.</p><h3>Smoke Tests 😶‍🌫️</h3><p>The classic approach. After deployment, run a small set of fast tests to confirm the platform is fundamentally working.</p><p>Smoke tests don’t need to be fancy; they can be shell scripts, API calls, read-only requests, a test file, or full end-to-end validation.</p><p>Their purpose is simple: to quickly catch obvious breakage.</p><h3>🧠 Final Thoughts</h3><p>Don’t think of the above methods as mutually exclusive choices. Combine them.</p><p>Some platforms I work on combine canary releases, shadow traffic, and synthetic traffic. Others use smoke tests plus canary releases.</p><p>The more layers of validation you have, the more likely you are to catch issues before your customers do. 
Because having your customers validate changes for you is a poor strategy.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-05-07/"><em>https://bencane.com</em></a><em> on May 7, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a157369a0382" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/yolo-is-a-terrible-strategy-for-validating-production-changes-a157369a0382">YOLO Is a Terrible Strategy for Validating Production Changes</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Deterministic routing is one of the most effective ways distributed systems reduce consistency…]]></title>
            <link>https://itnext.io/deterministic-routing-is-one-of-the-most-effective-ways-distributed-systems-reduce-consistency-d60634c9d481?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/d60634c9d481</guid>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 30 Apr 2026 00:00:44 GMT</pubDate>
            <atom:updated>2026-05-09T17:57:08.608Z</atom:updated>
            <content:encoded><![CDATA[<h3>Deterministic routing is one of the most effective ways distributed systems reduce consistency problems at scale</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*D-jUadmu1JhfeJnl" /><figcaption>Photo by <a href="https://unsplash.com/@vonshnauzer?utm_source=medium&amp;utm_medium=referral">Egor Myznik</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Deterministic routing is one of the most effective ways distributed systems reduce consistency problems at scale.</p><p>It is a foundational technique used by many modern databases, caches, and large-scale platforms. Understand how it works and you can apply the same pattern in your own systems.</p><h3>🤔 Understanding the Problem</h3><p>At some point, every successful system hits the limits of a single database instance.</p><p>A single server can only handle so many connections, queries, writes, storage capacity, or CPU/memory demands. Even with the best hardware, performance eventually degrades. So systems scale horizontally.</p><p>Instead of sending all traffic to a single database server, requests are distributed across multiple nodes.</p><p>At the same time, resiliency matters. If one server fails and all data resides there, the outage can be severe.</p><p>So modern databases spread data across multiple nodes, availability zones, and regions.</p><p>Distributing load and data solves both capacity and resiliency problems. But it introduces another challenge.</p><p>How do you keep request behavior consistent when data is distributed across multiple systems?</p><h3>⚠️ Why Replication Is Not Enough</h3><p>Replication helps, but it does not solve every consistency problem.</p><p>Imagine a write lands on Server 1. Immediately after, a read request for the same data lands on Server 67. Will Server 67 have the latest version? 
Maybe, but often not.</p><h3>Asynchronous Replication</h3><p>With asynchronous replication, Server 1 will accept the write and replicate the data to other servers in the background. That means a follow-up read on any other node may return stale data.</p><h3>Synchronous Replication</h3><p>With synchronous replication, the write on Server 1 will wait for an acknowledgment from all replicas before returning a success. While this improves consistency guarantees, it increases latency.</p><p>The farther apart a replica is, the worse this gets. Local writes may be fast, but cross-region writes will be slow. Plus, is it really feasible to replicate data across every single node?</p><p>So the question becomes: <em>How do you preserve consistency, without paying latency taxes?</em></p><h3>🔀 Route Requests to the Data</h3><p>A highly effective answer is deterministic routing.</p><p>Instead of moving data to where requests might land, move requests to where the data already exists.</p><p>If requests for the same key can go to the same node, you gain predictable ownership, reduced stale reads, lower coordination overhead, and easier horizontal scaling.</p><h3>👨‍🏫 How Deterministic Routing Works</h3><p>At a high level, the system needs a repeatable way to decide where requests should go.</p><p>A common approach is hashing.</p><ul><li>A hash of user123 always goes to Node 7</li><li>A hash of user456 always goes to Node 42</li></ul><p>As long as the same key produces the same result, requests can be consistently routed to the same owner. Many modern databases implement deterministic routing through techniques like consistent hashing, partition maps, and shard ranges.</p><h3>🗺️ Where Routing Logic Lives</h3><p>Different systems solve routing in different places.</p><h3>Client-side Routing</h3><p>The client library knows the partition map and sends requests directly to the correct node. 
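As a minimal sketch of that hash-based, client-side routing (node names are illustrative):

```python
import hashlib

# Deterministic key ownership: the same key always maps to the same node,
# so a client holding the node list can route requests directly.
NODES = ["node-1", "node-2", "node-3", "node-4"]

def owner(key: str) -> str:
    # Use a stable hash (not Python's per-process randomized hash())
    # so routing stays deterministic across processes and restarts.
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Every caller computes the same owner for the same key.
assert owner("user123") == owner("user123")
assert owner("user123") in NODES
```

The naive modulo here reshuffles most keys whenever the node count changes, which is exactly why production systems reach for consistent hashing, partition maps, or shard ranges instead.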
Used by many distributed caches and databases.</p><h3>Proxy / Router Tier</h3><p>A small router sits in front of nodes and forwards traffic appropriately. Useful when client behavior cannot be influenced.</p><h3>Server-side Forwarding</h3><p>Requests land anywhere, and the receiving node forwards internally to the owning node. Simple for clients and avoids a proxy failure point, but it requires complex cluster discovery and health monitoring.</p><p>Each model has tradeoffs.</p><h3>🧰 Routing Does Not Replace Replication</h3><p>Deterministic routing is powerful, but not magic. What happens when the owning node is down? You still need replication.</p><p>Modern databases combine both: deterministic routing for performance and ownership, plus replication for durability and failover.</p><h3>🧠 Why This Matters Beyond Databases</h3><p>Distributed databases use this approach, but it is not unique to them.</p><p>Deterministic routing can be used to solve: session ownership, user affinity, in-memory workflow coordination, work queue partitioning, and more.</p><p>I’ve used deterministic routing many times to solve load distribution and consistency problems.</p><p>At scale, the answer is not always more/better hardware. 
Consistency and availability problems are not always solved with replication alone.</p><p>Sometimes the best answer is simply to send the request to the right place.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-30/"><em>https://bencane.com</em></a><em> on April 30, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d60634c9d481" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/deterministic-routing-is-one-of-the-most-effective-ways-distributed-systems-reduce-consistency-d60634c9d481">Deterministic routing is one of the most effective ways distributed systems reduce consistency…</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When you think of microservices, you probably think of centralized shared services]]></title>
            <link>https://itnext.io/when-you-think-of-microservices-you-probably-think-of-centralized-shared-services-0c174b377b63?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/0c174b377b63</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 23 Apr 2026 00:00:27 GMT</pubDate>
            <atom:updated>2026-05-09T17:57:43.245Z</atom:updated>
            <content:encoded><![CDATA[<h3>When you think of microservices, you probably think of centralized shared services. But there’s another valid pattern that is rarely discussed</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*qdVIzY9gGt7q9Qpk" /><figcaption>Photo by <a href="https://unsplash.com/@framesforyourheart?utm_source=medium&amp;utm_medium=referral">Frames For Your Heart</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>When you think of microservices, you probably think of centralized shared services. But there’s another valid pattern that is rarely discussed: running the same microservice inside multiple platforms.</p><h3>🧩 How It Usually Works</h3><p>Most microservice designs follow the same model:</p><ul><li>Break systems into capabilities, teams, or functions</li><li>Deploy one shared service for each capability</li><li>Any platform that needs it calls that centralized service</li></ul><p>That works well for many cases, but it’s not the only model.</p><h3>🏗️ How We Got Here</h3><p>Before microservices, many organizations used Service-Oriented Architecture (SOA).</p><p>Despite being labeled as antiquated, SOA and microservices are not that different. Both break down systems into capabilities that communicate with each other. The biggest difference is scope.</p><p>In SOA, a “Payments Service” might own:</p><ul><li>Message parsing</li><li>Validation</li><li>Balance checks</li><li>Currency conversion</li><li>Settlement logic</li></ul><p>While other SOA services would own “Users” or “Accounting”. Today, that payment service would be considered an entire platform, with each of those capabilities implemented as microservices within that domain.</p><p>Microservices are often the same idea as SOA, just at a more granular level.</p><h3>🎯 Why Centralization Became the Default</h3><p>One reason microservices gained traction was the need to avoid duplication. 
Capabilities were often rebuilt across multiple systems. For example, Currency Conversion is needed in Payments, Accounting, and many other platforms.</p><p>Duplication is not just wasteful, it creates real problems: logic drift, coordination overhead, and inconsistent outcomes across systems. Packaging that capability as a standalone service solved real problems: build once, reuse everywhere.</p><h3>⚠️ The Downside of Centralization</h3><p>In cell-based architectures, platforms are usually designed to be self-contained and failure-isolated. That means a mission-critical platform depending on a centralized service shared by other platforms can become a design smell.</p><ul><li>Cross-cell dependencies</li><li>Added latency</li><li>Shared failure domains</li><li>Complex failover scenarios</li></ul><p>So teams, once again, solve these problems by rebuilding the same capability locally.</p><h3>🔁 Another Option</h3><p>Instead of rebuilding the capability each time, deploy the same microservice codebase inside multiple platforms. If both Payments and Accounting need a currency conversion service, deploy the same service within each platform.</p><p>It’s the same codebase and capability, but with local ownership and resilience. You get reuse without forced centralization.</p><h3>🧪 Caveats from Experience</h3><p>This pattern works when applied carefully.</p><h4>1️⃣ Strong Ownership</h4><p>A shared codebase still needs a clear owning team. Others can contribute, but someone must own quality, roadmap, and releases.</p><h4>2️⃣ Pick the Right Capabilities</h4><p>Not everything is a great fit. Something like currency conversion is well-scoped, relatively stateless, and doesn’t have unique business logic based on which platform is calling it. 
It’s a strong example.</p><p>But other services that have unique logic for each platform domain or require consistency across different platforms are less of a fit.</p><h4>3️⃣ Operational Discipline</h4><p>Using the same codebase doesn’t automatically solve all problems; you can still run into drift across platforms if each is running a different version. Changes in behavior still sometimes need coordination.</p><p>But with a single codebase, these issues are far easier to address.</p><h3>💭 Final Thoughts</h3><p>Microservices gave us reusable building blocks. Sometimes the best use of a microservice is not one centralized deployment. Sometimes it’s many local deployments of the same capability.</p><p>Just reuse the software while maintaining autonomy.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-23/"><em>https://bencane.com</em></a><em> on April 23, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0c174b377b63" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/when-you-think-of-microservices-you-probably-think-of-centralized-shared-services-0c174b377b63">When you think of microservices, you probably think of centralized shared services</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Are you using traffic mirroring in production? If not, try it out]]></title>
            <link>https://itnext.io/are-you-using-traffic-mirroring-in-production-if-not-try-it-out-e5ca3d926975?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/e5ca3d926975</guid>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 16 Apr 2026 00:00:59 GMT</pubDate>
            <atom:updated>2026-04-26T19:38:10.074Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Hc6-_hA0orG6TH_8" /><figcaption>Photo by <a href="https://unsplash.com/@rishabhdharmani?utm_source=medium&amp;utm_medium=referral">Rishabh Dharmani</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Are you using traffic mirroring in production? If not, you might be missing one of the safest ways to test and observe production changes.</p><h3>🚦 What is Traffic Mirroring?</h3><p>Traffic mirroring in Istio or Envoy Proxy lets you send a copy of live traffic to a secondary target.</p><p>When enabled, traffic to /service routes to cluster1 as normal, and a mirrored copy is sent to cluster2.</p><p><strong>The key:</strong> mirrored traffic is fire-and-forget. Responses are ignored and never impact the primary request.</p><h3>🧪 Why It’s Powerful</h3><h4>1️⃣ Shadow Traffic for Safe Testing</h4><p>The most common use case is shadow traffic.</p><p>When migrating platforms or deploying a new version of an application, you can send real traffic to the new system, observe behavior, and validate responses.</p><p>All without impacting users. No risky cutovers. You see exactly how the new system behaves under real load.</p><h4>2️⃣ Out-of-Band Traffic Inspection</h4><p>Another powerful use case is traffic inspection.</p><p>Inline inspection is risky. It adds latency, introduces new failure points, and becomes part of the critical path.</p><p>With traffic mirroring, you can inspect traffic, analyze requests, and detect anomalies.</p><p>All without impacting the primary path.</p><h3>😶‍🌫️ Reality Check</h3><p>It’s not perfect. There is some overhead.</p><p>Mirroring adds load to the sidecar, which may or may not be acceptable for your system. 
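For concreteness, the cluster1/cluster2 setup described above maps to an Istio VirtualService roughly like this (hosts and names are illustrative, and field details may vary by Istio version):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-mirror          # illustrative name
spec:
  hosts:
    - service
  http:
    - route:
        - destination:
            host: cluster1      # primary: serves the real response
      mirror:
        host: cluster2          # copy: responses are ignored
      mirrorPercentage:
        value: 100.0            # mirror all traffic (can be lowered)
```

With a rule like this, cluster1 serves every live response while cluster2 receives a fire-and-forget copy.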
In my experience, it’s negligible, but it’s something you should measure in your own environment before deploying to production.</p><h3>🧠 Final Thoughts</h3><p>Traffic mirroring is one of the safest ways to validate migrations, test new systems, and observe real production behavior.</p><p>The hard part isn’t mirroring traffic. It’s running two production systems in parallel. That’s the real cost, and the real tradeoff.</p><p>But if you can afford that cost, traffic mirroring is an incredibly powerful tool.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-16/"><em>https://bencane.com</em></a><em> on April 16, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e5ca3d926975" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/are-you-using-traffic-mirroring-in-production-if-not-try-it-out-e5ca3d926975">Are you using traffic mirroring in production? If not, try it out</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Agent Skills Are Becoming the Best Way to Capture Institutional Knowledge]]></title>
            <link>https://itnext.io/agent-skills-are-becoming-the-best-way-to-capture-institutional-knowledge-1458b36b4124?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/1458b36b4124</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 09 Apr 2026 00:00:55 GMT</pubDate>
            <atom:updated>2026-04-17T21:17:06.311Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*D5khO4oyFs6to0UK" /><figcaption>Photo by <a href="https://unsplash.com/@opernfan17x?utm_source=medium&amp;utm_medium=referral">Rainhard Wiesinger</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Use Agent Skills to capture institutional knowledge and make it usable by coding agents.</p><p>Every organization has institutional knowledge.</p><ul><li>Internal frameworks</li><li>Preferred practices</li><li>Platform-specific capabilities</li></ul><p>It exists everywhere. But it’s often undocumented… or buried in a wiki no one reads.</p><p>As coding agents take on more work, this problem gets worse.</p><p>If you ask an agent to build a new service, you want it to use your internal framework, follow your patterns, and respect your organizational constraints.</p><p>A human engineer would ask questions. An agent won’t, unless you give it that context.</p><h3>📚 Agent Skills as Knowledge Distribution</h3><p>Most people think about Agent Skills as actions:</p><ul><li>Convert markdown to PDF</li><li>Review this pull request</li><li>Commit my changes</li></ul><p>But the more interesting use case is guidance.</p><p>Skills aren’t just for doing things. They’re for shaping agent output.</p><p>Agents discover and use skills based on intent.</p><p>If a user asks: “Create a new Python service.”</p><p>The agent looks for relevant skills:</p><ul><li>Language conventions (PEP 8, etc.)</li><li>Internal frameworks</li><li>Organizational standards</li></ul><p>That’s where institutional knowledge belongs.</p><p>Instead of hoping engineers remember to tell the agent:</p><ul><li>“We use Flask, not Django.”</li><li>“Stick to the standard library.”</li><li>“Follow this service layout.”</li></ul><p>You capture that into a skill. 
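As a sketch of what that capture can look like (the skill name and rules are illustrative; the SKILL.md-with-frontmatter layout follows the common Agent Skills convention, where the metadata is what the agent matches against intent):

```markdown
---
name: python-service-standards
description: Organizational conventions for new Python services. Use when
  asked to create or scaffold a Python service.
---

When creating a new Python service:

- Use Flask, not Django.
- Prefer the standard library over third-party dependencies.
- Follow the standard service layout (app/, tests/, Makefile).
```

The description drives discovery; the body is the institutional knowledge the agent applies.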
The agent applies it automatically.</p><h3>🧠 Why This Matters</h3><p>Institutional knowledge only works if it’s:</p><ul><li>Discoverable</li><li>Applied consistently</li></ul><p>Agent Skills give you both.</p><p>They turn tribal knowledge into something agents can find, understand, and use.</p><h3>⚠️ The Tradeoff (For Now)</h3><p>Right now, this introduces duplication.</p><p>Most teams already have internal docs, style guides, &amp; wikis.</p><p>And now you’re putting the same information into skills. Which feels like extra work.</p><p>But it poses an interesting question:</p><p>As agents become the primary interface… Will engineers read the wiki? Or ask the agent?</p><h3>🧠 Final Thoughts</h3><p>As agents take on more of the implementation work, where you store knowledge becomes more important. Making that knowledge accessible to agents becomes essential.</p><p>Agent Skills aren’t just automation tools.</p><p>They are becoming the interface for standards, practices, and institutional knowledge.</p><p>And teams that embrace that early will see more consistent output from both humans and agents.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-09/"><em>https://bencane.com</em></a><em> on April 9, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1458b36b4124" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/agent-skills-are-becoming-the-best-way-to-capture-institutional-knowledge-1458b36b4124">Agent Skills Are Becoming the Best Way to Capture Institutional Knowledge</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Saved Prompts Are Dead. Agent Skills Are the Future]]></title>
            <link>https://itnext.io/saved-prompts-are-dead-agent-skills-are-the-future-7815f23f5183?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/7815f23f5183</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[agentic-development]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 02 Apr 2026 00:00:45 GMT</pubDate>
            <atom:updated>2026-04-11T20:09:22.812Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*n5z8ao_FdG16F5aQ" /><figcaption>Photo by <a href="https://unsplash.com/@onurbuz?utm_source=medium&amp;utm_medium=referral">Onur Buz</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Saved prompts are dead. Agent Skills are the next step.</p><p>If you’ve been around for a while, you probably have a file full of bash one-liners.</p><p>Small scripts or commands you saved because they solved a problem you didn’t want to automate properly.</p><p>When coding agents arrived, prompts became the new one-liners.</p><p>Useful prompts were saved, reused, and eventually turned into “prompt files”, then slash commands like /do-something.</p><p>But that model has already evolved.</p><h3>⚙️ Agent Skills</h3><p>Agent Skills are the next iteration.</p><p>At a basic level, a skill looks a lot like a saved prompt: a directory with a markdown file.</p><p>What makes it different is how it’s used.</p><p>Skills include metadata like name and description, allowing agents to discover them.</p><p>Instead of explicitly calling a prompt every time, the agent can determine when to use a skill based on intent.</p><p>This is referred to as progressive disclosure:</p><ul><li>Agent loads skill metadata</li><li>Matches it to your task</li><li>Then loads and executes the full skill when needed</li></ul><p>You can still call skills directly (/, $, @), but you don&#39;t always have to.</p><h3>🧠 More Than Just Prompts</h3><p>The real differentiator is that skills aren’t just prompts.</p><p>They can include reference documentation, templates, and scripts.</p><p>This means you’re no longer just telling the agent what to do.</p><p>You’re giving it tools and context to execute and validate tasks.</p><p>For more complex workflows, it’s often easier to write a script and teach the agent how to use it than to encode everything in a 
prompt.</p><h3>⚠️ A Word of Caution</h3><p>This power comes with risk.</p><p>Skills can include executable logic and tell agents to perform tasks.</p><p>That means a shared skill can contain malicious or unsafe behavior.</p><p>Treat them like any script you install:</p><ul><li>Understand what they do</li><li>Know where they come from</li><li>Review before using (watch out for hidden text or obfuscated instructions)</li></ul><h3>🧠 Final Thoughts</h3><p>Agent skills are a meaningful step forward.</p><p>They let you codify workflows, preferences, and repeatable agent tasks in a way that agents can discover.</p><p>They’re a strong productivity accelerator and a powerful way to capture institutional knowledge in a form agents can actually use.</p><p>(More on that in the next post.)</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-04-02/"><em>https://bencane.com</em></a><em> on April 2, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7815f23f5183" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/saved-prompts-are-dead-agent-skills-are-the-future-7815f23f5183">Saved Prompts Are Dead. Agent Skills Are the Future</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Generating Code Faster Is Only Valuable If You Can Validate Every Change With Confidence]]></title>
            <link>https://itnext.io/generating-code-faster-is-only-valuable-if-you-can-validate-every-change-with-confidence-5148a37c2320?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/5148a37c2320</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 26 Mar 2026 00:00:28 GMT</pubDate>
            <atom:updated>2026-04-03T20:26:53.854Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*thcsa3J0HrkMymmk" /><figcaption>Photo by <a href="https://unsplash.com/@alexkondratiev?utm_source=medium&amp;utm_medium=referral">Alex</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>Generating code faster is only valuable if you can validate every change with confidence.</p><p>Software engineering has never really been about writing code. Coding is often the easy part.</p><p>Testing is harder, and many teams struggle with it.</p><p>As tools make it easier to generate code quickly, that gap widens. If you can produce changes faster than you can validate them, you eventually create more code than you can safely operate.</p><p>Which begs the question: What does good testing actually look like?</p><h3>🔍 What Good Looks Like</h3><p>One of the biggest challenges I see is that teams struggle to understand what “good” testing means and never define it.</p><p>Pipelines are often built early in a project, when the team is small, and they rarely keep pace with the system and organization as they grow.</p><p>My starting principle is simple:</p><ul><li>At pull request time, you should have strong confidence that the change will not break the service or platform being modified.</li><li>Within a day of merging, you should have strong confidence that the change hasn’t broken the full customer journey that the platform supports.</li></ul><h3>🔁 On Pull Request</h3><p>For backend platforms, I like to see three levels of automated testing before merging.</p><h3>Code Tests (Unit Tests)</h3><p>This level is the foundation. Unit tests validate internal logic, error handling, and edge cases. Techniques such as fuzz testing and benchmarking also reveal issues early. 
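As a small illustration of edge-case-focused unit testing (the function and its cases are hypothetical):

```python
# Hypothetical example: unit tests that exercise edge cases and error
# handling, not just the happy path.
def parse_amount(value: str) -> int:
    """Parse a monetary amount like '12.34' into cents."""
    if not value or value.startswith("-"):
        raise ValueError(f"invalid amount: {value!r}")
    dollars, _, cents = value.partition(".")
    return int(dollars) * 100 + int(cents.ljust(2, "0")[:2])

def test_parse_amount():
    assert parse_amount("12.34") == 1234
    assert parse_amount("12") == 1200     # no decimal part
    assert parse_amount("12.5") == 1250   # single decimal digit
    for bad in ("", "-1.00"):             # error-handling paths
        try:
            parse_amount(bad)
            assert False, f"expected ValueError for {bad!r}"
        except ValueError:
            pass

test_parse_amount()
```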
As the test pyramid tells us, this is where the majority of testing and logic validation should take place.</p><h3>Service-Level Functional Tests</h3><p>Too many teams stop at unit tests for pull requests. Functional tests should also be run in CI for every pull request.</p><p>Services should be tested in isolation with functional tests. Dependencies can be mocked, but things like databases should ideally run for real (Dockerized).</p><p>This is where API contracts are validated and regressions can be identified without wondering whether the issue came from this change or another service.</p><h3>Platform-Level Functional Tests</h3><p>Testing a service alone isn’t enough. Changes can break upstream or downstream dependencies. Platform-level tests spin up the entire platform in CI and validate that services interact correctly.</p><p>These tests ensure the platform continues to work as a system.</p><p>For platforms with strict latency or resiliency requirements, I recommend introducing light stress tests at both the service and platform levels. These aren’t full performance tests, but they act as early indicators of performance regressions.</p><p>If these three layers pass, you should have high confidence in the change. But not complete confidence.</p><h3>🌙 Nightly Testing</h3><p>Some failures take time to appear.</p><p>Memory leaks, performance degradation, and cross-platform integration issues may not show up immediately.</p><p>That’s why I like to run a nightly build (or every few hours).</p><p>This environment runs end-to-end customer journey tests, performance tests, and chaos tests.</p><p>These are typically the same tests used during release validation, but running them continuously accelerates feedback. 
If something breaks, you learn about it early, before the pressure of a release.</p><h3>🧠 Final Thoughts</h3><p>There is no universal approach everyone can follow.</p><p>Different systems have different needs; mission-critical systems may focus heavily on correctness and resilience. Non-mission-critical systems may focus more on validating core functionality.</p><p>Your testing strategy depends heavily on architecture, dependencies, and operational constraints. But if your organization is increasing its ability to generate code quickly, your testing capabilities must evolve at the same pace.</p><p>AI-generated code becomes much easier to review when you already have high confidence in your testing.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-26/"><em>https://bencane.com</em></a><em> on March 26, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5148a37c2320" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/generating-code-faster-is-only-valuable-if-you-can-validate-every-change-with-confidence-5148a37c2320">Generating Code Faster Is Only Valuable If You Can Validate Every Change With Confidence</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When You Go to Production with gRPC, Make Sure You’ve Solved Load Distribution First]]></title>
            <link>https://itnext.io/when-you-go-to-production-with-grpc-make-sure-youve-solved-load-distribution-first-2f5042bfe4f1?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/2f5042bfe4f1</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 19 Mar 2026 00:00:09 GMT</pubDate>
            <atom:updated>2026-03-28T14:58:00.000Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gEuw3nu4Jgyzzbjxg_JQ3w@2x.jpeg" /><figcaption>Photo by <a href="https://www.buymeacoffee.com/mikevandenbos">Mike van den Bos</a> on <a href="https://unsplash.com/?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>When you go to production with gRPC, make sure you’ve solved load distribution first.</p><p>I was recently talking with another engineer who is rolling out gRPC into production. He asked what the biggest gotchas were.</p><p>My first answer: Load Distribution.</p><h3>🚦 HTTP/1 vs. HTTP/2</h3><p>Most teams first implement services using REST over HTTP/1 and then migrate to gRPC as they seek its performance benefits.</p><p>That shift introduces a subtle but important change in how traffic gets distributed across instances.</p><p>With HTTP/1, requests are generally tied closely to connections. A client opens a connection, sends a request, waits for the response, and then sends another (if connection re-use is enabled).</p><p>HTTP/2 (which underpins gRPC) works differently.</p><p>HTTP/2 multiplexes requests over persistent connections. A client can send many requests over the same connection without waiting for responses.</p><p>This is one of the reasons gRPC provides a performance boost, but it can create unexpected load distribution issues.</p><p>If your infrastructure isn’t built for an HTTP/2 world, you’ll quickly find traffic becoming unevenly distributed.</p><h3>🏗️ Infrastructure Support</h3><p>In an HTTP/1 world, load balancing at the connection (Layer 4) level often works well enough. 
But with HTTP/2, connections live much longer and carry far more concurrent traffic.</p><p>If your load balancer distributes traffic based only on connections, a busy client may hammer a single instance while others sit idle.</p><p>Unfortunately, much of the infrastructure still doesn’t fully support HTTP/2-aware load balancing.</p><p>Depending on your environment, your load balancers or ingress controllers may operate primarily at Layer 4. That works fine for HTTP/1, but once you introduce HTTP/2 via gRPC, the effectiveness changes significantly.</p><h3>⚙️ Supporting gRPC</h3><p>To get the most out of gRPC, the best approach is to use infrastructure that understands HTTP/2 and load-balances requests rather than just connections.</p><p>If that’s not possible, another option is client-side load balancing.</p><p>Many gRPC clients support opening a pool of connections and distributing requests across them. You still benefit from HTTP/2’s persistent connections, but you avoid concentrating all traffic on a single backend instance.</p><h3>🧠 Final Thoughts</h3><p>gRPC offers many advantages, including performance, strongly typed contracts, and efficient communication. 
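</p><p>To make the client-side option concrete, here is a toy Python model of a pool of connections with requests distributed round-robin (the addresses are hypothetical, and this is a sketch of the idea rather than a real gRPC API):</p>

```python
import itertools

class ConnectionPool:
    """Toy model of client-side load balancing: a pool of persistent
    connections with requests spread round-robin across them, instead
    of every request riding a single HTTP/2 connection."""

    def __init__(self, backends):
        self._backends = list(backends)
        self._order = itertools.cycle(range(len(self._backends)))
        self.sent = {b: 0 for b in self._backends}  # per-backend request count

    def send(self, request):
        # Each call picks the next connection in round-robin order, so
        # every backend sees an even share of the traffic.
        backend = self._backends[next(self._order)]
        self.sent[backend] += 1
        return backend, request

# Hypothetical backend addresses for illustration.
pool = ConnectionPool(["10.0.0.1:50051", "10.0.0.2:50051", "10.0.0.3:50051"])
for i in range(9):
    pool.send(f"req-{i}")
# pool.sent now shows 3 requests per backend, not 9 on one instance.
```

<p>Real clients support this natively; grpc-go, for example, lets you enable a <code>round_robin</code> load-balancing policy through its service config. </p><p>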
But it also introduces different networking behavior.</p><p>If you’re rolling out gRPC into production, make sure your load balancing infrastructure is ready for an HTTP/2 world.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-19/"><em>https://bencane.com</em></a><em> on March 19, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2f5042bfe4f1" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/when-you-go-to-production-with-grpc-make-sure-youve-solved-load-distribution-first-2f5042bfe4f1">When You Go to Production with gRPC, Make Sure You’ve Solved Load Distribution First</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You may be building for availability, but are you building for resiliency?]]></title>
            <link>https://itnext.io/you-may-be-building-for-availability-but-are-you-building-for-resiliency-c49f6e45c883?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/c49f6e45c883</guid>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 12 Mar 2026 00:00:50 GMT</pubDate>
            <atom:updated>2026-03-28T14:27:23.508Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vdkYMgPbZOiqOFu7DZDM2w@2x.jpeg" /><figcaption>Photo by <a href="https://instagram.com/rawan_aahmed?igshid=YmMyMTA2M2Y=">Rawan Ahmed</a> on <a href="https://unsplash.com/?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p>You may be building for availability, but are you building for resiliency? Many teams design for availability. Far fewer design for resiliency.</p><p>A concept that took me a while to really grasp is that building highly available systems and highly resilient systems is not the same thing.</p><p>The difference is how the system reacts to failure.</p><h3>🚄 High Availability</h3><p>When you build for high availability, the goal is simple: ensure there is always another path.</p><p>If something fails, traffic can be redirected somewhere else.</p><p>For example, a service might run across multiple availability zones or regions. If one fails, traffic is routed to another.</p><p>Detecting failures and redirecting traffic are core elements of building for high availability.</p><p>Availability is about rerouting traffic when something fails.</p><h3>🚂 High Resiliency</h3><p>Building for resiliency is different.</p><p>The solution to failure isn’t another path; it’s how the system handles the error.</p><p>When a dependency fails, the decision becomes:</p><p>Do we retry? Do we continue without that dependency? Do we degrade functionality? Do we stop processing altogether?</p><p>Resiliency is about defining what happens when things go wrong.</p><p>Sometimes you can continue processing. 
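</p><p>For instance, continuing might mean falling back to the last value you trusted. A minimal Python sketch, with hypothetical names throughout:</p>

```python
# Hypothetical names throughout; the dependency is simulated as failing.
def fetch_exchange_rate(base: str, quote: str) -> float:
    raise TimeoutError("rate service unavailable")

# Last-known-good values let the system absorb the failure.
_last_known_rates = {("USD", "EUR"): 0.92}

def resilient_rate(base: str, quote: str):
    try:
        rate = fetch_exchange_rate(base, quote)
        _last_known_rates[(base, quote)] = rate  # refresh the cache on success
        return rate
    except (TimeoutError, ConnectionError):
        # Degrade gracefully: continue with the last value we trusted,
        # or None if we never had one.
        return _last_known_rates.get((base, quote))
```

<p>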
Sometimes you can defer work and fix it later.</p><p>Resiliency is absorbing failure instead of avoiding it.</p><h3>🧩 A Simple Example</h3><p>When you design systems with resiliency in mind, you tend to treat dependencies differently.</p><p>A simple example is configuration.</p><p>Many systems use distributed configuration services so that runtime behavior can change without redeployment.</p><p>But that configuration service then becomes a dependency. To avoid turning it into a hard dependency, many systems cache the configuration in memory.</p><p>When updates occur, the system fetches the new configuration and switches only after it’s fully loaded into memory.</p><p>If configuration refresh fails, the system continues operating with the last known configuration. Transient failures don’t bring the system down.</p><p>That’s resiliency.</p><h3>🧠 Final Thoughts</h3><p>When I talk about non-functional requirements, you’ll hear me say:</p><p>“Highly available and resilient systems”</p><p>I separate them intentionally because the approaches are different.</p><p>Availability ensures there is always another path. Resiliency ensures the system can continue operating when failures occur.</p><p>Availability routes around failure. Resiliency survives failure. You need both.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-12/"><em>https://bencane.com</em></a><em> on March 12, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c49f6e45c883" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/you-may-be-building-for-availability-but-are-you-building-for-resiliency-c49f6e45c883">You may be building for availability, but are you building for resiliency?</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When your coding agent doesn’t understand your project, you’ll get junk]]></title>
            <link>https://itnext.io/when-your-coding-agent-doesnt-understand-your-project-you-ll-get-junk-8e0d789986fd?source=rss-96013faddf78------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e0d789986fd</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Benjamin Cane]]></dc:creator>
            <pubDate>Thu, 05 Mar 2026 00:00:28 GMT</pubDate>
            <atom:updated>2026-03-14T19:57:24.648Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vZLsiB495vXTM3SpqfGLnw.png" /></figure><p>When your coding agent doesn’t understand your project, you’ll get junk.</p><p>Junk in, junk out.</p><p>One of the best ways to get more from agentic coding tools is to give the agent context.</p><p>The more an agent understands your project, the better its work will be.</p><p>If you ask an agent to add a method to a class, it will. It might read the file. It might infer some structure. But it won’t understand the project’s intent.</p><p>If you asked a human engineer to make the same change, they would have questions.</p><p>What is the purpose of this project? How is it used? What constraints exist?</p><p>If they skipped that step, you’d get exactly what you asked for, even if it was wrong.</p><p>That’s the same challenge many face with coding agents. A lack of context means it only does what it’s told — which isn’t always what you actually need.</p><p>But when it understands a project, it operates with far more clarity.</p><h3>🧙‍♂️ My “Old School” Method</h3><p>Before I start serious work with an agent, I have it learn the project:</p><ul><li>Read the docs 📚</li><li>Review the codebase ⚙️</li><li>Understand the architecture 🏙️</li><li>Learn how to build, test, and run the project locally 👩‍🔧</li></ul><p>I even ask the agent to summarize its understanding back to me.</p><p>This started as a saved prompt, turned into a slash command, and is now a skill.</p><p>This step is a huge productivity boost.</p><h3>🤖 Agents Files (AGENTS.md)</h3><p>Over the past year, an open standard for providing agents with structured context has emerged.</p><p>Instead of prompting the agent to rediscover your project every time, document that context once — and the agent will reference it going forward.</p><p>Most modern agents support an AGENTS.md file and reference it during each interaction.</p><h3>💽 What Goes in an Agents File?</h3><p>Think of the Agents file as onboarding 
documentation, but for an agent.</p><p>Project context:</p><p>Team context:</p><ul><li>Code style preferences</li><li>Testing philosophy (TDD or YOLO)</li><li>Tech stack constraints</li></ul><p>Any tribal knowledge you’d expect a new team member to learn belongs in an Agents file.</p><h3>👨‍💻 Personal Agent Files</h3><p>Many tools also support a personal Agents file in your home directory.</p><p>That’s where your workflow preferences live. Are you a two-space tabs person? Do you want your agent to prefer table tests?</p><p>If you have preferences that are unique to you but that you want applied to every project, they go in the personal Agents file.</p><h3>🧠 Final Thoughts</h3><p>Using an Agents file dramatically improves agent quality.</p><p>Even then, I still use my “learn-this” slash command — sometimes that extra context makes a difference.</p><p>If you wouldn’t drop a new engineer into a project without context, don’t do it to your agents.</p><p><em>Originally published at </em><a href="https://bencane.com/posts/2026-03-05/"><em>https://bencane.com</em></a><em> on March 5, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8e0d789986fd" width="1" height="1" alt=""><hr><p><a href="https://itnext.io/when-your-coding-agent-doesnt-understand-your-project-you-ll-get-junk-8e0d789986fd">When your coding agent doesn’t understand your project, you’ll get junk</a> was originally published in <a href="https://itnext.io">ITNEXT</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>