<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Aggelos Bellos on Medium]]></title>
        <description><![CDATA[Stories by Aggelos Bellos on Medium]]></description>
        <link>https://medium.com/@aggelosbellos?source=rss-de20e3589897------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*SeK1PHXWibETg6aUormi1A.jpeg</url>
            <title>Stories by Aggelos Bellos on Medium</title>
            <link>https://medium.com/@aggelosbellos?source=rss-de20e3589897------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 20 Apr 2026 20:09:48 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@aggelosbellos/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[APIs as Infrastructure: Optimizing for Change]]></title>
            <link>https://medium.com/@aggelosbellos/apis-as-infrastructure-optimizing-for-change-43f49d6d653c?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/43f49d6d653c</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[api]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[api-development]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Mon, 17 Nov 2025 00:00:42 GMT</pubDate>
            <atom:updated>2025-11-17T14:00:46.127Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QigKbQ1ZCXnakLgWaPbqHA.png" /></figure><p>Managing APIs is hard. An application usually supports a single version of itself. It can be refactored, restructured, and redesigned with relative freedom. An API, on the other hand, has to remain stable for every version that is still being consumed.</p><p>As requirements change, more and more time is spent on avoiding breaking changes instead of actually delivering value.</p><h3>Frozen in Time</h3><p>An API is a contract between the provider and the consumer. This means that once a version is released, it should remain frozen in time. This is true not only for the contract of the API but also for its implementation. And it is not merely a “should”; it is what happens in practice.</p><p>When an API is released, we don’t care about its internal implementation anymore. Yes, there will be bugs, and yes, there are shared components. But other than these, we do not make changes. If it works, don’t change it. Right?</p><p>Theoretically, we could develop a completely new application for each new version. This would allow us to build something and just make sure we keep it alive. Practically, the cost of maintaining such an approach is prohibitive. Furthermore, this is not how software is built in practice.</p><p>Software is built in small incremental batches. So why don’t we optimize our APIs for small incremental changes?</p><h3>Problem</h3><p>Code frozen in time sounds good, but what about code that reflects data? Data also evolves with each new requirement. Take this, for example:</p><pre> if ($course-&gt;active) {<br>    // do something<br> } else {<br>    // do something else<br> }</pre><p>If we delete the active field in favor of a new status field, then we will have code that depends on a field that no longer exists. 
Even a new application wouldn&#39;t help us here, as the dependency on the data would remain.</p><p>There are ways to mitigate this problem. The easiest one is to check the version of the API and adapt the code accordingly.</p><pre> if ($apiVersion &gt;= 2) {<br>     if ($course-&gt;status === &#39;active&#39;) {<br>         // do something<br>     } else {<br>         // do something else<br>     }<br> } else {<br>     if ($course-&gt;active) {<br>         // do something<br>     } else {<br>         // do something else<br>     }<br> }</pre><p>As you can see, this approach quickly becomes unmanageable. Each new version adds more complexity to the code.</p><p>A more common approach is to use feature flags or to split each version into a different folder / class:</p><pre> - api<br>      - v1<br>          - CourseController.php<br>      - v2<br>          - CourseController.php</pre><p>While this can work, it does not scale well with small incremental changes. Furthermore, you tend to lose track of the latest state of the application.</p><h3>APIs as Infrastructure</h3><p>To solve these problems, we decided to treat APIs as infrastructure, an approach first introduced by <a href="https://stripe.com/blog/api-versioning">Stripe</a> back in 2017.</p><p>The idea is simple. Your code always reflects the latest version of your API. Each time you need to introduce a change, you update your code to reflect the new requirements. Then, you add a VersionChange that lets you go back in time.</p><p>Instead of branching the system into multiple versions, we move the system forward and let transformations pull older versions backward. This keeps change concentrated in one place instead of fragmented across versions.</p><p>Let us build on our previous example. We have a CourseEntity that had an active field in V1 and introduced a status field in V2. 
We would update our code to reflect the latest version, which means that our code would use only the status field.</p><p>To make sure we won’t break previous versions, we add a VersionChange that restores the active field from the status field.</p><pre> class ConvertCourseStatusToActiveChange {<br>    private string $description = &#39;The active field was replaced by the status field for ...&#39;;<br> <br>    public function apply(CourseEntity $course): CourseEntity {<br>        $course-&gt;active = $course-&gt;status === &#39;active&#39;;<br>        return $course;<br>    }<br> }</pre><p>When a request comes in, we check the requested version and apply all the necessary changes to bring the data back to that version.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/927/1*OQse-TXrKKk8svkNXzvFlA.png" /></figure><h3>Here’s why we like this approach:</h3><ul><li>The code always reflects the latest version of the API.</li><li>Small incremental changes are easy to implement.</li><li>Each version change has a mandatory description that explains why the change was necessary.</li><li>We can freeze old versions without duplicating code.</li></ul><p>Treating APIs as infrastructure lets us evolve safely, incrementally, and without fear of breaking the past.</p><h3>Keeping Versions Aligned With Reality</h3><p>Most API versioning schemes assume that products evolve through major releases. Versions like example.com/api/v1/courses and example.com/api/v2/courses work well when changes arrive in large batches.</p><p>The problem is that major releases require coordination across departments and strict lifecycle planning. More importantly, they contradict everything we have said so far: $small_incremental_changes !== $major_release.</p><p>Small, steady changes are easier for consumers to adopt. Ideally, the versioning scheme should reflect that and communicate something meaningful to them.</p><p>Date-based versioning ( YYYY-MM-DD ) does exactly that. 
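A convenient property of date-based versions is that YYYY-MM-DD strings compare lexicographically in chronological order, so selecting which changes to apply for an older client is a simple filter over a registry of changes. Here is a minimal sketch of that request-time pipeline; the class and method names ( VersionChange , VersionChangeRegistry , introducedIn ) are illustrative assumptions, not the actual implementation described in the article.

```php
<?php

// Sketch only: names and shapes are assumptions for illustration.
interface VersionChange
{
    // The first API version (a date) that no longer needs this change.
    public function introducedIn(): string;

    // Transform a latest-version resource back toward an older shape.
    public function apply(array $resource): array;
}

class ConvertCourseStatusToActiveChange implements VersionChange
{
    public function introducedIn(): string
    {
        return '2025-11-17';
    }

    public function apply(array $resource): array
    {
        // Restore the legacy `active` field from the newer `status` field.
        $resource['active'] = ($resource['status'] === 'active');
        unset($resource['status']);
        return $resource;
    }
}

class VersionChangeRegistry
{
    /** @param VersionChange[] $changes Ordered newest first. */
    public function __construct(private array $changes) {}

    public function transformFor(string $requestedVersion, array $resource): array
    {
        foreach ($this->changes as $change) {
            // YYYY-MM-DD strings compare chronologically, so a plain
            // string comparison decides whether the change applies.
            if ($requestedVersion < $change->introducedIn()) {
                $resource = $change->apply($resource);
            }
        }
        return $resource;
    }
}

// A client pinned to 2024-01-01 still sees the legacy shape:
$registry = new VersionChangeRegistry([new ConvertCourseStatusToActiveChange()]);
$legacy = $registry->transformFor('2024-01-01', ['status' => 'active']);
// $legacy is ['active' => true]; a 2025-12-01 client would get
// ['status' => 'active'] untouched.
```

Because the code itself only ever produces the latest shape, every older shape is reconstructed on the way out, which is the core of the approach described above.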
Each version corresponds to a real point in time, making the incremental nature of our changes visible and predictable. It aligns the version history with how the API actually evolves, instead of forcing artificial release boundaries.</p><h3>Design First, Change Maybe</h3><p>Code will always carry some technical debt. Having a framework that supports change does not mean that we can avoid thinking about design. Some changes will always ripple through the system. We try to strike a balance between over-engineering and pragmatism.</p><p>What matters is creating an environment where change is expected, guided, and safe. A structure that lets us introduce new behavior incrementally, without rewriting the past. An approach where old versions can be frozen with confidence, and new versions can evolve without fear.</p><p>By treating APIs as long-lived infrastructure rather than short-lived features, we make this balance possible. We keep the codebase aligned with the current truth of the system, we document why each version exists, and we ensure that past behavior stays accessible without forcing duplication or hacks.</p><h3>Bonus</h3><p>While we were experimenting with this approach, we found an open-source project that implements in FastAPI what Stripe describes in their blog post. Having a concrete implementation really helped us implement this approach in PHP.</p><p>You can check it out here: <a href="https://github.com/zmievsa/cadwyn">cadwyn</a>.</p><p><em>Originally published at </em><a href="https://blog.talentlms.io/posts/apis-as-infrastructure/"><em>https://blog.talentlms.io</em></a><em> on November 17, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=43f49d6d653c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Getting used to our problems]]></title>
            <link>https://medium.com/@aggelosbellos/getting-used-to-our-problems-af7de4d3d85b?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/af7de4d3d85b</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[teamwork]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[engineering]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Wed, 09 Jul 2025 09:50:14 GMT</pubDate>
            <atom:updated>2025-07-09T09:51:12.081Z</atom:updated>
            <content:encoded><![CDATA[<h4>How engineers inherit dysfunction and stop asking “Why?”</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9V1UR1BloQ9j7qXW" /></figure><p>There’s a famous behavioral experiment involving five monkeys, a ladder, and a bunch of bananas.</p><p>The bananas are placed at the top of the ladder. Whenever a monkey climbs up to get them, all five monkeys are sprayed with cold water. Over time, they learn: <strong>no one climbs the ladder</strong>. If one monkey tries, the others pull it down just to avoid punishment.</p><p>Then, one monkey is replaced. The new monkey, unaware of the rule, goes for the bananas. The others beat it down. One by one, each monkey is replaced. Eventually, all five are new. None of them have ever been sprayed.</p><p>And yet, no one climbs the ladder. If one tries, the others attack.</p><p>No one knows why.<br>It’s just how things are done.</p><h3>How This Happens in Engineering</h3><p>This happens in software teams all the time.</p><p>We inherit workarounds, rituals, and silent rules:</p><ul><li>Never deploy after 3 p.m.</li><li>Don’t touch that config file.</li><li>The deploy process requires these manual steps.</li></ul><p>Over time, these aren’t seen as weird. They’re just how the system works. And if a new engineer asks “Why?” we give them a shortcut or a script. Not an answer.</p><p>We’ve stopped climbing the ladder.<br>And most of us don’t even remember the cold water.</p><blockquote>The longer you’re in a system, the more its flaws feel like furniture</blockquote><h3>The Veteran’s Blind Spot</h3><p>Engineers who’ve been with a system for years have deep knowledge and deep blind spots.</p><p>They know which parts of the codebase are dangerous. They have survived outages, refactors, rewrites. 
But that survival often makes them too comfortable with dysfunction:</p><ul><li>They stop noticing flaky tests because they’re used to rerunning them.</li><li>They stop fighting tech debt because they’ve already adapted to it.</li><li>They stop advocating for fixes because they’ve internalized the pain.</li></ul><p>Eventually, they start teaching the next generation how to survive, not how to improve.</p><h3>From Broken to Policy</h3><p>There’s a moment when a system’s flaws stop being technical and become cultural.</p><p>We stop saying, “This is broken.”<br>We start saying, “This is how we do things.”</p><p>Examples you’ve probably seen:</p><ul><li>Broken deploy process? Add a checklist and call it a protocol.</li><li>Legacy API? Add five more client-side hacks.</li><li>Dangerous shell script? Rename it run_this_first.sh and pin it to Slack.</li></ul><p>We start to write documentation that doesn’t fix pain; it just <em>translates</em> it.</p><p>When broken becomes normal, you’ve stopped evolving.</p><h3>Why We Accept the Pain</h3><p>We don’t tolerate this stuff because we’re lazy. We do it because it feels safer than change.</p><ul><li><strong>Fear</strong>: “If we touch it, we might break everything.”</li><li><strong>Fatigue</strong>: “We’ve tried fixing it. It’s not worth the fight.”</li><li><strong>Familiarity</strong>: “We know how to work around it.”</li><li><strong>Bias</strong>: “If it’s survived this long, it can’t be that bad.”</li></ul><p>These are natural reactions. But when everyone in the team has them, nothing changes. We just pass the pain downstream, from engineer to engineer.</p><h3>Signs You’ve Inherited Dysfunction</h3><p>Think your team might be stuck in monkey-mode? 
Look for these:</p><p>✅ Pain that everyone works around, but no one logs<br>✅ Tribal knowledge passed through Slack, not docs<br>✅ Engineers who solve symptoms, not causes<br>✅ Retros with no mention of chronic friction<br>✅ “That’s just how it is” used to end discussions</p><p>If your team feels “comfortable”, ask: <strong>comfortable with what?</strong></p><h3>How to Break the Pattern</h3><p>You don’t need a heroic rewrite.<br>You just need people who are willing to ask “Why?”</p><h3>🔁 Rotate Ownership</h3><p>Let fresh eyes tackle old systems. New team members see what veterans overlook.</p><h3>🛠 Budget for Cleanup</h3><p>Spend 10–20% of each sprint addressing long-ignored pain. Convert tech debt to <em>team debt.</em></p><h3>🧭 Reward Curious Engineers</h3><p>Celebrate people who question process. Normalize dissent. Promote refactoring, not just patching.</p><h3>🧼 Make Pain Visible</h3><p>Write honest documentation. Label things as fragile, flaky, or painful. Let visibility lead to action.</p><h3>Rethinking the Ritual</h3><p>Culture is what we normalize.</p><p>That’s why this article isn’t about broken code, it’s about broken behavior. The longer we let dysfunction become invisible, the harder it is to see the opportunities for change.</p><p>The monkeys in the experiment weren’t irrational.<br>They were just following precedent.</p><p>Engineers aren’t irrational either.<br>But if we never ask “Why?”, we’ll keep beating down the ones who try to fix things.</p><blockquote>You don’t have to be the engineer who built the ladder.</blockquote><blockquote>But you can be the one who climbs it again.</blockquote><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=af7de4d3d85b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Team Topologies Gone Wrong: When Platform Teams Forget Their Customers]]></title>
            <link>https://medium.com/@aggelosbellos/team-topologies-gone-wrong-when-platform-teams-forget-their-customers-5b26cf6ada8e?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/5b26cf6ada8e</guid>
            <category><![CDATA[platform-engineering]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[team-topologies]]></category>
            <category><![CDATA[engineering-mangement]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Sun, 15 Jun 2025 23:23:40 GMT</pubDate>
            <atom:updated>2025-06-15T23:23:40.663Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HoUBlls89Efjm3Ul" /></figure><blockquote>A platform team’s job is not to enforce standards; it’s to reduce friction.</blockquote><p>Team Topologies introduced a powerful framework for structuring engineering organizations around flow, clarity, and responsibility. Among its four team types, <strong>platform teams</strong> are designed to <strong>support stream-aligned teams</strong>, reducing their cognitive load by providing reusable, internal services and tooling.</p><p>But if platform teams are all about enablement, how do they end up slowing everyone down?</p><h3>Thinking the Platform Is the Product</h3><p>When platform teams grow distant from the developers they serve, they begin to act like there are no customers at all. This illusion fuels isolation, prescriptive thinking, and misalignment. The platform becomes detached from the actual needs of its users. Ironically, those users are just down the hall (or a Slack channel away).</p><p>Instead of accelerating delivery, the platform becomes another source of drag.</p><p>Common symptoms include:</p><ul><li>Shipping features based on roadmap fantasies, not real requests</li><li>Focusing on code, not usability or feedback</li><li>Measuring success by delivery, not adoption</li></ul><blockquote><em>If nobody uses it, it doesn’t matter that it’s beautiful.</em></blockquote><p>✅ <strong>Fix it:</strong> Treat your work like a product. Talk to users. Validate ideas. Measure satisfaction. Iterate.</p><h3>Operating in the Shadows</h3><p>Ideally, enablement should happen frictionlessly. But without proper communication, that’s just impossible. However, some teams have cracked the code in a funny way: they have completely removed themselves from the process.</p><p>They vanish into isolation and avoid any ( proper ) communication. Then, on the day of “the big release”, something magical happens. 
A Slack message!</p><blockquote>- Hello team, the new API that you were never asked how it should work or how it should integrate with your code is finally here. You can use it by following…</blockquote><blockquote>- Wait… who are these people again?</blockquote><p>The result?</p><ul><li>Surprise rollouts that break workflows.</li><li>Features teams don’t want or understand.</li><li>No feedback loops.</li></ul><blockquote>Internal Marketing = Alignment<br><em>An invisible team isn’t a humble team. It’s a disconnected one.</em></blockquote><p>✅ <strong>Fix it:</strong> Build in the open. Share your roadmap. Market your features internally. Visibility builds trust.</p><h3>Shifting from Enablers to Enforcers</h3><p>Being in a position where others come to you for help is a powerful, yet dangerous, one. What was once just a structure for flow has become a structure for control. Often, the drift doesn’t happen due to malice but due to a lack of communication and teaching skills. It is easier to tell someone what to do than to make them understand why.</p><p>While their intention is consistency, the result is resistance.</p><p>Stream-aligned teams start:</p><ul><li>Building shadow tools.</li><li>Delaying adoption.</li><li>Avoiding the platform team entirely.</li></ul><blockquote><em>Mandates kill trust. Enablement builds it.</em></blockquote><p>✅ <strong>Fix it:</strong> Offer opinionated defaults, not hard rules. 
If your platform is truly helpful, teams will <em>want</em> to use it.</p><h3>Solving for Hypotheticals, Not Humans</h3><p>Too many platform teams try to future-proof everything:</p><ul><li>Designing for edge cases that may never come.</li><li>Building complex frameworks “just in case.”</li><li>Abstracting problems no one actually has.</li></ul><p>Meanwhile, their customers struggle with basic pain points: slow CI, broken deployments, confusing APIs.</p><blockquote><em>Elegance is worthless if it ignores reality.</em></blockquote><p>✅ <strong>Fix it:</strong> Walk in your customers’ shoes. Sit in their standups. Observe their pain. Then build what they need, not what looks good on a whiteboard.</p><h3>Measuring Output, Not Outcomes</h3><p>If you’re measuring success by:</p><ul><li>Number of microservices created,</li><li>APIs published,</li><li>or internal uptime alone…</li></ul><p>You’re missing the point.</p><p>Platform teams should care about:</p><ul><li>Time to onboard a new service.</li><li>Number of support questions.</li><li>Developer satisfaction.</li><li>Feature adoption rate.</li></ul><p>✅ <strong>Fix it:</strong> Instrument everything. Not just logs, but feedback. Track usage, not just deployment.</p><h3>No Clear Interaction Model</h3><p>Team Topologies stresses the importance of <strong>interaction modes</strong>: X-as-a-Service, collaboration, facilitation. Without them, platform teams default to either building in isolation or pairing endlessly without strategy.</p><p>When teams don’t know <em>how</em> to engage with you, they usually won’t.</p><p>✅ <strong>Fix it:</strong> Make engagement easy and explicit:</p><ul><li>Offer onboarding sessions.</li><li>Run workshops.</li><li>Publish “How to work with us” guides.</li></ul><blockquote><em>The easier you are to work with, the more valuable you become.</em></blockquote><h3>You’re Not Building a Platform. You’re Building Flow</h3><p>At the center of the model are <strong>stream-aligned teams</strong>. 
Everyone else, including platform teams, exists to <strong>support them as enablers</strong>. That means your success is <strong>measured by their success</strong>.</p><p>A great platform team enables stream-aligned teams to:</p><ul><li>Deliver faster,</li><li>Struggle less,</li><li>And spend more time on product.</li></ul><p>To get there, you must:</p><ul><li>Treat developers as your customers.</li><li>Stay close to their reality.</li><li>Deliver value early and often.</li><li>Create feedback loops.</li><li>And never forget: <strong>you don’t exist to control. You exist to empower.</strong></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5b26cf6ada8e" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[PHPStan: How to test if a file exists when requiring it]]></title>
            <link>https://medium.com/@aggelosbellos/static-analysis-how-to-test-if-a-file-exists-when-requiring-it-in-php-4755bcfc8337?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/4755bcfc8337</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[php]]></category>
            <category><![CDATA[php-developers]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[tech]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Fri, 02 Aug 2024 01:04:39 GMT</pubDate>
            <atom:updated>2024-09-02T02:47:53.978Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*es7V6XEEjwrNv9Iq_i-byQ.png" /></figure><blockquote><strong>Update:</strong> The rule was merged into PHPStan itself! You can find the PR here: <a href="https://github.com/phpstan/phpstan-src/pull/3294">phpstan/phpstan-src#3294</a></blockquote><p>During a refactoring in one of the projects that I work on, we threw away a few legacy classes that were mostly unused. Since the project is a legacy one, the &quot;autoloader&quot; for the classes was actually a require_once . Unfortunately for us, we missed a few of these statements in some of our scripts and they crashed the application when called. The reason is that require_once throws a fatal error if the file does not exist.</p><p>Besides the human factor, what went wrong and how do we prevent this from happening again?</p><p>We had all these fancy tools in our toolchain ( PHPStan, CodeSniffer ), so how did we end up with such a trivial bug?</p><p>Well, it turns out that neither of them actually evaluates the arguments passed to the require , include statements 😱</p><h3>Let there be a custom rule: PHPStan 1:3</h3><p>So, we found out why we didn’t catch it ( it never worked, duh ). Now let’s find a way to make sure it never happens again.</p><blockquote>You can skip the article and dive right into the code: <a href="https://github.com/Bellangelo/phpstan-require-file-exists">https://github.com/Bellangelo/phpstan-require-file-exists</a></blockquote><p>The best candidate for such a thing seems to be a custom rule in PHPStan. To make PHPStan understand our rule, we just need to implement its interface:</p><pre>use PHPStan\Rules\Rule;<br><br>class RequireFileExistsRule implements Rule {}</pre><p>After this, we need to specify what node type our Rule listens to through the getNodeType() method. Fortunately for us, PHPStan and PhpParser have done amazing work to unify several node types of PHP under a single node. 
In our case, require , require_once , include , include_once are all under the Include_ class.</p><p>Here is how our code transforms:</p><pre>use PHPStan\Rules\Rule;<br>use PhpParser\Node\Expr\Include_;<br><br>class RequireFileExistsRule implements Rule {<br><br>  public function getNodeType(): string<br>  {<br>    return Include_::class;<br>  }<br><br>}</pre><p>Now, the “only” missing part is to accept the nodes we requested. This happens through the processNode(Node $node, Scope $scope) method:</p><pre>public function processNode(Node $node, Scope $scope): array<br>{<br>  if ($node instanceof Include_) {<br>    $filePath = $this-&gt;resolveFilePath($node-&gt;expr, $scope);<br>    if ($filePath !== null &amp;&amp; !file_exists($filePath)) {<br>      return [<br>        RuleErrorBuilder::message(<br>          sprintf(<br>            &#39;Included or required file &quot;%s&quot; does not exist.&#39;,<br>            $filePath<br>          )<br>        )-&gt;build(),<br>      ];<br>     }<br>    }<br><br>    return [];<br>  }</pre><p>Let’s go line-by-line to see what it does.</p><ol><li>First, we check that the passed $node is an instance of Include_ .</li><li>Then, we resolve the file path ( through magic for now ) by passing the inner expression of the Include_ .</li><li>If the $filePath is not null and the file does not exist, we return an error. We make $filePath null whenever we cannot evaluate it.</li></ol><p>Of course, all the magic happens inside resolveFilePath . We need to find out what nodes an Include_ can contain and handle every possible case. It turns out that almost any node can exist inside an Include_ .</p><h4>String_</h4><p>The easiest node to start with is, of course, a string. Thankfully, it doesn’t require any special handling:</p><pre>if ($node instanceof String_) {<br>   return $node-&gt;value;<br>}</pre><h4>Dir</h4><p>The most common way of creating absolute paths: __DIR__ . 
Once again, PhpParser is on our side and has a node for it. The only catch is that it doesn’t provide any value that we can use to calculate the filename. Instead, we need to use the Scope that PHPStan provides us with:</p><pre>if ($node instanceof Dir) {<br>   return dirname($scope-&gt;getFile());<br>}</pre><h4>Concat</h4><p>An unexpected but real case. When you use concatenation, the parser converts it into a different node type. This node type can then contain almost anything. Sounds like a job for recursion:</p><pre>if ($node instanceof Concat) {<br>   $left = $this-&gt;resolveFilePath($node-&gt;left, $scope);<br>   $right = $this-&gt;resolveFilePath($node-&gt;right, $scope);<br>   <br>   if ($left !== null &amp;&amp; $right !== null) {<br>      return $left . $right;<br>   }<br>}</pre><h3>Can we do better?</h3><p>Obviously, it is a lot more difficult to handle variables, methods, functions, and anything else more dynamic than our previous cases. Class constants, though, might be a good candidate. 
Let’s try to check if the Node is a class constant and fetch its value through reflection:</p><pre>if ($node instanceof ClassConstFetch) {<br> return $this-&gt;resolveClassConstant($node);<br>}</pre><pre>private function resolveClassConstant(ClassConstFetch $node): ?string<br>{<br>  if ($node-&gt;class instanceof Node\Name &amp;&amp; $node-&gt;name instanceof Node\Identifier) {<br>    $className = (string) $node-&gt;class;<br>    $constantName = $node-&gt;name-&gt;toString();<br><br>    if ($this-&gt;reflectionProvider-&gt;hasClass($className)) {<br>      $classReflection = $this-&gt;reflectionProvider-&gt;getClass($className);<br><br>      if ($classReflection-&gt;hasConstant($constantName)) {<br>         $constantReflection = $classReflection-&gt;getConstant($constantName);<br>         $constantValue = $constantReflection-&gt;getValue();<br>         <br>         if (is_string($constantValue)) {<br>           return $constantValue;<br>         }<br>      }<br>    }<br>  }<br>  <br>  return null;<br>}</pre><p>There are other cases as well where we could do a better job, such as global constants ( define ), but you are more than welcome to open a PR for any case you think is missing: <a href="https://github.com/Bellangelo/phpstan-require-file-exists">https://github.com/Bellangelo/phpstan-require-file-exists</a></p><p>Don’t forget to star the project if you like it ⭐️</p><p>You can always install it by running:</p><pre>composer require bellangelo/phpstan-require-file-exists</pre><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4755bcfc8337" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Modular Monoliths: Microservices without a cost?]]></title>
            <link>https://medium.com/@aggelosbellos/modular-monoliths-microservices-without-a-cost-623bb0625816?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/623bb0625816</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[microservices]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Mon, 27 May 2024 02:17:07 GMT</pubDate>
            <atom:updated>2024-05-27T02:17:07.629Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*4qwLtS61rqVAP9t3hVrmBQ.jpeg" /></figure><p>Even in recent years, the Microservices architecture still has a certain “coolness” attached to it. While the term premiered 13 years ago, the promises it brings still hold up the hype around it. However, as with most solutions, Microservices are often adopted for the wrong reasons or, to put it better, for problems that teams do not have ( yet ). Is the hype justified, or are we overlooking simpler, more effective solutions?</p><h3>The promise of Microservices</h3><p>Microservices are all about scaling, whether it is code or people. This architecture lets you manage and deploy certain “aspects” of your application independently of the others, converting your application into a collection of mini / micro services. The smaller size of each service forces a reduction in its scope. Sometimes, this is the goal itself. A reduced scope comes with increased manageability. But by whom? Of course, by people. For better or worse, the qualities we value most ( readability, reusability, etc. ) ultimately depend on the people who interact with the code.</p><h3>Local complexity vs Global complexity</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*g7yJRw9xfwZt_sVtFTNibg.png" /></figure><p>The race to optimize each new service is not a simple marathon but a triathlon. To name a few of the main challenges:</p><ol><li><strong>Inter-service communications:</strong> What do you mean the network is down? Should I retry the request? Wait… did I just DDoS myself?</li><li><strong>Transactions:</strong> Hmm… for some reason, transactions stopped working. What did you say? Saga pattern, orchestration? Stop talking nonsense, we are engineers, not musicians.</li><li><strong>Dependency management / Deployment: </strong>Finally, every service can be deployed independently. 
What do you mean, Service C needs to be deployed with a specific version of Service D?</li></ol><p>Not all of these challenges can be seen when you look at just one service. On our little local island, we don’t care about outsiders. Although we should. The Microservices architecture splits the application by bringing another player to the game: the network. Although appealing, reliable communication between services is challenging due to potential network failures and message delays.</p><p>Part of the network problem is transactions. Again, transactions inside our local island will work perfectly fine. The problem is distributed transactions. Managing transactions across distributed services often requires complex patterns like Sagas and Orchestration, which can add significant overhead and complexity.</p><p>Last but not least, we have independent deployment. While it is a key benefit of Microservices, inter-service dependencies can complicate the process, sometimes necessitating synchronized deployments.</p><p>Microservices bring many things, and one of them is complexity. We all want our service to be fully optimized, but we cannot do it without considering the complexity of the whole system.</p><h3>Introducing Modular Monoliths</h3><p>Modular Monoliths take a local perspective on the Microservices’ promise.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*L03mfOtL3f77JXDp9zaPOw.gif" /></figure><h4>Network</h4><p>Since most things happen locally, we can immediately remove the network. Without the network, the problem of inter-service communication disappears or, depending on the application, is at least reduced.<br>So far, this sounds exactly like a normal monolith. The difference is that we will be keeping the services.</p><h4>Modules</h4><p>A Modular Monolith is split into Modules. Each Module should work much like a service does in a Microservices architecture. 
This means:</p><ol><li><strong>Encapsulation:</strong> Each module encapsulates its own functionality, keeping its internal implementation hidden from other modules. This ensures that changes within a module do not affect others, promoting stability and ease of maintenance.</li><li><strong>Defined Interfaces:</strong> Modules communicate with each other through well-defined interfaces or APIs. This clear separation of concerns allows for more organized and manageable code, making it easier to understand and develop.</li><li><strong>Independent Development:</strong> Teams can work on different modules independently, similar to how they would handle Microservices. This parallel development can speed up the overall development process and reduce bottlenecks.</li></ol><h4><strong>Dependency management / Deployment</strong></h4><p>While Modules operate like services, they still co-exist in the same codebase. This means we can share resources between them and, most importantly, handle them as a single deployment unit, completely eradicating the need for a complex deployment system.</p><h3>No silver bullet</h3><p>Modular Monoliths sound really great when compared to Microservices. The truth is that there is no silver bullet when we are talking about architecture. The only thing that separates one Module from another is the folder structure. Everything else is shared, creating resource contention. <br>If everything is shared then so is scaling. Unlike Microservices, which can be scaled independently based on their specific demands, a modular monolith requires scaling the entire application.<br>Bypassing the filesystem boundary is something that anyone can do. Even with modularization, there is a risk of modules becoming tightly coupled over time. 
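</p><p>To make the interface idea concrete, here is a minimal Python sketch (the module and function names are hypothetical): nothing but discipline stops a caller from importing a module’s internals instead of its defined interface.</p>

```python
# Hypothetical "billing" module of a Modular Monolith.
# Only charge() is the module's defined interface; _record() is internal.

def charge(user_id: int, cents: int) -> str:
    """Public entry point: other modules should only ever call this."""
    return _record(user_id, cents)

def _record(user_id: int, cents: int) -> str:
    # Internal detail: free to change as long as charge() keeps its contract.
    # A caller that imports _record() directly couples itself to billing internals.
    return f"charged user {user_id}: {cents} cents"

print(charge(42, 500))
```

<p>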
This can happen due to shared code, common libraries, or direct references, making it harder to maintain a clean separation of concerns.</p><h3>Conclusion</h3><p>Choosing the right software architecture is crucial for the success of any project. The remarkable scalability and flexibility that Microservices offer do not always justify the complexity they introduce. Conversely, Modular Monoliths offer modularity while keeping a simple single deployment unit, yet they still face challenges such as resource contention and scaling limitations.</p><p>The key to selecting the best architecture lies in understanding the specific context and needs of your project. Consider factors such as team size, expertise, project scale, and long-term maintenance when making your decision. Sometimes, the simplicity and efficiency of a modular monolith can be the perfect fit for a project’s requirements, while other times, the granularity and scalability of Microservices are necessary to meet the demands of a growing application.</p><p>If you are starting a new project, choosing a Microservice approach will likely be like shooting yourself in the foot. Most projects require just a Monolith, and an early-stage project is one of them. Ultimately, the most important aspect is to make an informed choice that best supports your team’s capabilities and your project’s success.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=623bb0625816" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Database Connection Pooling: Optimizing Database Interactions for Performance and Scalability]]></title>
            <link>https://medium.com/@aggelosbellos/database-connection-pooling-optimizing-database-interactions-for-performance-and-scalability-62d95a1f7b4c?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/62d95a1f7b4c</guid>
            <category><![CDATA[php]]></category>
            <category><![CDATA[database]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[architecture]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Thu, 04 Jan 2024 01:58:47 GMT</pubDate>
            <atom:updated>2024-01-04T01:58:47.953Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gxEbme03Mn4qbWCzkmbL8g.png" /></figure><p>Efficiently managing database interactions is crucial for performance and scalability. A common element in database interactions is the opening and closing of connections. Could there be a way to streamline this process by sharing connections across multiple requests?</p><h3>Basics of Database Connection Pooling</h3><h4>What is Database Connection Pooling?</h4><p>Database connection pooling refers to the practice of maintaining a cache of database connection objects. This cache, or “pool,” allows for the reuse of connections, eliminating the overhead of establishing a new connection with each database interaction.</p><h4>How It Works</h4><p>When an application needs to execute a database query, it retrieves a connection from the pool rather than opening a new one. After the operation, the connection is returned to the pool for future use. This process significantly reduces the time and resources spent on opening and closing connections.</p><h3>Advantages of Connection Pooling</h3><h4>Performance Improvement</h4><p>By reusing existing connections, connection pooling minimizes the latency associated with establishing new connections, leading to faster query execution.</p><h4>Resource Management</h4><p>Connection pools help in efficient resource management by controlling the number of active connections, thus preventing database overload.</p><h4>Scalability</h4><p>With connection pooling, applications can handle more concurrent database operations, enhancing scalability.</p><h3>Implementing Connection Pooling</h3><h4>Choosing a Pooling Library</h4><p>Selecting an appropriate connection pooling library is pivotal. 
Each library has its unique features and configurations.</p><h4>Configuration Parameters</h4><p>Key parameters to configure in a connection pool include:</p><ul><li><strong>Maximum and Minimum Pool Size:</strong> Determines the pool’s capacity.</li><li><strong>Connection Timeout:</strong> The maximum time to wait for a connection from the pool.</li><li><strong>Idle Connection Test Period:</strong> Frequency of checking idle connections.</li></ul><h4>Best Practices</h4><ul><li><strong>Monitor Your Pool:</strong> Regular monitoring can help in fine-tuning the pool size and parameters.</li><li><strong>Exception Handling:</strong> Implement robust exception handling to deal with scenarios where connections are unavailable.</li><li><strong>Resource Cleanup:</strong> Ensure that connections are properly closed and returned to the pool.</li></ul><h3>Tailoring Pooling Strategies to Language Environments</h3><h3>Shared Process Languages</h3><p>In programming environments where multiple requests are handled within the same process, such as in Java or .NET applications, the implementation of connection pooling can be directly integrated within the application. 
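</p><p>As a rough illustration of such an in-process pool, here is a minimal Python sketch (it uses sqlite3 only so the example is self-contained; real pooling libraries add validation, aging, and tuning):</p>

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal fixed-size pool sketch; illustrative only."""

    def __init__(self, size: int, dsn: str = ":memory:"):
        # queue.Queue is thread-safe, so the pool can be shared across requests
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self, timeout: float = 5.0) -> sqlite3.Connection:
        # Blocks up to `timeout` seconds instead of opening a new connection
        return self._pool.get(timeout=timeout)

    def release(self, conn: sqlite3.Connection) -> None:
        # Return the connection for future reuse
        self._pool.put(conn)

pool = ConnectionPool(size=2)
conn = pool.acquire()
result = conn.execute("SELECT 1").fetchone()[0]
pool.release(conn)
```

<p>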
This is because these environments typically run a continuous process that can maintain state (like a connection pool) across multiple requests.</p><h4>How It Works in Shared Process Languages</h4><ul><li><strong>Shared Memory Space:</strong> Since all requests share the same memory space, a connection pool can be maintained as a global or singleton resource accessible by all parts of the application.</li><li><strong>Long-Lived Processes:</strong> These environments usually have long-lived processes (like a web server or application server), which makes it feasible to keep a pool of connections alive and readily available for incoming requests.</li></ul><h4>Benefits in Shared Process Environments</h4><ul><li><strong>Immediate Access:</strong> Connections are immediately available to any request processed by the application, reducing the overhead of connection creation and disposal.</li><li><strong>Resource Efficiency:</strong> This approach maximizes the efficient use of database connections, as the pool is centrally managed and optimized by the application itself.</li></ul><h4>Implementation Considerations</h4><ul><li><strong>Thread Safety:</strong> Ensure that the connection pool is thread-safe, as it will be accessed by multiple concurrent requests.</li><li><strong>Configuration Tuning:</strong> Optimize the pool size and other parameters based on the application’s load and database usage patterns.</li><li><strong>Monitoring and Management:</strong> Continuously monitor the pool’s performance and health. Implement management features like connection validation and aging to maintain pool efficiency.</li></ul><h3>Process-Per-Request Languages</h3><p>In languages where each request runs in its own process (such as PHP or Python), an external service is needed to manage the pooling. These services vary based on the database system in use. For PostgreSQL, <a href="https://www.pgbouncer.org/">PgBouncer</a> is a widely used and actively maintained solution. 
For MySQL and its derivatives like MariaDB, <a href="https://www.proxysql.com/">ProxySQL</a> is a highly efficient choice.</p><h3>Conclusion</h3><p>Database connection pooling is an essential strategy for enhancing the performance and scalability of applications. By understanding its principles and implementing it effectively, software engineers can achieve significant improvements in database interaction efficiency. Embracing these practices not only optimizes resource usage but also paves the way for more robust and scalable application architectures.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=62d95a1f7b4c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Database Replication: Solving the Time Machine Issue]]></title>
            <link>https://medium.com/@aggelosbellos/database-replication-solving-the-time-machine-issue-c654b5bf52da?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/c654b5bf52da</guid>
            <category><![CDATA[database]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Thu, 04 Jan 2024 00:47:37 GMT</pubDate>
            <atom:updated>2024-01-04T09:20:24.508Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UA7VkLVqnccxHVRO60VmDQ.png" /></figure><p>Database replication is a common practice for scaling your application. It involves deploying multiple copies of the same database to ensure data availability and reliability. While this horizontal scaling is simple in theory, it still hides a lot of complexity. Since a request can hit a different replica each time, it can lead to inconsistencies and confusion. <br>If you developed a system like this, congrats. You just, accidentally, made a time machine!</p><h3>Understanding Database Replication</h3><p>Before diving further into the “time machine” issue, let’s better understand what database replication is. At its core, database replication is the process of copying and maintaining database objects, like tables, in multiple locations. This can happen in real-time or in near-real-time.</p><p>The operation can be:<br><strong>Synchronous:</strong> This means that for the operation to be considered completed, the changes must be written to all the replicas.</p><p><strong>Asynchronous</strong>: The changes are synchronised at a later time, allowing for potential temporal disparities.</p><h4>Why Replicate?</h4><p>Replication serves several key purposes:</p><ol><li><strong>High Availability: </strong>By having multiple copies, we can ensure that the data remains accessible. In the event of a hardware failure, a network issue, or other disruptions, another replica can serve the request.</li><li><strong>Load Balancing: </strong>Multiple workers mean less work for each worker. Distributing the load across multiple servers reduces the load that reaches each server and, as such, enhances performance.</li><li><strong>Disaster Recovery: </strong>In catastrophic events, having data replicated in multiple geographic locations can safeguard against data loss.</li></ol><p>However, replication comes with some challenges. 
The foremost among these is ensuring consistency across replicas. Systems that favor high availability and partition tolerance are the ones most commonly “hit” by this challenge.</p><h3>The Time Machine Issue</h3><p>In layman’s terms, the time machine issue emerges when different replicas contain data from different moments in time. This situation can lead to paradoxical scenarios where multiple identical requests result in different responses, despite the requests and the underlying queries being simultaneous. The root of this problem lies in replication lag — the time it takes for changes made to the primary database to propagate to its replicas.</p><h4>Impact on Users</h4><p>For users, this inconsistency can be baffling and frustrating. Imagine an e-commerce scenario where a customer clicks to view a product from the search results but lands on a “404 — Not Found” page. Such experiences erode trust in the system and its reliability.</p><h4>Tackling the Challenge</h4><p>Addressing the time machine issue requires a multi-faceted approach. Here are some strategies:</p><ol><li><strong>Improved Synchronization: </strong>Optimising the synchronization protocols can minimise replication lag and ensure the replicas are as up-to-date as possible.</li><li><strong>Read-Write Splitting with Smart Routing: </strong>Directing all write operations to a primary node and reads to replicas, coupled with intelligent routing that considers data freshness, can mitigate the risks of reading stale data. The database itself might even be able to handle this by marking which data are “ready” to be served. This combines an asynchronous approach with a synchronous mechanism.</li><li><strong>User-Centric Solutions</strong>: I am sure you have heard that session stickiness is a bad thing, but what about database connection stickiness? Connection stickiness can ensure that a user will hit the same replica in all of their requests. 
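<br>For example, a minimal Python sketch (the replica hostnames are hypothetical; crc32 is used because it hashes the same way in every process, unlike the built-in hash() for strings):

```python
import zlib

# Hypothetical read-replica hostnames
REPLICAS = ["replica-1.db.internal", "replica-2.db.internal", "replica-3.db.internal"]

def replica_for(user_id: str) -> str:
    # A stable hash maps the same user to the same replica on every app server
    return REPLICAS[zlib.crc32(user_id.encode()) % len(REPLICAS)]
```
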
A simple implementation is a function that returns a replica based on the user’s id. Depending on the application’s needs, even this might not be enough. For example, if you have a SaaS product and each customer has its own portal, you might want your function to be based on the portal’s id. This way, you achieve consistency among all the users you “care” about. An even more complex scenario might be a social network, where each user is everywhere and deciding on a specific replica is, well.. complicated.</li></ol><h3>Conclusion</h3><p>The “time machine” issue in database replication is a complex problem that requires careful consideration and tailored solutions. By understanding the root of the problem and employing strategic measures based on your business needs, you can serve reliable and accurate data to your users.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c654b5bf52da" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Enhancing Microservices Architecture: Harnessing the Power of the Sidecar Pattern]]></title>
            <link>https://medium.com/@aggelosbellos/enhancing-microservices-architecture-harnessing-the-power-of-the-sidecar-pattern-8e4adaadc49c?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/8e4adaadc49c</guid>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[microservice-architecture]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Thu, 07 Dec 2023 23:48:57 GMT</pubDate>
            <atom:updated>2023-12-08T06:22:02.998Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DGH29IQd9e8626gKFKnyqQ.png" /></figure><p>A well designed service in software architecture must be modular and have a single responsibility. Yet, as services evolve and mature, the demand for new features increases. While some of them can be developed as a whole new service, there are cases where a more integrated approach is preferred. In this article, we will discuss how the sidecar pattern can be used to extend our services while preserving their resiliency.</p><h3>What is the Sidecar Pattern?</h3><p>The Sidecar pattern is nothing more than an additional container. This container runs alongside your main application’s container. Unlike traditional practices, the sidecar container shares resources with the application container, such as the file system and network. This is done so it can seamlessly enhance the application’s functionality. Most often, this happens without the knowledge of the application. Although the application might not know about the sidecar container, their lifecycles are shared. This means that they are deployed and handled together as a single application.</p><h3>Benefits</h3><ol><li><strong>Isolation of Responsibilities:</strong> The sidecar pattern can keep supportive functionalities outside of the main application. This means that features such as logging, monitoring and configuration can be handled by other teams (DevOps).</li><li><strong>Improved Scalability and Performance:</strong> By offloading certain aspects of the application to sidecar components, the main service can perform its core functions more efficiently. By having a “close friend” to handle the heavy load, the application itself can keep up with the demand without any disruption.</li><li><strong>Enhanced Security and Reliability:</strong> In the sidecar pattern, separate components handle critical security tasks such as authentication and encryption. 
This approach adds a robust layer of security without cluttering the primary service with complex security protocols. Moreover, it boosts system reliability by localizing and managing fault handling and recovery processes within these sidecars.</li><li><strong>Easier Upgrades and Maintenance:</strong> This is a side-effect of the isolation of responsibilities. Since each container is isolated, it can have different maintenance and upgrade cycles.</li><li><strong>Flexibility in Technology Choices:</strong> Running a new container abstracts us from the “problems” of the main service. This gives us the flexibility to choose the right language and tools for the right job.</li></ol><h3><strong>When not to use it?</strong></h3><ol><li><strong>Simple or Monolithic Applications:</strong> The Sidecar pattern is used to reduce complexity. For simple or truly monolithic applications, this pattern might introduce unnecessary complexity.</li><li><strong>Tight Coupling Needs:</strong> If the application requires tight coupling between its main functionality and supportive services (like logging or monitoring), using the sidecar pattern can be counterproductive. The pattern works best when there’s a clear distinction and independence between the main service and its sidecars.</li><li><strong>Limited Resources or Infrastructure:</strong> Running 2 containers as a single application requires orchestration. A limited infrastructure environment that doesn’t provide such options makes this pattern difficult to implement. Furthermore, containers require more resources than bare metal, which is another thing to consider.</li><li><strong>Performance-Sensitive Applications:</strong> If performance is a critical factor, a more integrated, in-process approach might be preferred. Sometimes, even the small extra hops that the data need to make over the local network might not be viable.</li><li><strong>Small Development Teams:</strong> An additional container is an additional thing to maintain. 
Small teams usually work better when they manage a monolithic application, and the sidecar pattern is quite the opposite.</li></ol><h3>Strategic advantage in software development</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dXj-rthCbGAbNjepIZSjaw.png" /></figure><p>The Sidecar pattern offers a significant strategic advantage in software development, particularly in the context of collaboration with DevOps teams. By adopting this pattern, developers can utilise plug-n-play modules designed by a specialised team. Solutions such as logging and monitoring can be standardised and re-used across different services and teams. This not only enables new services to be production-ready immediately but also returns the focus of the developers to the main application.</p><h3>Real world examples</h3><p>Having explored the Sidecar pattern in theory, let’s examine its real-world applications.</p><h4>Adding HTTPS to a legacy service</h4><p>Not all services are designed to accept HTTPS, especially in the legacy universe. Given how tightly coupled a legacy service can be, finding a solution that does not require any changes to the main service is always a good idea. <br>Imagine that your service listens to 127.0.0.1. This means that our service listens only for local connections. We can add an Nginx service through the Sidecar pattern that shares the same network as our main service. This way, our Nginx service can listen to outside traffic, in whatever protocol, and pass it to our legacy service. We have successfully modernised our legacy application by developing, basically, an SSL proxy.</p><h4>Auto-updating of configuration</h4><p>There are cases where we need a service to automatically update its configuration from a specific source. This is not usually the focus of the service itself, so we can outsource it to an external service. Taking advantage, again, of the Sidecar pattern, we can create a container that shares the same file system as our application’s container. 
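</p><p>As a sketch of what such a sidecar could run (the repo path, command, and interval below are hypothetical), the refresh loop is just a command executed on a timer:</p>

```python
import subprocess
import time

def sync_config(repo_dir: str, command: list, interval: float, iterations: int) -> int:
    """Run `command` (e.g. ["git", "pull"]) in `repo_dir` every `interval` seconds."""
    completed = 0
    for _ in range(iterations):
        subprocess.run(command, cwd=repo_dir, check=True)  # fail loudly on errors
        completed += 1
        time.sleep(interval)
    return completed

# In the sidecar's entrypoint this would loop forever, e.g.:
# while True: sync_config("/shared/config", ["git", "pull"], interval=30.0, iterations=1)
```

<p>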
This way, even a simple git pull loop would be sufficient to fetch any new changes. Depending on the case, a more sophisticated approach could be used, such as web hooks or directly providing an API.</p><h4>Modernising asynchronous jobs</h4><p>Imagine that we have a legacy application that runs asynchronous jobs by directly calling a script (exports, mass actions, etc.). After a while, we want to have more control over the whole process as there are many cases of resource exhaustion. To separate the resources, we will utilise the <a href="https://medium.com/@aggelosbellos/building-resilient-systems-exploring-the-bulkhead-and-circuit-breaker-patterns-c11705af3c44">Bulkhead pattern</a> and deploy the scripts and their dependencies in a different environment than the main service. Sounds good in theory, but since the scripts run in a different service now, how can we run them?<br>Following modern standards, we choose the Publisher-Subscriber pattern for the communication part but still, our legacy scripts cannot be refactored. For this reason, we will deploy the whole new approach as an extension to our scripts. This means that the Subscriber service will exist in the sidecar container, which will share the same file system as the container of our scripts so it can call them. We have successfully transitioned a part of our application to microservices, even though the container for our scripts remains a monolithic application in itself.</p><h3>Summary</h3><p>The Sidecar pattern enhances, rather than complicates, the primary service through isolated yet connected components. We saw how it allows for a neat division of labor, entrusting supportive functionalities like logging and monitoring to specialized sidecar components. While all of this sounds good, as we discussed, it is not a pattern that fits every case. 
Applied to the right problem, though, it can even improve the progress of your teams.</p><p>In summary, our exploration of the Sidecar Pattern reveals it as a harmonious blend of functionality, security, and efficiency, while respecting the principles of microservices architecture.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8e4adaadc49c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Resilient Systems: Exploring the Bulkhead and Circuit Breaker Patterns]]></title>
            <link>https://medium.com/@aggelosbellos/building-resilient-systems-exploring-the-bulkhead-and-circuit-breaker-patterns-c11705af3c44?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/c11705af3c44</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[microservices]]></category>
            <category><![CDATA[tech]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Mon, 04 Dec 2023 23:19:31 GMT</pubDate>
            <atom:updated>2023-12-05T08:59:40.343Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="The Bulkhead and Circuit Breaker patterns" src="https://cdn-images-1.medium.com/max/1024/1*QsrGOpgPhq68SYK1DaVepw.png" /></figure><p>In the era of microservices, it’s common for services to either depend on others or act as dependencies. These situations call for a method to robustly safeguard our application’s components when a dependency acts up or doesn’t respond at all.</p><h3>The Bulkhead Pattern</h3><p>Coming from ship design, where bulkheads divide a ship into sections to stop flooding, this pattern does something similar for microservices, helping to keep them safe. It mainly focuses on the isolation of resources, thus preventing a whole systemic failure. The isolation is achieved through resource pools consisting of anything that can be exhausted (CPU, GPU, memory, etc.).</p><h3>Real world example: Read vs Update</h3><p>Moving beyond theory, let’s dive into the real world. Suppose we have 2 endpoints:</p><ul><li>GET /products/{id} : Returns the data of a single product. It is a lightweight action whose response can easily be served directly from the cache.</li><li>PATCH /products/{id} : Updates the data of a single product. This ‘heavy’ action may involve updating data across multiple storage systems and invalidating cache entries.</li></ul><p>We can easily see that the PATCH action is a more resource-intensive request that can easily hurt the performance of the application. Consider the scenario where this endpoint receives multiple simultaneous requests. A lot of write requests will hit the database and start clogging the available connections. Of course, the issue is not isolated; it also affects the users who just want to view the product’s data.</p><p>A typical solution is to set a maximum number of threads that can handle this endpoint. 
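</p><p>Sketched in Python (the limit of 4 threads and the handler body are illustrative), the cap is just a semaphore guarding the heavy endpoint:</p>

```python
import threading

# At most 4 threads may run the heavy PATCH handler at once (illustrative limit)
PATCH_SLOTS = threading.BoundedSemaphore(4)

def patch_product(product_id: int, payload: dict) -> dict:
    # Non-blocking acquire: shed load instead of queueing when the bulkhead is full
    if not PATCH_SLOTS.acquire(blocking=False):
        return {"status": 503, "error": "bulkhead full, try again later"}
    try:
        # ... write to the database, invalidate cache entries ...
        return {"status": 200, "id": product_id}
    finally:
        PATCH_SLOTS.release()
```

<p>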
This will work as a rate-limiter, capping the resources the endpoint can consume.</p><h3>Real world example: Service as a dependency</h3><p>Going straight to the point, we once again have 2 endpoints:</p><ul><li>GET /movies : Returns a list of movies. Its response may or may not be served from the cache.</li><li>GET /movies/{id} : Returns the data of a specific movie. To construct the necessary data it has 2 dependencies:<br>1. RatingsService: It returns the ratings for the movie.<br>2. CommentsService: It returns the comments for the movie.</li></ul><p>The first endpoint is straightforward, with minimal dependencies. Caching its results could significantly boost performance.</p><p>On the other hand, /movies/{id} is dependent on the performance of its dependencies. While we might have isolated each service’s resources, degraded performance in any dependency will affect the “parent” service and probably the other dependencies as well. To protect the resources of the “parent” service, a rate-limiting approach might be insufficient, because the idling processes might add more stress to other services or exhaust the parent’s own resources. Through the use of time-outs and fallbacks we can still return a partially correct response and prevent the failure of the whole service. For example, even if the RatingsService is down or has degraded performance, we can still return the movie’s main data and its comments.</p><p>We increased the isolation even more, but what if a service starts acting up? How do we stop ourselves from making things worse?</p><h3>The Circuit Breaker pattern</h3><p>In a distributed environment each service is autonomous. By definition, this makes each service unreliable. A server can be on fire, or simple network errors might occur. 
Because of this uncertainty, there is a need to know when a service has stopped accepting requests.</p><p>It sounds good, but why do we need an external service to know when another service is unresponsive? We can just ping it directly, right? <br>The answer of every good engineer: it depends. For simple applications this might be sufficient and you might never have a problem. For bigger applications, things are a little different. When a service becomes unavailable, its requests will start to pile up and may eventually be too many for the service to restart itself and recover. An endless cycle of pinging and restarting might ensue.</p><p>Here the Circuit Breaker pattern comes to the rescue. We basically build on top of our components another service that is responsible for monitoring their health and proxying the traffic to them. In case the performance of a component gets degraded, the Circuit Breaker stops proxying the traffic to that service. Using a controlled pinging system (where only one client checks health status), the Circuit Breaker can resume directing traffic to the component. More complex criteria can be set for when the switch is activated, but the main idea stays the same.</p><h3>Summary</h3><p>In our adventure through resilient systems we have discussed two pivotal patterns: the Bulkhead and the Circuit Breaker. The Bulkhead pattern, drawing inspiration from naval architecture, emphasizes the importance of resource isolation. It ensures the issues of one segment do not escalate into a systemic failure.</p><p>On the other hand, the Circuit Breaker addresses the challenges of inter-service dependencies in a distributed environment. It keeps an eye on services’ health and carefully manages the flow of requests to stop the system from getting overwhelmed. 
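</p><p>Stripped to its essence, the switch can be sketched in a few lines of Python (the failure threshold is illustrative, and a real breaker would also add the timed health-checking described above to close the circuit again):</p>

```python
class CircuitBreaker:
    """Open the circuit after `max_failures` consecutive errors; then fail fast."""

    def __init__(self, max_failures: int = 3):
        self.max_failures = max_failures
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.max_failures:
            # Circuit is open: don't let requests pile up on a sick service
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

<p>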
This pattern is really important for dealing with unexpected problems like network issues or server failures.</p><p>In conclusion, both patterns play a critical role in building robust and resilient microservices architectures. By isolating services and smartly managing traffic, these patterns collectively enhance system stability and reliability. Their application can significantly uplift the resilience and efficiency of distributed systems, ensuring smoother and more reliable operations in dynamic and challenging environments.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c11705af3c44" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Domain Events and Domain Services]]></title>
            <link>https://medium.com/@aggelosbellos/domain-events-and-domain-services-8f879c632b32?source=rss-de20e3589897------2</link>
            <guid isPermaLink="false">https://medium.com/p/8f879c632b32</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[domain-driven-design]]></category>
            <category><![CDATA[software-architecture]]></category>
            <dc:creator><![CDATA[Aggelos Bellos]]></dc:creator>
            <pubDate>Sun, 19 Nov 2023 01:03:34 GMT</pubDate>
            <atom:updated>2023-11-19T01:03:34.333Z</atom:updated>
            <content:encoded><![CDATA[<p>As you begin developing your application, you may encounter elements that are inherently vertical and don’t fit into a specific domain context. Additionally, there might be times when you need to execute code from unrelated contexts in response to certain events. This article explores how Domain Events and Domain Services address these challenges.</p><blockquote>Most probably, you will face both problems.</blockquote><h3>Domain Events</h3><h4>What Are They?</h4><p>A domain event is essentially a message that signifies an action within our business context. <br><em>Examples: User Registered, User Bought Course, Course Assigned to User.</em></p><h4><strong>How to use them:</strong></h4><p>Domain Events are mostly used in combination with Event-Driven Architecture. In this architecture, we emit events that subscribers respond to, ensuring a separation of concerns as each subscriber operates independently.<br>In the context of Domain-Driven Design (DDD), we can link different aggregates to a specific business event. For example, once an aggregate completes a business action, it immediately emits a business event to signal this action. Subscribers, which could be other aggregates or processes, then execute their own business logic and update their states accordingly. It’s important to note that publishers and subscribers operate independently, without concern for who is listening to the event.</p><h4>What is the Difference Between a Domain Event and a Regular Event in Event-Driven Architecture?</h4><p>While both types of events might utilize the same queuing and publishing mechanisms, a key distinction exists. Domain Events are, at their core, business events. Their naming should reflect this, adhering to our business language to ensure comprehensibility among all project stakeholders.<br>Another notable difference lies in the use of tense in naming. 
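</p><p>As an illustrative sketch (the event bus and names below are hypothetical, not the article’s code), the publish/subscribe flow described above might look like:</p>

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class UserRegistered:
    """A Domain Event: named in the past tense, in business language."""
    user_id: int
    email: str

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event):
        # The publisher does not know (or care) who is listening.
        for handler in self._subscribers[type(event)]:
            handler(event)

bus = EventBus()
log = []
bus.subscribe(UserRegistered, lambda e: log.append(f"welcome email to {e.email}"))
bus.subscribe(UserRegistered, lambda e: log.append(f"user {e.user_id} added to CRM"))

bus.publish(UserRegistered(user_id=1, email="jane@example.com"))
```

<p>Each subscriber reacts independently, and the publisher never knows who is listening.</p><p>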
Domain Events should always be in the past tense, as they describe actions that have already occurred. In contrast, events in Event-Driven Architecture can be direct commands or updates for state changes. For instance, a ‘User Update Email’ event might prompt an aggregate to change its state.<br>To summarize, Event-Driven Architecture allows events to function as commands or notifications about actions, whereas Domain Events specifically indicate that a business action has occurred.</p><h3>Domain Services</h3><p>Most of the time, certain logic is shared across different domain contexts, aggregates, or even value objects and typically does not belong exclusively to any of them. Domain Services are the solution to this challenge.<br>A Domain Service is nothing more than a stateless object that handles the business logic by working as a coordinator between different parts of our application.<br>While a Domain Service can handle multiple aggregates, we should keep their atomic operation in mind. We shouldn’t use them as a way to bypass the transactional life of the aggregates. Instead, Domain Services should be seen as a means to execute calculations across different aggregates and facilitate data sharing.</p><h3>What to use and when?</h3><p>Deciding whether to use Domain Events or Domain Services hinges on the specific requirements of your application’s functionality. In practice, most applications benefit from the integration of both.</p><ul><li><strong>Use Domain Events</strong> when you need to signal that something significant has occurred in your business domain. They are ideal for scenarios where various parts of your application need to react to specific events, but these reactions are independent of each other. 
For example, after a user registers, you might have multiple systems that need to respond to this event, like sending a welcome email or updating a user database.</li><li><strong>Use Domain Services</strong> when there’s a need for complex business logic that spans multiple domain entities or aggregates. They are especially useful for orchestrating processes that involve multiple steps or interactions between different parts of your system. For instance, a Domain Service might coordinate a series of steps to complete a financial transaction, involving validation, updating multiple records, and notifying the relevant parties.</li></ul><p>In essence, choose Domain Events for decentralized, reactive processes and Domain Services for centralized, complex business logic coordination. Often, these two concepts will coexist and complement each other within a well-architected application.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8f879c632b32" width="1" height="1" alt="">]]></content:encoded>
        </item>
    </channel>
</rss>