<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Elijah Potter's Blog</title>
        <link>https://elijahpotter.dev</link>
        <description>The writings of Elijah Potter</description>
        <lastBuildDate>Thu, 14 May 2026 07:49:57 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <ttl>60</ttl>
        <copyright>All rights reserved 2025, Elijah Potter</copyright>
        <item>
            <title><![CDATA[A Quick Update on Harper Desktop]]></title>
            <link>https://elijahpotter.dev/articles/a_quick_update_on_harper_desktop</link>
            <guid>https://elijahpotter.dev/articles/a_quick_update_on_harper_desktop</guid>
            <pubDate>Fri, 08 May 2026 16:08:14 GMT</pubDate>
            <description><![CDATA[<h1>A Quick Update on Harper Desktop</h1>
<p>This week of Radical Speed Month has been dedicated to putting the disparate components of the Harper Desktop App together.
In other words, we have been in what <a href="https://www.youtube.com/@stuffmadehere">Shane Wighton</a> would call "integration hell".</p>
<p>Due to the constraints put in place by the operating system and our decision to use Tauri, Harper Desktop needed to be split into three main pieces: the highlighter process, the main Rust Tauri process, and the Svelte frontend.
We use the main Rust Tauri process as the source-of-truth for all user configuration and state, which meant we needed to orchestrate a nontrivial amount of communication between it and the other two components.</p>
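<p>As a rough, hypothetical sketch (not the actual Harper code), the source-of-truth pattern boils down to a single store that is the only writer of state, while the other components merely send intents and receive snapshots:</p>

```javascript
// Hypothetical sketch: the main process owns all configuration;
// the highlighter and frontend only dispatch intents and observe snapshots.
const listeners = new Set();

const store = {
  state: { dictionary: [], enabledRules: {} },
  // Components request changes; the store is the single writer.
  dispatch(intent) {
    if (intent.type === "addWord") {
      this.state.dictionary.push(intent.word);
    } else if (intent.type === "setRule") {
      this.state.enabledRules[intent.rule] = intent.enabled;
    }
    // Broadcast the updated snapshot to every subscribed component.
    for (const notify of listeners) notify(structuredClone(this.state));
  },
  subscribe(notify) {
    listeners.add(notify);
  },
};
```

<p>In the real app these calls would cross process boundaries over Tauri's IPC rather than plain function calls, but the one-writer shape is the same.</p>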
<p>At the same time, I've been integrating everything (including <a href="https://github.com/Automattic/harper/pull/3278">Jason's wonderful revamp of the Harper editor</a>) into the main Harper repository.
It's been slow going, and there is still some work to be done on CI.
Even so, it's quite satisfying to see everything click into place.</p>
<p>Next, we plan to focus our attention on the more fit-and-finish pieces of the project.
That means wrapping up the implementation of certain parts of the configuration, building out updating logic, and creating an actual download page on our website.</p>]]></description>
            <content:encoded><![CDATA[<h1>A Quick Update on Harper Desktop</h1>
<p>This week of Radical Speed Month has been dedicated to putting the disparate components of the Harper Desktop App together.
In other words, we have been in what <a href="https://www.youtube.com/@stuffmadehere">Shane Wighton</a> would call "integration hell".</p>
<p>Due to the constraints put in place by the operating system and our decision to use Tauri, Harper Desktop needed to be split into three main pieces: the highlighter process, the main Rust Tauri process, and the Svelte frontend.
We use the main Rust Tauri process as the source-of-truth for all user configuration and state, which meant we needed to orchestrate a nontrivial amount of communication between it and the other two components.</p>
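<p>As a rough, hypothetical sketch (not the actual Harper code), the source-of-truth pattern boils down to a single store that is the only writer of state, while the other components merely send intents and receive snapshots:</p>

```javascript
// Hypothetical sketch: the main process owns all configuration;
// the highlighter and frontend only dispatch intents and observe snapshots.
const listeners = new Set();

const store = {
  state: { dictionary: [], enabledRules: {} },
  // Components request changes; the store is the single writer.
  dispatch(intent) {
    if (intent.type === "addWord") {
      this.state.dictionary.push(intent.word);
    } else if (intent.type === "setRule") {
      this.state.enabledRules[intent.rule] = intent.enabled;
    }
    // Broadcast the updated snapshot to every subscribed component.
    for (const notify of listeners) notify(structuredClone(this.state));
  },
  subscribe(notify) {
    listeners.add(notify);
  },
};
```

<p>In the real app these calls would cross process boundaries over Tauri's IPC rather than plain function calls, but the one-writer shape is the same.</p>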
<p>At the same time, I've been integrating everything (including <a href="https://github.com/Automattic/harper/pull/3278">Jason's wonderful revamp of the Harper editor</a>) into the main Harper repository.
It's been slow going, and there is still some work to be done on CI.
Even so, it's quite satisfying to see everything click into place.</p>
<p>Next, we plan to focus our attention on the more fit-and-finish pieces of the project.
That means wrapping up the implementation of certain parts of the configuration, building out updating logic, and creating an actual download page on our website.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Product Product Product]]></title>
            <link>https://elijahpotter.dev/articles/product_product_product</link>
            <guid>https://elijahpotter.dev/articles/product_product_product</guid>
            <pubDate>Fri, 01 May 2026 22:00:42 GMT</pubDate>
            <description><![CDATA[<h1>Product, Product, Product</h1>
<p>Working this past week with Jason Adams has given me a brand-new appreciation for the category of ideas that we engineers call "product."</p>
<p>For a long time, I've discussed the difficult-to-define word "Quality" and its importance to the success of software projects.
To me, capital "Q" Quality meant that a piece of software was fast, extensible, and reliable.
It was a checklist and a series of metrics that could be tracked and optimized through iteration.
I had, and arguably still have, an engineer's definition of "Quality".</p>
<p>But the version of Quality that can be defined with numbers is not complete.
A complete definition of Quality includes something that I currently don't know how to define.
People like Jason Adams and Eduardo Villuendas can.</p>
<p>At the end of Radical Speed Month, my goal is to be able to describe it, at least shallowly.
Right now, I feel that I understand some of the base components, but not well enough to bring them together into a cohesive whole.</p>
<h2>Harper Desktop</h2>
<p>Harper Desktop is coming along amazingly!
My goal for this week was to get the full pipeline working.
That means:</p>
<ol>
<li>Reading text from arbitrary text areas on macOS.</li>
<li>Locating errors within it using <code>harper-core</code>.</li>
<li>Rendering highlights over those errors.</li>
<li>Showing a suggestion popup to users so they can review potential corrections.</li>
<li>Applying the corrections.</li>
</ol>
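<p>To make those stages concrete, here is a toy, self-contained walk-through with a stand-in linter (the real pipeline uses <code>harper-core</code> and the platform accessibility APIs, not this):</p>

```javascript
// Toy sketch of the five stages, with a fake linter in place of harper-core.
function fakeLint(text) {
  // Pretend "teh" is the only error Harper knows about.
  const lints = [];
  let idx = text.indexOf("teh");
  while (idx !== -1) {
    lints.push({ start: idx, end: idx + 3, suggestion: "the" });
    idx = text.indexOf("teh", idx + 1);
  }
  return lints;
}

function applySuggestion(text, lint) {
  // Stage 5: splice the accepted suggestion back into the text.
  return text.slice(0, lint.start) + lint.suggestion + text.slice(lint.end);
}

const text = "teh quick brown fox"; // Stage 1: read the text area.
const lints = fakeLint(text);       // Stage 2: locate errors.
// Stages 3 and 4 (highlights and the suggestion popup) are purely visual;
// here we simply accept the first suggestion.
const fixed = applySuggestion(text, lints[0]);
console.log(fixed); // "the quick brown fox"
```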
<p>This is a pipeline that we've successfully replicated in several other apps, but this is the first time we've tried to do it system-wide.
I'm pleased to say: It's all working!</p>
<video width="100%" height="400" controls>
    <source src="/videos/macos_harper_desktop_demo.mp4" />
</video>
<p>The only thing missing from a "complete" Harper Desktop experience is configuration.
Users need to be able to manage their dictionary, update their rule preferences, and control which apps Harper may act on.</p>
<p>Fortunately, Jason Adams has been hard at work refining what Harper's settings menus look like.
Actually, to say that is <strong>all</strong> he has been doing would be disingenuous.
In reality, he has been revamping the entirety of Harper's design and look.</p>
<img src="/images/jasons_settings_preview.png" alt="A Preview of Jason&#x27;s Settings Page">
<p>The hope is that much, if not all, of Jason's work here will be ported back to the Chrome Extension.</p>
<h2>The Magic of Meetups</h2>
<p>I have never felt the magic of meetups more than during the short single-day meetups that Jason and I have been doing over the past few days.
I feel more productive, motivated, and optimistic than ever, and I think that is in no small part thanks to meetups. I get the hype now.</p>
<img src="/images/jason_elijah_garden_of_the_gods.png" alt="A Quick Jaunt to Garden of the Gods">]]></description>
            <content:encoded><![CDATA[<h1>Product, Product, Product</h1>
<p>Working this past week with Jason Adams has given me a brand-new appreciation for the category of ideas that we engineers call "product."</p>
<p>For a long time, I've discussed the difficult-to-define word "Quality" and its importance to the success of software projects.
To me, capital "Q" Quality meant that a piece of software was fast, extensible, and reliable.
It was a checklist and a series of metrics that could be tracked and optimized through iteration.
I had, and arguably still have, an engineer's definition of "Quality".</p>
<p>But the version of Quality that can be defined with numbers is not complete.
A complete definition of Quality includes something that I currently don't know how to define.
People like Jason Adams and Eduardo Villuendas can.</p>
<p>At the end of Radical Speed Month, my goal is to be able to describe it, at least shallowly.
Right now, I feel that I understand some of the base components, but not well enough to bring them together into a cohesive whole.</p>
<h2>Harper Desktop</h2>
<p>Harper Desktop is coming along amazingly!
My goal for this week was to get the full pipeline working.
That means:</p>
<ol>
<li>Reading text from arbitrary text areas on macOS.</li>
<li>Locating errors within it using <code>harper-core</code>.</li>
<li>Rendering highlights over those errors.</li>
<li>Showing a suggestion popup to users so they can review potential corrections.</li>
<li>Applying the corrections.</li>
</ol>
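<p>To make those stages concrete, here is a toy, self-contained walk-through with a stand-in linter (the real pipeline uses <code>harper-core</code> and the platform accessibility APIs, not this):</p>

```javascript
// Toy sketch of the five stages, with a fake linter in place of harper-core.
function fakeLint(text) {
  // Pretend "teh" is the only error Harper knows about.
  const lints = [];
  let idx = text.indexOf("teh");
  while (idx !== -1) {
    lints.push({ start: idx, end: idx + 3, suggestion: "the" });
    idx = text.indexOf("teh", idx + 1);
  }
  return lints;
}

function applySuggestion(text, lint) {
  // Stage 5: splice the accepted suggestion back into the text.
  return text.slice(0, lint.start) + lint.suggestion + text.slice(lint.end);
}

const text = "teh quick brown fox"; // Stage 1: read the text area.
const lints = fakeLint(text);       // Stage 2: locate errors.
// Stages 3 and 4 (highlights and the suggestion popup) are purely visual;
// here we simply accept the first suggestion.
const fixed = applySuggestion(text, lints[0]);
console.log(fixed); // "the quick brown fox"
```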
<p>This is a pipeline that we've successfully replicated in several other apps, but this is the first time we've tried to do it system-wide.
I'm pleased to say: It's all working!</p>
<video width="100%" height="400" controls>
    <source src="/videos/macos_harper_desktop_demo.mp4" />
</video>
<p>The only thing missing from a "complete" Harper Desktop experience is configuration.
Users need to be able to manage their dictionary, update their rule preferences, and control which apps Harper may act on.</p>
<p>Fortunately, Jason Adams has been hard at work refining what Harper's settings menus look like.
Actually, to say that is <strong>all</strong> he has been doing would be disingenuous.
In reality, he has been revamping the entirety of Harper's design and look.</p>
<img src="/images/jasons_settings_preview.png" alt="A Preview of Jason&#x27;s Settings Page">
<p>The hope is that much, if not all, of Jason's work here will be ported back to the Chrome Extension.</p>
<h2>The Magic of Meetups</h2>
<p>I have never felt the magic of meetups more than during the short single-day meetups that Jason and I have been doing over the past few days.
I feel more productive, motivated, and optimistic than ever, and I think that is in no small part thanks to meetups. I get the hype now.</p>
<img src="/images/jason_elijah_garden_of_the_gods.png" alt="A Quick Jaunt to Garden of the Gods">]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/jason_elijah_garden_of_the_gods.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Building a Harper Desktop App]]></title>
            <link>https://elijahpotter.dev/articles/building_a_harper_desktop_app</link>
            <guid>https://elijahpotter.dev/articles/building_a_harper_desktop_app</guid>
            <pubDate>Wed, 15 Apr 2026 20:21:22 GMT</pubDate>
            <description><![CDATA[<h1>Building a Harper Desktop App</h1>
<p>Harper is a grammar checker that makes sure that your writing is exactly that: <strong>yours</strong>.
Today, we will be starting some new and hopefully awesome work towards that mission.</p>
<p>Harper has long been available in Chrome, Firefox, Obsidian, and a gazillion other places. Knowing which extensions to install, and how, has been a source of confusion for too many of you. Harper is supposed to get out of your way so you can write your best work, and the configuration has been getting in the way of that. You've been asking for a simpler path, and we're finally delivering.</p>
<p>We're going to be working in public as much as we can.
As I said last week: <a href="./we_are_in_this_together">We're in this together</a>.
That means you'll be seeing more posts on this blog and in other places.
I hope to share fun bits about our implementation and announce requests for feedback.</p>
<p>You might be wondering, <em>why is he using the pronoun "we"?</em>
It is because the wonderful <a href="https://jason.blog/">Jason Adams</a> has volunteered a significant amount of time over the next month towards this effort.
He's an incredible engineer and I can't wait to start solving hard problems with him.</p>
<p>If you'd like to have a say in what the app looks like or does, feel free to leave a comment here or on <a href="https://github.com/Automattic/harper/discussions/1907">this discussion.</a></p>
<p>I imagine we'll be using Tauri and Svelte for configuration and setup screens, as well as for an in-app editor.
Beyond that, I don't have much more to say now.
Stay tuned!</p>]]></description>
            <content:encoded><![CDATA[<h1>Building a Harper Desktop App</h1>
<p>Harper is a grammar checker that makes sure that your writing is exactly that: <strong>yours</strong>.
Today, we will be starting some new and hopefully awesome work towards that mission.</p>
<p>Harper has long been available in Chrome, Firefox, Obsidian, and a gazillion other places. Knowing which extensions to install, and how, has been a source of confusion for too many of you. Harper is supposed to get out of your way so you can write your best work, and the configuration has been getting in the way of that. You've been asking for a simpler path, and we're finally delivering.</p>
<p>We're going to be working in public as much as we can.
As I said last week: <a href="./we_are_in_this_together">We're in this together</a>.
That means you'll be seeing more posts on this blog and in other places.
I hope to share fun bits about our implementation and announce requests for feedback.</p>
<p>You might be wondering, <em>why is he using the pronoun "we"?</em>
It is because the wonderful <a href="https://jason.blog/">Jason Adams</a> has volunteered a significant amount of time over the next month towards this effort.
He's an incredible engineer and I can't wait to start solving hard problems with him.</p>
<p>If you'd like to have a say in what the app looks like or does, feel free to leave a comment here or on <a href="https://github.com/Automattic/harper/discussions/1907">this discussion.</a></p>
<p>I imagine we'll be using Tauri and Svelte for configuration and setup screens, as well as for an in-app editor.
Beyond that, I don't have much more to say now.
Stay tuned!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[We Are in This Together]]></title>
            <link>https://elijahpotter.dev/articles/we_are_in_this_together</link>
            <guid>https://elijahpotter.dev/articles/we_are_in_this_together</guid>
            <pubDate>Mon, 13 Apr 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>We Are in This Together</h1>
<p>Today, Jason Adams joined me and some of the other Automatticians based in the Denver area for a burrito.
It's a tradition we call Burrito Friday.
Automatticians in a local area get together for laughs, chats, and burritos a few Fridays a year.
It's a great way to stay connected in a <a href="https://distributed.blog/podcast/">distributed</a> organization.</p>
<p>Today marked the second such occasion wherein we were joined by Jason, and just like always, he brought with him an unusual clarity of thought.
We discussed a variety of subjects, but there's one that rang especially true to me: the importance of a <strong>mission</strong>.</p>
<h2>The Mission</h2>
<p>Harper's mission is to encourage and assist people with communicating their complex ideas and emotions.
We want to lift the cognitive load that is grammar off the shoulders of writers so they can feel free to express themselves.
At the same time, Harper shouldn't <strong>add</strong> any cognitive load of its own.
That means:</p>
<ul>
<li><strong>Harper works fast</strong>. You shouldn't have to think about the grammar at all. That means you shouldn't have to think about the grammar checker either.</li>
<li><strong>Harper works privately.</strong> You can trust us not to do anything nefarious with your writing. We won't sell it, save it, or train on it. Your writing is yours, no matter what.</li>
</ul>
<h2>The Mission Is an Outcome</h2>
<p><a href="./outcome_over_process">In my post last week</a>, I talked about how focus on the destination is far more productive than focus on the process of getting there.
Harper's mission is the ultimate outcome.
Efforts towards achieving that grand mission will ultimately be more successful than efforts towards perfecting any individual step along the way.</p>
<p>Jason expressed it quite concisely.
So concisely, in fact, that I won't try to replicate it here.</p>
<p>As it turns out, many people (including ex-Grammarly customers) believe our mission to be important.
Part of my job as Harper's maintainer is to find these people and demonstrate that we have similar goals.</p>
<p>This is something I admittedly haven't done as well as I could.
I'd like to get into real conversations with real people who care about the destructive business practices employed by Harper's competitors.</p>
<p>To that end, I am extending some feelers.
I'll be reaching out to some podcasters and prominent members of the open source community to discuss how we can make writing easier without sacrificing its humanity.
Similarly, if this is something <strong>you</strong> want to talk with me about, please reach out!</p>]]></description>
            <content:encoded><![CDATA[<h1>We Are in This Together</h1>
<p>Today, Jason Adams joined me and some of the other Automatticians based in the Denver area for a burrito.
It's a tradition we call Burrito Friday.
Automatticians in a local area get together for laughs, chats, and burritos a few Fridays a year.
It's a great way to stay connected in a <a href="https://distributed.blog/podcast/">distributed</a> organization.</p>
<p>Today marked the second such occasion wherein we were joined by Jason, and just like always, he brought with him an unusual clarity of thought.
We discussed a variety of subjects, but there's one that rang especially true to me: the importance of a <strong>mission</strong>.</p>
<h2>The Mission</h2>
<p>Harper's mission is to encourage and assist people with communicating their complex ideas and emotions.
We want to lift the cognitive load that is grammar off the shoulders of writers so they can feel free to express themselves.
At the same time, Harper shouldn't <strong>add</strong> any cognitive load of its own.
That means:</p>
<ul>
<li><strong>Harper works fast</strong>. You shouldn't have to think about the grammar at all. That means you shouldn't have to think about the grammar checker either.</li>
<li><strong>Harper works privately.</strong> You can trust us not to do anything nefarious with your writing. We won't sell it, save it, or train on it. Your writing is yours, no matter what.</li>
</ul>
<h2>The Mission Is an Outcome</h2>
<p><a href="./outcome_over_process">In my post last week</a>, I talked about how focus on the destination is far more productive than focus on the process of getting there.
Harper's mission is the ultimate outcome.
Efforts towards achieving that grand mission will ultimately be more successful than efforts towards perfecting any individual step along the way.</p>
<p>Jason expressed it quite concisely.
So concisely, in fact, that I won't try to replicate it here.</p>
<p>As it turns out, many people (including ex-Grammarly customers) believe our mission to be important.
Part of my job as Harper's maintainer is to find these people and demonstrate that we have similar goals.</p>
<p>This is something I admittedly haven't done as well as I could.
I'd like to get into real conversations with real people who care about the destructive business practices employed by Harper's competitors.</p>
<p>To that end, I am extending some feelers.
I'll be reaching out to some podcasters and prominent members of the open source community to discuss how we can make writing easier without sacrificing its humanity.
Similarly, if this is something <strong>you</strong> want to talk with me about, please reach out!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper 2.0]]></title>
            <link>https://elijahpotter.dev/articles/harper_2.0</link>
            <guid>https://elijahpotter.dev/articles/harper_2.0</guid>
            <pubDate>Thu, 09 Apr 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper 2.0</h1>
<p>Hey all! We're finally releasing Harper 2.0. Why?</p>
<p>It isn't because we have any new groundbreaking features in this release, because we don't let major versioning stop us from pushing those out as soon as they're ready. It's because we have some breaking changes.</p>
<p>The biggest one only applies to you if you consume <code>harper.js</code>. From now on, instead of importing your binary from the main Harper module, you'll import it from one of four specialized modules.</p>
<p>Previously, you'd import the Harper WebAssembly binary with:</p>
<pre><code class="hljs language-javascript"><span class="hljs-keyword">import</span> { <span class="hljs-title class_">LocalLinter</span>, binary } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js"</span>;
</code></pre>
<p>As of Harper 2.0, you'll import it like this:</p>
<pre><code class="hljs language-javascript"><span class="hljs-keyword">import</span> { <span class="hljs-title class_">LocalLinter</span> } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js"</span>;
<span class="hljs-keyword">import</span> { binary } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js/binary"</span>; 
</code></pre>
<p>It's that simple! Thanks to this change, you'll find that your applications are more conducive to tree-shaking and therefore might even be a bit smaller!</p>
<p>In addition to that large breaking change, we also have a bunch of smaller improvements rolling out. I won't go through them one by one, but you're free to read through any of the linked pull requests in the <a href="https://github.com/automattic/harper/releases/tag/v2.0.0">main release notes</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper 2.0</h1>
<p>Hey all! We're finally releasing Harper 2.0. Why?</p>
<p>It isn't because we have any new groundbreaking features in this release, because we don't let major versioning stop us from pushing those out as soon as they're ready. It's because we have some breaking changes.</p>
<p>The biggest one only applies to you if you consume <code>harper.js</code>. From now on, instead of importing your binary from the main Harper module, you'll import it from one of four specialized modules.</p>
<p>Previously, you'd import the Harper WebAssembly binary with:</p>
<pre><code class="hljs language-javascript"><span class="hljs-keyword">import</span> { <span class="hljs-title class_">LocalLinter</span>, binary } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js"</span>;
</code></pre>
<p>As of Harper 2.0, you'll import it like this:</p>
<pre><code class="hljs language-javascript"><span class="hljs-keyword">import</span> { <span class="hljs-title class_">LocalLinter</span> } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js"</span>;
<span class="hljs-keyword">import</span> { binary } <span class="hljs-keyword">from</span> <span class="hljs-string">"harper.js/binary"</span>; 
</code></pre>
<p>It's that simple! Thanks to this change, you'll find that your applications are more conducive to tree-shaking and therefore might even be a bit smaller!</p>
<p>In addition to that large breaking change, we also have a bunch of smaller improvements rolling out. I won't go through them one by one, but you're free to read through any of the linked pull requests in the <a href="https://github.com/automattic/harper/releases/tag/v2.0.0">main release notes</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Outcome over Process]]></title>
            <link>https://elijahpotter.dev/articles/outcome_over_process</link>
            <guid>https://elijahpotter.dev/articles/outcome_over_process</guid>
            <pubDate>Fri, 03 Apr 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Outcome over Process</h1>
<p>For the past few weeks, I've been thinking about how I can be more productive.
In particular, I've been thinking about how <a href="https://elijahpotter.dev/articles/a-case-for-procrastination">I can eliminate waste</a> to give myself time to work on things that matter.</p>
<p>These efforts have been partially successful.
I've found ways to speed up code review with an agent and improve the turnaround time on communication.
I've been spending less time waiting and more time <strong>working.</strong>
But in the end, I don't feel that these performative actions have resulted in any real productivity gains. I realize now that this was because I was focusing on the wrong thing.</p>
<p>I was focusing on the process of getting stuff done, instead of focusing on how to actually get that same stuff done.</p>
<blockquote>
<p>As an aside: I found <a href="https://mitchellh.com/writing/my-ai-adoption-journey">Mitchell Hashimoto's related blog post extremely stimulating.</a>
I recommend it to anyone trying to apply these agents to their real-world work.</p>
</blockquote>
<p>For the past few weeks, between reading <a href="https://practicaltypography.com/">Butterick's Practical Typography</a> and getting Harper 2.0 out the door, I've been reading <em>The Effective Executive</em> by Peter Drucker — at <a href="https://ma.tt/">Matt's</a> recommendation.</p>
<p>It's a fantastic book.
It is well written, gives thorough reasoning, and provides case studies for each of its major points.
I haven't quite finished it, but there are some great ideas from the first bit.
Importantly, it has begun to reorient my thinking towards outcomes over process.</p>
<h2>What Is "The Process"?</h2>
<p>The "process" includes all the minutiae required to deliver on the meaningful purpose of your job.
For an accountant, that might involve tabulating data for the purpose of increasing efficiency in the business.
For me, as the maintainer of <a href="https://writewithharper.com">Harper</a>, my purpose is to help people communicate better using the written word.</p>
<p>My secondary purpose is to prove that the open source methodology is superior to the competition in accomplishing my primary objective.</p>
<p>For me, the process involves:</p>
<ul>
<li>Producing code.</li>
<li>Reviewing code.</li>
<li>Determining in which direction the project needs to move.</li>
</ul>
<p>The "process" will look different for you.
It is composed of the individual steps needed to deliver the final outcome.</p>
<h2>Think About Where You're Going and Nothing Else</h2>
<p><a href="https://elijahpotter.dev/articles/improving-rust-compile-times-by-71-percent">An occasional check-in</a> on what the process itself looks like can be beneficial, but it should not be the focus of my attention.
My attention should be on how to get things <strong>done</strong> and more importantly how to get my work <strong>in the hands of my users</strong> as fast as possible.</p>
<p>I've caught myself thinking about the process more than the outcome too many times.
There are diminishing returns to that kind of practice, and they arrive fast.</p>
<blockquote>
<p>Yes. I realize that this post is an example of the kind of thinking that I am criticizing.
This will be the last one of its kind for some time.</p>
</blockquote>
<p>According to Drucker, something strange happens in the brain when it focuses on outcomes rather than processes.
Instead of thinking about how to make something "faster", it focuses on how to find shortcuts or brand-new approaches.
In other words — when applied to outcomes — the brain becomes more flexible and therefore more effective.</p>
<p>I've seen this myself, but I've never been able to identify it so succinctly.
It's why I strive to do <a href="https://elijahpotter.dev/articles/the-one-hard-thing">at least one hard thing each day</a>.</p>
<p>I'm looking forward to wrapping up Drucker's book and hopefully extracting more of his wisdom.
In the meantime, I'm curious: have there been any instances where you felt you focused too much on the process or tooling and too little on the actual outcome?</p>]]></description>
            <content:encoded><![CDATA[<h1>Outcome over Process</h1>
<p>For the past few weeks, I've been thinking about how I can be more productive.
In particular, I've been thinking about how <a href="https://elijahpotter.dev/articles/a-case-for-procrastination">I can eliminate waste</a> to give myself time to work on things that matter.</p>
<p>These efforts have been partially successful.
I've found ways to speed up code review with an agent and improve the turnaround time on communication.
I've been spending less time waiting and more time <strong>working.</strong>
But in the end, I don't feel that these performative actions have resulted in any real productivity gains. I realize now that this was because I was focusing on the wrong thing.</p>
<p>I was focusing on the process of getting stuff done, instead of focusing on how to actually get that same stuff done.</p>
<blockquote>
<p>As an aside: I found <a href="https://mitchellh.com/writing/my-ai-adoption-journey">Mitchell Hashimoto's related blog post extremely stimulating.</a>
I recommend it to anyone trying to apply these agents to their real-world work.</p>
</blockquote>
<p>For the past few weeks, between reading <a href="https://practicaltypography.com/">Butterick's Practical Typography</a> and getting Harper 2.0 out the door, I've been reading <em>The Effective Executive</em> by Peter Drucker — at <a href="https://ma.tt/">Matt's</a> recommendation.</p>
<p>It's a fantastic book.
It is well written, gives thorough reasoning, and provides case studies for each of its major points.
I haven't quite finished it, but there are some great ideas from the first bit.
Importantly, it has begun to reorient my thinking towards outcomes over process.</p>
<h2>What Is "The Process"?</h2>
<p>The "process" includes all the minutiae required to deliver on the meaningful purpose of your job.
For an accountant, that might involve tabulating data for the purpose of increasing efficiency in the business.
For me, as the maintainer of <a href="https://writewithharper.com">Harper</a>, my purpose is to help people communicate better using the written word.</p>
<p>My secondary purpose is to prove that the open source methodology is superior to the competition in accomplishing my primary objective.</p>
<p>For me, the process involves:</p>
<ul>
<li>Producing code.</li>
<li>Reviewing code.</li>
<li>Determining in which direction the project needs to move.</li>
</ul>
<p>The "process" will look different for you.
It is composed of the individual steps needed to deliver the final outcome.</p>
<h2>Think About Where You're Going and Nothing Else</h2>
<p><a href="https://elijahpotter.dev/articles/improving-rust-compile-times-by-71-percent">An occasional check-in</a> on what the process itself looks like can be beneficial, but it should not be the focus of my attention.
My attention should be on how to get things <strong>done</strong> and more importantly how to get my work <strong>in the hands of my users</strong> as fast as possible.</p>
<p>I've caught myself thinking about the process more than the outcome too many times.
There are diminishing returns to that kind of practice, and they arrive fast.</p>
<blockquote>
<p>Yes. I realize that this post is an example of the kind of thinking that I am criticizing.
This will be the last one of its kind for some time.</p>
</blockquote>
<p>According to Drucker, something strange happens in the brain when it focuses on outcomes rather than processes.
Instead of thinking about how to make something "faster", it focuses on how to find shortcuts or brand-new approaches.
In other words — when applied to outcomes — the brain becomes more flexible and therefore more effective.</p>
<p>I've seen this myself, but I've never been able to identify it so succinctly.
It's why I strive to do <a href="https://elijahpotter.dev/articles/the-one-hard-thing">at least one hard thing each day</a>.</p>
<p>I'm looking forward to wrapping up Drucker's book and hopefully extracting more of his wisdom.
In the meantime, I'm curious: have there been any instances where you felt you focused too much on the process or tooling and too little on the actual outcome?</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[When to Use an Agent]]></title>
            <link>https://elijahpotter.dev/articles/when_to_use_an_agent</link>
            <guid>https://elijahpotter.dev/articles/when_to_use_an_agent</guid>
            <pubDate>Fri, 27 Mar 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>When (Not) to Use an Agent</h1>
<p>There are times when it is prudent to use an AI agent (like Codex) aggressively. There are other times when agents should be avoided like the plague. Deciding into which bucket a particular endeavor falls is difficult.
Anyone who says you should <em>always</em> or <em>never</em> use an LLM is oversimplifying the problem.</p>
<p>Naturally, any prescriptivist claims about LLMs made online should be regarded with a fair amount of skepticism.
That includes what I have to say.</p>
<h2>There Is a Difference</h2>
<p>AI agents can be enormously helpful.
The key word is "can".
It is not always a guarantee.</p>
<p>Indeed, when surveyed, most developers estimate their productivity multiple with these tools to be greater than one.
In other words, they believe that Claude Code or Codex makes them meaningfully more productive.
In reality, agents can often slow you down, even if it feels like they are helping.
In fact, <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">a METR study</a> found that they were more frequently counterproductive than productive, even when the developer felt otherwise.</p>
<p>Over the last few months, I have been anecdotally tracking the types of tasks that I've found to be empirically better done with an agent, and the types of tasks that are better done with only the meat between my ears.
I've distilled that experience into a set of principles which, I believe, can be employed to save any software engineer a decent amount of time.
This post includes a few of those principles.</p>
<h2>Your Mileage May Vary</h2>
<p>Before we get to those principles, however, I would like to make my reality clear to those reading.
I work on tightly integrated, highly technical systems that demand a deep level of domain knowledge.
Making spurious modifications to these systems can have outcomes that are difficult to predict without a good mental model of how they work.</p>
<p>That means many bugs can be solved in just a few lines of code, and many features can be implemented by simply connecting two seemingly unrelated systems in a strategic way.</p>
<p>My experience may not match yours, especially if you work on systems more loosely integrated.</p>
<h2>Err on the Side of Agent-Free Development</h2>
<blockquote>
<p>"Back in my day, we would have just called it 'development'." - Me, after writing the above heading.</p>
</blockquote>
<p>If you cannot articulate why an agent would be better or faster at completing a task, you should probably do it by hand.</p>
<p>In the early days of my explorations into ChatGPT, and later into Codex, I found myself reaching for the shiny new tool every chance I could.
It became a habit.
I would sit down at my laptop, decide what I wanted to work on, and the next thing I knew I would be typing a prompt into one LLM or another.</p>
<blockquote>
<p>"I see why it’s easy to fall in love with these things, because the variable positive reinforcement slot machine cowboy hacking is honestly more fun than if it had just gotten it right on the first try." - <a href="https://ma.tt/2026/02/claude-sonos/">Matt Mullenweg</a></p>
</blockquote>
<p>Honestly, it doesn't matter how these habits are built and then reinforced.
What matters is that turning Codex or Claude Code into a habit results in a lot of wasted time.</p>
<p>Sure, sometimes the agent <em>will</em> get it right on the first try.
The problem is that if it fails, you often need to start from scratch in order to end up with a product that's workable.
I have found myself spending hours refining prompts and going back-and-forth with an agent, only to throw it all out and code a better solution myself in less than fifteen minutes.</p>
<p>The critical piece here is that if you roll the dice, the potential for lost time is infinite while the potential for time saved is finite and often quantifiable.</p>
<p>So, if you cannot precisely articulate why you believe an agent will be able to do the job in less time than if you did the job yourself, you should attack the problem without one.
At least at first.</p>
<h2>Use an Agent for Large Refactors</h2>
<p>There will always be times where the dedicated tooling for your programming language of choice isn't enough.</p>
<p>I have encountered situations where I have needed to convert an interface into a class and cases where I needed to change a statically dispatched system into a dynamically dispatched one.
These are the kinds of problems that are great for agents to handle, because the logic is simple to verify and most of the changes are merely syntactic.</p>
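<p>To make the static-versus-dynamic distinction concrete, here is a minimal sketch of that kind of refactor. This is not Harper's actual code; the <code>Linter</code> trait and <code>SpellCheck</code> struct are hypothetical stand-ins.</p>

```rust
// A toy "Linter" interface. In the static version, the compiler
// monomorphizes `run_static` for each concrete linter type. In the
// dynamic version, `run_dynamic` takes a trait object and resolves
// the call at runtime through a vtable.
trait Linter {
    fn lint(&self, text: &str) -> usize;
}

struct SpellCheck;

impl Linter for SpellCheck {
    fn lint(&self, text: &str) -> usize {
        // Pretend every word starting with 'x' is a typo.
        text.split_whitespace().filter(|w| w.starts_with('x')).count()
    }
}

// Before the refactor: static dispatch via a generic parameter.
fn run_static<L: Linter>(linter: &L, text: &str) -> usize {
    linter.lint(text)
}

// After the refactor: dynamic dispatch via a trait object, which
// lets heterogeneous linters live together in one collection.
fn run_dynamic(linter: &dyn Linter, text: &str) -> usize {
    linter.lint(text)
}

fn main() {
    let text = "one xylophone two";
    assert_eq!(run_static(&SpellCheck, text), 1);

    let linters: Vec<Box<dyn Linter>> = vec![Box::new(SpellCheck)];
    for linter in &linters {
        assert_eq!(run_dynamic(linter.as_ref(), text), 1);
    }
}
```

<p>The change is mostly syntactic (swap the generic bound for a trait object, box the values), and the behavior is easy to verify with an existing test suite, which is what makes it a good fit for an agent.</p>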
<p>At the time, the dedicated Rust and C++ tooling at my disposal was not capable of doing such things.
When I threw an agent at the problem, it was more than capable of making the changes and verifying that they did what I wanted.
In fact, I suspect it made the exact edits that I would have made.
At the very least, I was saved some typing.</p>
<p>Put another way, if the bulk of the problem is <a href="https://en.wikipedia.org/wiki/Busy_work">"busywork"</a>, let an agent handle it so you can focus on the more complex issues remaining.</p>
<h2>Core Systems Should Not Be Written by an Agent</h2>
<p>Systems core to a piece of software have two key requirements that, at the time of writing, cannot be fulfilled by any agents on the market.
They need to:</p>
<ol>
<li>Be flexible.</li>
<li>Be aggressively modeled to the domain.</li>
</ol>
<p>Flexibility means that the system can be extended or modified without much effort: it can be imagined as a whole by you and any future contributors, and semantic changes to minor components are unlikely to unexpectedly affect other minor components.
Right now, no agents are capable of doing this. Trust me — I've tested them.</p>
<p>Similarly, core system architecture needs to fit the domain or problem it seeks to solve.
Otherwise, it will take undue effort to implement the smallest bit of functionality.
Like flexibility, this hard-to-describe trait is also absent from the code generated by the agents on the market.
Their solutions are either too general or too direct.
In either case, the code turns out to be far from extensible.</p>
<h2>Extensions Are Perfect for Agents</h2>
<p>While core systems should be architected with human muscle, extensions to that system can be built with agents.</p>
<p>Extensions are usually smaller in scope, restricted in capability by their environment, and are less likely to affect other systems.
That means they are generally easier to review, less likely to introduce security vulnerabilities, and creating them doesn't require an in-depth mental model.</p>
<p>This is why software like <a href="https://telex.automattic.ai/">Telex</a> is so effective.
It is also why those who work on vertically integrated systems are likelier to run into issues when using agents than those who work on horizontally integrated ones.</p>
<h2>What Is Your Experience?</h2>
<p>I'm still learning how and when to integrate agents into my work, as is everyone else.
I'm curious: have you found specific kinds of issues or bugs that are best tackled with an agent?
Are there specific moments where you feel more productive <em>without</em> one?
I am certain there are nuances I've missed here.
What are they?</p>]]></description>
            <content:encoded><![CDATA[<h1>When (Not) to Use an Agent</h1>
<p>There are times when it is prudent to use an AI agent (like Codex) aggressively. There are other times when agents should be avoided like the plague. Deciding into which bucket a particular endeavor falls is difficult.
Anyone who says you should <em>always</em> or <em>never</em> use an LLM is oversimplifying the problem.</p>
<p>Naturally, any prescriptivist claims about LLMs made online should be regarded with a fair amount of skepticism.
That includes what I have to say.</p>
<h2>There Is a Difference</h2>
<p>AI agents can be enormously helpful.
The key word is "can".
It is not always a guarantee.</p>
<p>Indeed, when surveyed, most developers estimate their productivity multiple with these tools to be greater than one.
In other words, they believe that Claude Code or Codex makes them meaningfully more productive.
In reality, agents can often slow you down, even if it feels like they are helping.
In fact, <a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">a METR study</a> found that they were more frequently counterproductive than productive, even when the developer felt otherwise.</p>
<p>Over the last few months, I have been anecdotally tracking the types of tasks that I've found to be empirically better done with an agent, and the types of tasks that are better done with only the meat between my ears.
I've distilled that experience into a set of principles which, I believe, can be employed to save any software engineer a decent amount of time.
This post includes a few of those principles.</p>
<h2>Your Mileage May Vary</h2>
<p>Before we get to those principles, however, I would like to make my reality clear to those reading.
I work on tightly integrated, highly technical systems that demand a deep level of domain knowledge.
Making spurious modifications to these systems can have outcomes that are difficult to predict without a good mental model of how they work.</p>
<p>That means many bugs can be solved in just a few lines of code, and many features can be implemented by simply connecting two seemingly unrelated systems in a strategic way.</p>
<p>My experience may not match yours, especially if you work on systems more loosely integrated.</p>
<h2>Err on the Side of Agent-Free Development</h2>
<blockquote>
<p>"Back in my day, we would have just called it 'development'." - Me, after writing the above heading.</p>
</blockquote>
<p>If you cannot articulate why an agent would be better or faster at completing a task, you should probably do it by hand.</p>
<p>In the early days of my explorations into ChatGPT, and later into Codex, I found myself reaching for the shiny new tool every chance I could.
It became a habit.
I would sit down at my laptop, decide what I wanted to work on, and the next thing I knew I would be typing a prompt into one LLM or another.</p>
<blockquote>
<p>"I see why it’s easy to fall in love with these things, because the variable positive reinforcement slot machine cowboy hacking is honestly more fun than if it had just gotten it right on the first try." - <a href="https://ma.tt/2026/02/claude-sonos/">Matt Mullenweg</a></p>
</blockquote>
<p>Honestly, it doesn't matter how these habits are built and then reinforced.
What matters is that turning Codex or Claude Code into a habit results in a lot of wasted time.</p>
<p>Sure, sometimes the agent <em>will</em> get it right on the first try.
The problem is that if it fails, you often need to start from scratch in order to end up with a product that's workable.
I have found myself spending hours refining prompts and going back-and-forth with an agent, only to throw it all out and code a better solution myself in less than fifteen minutes.</p>
<p>The critical piece here is that if you roll the dice, the potential for lost time is infinite while the potential for time saved is finite and often quantifiable.</p>
<p>So, if you cannot precisely articulate why you believe an agent will be able to do the job in less time than if you did the job yourself, you should attack the problem without one.
At least at first.</p>
<h2>Use an Agent for Large Refactors</h2>
<p>There will always be times where the dedicated tooling for your programming language of choice isn't enough.</p>
<p>I have encountered situations where I have needed to convert an interface into a class and cases where I needed to change a statically dispatched system into a dynamically dispatched one.
These are the kinds of problems that are great for agents to handle, because the logic is simple to verify and most of the changes are merely syntactic.</p>
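<p>To make the static-versus-dynamic distinction concrete, here is a minimal sketch of that kind of refactor. This is not Harper's actual code; the <code>Linter</code> trait and <code>SpellCheck</code> struct are hypothetical stand-ins.</p>

```rust
// A toy "Linter" interface. In the static version, the compiler
// monomorphizes `run_static` for each concrete linter type. In the
// dynamic version, `run_dynamic` takes a trait object and resolves
// the call at runtime through a vtable.
trait Linter {
    fn lint(&self, text: &str) -> usize;
}

struct SpellCheck;

impl Linter for SpellCheck {
    fn lint(&self, text: &str) -> usize {
        // Pretend every word starting with 'x' is a typo.
        text.split_whitespace().filter(|w| w.starts_with('x')).count()
    }
}

// Before the refactor: static dispatch via a generic parameter.
fn run_static<L: Linter>(linter: &L, text: &str) -> usize {
    linter.lint(text)
}

// After the refactor: dynamic dispatch via a trait object, which
// lets heterogeneous linters live together in one collection.
fn run_dynamic(linter: &dyn Linter, text: &str) -> usize {
    linter.lint(text)
}

fn main() {
    let text = "one xylophone two";
    assert_eq!(run_static(&SpellCheck, text), 1);

    let linters: Vec<Box<dyn Linter>> = vec![Box::new(SpellCheck)];
    for linter in &linters {
        assert_eq!(run_dynamic(linter.as_ref(), text), 1);
    }
}
```

<p>The change is mostly syntactic (swap the generic bound for a trait object, box the values), and the behavior is easy to verify with an existing test suite, which is what makes it a good fit for an agent.</p>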
<p>At the time, the dedicated Rust and C++ tooling at my disposal was not capable of doing such things.
When I threw an agent at the problem, it was more than capable of making the changes and verifying that they did what I wanted.
In fact, I suspect it made the exact edits that I would have made.
At the very least, I was saved some typing.</p>
<p>Put another way, if the bulk of the problem is <a href="https://en.wikipedia.org/wiki/Busy_work">"busywork"</a>, let an agent handle it so you can focus on the more complex issues remaining.</p>
<h2>Core Systems Should Not Be Written by an Agent</h2>
<p>Systems core to a piece of software have two key requirements that, at the time of writing, cannot be fulfilled by any agents on the market.
They need to:</p>
<ol>
<li>Be flexible.</li>
<li>Be aggressively modeled to the domain.</li>
</ol>
<p>Flexibility means that the system can be extended or modified without much effort: it can be imagined as a whole by you and any future contributors, and semantic changes to minor components are unlikely to unexpectedly affect other minor components.
Right now, no agents are capable of doing this. Trust me — I've tested them.</p>
<p>Similarly, core system architecture needs to fit the domain or problem it seeks to solve.
Otherwise, it will take undue effort to implement the smallest bit of functionality.
Like flexibility, this hard-to-describe trait is also absent from the code generated by the agents on the market.
Their solutions are either too general or too direct.
In either case, the code turns out to be far from extensible.</p>
<h2>Extensions Are Perfect for Agents</h2>
<p>While core systems should be architected with human muscle, extensions to that system can be built with agents.</p>
<p>Extensions are usually smaller in scope, restricted in capability by their environment, and are less likely to affect other systems.
That means they are generally easier to review, less likely to introduce security vulnerabilities, and creating them doesn't require an in-depth mental model.</p>
<p>This is why software like <a href="https://telex.automattic.ai/">Telex</a> is so effective.
It is also why those who work on vertically integrated systems are likelier to run into issues when using agents than those who work on horizontally integrated ones.</p>
<h2>What Is Your Experience?</h2>
<p>I'm still learning how and when to integrate agents into my work, as is everyone else.
I'm curious: have you found specific kinds of issues or bugs that are best tackled with an agent?
Are there specific moments where you feel more productive <em>without</em> one?
I am certain there are nuances I've missed here.
What are they?</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper's Policy on Agent PRs]]></title>
            <link>https://elijahpotter.dev/articles/harper's_policy_on_agent_PRs</link>
            <guid>https://elijahpotter.dev/articles/harper's_policy_on_agent_PRs</guid>
            <pubDate>Mon, 23 Mar 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper's Policy on PRs Authored by Agents</h1>
<p>A few weeks ago, I was invited to speak at my local chapter of the Association for Computing Machinery (ACM). Being on a college campus — in this case, the Colorado School of Mines — much of my audience were students. My presentation, which was about how open source software tends to outperform proprietary software in the marketplace due to the ability for its users to contribute back in ways that are genuinely valuable, went well. I hope I am invited back.</p>
<p>Not much younger than me, they were naturally curious about what it is like to produce and maintain open source software. One student, Renn, asked me how I deal with patch requests produced by large language models (LLMs). My unsatisfactory answer was that I address them on a case‑by‑case basis.</p>
<p>The goal of this page is to formalize my answer so that we can judiciously deal with patch requests produced by LLMs. That is, to provide a framework which reduces the amount of time that overly complex patches remain idle and make it easier for wielders of these models to produce meaningful changes to Harper's code.</p>
<h2>The Policy</h2>
<p>We accept pull requests from humans wielding large language models and from agents powered by them. They are subject to the same rules and guidelines we apply to humans working <strong>without</strong> a large language model, with a few extra spices mixed in. Below, we have three main ideas which aim to simultaneously reduce the time between submission of a patch and when it is merged and to reduce the time and attention required to review a patch.</p>
<p>That means that it is in your interest to respect these ideas. Good pull requests are more likely to be reviewed in a timely manner and therefore are more likely to be merged.</p>
<p>I could wax poetic for hours about all the edge cases not covered here. Instead, I'll boil everything down into three ideas and trust that they cover what matters most.</p>
<h3>One: Keep Things Brief</h3>
<p>Try to break down the problem into as many individual pull requests as you can. Smaller pull requests are easier to review. Larger requests are super‑linearly harder to review. You can dramatically speed up the process by breaking your large PR into several smaller ones, each with their own description and explanation. I've found this to be especially true with code generated by an LLM.</p>
<p>Part of the idea here is to catch wasted work before it's done. If you submit step one of a project that isn’t really applicable to Harper’s mission, we can reorient it before too much time has been wasted.</p>
<h3>Two: Stay Grounded</h3>
<p>Each “project” — or string of pull requests — should exist to create value for a human. I don’t love it when I receive code just for code’s sake. If you have a need and a patch will fill it, say so! If there’s an issue in GitHub that you’re addressing, link it! This should be pretty easy. If you’ve dedicated time to working on a problem, there’s probably a reason. I want to hear that reason.</p>
<h3>Three: Be Honest</h3>
<p>If you are an agent or used an LLM to produce any part of a PR, please provide a disclaimer at the top of your PR description. I am not opposed to reviewing LLM code, but I do appreciate the honesty.</p>
<h2>Talk to Me</h2>
<p>Above all, talk to me. If you have questions or concerns about anything I’ve discussed here, let me know. The hope is that all of this will improve the quality of the Harper project, and improve the ability for people to express themselves with the written word. As always, any and all contributions are welcome.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper's Policy on PRs Authored by Agents</h1>
<p>A few weeks ago, I was invited to speak at my local chapter of the Association for Computing Machinery (ACM). Being on a college campus — in this case, the Colorado School of Mines — much of my audience were students. My presentation, which was about how open source software tends to outperform proprietary software in the marketplace due to the ability for its users to contribute back in ways that are genuinely valuable, went well. I hope I am invited back.</p>
<p>Not much younger than me, they were naturally curious about what it is like to produce and maintain open source software. One student, Renn, asked me how I deal with patch requests produced by large language models (LLMs). My unsatisfactory answer was that I address them on a case‑by‑case basis.</p>
<p>The goal of this page is to formalize my answer so that we can judiciously deal with patch requests produced by LLMs. That is, to provide a framework which reduces the amount of time that overly complex patches remain idle and make it easier for wielders of these models to produce meaningful changes to Harper's code.</p>
<h2>The Policy</h2>
<p>We accept pull requests from humans wielding large language models and from agents powered by them. They are subject to the same rules and guidelines we apply to humans working <strong>without</strong> a large language model, with a few extra spices mixed in. Below, we have three main ideas which aim to simultaneously reduce the time between submission of a patch and when it is merged and to reduce the time and attention required to review a patch.</p>
<p>That means that it is in your interest to respect these ideas. Good pull requests are more likely to be reviewed in a timely manner and therefore are more likely to be merged.</p>
<p>I could wax poetic for hours about all the edge cases not covered here. Instead, I'll boil everything down into three ideas and trust that they cover what matters most.</p>
<h3>One: Keep Things Brief</h3>
<p>Try to break down the problem into as many individual pull requests as you can. Smaller pull requests are easier to review. Larger requests are super‑linearly harder to review. You can dramatically speed up the process by breaking your large PR into several smaller ones, each with their own description and explanation. I've found this to be especially true with code generated by an LLM.</p>
<p>Part of the idea here is to catch wasted work before it's done. If you submit step one of a project that isn’t really applicable to Harper’s mission, we can reorient it before too much time has been wasted.</p>
<h3>Two: Stay Grounded</h3>
<p>Each “project” — or string of pull requests — should exist to create value for a human. I don’t love it when I receive code just for code’s sake. If you have a need and a patch will fill it, say so! If there’s an issue in GitHub that you’re addressing, link it! This should be pretty easy. If you’ve dedicated time to working on a problem, there’s probably a reason. I want to hear that reason.</p>
<h3>Three: Be Honest</h3>
<p>If you are an agent or used an LLM to produce any part of a PR, please provide a disclaimer at the top of your PR description. I am not opposed to reviewing LLM code, but I do appreciate the honesty.</p>
<h2>Talk to Me</h2>
<p>Above all, talk to me. If you have questions or concerns about anything I’ve discussed here, let me know. The hope is that all of this will improve the quality of the Harper project, and improve the ability for people to express themselves with the written word. As always, any and all contributions are welcome.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Superpower]]></title>
            <link>https://elijahpotter.dev/articles/my_superpower</link>
            <guid>https://elijahpotter.dev/articles/my_superpower</guid>
            <pubDate>Sat, 14 Mar 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>My Superpower</h1>
<img src="/images/giraffe.webp" alt="A photo taken by myself of a giraffe resting its head on a pole.">
<p>In high school, I was an insatiable consumer of science fiction.
In hindsight, it is unbelievable how much time I spent reading instead of doing the other things usually considered to be typical for teenagers (going out, doing homework, etc.).
I was the classic science fiction nerd.
My life revolved around the words of Asimov, Weir, Clarke, and probably every other science fiction author you could name.</p>
<p>It was not until relatively late in my academic career that I realized my thirst for the written word had given me a superpower.
All of those years slurping up science fiction had accidentally given me an unusual ability (in this day and age).
An ability that, once upon a time, had been commonplace.</p>
<p>Fortunately, this ability can be trained quite quickly.
What had taken me years to learn by accident can be developed in just a few weeks (or days, if you're committed).</p>
<p>This blog post will serve as documentation of:</p>
<ol>
<li>What this ability is.</li>
<li>How you can develop it.</li>
<li>How it can be applied to the day-to-day work of professional software engineers.</li>
</ol>
<h2>The Ability to Read</h2>
<p>In the interest of avoiding clickbait, I'll just come out and say it plainly: this "superpower" is the ability to read full books from start to finish.</p>
<p>Sounds boring, right?</p>
<p>I call it a superpower because it is becoming rarer by the day.
Fewer professionals read anything longer than a short blog post or article today than at any point in the last century.
I won't get into why, since there is already an abundance of <a href="https://www.youtube.com/watch?v=A3wJcF0t0bQ">documentation explaining why people have been reading less</a>.
But in order for something to be a superpower, it needs to be more than just rare.
Why should reading count as one?</p>
<p>Reading qualifies because it allows one to do something that is otherwise extremely hard: understand a topic or idea at a level approaching an expert.</p>
<p>When I was in school, I remember sitting in lectures surrounded by students who hated our courses' textbooks.
My colleagues called the infernal blocks of paper "boring", "repetitive", and "heavy".
Indeed, they were, but they also contained most of the information the professor would lecture about, and presented it in a format I could consume in a fraction of the time.</p>
<p>I realized that I could save myself hours of tedious homework and lecture time by simply reading ahead in a course's textbook.</p>
<p>Now that I've spent a decent chunk of my life removed from my studies, I've come to realize that this same philosophy can be applied professionally in our work as engineers.
Hundreds of hours of intellectual floundering can be saved with just a little bit of reading.</p>
<h2>Depth Provides Certainty</h2>
<p>To illustrate why the kind of deep understanding that can be obtained through literature is relevant, I'll bring an example from my own recent work.</p>
<p>For context, I work on Harper, a grammar checker for those who care for their privacy.
We've recently been <a href="./finding_the_active_voice">working to expand our offering to compete with Grammarly Premium</a>.
This is a complex, open-ended problem with no "right" answer.
I can imagine two approaches to it, one good and another bad.</p>
<p>I was pretty well versed in the kinds of offerings that Grammarly provides, and it was pretty obvious how many of them work.
Grammarly takes some LLM provider with an API (like OpenAI or Anthropic), feeds it a bunch of style guides as context, wraps it all in a pretty UI, and sells the result at a premium.
We could do that same thing, right?</p>
<p>In the bad case, I could just copy what Grammarly is already doing and call it a day.
Good idea, right?</p>
<p>I didn't think so, so I sat down and read through some of the prime-time literature coming out of labs written by experts in natural language and the arts.
They pointed me in a completely different direction.</p>
<p>By reading deeply, I was able to come to the realization that many of the features that Grammarly hides behind their paywall are relatively simple to implement, if you know what you're doing.
Not only that, but combined with the existing infrastructure Harper already owns, we could do it cheaply without disrespecting our customers' privacy or copyright.
In other words, we could differentiate ourselves from the competition by tackling the problem deeply and approaching it from a unique angle.</p>
<p>This story has happened over and over again, and I can imagine that it applies to most of the people working as engineers today.
A big open-ended problem needs to be solved.
Rather than approach it in the obvious manner, you can learn from the experts yourself and produce something better as a result of their insight.</p>
<h2>How to Develop the Skill of Reading</h2>
<p>On occasion, I will find myself in a dry spell.
I will go several weeks or even months without reading anything of much significance.
In doing so, I will gradually lose my ability to read with depth for extended periods of time.
My mind's muscle for grappling with the tough ideas proposed in the words of experts will weaken.</p>
<p>When that happens, here are two things that I do to beef it back up.</p>
<h3>Schedule a Block of Time for Depth</h3>
<p>Often, the biggest barriers to depth are the simple distractions of life.
You have to pick up the kids, cook dinner, or go shopping.
There is always something on the docket that gets in the way.</p>
<p>My solution: schedule a session of deep reading so that it looks like any other chore on your calendar.
If I don't do this and, instead, try to fit it inside the gaps, I find that my mind will come up with excuses for doing anything other than read.</p>
<p>My mind says, "But Elijah, you need to do your taxes right now. You should do that instead, because reading is boring."
Your mind might say something else.
I happen to enjoy doing my taxes.</p>
<p>Schedule time for deep reading and you'll be able to say "no" to that <a href="https://youtu.be/arj7oStGLkU?si=tOWjRDYhFoOicay-">procrastination monkey in your head.</a></p>
<h3>Find a Topic That Is of Immediate Relevance</h3>
<p>I love my job, and I get a huge thrill when I find out a way that I can do it better.
So if I need to find something to read, I often start by asking myself, "what is the hardest part of my job right now?"</p>
<p>The answer can be:</p>
<ul>
<li>A specific problem I'm trying to solve (see the section above titled "Depth Provides Certainty").</li>
<li>A repetitive process that I do regularly.</li>
<li>Something I am unsure about which is discouraging me from working with gusto.</li>
</ul>
<p>From there, I plainly ask ChatGPT: "What is a good book that covers X in depth for a person with Y level of preexisting knowledge of the subject."</p>
<p>To develop the skill of reading deeply, your book doesn't need to be about a subject related to your job.
That just happens to work well for me.</p>
<h2>Just Start</h2>
<p>A journey of a thousand miles begins with a single step.
A book of a thousand pages begins with a single paragraph.
The best thing you can do is start.</p>
<p>So, I implore you.
If you're stuck on a problem or an idea, try reading a book about it.
I have personally found an immeasurable amount of meaning through literature, and it's unblocked me on a number of problems in my professional and personal life.
I suspect it can do the same for you.</p>]]></description>
            <content:encoded><![CDATA[<h1>My Superpower</h1>
<img src="/images/giraffe.webp" alt="A photo taken by myself of a giraffe resting its head on a pole.">
<p>In high school, I was an insatiable consumer of science fiction.
In hindsight, it is unbelievable how much time I spent reading instead of doing the other things usually considered to be typical for teenagers (going out, doing homework, etc.).
I was the classic science fiction nerd.
My life revolved around the words of Asimov, Weir, Clarke, and probably every other science fiction author you could name.</p>
<p>It was not until relatively late in my academic career that I realized my thirst for the written word had given me a superpower.
All of those years slurping up science fiction had accidentally given me an unusual ability (in this day and age).
An ability that, once upon a time, had been commonplace.</p>
<p>Fortunately, this ability can be trained quite quickly.
What had taken me years to learn by accident can be developed in just a few weeks (or days, if you're committed).</p>
<p>This blog post will serve as documentation of:</p>
<ol>
<li>What this ability is.</li>
<li>How you can develop it.</li>
<li>How it can be applied to the day-to-day work of professional software engineers.</li>
</ol>
<h2>The Ability to Read</h2>
<p>In the interest of avoiding clickbait, I'll just come out and say it plainly: this "superpower" is the ability to read full books from start to finish.</p>
<p>Sounds boring, right?</p>
<p>I call it a superpower because it is becoming rarer by the day.
Fewer professionals read anything longer than a short blog post or article today than at any point in the last century.
I won't get into why, since there is already an abundance of <a href="https://www.youtube.com/watch?v=A3wJcF0t0bQ">documentation explaining why people have been reading less</a>.
But in order for something to be a superpower, it needs to be more than just rare.
Why should reading count as one?</p>
<p>Reading qualifies because it allows one to do something that is otherwise extremely hard: understand a topic or idea at a level approaching an expert.</p>
<p>When I was in school, I remember sitting in lectures surrounded by students who hated our courses' textbooks.
My colleagues called the infernal blocks of paper "boring", "repetitive", and "heavy".
Indeed, they were, but they also contained most of the information the professor would lecture about, and presented it in a format I could consume in a fraction of the time.</p>
<p>I realized that I could save myself hours of tedious homework and lecture time by simply reading ahead in a course's textbook.</p>
<p>Now that I've spent a decent chunk of my life removed from my studies, I've come to realize that this same philosophy can be applied professionally in our work as engineers.
Hundreds of hours of intellectual floundering can be saved with just a little bit of reading.</p>
<h2>Depth Provides Certainty</h2>
<p>To illustrate why the kind of deep understanding that can be obtained through literature is relevant, I'll bring an example from my own recent work.</p>
<p>For context, I work on Harper, a grammar checker for those who care for their privacy.
We've recently been <a href="./finding_the_active_voice">working to expand our offering to compete with Grammarly Premium</a>.
This is a complex, open-ended problem with no "right" answer.
I can imagine two approaches to it, one good and another bad.</p>
<p>I was pretty well versed in the kinds of offerings that Grammarly provides, and it was pretty obvious how many of them work.
Grammarly takes some LLM provider with an API (like OpenAI or Anthropic), provides a bunch of style guides as context, wraps it all in a pretty UI, and sells it at a premium.
We could do that same thing, right?</p>
<p>In the bad case, I could just copy what Grammarly is already doing and call it a day.
Good idea, right?</p>
<p>I didn't think so, so I sat down and read through some of the prime-time literature coming out of labs written by experts in natural language and the arts.
They pointed me in a completely different direction.</p>
<p>By reading deeply, I was able to come to the realization that many of the features that Grammarly hides behind their paywall are relatively simple to implement, if you know what you're doing.
Not only that, but combined with the existing infrastructure Harper already owns, we could do it cheaply without disrespecting our customers' privacy or copyright.
In other words, we could differentiate ourselves from the competition by tackling the problem deeply and approaching it from a unique angle.</p>
<p>This story has happened over and over again, and I can imagine that it applies to most of the people working as engineers today.
A big open-ended problem needs to be solved.
Rather than approach it in the obvious manner, you can learn from the experts yourself and produce something better as a result of their insight.</p>
<h2>How to Develop the Skill of Reading</h2>
<p>On occasion, I will find myself in a dry spell.
I will go several weeks or even months without reading anything of much significance.
In doing so, I will gradually lose my ability to read with depth for extended periods of time.
My mind's muscle for grappling with the tough ideas proposed in the words of experts will weaken.</p>
<p>When that happens, here are two things that I do to beef it back up.</p>
<h3>Schedule a Block of Time for Depth</h3>
<p>Often, the biggest barriers to depth are the simple distractions of life.
You have to pick up the kids, cook dinner, or go shopping.
There is always something on the docket that gets in the way.</p>
<p>My solution: schedule a session of deep reading so that it looks like any other chore on your calendar.
If I don't do this and, instead, try to fit it inside the gaps, I find that my mind will come up with excuses for doing anything other than read.</p>
<p>My mind says, "But Elijah, you need to do your taxes right now. You should do that instead, because reading is boring."
Your mind might say something else.
I happen to enjoy doing my taxes.</p>
<p>Schedule time for deep reading and you'll be able to say "no" to that <a href="https://youtu.be/arj7oStGLkU?si=tOWjRDYhFoOicay-">procrastination monkey in your head.</a></p>
<h3>Find a Topic That Is of Immediate Relevance</h3>
<p>I love my job, and I get a huge thrill when I find out a way that I can do it better.
So if I need to find something to read, I often start by asking myself, "what is the hardest part of my job right now?"</p>
<p>The answer can be:</p>
<ul>
<li>A specific problem I'm trying to solve (see the section above titled "Depth Provides Certainty").</li>
<li>A repetitive process that I do regularly.</li>
<li>Something I am unsure about that is discouraging me from working with gusto.</li>
</ul>
<p>From there, I plainly ask ChatGPT: "What is a good book that covers X in depth for a person with Y level of preexisting knowledge of the subject?"</p>
<p>To develop the skill of reading deeply, your book doesn't need to be about a subject related to your job.
That just happens to work well for me.</p>
<h2>Just Start</h2>
<p>A journey of a thousand miles begins with a single step.
A book of a thousand pages begins with a single paragraph.
The best thing you can do is start.</p>
<p>So, I implore you.
If you're stuck on a problem or an idea, try reading a book about it.
I have personally found an immeasurable amount of meaning through literature, and it's unblocked me on a number of problems in my professional and personal life.
I suspect it can do the same for you.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/giraffe.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Do Not Write with an LLM]]></title>
            <link>https://elijahpotter.dev/articles/do_not_write_with_an_LLM</link>
            <guid>https://elijahpotter.dev/articles/do_not_write_with_an_LLM</guid>
            <pubDate>Mon, 02 Mar 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Do Not Write with an LLM</h1>
<img src="/images/green_bird.webp" alt="A photo taken by myself of a green bird.">
<h2>A Growing Trend</h2>
<p>I have been seeing a growing trend of people showing up in online spaces and proudly announcing that they are writing with the assistance of AI.
I often see them clarify that they are going further than letting the model perform grammar or fact-checking.
They are describing an idea to an LLM, and giving it complete control over the final written document.
They seem to be proud of this. They shouldn't be.</p>
<h2>What <strong>Does Not</strong> Work</h2>
<p>Writing seems to be a shadow of a more fundamental and useful process we like to call "thinking".
There is a reason why LLMs, which are the closest thing we currently have to "software that thinks", are trained to think through writing.
It seems logical, then, that we can use this "software that writes" to help along our own thinking process, right? Wrong.</p>
<p>Writing is one of the most cerebral activities that humans do on a regular basis.
<a href="https://elijahpotter.dev/articles/why-i-talk-to-myself">I try to sit down and write about a complex topic at least once a week.</a>
Some of those reasons pertain to my job in particular, but others apply to any profession.</p>
<p>When a mathematician needs to think critically about some theorem or proof, they often turn to a dry-erase board or a chalkboard.
The mathematician then writes in clear notation what they imagine the problem to look like in their heads.
From there, they can manipulate that notation until they reach their desired outcome.</p>
<p>The English language is extremely similar, with a key difference.
Mathematical notation is specifically designed for a narrow domain.
Natural languages, on the other hand, can be applied to any field.
By writing your idea down on paper (or in a Google Doc, it does not matter), you give it substance.
This substance can then be shaped iteratively towards a goal of some kind.</p>
<p>No great ideas were written down on the first try.
They started as a vague notion, and through writing and rewriting, they became something concrete and valuable.
You cannot dictate a vague notion to an LLM and expect it to spit out something tangibly valuable.
Any attempt to do so is lazy, and it will result in failure.
I have seen it in the wild, and frankly, it is a little embarrassing.</p>
<p><a href="https://ammil.industries/i-know-you-didnt-write-this/">This article</a> feels especially relevant today.</p>
<h2>What <strong>Does</strong> Work</h2>
<p>Using an LLM for research can be very effective.
Searches to ChatGPT or Perplexity can surface information faster than digging around in a book you read three months ago.
For lots of searches, these two sources have completely replaced Google and DuckDuckGo for me.</p>
<p>They're also quite excellent fact-checkers.
I've seen that plenty of folks have received quite good results by pasting their work into an LLM with an internet tool (like ChatGPT with Thinking enabled).</p>
<p>Finally, I've personally found that LLMs are quite good at finding places where I simply forgot to insert context.
Sometimes I forget to define a term or insert a relevant link.
Even small models like <code>gemma3n:e2b</code> can catch me.</p>
<h2>What <strong>You</strong> Should Be Doing</h2>
<p>When you need to communicate an idea in your head, sit down with a notepad and a keyboard and explain it.
I personally think you should use a grammar checker, since they are far less likely to impede your thought process, but if you need to, you can instruct ChatGPT to critique your work once it is finished.</p>
<p>If an idea deserves an audience, you're capable of writing it out yourself.
In the same vein, if you can vaguely describe it to ChatGPT, you can vaguely describe it to your intended audience.
Do yourself a favor and skip the middleman.</p>
<h2>What We Should Be Building</h2>
<p>Now, I'd like to enter the section where I talk specifically about what this all means for people who create tools for writers.
That includes both Harper and WordPress.</p>
<p>A tool that helps someone write is a tool that directly interfaces with how that person thinks.
That might include what they think about, but more concretely I am referring to the actual day-to-day process of thinking through complex ideas.
If you don't believe me, it's probably because you skipped ahead instead of reading this post from the beginning.</p>
<p>Most crucially, we cannot allow ourselves to build tools that dissuade people from expressing their ideas in the way they imagine.
This can take the form of friction, like building tools that are so cumbersome that one dreads to open them at the start of the workday.
This can also take the form of a tool that crushes or overwrites an author's core message.</p>
<p>A common complaint I hear about Grammarly (from people who have switched to Harper) is that it's too demanding. When they write prose in a creative or unusual fashion, the service often forces them back into a colder and more formal way of speaking.
We don't want to be like them.</p>
<p>For Harper, that means we need to be diverse in our training datasets and mindful of feedback from our users.
For WordPress, it means we need to encourage our LLM integrations to tread lightly and respectfully.
They need to accept when they're wrong.
Just as any good human should.</p>]]></description>
            <content:encoded><![CDATA[<h1>Do Not Write with an LLM</h1>
<img src="/images/green_bird.webp" alt="A photo taken by myself of a green bird.">
<h2>A Growing Trend</h2>
<p>I have been seeing a growing trend of people showing up in online spaces and proudly announcing that they are writing with the assistance of AI.
I often see them clarify that they are going further than letting the model perform grammar or fact-checking.
They are describing an idea to an LLM, and giving it complete control over the final written document.
They seem to be proud of this. They shouldn't be.</p>
<h2>What <strong>Does Not</strong> Work</h2>
<p>Writing seems to be a shadow of a more fundamental and useful process we like to call "thinking".
There is a reason why LLMs, which are the closest thing we currently have to "software that thinks", are trained to think through writing.
It seems logical, then, that we can use this "software that writes" to help along our own thinking process, right? Wrong.</p>
<p>Writing is one of the most cerebral activities that humans do on a regular basis.
<a href="https://elijahpotter.dev/articles/why-i-talk-to-myself">I try to sit down and write about a complex topic at least once a week.</a>
Some of those reasons pertain to my job in particular, but others apply to any profession.</p>
<p>When a mathematician needs to think critically about some theorem or proof, they often turn to a dry-erase board or a chalkboard.
The mathematician then writes in clear notation what they imagine the problem to look like in their heads.
From there, they can manipulate that notation until they reach their desired outcome.</p>
<p>The English language is extremely similar, with a key difference.
Mathematical notation is specifically designed for a narrow domain.
Natural languages, on the other hand, can be applied to any field.
By writing your idea down on paper (or in a Google Doc, it does not matter), you give it substance.
This substance can then be shaped iteratively towards a goal of some kind.</p>
<p>No great ideas were written down on the first try.
They started as a vague notion, and through writing and rewriting, they became something concrete and valuable.
You cannot dictate a vague notion to an LLM and expect it to spit out something tangibly valuable.
Any attempt to do so is lazy, and it will result in failure.
I have seen it in the wild, and frankly, it is a little embarrassing.</p>
<p><a href="https://ammil.industries/i-know-you-didnt-write-this/">This article</a> feels especially relevant today.</p>
<h2>What <strong>Does</strong> Work</h2>
<p>Using an LLM for research can be very effective.
Searches to ChatGPT or Perplexity can surface information faster than digging around in a book you read three months ago.
For lots of searches, these two sources have completely replaced Google and DuckDuckGo for me.</p>
<p>They're also quite excellent fact-checkers.
I've seen that plenty of folks have received quite good results by pasting their work into an LLM with an internet tool (like ChatGPT with Thinking enabled).</p>
<p>Finally, I've personally found that LLMs are quite good at finding places where I simply forgot to insert context.
Sometimes I forget to define a term or insert a relevant link.
Even small models like <code>gemma3n:e2b</code> can catch me.</p>
<h2>What <strong>You</strong> Should Be Doing</h2>
<p>When you need to communicate an idea in your head, sit down with a notepad and a keyboard and explain it.
I personally think you should use a grammar checker, since they are far less likely to impede your thought process, but if you need to, you can instruct ChatGPT to critique your work once it is finished.</p>
<p>If an idea deserves an audience, you're capable of writing it out yourself.
In the same vein, if you can vaguely describe it to ChatGPT, you can vaguely describe it to your intended audience.
Do yourself a favor and skip the middleman.</p>
<h2>What We Should Be Building</h2>
<p>Now, I'd like to enter the section where I talk specifically about what this all means for people who create tools for writers.
That includes both Harper and WordPress.</p>
<p>A tool that helps someone write is a tool that directly interfaces with how that person thinks.
That might include what they think about, but more concretely I am referring to the actual day-to-day process of thinking through complex ideas.
If you don't believe me, it's probably because you skipped ahead instead of reading this post from the beginning.</p>
<p>Most crucially, we cannot allow ourselves to build tools that dissuade people from expressing their ideas in the way they imagine.
This can take the form of friction, like building tools that are so cumbersome that one dreads to open them at the start of the workday.
This can also take the form of a tool that crushes or overwrites an author's core message.</p>
<p>A common complaint I hear about Grammarly (from people who have switched to Harper) is that it's too demanding. When they write prose in a creative or unusual fashion, the service often forces them back into a colder and more formal way of speaking.
We don't want to be like them.</p>
<p>For Harper, that means we need to be diverse in our training datasets and mindful of feedback from our users.
For WordPress, it means we need to encourage our LLM integrations to tread lightly and respectfully.
They need to accept when they're wrong.
Just as any good human should.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/green_bird.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Refactoring Slop]]></title>
            <link>https://elijahpotter.dev/articles/refactoring_slop</link>
            <guid>https://elijahpotter.dev/articles/refactoring_slop</guid>
            <pubDate>Fri, 20 Feb 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Refactoring Slop</h1>
<p>The term "slop" is rapidly entering the conventional English lexicon.
In fact, it was Merriam-Webster's Word of the Year for 2025. It refers to low-quality media generated by an LLM.
It can be applied to images, text, music, or most relevantly to us, code.</p>
<p>As an open source maintainer, I frequently both receive and create pull requests that could be labeled as slop.
One of my principal jobs is to stand at the gate to the Harper repository and say, "thou shalt not pass" to any low-quality or buggy code that wishes to enter.
So my question becomes, how can I turn AI-generated code from "slop" tier to "top tier"?</p>
<p>There are a gazillion guides out there about the various techniques you can use to improve the quality of AI-generated code.
Those techniques will change, and I'm sure my readership already knows all about them.
Instead, I want to talk about some more generally applicable ideas that I've found especially useful in the age of AI.
I'm something of a contrarian, so expect some dishing on conventional "vibe coders".</p>
<h2>A Good Foundation</h2>
<p>I am often sent stories of developers discovering Claude Code for the first time and using it to build some simple CRUD app from scratch, without ever reading or writing a line of code themselves.
Those are impressive stories, and they're a sign of the amazing progress that the nation's frontier AI labs have made in the last few months.
They are not, however, examples of how an open-source maintainer should operate.</p>
<p>As I said before, it's critical that a maintainer act as a gate, allowing or denying code entry into the repository (and thus its ultimate delivery to users' devices).
In order to do that, they need to have a good understanding of how the code already works.
Then, and <em>only</em> then, should they allow modifications to it.</p>
<p>This should be pretty obvious advice to most in our industry, but evidently it isn't.
I've seen several instances recently of developers vibe-coding a new feature without some existing foundational understanding of how the original code worked.
The result: the new feature works well enough, but its implementation breaks some other part of the application.
That's not to mention that it increases the app's complexity on the whole far more than necessary.</p>
<p>Having a foundational understanding from the get-go is an easy way to prevent this kind of tragic outcome.
LLMs are not currently capable enough to recognize when a pattern in the code they're writing already exists in the codebase.
I've tried pushing them in the right direction with skills and with an <code>AGENTS.md</code>, and they'll pick it up given enough massaging, but the fact is that they still need to be poked.
If I didn't have a good knowledge of my project's internals, I wouldn't know to poke them at all.</p>
<p>Understand your project deeply.
Understand its code, its values, and its purpose.</p>
<h2>Reviewing AI Code</h2>
<p>Now that you're an expert on how your project works, you're ready to start reviewing AI-written pull requests.
Here's the thing you need to remember: you have better taste than the clanker does.
You ultimately get to decide what code gets merged.
What should that code look like?
How should it be tested?
Your answer to those questions should guide you towards prompting a model to revise its work or submitting feedback on a pull request opened by a human.</p>
<p>Here's what your prompt should <em>not</em> look like:</p>
<ul>
<li>"Fix it."</li>
<li>"Make it prettier."</li>
<li>"I don't like that."</li>
</ul>
<p>Obviously, you wouldn't want feedback like that if you were the one writing the code.
If you're looking at a patch, whoever wrote it did the best job they could.
Whether it was an LLM or a human doesn't matter.
If the code isn't up to your refined taste as a human, you need to give them a nudge in the right direction.
Here are some better versions of the prompts from before:</p>
<ul>
<li>"It looks like your modification made the app crash on startup. Would you run the tests to make sure you haven't broken anything in the X module?"</li>
<li>"This huge block of code is really hard to read. Would you refactor it into individual composable components?"</li>
<li>"It looks like you're reimplementing a function we already have. Would you use X instead?"</li>
</ul>
<p>For those unaware, you should never say "I don't like that" to a human about their work.</p>
<h2>Accept That The Models Are Limited in Intelligence</h2>
<p>It's easy to get the impression that with enough tokens, anything is possible.
Maybe that will be true in the future, but I don't think that's true today.</p>
<p>LLMs are limited in intelligence and experience.
Even when provided access to the best coding setups in the world, they are still incapable of shipping many (most?) features or fixing many (most?) bugs.
The fact that I have to include question marks is a testament to how far we've come, but it's important not to overstate things.
If you're reading this, you're pretty smart.
It's fine to let the model try to solve a problem for you once or twice, but if it still fails, you'll need to get in there and do it yourself.
Don't be afraid to get your hands dirty.</p>]]></description>
            <content:encoded><![CDATA[<h1>Refactoring Slop</h1>
<p>The term "slop" is rapidly entering the conventional English lexicon.
In fact, it was Merriam-Webster's Word of the Year for 2025. It refers to low-quality media generated by an LLM.
It can be applied to images, text, music, or most relevantly to us, code.</p>
<p>As an open source maintainer, I frequently both receive and create pull requests that could be labeled as slop.
One of my principal jobs is to stand at the gate to the Harper repository and say, "thou shalt not pass" to any low-quality or buggy code that wishes to enter.
So my question becomes, how can I turn AI-generated code from "slop" tier to "top tier"?</p>
<p>There are a gazillion guides out there about the various techniques you can use to improve the quality of AI-generated code.
Those techniques will change, and I'm sure my readership already knows all about them.
Instead, I want to talk about some more generally applicable ideas that I've found especially useful in the age of AI.
I'm something of a contrarian, so expect some dishing on conventional "vibe coders".</p>
<h2>A Good Foundation</h2>
<p>I am often sent stories of developers discovering Claude Code for the first time and using it to build some simple CRUD app from scratch, without ever reading or writing a line of code themselves.
Those are impressive stories, and they're a sign of the amazing progress that the nation's frontier AI labs have made in the last few months.
They are not, however, examples of how an open-source maintainer should operate.</p>
<p>As I said before, it's critical that a maintainer act as a gate, allowing or denying code entry into the repository (and thus its ultimate delivery to users' devices).
In order to do that, they need to have a good understanding of how the code already works.
Then, and <em>only</em> then, should they allow modifications to it.</p>
<p>This should be pretty obvious advice to most in our industry, but evidently it isn't.
I've seen several instances recently of developers vibe-coding a new feature without some existing foundational understanding of how the original code worked.
The result: the new feature works well enough, but its implementation breaks some other part of the application.
That's not to mention that it increases the app's complexity on the whole far more than necessary.</p>
<p>Having a foundational understanding from the get-go is an easy way to prevent this kind of tragic outcome.
LLMs are not currently capable enough to recognize when a pattern in the code they're writing already exists in the codebase.
I've tried pushing them in the right direction with skills and with an <code>AGENTS.md</code>, and they'll pick it up given enough massaging, but the fact is that they still need to be poked.
If I didn't have a good knowledge of my project's internals, I wouldn't know to poke them at all.</p>
<p>Understand your project deeply.
Understand its code, its values, and its purpose.</p>
<h2>Reviewing AI Code</h2>
<p>Now that you're an expert on how your project works, you're ready to start reviewing AI-written pull requests.
Here's the thing you need to remember: you have better taste than the clanker does.
You ultimately get to decide what code gets merged.
What should that code look like?
How should it be tested?
Your answer to those questions should guide you towards prompting a model to revise its work or submitting feedback on a pull request opened by a human.</p>
<p>Here's what your prompt should <em>not</em> look like:</p>
<ul>
<li>"Fix it."</li>
<li>"Make it prettier."</li>
<li>"I don't like that."</li>
</ul>
<p>Obviously, you wouldn't want feedback like that if you were the one writing the code.
If you're looking at a patch, whoever wrote it did the best job they could.
Whether it was an LLM or a human doesn't matter.
If the code isn't up to your refined taste as a human, you need to give them a nudge in the right direction.
Here are some better versions of the prompts from before:</p>
<ul>
<li>"It looks like your modification made the app crash on startup. Would you run the tests to make sure you haven't broken anything in the X module?"</li>
<li>"This huge block of code is really hard to read. Would you refactor it into individual composable components?"</li>
<li>"It looks like you're reimplementing a function we already have. Would you use X instead?"</li>
</ul>
<p>For those unaware, you should never say "I don't like that" to a human about their work.</p>
<h2>Accept That The Models Are Limited in Intelligence</h2>
<p>It's easy to get the impression that with enough tokens, anything is possible.
Maybe that will be true in the future, but I don't think that's true today.</p>
<p>LLMs are limited in intelligence and experience.
Even when provided access to the best coding setups in the world, they are still incapable of shipping many (most?) features or fixing many (most?) bugs.
The fact that I have to include question marks is a testament to how far we've come, but it's important not to overstate things.
If you're reading this, you're pretty smart.
It's fine to let the model try to solve a problem for you once or twice, but if it still fails, you'll need to get in there and do it yourself.
Don't be afraid to get your hands dirty.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[LaTeX Support Is Coming to Harper]]></title>
            <link>https://elijahpotter.dev/articles/latex_support_is_coming_to_harper</link>
            <guid>https://elijahpotter.dev/articles/latex_support_is_coming_to_harper</guid>
            <pubDate>Wed, 18 Feb 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>LaTeX Support Is Coming to Harper</h1>
<p>It's been a long time coming, which is why I'm thrilled to say that Harper will soon support LaTeX. This is something our users have been pleading for since the beginning of the project. LaTeX support feels like a feature that is comparatively low‑effort, but unlocks a massive new consumer segment with little existing competition. It has the potential to be a huge boon to our user base and our offering overall.</p>
<p>For context, <a href="https://en.wikipedia.org/wiki/LaTeX">LaTeX</a> is a document markup language (similar in soul to Markdown), primarily used by STEM professions and students. It's extraordinarily good at formatting and rendering complex mathematical equations and chemical formulas.</p>
<h2>The Market</h2>
<p>Neither Grammarly nor LanguageTool support LaTeX documents, yet sites like Overleaf (the most popular LaTeX editor) regularly <a href="https://www.overleaf.com/blog/nine-years-nine-million-users">see an excess of 400,000 daily active users</a>. That's an enormous market of people who tend to be technical and who care about their privacy. Since the two most popular grammar checkers (Grammarly and LanguageTool) don't support sites like Overleaf in any way, we don't have much competition to worry about.</p>
<p>The story is similar (albeit to a lesser scale) for Typst, another document markup language. We haven't even merged support for Typst yet, and we're already seeing people pick up Harper explicitly for use in the Typst online editor.</p>
<h2>Previous Attempts</h2>
<p>On two separate occasions, members of the Harper community have opened (and subsequently closed) pull requests which tried to add LaTeX support to Harper. I've put links to that work below for posterity.</p>
<ul>
<li><a href="https://github.com/Automattic/harper/pull/100">PR by Grant Lemons</a></li>
<li><a href="https://github.com/Automattic/harper/pull/2073">PR by Shreyas Minocha</a></li>
</ul>
<p>The fundamental problem with LaTeX is that its underlying language, TeX, is not regular. In order to parse the language, you need to actually run it. This is why LaTeX documents can take several minutes to compile and render to a PDF.</p>
<p>Rather than “correctly” read and run the TeX code contained within LaTeX documents, these previous attempts (including mine, which we will discuss in a second) chose to assume that LaTeX commands are keywords in a more abstract language. Some edge‑cases will naturally arise when we ignore the lowest level of abstraction, but doing so allows us to parse most real documents with less effort.</p>
<h2>Implementation Details</h2>
<p>It was very important to me that our LaTeX support was fast, small, and <strong>could be compiled to run in the browser</strong>. That last bit is critical, because previous attempts (by the community) to add LaTeX support tried to use Tree‑sitter, which cannot be easily compiled to run in the browser. Almost all LaTeX users, including the vast majority of STEM undergraduate and graduate students, use Overleaf, an online LaTeX compiler and editor. Developing LaTeX support without making it possible to use Overleaf would be like developing a text editor that can't be used with a keyboard. Simply illogical.</p>
<p>To make this happen, I wrote my parser from scratch in Rust. Actually, calling it a parser would be an insult to all of parser‑kind. My solution produces a character mask. That is, it identifies which parts of the document are real English text, and which parts are LaTeX code. From there, I can just mark all the LaTeX sections as “unlintable” and thus undeserving of Harper's discerning gaze.</p>
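<p>To make the character-mask idea concrete, here is a minimal sketch in Rust. To be clear, this is a toy illustration of the approach, not the code from the PR: it only handles backslash commands and inline <code>$...$</code> math, while the real implementation deals with far more of LaTeX.</p>

```rust
/// Produce a mask over the input's characters: `true` means plain English
/// text worth linting, `false` means LaTeX markup. This toy version only
/// handles backslash commands and inline `$...$` math; it ignores command
/// arguments, environments, and comments entirely.
fn latex_mask(text: &str) -> Vec<bool> {
    let chars: Vec<char> = text.chars().collect();
    let mut mask = vec![true; chars.len()];
    let mut in_math = false;
    let mut i = 0;
    while i < chars.len() {
        match chars[i] {
            '$' => {
                // A dollar sign toggles inline math mode and is itself markup.
                in_math = !in_math;
                mask[i] = false;
                i += 1;
            }
            '\\' => {
                // A command: the backslash plus the following letters.
                mask[i] = false;
                i += 1;
                while i < chars.len() && chars[i].is_alphabetic() {
                    mask[i] = false;
                    i += 1;
                }
            }
            _ => {
                if in_math {
                    mask[i] = false;
                }
                i += 1;
            }
        }
    }
    mask
}
```

<p>Everything the mask marks <code>false</code> can then be reported as unlintable, and the remaining English spans are checked as usual.</p>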
<p>The curious can <a href="https://github.com/Automattic/harper/pull/2689">check out my PR</a>.</p>
<h2>What's Next</h2>
<p>While we're simultaneously working on rolling out our new Weirpack system and our small language model, I plan to work with some of our undergraduate STEM student users. I find it quite likely there's a use‑case for our LaTeX and Typst support that I'm not considering. It's a huge market, and I'm excited to work with those in it.</p>
<h2>A Quick Correction</h2>
<p>A few days after writing this post, I found evidence that LanguageTool supports an older version of the Overleaf editor.
While it doesn't seem to support the latest version, this new information makes my previous statement that it "doesn't support sites like Overleaf in any way" false.
I apologize for the misinformation.</p>]]></description>
            <content:encoded><![CDATA[<h1>LaTeX Support Is Coming to Harper</h1>
<p>It's been a long time coming, which is why I'm thrilled to say that Harper will soon support LaTeX. This is something our users have been pleading for since the beginning of the project. LaTeX support feels like a feature that is comparatively low‑effort, but unlocks a massive new consumer segment with little existing competition. It has the potential to be a huge boon to our user base and our offering overall.</p>
<p>For context, <a href="https://en.wikipedia.org/wiki/LaTeX">LaTeX</a> is a document markup language (similar in soul to Markdown), primarily used by STEM professions and students. It's extraordinarily good at formatting and rendering complex mathematical equations and chemical formulas.</p>
<h2>The Market</h2>
<p>Neither Grammarly nor LanguageTool supports LaTeX documents, yet sites like Overleaf (the most popular LaTeX editor) regularly <a href="https://www.overleaf.com/blog/nine-years-nine-million-users">see in excess of 400,000 daily active users</a>. That's an enormous market of people who tend to be technical and who care about their privacy. Since the two most popular grammar checkers (Grammarly and LanguageTool) don't support sites like Overleaf in any way, we don't have much competition to worry about.</p>
<p>The story is similar (albeit at a smaller scale) for Typst, another document markup language. We haven't even merged support for Typst yet, and we're already seeing people pick up Harper explicitly for use in the Typst online editor.</p>
<h2>Previous Attempts</h2>
<p>On two separate occasions, members of the Harper community have opened (and subsequently closed) pull requests which tried to add LaTeX support to Harper. I've put links to that work below for posterity.</p>
<ul>
<li><a href="https://github.com/Automattic/harper/pull/100">PR by Grant Lemons</a></li>
<li><a href="https://github.com/Automattic/harper/pull/2073">PR by Shreyas Minocha</a></li>
</ul>
<p>The fundamental problem with LaTeX is that its underlying language, TeX, is not regular. In order to parse the language, you need to actually run it. This is why LaTeX documents can take several full minutes to compile and render to a PDF.</p>
<p>Rather than “correctly” read and run the TeX code contained within LaTeX documents, these previous attempts (including mine, which we will discuss in a moment) chose to treat LaTeX commands as keywords in a more abstract language. Some edge‑cases will naturally arise when we ignore the lowest level of abstraction, but doing so allows us to parse most real documents with far less effort.</p>
<h2>Implementation Details</h2>
<p>It was very important to me that our LaTeX support was fast, small, and <strong>could be compiled to run in the browser</strong>. That last bit is critical, because previous attempts (by the community) to add LaTeX support tried to use Tree‑sitter, which cannot be easily compiled to run in the browser. Almost all LaTeX users, including the vast majority of STEM undergraduate and graduate students, use Overleaf, an online LaTeX compiler and editor. Developing LaTeX support without making it possible to use Overleaf would be like developing a text editor that can't be used with a keyboard. Simply illogical.</p>
<p>To make this happen, I wrote my parser from scratch in Rust. Actually, calling it a parser would be an insult to all of parser‑kind. My solution produces a character mask. That is, it identifies which parts of the document are real English text, and which parts are LaTeX code. From there, I can just mark all the LaTeX sections as “unlintable” and thus undeserving of Harper's discerning gaze.</p>
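<p>To make the character-mask idea concrete, here is a minimal sketch in Rust. To be clear, this is a toy illustration of the approach, not the code from the PR: it only handles backslash commands and inline <code>$...$</code> math, while the real implementation deals with far more of LaTeX.</p>

```rust
/// Produce a mask over the input's characters: `true` means plain English
/// text worth linting, `false` means LaTeX markup. This toy version only
/// handles backslash commands and inline `$...$` math; it ignores command
/// arguments, environments, and comments entirely.
fn latex_mask(text: &str) -> Vec<bool> {
    let chars: Vec<char> = text.chars().collect();
    let mut mask = vec![true; chars.len()];
    let mut in_math = false;
    let mut i = 0;
    while i < chars.len() {
        match chars[i] {
            '$' => {
                // A dollar sign toggles inline math mode and is itself markup.
                in_math = !in_math;
                mask[i] = false;
                i += 1;
            }
            '\\' => {
                // A command: the backslash plus the following letters.
                mask[i] = false;
                i += 1;
                while i < chars.len() && chars[i].is_alphabetic() {
                    mask[i] = false;
                    i += 1;
                }
            }
            _ => {
                if in_math {
                    mask[i] = false;
                }
                i += 1;
            }
        }
    }
    mask
}
```

<p>Everything the mask marks <code>false</code> can then be reported as unlintable, and the remaining English spans are checked as usual.</p>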
<p>The curious can <a href="https://github.com/Automattic/harper/pull/2689">check out my PR</a>.</p>
<h2>What's Next</h2>
<p>While we're simultaneously working on rolling out our new Weirpack system and our small language model, I plan to work with some of our undergraduate STEM student users. I find it quite likely there's a use‑case for our LaTeX and Typst support that I'm not considering. It's a huge market, and I'm excited to work with those in it.</p>
<h2>A Quick Correction</h2>
<p>A few days after writing this post, I found evidence that LanguageTool supports an older version of the Overleaf editor.
While it doesn't seem to support the latest version, this new information makes my previous statement that it "doesn't support sites like Overleaf in any way" false.
I apologize for the misinformation.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Software That Is Good for Humans]]></title>
            <link>https://elijahpotter.dev/articles/building_software_that_is_good_for_humans</link>
            <guid>https://elijahpotter.dev/articles/building_software_that_is_good_for_humans</guid>
            <pubDate>Mon, 09 Feb 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Building Software That Is Good for Humans</h1>
<p>Earlier today, <a href="https://digital-strategy.ec.europa.eu/en/news/commission-preliminarily-finds-tiktoks-addictive-design-breach-digital-services-act">the European Commission released a report</a> detailing their initial findings that TikTok's core recommendation algorithm is in breach of the Digital Services Act.
This is only the most recent episode in the extended saga where Western governments try to regulate the Chinese company.
The various reasons and the methods they employ in this regulation are far outside of my domain of expertise, but the report got me thinking.
Why does so much of the software we use feel harmful?</p>
<p>There are few people who can report that an interaction with Microsoft Word, Teams, or TikTok has left them feeling empowered, healthier, or happier.
On the other hand, I can think of plenty of software that has left me feeling great.
What is that?
How can we make sure our software sits in the latter group?</p>
<h2>Traits of Offensive Software</h2>
<p>There are a few fundamental things that I've found make using certain software feel like a chore.
Software feels "icky" when it is slow or unpredictable, or when it gets in the way of the meaningful pursuits in life.</p>
<h3>It Is Slow</h3>
<p>Computers are fast.
People understand that.
So if they're left waiting for a page to load or a process to finish, they feel frustrated.</p>
<p>Slow software makes it difficult to iterate and experiment.
Imagine if WordPress took 20 minutes to publish a post, only to discover afterward that you left a typo in the title.
That would be pretty frustrating, no?</p>
<p>Steve Jobs imagined Apple Computers to be "bicycles for the mind."
A bicycle whose chain hitched or whose steering lagged would not be very helpful. It might not get you to your destination at all.</p>
<p>I think that this idea of slowness is especially important in regards to AI agents.
If an assistant takes even a few seconds longer than strictly necessary to answer, and it's messaged frequently (say, to iterate on a blog post), users can quickly get tired or "brainrotted", as the kids say.</p>
<h3>It Is Random</h3>
<p>If some software works <em>sometimes</em>, but not always, there's a good chance it's addictive.
This presents most obviously in apps like TikTok, but it also appears in more insidious places.
The very nature of large language model inference involves randomness.</p>
<p>Fundamentally, this form of addiction comes from a quirk of human psychology.
Randomly applied rewards (the software <em>sometimes</em> works, but not always) are a form of <a href="https://en.wikipedia.org/wiki/Operant_conditioning">operant conditioning</a>.
It feels good to roll the dice, which makes us want to roll it more often.
Over time, rolling the dice starts to be less of a choice, and more of a habit.</p>
<p>When someone chooses to use software not because it's genuinely helpful, but because it feels good to roll the dice, a sense of agency is lost.</p>
<p>In my experience, Microsoft Word is like this, as are most agentic coding tools and image generators.</p>
<h3>It Distracts from That Which Makes Life Meaningful</h3>
<p>When software is slow, or otherwise removes your agency with variable reward systems, it can start to chip away at your ability to make reasonable decisions on where to spend your time.
People who use "social" media sites like Facebook or Instagram can attest to this.
They simply don't feel like they have agency over their own actions anymore.
They can't stop using software they actively hate.</p>
<h2>Our Goals</h2>
<p>I think that one of <a href="https://automattic.com/">Automattic's</a> greatest strengths is that none of our software does any one of these things.
I remember the first time I used Pocket Casts: I was thrilled.
It was easily the best podcast software I had ever used.
The same thing happened with Day One and WordPress.com.</p>
<p>As we look to building Harper, I think it's important that we remember these core principles.
Our software should be fast, predictable, and support our customers in finding meaning in their lives.</p>]]></description>
            <content:encoded><![CDATA[<h1>Building Software That Is Good for Humans</h1>
<p>Earlier today, <a href="https://digital-strategy.ec.europa.eu/en/news/commission-preliminarily-finds-tiktoks-addictive-design-breach-digital-services-act">the European Commission released a report</a> detailing their initial findings that TikTok's core recommendation algorithm is in breach of the Digital Services Act.
This is only the most recent episode in the extended saga where Western governments try to regulate the Chinese company.
The various reasons and the methods they employ in this regulation are far outside of my domain of expertise, but the report got me thinking.
Why does so much of the software we use feel harmful?</p>
<p>There are few people who can report that an interaction with Microsoft Word, Teams, or TikTok has left them feeling empowered, healthier, or happier.
On the other hand, I can think of plenty of software that has left me feeling great.
What is that?
How can we make sure our software sits in the latter group?</p>
<h2>Traits of Offensive Software</h2>
<p>There are a few fundamental things that I've found make using certain software feel like a chore.
Software feels "icky" when it is slow or unpredictable, or when it gets in the way of the meaningful pursuits in life.</p>
<h3>It Is Slow</h3>
<p>Computers are fast.
People understand that.
So if they're left waiting for a page to load or a process to finish, they feel frustrated.</p>
<p>Slow software makes it difficult to iterate and experiment.
Imagine if WordPress took 20 minutes to publish a post, only to discover afterward that you left a typo in the title.
That would be pretty frustrating, no?</p>
<p>Steve Jobs imagined Apple Computers to be "bicycles for the mind."
A bicycle whose chain hitched or whose steering lagged would not be very helpful. It might not get you to your destination at all.</p>
<p>I think that this idea of slowness is especially important in regards to AI agents.
If an assistant takes even a few seconds longer than strictly necessary to answer, and it's messaged frequently (say, to iterate on a blog post), users can quickly get tired or "brainrotted", as the kids say.</p>
<h3>It Is Random</h3>
<p>If some software works <em>sometimes</em>, but not always, there's a good chance it's addictive.
This presents most obviously in apps like TikTok, but it also appears in more insidious places.
The very nature of large language model inference involves randomness.</p>
<p>Fundamentally, this form of addiction comes from a quirk of human psychology.
Randomly applied rewards (the software <em>sometimes</em> works, but not always) are a form of <a href="https://en.wikipedia.org/wiki/Operant_conditioning">operant conditioning</a>.
It feels good to roll the dice, which makes us want to roll it more often.
Over time, rolling the dice starts to be less of a choice, and more of a habit.</p>
<p>When someone chooses to use software not because it's genuinely helpful, but because it feels good to roll the dice, a sense of agency is lost.</p>
<p>In my experience, Microsoft Word is like this, as are most agentic coding tools and image generators.</p>
<h3>It Distracts from That Which Makes Life Meaningful</h3>
<p>When software is slow, or otherwise removes your agency with variable reward systems, it can start to chip away at your ability to make reasonable decisions on where to spend your time.
People who use "social" media sites like Facebook or Instagram can attest to this.
They simply don't feel like they have agency over their own actions anymore.
They can't stop using software they actively hate.</p>
<h2>Our Goals</h2>
<p>I think that one of <a href="https://automattic.com/">Automattic's</a> greatest strengths is that none of our software does any one of these things.
I remember the first time I used Pocket Casts: I was thrilled.
It was easily the best podcast software I had ever used.
The same thing happened with Day One and WordPress.com.</p>
<p>As we look to building Harper, I think it's important that we remember these core principles.
Our software should be fast, predictable, and support our customers in finding meaning in their lives.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Training a Small Language Model]]></title>
            <link>https://elijahpotter.dev/articles/training_a_small_language_model</link>
            <guid>https://elijahpotter.dev/articles/training_a_small_language_model</guid>
            <pubDate>Tue, 03 Feb 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Training a Small Language Model</h1>
<img src="/images/blue_bird.webp" alt="A photo taken by myself of a blue bird.">
<blockquote>
<p>TL;DR: I've built and trained an extremely small language model from scratch, specifically designed for short-form sentence rewriting in highly resource-constrained environments.</p>
</blockquote>
<p>In <a href="./finding_the_active_voice">my last post</a>, I discussed how I plan to tackle the difficult problem of helping <a href="https://writewithharper.com">Harper's</a> users take advantage of the active voice.
The active voice is seen as more confident and authoritative, which is often desired in professions like customer service.
I have experience in that area, which is how I know the extent to which help with the active voice can make the job easier.</p>
<p>Naturally, we would only want to provide help with the active voice when specifically requested by the user.
That means it's an opt-in feature that isn't enabled by default.
If we succeed in implementing the functionality, and people seem to like it, the Harper project can reconsider this stance.
For now, the plan is for it to be disabled by default.</p>
<p>As I mentioned in my last post, the state-of-the-art method for converting from the passive voice to the active voice is to use an autoregressive transformer model.
This is the same fundamental technology behind apps like Google Translate and ChatGPT.
While the actual model I intend to use for this feature is similar in nature to those used in these apps, my implementation will be entirely different in scale.</p>
<p>You see, since our domain is so limited in scope, we can use an extremely small model.
Google's smallest T5 model is a great point of reference.
A quantized copy of that model runs about 60 megabytes.
I believe we can train our own transformer model from scratch and pack it in an unquantized container measuring no larger than 30 megabytes.</p>
<p>Why do I believe this to be possible?
Because I've done it.</p>
<h2>Our Model Architecture</h2>
<p>Over the past few days, I've implemented a basic language model in Rust using Burn.
For the curious, the code is <a href="https://github.com/elijah-potter/lm">open source</a>.
It's pretty messy, so please don't look at the commit history.
Once I have a solid proof-of-concept working, I'll polish it up and find a good home for it in the main Harper repository.</p>
<p>The key here is that our use-case only needs a few hundred characters of context.
Since the computational cost of an LLM scales roughly with the square of the context size, we can train highly effective models ourselves that are small and can run anywhere.
If we want to provide additional information to the model, such as style or domain, we can supplement this context with custom embeddings.</p>
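<p>To make that arithmetic concrete, here's the back-of-the-envelope version of the claim, treating attention cost as exactly quadratic and deliberately ignoring the per-token feed-forward work (which scales linearly):</p>

```rust
/// Relative self-attention cost under the quadratic approximation used in
/// this post. The per-token feed-forward work, which scales linearly with
/// context length, is deliberately ignored here.
fn relative_attention_cost(context_len: u64) -> u64 {
    context_len * context_len
}

fn main() {
    // A few hundred characters of context vs. a typical LLM window:
    // 16x the length means roughly 16^2 = 256x the attention work.
    let ratio = relative_attention_cost(8192) / relative_attention_cost(512);
    println!("an 8192-token window costs ~{ratio}x a 512-token one");
}
```

<p>That factor of hundreds is exactly what a few-hundred-character context buys us.</p>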
<p>Using a small model allows us to maintain our privacy guarantees without significant capital expense.
In fact, once we've trained the models, they're free to run indefinitely.</p>
<p>I've chosen to use a character-level tokenizer.
That means that each character is its own token, and the context size of the model is equivalent to the number of characters it can read at once.
The hope here is that it will allow us to minimize Harper's final binary size and give the model greater freedom to change the inflection of words however it pleases.</p>
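<p>As an illustration of the idea (a standalone sketch, not the actual tokenizer used here), a character-level tokenizer fits in a few lines of Rust:</p>

```rust
use std::collections::HashMap;

/// A character-level tokenizer: every distinct character in the training
/// corpus becomes its own token ID. The vocabulary stays tiny (dozens of
/// entries for English), which keeps the embedding table, and therefore
/// the model itself, small.
struct CharTokenizer {
    char_to_id: HashMap<char, u32>,
    id_to_char: Vec<char>,
}

impl CharTokenizer {
    /// Build the vocabulary from whatever text the model will train on.
    fn new(corpus: &str) -> Self {
        let mut char_to_id = HashMap::new();
        let mut id_to_char = Vec::new();
        for c in corpus.chars() {
            char_to_id.entry(c).or_insert_with(|| {
                id_to_char.push(c);
                (id_to_char.len() - 1) as u32
            });
        }
        Self { char_to_id, id_to_char }
    }

    /// One token per character; unknown characters are silently dropped.
    fn encode(&self, text: &str) -> Vec<u32> {
        text.chars()
            .filter_map(|c| self.char_to_id.get(&c).copied())
            .collect()
    }

    fn decode(&self, ids: &[u32]) -> String {
        ids.iter().map(|&id| self.id_to_char[id as usize]).collect()
    }
}
```

<p>With this scheme, the model's context length and the number of characters it can read at once are the same number, and the embedding table for a vocabulary this small is almost free.</p>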
<p>For now, here are some simple sentences generated with the basic model I trained for just under an hour.
It's pretty amazing what can be done with just under a dollar's worth of computing power.</p>
<pre><code class="hljs language-plaintext">The forest rested in a speed of the night.
</code></pre>
<p>Feels... introspective, no?</p>
<pre><code class="hljs language-plaintext">It was the today of the world.
</code></pre>
<h2>Moving Forward</h2>
<p>My focus this week was to get a basic autoregressive model trained on generic English text.
My focus starting next week will be to fine-tune this model for our sentence rewriting task.
Once we have a very solid base model, I see no reason we couldn't fine-tune similar models for other features, like changing tone or improving clarity.</p>
<p>My current implementation, while fast, could also use some improvement.
To that end, I'd also like to implement a KV cache.
That will allow us to scale up to larger context sizes without a drop in quality or speed.</p>
<p>P.S. Today is my birthday! I'm so grateful for the years I've been on this earth, and I can't wait for more. Thanks for reading! Here's to 21.</p>]]></description>
            <content:encoded><![CDATA[<h1>Training a Small Language Model</h1>
<img src="/images/blue_bird.webp" alt="A photo taken by myself of a blue bird.">
<blockquote>
<p>TL;DR: I've built and trained an extremely small language model from scratch, specifically designed for short-form sentence rewriting in highly resource-constrained environments.</p>
</blockquote>
<p>In <a href="./finding_the_active_voice">my last post</a>, I discussed how I plan to tackle the difficult problem of helping <a href="https://writewithharper.com">Harper's</a> users take advantage of the active voice.
The active voice is seen as more confident and authoritative, which is often desired in professions like customer service.
I have experience in that area, which is how I know the extent to which help with the active voice can make the job easier.</p>
<p>Naturally, we would only want to provide help with the active voice when specifically requested by the user.
That means it's an opt-in feature that isn't enabled by default.
If we succeed in implementing the functionality, and people seem to like it, the Harper project can reconsider this stance.
For now, the plan is for it to be disabled by default.</p>
<p>As I mentioned in my last post, the state-of-the-art method for converting from the passive voice to the active voice is to use an autoregressive transformer model.
This is the same fundamental technology behind apps like Google Translate and ChatGPT.
While the actual model I intend to use for this feature is similar in nature to those used in these apps, my implementation will be entirely different in scale.</p>
<p>You see, since our domain is so limited in scope, we can use an extremely small model.
Google's smallest T5 model is a great point of reference.
A quantized copy of that model runs about 60 megabytes.
I believe we can train our own transformer model from scratch and pack it in an unquantized container measuring no larger than 30 megabytes.</p>
<p>Why do I believe this to be possible?
Because I've done it.</p>
<h2>Our Model Architecture</h2>
<p>Over the past few days, I've implemented a basic language model in Rust using Burn.
For the curious, the code is <a href="https://github.com/elijah-potter/lm">open source</a>.
It's pretty messy, so please don't look at the commit history.
Once I have a solid proof-of-concept working, I'll polish it up and find a good home for it in the main Harper repository.</p>
<p>The key here is that our use-case only needs a few hundred characters of context.
Since the computational cost of an LLM scales roughly with the square of the context size, we can train highly effective models ourselves that are small and can run anywhere.
If we want to provide additional information to the model, such as style or domain, we can supplement this context with custom embeddings.</p>
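<p>To make that arithmetic concrete, here's the back-of-the-envelope version of the claim, treating attention cost as exactly quadratic and deliberately ignoring the per-token feed-forward work (which scales linearly):</p>

```rust
/// Relative self-attention cost under the quadratic approximation used in
/// this post. The per-token feed-forward work, which scales linearly with
/// context length, is deliberately ignored here.
fn relative_attention_cost(context_len: u64) -> u64 {
    context_len * context_len
}

fn main() {
    // A few hundred characters of context vs. a typical LLM window:
    // 16x the length means roughly 16^2 = 256x the attention work.
    let ratio = relative_attention_cost(8192) / relative_attention_cost(512);
    println!("an 8192-token window costs ~{ratio}x a 512-token one");
}
```

<p>That factor of hundreds is exactly what a few-hundred-character context buys us.</p>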
<p>Using a small model allows us to maintain our privacy guarantees without significant capital expense.
In fact, once we've trained the models, they're free to run indefinitely.</p>
<p>I've chosen to use a character-level tokenizer.
That means that each character is its own token, and the context size of the model is equivalent to the number of characters it can read at once.
The hope here is that it will allow us to minimize Harper's final binary size and give the model greater freedom to change the inflection of words however it pleases.</p>
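<p>As an illustration of the idea (a standalone sketch, not the actual tokenizer used here), a character-level tokenizer fits in a few lines of Rust:</p>

```rust
use std::collections::HashMap;

/// A character-level tokenizer: every distinct character in the training
/// corpus becomes its own token ID. The vocabulary stays tiny (dozens of
/// entries for English), which keeps the embedding table, and therefore
/// the model itself, small.
struct CharTokenizer {
    char_to_id: HashMap<char, u32>,
    id_to_char: Vec<char>,
}

impl CharTokenizer {
    /// Build the vocabulary from whatever text the model will train on.
    fn new(corpus: &str) -> Self {
        let mut char_to_id = HashMap::new();
        let mut id_to_char = Vec::new();
        for c in corpus.chars() {
            char_to_id.entry(c).or_insert_with(|| {
                id_to_char.push(c);
                (id_to_char.len() - 1) as u32
            });
        }
        Self { char_to_id, id_to_char }
    }

    /// One token per character; unknown characters are silently dropped.
    fn encode(&self, text: &str) -> Vec<u32> {
        text.chars()
            .filter_map(|c| self.char_to_id.get(&c).copied())
            .collect()
    }

    fn decode(&self, ids: &[u32]) -> String {
        ids.iter().map(|&id| self.id_to_char[id as usize]).collect()
    }
}
```

<p>With this scheme, the model's context length and the number of characters it can read at once are the same number, and the embedding table for a vocabulary this small is almost free.</p>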
<p>For now, here are some simple sentences generated with the basic model I trained for just under an hour.
It's pretty amazing what can be done with just under a dollar's worth of computing power.</p>
<pre><code class="hljs language-plaintext">The forest rested in a speed of the night.
</code></pre>
<p>Feels... introspective, no?</p>
<pre><code class="hljs language-plaintext">It was the today of the world.
</code></pre>
<h2>Moving Forward</h2>
<p>My focus this week was to get a basic autoregressive model trained on generic English text.
My focus starting next week will be to fine-tune this model for our sentence rewriting task.
Once we have a very solid base model, I see no reason we couldn't fine-tune similar models for other features, like changing tone or improving clarity.</p>
<p>My current implementation, while fast, could also use some improvement.
To that end, I'd also like to implement a KV cache.
That will allow us to scale up to larger context sizes without a drop in quality or speed.</p>
<p>P.S. Today is my birthday! I'm so grateful for the years I've been on this earth, and I can't wait for more. Thanks for reading! Here's to 21.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/blue_bird.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Writing in Visual Studio Code]]></title>
            <link>https://elijahpotter.dev/articles/writing_in_visual_studio_code</link>
            <guid>https://elijahpotter.dev/articles/writing_in_visual_studio_code</guid>
            <pubDate>Tue, 27 Jan 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Writing in Visual Studio Code</h1>
<p>I <a href="./my_writing_environment_as_a_software_engineer">write this blog in Neovim</a>, and I'm aware of a good number of other nerds who do the same.
I suppose I never considered that people might use other code editors for writing prose.
In fact, it seems that there are a nontrivial number of people using <a href="https://chriswiegman.com/2026/01/harper-my-new-grammar-checker/">Harper in Visual Studio Code</a>.
They seem to be using Harper mostly for Markdown and comments in code, but I'd love to get <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> working as well.</p>]]></description>
            <content:encoded><![CDATA[<h1>Writing in Visual Studio Code</h1>
<p>I <a href="./my_writing_environment_as_a_software_engineer">write this blog in Neovim</a>, and I'm aware of a good number of other nerds who do the same.
I suppose I never considered that people might use other code editors for writing prose.
In fact, it seems that there are a nontrivial number of people using <a href="https://chriswiegman.com/2026/01/harper-my-new-grammar-checker/">Harper in Visual Studio Code</a>.
They seem to be using Harper mostly for Markdown and comments in code, but I'd love to get <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> working as well.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Finding the Active Voice]]></title>
            <link>https://elijahpotter.dev/articles/finding_the_active_voice</link>
            <guid>https://elijahpotter.dev/articles/finding_the_active_voice</guid>
            <pubDate>Mon, 26 Jan 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Finding the Active Voice</h1>
<p>Last week, Harper hit a stroke of luck.
It was featured on <a href="https://www.makeuseof.com/stopped-paying-for-grammarly-once-found-free-open-source-alternative/">MakeUseOf</a>.
The downstream social media posts collectively garnered nearly 300,000 views and drove a significant amount of traffic to our site.
This all happened in the wake of LanguageTool shutting down the free edition of their software.
These two events compound to bring more attention than ever to Harper — which is amazing.
It means our mission and values are resonating with people, which is always a good thing.</p>
<p>Reading the MUO article, we hear a lot of great things about Harper.
We also hear that "it skips many of the premium bells and whistles of Grammarly".
The author goes on to explain that the privacy Harper provides more than makes up for any missing premium features, but the point is clear: those features are deeply desired.</p>
<p>So, the question becomes: Which of Grammarly's features should we work on first, and how?</p>
<h2>The Active Voice</h2>
<p>I propose that we should focus first on helping our users find <a href="https://en.wikipedia.org/wiki/Active_voice">the active voice.</a>
For context, the active voice is the style of writing where the subject of the verb in a clause is the doer of the action.
This is in contrast to the passive voice, which is where the subject is the receiver of the action.
For example, the sentence "The postal carrier was bitten by the dog" is written in the passive voice, while the equivalent sentence "The dog bit the postal carrier" is written in the active voice.</p>
<p>Text written in the active voice is commonly viewed as more authoritative, confident, and easier to understand.
Being able to help users use the active voice is one of the most commonly requested features in Harper, and including the feature would be a huge step towards competing directly with Grammarly Premium.</p>
<p>How should we go about helping our users with their active voice?</p>
<h3>How It Could Be Done</h3>
<p>I spoke briefly with <a href="https://ma.tt/">Matt</a>, and we agreed that a two-tier solution would be best.
A fast algorithm or model would <em>detect</em> instances of the passive voice, letting a larger, more computationally expensive model generate a modification in the active voice.</p>
<p>Fortunately, there is already extensive literature on the detection of the passive voice.
In particular, I found the <a href="https://psycnet.apa.org/record/2024-13824-002">PassivePy paper</a> stimulating.
In fact, we can implement their ideas quite easily using the <a href="https://writewithharper.com/docs/weir">Weir language</a> already baked into Harper.
I have done so in a private branch. It turned out to be ~20 lines of code.
That is pretty good bang for the buck!</p>
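<p>To make the detection heuristic concrete, here is a rough sketch in Python. It is <em>not</em> the Weir implementation (that lives in a private branch, and Weir's syntax isn't shown here); it only illustrates the core PassivePy-style pattern: a form of "to be" followed by a likely past participle.</p>

```python
import re

# Forms of "to be" that can head a passive construction.
BE_FORMS = r"\b(?:am|is|are|was|were|be|been|being)\b"

# A tiny sample of irregular past participles; a real detector would rely on
# part-of-speech tags rather than suffix checks and word lists.
IRREGULAR_PARTICIPLES = {"bitten", "written", "given", "taken", "seen", "done", "known"}

# Allow an optional "-ly" adverb between the auxiliary and the participle,
# e.g. "was clearly written".
PATTERN = re.compile(BE_FORMS + r"(?:\s+\w+ly)?\s+(\w+)", re.IGNORECASE)

def find_passive(sentence: str) -> list[str]:
    """Return candidate participles of passive constructions in `sentence`."""
    hits = []
    for match in PATTERN.finditer(sentence):
        word = match.group(1).lower()
        if word.endswith("ed") or word in IRREGULAR_PARTICIPLES:
            hits.append(word)
    return hits
```

<p>On the earlier example, <code>find_passive("The postal carrier was bitten by the dog.")</code> flags <code>bitten</code>, while the active rewrite produces no hits. A heuristic this crude will misfire on predicate adjectives like "was tired", which is exactly why the real rules want part-of-speech information.</p>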
<p>The second piece, the actual simplification of text and its conversion from the passive voice to the active voice, is a tad more complex.</p>
<p>Matt and I agree that it will require the use of a larger language model.
The trouble is that it cannot be <strong>too</strong> large.
Harper's shtick is that it is fast, private, and that everything runs directly on our users' devices.
That means whichever model we use for our style transfer will need to be relatively small.</p>
<p>I believe the best solution to this problem is to take an off-the-shelf model, like one of Google's T5 models, and fine-tune it for the specific types of style transfer we need.
These are relatively small models (quantized, they can fit into spaces under 65 megabytes) and they run quite quickly, even on older hardware that doesn't have access to matrix multiplication accelerators.
There is prior art for running this at 50 tok/s in Chrome <em>without WebGPU</em> on a single core.
The best part is that they're under the Apache-2.0 license!</p>
<h2>How This Fits in with the Weirpack Project</h2>
<p>These models are small, but they're not quite small enough to be a part of the standard distribution of Harper.
I believe this should be an opt-in feature, and the best way to do that is to expose the functionality via a Weirpack.
If you don't know what a Weirpack is, I highly suggest you read my <a href="https://elijahpotter.dev/articles/imagine-a-weir-marketplace">previous</a> <a href="https://elijahpotter.dev/articles/imagine-a-weir-studio">blog posts</a> on the subject.</p>
<p>Everyone who wants this additional functionality could just enable it in the marketplace.
This continues our goal to make Harper as customizable as our users want, while providing sensible defaults.</p>
<h2>What's Next?</h2>
<p>Once we have the system in place to detect and provide suggestions for the active voice, we will be prepared to do other kinds of transformation, like adjusting formality or tone.</p>
<p>I'm really excited about this project, and I can't wait to get started.</p>]]></description>
            <content:encoded><![CDATA[<h1>Finding the Active Voice</h1>
<p>Last week, Harper hit a stroke of luck.
It was featured on <a href="https://www.makeuseof.com/stopped-paying-for-grammarly-once-found-free-open-source-alternative/">MakeUseOf</a>.
The downstream social media posts collectively garnered nearly 300,000 views and drove a significant amount of traffic to our site.
This all happened in the wake of LanguageTool shutting down the free edition of their software.
These two events compound to bring more attention than ever to Harper — which is amazing.
It means our mission and values are resonating with people, which is always a good thing.</p>
<p>Reading the MUO article, we hear a lot of great things about Harper.
We also hear that "it skips many of the premium bells and whistles of Grammarly".
The author goes on to explain that the privacy Harper provides more than makes up for any missing premium features, but the point is clear: those features are deeply desired.</p>
<p>So, the question becomes: Which of Grammarly's features should we work on first, and how?</p>
<h2>The Active Voice</h2>
<p>I propose that we focus first on helping our users find <a href="https://en.wikipedia.org/wiki/Active_voice">the active voice</a>.
For context, the active voice is the style of writing in which the subject of a clause's verb is the doer of the action.
This is in contrast to the passive voice, where the subject is the receiver of the action.
For example, the sentence "The postal carrier was bitten by the dog" is in the passive voice, while the equivalent "The dog bit the postal carrier" is in the active voice.</p>
<p>Text written in the active voice is widely considered more authoritative, confident, and easier to understand.
Helping users write in the active voice is one of Harper's most commonly requested features, and adding it would be a huge step toward competing directly with Grammarly Premium.</p>
<p>How should we go about helping our users with their active voice?</p>
<h3>How It Could Be Done</h3>
<p>I spoke briefly with <a href="https://ma.tt/">Matt</a>, and we agreed that a two-tier solution would be best.
A fast algorithm or model would <em>detect</em> instances of the passive voice, letting a larger, more computationally expensive model generate a rewrite in the active voice.</p>
<p>Fortunately, there is already extensive literature on the detection of the passive voice.
In particular, I found the <a href="https://psycnet.apa.org/record/2024-13824-002">PassivePy paper</a> stimulating.
In fact, we can implement their ideas quite easily using the <a href="https://writewithharper.com/docs/weir">Weir language</a> already baked into Harper.
I have done so in a private branch. It turned out to be ~20 lines of code.
That is pretty good bang for the buck!</p>
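<p>To make the detection heuristic concrete, here is a rough sketch in Python. It is <em>not</em> the Weir implementation (that lives in a private branch, and Weir's syntax isn't shown here); it only illustrates the core PassivePy-style pattern: a form of "to be" followed by a likely past participle.</p>

```python
import re

# Forms of "to be" that can head a passive construction.
BE_FORMS = r"\b(?:am|is|are|was|were|be|been|being)\b"

# A tiny sample of irregular past participles; a real detector would rely on
# part-of-speech tags rather than suffix checks and word lists.
IRREGULAR_PARTICIPLES = {"bitten", "written", "given", "taken", "seen", "done", "known"}

# Allow an optional "-ly" adverb between the auxiliary and the participle,
# e.g. "was clearly written".
PATTERN = re.compile(BE_FORMS + r"(?:\s+\w+ly)?\s+(\w+)", re.IGNORECASE)

def find_passive(sentence: str) -> list[str]:
    """Return candidate participles of passive constructions in `sentence`."""
    hits = []
    for match in PATTERN.finditer(sentence):
        word = match.group(1).lower()
        if word.endswith("ed") or word in IRREGULAR_PARTICIPLES:
            hits.append(word)
    return hits
```

<p>On the earlier example, <code>find_passive("The postal carrier was bitten by the dog.")</code> flags <code>bitten</code>, while the active rewrite produces no hits. A heuristic this crude will misfire on predicate adjectives like "was tired", which is exactly why the real rules want part-of-speech information.</p>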
<p>The second piece, the actual simplification of text and its conversion from the passive voice to the active voice, is a tad more complex.</p>
<p>Matt and I agree that it will require the use of a larger language model.
The trouble is that it cannot be <strong>too</strong> large.
Harper's shtick is that it is fast, private, and that everything runs directly on our users' devices.
That means whichever model we use for our style transfer will need to be relatively small.</p>
<p>I believe the best solution to this problem is to take an off-the-shelf model, like one of Google's T5 models, and fine-tune it for the specific types of style transfer we need.
These are relatively small models (quantized, they can fit into spaces under 65 megabytes) and they run quite quickly, even on older hardware that doesn't have access to matrix multiplication accelerators.
There is prior art for running this at 50 tok/s in Chrome <em>without WebGPU</em> on a single core.
The best part is that they're under the Apache-2.0 license!</p>
<h2>How This Fits in with the Weirpack Project</h2>
<p>These models are small, but they're not quite small enough to be a part of the standard distribution of Harper.
I believe this should be an opt-in feature, and the best way to do that is to expose the functionality via a Weirpack.
If you don't know what a Weirpack is, I highly suggest you read my <a href="https://elijahpotter.dev/articles/imagine-a-weir-marketplace">previous</a> <a href="https://elijahpotter.dev/articles/imagine-a-weir-studio">blog posts</a> on the subject.</p>
<p>Everyone who wants this additional functionality could just enable it in the marketplace.
This continues our goal to make Harper as customizable as our users want, while providing sensible defaults.</p>
<h2>What's Next?</h2>
<p>Once we have the system in place to detect and provide suggestions for the active voice, we will be prepared to do other kinds of transformation, like adjusting formality or tone.</p>
<p>I'm really excited about this project, and I can't wait to get started.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Linkarzu: Harper Is Getting Better]]></title>
            <link>https://elijahpotter.dev/articles/linkarzu:_harper_is_getting_better</link>
            <guid>https://elijahpotter.dev/articles/linkarzu:_harper_is_getting_better</guid>
            <pubDate>Fri, 23 Jan 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Linkarzu: Harper Is Getting Better</h1>
<p>Linkarzu's recent video about Harper is a great primer on what it can do if you spend a ton of time in Neovim (like I do). It's true: Harper is getting better.</p>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/he8V6gj9IG0?si=mbw6q8m3jqi1qhSf" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>]]></description>
            <content:encoded><![CDATA[<h1>Linkarzu: Harper Is Getting Better</h1>
<p>Linkarzu's recent video about Harper is a great primer on what it can do if you spend a ton of time in Neovim (like I do). It's true: Harper is getting better.</p>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/he8V6gj9IG0?si=mbw6q8m3jqi1qhSf" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Imagine a Weir Studio]]></title>
            <link>https://elijahpotter.dev/articles/imagine_a_weir_studio</link>
            <guid>https://elijahpotter.dev/articles/imagine_a_weir_studio</guid>
            <pubDate>Fri, 16 Jan 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Imagine a Weir Studio</h1>
<p>This week, I received a message from someone working on a learning management system.
Apparently, they use <a href="https://www.npmjs.com/package/harper.js"><code>harper.js</code></a> to do spell and grammar checking on internal documents.
They wanted to be able to configure <a href="https://writewithharper.com">Harper</a> to catch a specific class of mistakes that are common to their workflow.
Looking through the documentation for <code>harper.js</code>, they couldn't see a way to do what they wanted.</p>
<p>Naturally, I directed them toward my recent work on Weir, an expressive programming language for locating errors in natural language.
For those just catching up, this post is a part of a series.
You shouldn't need to go back and read my previous posts, but I'll link them here anyhow.</p>
<ul>
<li><a href="./building_the_weir_language">Building The Weir Language</a></li>
<li><a href="./updates_on_the_weir_language">Updates on the Weir Language</a></li>
<li><a href="./generating_weir_code_with_LLMs">Generating Weir Code With LLMs</a></li>
<li><a href="./imagine_a_weir_marketplace">Imagine A Weir Marketplace</a></li>
</ul>
<p>Unfortunately, I had to inform them that Weir is still an experiment (although it's in the <code>master</code> branch and thus used by <code>nightly</code> users) and therefore isn't yet available through public channels.
That said, it's an experiment I'd like to make available to the public soon, alongside the Weir Marketplace.
Before I do so, I want to make it as easy as possible to author rules using Weir.
That's why I'm introducing the Weir Studio.</p>
<h2>The Weir Studio</h2>
<p>The Weir Studio comes from a pretty simple idea. We want to give people all the tooling necessary to build and distribute Weir rules without any upfront cost.
That means there should be no tools that need to be installed, no specific operating system setup, and no programming experience needed.</p>
<img src="/images/weir_studio_draft.png" alt="The Initial Version of the Weir Studio">
<p>The solutions to the first and second items are simple:
Just like any other component of Harper, we can compile Weir to WebAssembly and connect it up to a web application.
Using off-the-shelf components for terminal emulation, text editing, and virtual filesystems, we can present a meaningfully feature-rich development environment from inside the user's browser.</p>
<p>The last item is also straightforward.
I've <a href="./generating_weir_code_with_LLMs">demonstrated that LLMs are already terribly effective at writing Weir rules</a>.
By making that situation obvious and accessible, we can empower people to write and distribute their own Weirpacks (files containing one or more Weir rules) without any prior understanding of the language or even of basic programming principles.</p>
<p>The Weir Studio is <a href="https://github.com/Automattic/harper/pull/2491">still in progress</a>, but the initial versions are already proving extremely effective for <a href="https://github.com/Automattic/harper/pull/2506">some of my other work.</a></p>
<h2>Putting Power Back into the Hands of Users</h2>
<p>A common frustration with tools like Grammarly or QuillBot is that they lack customization, and thus are often unable to directly address users' needs.
By allowing users full access to the internals via Weir, we can help them help themselves while also encouraging people to contribute back to the project.
It's a win-win!</p>
<h2>What's Next?</h2>
<p>I'd like to wire this up to a marketplace as soon as possible (see what I did there?) to streamline the publishing process.
More critically, I've been informed that the inclusion of dictionary files directly into Weirpacks should be priority number one.
I'll likely start tackling that as soon as I have a spare moment.</p>
<p>If you have thoughts on where Harper or Weir is going, please don't hesitate to voice them.</p>
<hr>
<p>P.S. <a href="https://www.makeuseof.com/stopped-paying-for-grammarly-once-found-free-open-source-alternative/">A great piece was just published about Harper.</a></p>]]></description>
            <content:encoded><![CDATA[<h1>Imagine a Weir Studio</h1>
<p>This week, I received a message from someone working on a learning management system.
Apparently, they use <a href="https://www.npmjs.com/package/harper.js"><code>harper.js</code></a> to do spell and grammar checking on internal documents.
They wanted to be able to configure <a href="https://writewithharper.com">Harper</a> to catch a specific class of mistakes that are common to their workflow.
Looking through the documentation for <code>harper.js</code>, they couldn't see a way to do what they wanted.</p>
<p>Naturally, I directed them toward my recent work on Weir, an expressive programming language for locating errors in natural language.
For those just catching up, this post is a part of a series.
You shouldn't need to go back and read my previous posts, but I'll link them here anyhow.</p>
<ul>
<li><a href="./building_the_weir_language">Building The Weir Language</a></li>
<li><a href="./updates_on_the_weir_language">Updates on the Weir Language</a></li>
<li><a href="./generating_weir_code_with_LLMs">Generating Weir Code With LLMs</a></li>
<li><a href="./imagine_a_weir_marketplace">Imagine A Weir Marketplace</a></li>
</ul>
<p>Unfortunately, I had to inform them that Weir is still an experiment (although it's in the <code>master</code> branch and thus used by <code>nightly</code> users) and therefore isn't yet available through public channels.
That said, it's an experiment I'd like to make available to the public soon, alongside the Weir Marketplace.
Before I do so, I want to make it as easy as possible to author rules using Weir.
That's why I'm introducing the Weir Studio.</p>
<h2>The Weir Studio</h2>
<p>The Weir Studio comes from a pretty simple idea. We want to give people all the tooling necessary to build and distribute Weir rules without any upfront cost.
That means there should be no tools that need to be installed, no specific operating system setup, and no programming experience needed.</p>
<img src="/images/weir_studio_draft.png" alt="The Initial Version of the Weir Studio">
<p>The solutions to the first and second items are simple:
Just like any other component of Harper, we can compile Weir to WebAssembly and connect it up to a web application.
Using off-the-shelf components for terminal emulation, text editing, and virtual filesystems, we can present a meaningfully feature-rich development environment from inside the user's browser.</p>
<p>The last item is also straightforward.
I've <a href="./generating_weir_code_with_LLMs">demonstrated that LLMs are already terribly effective at writing Weir rules</a>.
By making that situation obvious and accessible, we can empower people to write and distribute their own Weirpacks (files containing one or more Weir rules) without any prior understanding of the language or even of basic programming principles.</p>
<p>The Weir Studio is <a href="https://github.com/Automattic/harper/pull/2491">still in progress</a>, but the initial versions are already proving extremely effective for <a href="https://github.com/Automattic/harper/pull/2506">some of my other work.</a></p>
<h2>Putting Power Back into the Hands of Users</h2>
<p>A common frustration with tools like Grammarly or QuillBot is that they lack customization, and thus are often unable to directly address users' needs.
By allowing users full access to the internals via Weir, we can help them help themselves while also encouraging people to contribute back to the project.
It's a win-win!</p>
<h2>What's Next?</h2>
<p>I'd like to wire this up to a marketplace as soon as possible (see what I did there?) to streamline the publishing process.
More critically, I've been informed that the inclusion of dictionary files directly into Weirpacks should be priority number one.
I'll likely start tackling that as soon as I have a spare moment.</p>
<p>If you have thoughts on where Harper or Weir is going, please don't hesitate to voice them.</p>
<hr>
<p>P.S. <a href="https://www.makeuseof.com/stopped-paying-for-grammarly-once-found-free-open-source-alternative/">A great piece was just published about Harper.</a></p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/weir_studio_draft.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Imagine a Weir Marketplace]]></title>
            <link>https://elijahpotter.dev/articles/imagine_a_weir_marketplace</link>
            <guid>https://elijahpotter.dev/articles/imagine_a_weir_marketplace</guid>
            <pubDate>Fri, 09 Jan 2026 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Imagine a Weir Marketplace</h1>
<blockquote>
<p>For context, I maintain <a href="https://writewithharper.com/">an open-source grammar checker by the name of Harper.</a>
This post is a kind of RFC for a potential improvement to the project's ergonomics.</p>
</blockquote>
<p>I've observed a high degree of specialization of language between academic fields.</p>
<p>Ecologists use bits of language that computer scientists don't use, and computer scientists use bits of language that ecologists don't use.
If we embrace this specialization through customization, we could gain an advantage over other grammar checkers.
That's all to say: We want to enable authors and academics to customize Harper to their specialized language.
For example, a chemist might want to enforce a set of rules for ensuring molecules are named correctly according to the IUPAC standard, and a science fiction author might want to enforce rules to make sure specific characters are addressed by the proper titles.
Weir is the solution to these problems.</p>
<p>In this post, I'll recap what Weir is, and outline a mechanism for distributing Weir rules to individual users that might take advantage of them.
I'd love your thoughts on how we can make it as impactful and interactive as possible.</p>
<p>In <a href="./building_the_weir_language">my initial post about Weir</a>, I discussed how most large organizations have style guides which define specific rules for their employees' grammar, capitalization, and formatting.
I proposed a programming language by the name of Weir, which could be used to clearly state these rules to be enforced by Harper.
I've personally found that LLMs are not great at enforcing strict grammatical guidelines, but they're excellent at translating them from an existing style guide into the Weir language.
See <a href="./generating_weir_code_with_LLMs">my recent experimentation for more details</a>.</p>
<p>Now that Weir has been merged, we can move on to the other problems: creation and distribution.
Today, I'd like to talk about distribution.</p>
<p>I've had several discussions in the last week with people who want to be able to customize Harper with their own rules.
This includes people who use Harper as a part of their <a href="https://en.wikipedia.org/wiki/Continuous_integration">continuous integration</a>, academics, and some engineers who work on learning management systems.
There is a need and desire for this today, and I want to address it as soon as possible.</p>
<h2>Distribution</h2>
<p>Once a collection of Weir rules has been drafted, how should it be shared?
There is a formatting piece, as well as an infrastructure piece.</p>
<h3>Weirpacks</h3>
<p>I propose a simple, but effective format for sharing collections of Weir rules.
Instances of this format should be called "Weirpacks" and thus have the file extension <code>.weirpack</code>.</p>
<p>Right now, the Weir rules baked into Harper are organized into a single directory, <code>weir_rules</code>, with each <code>.weir</code> file prefixed with the rule's name.
For example, the file containing the rule for fixing <code>Their</code> to <code>There</code> is <code>TheirToThere.weir</code>.
There is a vast quantity of these "curated" rules, and I don't think they should go anywhere.
I just think we can reuse the layout.</p>
<p>I believe the best strategy for organizing collections of Weir rules is to replicate what's already in the Harper repository, but inside a <a href="https://en.wikipedia.org/wiki/ZIP_(file_format)">ZIP archive</a>.
Even though these are ZIP files, which normally carry the suffix <code>.zip</code>, I think we should give them the <code>.weirpack</code> suffix to make their purpose clear.
The root directory of the archive will contain as many <code>.weir</code> files as is desired by the Weirpack author, in addition to a <code>.json</code> manifest file containing package metadata.</p>
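<p>As a sketch of what writing and reading this format could look like, here is some hypothetical Python using only the standard library. The manifest file name <code>manifest.json</code> is my assumption; the proposal only requires "a <code>.json</code> manifest file" at the archive root.</p>

```python
import json
import zipfile

def build_weirpack(rule_files: dict[str, str], manifest: dict, out_path: str) -> None:
    """Bundle Weir rule sources and a JSON manifest into a .weirpack archive.

    `rule_files` maps names like "SomeRule.weir" to rule source text.
    A .weirpack is an ordinary ZIP archive with everything at the root.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for name, source in rule_files.items():
            zf.writestr(name, source)

def read_manifest(pack_path: str) -> dict:
    """Pull the metadata back out of a .weirpack archive."""
    with zipfile.ZipFile(pack_path) as zf:
        return json.loads(zf.read("manifest.json"))
```

<p>Because it's just a ZIP archive underneath, any standard tooling can inspect or repack a Weirpack; only the suffix is special.</p>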
<h4>Metadata</h4>
<p>Since JSON is quite flexible, Weirpack authors can include whatever information they want.
I think certain fields, however, should be required.
Namely:</p>
<ul>
<li>The Author's Name</li>
<li>The Weirpack's version</li>
<li>A description of the Weirpack's purpose and usage</li>
<li>The Weirpack's license</li>
</ul>
<p>I think we can also recommend, but not require some additional helpful information:</p>
<ul>
<li>Keywords relevant to the rule's purpose</li>
<li>The Author's Website</li>
</ul>
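<p>Putting those fields together, a minimal manifest might look like the following. The field names are illustrative only; no schema has been fixed yet.</p>

```json
{
  "author": "Jane Doe",
  "website": "https://example.com",
  "version": "1.2.0",
  "description": "Rules enforcing IUPAC nomenclature in chemistry writing.",
  "license": "Apache-2.0",
  "keywords": ["chemistry", "IUPAC", "style-guide"]
}
```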
<h4>Loading Weirpacks</h4>
<p>Support for Weirpacks will be baked directly into <code>harper-core</code>, as well as <code>harper.js</code>.</p>
<p>The former will make it possible for our command-line programs, <code>harper-cli</code> and <code>harper-ls</code>, to load them at runtime.
In <code>harper-cli</code>, which is a debugging tool, it will be sufficient to expose a flag on our <code>lint</code> command to allow Weirpacks to be loaded.
In <code>harper-ls</code>, we need to update our config to allow Weirpacks to be loaded from a directory on the disk.
The default should be next to where the dictionaries are already loaded.</p>
<p>The latter will make it possible for our web integrations to load them when the user desires, either from disk or from our marketplace (more on that in a minute).
For both the Obsidian plugin, as well as the Chrome extension, we'll include a field in the settings menus for users to load Weirpacks from disk and into the linter.
Naturally, these will persist across sessions in the same way the other settings do.</p>
<p>Being able to load Weirpacks from disk is critical to allowing users to remain completely offline if they choose to be.
Additionally, loading custom Weirpacks without approval from a moderator is a prerequisite to a quick and enjoyable iteration loop.</p>
<h3>The Weir Marketplace</h3>
<p>When a user in a field with specialized language downloads Harper, we will naturally provide them with the essential rules that are common to the entirety of the English language.
We will also give them the option to download additional Weirpacks from the Weir Marketplace.
This will be a place, not unlike Obsidian's Community Plugins Marketplace, where user-submitted Weirpacks can be downloaded and installed quickly and easily.</p>
<img src="/images/obsidian_marketplace.png" alt="The Obsidian Community Plugins Marketplace">
<p>Imagine, if you will, that you're a JavaScript developer who has just downloaded Harper.
You start writing a blog post about your recent adventures with the latest JavaScript framework.
Unfortunately, Harper starts flagging swaths of your work as misspelled and filled with poor grammar.</p>
<p>Thankfully, Harper notices its mistake and offers to download a relevant Weirpack for programmers—one preset with a dictionary filled with technical language and rules to ensure proper use.
You download the Weirpack and continue writing with Harper now better equipped to handle the subject at hand.
Crisis averted.</p>
<h3>The Long View</h3>
<p>This idea of a self-contained Weirpack providing the logic and the datasets necessary to do grammar checking for a specialized subset of English is intoxicating.
Projecting forward, I can imagine a future where we can provide support for any number of natural languages, distributed via Weirpacks and supported by a community of contributors in a decentralized way, not dissimilar to WordPress' plugin marketplace.
That isn't for today, though.</p>]]></description>
            <content:encoded><![CDATA[<h1>Imagine a Weir Marketplace</h1>
<blockquote>
<p>For context, I maintain <a href="https://writewithharper.com/">an open-source grammar checker by the name of Harper.</a>
This post is a kind of RFC for a potential improvement to the project's ergonomics.</p>
</blockquote>
<p>I've observed a high degree of specialization of language between academic fields.</p>
<p>Ecologists use bits of language that computer scientists don't use, and computer scientists use bits of language that ecologists don't use.
If we embrace this specialization through customization, we could gain an advantage over other grammar checkers.
That's all to say: We want to enable authors and academics to customize Harper to their specialized language.
For example, a chemist might want to enforce a set of rules for ensuring molecules are named correctly according to the IUPAC standard, and a science fiction author might want to enforce rules to make sure specific characters are addressed by the proper titles.
Weir is the solution to these problems.</p>
<p>In this post, I'll recap what Weir is, and outline a mechanism for distributing Weir rules to individual users that might take advantage of them.
I'd love your thoughts on how we can make it as impactful and interactive as possible.</p>
<p>In <a href="./building_the_weir_language">my initial post about Weir</a>, I discussed how most large organizations have style guides which define specific rules for their employees' grammar, capitalization, and formatting.
I proposed a programming language by the name of Weir, which could be used to clearly state these rules to be enforced by Harper.
I've personally found that LLMs are not great at enforcing strict grammatical guidelines, but they're excellent at translating them from an existing style guide into the Weir language.
See <a href="./generating_weir_code_with_LLMs">my recent experimentation for more details</a>.</p>
<p>Now that Weir has been merged, we can move on to the other problems: creation and distribution.
Today, I'd like to talk about distribution.</p>
<p>I've had several discussions in the last week with people who want to be able to customize Harper with their own rules.
This includes people who use Harper as a part of their <a href="https://en.wikipedia.org/wiki/Continuous_integration">continuous integration</a>, academics, and some engineers who work on learning management systems.
There is a need and desire for this today, and I want to address it as soon as possible.</p>
<h2>Distribution</h2>
<p>Once a collection of Weir rules has been drafted, how should it be shared?
There is a formatting piece, as well as an infrastructure piece.</p>
<h3>Weirpacks</h3>
<p>I propose a simple, but effective format for sharing collections of Weir rules.
Instances of this format should be called "Weirpacks" and thus have the file extension <code>.weirpack</code>.</p>
<p>Right now, the Weir rules baked into Harper are organized into a single directory, <code>weir_rules</code>, with each <code>.weir</code> file prefixed with the rule's name.
For example, the file containing the rule for fixing <code>Their</code> to <code>There</code> is <code>TheirToThere.weir</code>.
There is a vast quantity of these "curated" rules, and I don't think they should go anywhere.
I just think we can reuse the layout.</p>
<p>I believe the best strategy for organizing collections of Weir rules is to replicate what's already in the Harper repository, but inside a <a href="https://en.wikipedia.org/wiki/ZIP_(file_format)">ZIP archive</a>.
Even though these are ZIP files, which normally carry the suffix <code>.zip</code>, I think we should give them the <code>.weirpack</code> suffix to make their purpose clear.
The root directory of the archive will contain as many <code>.weir</code> files as is desired by the Weirpack author, in addition to a <code>.json</code> manifest file containing package metadata.</p>
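<p>As a sketch of what writing and reading this format could look like, here is some hypothetical Python using only the standard library. The manifest file name <code>manifest.json</code> is my assumption; the proposal only requires "a <code>.json</code> manifest file" at the archive root.</p>

```python
import json
import zipfile

def build_weirpack(rule_files: dict[str, str], manifest: dict, out_path: str) -> None:
    """Bundle Weir rule sources and a JSON manifest into a .weirpack archive.

    `rule_files` maps names like "SomeRule.weir" to rule source text.
    A .weirpack is an ordinary ZIP archive with everything at the root.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        for name, source in rule_files.items():
            zf.writestr(name, source)

def read_manifest(pack_path: str) -> dict:
    """Pull the metadata back out of a .weirpack archive."""
    with zipfile.ZipFile(pack_path) as zf:
        return json.loads(zf.read("manifest.json"))
```

<p>Because it's just a ZIP archive underneath, any standard tooling can inspect or repack a Weirpack; only the suffix is special.</p>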
<h4>Metadata</h4>
<p>Since JSON is quite flexible, Weirpack authors can include whatever information they want.
I think certain fields, however, should be required.
Namely:</p>
<ul>
<li>The Author's Name</li>
<li>The Weirpack's version</li>
<li>A description of the Weirpack's purpose and usage</li>
<li>The Weirpack's license</li>
</ul>
<p>I think we can also recommend, but not require some additional helpful information:</p>
<ul>
<li>Keywords relevant to the rule's purpose</li>
<li>The Author's Website</li>
</ul>
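<p>Putting those fields together, a minimal manifest might look like the following. The field names are illustrative only; no schema has been fixed yet.</p>

```json
{
  "author": "Jane Doe",
  "website": "https://example.com",
  "version": "1.2.0",
  "description": "Rules enforcing IUPAC nomenclature in chemistry writing.",
  "license": "Apache-2.0",
  "keywords": ["chemistry", "IUPAC", "style-guide"]
}
```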
<h4>Loading Weirpacks</h4>
<p>Support for Weirpacks will be baked directly into <code>harper-core</code>, as well as <code>harper.js</code>.</p>
<p>The former will make it possible for our command-line programs, <code>harper-cli</code> and <code>harper-ls</code>, to load them at runtime.
In <code>harper-cli</code>, which is a debugging tool, it will be sufficient to expose a flag on our <code>lint</code> command to allow Weirpacks to be loaded.
In <code>harper-ls</code>, we need to update our config to allow Weirpacks to be loaded from a directory on the disk.
The default should be next to where the dictionaries are already loaded.</p>
<p>The latter will make it possible for our web integrations to load them when the user desires, either from disk or from our marketplace (more on that in a minute).
For both the Obsidian plugin, as well as the Chrome extension, we'll include a field in the settings menus for users to load Weirpacks from disk and into the linter.
Naturally, these will persist across sessions in the same way the other settings do.</p>
<p>Being able to load Weirpacks from disk is critical to allowing users to remain completely offline if they choose to be.
Additionally, loading custom Weirpacks without approval from a moderator is a prerequisite to a quick and enjoyable iteration loop.</p>
<h3>The Weir Marketplace</h3>
<p>When a user in a field with specialized language downloads Harper, we will naturally provide them with the essential rules that are common to the entirety of the English language.
We will also give them the option to download additional Weirpacks from the Weir Marketplace.
This will be a place, not unlike Obsidian's Community Plugins Marketplace, where user-submitted Weirpacks can be downloaded and installed quickly and easily.</p>
<img src="/images/obsidian_marketplace.png" alt="The Obsidian Community Plugins Marketplace">
<p>Imagine, if you will, that you're a JavaScript developer who has just downloaded Harper.
You start writing a blog post about your recent adventures with the latest JavaScript framework.
Unfortunately, Harper starts flagging swaths of your work as misspelled or grammatically incorrect.</p>
<p>Thankfully, Harper notices its mistake and offers to download a relevant Weirpack for programmers: one preloaded with a dictionary of technical language and rules to ensure its proper use.
You download the Weirpack and continue writing with Harper now better equipped to handle the subject at hand.
Crisis averted.</p>
<h3>The Long View</h3>
<p>This idea of a self-contained Weirpack providing the logic and the datasets necessary to do grammar checking for a specialized subset of English is intoxicating.
Projecting forward, I can imagine a future where we can provide support for any number of natural languages, distributed via Weirpacks and supported by a community of contributors in a decentralized way, not dissimilar to WordPress' plugin marketplace.
That isn't for today, though.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Someone Remixed Tatum]]></title>
            <link>https://elijahpotter.dev/articles/someone_remixed_tatum</link>
            <guid>https://elijahpotter.dev/articles/someone_remixed_tatum</guid>
            <pubDate>Tue, 30 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Someone Remixed Tatum</h1>
<p>A while back, I wrote a small tool, dubbed "Tatum", for rendering Markdown to HTML for use alongside Neovim.
I've continued to use it to this day, with a few tweaks here and there.</p>
<p>It looks like someone has taken advantage of Tatum's open-source license and remixed it into something even more capable.
I highly suggest you check out <a href="https://github.com/vihu/penview.nvim/"><code>penview.nvim</code></a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>Someone Remixed Tatum</h1>
<p>A while back, I wrote a small tool, dubbed "Tatum", for rendering Markdown to HTML for use alongside Neovim.
I've continued to use it to this day, with a few tweaks here and there.</p>
<p>It looks like someone has taken advantage of Tatum's open-source license and remixed it into something even more capable.
I highly suggest you check out <a href="https://github.com/vihu/penview.nvim/"><code>penview.nvim</code></a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Projects Using Harper]]></title>
            <link>https://elijahpotter.dev/articles/projects_using_harper</link>
            <guid>https://elijahpotter.dev/articles/projects_using_harper</guid>
            <pubDate>Tue, 23 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Some New Projects Are Using Harper</h1>
<p>I recently did some snooping (<a href="https://github.blog/news-insights/product-news/keep-all-your-packages-up-to-date-with-dependabot/">through Dependabot</a>) on what open source projects have been using Harper.
Since I last checked, there are some new ones.
I wonder if the recent additions have anything to do with <a href="./harper_turns_1.0_today">Harper going <code>1.0</code></a>.</p>
<ul>
<li><a href="https://stubby.io/">Stubby</a></li>
<li><a href="https://github.com/stencila/stencila">Stencila</a></li>
<li><a href="https://github.com/gherlint/gherlint">Gherlint</a></li>
<li><a href="https://github.com/walletbeat/walletbeat">walletbeat</a></li>
<li><a href="https://github.com/samedwardes/fixmyspelling">fixmyspelling</a></li>
</ul>
<p><a href="https://www.reddit.com/r/ObsidianMD/comments/1nfks80/oh_my_god_its_done_an_entire_book_written_in/">Someone also wrote a book using Harper.</a></p>]]></description>
            <content:encoded><![CDATA[<h1>Some New Projects Are Using Harper</h1>
<p>I recently did some snooping (<a href="https://github.blog/news-insights/product-news/keep-all-your-packages-up-to-date-with-dependabot/">through Dependabot</a>) on what open source projects have been using Harper.
Since I last checked, there are some new ones.
I wonder if the recent additions have anything to do with <a href="./harper_turns_1.0_today">Harper going <code>1.0</code></a>.</p>
<ul>
<li><a href="https://stubby.io/">Stubby</a></li>
<li><a href="https://github.com/stencila/stencila">Stencila</a></li>
<li><a href="https://github.com/gherlint/gherlint">Gherlint</a></li>
<li><a href="https://github.com/walletbeat/walletbeat">walletbeat</a></li>
<li><a href="https://github.com/samedwardes/fixmyspelling">fixmyspelling</a></li>
</ul>
<p><a href="https://www.reddit.com/r/ObsidianMD/comments/1nfks80/oh_my_god_its_done_an_entire_book_written_in/">Someone also wrote a book using Harper.</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Generating Weir Code with LLMs]]></title>
            <link>https://elijahpotter.dev/articles/generating_weir_code_with_LLMs</link>
            <guid>https://elijahpotter.dev/articles/generating_weir_code_with_LLMs</guid>
            <pubDate>Fri, 19 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Generating Weir Code with LLMs</h1>
<p>As you know, I've been working on a small programming language called Weir for generating corrections to natural language.
For the curious, I highly suggest reading my previous blog posts on the subject if you haven't already.</p>
<ul>
<li><a href="./building_the_weir_language">Building the Weir Language</a></li>
<li><a href="./updates_on_the_weir_language">Updates on the Weir Language</a></li>
</ul>
<p>In one of those previous blog posts, I speculated that an LLM might be quite effective at generating these rules, especially if they are given access to tools that can validate and run tests.
Today, while writing some public documentation for the language, I did a bit of experimentation.
I wanted to know: can an LLM simultaneously understand a <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSL's</a> syntax, handle the semantics of the English language, and validate its own work with tests?
More to the point: can it do a better job with a DSL than an established language like Rust?</p>
<p>For those who want to follow along at home, <a href="https://chatgpt.com/g/g-6945b952be148191a21d0189e0691a97-weir-writer">here's the GPT</a> I've been toying around with.</p>
<h3>Some Experimentation</h3>
<p>I often hear that one should avoid novel or exceedingly old programming languages when working with LLMs, simply because these languages aren't well represented in their training data.
Based on this hearsay, I didn't expect much going into this little experiment of mine.</p>
<p>First, I wondered if it could handle the trivial issue of correcting "as nauseam" to the correct form, "ad nauseam".
It's a common typo, and it should be a pretty simple Weir rule.</p>
<p>My minimal GPT spit this out:</p>
<pre><code class="hljs language-weir">expr main (as nauseam)

let message "Did you mean `ad nauseam`?"
let description "Corrects the common typo `as nauseam` to the proper Latin phrase `ad nauseam`."
let kind "Typo"
let becomes "ad nauseam"

test "He repeated the point as nauseam." "He repeated the point ad nauseam."
test "This argument has been made ad nauseam." "This argument has been made ad nauseam."
</code></pre>
<p>That looks pretty good!
All the syntax is correct, and it even included some (passing!) tests.
Claude Code or Codex could do this using our existing Rust infrastructure, but it would have taken several minutes to churn out a valid result.
GPT 5.2 Instant was able to do this in a matter of seconds.
Not to mention that it didn't need a copy of the entire Harper source code.</p>
<p>I wanted to try a problem that was a bit harder to solve, so I asked it to address the problem of double-negatives.
After thinking for a few minutes, it came up with four individual rules that each address different cases.</p>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not),
   (shouldn't), (should not),
   (isn't), (is not), (aren't), (are not), (wasn't), (was not), (weren't), (were not),
   (ain't)]
  [$have, $get, $need, $want, $know, $see, $find, $say, $do]
  no
), no>

let message "Avoid a double negative."
let description "If the clause is already negated (e.g., `don't`), prefer `any` over `no` (e.g., `don't have any`)."
let kind "Usage"
let becomes "any"
let strategy "MatchCase"

test "I don't have no idea." "I don't have any idea."
test "She did not get NO response." "She did not get ANY response."
test "I have no idea." "I have no idea."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not),
   (shouldn't), (should not)]
  [$do, $say, $see, $hear, $find, $know, $have, $get]
  nothing
), nothing>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nothing` with `anything` (e.g., `didn't do anything`)."
let kind "Usage"
let becomes "anything"
let strategy "MatchCase"

test "He didn't do nothing." "He didn't do anything."
test "I can't see NOTHING." "I can't see ANYTHING."
test "I did nothing." "I did nothing."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not)]
  [$tell, $call, $see, $know, $say, $help, $find]
  nobody
), nobody>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nobody` with `anybody`."
let kind "Usage"
let becomes "anybody"
let strategy "MatchCase"

test "I didn't tell nobody." "I didn't tell anybody."
test "We can't find NOBODY." "We can't find ANYBODY."
test "Nobody called." "Nobody called."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not)]
  [$go, $run, $walk, $drive, $get, $look]
  nowhere
), nowhere>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nowhere` with `anywhere`."
let kind "Usage"
let becomes "anywhere"
let strategy "MatchCase"

test "She can't go nowhere tonight." "She can't go anywhere tonight."
test "I did not look NOWHERE else." "I did not look ANYWHERE else."
test "Nowhere is safe." "Nowhere is safe."
</code></pre>
<p>Again, I'm quite impressed.
Obviously, it's hard to tell at first glance whether these rules would hold up in production for tens of thousands of users.
Nevertheless, these rules look correct, and my initial testing shows that they work.
The model was able to successfully apply more complex logic: the derivation syntax and the filter syntax.</p>
<p>In my original announcement blog post, I explained that the target audience for Weir was members of large organizations that wish to enforce stylistic conventions or brand names throughout their workforce.
To that end, I wanted to test an LLM against some of Automattic's nomenclature.
After reading through some of our public documentation, it came up with some admittedly useful rules:</p>
<pre><code class="hljs language-weir">expr main (wpvip)

let message "Use the official spacing for VIP."
let description "Expands `wpvip` to the official brand forms `WordPress VIP` or `WP VIP`."
let kind "Miscellaneous"
let becomes ["WordPress VIP", "WP VIP"]
let strategy "Exact"

test "This runs on wpvip." "This runs on WordPress VIP."
</code></pre>
<pre><code class="hljs language-weir">expr main (jet[-, ( )]pack)

let message "Use the official spelling `Jetpack`."
let description "Standardizes `jet pack` / `jet-pack` to `Jetpack`."
let kind "Miscellaneous"
let becomes "Jetpack"
let strategy "Exact"

test "Install jet pack for backups." "Install Jetpack for backups."
test "Install jet-pack for backups." "Install Jetpack for backups."
test "Install Jetpack for backups." "Install Jetpack for backups."
</code></pre>
<pre><code class="hljs language-weir">expr main (word[-, ( )]press)

let message "Use the official spelling `WordPress`."
let description "Standardizes `word press` / `word-press` to the product name `WordPress`."
let kind "Miscellaneous"
let becomes "WordPress"
let strategy "Exact"

test "I build sites with word press." "I build sites with WordPress."
test "I build sites with word-press." "I build sites with WordPress."
test "I build sites with WordPress." "I build sites with WordPress."
</code></pre>
<p>Again, I'm quite pleased.
With just a quick search of our website, it was able to discern what style we wished to enforce and wrote functioning Weir rules to do so.
I can see this being helpful at any number of businesses that communicate regularly.</p>
<h3>Conclusion</h3>
<p>Overall, I'm quite pleased with how well these LLMs were able to write Weir code.
I've yet to finalize any documentation on the more complex parts of Weir's syntax (like our POS-tagging system), which means I haven't yet been able to test them with any LLMs.
Even so, I'm impressed.</p>
<p>I even tried it out on <a href="https://docs.mistral.ai/models/ministral-3-3b-25-12">Mistral's tiny and ultra-fast three billion parameter model</a>.
It performed almost as well as OpenAI's 5.2 Instant model, albeit without nearly the same level of creativity.
I think this somewhat proves that modern LLMs are able to generalize to novel languages, which makes them exceedingly useful for <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSLs</a> like Weir.</p>
<p>I'm looking forward to seeing how people end up taking advantage of this.</p>]]></description>
            <content:encoded><![CDATA[<h1>Generating Weir Code with LLMs</h1>
<p>As you know, I've been working on a small programming language called Weir for generating corrections to natural language.
For the curious, I highly suggest reading my previous blog posts on the subject if you haven't already.</p>
<ul>
<li><a href="./building_the_weir_language">Building the Weir Language</a></li>
<li><a href="./updates_on_the_weir_language">Updates on the Weir Language</a></li>
</ul>
<p>In one of those previous blog posts, I speculated that an LLM might be quite effective at generating these rules, especially if they are given access to tools that can validate and run tests.
Today, while writing some public documentation for the language, I did a bit of experimentation.
I wanted to know: can an LLM simultaneously understand a <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSL's</a> syntax, handle the semantics of the English language, and validate its own work with tests?
More to the point: can it do a better job with a DSL than an established language like Rust?</p>
<p>For those who want to follow along at home, <a href="https://chatgpt.com/g/g-6945b952be148191a21d0189e0691a97-weir-writer">here's the GPT</a> I've been toying around with.</p>
<h3>Some Experimentation</h3>
<p>I often hear that one should avoid novel or exceedingly old programming languages when working with LLMs, simply because these languages aren't well represented in their training data.
Based on this hearsay, I didn't expect much going into this little experiment of mine.</p>
<p>First, I wondered if it could handle the trivial issue of correcting "as nauseam" to the correct form, "ad nauseam".
It's a common typo, and it should be a pretty simple Weir rule.</p>
<p>My minimal GPT spit this out:</p>
<pre><code class="hljs language-weir">expr main (as nauseam)

let message "Did you mean `ad nauseam`?"
let description "Corrects the common typo `as nauseam` to the proper Latin phrase `ad nauseam`."
let kind "Typo"
let becomes "ad nauseam"

test "He repeated the point as nauseam." "He repeated the point ad nauseam."
test "This argument has been made ad nauseam." "This argument has been made ad nauseam."
</code></pre>
<p>That looks pretty good!
All the syntax is correct, and it even included some (passing!) tests.
Claude Code or Codex could do this using our existing Rust infrastructure, but it would have taken several minutes to churn out a valid result.
GPT 5.2 Instant was able to do this in a matter of seconds.
Not to mention that it didn't need a copy of the entire Harper source code.</p>
<p>I wanted to try a problem that was a bit harder to solve, so I asked it to address the problem of double-negatives.
After thinking for a few minutes, it came up with four individual rules that each address different cases.</p>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not),
   (shouldn't), (should not),
   (isn't), (is not), (aren't), (are not), (wasn't), (was not), (weren't), (were not),
   (ain't)]
  [$have, $get, $need, $want, $know, $see, $find, $say, $do]
  no
), no>

let message "Avoid a double negative."
let description "If the clause is already negated (e.g., `don't`), prefer `any` over `no` (e.g., `don't have any`)."
let kind "Usage"
let becomes "any"
let strategy "MatchCase"

test "I don't have no idea." "I don't have any idea."
test "She did not get NO response." "She did not get ANY response."
test "I have no idea." "I have no idea."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not),
   (shouldn't), (should not)]
  [$do, $say, $see, $hear, $find, $know, $have, $get]
  nothing
), nothing>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nothing` with `anything` (e.g., `didn't do anything`)."
let kind "Usage"
let becomes "anything"
let strategy "MatchCase"

test "He didn't do nothing." "He didn't do anything."
test "I can't see NOTHING." "I can't see ANYTHING."
test "I did nothing." "I did nothing."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not), (wouldn't), (would not)]
  [$tell, $call, $see, $know, $say, $help, $find]
  nobody
), nobody>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nobody` with `anybody`."
let kind "Usage"
let becomes "anybody"
let strategy "MatchCase"

test "I didn't tell nobody." "I didn't tell anybody."
test "We can't find NOBODY." "We can't find ANYBODY."
test "Nobody called." "Nobody called."
</code></pre>
<pre><code class="hljs language-weir">expr main &#x3C;(
  [(don't), (do not), (doesn't), (does not), (didn't), (did not),
   (can't), (cannot), (couldn't), (could not),
   (won't), (will not)]
  [$go, $run, $walk, $drive, $get, $look]
  nowhere
), nowhere>

let message "Avoid a double negative."
let description "If the clause is already negated, replace `nowhere` with `anywhere`."
let kind "Usage"
let becomes "anywhere"
let strategy "MatchCase"

test "She can't go nowhere tonight." "She can't go anywhere tonight."
test "I did not look NOWHERE else." "I did not look ANYWHERE else."
test "Nowhere is safe." "Nowhere is safe."
</code></pre>
<p>Again, I'm quite impressed.
Obviously, it's hard to tell at first glance whether these rules would hold up in production for tens of thousands of users.
Nevertheless, these rules look correct, and my initial testing shows that they work.
The model was able to successfully apply more complex logic: the derivation syntax and the filter syntax.</p>
<p>In my original announcement blog post, I explained that the target audience for Weir was members of large organizations that wish to enforce stylistic conventions or brand names throughout their workforce.
To that end, I wanted to test an LLM against some of Automattic's nomenclature.
After reading through some of our public documentation, it came up with some admittedly useful rules:</p>
<pre><code class="hljs language-weir">expr main (wpvip)

let message "Use the official spacing for VIP."
let description "Expands `wpvip` to the official brand forms `WordPress VIP` or `WP VIP`."
let kind "Miscellaneous"
let becomes ["WordPress VIP", "WP VIP"]
let strategy "Exact"

test "This runs on wpvip." "This runs on WordPress VIP."
</code></pre>
<pre><code class="hljs language-weir">expr main (jet[-, ( )]pack)

let message "Use the official spelling `Jetpack`."
let description "Standardizes `jet pack` / `jet-pack` to `Jetpack`."
let kind "Miscellaneous"
let becomes "Jetpack"
let strategy "Exact"

test "Install jet pack for backups." "Install Jetpack for backups."
test "Install jet-pack for backups." "Install Jetpack for backups."
test "Install Jetpack for backups." "Install Jetpack for backups."
</code></pre>
<pre><code class="hljs language-weir">expr main (word[-, ( )]press)

let message "Use the official spelling `WordPress`."
let description "Standardizes `word press` / `word-press` to the product name `WordPress`."
let kind "Miscellaneous"
let becomes "WordPress"
let strategy "Exact"

test "I build sites with word press." "I build sites with WordPress."
test "I build sites with word-press." "I build sites with WordPress."
test "I build sites with WordPress." "I build sites with WordPress."
</code></pre>
<p>Again, I'm quite pleased.
With just a quick search of our website, it was able to discern what style we wished to enforce and wrote functioning Weir rules to do so.
I can see this being helpful at any number of businesses that communicate regularly.</p>
<h3>Conclusion</h3>
<p>Overall, I'm quite pleased with how well these LLMs were able to write Weir code.
I've yet to finalize any documentation on the more complex parts of Weir's syntax (like our POS-tagging system), which means I haven't yet been able to test them with any LLMs.
Even so, I'm impressed.</p>
<p>I even tried it out on <a href="https://docs.mistral.ai/models/ministral-3-3b-25-12">Mistral's tiny and ultra-fast three billion parameter model</a>.
It performed almost as well as OpenAI's 5.2 Instant model, albeit without nearly the same level of creativity.
I think this somewhat proves that modern LLMs are able to generalize to novel languages, which makes them exceedingly useful for <a href="https://en.wikipedia.org/wiki/Domain-specific_language">DSLs</a> like Weir.</p>
<p>I'm looking forward to seeing how people end up taking advantage of this.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Updates on the Weir Language]]></title>
            <link>https://elijahpotter.dev/articles/updates_on_the_weir_language</link>
            <guid>https://elijahpotter.dev/articles/updates_on_the_weir_language</guid>
            <pubDate>Wed, 17 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Updates on the Weir Language</h1>
<p>In <a href="./building_the_weir_language">my last blog post</a>, I described the why, what, and how of the Weir programming language.
I suggest you read that first.</p>
<p>Before signing off for the day, I just wanted to give a few updates on the project.
For those interested in the technical side of things, I've opened <a href="https://github.com/Automattic/harper/pull/2357">a draft PR</a> for your reading pleasure.</p>
<h2>Updated Keywords</h2>
<p>After having a conversation with Jason Adams, I came to the realization that <code>declare</code> and <code>set</code> are pretty sh*tty keywords.
They aren't self-describing and you'd be hard-pressed to find any real programming language that uses them.</p>
<p>I've instead replaced them with <code>let</code> and <code>expr</code>.</p>
<p>Here's what the rule from my last blog post looks like with these updated keywords:</p>
<pre><code class="hljs language-weir">expr main [(G [Suite, Suit]), (Google Apps for Work)]
let message "Use the updated brand."
let description "`G Suite` or `Google Apps for Work` is now called `Google Workspace`"
let kind "Miscellaneous"
let becomes "Google Workspace"
let strategy "Exact"
</code></pre>
<p>Note that I've also added the <code>strategy</code> setting.
This allows rule authors to describe which strategy Harper will use when applying the replacements.
Right now, the only two options are <code>Exact</code> or <code>MatchCase</code>, which apply either the replacement text exactly, or the replacement text adjusted to match the capitalization of the text it replaces.
In the above example, we use <code>Exact</code> because it is a proper noun, and it doesn't matter what the original text looked like: we want to capitalize it properly.</p>
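<p>For contrast, here's a sketch of a rule where <code>MatchCase</code> is the better fit. The rule itself is invented for illustration, but the syntax mirrors the example above:</p>
<pre><code class="hljs language-weir">expr main (alot)

let message "Did you mean `a lot`?"
let description "Corrects the common misspelling `alot` to `a lot`."
let kind "Typo"
let becomes "a lot"
let strategy "MatchCase"

test "I like him alot." "I like him a lot."
test "Alot of people agree." "A lot of people agree."
</code></pre>
<p>Because <code>alot</code> can appear at the start of a sentence, <code>MatchCase</code> keeps the suggestion's capitalization consistent with the text it replaces.</p>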
<h2>Deprecation of the <code>phrase_corrections</code> Module</h2>
<p>I've also gone ahead and converted the entirety of the <code>phrase_corrections</code> module, as well as all associated tests, to the Weir language.
If you're reading this and you're a contributor, know that we will no longer be adding new rules to that module once Weir is merged into <code>master</code>.
When the merge happens, I'll handle the conversion of any lagging phrase corrections that haven't already been converted.</p>
<p>That unfortunately means that <a href="./writing_a_phrase_correction_for_harper">my previous guide</a> on the subject is no longer relevant.</p>
<h2>What's Next?</h2>
<p>Before Weir gets merged, I plan to support the matching of UPOS tags, in addition to some other quality-of-life updates.
Expect another blog post.</p>
<p>In the meantime, please let me know if you have any additional input!</p>]]></description>
            <content:encoded><![CDATA[<h1>Updates on the Weir Language</h1>
<p>In <a href="./building_the_weir_language">my last blog post</a>, I described the why, what, and how of the Weir programming language.
I suggest you read that first.</p>
<p>Before signing off for the day, I just wanted to give a few updates on the project.
For those interested in the technical side of things, I've opened <a href="https://github.com/Automattic/harper/pull/2357">a draft PR</a> for your reading pleasure.</p>
<h2>Updated Keywords</h2>
<p>After having a conversation with Jason Adams, I came to the realization that <code>declare</code> and <code>set</code> are pretty sh*tty keywords.
They aren't self-describing and you'd be hard-pressed to find any real programming language that uses them.</p>
<p>I've instead replaced them with <code>let</code> and <code>expr</code>.</p>
<p>Here's what the rule from my last blog post looks like with these updated keywords:</p>
<pre><code class="hljs language-weir">expr main [(G [Suite, Suit]), (Google Apps for Work)]
let message "Use the updated brand."
let description "`G Suite` or `Google Apps for Work` is now called `Google Workspace`"
let kind "Miscellaneous"
let becomes "Google Workspace"
let strategy "Exact"
</code></pre>
<p>Note that I've also added the <code>strategy</code> setting.
This allows rule authors to describe which strategy Harper will use when applying the replacements.
Right now, the only two options are <code>Exact</code> or <code>MatchCase</code>, which apply either the replacement text exactly, or the replacement text adjusted to match the capitalization of the text it replaces.
In the above example, we use <code>Exact</code> because it is a proper noun, and it doesn't matter what the original text looked like: we want to capitalize it properly.</p>
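<p>For contrast, here's a sketch of a rule where <code>MatchCase</code> is the better fit. The rule itself is invented for illustration, but the syntax mirrors the example above:</p>
<pre><code class="hljs language-weir">expr main (alot)

let message "Did you mean `a lot`?"
let description "Corrects the common misspelling `alot` to `a lot`."
let kind "Typo"
let becomes "a lot"
let strategy "MatchCase"

test "I like him alot." "I like him a lot."
test "Alot of people agree." "A lot of people agree."
</code></pre>
<p>Because <code>alot</code> can appear at the start of a sentence, <code>MatchCase</code> keeps the suggestion's capitalization consistent with the text it replaces.</p>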
<h2>Deprecation of the <code>phrase_corrections</code> Module</h2>
<p>I've also gone ahead and converted the entirety of the <code>phrase_corrections</code> module, as well as all associated tests, to the Weir language.
If you're reading this and you're a contributor, know that we will no longer be adding new rules to that module once Weir is merged into <code>master</code>.
When the merge happens, I'll handle the conversion of any lagging phrase corrections that haven't already been converted.</p>
<p>That unfortunately means that <a href="./writing_a_phrase_correction_for_harper">my previous guide</a> on the subject is no longer relevant.</p>
<h2>What's Next?</h2>
<p>Before Weir gets merged, I plan to support the matching of UPOS tags, in addition to some other quality-of-life updates.
Expect another blog post.</p>
<p>In the meantime, please let me know if you have any additional input!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building the Weir Language]]></title>
            <link>https://elijahpotter.dev/articles/building_the_weir_language</link>
            <guid>https://elijahpotter.dev/articles/building_the_weir_language</guid>
            <pubDate>Thu, 11 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Building the Weir Language</h1>
<p>Most large organizations have a style guide: a document that decides which version of a linguistic rule to use.
That could be whether to use the Oxford comma, or if contractions are allowed.
It could declare that a certain word should be capitalized in a specific context.</p>
<p>Harper can cover <em>most</em> of the rules in <em>most</em> style guides, but there will always be outliers that we can't support (or simply don't know about).
That is why it is critical that Harper allow individuals and organizations to define rules and conventions for Harper to enforce.</p>
<p>For the last few days, I've been prototyping a simple (not Turing-complete) programming language for describing these conventions.
I've called it Weir.
Weir is simple and thus easy to learn, but has escape hatches for instances where more complex logic is necessary.</p>
<p>Before I begin, a quick disclaimer: This is early work. The Weir language will likely change and evolve before it makes it into core.
What you see here is representative of the overall design, but nothing is final yet.</p>
<h2>History</h2>
<p>There are inevitably going to be a good number of edge cases that need to be addressed when building a grammar checker.
For these, Harper has a simple map from problematic phrases to the corrected versions.
Edge cases that are not simple enough to be described with a map are implemented as Rust code.
Here's what the map looks like:</p>
<pre><code class="hljs language-rust">add_exact_mappings!(group, {
    <span class="hljs-comment">// The name of the rule</span>
    <span class="hljs-string">"ACoupleMore"</span> => (
        <span class="hljs-comment">// The phrase(s) to look for.</span>
        [<span class="hljs-string">"a couple of more"</span>],
        <span class="hljs-comment">// The corrections to provide.</span>
        [<span class="hljs-string">"a couple more"</span>],
        <span class="hljs-comment">// The message to be shown with the error.</span>
        <span class="hljs-string">"The correct wording is `a couple more`, without the `of`."</span>,
        <span class="hljs-comment">// A description of the rule.</span>
        <span class="hljs-string">"Corrects `a couple of more` to `a couple more`."</span>,
        LintKind::Redundancy
    ),
    <span class="hljs-string">"CondenseAllThe"</span> => (
        [<span class="hljs-string">"all of the"</span>],
        [<span class="hljs-string">"all the"</span>],
        <span class="hljs-string">"Consider simplifying to `all the`."</span>,
        <span class="hljs-string">"Suggests removing `of` in `all of the` for a more concise phrase."</span>,
        LintKind::Redundancy
    ),
    <span class="hljs-string">"CoursingThroughVeins"</span> => (
        [<span class="hljs-string">"cursing through veins"</span>],
        [<span class="hljs-string">"coursing through veins"</span>],
        <span class="hljs-string">"In this idiom, blood “courses” (flows) through veins, not “curses”."</span>,
        <span class="hljs-string">"In English idioms, “to course” means to flow rapidly—so avoid the eggcorn `cursing through veins.`"</span>,
        LintKind::Eggcorn
    ),
});
</code></pre>
<p>As for the Rust code, it is often heinous and a chore to review.
I won't do myself the embarrassment of including it here.</p>
<h2>Introducing Weir</h2>
<p>The heart of Weir is an expression language that mimics the pseudocode Harper contributors tend to use when describing the Rust code they intend to write.</p>
<p>Imagine you work at Google. You've just rebranded the "G Suite" collection of apps and services to the new name "Google Workspace".
Before that, they were collectively named "Google Apps for Work".
Moving forward, you don't want yourself or your coworkers to accidentally write "G Suite" in public documentation, because doing so might confuse users.
To solve this, you use the following Weir rule:</p>
<pre><code class="hljs language-weir">set main [(G [Suite, Suit]), (Google Apps for Work)]

declare message "Use the updated brand."
declare description "`G Suite` or `Google Apps for Work` is now called `Google Workspace`"
declare kind "Miscellaneous"
declare becomes "Google Workspace"
</code></pre>
<p>The first line describes the pattern of the problematic text.
There are two cases here:</p>
<ol>
<li>The letter "G" followed by "Suite" or its misspelling "Suit"</li>
<li>The literal phrase "Google Apps for Work"</li>
</ol>
<p>Here is a semantically equivalent example that I find a bit easier to read:</p>
<pre><code class="hljs language-weir">set main [(G Suite), (G Suit), (Google Apps for Work)]
</code></pre>
<p>The remaining lines describe:</p>
<ol>
<li>The message to be shown to the user when the error is encountered.</li>
<li>A description of the rule itself, explaining why it exists.</li>
<li>What kind of rule it is.
I suspect most end users will mark it as "miscellaneous".
I'm considering making this field optional.</li>
<li>What corrections to provide to the user.</li>
</ol>
<p>One of my goals here was to make the system simple enough for an end-user to copy and paste an existing rule into a new file, make some tweaks, and be done.
For the more complex cases, I think we could set up a custom GPT to write these pretty easily.</p>
<p>Here's another example, one that will actually be a part of the Harper source code.
It uses a filter (the <code>&#x3C;></code> syntax) to first select the broader phrase, then another to select the whitespace in-between (that's the <code>( )</code> part).</p>
<pre><code class="hljs language-weir">set main &#x3C;([right, middle, left] $click), ( )>
declare message "Hyphenate this mouse command"
declare description "Hyphenates right-click style mouse commands."
declare kind "Punctuation"
declare becomes "-"
</code></pre>
<p>This is a great example of the <code>$</code> tag.
Instead of instructing Harper to look for the literal word that follows, it instructs it to match all derivative words.
Those include "click", "clicking", "clicked", etc.</p>
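<p>To see why this matters, here is a rough hypothetical equivalent written without the <code>$</code> tag, reusing only the bracket syntax shown earlier (remember, none of this syntax is final):</p>
<pre><code class="hljs language-weir">set main &#x3C;([right, middle, left] [click, clicks, clicked, clicking]), ( )>
</code></pre>
<p>Enumerating every derivative by hand is tedious and easy to get wrong, which is exactly what the <code>$</code> tag avoids.</p>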
<p>We also have specific syntax that can be used to test the expression inline.
These assert that the left string, after being passed through the rule, becomes the right string.</p>
<pre><code class="hljs language-weir">test "Right click the icon." "Right-click the icon."
test "Please right click on the link." "Please right-click on the link."
test "They right clicked the submit button." "They right-clicked the submit button."
test "Right clicking the item highlights it." "Right-clicking the item highlights it."
test "Right clicks are tracked in the log." "Right-clicks are tracked in the log."
test "He RIGHT CLICKED the file." "He RIGHT-CLICKED the file."
test "Left click the checkbox." "Left-click the checkbox."
test "Middle click to open in a new tab." "Middle-click to open in a new tab."
</code></pre>
<p>I suspect Weir will make it extremely easy for AI agents to iterate quickly on these rules.
There's no compilation step, and we already have excellent pipelines for generating lists of test cases.</p>
<h2>Why Call It "Weir"?</h2>
<p>I am naming Weir after the author of <em>Project Hail Mary</em>, <a href="https://en.wikipedia.org/wiki/Andy_Weir">Andy Weir</a>.
I guess I just like naming things after authors.
After all, Harper itself was <a href="./naming_harper.md">named after Harper Lee</a>.</p>
<h2>Moving Forward</h2>
<p>Once the syntax is finalized and a corpus of tests has been written, I suspect Weir rules will become the default for new grammatical rules inside the Harper repository and for individual users.
I'd love to create a marketplace for rule sets created by and for the community.</p>
<p>If you (the reader) have any thoughts on syntax or anything else, don't hesitate to speak your mind.</p>]]></description>
            <content:encoded><![CDATA[<h1>Building the Weir Language</h1>
<p>Most large organizations have a style guide: a document that decides which version of a linguistic rule to use.
That could be whether to use the Oxford comma, or whether contractions are allowed.
It could declare that a certain word should be capitalized in a specific context.</p>
<p>Harper can cover <em>most</em> of the rules in <em>most</em> style guides, but there will always be outliers that we can't support (or simply don't know about).
That is why it is critical that Harper allow individuals and organizations to define rules and conventions for Harper to enforce.</p>
<p>For the last few days, I've been prototyping a simple (not Turing-complete) programming language for describing these conventions.
I've called it Weir.
Weir is simple and thus easy to learn, but has escape hatches for instances where more complex logic is necessary.</p>
<p>Before I begin, a quick disclaimer: This is early work. The Weir language will likely change and evolve before it makes it into core.
What you see here is representative of the overall design, but nothing is final yet.</p>
<h2>History</h2>
<p>There are inevitably going to be a good number of edge cases that need to be addressed when building a grammar checker.
For these, Harper has a simple map from problematic phrases to the corrected versions.
Edge cases that are not simple enough to be described with a map are implemented as Rust code.
Here's what the map looks like:</p>
<pre><code class="hljs language-rust">add_exact_mappings!(group, {
    <span class="hljs-comment">// The name of the rule</span>
    <span class="hljs-string">"ACoupleMore"</span> => (
        <span class="hljs-comment">// The phrase(s) to look for.</span>
        [<span class="hljs-string">"a couple of more"</span>],
        <span class="hljs-comment">// The corrections to provide.</span>
        [<span class="hljs-string">"a couple more"</span>],
        <span class="hljs-comment">// The message to be shown with the error.</span>
        <span class="hljs-string">"The correct wording is `a couple more`, without the `of`."</span>,
        <span class="hljs-comment">// A description of the rule.</span>
        <span class="hljs-string">"Corrects `a couple of more` to `a couple more`."</span>,
        LintKind::Redundancy
    ),
    <span class="hljs-string">"CondenseAllThe"</span> => (
        [<span class="hljs-string">"all of the"</span>],
        [<span class="hljs-string">"all the"</span>],
        <span class="hljs-string">"Consider simplifying to `all the`."</span>,
        <span class="hljs-string">"Suggests removing `of` in `all of the` for a more concise phrase."</span>,
        LintKind::Redundancy
    ),
    <span class="hljs-string">"CoursingThroughVeins"</span> => (
        [<span class="hljs-string">"cursing through veins"</span>],
        [<span class="hljs-string">"coursing through veins"</span>],
        <span class="hljs-string">"In this idiom, blood “courses” (flows) through veins, not “curses”."</span>,
        <span class="hljs-string">"In English idioms, “to course” means to flow rapidly—so avoid the eggcorn `cursing through veins.`"</span>,
        LintKind::Eggcorn
    ),
});
</code></pre>
<p>As for the Rust code, it is often heinous and a chore to review.
I won't do myself the embarrassment of including it here.</p>
<h2>Introducing Weir</h2>
<p>The heart of Weir is an expression language that mimics the pseudocode Harper contributors tend to use when describing the Rust code they intend to write.</p>
<p>Imagine you work at Google. You've just rebranded the "G Suite" collection of apps and services to the new name "Google Workspace".
Before that, they were collectively named "Google Apps for Work".
Moving forward, you don't want yourself or your coworkers to accidentally write "G Suite" in public documentation, because doing so might confuse users.
To solve this, you use the following Weir rule:</p>
<pre><code class="hljs language-weir">set main [(G [Suite, Suit]), (Google Apps for Work)]

declare message "Use the updated brand."
declare description "`G Suite` or `Google Apps for Work` is now called `Google Workspace`"
declare kind "Miscellaneous"
declare becomes "Google Workspace"
</code></pre>
<p>The first line describes the pattern of the problematic text.
There are two cases here:</p>
<ol>
<li>The letter "G" followed by "Suite" or its misspelling "Suit"</li>
<li>The literal phrase "Google Apps for Work"</li>
</ol>
<p>Here is a semantically equivalent example that I find a bit easier to read:</p>
<pre><code class="hljs language-weir">set main [(G Suite), (G Suit), (Google Apps for Work)]
</code></pre>
<p>The remaining lines describe:</p>
<ol>
<li>The message to be shown to the user when the error is encountered.</li>
<li>A description of the rule itself, explaining why it exists.</li>
<li>What kind of rule it is.
I suspect most end users will mark it as "miscellaneous".
I'm considering making this field optional.</li>
<li>What corrections to provide to the user.</li>
</ol>
<p>One of my goals here was to make the system simple enough for an end-user to copy and paste an existing rule into a new file, make some tweaks, and be done.
For the more complex cases, I think we could set up a custom GPT to write these pretty easily.</p>
<p>Here's another example, one that will actually be a part of the Harper source code.
It uses a filter (the <code>&#x3C;></code> syntax) to first select the broader phrase, then another to select the whitespace in-between (that's the <code>( )</code> part).</p>
<pre><code class="hljs language-weir">set main &#x3C;([right, middle, left] $click), ( )>
declare message "Hyphenate this mouse command"
declare description "Hyphenates right-click style mouse commands."
declare kind "Punctuation"
declare becomes "-"
</code></pre>
<p>This is a great example of the <code>$</code> tag.
Instead of instructing Harper to look for the literal word that follows, it instructs it to match all derivative words.
Those include "click", "clicking", "clicked", etc.</p>
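<p>To see why this matters, here is a rough hypothetical equivalent written without the <code>$</code> tag, reusing only the bracket syntax shown earlier (remember, none of this syntax is final):</p>
<pre><code class="hljs language-weir">set main &#x3C;([right, middle, left] [click, clicks, clicked, clicking]), ( )>
</code></pre>
<p>Enumerating every derivative by hand is tedious and easy to get wrong, which is exactly what the <code>$</code> tag avoids.</p>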
<p>We also have specific syntax that can be used to test the expression inline.
These assert that the left string, after being passed through the rule, becomes the right string.</p>
<pre><code class="hljs language-weir">test "Right click the icon." "Right-click the icon."
test "Please right click on the link." "Please right-click on the link."
test "They right clicked the submit button." "They right-clicked the submit button."
test "Right clicking the item highlights it." "Right-clicking the item highlights it."
test "Right clicks are tracked in the log." "Right-clicks are tracked in the log."
test "He RIGHT CLICKED the file." "He RIGHT-CLICKED the file."
test "Left click the checkbox." "Left-click the checkbox."
test "Middle click to open in a new tab." "Middle-click to open in a new tab."
</code></pre>
<p>I suspect Weir will make it extremely easy for AI agents to iterate quickly on these rules.
There's no compilation step, and we already have excellent pipelines for generating lists of test cases.</p>
<h2>Why Call It "Weir"?</h2>
<p>I am naming Weir after the author of <em>Project Hail Mary</em>, <a href="https://en.wikipedia.org/wiki/Andy_Weir">Andy Weir</a>.
I guess I just like naming things after authors.
After all, Harper itself was <a href="./naming_harper.md">named after Harper Lee</a>.</p>
<h2>Moving Forward</h2>
<p>Once the syntax is finalized and a corpus of tests has been written, I suspect Weir rules will become the default for new grammatical rules inside the Harper repository and for individual users.
I'd love to create a marketplace for rule sets created by and for the community.</p>
<p>If you (the reader) have any thoughts on syntax or anything else, don't hesitate to speak your mind.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper Can Apply Titlecase]]></title>
            <link>https://elijahpotter.dev/articles/harper_can_apply_titlecase</link>
            <guid>https://elijahpotter.dev/articles/harper_can_apply_titlecase</guid>
            <pubDate>Fri, 05 Dec 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper Can Apply Title Case</h1>
<p>In case you didn't know: Harper can convert text to title case!
This has been around for a long time, but we haven't really talked about it publicly.
Being able to make text title case (strictly in accordance with Chicago style) isn't a groundbreaking feature, but it is useful for certain niche applications.</p>
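<p>As a rough sketch of the idea (a simplification for illustration, not Harper's actual implementation): Chicago-style title case capitalizes the first word, the last word, and every major word, while lowercasing short articles, conjunctions, and prepositions.</p>
<pre><code class="hljs language-python">MINOR = {"a", "an", "the", "and", "but", "or", "nor", "for",
         "of", "to", "in", "on", "at", "by", "up"}

def to_title_case(text):
    # Capitalize the first, last, and major words; lowercase minor ones.
    words = text.split()
    last = len(words) - 1
    out = []
    for i, w in enumerate(words):
        lower = w.lower()
        if i != 0 and i != last and lower in MINOR:
            out.append(lower)
        else:
            out.append(lower.capitalize())
    return " ".join(out)
</code></pre>
<p>This turns "the war of the worlds" into "The War of the Worlds". The real implementation has to handle many more edge cases, like hyphenated words and punctuation.</p>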
<p>For example, all the titles in this very blog are forced to title case on publish, which isn't something most CMS systems do.
I'd love to see it become more widespread.</p>
<p>We have <a href="https://writewithharper.com/titlecase">a live demo</a> where you can try it out. It's also available <a href="https://writewithharper.com/docs/harperjs/ref/harper.js.binarymodule.totitlecase.html">as part of <code>harper.js</code></a>, our JavaScript package.</p>
<p><a href="https://github.com/Automattic/harper/pull/2297">Soon</a>, this will become a part of Harper's native linting capabilities.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper Can Apply Title Case</h1>
<p>In case you didn't know: Harper can convert text to title case!
This has been around for a long time, but we haven't really talked about it publicly.
Being able to make text title case (strictly in accordance with Chicago style) isn't a groundbreaking feature, but it is useful for certain niche applications.</p>
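<p>As a rough sketch of the idea (a simplification for illustration, not Harper's actual implementation): Chicago-style title case capitalizes the first word, the last word, and every major word, while lowercasing short articles, conjunctions, and prepositions.</p>
<pre><code class="hljs language-python">MINOR = {"a", "an", "the", "and", "but", "or", "nor", "for",
         "of", "to", "in", "on", "at", "by", "up"}

def to_title_case(text):
    # Capitalize the first, last, and major words; lowercase minor ones.
    words = text.split()
    last = len(words) - 1
    out = []
    for i, w in enumerate(words):
        lower = w.lower()
        if i != 0 and i != last and lower in MINOR:
            out.append(lower)
        else:
            out.append(lower.capitalize())
    return " ".join(out)
</code></pre>
<p>This turns "the war of the worlds" into "The War of the Worlds". The real implementation has to handle many more edge cases, like hyphenated words and punctuation.</p>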
<p>For example, all the titles in this very blog are forced to title case on publish, which isn't something most CMS systems do.
I'd love to see it become more widespread.</p>
<p>We have <a href="https://writewithharper.com/titlecase">a live demo</a> where you can try it out. It's also available <a href="https://writewithharper.com/docs/harperjs/ref/harper.js.binarymodule.totitlecase.html">as part of <code>harper.js</code></a>, our JavaScript package.</p>
<p><a href="https://github.com/Automattic/harper/pull/2297">Soon</a>, this will become a part of Harper's native linting capabilities.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper Turns 1.0 Today]]></title>
            <link>https://elijahpotter.dev/articles/harper_turns_1.0_today</link>
            <guid>https://elijahpotter.dev/articles/harper_turns_1.0_today</guid>
            <pubDate>Fri, 28 Nov 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper Turns 1.0 Today</h1>
<p>Today, we published <a href="https://github.com/Automattic/harper/releases/tag/v1.0.0">Harper's <code>1.0.0</code> release</a>. It's a huge milestone, and in this post I'd like to discuss why it took so long, why we're doing it now, and what's next for the project.</p>
<h2>A Round of Applause, Please</h2>
<p>As I mentioned: This is a huge step for the project. Before I say anything else, I want to take a moment to appreciate the collective work of our many contributors. It has taken many iterations to get to where we are today, and it wouldn't be possible without the community's effort and feedback.</p>
<figure class="wp-block-image"><img src="https://contrib.rocks/image?repo=automattic/harper" alt=""/></figure>
<h2>Why Now?</h2>
<p>Until today, I've kept Harper in a pre-1.0 state for one reason: I wanted to move fast. More specifically, I wanted the freedom to build and break things as many times as I needed to. I knew that it would take many cycles of building and rebuilding to arrive at a robust system that addresses people's need for a private writing tool. I believe that building something good can often mean taking the bad parts out. Once a project is "1.0", it becomes much harder to remove things. Naturally, that pushed me to hold off on "going 1.0".</p>
<p>Of course, that didn't stop us from getting the software into the hands of users. Today, tens of thousands of people benefit from Harper's fast and private grammar checking in Chrome, Obsidian, VS Code, and Neovim (among many others). We've racked up hundreds of thousands of downloads before ever slapping anything other than zero before the first decimal point of our version number.</p>
<p>I'm sure you're thinking: "Elijah! Get to the point!" Fine. The reason we're doing this now, rather than earlier or later, is that our priorities have just recently changed.</p>
<p>For one, Harper's API has been rock-solid for a few months now, which means we can safely say that our need to move fast in that area has diminished. The opportunity cost of abiding by a stable API has gone down.</p>
<p>Secondly, I've been hearing progressively more interest from potential contributors and consumers who want to put Harper directly into their own apps or services. They have the desire to help make Harper more widespread, but they can't commit to it unless Harper commits to a stable API. That's what we're doing today.</p>
<p>I can imagine a future where Harper is natively integrated everywhere: learning management systems, document editors, messaging platforms, or even operating systems. But great things take time. Today, we're taking a huge step in setting Harper up for the long-term.</p>
<h2>What Do I Need to Know?</h2>
<p>As an end-user, not much changes. From here on out, we'll be pushing quality-of-life tweaks and bugfixes at a faster rate, all while improving Harper's capabilities across the board.</p>
<p>As a contributor, the patch review process might get a little bit more strict. We'll be focusing more on improving the quality of our code, rather than the amount. If your PRs are likely to result in a breaking change, expect copious notes and possible delays before we hit "merge".</p>
<p>As an integrator, you win more than anyone. If you'd like to include Harper in your application, let us know and we'll do our best to make it easy for you. Take a look at <a href="https://writewithharper.com/docs/about#Versioning-Policy">our versioning policy</a> if that sort of thing gives you peace of mind.</p>
<h2>Where Can I Get Further Updates?</h2>
<p>That depends on the level of verbosity you're interested in. For those who want to know about everything I'm currently working on, subscribe to <a href="https://elijahpotter.dev/">my blog</a>. For everyone else, our <a href="https://github.com/automattic/harper/releases">patch notes in GitHub</a> should suffice.</p>
            <content:encoded><![CDATA[<h1>Harper Turns 1.0 Today</h1>
<p>Today, we published <a href="https://github.com/Automattic/harper/releases/tag/v1.0.0">Harper's <code>1.0.0</code> release</a>. It's a huge milestone, and in this post I'd like to discuss why it took so long, why we're doing it now, and what's next for the project.</p>
<h2>A Round of Applause, Please</h2>
<p>As I mentioned: This is a huge step for the project. Before I say anything else, I want to take a moment to appreciate the collective work of our many contributors. It has taken many iterations to get to where we are today, and it wouldn't be possible without the community's effort and feedback.</p>
<figure class="wp-block-image"><img src="https://contrib.rocks/image?repo=automattic/harper" alt=""/></figure>
<h2>Why Now?</h2>
<p>Until today, I've kept Harper in a pre-1.0 state for one reason: I wanted to move fast. More specifically, I wanted the freedom to build and break things as many times as I needed to. I knew that it would take many cycles of building and rebuilding to arrive at a robust system that addresses people's need for a private writing tool. I believe that building something good can often mean taking the bad parts out. Once a project is "1.0", it becomes much harder to remove things. Naturally, that pushed me to hold off on "going 1.0".</p>
<p>Of course, that didn't stop us from getting the software into the hands of users. Today, tens of thousands of people benefit from Harper's fast and private grammar checking in Chrome, Obsidian, VS Code, and Neovim (among many others). We've racked up hundreds of thousands of downloads before ever slapping anything other than zero before the first decimal point of our version number.</p>
<p>I'm sure you're thinking: "Elijah! Get to the point!" Fine. The reason we're doing this now, rather than earlier or later, is that our priorities have just recently changed.</p>
<p>For one, Harper's API has been rock-solid for a few months now, which means we can safely say that our need to move fast in that area has diminished. The opportunity cost of abiding by a stable API has gone down.</p>
<p>Secondly, I've been hearing progressively more interest from potential contributors and consumers who want to put Harper directly into their own apps or services. They have the desire to help make Harper more widespread, but they can't commit to it unless Harper commits to a stable API. That's what we're doing today.</p>
<p>I can imagine a future where Harper is natively integrated everywhere: learning management systems, document editors, messaging platforms, or even operating systems. But great things take time. Today, we're taking a huge step in setting Harper up for the long-term.</p>
<h2>What Do I Need to Know?</h2>
<p>As an end-user, not much changes. From here on out, we'll be pushing quality-of-life tweaks and bugfixes at a faster rate, all while improving Harper's capabilities across the board.</p>
<p>As a contributor, the patch review process might get a little bit more strict. We'll be focusing more on improving the quality of our code, rather than the amount. If your PRs are likely to result in a breaking change, expect copious notes and possible delays before we hit "merge".</p>
<p>As an integrator, you win more than anyone. If you'd like to include Harper in your application, let us know and we'll do our best to make it easy for you. Take a look at <a href="https://writewithharper.com/docs/about#Versioning-Policy">our versioning policy</a> if that sort of thing gives you peace of mind.</p>
<h2>Where Can I Get Further Updates?</h2>
<p>That depends on the level of verbosity you're interested in. For those who want to know about everything I'm currently working on, subscribe to <a href="https://elijahpotter.dev/">my blog</a>. For everyone else, our <a href="https://github.com/automattic/harper/releases">patch notes in GitHub</a> should suffice.</p>
        </item>
        <item>
            <title><![CDATA[Quality Requires Visual Design]]></title>
            <link>https://elijahpotter.dev/articles/quality_requires_visual_design</link>
            <guid>https://elijahpotter.dev/articles/quality_requires_visual_design</guid>
            <pubDate>Fri, 21 Nov 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Quality Requires Visual Design</h1>
<p>Earlier this week, I was looking through a table of user feedback about <a href="https://writewithharper.com">Harper</a>.
I believe that software should be built <em>with</em> the user—not just <em>for</em> the user—so this is a pretty regular ritual for me.</p>
<p>I was delighted.
Many of the usual complaints were totally absent from the report.
Users were encountering fewer of the make-or-break bugs that would harm their experience.
This meant that my recent push to get Harper prepared for a <code>1.0</code> release was working.
Great news! Yippee!</p>
<p>Unfortunately, something new emerged from beneath the pile of bugs: problems with Harper's visual design.</p>
<p>For quite some time now, I've been focused solely on the practical parts of Quality: bugs and the hard-to-deny usefulness of Harper's service.
In that time, I've forgotten that the visual appeal of the software is at least as important.
So, when I saw these complaints with Harper's visual design, I knew I needed to do something about it.</p>
<p>I've never claimed to be a designer, but I'm also not one to shy away from a challenge.
So, I grabbed a copy of the <a href="https://app.thestorygraph.com/books/cd558dda-0b8d-48ff-9414-62684bc1c47b"><em>Laws of UX</em></a> (thanks <a href="https://eduardo.blog/">Eduardo</a>, for the recommendation) and got to work.</p>
<h2>Bring Everything Together</h2>
<p>Here's the secret to Harper's design system: it hasn't really existed until now.
Each of the integrations, from the Chrome extension to the website, and even the Obsidian plugin, had its own design system and appearance.
Mostly, this was because I didn't care enough about it when first crafting these projects.
Things have changed, so I'm going to take my time and do a good job of addressing these user complaints.</p>
<p>Before I could start tweaking CSS styles, I needed a unified color scheme.
After some research and time in the color picker, I arrived at this:</p>
<div style="
  --color-primary-50:#fef4e7;--color-primary-100:#fce9cf;--color-primary-200:#f9d49f;--color-primary-300:#f7be6e;--color-primary-400:#f4a83e;--color-primary:#f1920e;--color-primary-600:#c1750b;--color-primary-700:#915808;--color-primary-800:#603b06;--color-primary-900:#301d03;--color-primary-950:#221402;
  --color-accent-50:#fee7e9;--color-accent-100:#fccfd3;--color-accent-200:#f99fa6;--color-accent-300:#f76e7a;--color-accent-400:#f43e4d;--color-accent:#f10e21;--color-accent-600:#c10b1a;--color-accent-700:#910814;--color-accent-800:#60060d;--color-accent-900:#300307;--color-accent-950:#220205;
  --color-cream:#fef4e7;--color-cream-100:#fce9cf;--color-cream-200:#f9d49f;--color-cream-300:#f7be6e;--color-cream-400:#f4a83e;--color-cream-500:#f1920e;--color-cream-600:#c1750b;--color-cream-700:#915808;--color-cream-800:#603b06;--color-cream-900:#301d03;--color-cream-950:#221402;
  --color-white:#fffdfa;--color-white-100:#fceacf;--color-white-200:#fad59e;--color-white-300:#f7c06e;--color-white-400:#f5ab3d;--color-white-500:#f2960d;--color-white-600:#c2780a;--color-white-700:#915a08;--color-white-800:#613c05;--color-white-900:#301e03;--color-white-950:#221502;
  font-family: system-ui, -apple-system, Segoe UI, Roboto, sans-serif;
  border:1px solid #00000014;border-radius:12px;padding:14px;background:var(--color-white);
">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-primary);"></div>
    <div style="font-weight:700;">Primary — honey bronze</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-primary-50);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-primary-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-primary-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-primary-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-primary-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-primary);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-primary-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-primary-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-primary-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-primary-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-primary-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-accent);"></div>
    <div style="font-weight:700;">Accent — hot fuchsia</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-accent-50);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-accent-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-accent-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-accent-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-accent-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-accent);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">500</div>
    <div style="background:var(--color-accent-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-accent-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-accent-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-accent-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-accent-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-cream);"></div>
    <div style="font-weight:700;">Cream — simple cream</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-cream);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-cream-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-cream-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-cream-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-cream-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-cream-500);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-cream-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-cream-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-cream-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-cream-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-cream-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-white);border:1px solid #0000001a;"></div>
    <div style="font-weight:700;">White — warm white ramp</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-white);height:48px;border-radius:8px;border:1px solid #00000012;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">base</div>
    <div style="background:var(--color-white-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-white-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-white-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-white-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-white-500);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-white-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-white-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-white-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-white-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-white-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <div style="margin-top:14px;display:flex;gap:10px;flex-wrap:wrap;">
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-primary-50),var(--color-primary));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Primary gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-accent-50),var(--color-accent));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Accent gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-cream),var(--color-cream-500));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Cream gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-white),var(--color-white-500));border:1px solid #00000010;">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">White gradient</div>
      <div style="font-size:11px;color:#000000a0;">base → 500</div>
    </div>
  </div>
</div>
<p>From there, I built out a component library based on <a href="https://flowbite.com/">Flowbite</a> and replaced all the relevant components in Harper's integrations with my own.</p>
<p>Next, I wanted to create a bit more dynamism with our typography.
After spending some time in Google Fonts, I decided to use Domine, a serif, for headings and Atkinson Hyperlegible for everything else.
Once I combined the duo with the orange-ish yellow from the color palette (which is usually associated with creative brands), I felt confident that I had nailed what I imagined Harper's personality to be.</p>
<h2>Finalizing the Draft</h2>
<p>I feel that Harper's landing page embodies the design system well, so I've shown it below.
It's the first, and often only, thing a user sees, so I put some extra care into it.
At the time of writing, <a href="https://writewithharper.com">it is live</a>.</p>
<h3>Before My Changes</h3>
<img src="/images/writewithharper_before.png" alt="The Harper website before my changes">
<h3>After My Changes</h3>
<img src="/images/writewithharper_after.png" alt="The Harper website after my changes">
<h2>Where Do I Go From Here?</h2>
<p>As I said, this mini-project was simply a draft.
I'll be working with Harper contributors on a more regular basis to continue nailing it down, slowly improving the Quality of the visual design for the entire project.
If you have any feedback, thoughts, or ideas for how we can make Harper more user-friendly or visually appealing, let me know!</p>]]></description>
            <content:encoded><![CDATA[<h1>Quality Requires Visual Design</h1>
<p>Earlier this week, I was looking through a table of user feedback about <a href="https://writewithharper.com">Harper</a>.
I believe that software should be built <em>with</em> the user—not just <em>for</em> the user—so this is a pretty regular ritual for me.</p>
<p>I was delighted.
Many of the usual complaints were totally absent from the report.
Users were encountering fewer of the make-or-break bugs that would harm their experience.
This meant that my recent push to get Harper prepared for a <code>1.0</code> release was working.
Great news! Yippee!</p>
<p>Unfortunately, something new emerged from beneath the pile of bugs: problems with Harper's visual design.</p>
<p>For quite some time now, I've been focused solely on the practical parts of Quality.
By that I mean fixing bugs and maintaining the hard-to-deny usefulness of Harper's service.
In that time, I'd forgotten that the visual appeal of the software is at least as important.
So, when I saw these complaints about Harper's visual design, I knew I needed to do something about it.</p>
<p>I've never claimed to be a designer, but I'm also not one to shy away from a challenge.
So, I grabbed a copy of the <a href="https://app.thestorygraph.com/books/cd558dda-0b8d-48ff-9414-62684bc1c47b"><em>Laws of UX</em></a> (thanks <a href="https://eduardo.blog/">Eduardo</a>, for the recommendation) and got to work.</p>
<h2>Bring Everything Together</h2>
<p>Here's the secret to Harper's design system: it hasn't really existed until now.
Each of the integrations, from the Chrome extension to the website to the Obsidian plugin, had its own design system and appearance.
Mostly, this was because I didn't care enough about it when first crafting these projects.
Things have changed, so I'm going to take my time and do a good job of addressing these user complaints.</p>
<p>Before I could start tweaking CSS styles, I needed a unified color scheme.
After some research and time in the color picker, I arrived at this:</p>
<div style="
  --color-primary-50:#fef4e7;--color-primary-100:#fce9cf;--color-primary-200:#f9d49f;--color-primary-300:#f7be6e;--color-primary-400:#f4a83e;--color-primary:#f1920e;--color-primary-600:#c1750b;--color-primary-700:#915808;--color-primary-800:#603b06;--color-primary-900:#301d03;--color-primary-950:#221402;
  --color-accent-50:#fee7e9;--color-accent-100:#fccfd3;--color-accent-200:#f99fa6;--color-accent-300:#f76e7a;--color-accent-400:#f43e4d;--color-accent:#f10e21;--color-accent-600:#c10b1a;--color-accent-700:#910814;--color-accent-800:#60060d;--color-accent-900:#300307;--color-accent-950:#220205;
  --color-cream:#fef4e7;--color-cream-100:#fce9cf;--color-cream-200:#f9d49f;--color-cream-300:#f7be6e;--color-cream-400:#f4a83e;--color-cream-500:#f1920e;--color-cream-600:#c1750b;--color-cream-700:#915808;--color-cream-800:#603b06;--color-cream-900:#301d03;--color-cream-950:#221402;
  --color-white:#fffdfa;--color-white-100:#fceacf;--color-white-200:#fad59e;--color-white-300:#f7c06e;--color-white-400:#f5ab3d;--color-white-500:#f2960d;--color-white-600:#c2780a;--color-white-700:#915a08;--color-white-800:#613c05;--color-white-900:#301e03;--color-white-950:#221502;
  font-family: system-ui, -apple-system, Segoe UI, Roboto, sans-serif;
  border:1px solid #00000014;border-radius:12px;padding:14px;background:var(--color-white);
">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-primary);"></div>
    <div style="font-weight:700;">Primary — honey bronze</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-primary-50);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-primary-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-primary-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-primary-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-primary-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-primary);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-primary-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-primary-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-primary-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-primary-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-primary-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-accent);"></div>
    <div style="font-weight:700;">Accent — hot fuchsia</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-accent-50);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-accent-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-accent-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-accent-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-accent-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-accent);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">500</div>
    <div style="background:var(--color-accent-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-accent-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-accent-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-accent-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-accent-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-cream);"></div>
    <div style="font-weight:700;">Cream — simple cream</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-cream);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">50</div>
    <div style="background:var(--color-cream-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-cream-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-cream-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-cream-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-cream-500);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-cream-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-cream-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-cream-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-cream-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-cream-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <hr style="border:none;border-top:1px solid #00000010;margin:14px 0;">
  <div style="display:flex;align-items:center;gap:10px;margin-bottom:10px;">
    <div style="width:12px;height:12px;border-radius:999px;background:var(--color-white);border:1px solid #0000001a;"></div>
    <div style="font-weight:700;">White — warm white ramp</div>
  </div>
  <div style="display:grid;grid-template-columns:repeat(11,minmax(0,1fr));gap:6px;">
    <div style="background:var(--color-white);height:48px;border-radius:8px;border:1px solid #00000012;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">base</div>
    <div style="background:var(--color-white-100);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">100</div>
    <div style="background:var(--color-white-200);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">200</div>
    <div style="background:var(--color-white-300);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">300</div>
    <div style="background:var(--color-white-400);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000b3;">400</div>
    <div style="background:var(--color-white-500);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#000000cc;">500</div>
    <div style="background:var(--color-white-600);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">600</div>
    <div style="background:var(--color-white-700);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">700</div>
    <div style="background:var(--color-white-800);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">800</div>
    <div style="background:var(--color-white-900);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">900</div>
    <div style="background:var(--color-white-950);height:48px;border-radius:8px;display:flex;align-items:end;justify-content:center;font-size:11px;padding:6px;color:#fff;">950</div>
  </div>
  <div style="margin-top:14px;display:flex;gap:10px;flex-wrap:wrap;">
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-primary-50),var(--color-primary));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Primary gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-accent-50),var(--color-accent));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Accent gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-cream),var(--color-cream-500));">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">Cream gradient</div>
      <div style="font-size:11px;color:#000000a0;">50 → 500</div>
    </div>
    <div style="flex:1;min-width:180px;border-radius:10px;padding:10px;background:linear-gradient(90deg,var(--color-white),var(--color-white-500));border:1px solid #00000010;">
      <div style="font-size:12px;font-weight:700;color:#000000c0;">White gradient</div>
      <div style="font-size:11px;color:#000000a0;">base → 500</div>
    </div>
  </div>
</div>
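<p>To make a palette like this portable across integrations, the tokens can be consumed as plain CSS custom properties. Here's a minimal sketch of that pattern; the class names are hypothetical, and only the <code>--color-*</code> property names come from the swatch above.</p>
<pre><code class="language-css">/* Hypothetical component styles consuming the palette tokens.
   Only the --color-* names are taken from the swatch above. */
.button-primary {
  background: var(--color-primary);     /* honey bronze, step 500 */
  color: var(--color-white);
}
.button-primary:hover {
  background: var(--color-primary-600); /* one step darker on hover */
}
.card {
  background: var(--color-cream);
  border: 1px solid var(--color-primary-200);
}
</code></pre>
<p>Because every integration reads the same custom properties, restyling all of them later means editing one token file rather than each project's stylesheet.</p>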
<p>From there, I built out a component library based on <a href="https://flowbite.com/">Flowbite</a> and replaced all the relevant components in Harper's integrations with my own.</p>
<p>Next, I wanted to create a bit more dynamism with our typography.
After spending some time in Google Fonts, I decided to use Domine, a serif, for headings and Atkinson Hyperlegible for everything else.
Once I combined the duo with the orange-ish yellow from the color palette (which is usually associated with creative brands), I felt confident that I had nailed what I imagined Harper's personality to be.</p>
<h2>Finalizing the Draft</h2>
<p>I feel that Harper's landing page embodies the design system well, so I've shown it below.
It's the first, and often only, thing a user sees, so I put some extra care into it.
At the time of writing, <a href="https://writewithharper.com">it is live</a>.</p>
<h3>Before My Changes</h3>
<img src="/images/writewithharper_before.png" alt="The Harper website before my changes">
<h3>After My Changes</h3>
<img src="/images/writewithharper_after.png" alt="The Harper website after my changes">
<h2>Where Do I Go From Here?</h2>
<p>As I said, this mini-project was simply a draft.
I'll be working with Harper contributors on a more regular basis to continue nailing it down, slowly improving the Quality of the visual design for the entire project.
If you have any feedback, thoughts, or ideas for how we can make Harper more user-friendly or visually appealing, let me know!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Re: Collaboration Sucks]]></title>
            <link>https://elijahpotter.dev/articles/re:_collaboration_sucks</link>
            <guid>https://elijahpotter.dev/articles/re:_collaboration_sucks</guid>
            <pubDate>Tue, 18 Nov 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Re: Collaboration Sucks</h1>
<p>Earlier this week, I came across a really great post from a product engineer over at PostHog. If you haven't already, I highly recommend <a href="https://newsletter.posthog.com/p/collaboration-sucks">reading it</a>. With the title "Collaboration sucks," I think it makes its message clear.</p>
<p>If you had shown this article to the Elijah Potter from a year ago, he would have agreed with most of what it says. I was of the strong opinion that collaboration was fundamentally counterproductive to getting sh*t done. Since then, I've completely changed my tune. While I don't believe collaboration is the secret weapon my professors in school made it out to be, I think it's essential to working on ambitious projects for a long time without burning out. Furthermore, at least for open source projects, I think it's critical for prioritizing work.</p>
<p>First, let's get something out of the way. Although the post's title declares that collaboration in any form is bad, the meat and potatoes of the post strike a slightly different tone. He merely argues that it isn't that collaboration is bad, just that <em>too much</em> collaboration is bad. I can get behind this, but I still think there's more nuance lurking in the shadows.</p>
<p>I think Cook's post outlines many of the various ways collaboration can go wrong. I particularly enjoy the red flag examples he provides. I'd like to provide some examples of collaboration going <em>right.</em></p>
<p>Since all of this is strictly related to <a href="https://writewithharper.com/">Harper</a>, it's possible that both these examples are only helpful to open source projects.</p>
<h2>It Boosts Motivation</h2>
<p>When I go a little bit too long without engaging with a member of the Harper user base, I start to feel a bit deflated. I start to forget who it is all for.</p>
<img src="/images/bluesky_testimonial.png" alt="A recent thread found on Bluesky discussing Harper">
<p>As a part of my <a href="https://elijahpotter.dev/articles/my-writing-environment-as-a-software-engineer">own writing environment</a>, Harper exists to make writing more fun and more human. I can think critically about my message while spending less time worrying about grammar or punctuation. For others, it promises to do all that without violating their privacy. It's great to hear from them and learn how we can make our service even better.</p>
<img src="/images/discord_testimonial.png" alt="A recent conversation on Discord.">
<p>I know I'm doing something right when I'm acting as customer support and read a message like this. I feel empowered. Not only do I know <em>what</em> I am doing right, I feel highly motivated to keep doing that thing.</p>
<h2>In Open Source, Your Collaborators Are Also Your Users</h2>
<p>When working on open source software, especially one that values privacy, clearly stating a policy of being <a href="https://elijahpotter.dev/articles/never-wait">friendly to pull requests</a> is obviously critical. For one, it makes it clear that bug fixes are welcome and will be swiftly reviewed. Less obvious is the feedback residing within the pull request.</p>
<img src="/images/harper_issue_2143.png" alt="Harper Issue #2143">
<p>Here's <a href="https://github.com/Automattic/harper/pull/2143">an example</a>. A user encountered an issue resulting from a change in Obsidian's API. Not only did he take the time to report the problem, he also went through the effort of implementing a fix. What does this tell us?</p>
<p>For one, it gives us an idea of where our documentation is sufficient. He was able to find the information needed to build the code and run it on his own machine. In the future, we don't need to prioritize the Obsidian plugin's documentation.</p>
<p>More importantly, however, is the fact that the feature was valuable enough for this user to spend time fixing it. If it is valuable for him, we can safely say that it is valuable for other users. Now that I have sufficient evidence, I might consider bringing similar functionality to other platforms, like our Chrome extension.</p>
<p>Collaborating in an open source context can be hugely informative, since your collaborators are also your users. In my experience, this is surprisingly uncommon, even in companies that claim to have a culture of feedback.</p>
<h2>There Is Still Such a Thing As Too Much</h2>
<p>While I've found these particular cases to be productivity-boosters, I understand that focusing too much on any one thing can be detrimental. First and foremost, the goal should be to ship as high a quality of software as possible, as soon as possible.</p>]]></description>
            <content:encoded><![CDATA[<h1>Re: Collaboration Sucks</h1>
<p>Earlier this week, I came across a really great post from a product engineer over at PostHog. If you haven't already, I highly recommend <a href="https://newsletter.posthog.com/p/collaboration-sucks">reading it</a>. With the title "Collaboration sucks," I think it makes its message clear.</p>
<p>If you had shown this article to the Elijah Potter from a year ago, he would have agreed with most of what it says. I was of the strong opinion that collaboration was fundamentally counterproductive to getting sh*t done. Since then, I've completely changed my tune. While I don't believe collaboration is the secret weapon my professors in school made it out to be, I think it's essential to working on ambitious projects for a long time without burning out. Furthermore, at least for open source projects, I think it's critical for prioritizing work.</p>
<p>First, let's get something out of the way. Although the post's title declares that collaboration in any form is bad, the meat and potatoes of the post strike a slightly different tone. He merely argues that it isn't that collaboration is bad, just that <em>too much</em> collaboration is bad. I can get behind this, but I still think there's more nuance lurking in the shadows.</p>
<p>I think Cook's post outlines many of the various ways collaboration can go wrong. I particularly enjoy the red flag examples he provides. I'd like to provide some examples of collaboration going <em>right.</em></p>
<p>Since all of this is strictly related to <a href="https://writewithharper.com/">Harper</a>, it's possible that both these examples are only helpful to open source projects.</p>
<h2>It Boosts Motivation</h2>
<p>When I go a little bit too long without engaging with a member of the Harper user base, I start to feel a bit deflated. I start to forget who it is all for.</p>
<img src="/images/bluesky_testimonial.png" alt="A recent thread found on Bluesky discussing Harper">
<p>As a part of my <a href="https://elijahpotter.dev/articles/my-writing-environment-as-a-software-engineer">own writing environment</a>, Harper exists to make writing more fun and more human. I can think critically about my message while spending less time worrying about grammar or punctuation. For others, it promises to do all that without violating their privacy. It's great to hear from them and learn how we can make our service even better.</p>
<img src="/images/discord_testimonial.png" alt="A recent conversation on Discord.">
<p>I know I'm doing something right when I'm acting as customer support and read a message like this. I feel empowered. Not only do I know <em>what</em> I am doing right, I feel highly motivated to keep doing that thing.</p>
<h2>In Open Source, Your Collaborators Are Also Your Users</h2>
<p>When working on open source software, especially one that values privacy, clearly stating a policy of being <a href="https://elijahpotter.dev/articles/never-wait">friendly to pull requests</a> is obviously critical. For one, it makes it clear that bug fixes are welcome and will be swiftly reviewed. Less obvious is the feedback residing within the pull request.</p>
<img src="/images/harper_issue_2143.png" alt="Harper Issue #2143">
<p>Here's <a href="https://github.com/Automattic/harper/pull/2143">an example</a>. A user encountered an issue resulting from a change in Obsidian's API. Not only did he take the time to report the problem, he also went through the effort of implementing a fix. What does this tell us?</p>
<p>For one, it gives us an idea of where our documentation is sufficient. He was able to find the information needed to build the code and run it on his own machine. In the future, we don't need to prioritize the Obsidian plugin's documentation.</p>
<p>More important, however, is the fact that the feature was valuable enough for this user to spend time fixing it. If it is valuable for him, we can safely say that it is valuable for other users. Now that I have sufficient evidence, I might consider bringing similar functionality to other platforms, like our Chrome extension.</p>
<p>Collaborating in an open source context can be hugely informative, since your collaborators are also your users. In my experience, this is surprisingly uncommon, even in companies that claim to have a culture of feedback.</p>
<h2>There Is Still Such a Thing As Too Much</h2>
<p>While I've found these particular cases to be productivity-boosters, I understand that focusing too much on any one thing can be detrimental. First and foremost, the goal should be to ship as high a quality of software as possible, as soon as possible.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Improving Rust Compile Times by 71 Percent]]></title>
            <link>https://elijahpotter.dev/articles/improving_rust_compile_times_by_71_percent</link>
            <guid>https://elijahpotter.dev/articles/improving_rust_compile_times_by_71_percent</guid>
            <pubDate>Thu, 06 Nov 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Improving Rust Compile Times By 71%</h1>
<p>If you maintain or work on a project in any compiled language, particularly a language that is known for having a slow compiler, I believe it is critical to regularly schedule time to do an analysis of your build process and make adjustments if necessary.</p>
<p>Compile-time increases slowly as a project morphs and grows.
Slow enough that it often goes unnoticed.
I call this "compiler creep" and believe it to be similar in many ways to <a href="https://en.wikipedia.org/wiki/Lifestyle_creep">lifestyle creep</a>.
By scheduling time to do a full accounting of where your CPU cycles are going, you can preemptively save yourself (and your contributors) a lot of time.</p>
<p>Yesterday was the big day for me. Over the last few months, <a href="https://github.com/automattic/harper">Harper's</a> compile time from scratch in GitHub Actions had ballooned to five full minutes. On my personal laptop, it had grown to three minutes. That may not sound like a lot, but I often need to build from scratch when swapping between branches or checking out a pull request. Incremental build times had also grown to an unjustifiable duration. Iterating upon unit tests was starting to become quite slow.</p>
<p>Harper's low latency is one of the reasons many turn to it over the alternatives. In the early days, we accomplished that by cranking the LLVM optimizer to the max, reducing the number of codegen units to one and using link-time optimization to take care of the rest. At the time, I believe this resulted in a ~30% performance uplift. Not bad!</p>
<p>For the uninitiated, a codegen unit is the smallest unit of work that is passed to the LLVM compiler. If you allow the project to be split into more than one codegen unit, you can allow more than one compiler to run at a time. This, therefore, increases the number of physical cores that can be used at any given moment during compilation. The downside is that the compiler misses out on optimization opportunities at the boundaries between codegen units.</p>
<p>So when I sat down yesterday, I was hardly surprised to see that code generation was the bottleneck. The compiler was spending very little time type-checking and linking, and a lot of time generating and optimizing machine code.</p>
<p>My initial thought was to reduce the <em>amount</em> of code that needed optimizing.
So I did what anyone else would do and went searching for unnecessary dependencies and other kinds of dead code in the repository. I found some, but not enough to make a meaningful dent in the compile time.</p>
<p>That's when I took a look at our build configuration and asked myself: "Do we still need LTO? Could we increase our codegen units?"</p>
<p>In the last few months, I've made some significant improvements to caching and pipelining, which means the bottleneck has shifted from compile-time optimization to memory bandwidth. We simply don't need the compiler to do as much work anymore. So, I tried disabling LTO and reverting the codegen units to their default value. What happened?</p>
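<p>To make that concrete, here's a hypothetical sketch of the kind of <code>Cargo.toml</code> profile tweak involved (not Harper's actual configuration):</p>
<pre><code># Before: maximize runtime performance at the cost of compile time.
[profile.release]
lto = "fat"          # whole-program link-time optimization
codegen-units = 1    # a single LLVM unit: best optimization, no parallelism

# After: let LLVM work in parallel and skip cross-unit optimization.
# [profile.release]
# lto = false
# codegen-units = 16  # Cargo's default for release builds
</code></pre>
<p>With the defaults restored, each crate is split across many codegen units, so every physical core stays busy during the code generation stage.</p>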
<p>Clean builds went from five minutes on CI to less than two. All in, that's an improvement of 71%. On top of that, since it was the code generation stage that was slowing down incremental builds as well, we saw a similar improvement to test execution time. That means there is now less time between making a change in the code and knowing how it affects our existing test cases. Since that's a huge portion of a contributor's time, improvements to incremental builds can be force-multipliers for the entire project.</p>
<p>I was surprised such a simple change would have such an outsized impact on the developer experience. It felt too good to be true, so I re-ran our benchmarks to see if the change would have a negative impact on Harper's actual performance. Our uncached lint times took three percent longer than before. I think that's acceptable.</p>
<p>Is there a lesson to be learned here? I think so. I implore you to take a half-hour this week to mess around with your compiler's settings. Even an improvement of a few seconds per iteration can compound into hours of time savings, both for you and your fellow contributors. If you think your compiler settings are already perfect, do it anyway. Your code or requirements may have changed.</p>
<p>This is a great step in my recent work to make people's first contributions to Harper as frictionless as possible.</p>]]></description>
            <content:encoded><![CDATA[<h1>Improving Rust Compile Times By 71%</h1>
<p>If you maintain or work on a project in any compiled language, particularly a language that is known for having a slow compiler, I believe it is critical to regularly schedule time to do an analysis of your build process and make adjustments if necessary.</p>
<p>Compile-time increases slowly as a project morphs and grows.
Slow enough that it often goes unnoticed.
I call this "compiler creep" and believe it to be similar in many ways to <a href="https://en.wikipedia.org/wiki/Lifestyle_creep">lifestyle creep</a>.
By scheduling time to do a full accounting of where your CPU cycles are going, you can preemptively save yourself (and your contributors) a lot of time.</p>
<p>Yesterday was the big day for me. Over the last few months, <a href="https://github.com/automattic/harper">Harper's</a> compile time from scratch in GitHub Actions had ballooned to five full minutes. On my personal laptop, it had grown to three minutes. That may not sound like a lot, but I often need to build from scratch when swapping between branches or checking out a pull request. Incremental build times had also grown to an unjustifiable duration. Iterating upon unit tests was starting to become quite slow.</p>
<p>Harper's low latency is one of the reasons many turn to it over the alternatives. In the early days, we accomplished that by cranking the LLVM optimizer to the max, reducing the number of codegen units to one and using link-time optimization to take care of the rest. At the time, I believe this resulted in a ~30% performance uplift. Not bad!</p>
<p>For the uninitiated, a codegen unit is the smallest unit of work that is passed to the LLVM compiler. If you allow the project to be split into more than one codegen unit, you can allow more than one compiler to run at a time. This, therefore, increases the number of physical cores that can be used at any given moment during compilation. The downside is that the compiler misses out on optimization opportunities at the boundaries between codegen units.</p>
<p>So when I sat down yesterday, I was hardly surprised to see that code generation was the bottleneck. The compiler was spending very little time type-checking and linking, and a lot of time generating and optimizing machine code.</p>
<p>My initial thought was to reduce the <em>amount</em> of code that needed optimizing.
So I did what anyone else would do and went searching for unnecessary dependencies and other kinds of dead code in the repository. I found some, but not enough to make a meaningful dent in the compile time.</p>
<p>That's when I took a look at our build configuration and asked myself: "Do we still need LTO? Could we increase our codegen units?"</p>
<p>In the last few months, I've made some significant improvements to caching and pipelining, which means the bottleneck has shifted from compile-time optimization to memory bandwidth. We simply don't need the compiler to do as much work anymore. So, I tried disabling LTO and reverting the codegen units to their default value. What happened?</p>
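<p>To make that concrete, here's a hypothetical sketch of the kind of <code>Cargo.toml</code> profile tweak involved (not Harper's actual configuration):</p>
<pre><code># Before: maximize runtime performance at the cost of compile time.
[profile.release]
lto = "fat"          # whole-program link-time optimization
codegen-units = 1    # a single LLVM unit: best optimization, no parallelism

# After: let LLVM work in parallel and skip cross-unit optimization.
# [profile.release]
# lto = false
# codegen-units = 16  # Cargo's default for release builds
</code></pre>
<p>With the defaults restored, each crate is split across many codegen units, so every physical core stays busy during the code generation stage.</p>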
<p>Clean builds went from five minutes on CI to less than two. All in, that's an improvement of 71%. On top of that, since it was the code generation stage that was slowing down incremental builds as well, we saw a similar improvement to test execution time. That means there is now less time between making a change in the code and knowing how it affects our existing test cases. Since that's a huge portion of a contributor's time, improvements to incremental builds can be force-multipliers for the entire project.</p>
<p>I was surprised such a simple change would have such an outsized impact on the developer experience. It felt too good to be true, so I re-ran our benchmarks to see if the change would have a negative impact on Harper's actual performance. Our uncached lint times took three percent longer than before. I think that's acceptable.</p>
<p>Is there a lesson to be learned here? I think so. I implore you to take a half-hour this week to mess around with your compiler's settings. Even an improvement of a few seconds per iteration can compound into hours of time savings, both for you and your fellow contributors. If you think your compiler settings are already perfect, do it anyway. Your code or requirements may have changed.</p>
<p>This is a great step in my recent work to make people's first contributions to Harper as frictionless as possible.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Finding Signal Through the Noise]]></title>
            <link>https://elijahpotter.dev/articles/finding_signal_through_the_noise</link>
            <guid>https://elijahpotter.dev/articles/finding_signal_through_the_noise</guid>
            <pubDate>Mon, 03 Nov 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Finding Signal Through The Noise</h1>
<p>As the maintainer of <a href="https://github.com/automattic/harper">Harper</a>, I read through dozens of issues and pull requests per day and countless more per week.
There is a constant flood of new problems to fix and new code to review.
At times, it can get overwhelming.</p>
<p>I've been working recently to reduce the amount of noise I encounter when determining what to work on.
I'm sharing my findings here to help other novice maintainers handle the torrent.
Hopefully, there are some ideas here that you find insightful.</p>
<p>When your open source project gets big enough, you inevitably hit a point where the flow of bug reports or feature requests expands beyond what you're individually able to fix or implement.
If you're like <a href="https://www.jeffgeerling.com/">Jeff Geerling</a>, this might happen when you find yourself maintaining a larger <strong>number</strong> of projects.
Either way, it's clear that there will come a time where you need to start making hard decisions on how to spend your time.</p>
<h2>Set A Policy</h2>
<p>Jeff's solution, which I've adopted, is to set <a href="https://www.jeffgeerling.com/blog/2020/enabling-stale-issue-bot-on-my-github-repositories">well-defined</a> policies for which issues are allowed to slip through the cracks.</p>
<p>The strategy is simple in essence.
The maintainer, Geerling or I in this case, directly addresses issues and PRs which are important to the existing functionality of the software first.
New features or minor bugs come second.
Feature requests that help a single person or bug reports that address a rare edge case are of lowest priority.</p>
<p>If an issue or pull request cannot be addressed within 30 days of its opening, it is marked stale.
If another 30 days pass before it becomes active again, it is closed.
Over time, unimportant topics are cleared away, leaving only the ones critical to the project's mission.</p>
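<p>For the curious, a policy like this can be automated. A hedged sketch using GitHub's <code>actions/stale</code> (not necessarily the exact workflow Harper or Geerling runs) might look like:</p>
<pre><code>name: Mark and close stale issues
on:
  schedule:
    - cron: "0 0 * * *" # run once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 30 # mark stale after 30 days of inactivity
          days-before-close: 30 # close 30 days after being marked stale
          stale-issue-message: >
            This issue has been inactive for a while and is now marked stale.
</code></pre>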
<p>A policy like this sets "no" as the default and helps prevent runaway code quality declines.
I instituted it on Monday.
If it needs revision, I'm open to comments and feedback.</p>
<h3>Harper's False Positives</h3>
<p>This policy stands in the face of an almost overwhelming number of issues related to false-positives in Harper's algorithm.
<a href="https://github.com/Automattic/harper/issues/2124">Here's a representative example.</a></p>
<p>Since these false-positive reports represent such a significant plurality of the issues on GitHub, I feel the need to handle them separately from everything else.
They're usually resolved quickly but left open out of neglect; they're trivial enough for me to fix, but not to follow up on.
Moving forward, if anyone needs to report an issue related to a false-positive, please do so <a href="https://writewithharper.com/report-problematic-lint">here</a>.</p>
<h2>Develop a Sense of Importance</h2>
<p>There is no algorithm to determine which issues or PRs are the most important to a project's success.
Typically, I'll read through five or so, and pick the one that "feels" the most important to work on.
It's often a balance of prioritizing time-sensitive items, items that could result in publicity, and usefulness to my personal work.
It's hard to put it into words, which is why it must be developed, not taught.</p>
<h2>What Do You Think?</h2>
<p>I consider myself a novice maintainer, so I'd love to hear from more mature maintainers.
How do you find signal through the noise?</p>
<hr>
<h2>Additional Reading</h2>
<ul>
<li><a href="https://www.jeffgeerling.com/blog/2016/why-i-close-prs-oss-project-maintainer-notes">Why I close PRs - Jeff Geerling</a></li>
</ul>]]></description>
            <content:encoded><![CDATA[<h1>Finding Signal Through The Noise</h1>
<p>As the maintainer of <a href="https://github.com/automattic/harper">Harper</a>, I read through dozens of issues and pull requests per day and countless more per week.
There is a constant flood of new problems to fix and new code to review.
At times, it can get overwhelming.</p>
<p>I've been working recently to reduce the amount of noise I encounter when determining what to work on.
I'm sharing my findings here to help other novice maintainers handle the torrent.
Hopefully, there are some ideas here that you find insightful.</p>
<p>When your open source project gets big enough, you inevitably hit a point where the flow of bug reports or feature requests expands beyond what you're individually able to fix or implement.
If you're like <a href="https://www.jeffgeerling.com/">Jeff Geerling</a>, this might happen when you find yourself maintaining a larger <strong>number</strong> of projects.
Either way, it's clear that there will come a time where you need to start making hard decisions on how to spend your time.</p>
<h2>Set A Policy</h2>
<p>Jeff's solution, which I've adopted, is to set <a href="https://www.jeffgeerling.com/blog/2020/enabling-stale-issue-bot-on-my-github-repositories">well-defined</a> policies for which issues are allowed to slip through the cracks.</p>
<p>The strategy is simple in essence.
The maintainer, Geerling or I in this case, directly addresses issues and PRs which are important to the existing functionality of the software first.
New features or minor bugs come second.
Feature requests that help a single person or bug reports that address a rare edge case are of lowest priority.</p>
<p>If an issue or pull request cannot be addressed within 30 days of its opening, it is marked stale.
If another 30 days pass before it becomes active again, it is closed.
Over time, unimportant topics are cleared away, leaving only the ones critical to the project's mission.</p>
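<p>For the curious, a policy like this can be automated. A hedged sketch using GitHub's <code>actions/stale</code> (not necessarily the exact workflow Harper or Geerling runs) might look like:</p>
<pre><code>name: Mark and close stale issues
on:
  schedule:
    - cron: "0 0 * * *" # run once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 30 # mark stale after 30 days of inactivity
          days-before-close: 30 # close 30 days after being marked stale
          stale-issue-message: >
            This issue has been inactive for a while and is now marked stale.
</code></pre>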
<p>A policy like this sets "no" as the default and helps prevent runaway code quality declines.
I instituted it on Monday.
If it needs revision, I'm open to comments and feedback.</p>
<h3>Harper's False Positives</h3>
<p>This policy stands in the face of an almost overwhelming number of issues related to false-positives in Harper's algorithm.
<a href="https://github.com/Automattic/harper/issues/2124">Here's a representative example.</a></p>
<p>Since these false-positive reports represent such a significant plurality of the issues on GitHub, I feel the need to handle them separately from everything else.
They're usually resolved quickly but left open out of neglect; they're trivial enough for me to fix, but not to follow up on.
Moving forward, if anyone needs to report an issue related to a false-positive, please do so <a href="https://writewithharper.com/report-problematic-lint">here</a>.</p>
<h2>Develop a Sense of Importance</h2>
<p>There is no algorithm to determine which issues or PRs are the most important to a project's success.
Typically, I'll read through five or so, and pick the one that "feels" the most important to work on.
It's often a balance of prioritizing time-sensitive items, items that could result in publicity, and usefulness to my personal work.
It's hard to put it into words, which is why it must be developed, not taught.</p>
<h2>What Do You Think?</h2>
<p>I consider myself a novice maintainer, so I'd love to hear from more mature maintainers.
How do you find signal through the noise?</p>
<hr>
<h2>Additional Reading</h2>
<ul>
<li><a href="https://www.jeffgeerling.com/blog/2016/why-i-close-prs-oss-project-maintainer-notes">Why I close PRs - Jeff Geerling</a></li>
</ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Avoid Complexity]]></title>
            <link>https://elijahpotter.dev/articles/avoid_complexity</link>
            <guid>https://elijahpotter.dev/articles/avoid_complexity</guid>
            <pubDate>Fri, 17 Oct 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Avoid Complexity</h1>
<p>For as long as I can remember, I've told people that the real challenge of software engineering isn't writing code.
It isn't documentation or CI pipelines either.
No, the biggest challenge in software engineering is managing complexity.</p>
<p>Think about it.
The typical software engineer is a single person on a vast team, writing and editing a system composed of thousands, millions, or even billions of lines of code.
I find that intimidating.
How in the world are we supposed to build robust and reliable systems in the face of such complexity?</p>
<p>I worry that, in my fascination with the <a href="./LLM_assisted_fuzzing">novel</a> <a href="more_transformation-based_learning">applications</a> of machine learning we hear about on a day-to-day basis, I've gotten distracted from the beast of complexity.
I've allowed the Harper codebase to accrue tech debt.</p>
<p>I recently watched <a href="https://www.youtube.com/watch?v=gcwzWzC7gUA">DHH's keynote from Rails World 2025</a>.
While I definitely do not agree with everything he has to say, I believe he makes some poignant points about complexity.
Some things, he says, which we consider normal today would have been ludicrous ten years ago.
Somehow, with computer hardware faster than ever, we've arrived at a point where 20-minute CI runs are acceptable.
Where a deploy to production can take an hour and setting up a new environment can take days.</p>
<p>I want to change that—at least, for Harper.</p>
<p>While working on several smaller projects, I've been cutting the fat out of <a href="https://github.com/automattic/harper">the Harper monorepo.</a>
That means removing unnecessary build steps, parallelizing individual CI jobs, and removing optimization flags that don't work.</p>
<p>Harper isn't at the 2005 standard yet, but I want contributing to be as easy as possible.
Ideally, it shouldn't take more than five minutes to go from cloning the repo to committing code.
That starts with simplifying <a href="https://writewithharper.com/docs/contributors/environment">the environment</a>.</p>
<h2>Addendum</h2>
<p>I just came across <a href="https://digitalsociety.coop/posts/migrating-to-hetzner-cloud/">this other blog post</a> where they discuss achieving three times the performance for a quarter of the price by switching from
a complex AWS setup to a bare metal Hetzner server.
Another win for simpler systems!</p>]]></description>
            <content:encoded><![CDATA[<h1>Avoid Complexity</h1>
<p>For as long as I can remember, I've told people that the real challenge of software engineering isn't writing code.
It isn't documentation or CI pipelines either.
No, the biggest challenge in software engineering is managing complexity.</p>
<p>Think about it.
The typical software engineer is a single person on a vast team, writing and editing a system composed of thousands, millions, or even billions of lines of code.
I find that intimidating.
How in the world are we supposed to build robust and reliable systems in the face of such complexity?</p>
<p>I worry that, in my fascination with the <a href="./LLM_assisted_fuzzing">novel</a> <a href="more_transformation-based_learning">applications</a> of machine learning we hear about on a day-to-day basis, I've gotten distracted from the beast of complexity.
I've allowed the Harper codebase to accrue tech debt.</p>
<p>I recently watched <a href="https://www.youtube.com/watch?v=gcwzWzC7gUA">DHH's keynote from Rails World 2025</a>.
While I definitely do not agree with everything he has to say, I believe he makes some poignant points about complexity.
Some things, he says, which we consider normal today would have been ludicrous ten years ago.
Somehow, with computer hardware faster than ever, we've arrived at a point where 20-minute CI runs are acceptable.
Where a deploy to production can take an hour and setting up a new environment can take days.</p>
<p>I want to change that—at least, for Harper.</p>
<p>While working on several smaller projects, I've been cutting the fat out of <a href="https://github.com/automattic/harper">the Harper monorepo.</a>
That means removing unnecessary build steps, parallelizing individual CI jobs, and removing optimization flags that don't work.</p>
<p>Harper isn't at the 2005 standard yet, but I want contributing to be as easy as possible.
Ideally, it shouldn't take more than five minutes to go from cloning the repo to committing code.
That starts with simplifying <a href="https://writewithharper.com/docs/contributors/environment">the environment</a>.</p>
<h2>Addendum</h2>
<p>I just came across <a href="https://digitalsociety.coop/posts/migrating-to-hetzner-cloud/">this other blog post</a> where they discuss achieving three times the performance for a quarter of the price by switching from
a complex AWS setup to a bare metal Hetzner server.
Another win for simpler systems!</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/cathedral_st_john.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Using Codex Is a Lot like Baking]]></title>
            <link>https://elijahpotter.dev/articles/using_codex_is_a_lot_like_baking</link>
            <guid>https://elijahpotter.dev/articles/using_codex_is_a_lot_like_baking</guid>
            <pubDate>Mon, 06 Oct 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Using Codex is a Lot Like Baking</h1>
<p>The biggest problem with AI programming today is not what I expected a few years ago.
Back then, I truly didn't believe these tools would be able to do any economically valuable work.</p>
<p>For web development, modern programming tools like <a href="https://openai.com/codex/">Codex</a> are quite good.
They are able to understand a query and identify the relevant areas in a codebase.
Then, they can make modifications and iterate on the solution until it fits the prompt.
Frankly, I never expected them to reach this point.</p>
<p>No, the prevailing problem I've encountered with these tools is not their ability to do work.
Rather, the problem comes from their inability to ask questions up-front and iterate on their mental model before starting to code.</p>
<p>Here's what I mean.</p>
<h2>Some Context</h2>
<p><a href="https://writewithharper.com">Harper's landing page</a> is pretty important.
It sets expectations for potential users with a working in-browser demo and reduces unproductive traffic to our issue-tracker by answering common questions.
In the past, improving the capital-"Q" Quality of our landing page has had a significant impact on Harper's growth rate.</p>
<h2>What Did I Do?</h2>
<p>Earlier this week, I wanted to make some significant changes to parts of this landing page.
Since the complexity of this part of the code-base is relatively small, I wanted to try something: a new (to me) way to work with OpenAI's Codex.</p>
<p>Most of the marketing materials surrounding these tools imply that having a conversation with the "agent" is the intended workflow.
Sam Altman et al. make it seem like I'm supposed to boot up Codex and have a discussion.
At the end of that discussion, the thing in my head is down on paper (or code).</p>
<p>I wanted to try something new.
Instead of a conversation, I wrote a detailed goal document.
I usually wouldn't do such a thing for such a trivial task, but as I said: this was an experiment.</p>
<p>To parallel what a real project doc would look like, I included what changes I wanted, why I wanted them, and how they should be made.
Notably, I tried to anticipate any possible revisions or misinterpretations the agent might make, which I would later have to have a conversation about.
When I was done, I gave it to the agent and went to work on something else.
I came back an hour later, and it was done.</p>
<h2>What's the Big Deal?</h2>
<p>Imagine this situation. It may sound familiar to you.</p>
<p><em>You sit down to experiment with an LLM. You want to see what all the hype around vibe-coding looks like.</em>
<em>You tell the agent what you want. You wait a few minutes for it to do its thing. The result doesn't look right, so you prompt it once more. You wait some more. The cycle continues.</em></p>
<p>This is extraordinarily inefficient.
You're cosplaying as a micromanaging boss.</p>
<p>If I had used the conversational micro-managing strategy when working on the landing page, it probably would have taken just as long (about an hour).
The downside is that I wouldn't have gotten to work on something else in the meantime.</p>
<p>By writing a planning document for your agent, you can give yourself room to breathe, saving time and being more productive as a result.
It's a lot like baking.
You put the ingredients together, put them in the oven, then walk away and do something else while it bakes.</p>]]></description>
            <content:encoded><![CDATA[<h1>Using Codex is a Lot Like Baking</h1>
<p>The biggest problem with AI programming today is not what I expected a few years ago.
Back then, I truly didn't believe these tools would be able to do any economically valuable work.</p>
<p>For web development, modern programming tools like <a href="https://openai.com/codex/">Codex</a> are quite good.
They are able to understand a query and identify the relevant areas in a codebase.
Then, they can make modifications and iterate on the solution until it fits the prompt.
Frankly, I never expected them to reach this point.</p>
<p>No, the prevailing problem I've encountered with these tools is not their ability to do work.
Rather, the problem comes from their inability to ask questions up-front and iterate on their mental model before starting to code.</p>
<p>Here's what I mean.</p>
<h2>Some Context</h2>
<p><a href="https://writewithharper.com">Harper's landing page</a> is pretty important.
It sets expectations for potential users with a working in-browser demo and reduces unproductive traffic to our issue-tracker by answering common questions.
In the past, improving the capital-"Q" Quality of our landing page has had a significant impact on Harper's growth rate.</p>
<h2>What Did I Do?</h2>
<p>Earlier this week, I wanted to make some significant changes to parts of this landing page.
Since the complexity of this part of the code-base is relatively small, I wanted to try something: a new (to me) way to work with OpenAI's Codex.</p>
<p>Most of the marketing materials surrounding these tools imply that having a conversation with the "agent" is the intended workflow.
Sam Altman et al. make it seem like I'm supposed to boot up Codex and have a discussion.
At the end of that discussion, the thing in my head is down on paper (or code).</p>
<p>I wanted to try something new.
Instead of a conversation, I wrote a detailed goal document.
I usually wouldn't do such a thing for such a trivial task, but as I said: this was an experiment.</p>
<p>To parallel what a real project doc would look like, I included what changes I wanted, why I wanted them, and how they should be made.
Notably, I tried to anticipate any possible revisions or misinterpretations the agent might make, which I would later have to have a conversation about.
When I was done, I gave it to the agent and went to work on something else.
I came back an hour later, and it was done.</p>
<h2>What's the Big Deal?</h2>
<p>Imagine this situation. It may sound familiar to you.</p>
<p><em>You sit down to experiment with an LLM. You want to see what all the hype around vibe-coding looks like.</em>
<em>You tell the agent what you want. You wait a few minutes for it to do its thing. The result doesn't look right, so you prompt it once more. You wait some more. The cycle continues.</em></p>
<p>This is extraordinarily inefficient.
You're cosplaying as a micromanaging boss.</p>
<p>If I had used the conversational micro-managing strategy when working on the landing page, it probably would have taken just as long (about an hour).
The downside is that I wouldn't have gotten to work on something else in the meantime.</p>
<p>By writing a planning document for your agent, you can give yourself room to breathe, saving time and being more productive as a result.
It's a lot like baking.
You put the ingredients together, put them in the oven, then walk away and do something else while it bakes.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[My Writing Environment as a Software Engineer]]></title>
            <link>https://elijahpotter.dev/articles/my_writing_environment_as_a_software_engineer</link>
            <guid>https://elijahpotter.dev/articles/my_writing_environment_as_a_software_engineer</guid>
            <pubDate>Sun, 05 Oct 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>My Writing Environment As a Software Engineer</h1>
<blockquote>
<p><strong>TL;DR: <em>Get a good text editor, and get good at using it.
Keep a notepad by your side.
Find a quiet place to write.</em></strong></p>
</blockquote>
<p>Writing is one of life's greater joys.
It's a mental workout that often brings me a level of clarity that is hard to find elsewhere.
I have never sat down to write for an extended period of time without coming away with a greater understanding of myself and the universe.</p>
<p>I imagine that sounds pretty cheesy.</p>
<p>I just want to impress upon you the importance writing has had in my life.
As with anything this important, I have an instinctual urge to optimize it.
As software engineers, isn't that part of our modus operandi?
To spend an unnecessary amount of time optimizing things?</p>
<p>Well, that's exactly what I've done.
Over the years, I've spent an unruly amount of time tweaking and changing the tools I use to write.
I'd love to talk about 'em.</p>
<h2>Neovim</h2>
<p>In the past few years, Neovim has been the talk of the town.
Its ancestor, <code>vi</code>, was born of the observation that most writing is re-writing, and thus it includes a vast array of tools built specifically for editing text, rather than writing it.
<a href="https://www.reddit.com/r/neovim/"><code>r/neovim</code></a> has swelled to more than 110 thousand weekly visitors, and there are thousands of plugins and LSPs to choose from.</p>
<p>Unless you choose a pre-built distro like <a href="https://www.lazyvim.org/">LazyVim</a>, Neovim does very little out of the box.
It is through relentless customization that it becomes something truly powerful.
Besides the various keybindings in my config, the important plugins and tools that I use are:</p>
<h3><a href="https://writewithharper.com">Harper</a></h3>
<p>As I'm the chief maintainer, this one should come as no surprise.
Don't worry, I won't give you the marketing spiel.
I'll simply say that Harper is a grammar checker that happens to work exceptionally well in Neovim.</p>
<h3><a href="https://github.com/folke/zen-mode.nvim">Zen Mode</a></h3>
<img src="/images/zen_mode.png" alt="Zen Mode: Activated">
<p>Plain Neovim organizes each file to cover the maximum amount of space on-screen.
Practically, this means most of my text ends up on the left side, which can hurt my neck.
Zen mode restricts the width of the file and centers it on the screen.</p>
<h3><a href="./the_simplest_neovim_markdown_setup">Tatum</a></h3>
<img src="/images/tatum_screenshot.webp" alt="A screenshot of Tatum at work">
<p>Tatum is a small tool I wrote in a few hours that lets me live-preview my Markdown as HTML.
I won't repeat myself.
If you're curious about Tatum, check out my <a href="./the_simplest_neovim_markdown_setup">previous blog post.</a></p>
<h3>More Details</h3>
<p>If you're thirsty for <em>all</em> the details of my Neovim setup, take a quick look at the configuration code on <a href="https://github.com/elijah-potter/dots/tree/master/nvim">GitHub</a>.</p>
<h2>ReMarkable 2</h2>
<p>A few years ago, while I was still in school, I splurged and purchased a used Remarkable 2.
Even now, there is rarely a day that goes by where I don't use it.
Sure, everything it does could be easily approximated by paper.
On the other hand, the thin device saves my back whenever I have to travel and there's a certain joy that comes from using an e-ink device like this.</p>
<img src="/images/remarkable.jpeg" alt="A Recent Brainstorming Session on the ReMarkable">
<p>I mostly use it to read digital books and blog posts, but it's also come in handy for getting around writer's block.
Being forced to slow down and use my hand to write, rather than a keyboard, is quite helpful in some inexplicable way.</p>
<h2>A Quiet Space</h2>
<p>More important than any text editor, at least for me, is a quiet space.</p>
<p>As I said before, writing can take real cognitive effort.
A space with distracting noise can make it unnecessarily hard to get words on paper.
I've found local libraries or co-working spaces to be great places for quiet authorship.</p>
<p>Whatever you do, do not listen to music with words in it.
Even if it <em>feels</em> more productive, I find in retrospect that it makes me less so.</p>
<h2>Conclusion</h2>
<p>There are probably some important points I'm leaving out, but I'm confident I've hit the big three.
For me, the most important parts of a good writing environment, as a software engineer, are my text editor, a notepad, and a quiet space.</p>]]></description>
            <content:encoded><![CDATA[<h1>My Writing Environment As a Software Engineer</h1>
<blockquote>
<p><strong>TL;DR: <em>Get a good text editor, and get good at using it.
Keep a notepad by your side.
Find a quiet place to write.</em></strong></p>
</blockquote>
<p>Writing is one of life's greater joys.
It's a mental workout that often brings me a level of clarity that is hard to find elsewhere.
I have never sat down to write for an extended period of time without coming away with a greater understanding of myself and the universe.</p>
<p>I imagine that sounds pretty cheesy.</p>
<p>I just want to impress upon you the importance writing has had in my life.
As with anything this important, I have an instinctual urge to optimize it.
As software engineers, isn't that part of our modus operandi?
To spend an unnecessary amount of time optimizing things?</p>
<p>Well, that's exactly what I've done.
Over the years, I've spent an unruly amount of time tweaking and changing the tools I use to write.
I'd love to talk about 'em.</p>
<h2>Neovim</h2>
<p>In the past few years, Neovim has been the talk of the town.
Its ancestor, <code>vi</code>, was born of the observation that most writing is re-writing, and thus it includes a vast array of tools built specifically for editing text, rather than writing it.
<a href="https://www.reddit.com/r/neovim/"><code>r/neovim</code></a> has swelled to more than 110 thousand weekly visitors, and there are thousands of plugins and LSPs to choose from.</p>
<p>Unless you choose a pre-built distro like <a href="https://www.lazyvim.org/">LazyVim</a>, Neovim does very little out of the box.
It is through relentless customization that it becomes something truly powerful.
Besides the various keybindings in my config, the important plugins and tools that I use are:</p>
<h3><a href="https://writewithharper.com">Harper</a></h3>
<p>As I'm the chief maintainer, this one should come as no surprise.
Don't worry, I won't give you the marketing spiel.
I'll simply say that Harper is a grammar checker that happens to work exceptionally well in Neovim.</p>
<h3><a href="https://github.com/folke/zen-mode.nvim">Zen Mode</a></h3>
<img src="/images/zen_mode.png" alt="Zen Mode: Activated">
<p>Plain Neovim organizes each file to cover the maximum amount of space on-screen.
Practically, this means most of my text ends up on the left side, which can hurt my neck.
Zen mode restricts the width of the file and centers it on the screen.</p>
<h3><a href="./the_simplest_neovim_markdown_setup">Tatum</a></h3>
<img src="/images/tatum_screenshot.webp" alt="A screenshot of Tatum at work">
<p>Tatum is a small tool I wrote in a few hours that lets me live-preview my Markdown as HTML.
I won't repeat myself.
If you're curious about Tatum, check out my <a href="./the_simplest_neovim_markdown_setup">previous blog post.</a></p>
<h3>More Details</h3>
<p>If you're thirsty for <em>all</em> the details of my Neovim setup, take a quick look at the configuration code on <a href="https://github.com/elijah-potter/dots/tree/master/nvim">GitHub</a>.</p>
<h2>ReMarkable 2</h2>
<p>A few years ago, while I was still in school, I splurged and purchased a used Remarkable 2.
Even now, there is rarely a day that goes by where I don't use it.
Sure, everything it does could be easily approximated by paper.
On the other hand, the thin device saves my back whenever I have to travel and there's a certain joy that comes from using an e-ink device like this.</p>
<img src="/images/remarkable.jpeg" alt="A Recent Brainstorming Session on the ReMarkable">
<p>I mostly use it to read digital books and blog posts, but it's also come in handy for getting around writer's block.
Being forced to slow down and use my hand to write, rather than a keyboard, is quite helpful in some inexplicable way.</p>
<h2>A Quiet Space</h2>
<p>More important than any text editor, at least for me, is a quiet space.</p>
<p>As I said before, writing can take real cognitive effort.
A space with distracting noise can make it unnecessarily hard to get words on paper.
I've found local libraries or co-working spaces to be great places for quiet authorship.</p>
<p>Whatever you do, do not listen to music with words in it.
Even if it <em>feels</em> more productive, I find in retrospect that it makes me less so.</p>
<h2>Conclusion</h2>
<p>There are probably some important points I'm leaving out, but I'm confident I've hit the big three.
For me, the most important parts of a good writing environment, as a software engineer, are my text editor, a notepad, and a quiet space.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/remarkable.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Brainstorming a Harper Service]]></title>
            <link>https://elijahpotter.dev/articles/brainstorming_a_harper_service</link>
            <guid>https://elijahpotter.dev/articles/brainstorming_a_harper_service</guid>
            <pubDate>Mon, 29 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Brainstorming a Harper Service</h1>
<p>SaaS products are all the rage these days.
Historically, Harper has positioned itself as <a href="./local-first_software_is_easier_to_scale">local-first</a>, which many often view as anti-SaaS.
That is not the case.</p>
<p>I describe Harper as "local-first" because it is principally meant to be run locally, meaning that your data doesn't leave your device.
Privacy is one of our core beliefs, and being "local-first" is part of how we honor that.</p>
<p>Given Harper's position on privacy, discussion of a centralized service (running on a server) might seem strange.
Today, I want to explain why a centralized service is necessary, how we intend to tackle it as part of the open source project, and how its existence will not affect our users' privacy in any way.</p>
<h2>Why We Need It <strong>Now</strong></h2>
<p>Users despise false positives in Harper's grammar engine.</p>
<p>We'll often make a modification, test it as much as possible internally, and release it, only for a user to discover false-positives on their college essay.
We've significantly reduced their frequency through techniques like <a href="./LLM_assisted_fuzzing">LLM-Assisted Fuzzing</a> and <a href="./harper_evolves">artificial selection</a>, but they persist.</p>
<p>Far and away, most of the time between when we get a bug report and when we get a fix into production is spent doing meaningless "paperwork".
Reading through issues, writing boilerplate code for tests, and preparing pull requests take up a lot of time.
I want to upend this entire loop and find a simpler way to iterate on false-positives.</p>
<p>The first step in this process is reducing the overhead for users to report them in the first place.
Instead of filing an issue, I'd love for users to be able to push a button and have all relevant context sent directly to our inboxes.
No GitHub account required, no fuss.</p>
<p>Obviously, we won't be sending any information without clearly disclosing the precise data being sent from the client.</p>
<p>Since these reports will be in a structured format, we can bundle fixes into larger batches, reducing overhead (and potentially making things easier for LLMs).
The Harper project's ability to continuously and surgically improve our service is part of what sets us apart from our competitors.
It's rare for user feedback to have such a direct impact on Grammarly's core engine.</p>
<p>This is just the start. Once we've proven that we can maintain such a system for our users, we will move on to more ambitious goals.</p>
<h2>How We Plan to Make It Happen</h2>
<p>I've had success with these technologies in the past, and I believe they'll work quite well for our initial use-case and scale quite nicely as the service becomes more capable.</p>
<h3>SvelteKit</h3>
<p>Harper's website is already built using SvelteKit.
It works.
Why mess with success?</p>
<h3>Database: <a href="https://sqlite.org/index.html">SQLite</a> with <a href="https://orm.drizzle.team/">Drizzle</a></h3>
<p>On modern hardware, SQLite can handle tens of thousands of transactions per second.
Since Harper (as a rule) will place little load on the service per-user, I expect SQLite can service our needs for a long time.
When that time expires, Drizzle (an ORM) makes it trivial to move to a more robust database down the road (I imagine PostgreSQL).</p>
<p>For the moment, iteration speed is paramount.
I want to move fast.
To that end, <a href="https://j.cv/">James</a> and I have discussed hosting everything on a small VPS, and we agree it's the best choice.</p>
<h3>Auth.JS</h3>
<p>Authentication is not something anyone should roll themselves.
Auth.JS is a proven solution that pairs well with Drizzle for building out user profiles, authentication pages, and everything in between.
The needs of the project won't necessitate this right away.
When the time comes, using Auth.JS (from my experience) will shorten the auth problem down from weeks to just hours.</p>
<h2>The Future</h2>
<p>I'm thrilled that we're finally moving in this direction.
Being local-only has taken us far, but loosening the reins to be more simply "local-first" will allow us to tackle more interesting problems and provide more value to our users.
I can't wait to get started.</p>]]></description>
            <content:encoded><![CDATA[<h1>Brainstorming a Harper Service</h1>
<p>SaaS products are all the rage these days.
Historically, Harper has positioned itself as <a href="./local-first_software_is_easier_to_scale">local-first</a>, which many often view as anti-SaaS.
That is not the case.</p>
<p>I describe Harper as "local-first" because it is principally meant to be run locally, meaning that your data doesn't leave your device.
Privacy is one of our core beliefs, and being "local-first" is part of how we honor that.</p>
<p>Given Harper's position on privacy, discussion of a centralized service (running on a server) might seem strange.
Today, I want to explain why a centralized service is necessary, how we intend to tackle it as part of the open source project, and how its existence will not affect our users' privacy in any way.</p>
<h2>Why We Need It <strong>Now</strong></h2>
<p>Users despise false positives in Harper's grammar engine.</p>
<p>We'll often make a modification, test it as much as possible internally, and release it, only for a user to discover false-positives on their college essay.
We've significantly reduced their frequency through techniques like <a href="./LLM_assisted_fuzzing">LLM-Assisted Fuzzing</a> and <a href="./harper_evolves">artificial selection</a>, but they persist.</p>
<p>Far and away, most of the time between when we get a bug report and when we get a fix into production is spent doing meaningless "paperwork".
Reading through issues, writing boilerplate code for tests, and preparing pull requests take up a lot of time.
I want to upend this entire loop and find a simpler way to iterate on false-positives.</p>
<p>The first step in this process is reducing the overhead for users to report them in the first place.
Instead of filing an issue, I'd love for users to be able to push a button and have all relevant context sent directly to our inboxes.
No GitHub account required, no fuss.</p>
<p>Obviously, we won't be sending any information without clearly disclosing the precise data being sent from the client.</p>
<p>Since these reports will be in a structured format, we can bundle fixes into larger batches, reducing overhead (and potentially making things easier for LLMs).
The Harper project's ability to continuously and surgically improve our service is part of what sets us apart from our competitors.
It's rare for user feedback to have such a direct impact on Grammarly's core engine.</p>
<p>This is just the start. Once we've proven that we can maintain such a system for our users, we will move on to more ambitious goals.</p>
<h2>How We Plan to Make It Happen</h2>
<p>I've had success with these technologies in the past, and I believe they'll work quite well for our initial use-case and scale quite nicely as the service becomes more capable.</p>
<h3>SvelteKit</h3>
<p>Harper's website is already built using SvelteKit.
It works.
Why mess with success?</p>
<h3>Database: <a href="https://sqlite.org/index.html">SQLite</a> with <a href="https://orm.drizzle.team/">Drizzle</a></h3>
<p>On modern hardware, SQLite can handle tens of thousands of transactions per second.
Since Harper (as a rule) will place little load on the service per-user, I expect SQLite can service our needs for a long time.
When that time expires, Drizzle (an ORM) makes it trivial to move to a more robust database down the road (I imagine PostgreSQL).</p>
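<p>To make that concrete, here is a hypothetical drizzle-orm schema sketch for the structured false-positive reports described above. The table and column names are illustrative only, not a committed design.</p>

```javascript
// Hypothetical schema sketch (not a committed design) for structured
// false-positive reports, using drizzle-orm's SQLite column builders.
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const falsePositiveReports = sqliteTable("false_positive_reports", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  lintKind: text("lint_kind").notNull(), // which rule produced the lint
  problemText: text("problem_text").notNull(), // the exact span Harper flagged
  context: text("context").notNull(), // surrounding sentence, as disclosed to the user
  harperVersion: text("harper_version").notNull(),
  createdAt: integer("created_at", { mode: "timestamp" }).notNull(),
});
```

<p>Because the schema lives in code, a later move to PostgreSQL would mostly mean swapping these column builders for their pg-core equivalents.</p>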
<p>For the moment, iteration speed is paramount.
I want to move fast.
To that end, <a href="https://j.cv/">James</a> and I have discussed hosting everything on a small VPS, and we agree it's the best choice.</p>
<h3>Auth.JS</h3>
<p>Authentication is not something anyone should roll themselves.
Auth.JS is a proven solution that pairs well with Drizzle for building out user profiles, authentication pages, and everything in between.
The needs of the project won't necessitate this right away.
When the time comes, using Auth.JS (from my experience) will shorten the auth problem down from weeks to just hours.</p>
<h2>The Future</h2>
<p>I'm thrilled that we're finally moving in this direction.
Being local-only has taken us far, but loosening the reins to be more simply "local-first" will allow us to tackle more interesting problems and provide more value to our users.
I can't wait to get started.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Improving Harper for Old Laptops]]></title>
            <link>https://elijahpotter.dev/articles/improving_harper_for_old_laptops</link>
            <guid>https://elijahpotter.dev/articles/improving_harper_for_old_laptops</guid>
            <pubDate>Fri, 19 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Improving Harper for Old Laptops</h1>
<p>The most common complaint I’ve been hearing as of late relates to Harper’s performance. I’ve been told that Harper has gotten slow.</p>
<p>I was puzzled. Nothing had substantially changed in our core engine that could have made it slower. Our benchmarks hadn’t shown any slow-downs. What could be prompting users to complain about performance?</p>
<p>The problem was clearly in the Chrome extension. Those who complained were doing so with screenshots from Notion or Ghost. Could recent developments in the <a href="https://webassembly.org/news/2025-09-17-wasm-3.0/">WebAssembly ecosystem</a> be the cause? Nope. Could it have something to do with the kinds of applications users were trying to use with Harper? Nope. The complaints seemed untethered to a specific domain or program. What could be slowing our users down?</p>
<h2>The Problem</h2>
<p>As it turns out, the problem wasn’t the actual grammar checking; it was rendering the highlights on the page.</p>
<p>The Chrome extension’s hot path looks something like this:</p>
<ol>
<li>Extract the text from the page</li>
<li>Run <code>harper-core</code> (the grammar engine) over it to find mistakes</li>
<li>Use DOM APIs to compute the bounding boxes of those mistakes on the page. This is important: This computation requires the browser to reflow the layout of the page.</li>
<li>Render additional DOM elements to the screen over these bounding boxes. This also requires the browser to reflow the layout of the page.</li>
</ol>
<p>Requiring the browser to lay out the page more than once per frame is called <em>“layout thrash”</em>, and it’s considered bad practice for performance reasons. It causes more CPU usage than necessary, which can slow down complex websites and reduce battery life. It isn’t noticeable for newer computers (under 4 years old), but it <strong>can</strong> be quite jarring for older laptops.</p>
<h2>The Solution</h2>
<p>Fortunately, there’s a solution. Just recently, W3C standardized <a href="https://developer.mozilla.org/en-US/docs/Web/API/CSS_Custom_Highlight_API">The Custom Highlight API</a>, which allows us to render highlights without interacting with the DOM at all. This saves us two full layout refreshes and allows the browser to offload more work to the GPU, saving CPU cycles and battery life.</p>
<p>It’s not all roses and rainbows. For one, we can only use this optimization for text editors that use <code>contenteditable</code> elements. It doesn’t help for <code>&#x3C;textarea /></code> or <code>&#x3C;input /></code> elements. Firefox also doesn’t support this part of the API, which means we have to fall back to the old highlight renderer on Gecko-based browsers.</p>
<p>There are also some minor differences between how the new renderer and the old renderer style their highlights. This is because highlights using the Custom Highlight API can’t be styled with just any CSS, but only a subset of properties.</p>]]></description>
            <content:encoded><![CDATA[<h1>Improving Harper for Old Laptops</h1>
<p>The most common complaint I’ve been hearing as of late relates to Harper’s performance. I’ve been told that Harper has gotten slow.</p>
<p>I was puzzled. Nothing had substantially changed in our core engine that could have made it slower. Our benchmarks hadn’t shown any slow-downs. What could be prompting users to complain about performance?</p>
<p>The problem was clearly in the Chrome extension. Those who complained were doing so with screenshots from Notion or Ghost. Could recent developments in the <a href="https://webassembly.org/news/2025-09-17-wasm-3.0/">WebAssembly ecosystem</a> be the cause? Nope. Could it have something to do with the kinds of applications users were trying to use with Harper? Nope. The complaints seemed untethered to a specific domain or program. What could be slowing our users down?</p>
<h2>The Problem</h2>
<p>As it turns out, the problem wasn’t the actual grammar checking; it was rendering the highlights on the page.</p>
<p>The Chrome extension’s hot path looks something like this:</p>
<ol>
<li>Extract the text from the page</li>
<li>Run <code>harper-core</code> (the grammar engine) over it to find mistakes</li>
<li>Use DOM APIs to compute the bounding boxes of those mistakes on the page. This is important: This computation requires the browser to reflow the layout of the page.</li>
<li>Render additional DOM elements to the screen over these bounding boxes. This also requires the browser to reflow the layout of the page.</li>
</ol>
<p>Requiring the browser to lay out the page more than once per frame is called <em>“layout thrash”</em>, and it’s considered bad practice for performance reasons. It causes more CPU usage than necessary, which can slow down complex websites and reduce battery life. It isn’t noticeable for newer computers (under 4 years old), but it <strong>can</strong> be quite jarring for older laptops.</p>
<h2>The Solution</h2>
<p>Fortunately, there’s a solution. Just recently, W3C standardized <a href="https://developer.mozilla.org/en-US/docs/Web/API/CSS_Custom_Highlight_API">The Custom Highlight API</a>, which allows us to render highlights without interacting with the DOM at all. This saves us two full layout refreshes and allows the browser to offload more work to the GPU, saving CPU cycles and battery life.</p>
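<p>For the curious, here is a minimal sketch of what painting lints with this API can look like. The function and highlight names are illustrative, not Harper's actual internals; only the span-merging helper is pure logic, while the rest is browser-only.</p>

```javascript
// Minimal sketch (not Harper's actual code) of painting lint spans with
// the CSS Custom Highlight API instead of overlay elements.

// Pure helper: merge overlapping {start, end} character spans so we
// create one Range per contiguous region.
function mergeSpans(spans) {
  const sorted = [...spans].sort((a, b) => a.start - b.start);
  const merged = [];
  for (const s of sorted) {
    const last = merged[merged.length - 1];
    if (last && s.start <= last.end) {
      last.end = Math.max(last.end, s.end); // extend the previous span
    } else {
      merged.push({ start: s.start, end: s.end });
    }
  }
  return merged;
}

// Browser-only part: build Ranges inside a single text node and register
// them under a named highlight. No DOM elements are added, so no reflow.
function paintLints(textNode, spans) {
  if (typeof CSS === "undefined" || !("highlights" in CSS)) {
    return; // unsupported browsers fall back to the old renderer
  }
  const ranges = mergeSpans(spans).map(({ start, end }) => {
    const r = new Range();
    r.setStart(textNode, start);
    r.setEnd(textNode, end);
    return r;
  });
  CSS.highlights.set("harper-lint", new Highlight(...ranges));
}
```

<p>The styling then lives entirely in CSS, via the <code>::highlight(harper-lint)</code> pseudo-element, which accepts only the restricted property subset mentioned below (e.g. <code>text-decoration</code> and colors).</p>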
<p>It’s not all roses and rainbows. For one, we can only use this optimization for text editors that use <code>contenteditable</code> elements. It doesn’t help for <code>&#x3C;textarea /></code> or <code>&#x3C;input /></code> elements. Firefox also doesn’t support this part of the API, which means we have to fall back to the old highlight renderer on Gecko-based browsers.</p>
<p>There are also some minor differences between how the new renderer and the old renderer style their highlights. This is because highlights using the Custom Highlight API can’t be styled with just any CSS, but only a subset of properties.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Demos Make Life Worth Living]]></title>
            <link>https://elijahpotter.dev/articles/demos_make_life_worth_living</link>
            <guid>https://elijahpotter.dev/articles/demos_make_life_worth_living</guid>
            <pubDate>Thu, 18 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Demos Make Life Worth Living</h1>
<img src="/images/bethesda_fountain.png" alt="The Bethesda Fountain in NYC">
<p>The initial goal of any greenfield project of mine is to build a working demo. If possible, that demo should run on the web, so I can distribute it to friends and family.</p>
<p>One of my first posts to hit the front-page of Hacker News represented exactly this philosophy. At the time, I was in high-school, taking an independent linear algebra class. I wanted to demonstrate exactly how the things I was learning could be applied to real-world problems. My blog post, <a href="./markov_chains_are_the_original_language_models">"Markov Chains are the Original Language Models"</a> included an interactive (small) language model that readers could use to generate meaningless text. I gave them the ability to toy around with the training data to see how it could affect the final model's generations.</p>
<p>I ended up getting an "A" in the class and a couple thousand hits online. More importantly, I had proven the utility of a demo. Since then, <a href="./building_a_software_render_engine_from_scratch">every</a> <a href="https://github.com/elijah-potter/thrax-language">major</a> <a href="https://writewithharper.com">project</a> of mine has had some kind of demo. Some of them better than others.</p>
<p>The Harper demo, in particular, was rough to start. It was enough to prove the concept and establish the project within a certain ecosystem, but it was far from perfect. It had bugs and, until now, hadn't evolved with the project at all.</p>
<p>For all my projects though, I see the initial demo as critical. Beyond proving the concept, the demo gives me an enormous amount of motivation. As an engineer, there are few things more thrilling than to watch someone toy around with the product of your work. When there's a live version of your product available, any incremental improvement comes with a hit of dopamine.</p>
<p>I had a great discussion with <a href="https://adamadam.blog/">Adam Zielinski</a> a few weeks ago, when he encouraged me to make the demo on the Harper website as close to the actual Harper experience as possible. Last week, I did exactly that.</p>
<p>Actually, I did more than that. Today, the demo isn't just similar to the experience of using our Chrome extension, it is exactly the same. The DOM-inspecting and manipulating technology found in the Chrome extension can now be embedded in any page. That includes the actual Harper website, and any other site you might want.</p>
<p>For now, I'm calling this the <a href="https://github.com/Automattic/harper/tree/master/packages/lint-framework"><code>lint-framework</code></a>, and it opens the doors to a good many other integrations (should we decide to build them). It is essentially a portable package that includes pretty much everything you need to provide Harper to any browser-based text editor.</p>
<p>Many applications allow users to build plugins into their WebView, but don't provide the full web extension API (Thunderbird, for example). In these cases, we can pull in the <code>lint-framework</code> and wire up the native plugin APIs to build configuration pages. It isn't a complete solution, but it cuts down the work and maintenance overhead enormously.</p>
<p>As it turns out, the cruft of Harper's demo was a major barrier to adoption. Since polishing it up late last week, we're already seeing metrics like bounce rate move in a positive direction.</p>]]></description>
            <content:encoded><![CDATA[<h1>Demos Make Life Worth Living</h1>
<img src="/images/bethesda_fountain.png" alt="The Bethesda Fountain in NYC">
<p>The initial goal of any greenfield project of mine is to build a working demo. If possible, that demo should run on the web, so I can distribute it to friends and family.</p>
<p>One of my first posts to hit the front-page of Hacker News represented exactly this philosophy. At the time, I was in high-school, taking an independent linear algebra class. I wanted to demonstrate exactly how the things I was learning could be applied to real-world problems. My blog post, <a href="./markov_chains_are_the_original_language_models">"Markov Chains are the Original Language Models"</a> included an interactive (small) language model that readers could use to generate meaningless text. I gave them the ability to toy around with the training data to see how it could affect the final model's generations.</p>
<p>I ended up getting an "A" in the class and a couple thousand hits online. More importantly, I had proven the utility of a demo. Since then, <a href="./building_a_software_render_engine_from_scratch">every</a> <a href="https://github.com/elijah-potter/thrax-language">major</a> <a href="https://writewithharper.com">project</a> of mine has had some kind of demo. Some of them better than others.</p>
<p>The Harper demo, in particular, was rough to start. It was enough to prove the concept and establish the project within a certain ecosystem, but it was far from perfect. It had bugs and, until now, hadn't evolved with the project at all.</p>
<p>For all my projects though, I see the initial demo as critical. Beyond proving the concept, the demo gives me an enormous amount of motivation. As an engineer, there are few things more thrilling than to watch someone toy around with the product of your work. When there's a live version of your product available, any incremental improvement comes with a hit of dopamine.</p>
<p>I had a great discussion with <a href="https://adamadam.blog/">Adam Zielinski</a> a few weeks ago, when he encouraged me to make the demo on the Harper website as close to the actual Harper experience as possible. Last week, I did exactly that.</p>
<p>Actually, I did more than that. Today, the demo isn't just similar to the experience of using our Chrome extension, it is exactly the same. The DOM-inspecting and manipulating technology found in the Chrome extension can now be embedded in any page. That includes the actual Harper website, and any other site you might want.</p>
<p>For now, I'm calling this the <a href="https://github.com/Automattic/harper/tree/master/packages/lint-framework"><code>lint-framework</code></a>, and it opens the doors to a good many other integrations (should we decide to build them). It is essentially a portable package that includes pretty much everything you need to provide Harper to any browser-based text editor.</p>
<p>Many applications allow users to build plugins into their WebView, but don't provide the full web extension API (Thunderbird, for example). In these cases, we can pull in the <code>lint-framework</code> and wire up the native plugin APIs to build configuration pages. It isn't a complete solution, but it cuts down the work and maintenance overhead enormously.</p>
<p>As it turns out, the cruft of Harper's demo was a major barrier to adoption. Since polishing it up late last week, we're already seeing metrics like bounce rate move in a positive direction.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/bethesda_fountain.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[What We Can Learn from New York City]]></title>
            <link>https://elijahpotter.dev/articles/what_we_can_learn_from_new_york_city</link>
            <guid>https://elijahpotter.dev/articles/what_we_can_learn_from_new_york_city</guid>
            <pubDate>Sat, 13 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h2>What We Can Learn from New York City</h2>
<img src="/images/nyc_from_a8c_noho_office.png" alt="The view from Automattic&#x27;s NoHo Office">
<p>Today marks the end of my first week in New York City. It is an exceptionally active place. Neither the hustle nor the bustle ever stops. Sirens sound at all hours of the day and you're constantly smelling entirely new things. I've been told New York is an acquired taste. I can see why they might say that.</p>
<p>Back home, it's rare I would walk down the street and overhear a random founder talk about their latest Y Combinator funding round. Here, it's apparently common enough to occur at least once a week. Everywhere I look, I find the headquarters of multinational companies and the offices of major governmental organizations.</p>
<h3>The City is Practical</h3>
<p>If I were to describe Portland, Oregon (which I visited a few weeks ago to attend WordCamp US) in one word, it would be optimistic. If I were to describe New York in one word, based on my incredibly limited experience thus far, it would be practical.</p>
<p>The city and the people who reside in it tolerate very little fluff. They say things as they are, in the kindest way possible. If they don't think your proposed route to a restaurant will work, they stop you before you finish explaining and put you on the right track. It saves everyone time and energy.</p>
<p>This is something I personally need to practice more often in my work. It is common for me to avoid productive conversations or code reviews, only because they might evoke a negative emotion in the short term. These same conversations usually produce positive emotions in the long term, so it is in my interest and in my colleagues' interest to have them. NYC is slowly training me to recognize these situations and make the right decision.</p>
<p>I don't think I'm alone here. I think a great many people in the modern world could benefit from a bolder mindset that takes on tough conversations rather than shying away.</p>
<h3>We're Not Done Yet</h3>
<p>I'll be in New York for another week. If all goes well, I'll have more opportunities to learn from the other members of the AI group. It's been great to hang out with James and the gang. The culture here is incredibly inviting. I am immensely grateful to be here.</p>]]></description>
            <content:encoded><![CDATA[<h2>What We Can Learn from New York City</h2>
<img src="/images/nyc_from_a8c_noho_office.png" alt="The view from Automattic&#x27;s NoHo Office">
<p>Today marks the end of my first week in New York City. It is an exceptionally active place. Neither the hustle nor the bustle ever stops. Sirens sound at all hours of the day and you're constantly smelling entirely new things. I've been told New York is an acquired taste. I can see why they might say that.</p>
<p>Back home, it's rare I would walk down the street and overhear a random founder talk about their latest Y Combinator funding round. Here, it's apparently common enough to occur at least once a week. Everywhere I look, I find the headquarters of multinational companies and the offices of major governmental organizations.</p>
<h3>The City is Practical</h3>
<p>If I were to describe Portland, Oregon (which I visited a few weeks ago to attend WordCamp US) in one word, it would be optimistic. If I were to describe New York in one word, based on my incredibly limited experience thus far, it would be practical.</p>
<p>The city and the people who reside in it tolerate very little fluff. They say things as they are, in the kindest way possible. If they don't think your proposed route to a restaurant will work, they stop you before you finish explaining and put you on the right track. It saves everyone time and energy.</p>
<p>This is something I personally need to practice more often in my work. It is common for me to avoid productive conversations or code reviews, only because they might evoke a negative emotion in the short term. These same conversations usually produce positive emotions in the long term, so it is in my interest and in my colleagues' interest to have them. NYC is slowly training me to recognize these situations and make the right decision.</p>
<p>I don't think I'm alone here. I think a great many people in the modern world could benefit from a bolder mindset that takes on tough conversations rather than shying away.</p>
<h3>We're Not Done Yet</h3>
<p>I'll be in New York for another week. If all goes well, I'll have more opportunities to learn from the other members of the AI group. It's been great to hang out with James and the gang. The culture here is incredibly inviting. I am immensely grateful to be here.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/nyc_from_a8c_noho_office.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[The Chrome Extension Supersedes the WordPress Plugin]]></title>
            <link>https://elijahpotter.dev/articles/the_chrome_extension_supersedes_the_wordpress_plugin</link>
            <guid>https://elijahpotter.dev/articles/the_chrome_extension_supersedes_the_wordpress_plugin</guid>
            <pubDate>Wed, 10 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Chrome Extension Supersedes the WordPress Plugin</h1>
<p>Several months ago, I announced the initial version of the Harper WordPress plugin. In the interest of those still using it, I’d like to give y’all an update.</p>
<p><a href="https://chromewebstore.google.com/detail/private-grammar-checker-h/lodbfhdipoipcjmlebjbgmmgekckhpfb">The Chrome extension</a> is now the recommended way to use Harper in WordPress. There are several compounding reasons for this. Most importantly, iterating on a Chrome extension is significantly faster than going through the review process for WordPress.com plugins. In fact, the WordPress.com Harper plugin, which you may already be using in your P2s, is currently several months out-of-date. While you may continue to use the WordPress plugin, it will no longer receive regular updates.</p>
<p>If you choose to switch to the Chrome extension, I think you’ll find the experience more holistic, since it is able to persist user data across sites.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Chrome Extension Supersedes the WordPress Plugin</h1>
<p>Several months ago, I announced the initial version of the Harper WordPress plugin. In the interest of those still using it, I’d like to give y’all an update.</p>
<p><a href="https://chromewebstore.google.com/detail/private-grammar-checker-h/lodbfhdipoipcjmlebjbgmmgekckhpfb">The Chrome extension</a> is now the recommended way to use Harper in WordPress. There are several compounding reasons for this. Most importantly, iterating on a Chrome extension is significantly faster than going through the review process for WordPress.com plugins. In fact, the WordPress.com Harper plugin, which you may already be using in your P2s, is currently several months out-of-date. While you may continue to use the WordPress plugin, it will no longer receive regular updates.</p>
<p>If you choose to switch to the Chrome extension, I think you’ll find the experience more holistic, since it is able to persist user data across sites.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[I Spoke at WordCamp U.S. in 2025]]></title>
            <link>https://elijahpotter.dev/articles/i_spoke_at_wordcamp_u.s._in_2025</link>
            <guid>https://elijahpotter.dev/articles/i_spoke_at_wordcamp_u.s._in_2025</guid>
            <pubDate>Wed, 03 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>I Spoke at WordCamp U.S. in 2025</h1>
<p>I had the incredible opportunity to speak at WordCamp U.S. at <a href="https://ma.tt">Matt</a>'s suggestion.
I met a number of wonderful, kind, and inviting people.
It was my first WordCamp and first conference talk, so I was naturally nervous.
The community made it feel easy.</p>
<iframe width="100%" height="400" src="https://www.youtube-nocookie.com/embed/e-jYf8ZV_SA?si=VDPWlTx0kV8lVJTU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<p>I spoke about how we can use machine learning to improve the quality of our applications without big capital expenditures.
In fact, I argue that capital expenditures are often a step in the wrong direction.</p>
<p>Fun fact: This talk started out as <a href="./training_a_chunker_with_burn">a blog post</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>I Spoke at WordCamp U.S. in 2025</h1>
<p>I had the incredible opportunity to speak at WordCamp U.S. at <a href="https://ma.tt">Matt</a>'s suggestion.
I met a number of wonderful, kind, and inviting people.
It was my first WordCamp and first conference talk, so I was naturally nervous.
The community made it feel easy.</p>
<iframe width="100%" height="400" src="https://www.youtube-nocookie.com/embed/e-jYf8ZV_SA?si=VDPWlTx0kV8lVJTU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<p>I spoke about how we can use machine learning to improve the quality of our applications without big capital expenditures.
In fact, I argue that capital expenditures are often a step in the wrong direction.</p>
<p>Fun fact: This talk started out as <a href="./training_a_chunker_with_burn">a blog post</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Books I Have Read Since July 2025]]></title>
            <link>https://elijahpotter.dev/articles/the_books_i_have_read_since_july_2025</link>
            <guid>https://elijahpotter.dev/articles/the_books_i_have_read_since_july_2025</guid>
            <pubDate>Mon, 01 Sep 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Books I Have Read Since July of 2025</h1>
<img src="/images/portland_rose.png" alt="Portland Rose">
<p>Whenever I can, I try to stop and reflect.
Often it's about my work and the things I can do to continue being more productive.
But just as often, I reflect on the books I have recently read and the things I (and you) can learn from them.
That's what this post will hopefully turn out to be.</p>
<p>So, without further ado, here are some of the books I have read recently.</p>
<h2><em>The Stoic Challenge</em> by William B. Irvine</h2>
<p>At the end of each episode of his podcast, Ezra Klein asks his guests the same question: <em>what three books would you recommend we read?</em>
At the end of <a href="https://pca.st/q3l0grkv">one particular episode</a>, author Jonathan Haidt suggests <em>The Stoic Challenge</em>.</p>
<p>I was already aware of William B. Irvine as a prolific Stoic writer.
In fact, I had tried to read another of his books on the subject of Stoic philosophy.
I had not finished it because, well, I found it somewhat dry.</p>
<p><em>The Stoic Challenge</em>, on the other hand, was a delight.
It feels like Irvine took all the successful parts of the previous book and condensed them down into something my Gen-Z brain could understand.</p>
<p>The book's most important takeaway is simple.
In life, you and I both encounter daily struggles that impede our progress toward our goals.</p>
<p>Our flight is running late, so we get to our hotel later than we intend.
We forget to pack a lunch, so we must get through our workday hungry.
Our babysitter wants a raise, which means we won't be able to go out as often.</p>
<p><em>The Stoic Challenge</em> encourages us to frame these setbacks as a test.
A test put upon us by some hidden force.
To pass the test, we need to act in a rational way, and take steps to avoid exacerbating the resulting negative emotions.</p>
<p>I highly suggest anyone read this book.
I felt engaged the entire time reading it, and its lessons apply to any walk of life.</p>
<h2><em>Slow Productivity</em> by Cal Newport</h2>
<p>Cal Newport is known for many of the most groundbreaking productivity books of the last decade.
Personally, <em>Deep Work</em> and <em>How to Be a Straight-A Student</em> have been particularly impactful.
They've encouraged me to cut out the unnecessarily complex parts of my working life, and work to find ways of doing the same for others.</p>
<p>His latest work continues from the same ideas he put forth in those books.</p>
<p><em>How to be a Straight-A Student</em> was focused on the idea that the quantity of high-quality, published work made the difference between mediocre and stellar academic careers.
Following on, <em>Deep Work</em> made the case that the best way to achieve high output was by creating circumstances that allowed for extended periods of intense focus.</p>
<p><em>Slow Productivity</em> takes both of those ideas a step further.
He suggests that high-quality output requires not just deep focus, but time.
He believes that truly amazing products require the brain to be relaxed, comfortable, and unhurried.
Only then can it work outside the box.</p>
<p>I'm not entirely sure I agree, but the book made for a stimulating read nonetheless.</p>
<h2><em>The Humans</em> by Matt Haig</h2>
<p><em>The Humans</em> is a beautiful book that celebrates all the idiosyncratic things that make humans special.
Don't worry, I won't spoil it for you.
I've read a decent number of Haig's books in the past, and I've enjoyed each one.
This was no different.</p>
<p>I'll spare you the details.</p>
<h2><em>The 22 Immutable Laws of Marketing</em></h2>
<p>I am an engineer.
I went to an engineering school.
Most of my time is spent around people in engineering-adjacent fields, so I rarely encounter people who know a lot about marketing.
I wanted to fix that, which is why I read <em>The 22 Immutable Laws of Marketing.</em></p>
<p>This is a book intended for people who want to understand the axioms of how brands are perceived by the public.</p>
<h2><em>Sapiens</em> by Yuval Noah Harari</h2>
<p><em>Sapiens</em> is a comprehensive history of our species, <em>Homo sapiens</em>, beginning with the first humans to walk the planet all the way up to the humans of today.
Harari takes a unique look at what makes our species special, and the things that make our future uncertain.</p>
<p>At times, the writing can be a bit dry and repetitive, but on the whole it is a well-written book.
I would certainly recommend it for anyone who seeks an entirely new perspective on the short history of human life.</p>
<h2><em>Your Utopia</em> by Bora Chung</h2>
<p>I went into <em>Your Utopia</em> expecting a collection of short stories similar to those by Ted Chiang.
That is not what I received.</p>
<p>Instead, Chung provides a somewhat eclectic collection of stories that feel not unlike a fever dream.
Rather than genuine, hard science fiction, Chung tells the tales of sentient cars, elevators that care for the elderly, and immortal secretaries.</p>
<p>I don't regret reading it, but I also don't recommend it to fans of Chiang.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Books I Have Read Since July of 2025</h1>
<img src="/images/portland_rose.png" alt="Portland Rose">
<p>Whenever I can, I try to stop and reflect.
Often it's about my work and the things I can do to continue being more productive.
But just as often, I reflect on the books I have recently read and the things I (and you) can learn from them.
That's what this post will hopefully turn out to be.</p>
<p>So, without further ado, here are some of the books I have read recently.</p>
<h2><em>The Stoic Challenge</em> by William B. Irvine</h2>
<p>At the end of each episode of his podcast, Ezra Klein asks his guests the same question: <em>what three books would you recommend we read?</em>
At the end of <a href="https://pca.st/q3l0grkv">one particular episode</a>, author Jonathan Haidt suggests <em>The Stoic Challenge</em>.</p>
<p>I was already aware of William B. Irvine as a prolific Stoic writer.
In fact, I had tried to read another of his books on the subject of Stoic philosophy.
I had not finished it because, well, I found it somewhat dry.</p>
<p><em>The Stoic Challenge</em>, on the other hand, was a delight.
It feels like Irvine took all the successful parts of the previous book and condensed them down into something my Gen-Z brain could understand.</p>
<p>The book's most important takeaway is simple.
In life, you and I both encounter daily struggles that impede our progress toward our goals.</p>
<p>Our flight is running late, so we get to our hotel later than we intend.
We forget to pack a lunch, so we must get through our workday hungry.
Our babysitter wants a raise, which means we won't be able to go out as often.</p>
<p><em>The Stoic Challenge</em> encourages us to frame these setbacks as a test.
A test put upon us by some hidden force.
To pass the test, we need to act in a rational way, and take steps to avoid exacerbating the resulting negative emotions.</p>
<p>I highly suggest anyone read this book.
I felt engaged the entire time reading it, and its lessons apply to any walk of life.</p>
<h2><em>Slow Productivity</em> by Cal Newport</h2>
<p>Cal Newport is known for many of the most groundbreaking productivity books of the last decade.
Personally, <em>Deep Work</em> and <em>How to Be a Straight-A Student</em> have been particularly impactful.
They've encouraged me to cut out the unnecessarily complex parts of my working life, and work to find ways of doing the same for others.</p>
<p>His latest work continues from the same ideas he put forth in those books.</p>
<p><em>How to be a Straight-A Student</em> was focused on the idea that the quantity of high-quality, published work made the difference between mediocre and stellar academic careers.
Following on, <em>Deep Work</em> made the case that the best way to achieve high output was by creating circumstances that allowed for extended periods of intense focus.</p>
<p><em>Slow Productivity</em> takes both of those ideas a step further.
He suggests that high-quality output requires not just deep focus, but time.
He believes that truly amazing products require the brain to be relaxed, comfortable, and unhurried.
Only then can it work outside the box.</p>
<p>I'm not entirely sure I agree, but the book made for a stimulating read nonetheless.</p>
<h2><em>The Humans</em> by Matt Haig</h2>
<p><em>The Humans</em> is a beautiful book that celebrates all the idiosyncratic things that make humans special.
Don't worry, I won't spoil it for you.
I've read a decent number of Haig's books in the past, and I've enjoyed each one.
This was no different.</p>
<p>I'll spare you the details.</p>
<h2><em>The 22 Immutable Laws of Marketing</em></h2>
<p>I am an engineer.
I went to an engineering school.
Most of my time is spent around people in engineering-adjacent fields, so I rarely encounter people who know a lot about marketing.
I wanted to fix that, which is why I read <em>The 22 Immutable Laws of Marketing.</em></p>
<p>This is a book intended for people who want to understand the axioms of how brands are perceived by the public.</p>
<h2><em>Sapiens</em> by Yuval Noah Harari</h2>
<p><em>Sapiens</em> is a comprehensive history of our species, <em>Homo sapiens</em>, beginning with the first humans to walk the planet all the way up to the humans of today.
Harari takes a unique look at what makes our species special, and the things that make our future uncertain.</p>
<p>At times, the writing can be a bit dry and repetitive, but on the whole it is a well-written book.
I would certainly recommend it for anyone who seeks an entirely new perspective on the short history of human life.</p>
<h2><em>Your Utopia</em> by Bora Chung</h2>
<p>I went into <em>Your Utopia</em> expecting a collection of short stories similar to those by Ted Chiang.
That is not what I received.</p>
<p>Instead, Chung provides a somewhat eclectic collection of stories that feel not unlike a fever dream.
Rather than genuine, hard science fiction, Chung tells the tales of sentient cars, elevators that care for the elderly, and immortal secretaries.</p>
<p>I don't regret reading it, but I also don't recommend it to fans of Chiang.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/portland_rose.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Harper Evolves]]></title>
            <link>https://elijahpotter.dev/articles/harper_evolves</link>
            <guid>https://elijahpotter.dev/articles/harper_evolves</guid>
            <pubDate>Thu, 21 Aug 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper Evolves</h1>
<p>I want you to read that title as literally as possible.
<a href="https://writewithharper.com">Harper</a> is now capable of evolution.</p>
<p>This past week, I've been working on a system that should allow us to handle more complex grammatical cases and contexts, faster.
I believe it will improve our ability to add new grammatical rules to Harper by somewhere between 500% and 1,000%.</p>
<p>To top it off, this system does it without slowing Harper itself down or increasing the memory footprint.</p>
<p><strong>Let's get into it.</strong></p>
<h2>The Problem</h2>
<p>There are several unique methodologies at play when Harper goes about grammar checking.
Which strategy we use depends on the grammatical rule in question.
Today, we're interested in expression rules.</p>
<p>For the curious, I have recently written <a href="./reflections_on_expression_rules">a reflection</a> on expression rules, as well as <a href="./writing_an_expression_rule_for_harper">a guide</a> for anyone interested in producing them.
This post, however, will not recount information I've already written on this blog.</p>
<p>By count, expression rules make up the majority of grammatical rules Harper is currently capable of detecting.
This is because they are fast, easy to write, and most importantly, easy to review.</p>
<p>There are, however, occasional hiccups that I encounter when tackling a problem.
The English language is tricky, and it often contradicts itself.
I will often try to write a rule which covers a certain case, only to find that it doesn't cover all cases.
I can iterate, but it often becomes tedious and time-consuming.</p>
<h2>The Solution</h2>
<p>Last week, I threw in the towel.
I was tired of iterating ceaselessly towards a goal, only to have a new one to tackle after that.
So I decided I would let the computer iterate for me.</p>
<p>Harper's expressions are essentially small programs which are able to identify the locations of given patterns in natural language.
They are constructed at runtime, but they run exceedingly fast because they tend to be amenable to modern branch prediction.
We can use this fact to our advantage.</p>
<p>When generating an expression that detects a particular grammatical rule, the new system (which I've called The Ripper) follows three steps.</p>
<ol>
<li>Generate N random Harper expressions.</li>
<li>Score the performance of these expressions by testing them against a curated dataset.
The dataset contains labeled rows of sentences that do and do not contain the grammatical rule of interest.</li>
<li>Take the best K expressions and mutate them to produce L new "child" expressions. Go to step 2.</li>
</ol>
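<p>The three steps above can be sketched as a tiny evolutionary loop. The sketch below is purely illustrative and not Harper's actual code: a real Harper expression is a pattern program over tokens, while here an "expression" is just a bit mask, and every name (<code>score</code>, <code>mutate</code>, <code>evolve</code>) is invented for the example.</p>

```rust
/// Score an expression: how many labeled samples does it classify correctly?
/// A toy "expression" is a u32 mask that predicts `true` when all of its
/// bits appear in the sample.
fn score(expr: u32, dataset: &[(u32, bool)]) -> usize {
    dataset
        .iter()
        .filter(|(sample, label)| ((sample & expr) == expr) == *label)
        .count()
}

/// Flip one pseudo-random bit (a tiny xorshift stands in for a real RNG).
fn mutate(expr: u32, rng: &mut u64) -> u32 {
    *rng ^= *rng << 13;
    *rng ^= *rng >> 7;
    *rng ^= *rng << 17;
    expr ^ (1 << (*rng % 8))
}

/// Steps 1-3: generate N candidates, score them against the labeled
/// dataset, keep the best K, and mutate each survivor into L children.
fn evolve(dataset: &[(u32, bool)], generations: usize) -> u32 {
    const N: usize = 32;
    const K: usize = 4;
    const L: usize = 8;
    let mut rng: u64 = 0x2545_F491;
    // Step 1: N random starting expressions.
    let mut pool: Vec<u32> = (0..N).map(|_| mutate(0, &mut rng)).collect();
    for _ in 0..generations {
        // Step 2: score the pool; best candidates first.
        pool.sort_by_key(|e| std::cmp::Reverse(score(*e, dataset)));
        // Step 3: the best K survive, and each produces L children.
        pool.truncate(K);
        let mut children = Vec::new();
        for parent in &pool {
            for _ in 0..L {
                children.push(mutate(*parent, &mut rng));
            }
        }
        pool.extend(children);
    }
    pool.into_iter()
        .max_by_key(|e| score(*e, dataset))
        .expect("pool is never empty")
}
```

<p>The property that makes this practical is the one The Ripper exploits: scoring a candidate is cheap, so the loop can chew through enormous numbers of them, and because the K parents survive into the next pool, the best score never decreases between generations.</p>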
<p>That's it!
We're essentially treating expressions as living creatures and subjecting them to artificial selection.
It works remarkably well.</p>
<p>Since these datasets are handcrafted (or generated by an LLM), they don't need to be large.
Plus, the expressions themselves are quite fast to generate and test, so we can do so at an exceptional rate.</p>
<p>My laptop is able to churn through about 90 thousand candidates per second, allowing us to converge on an acceptable result in just a few minutes.
Given more time, it's able to produce an expression rule that is more accurate than what I could write myself.</p>
<h2>What's Next?</h2>
<p>I intend to spend some time optimizing the process, particularly for the human element.
I'd like to be able to create batches of these datasets and let The Ripper take care of them all at once, overnight or on a beefy server in the cloud.</p>
<p>I'd also like to set up automated workflows for piping data from an LLM directly into the Ripper.
Ideally, I want this system to get to a point where I can feed information from a style guide into an LLM and get a guaranteed functioning Harper expression rule out of it.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper Evolves</h1>
<p>I want you to read that title as literally as possible.
<a href="https://writewithharper.com">Harper</a> is now capable of evolution.</p>
<p>This past week, I've been working on a system that should allow us to handle more complex grammatical cases and contexts, faster.
I believe it will improve our ability to add new grammatical rules to Harper by somewhere between 500% and 1,000%.</p>
<p>To top it off, this system does it without slowing Harper itself down or increasing the memory footprint.</p>
<p><strong>Let's get into it.</strong></p>
<h2>The Problem</h2>
<p>There are several unique methodologies at play when Harper goes about grammar checking.
Which strategy we use depends on the grammatical rule in question.
Today, we're interested in expression rules.</p>
<p>For the curious, I have recently written <a href="./reflections_on_expression_rules">a reflection</a> on expression rules, as well as <a href="./writing_an_expression_rule_for_harper">a guide</a> for anyone interested in producing them.
This post, however, will not recount information I've already written on this blog.</p>
<p>By count, expression rules make up the majority of grammatical rules Harper is currently capable of detecting.
This is because they are fast, easy to write, and most importantly, easy to review.</p>
<p>There are, however, occasional hiccups that I encounter when tackling a problem.
The English language is tricky, and it often contradicts itself.
I will often try to write a rule which covers a certain case, only to find that it doesn't cover all cases.
I can iterate, but it often becomes tedious and time-consuming.</p>
<h2>The Solution</h2>
<p>Last week, I threw in the towel.
I was tired of iterating ceaselessly towards a goal, only to have a new one to tackle after that.
So I decided I would let the computer iterate for me.</p>
<p>Harper's expressions are essentially small programs which are able to identify the locations of given patterns in natural language.
They are constructed at runtime, but they run exceedingly fast because they tend to be amenable to modern branch prediction.
We can use this fact to our advantage.</p>
<p>When generating an expression that detects a particular grammatical rule, the new system (which I've called The Ripper) follows three steps.</p>
<ol>
<li>Generate N random Harper expressions.</li>
<li>Score the performance of these expressions by testing them against a curated dataset.
The dataset contains labeled rows of sentences that do and do not contain the grammatical rule of interest.</li>
<li>Take the best K expressions and mutate them to produce L new "child" expressions. Go to step 2.</li>
</ol>
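<p>The three steps above can be sketched as a tiny evolutionary loop. The sketch below is purely illustrative and not Harper's actual code: a real Harper expression is a pattern program over tokens, while here an "expression" is just a bit mask, and every name (<code>score</code>, <code>mutate</code>, <code>evolve</code>) is invented for the example.</p>

```rust
/// Score an expression: how many labeled samples does it classify correctly?
/// A toy "expression" is a u32 mask that predicts `true` when all of its
/// bits appear in the sample.
fn score(expr: u32, dataset: &[(u32, bool)]) -> usize {
    dataset
        .iter()
        .filter(|(sample, label)| ((sample & expr) == expr) == *label)
        .count()
}

/// Flip one pseudo-random bit (a tiny xorshift stands in for a real RNG).
fn mutate(expr: u32, rng: &mut u64) -> u32 {
    *rng ^= *rng << 13;
    *rng ^= *rng >> 7;
    *rng ^= *rng << 17;
    expr ^ (1 << (*rng % 8))
}

/// Steps 1-3: generate N candidates, score them against the labeled
/// dataset, keep the best K, and mutate each survivor into L children.
fn evolve(dataset: &[(u32, bool)], generations: usize) -> u32 {
    const N: usize = 32;
    const K: usize = 4;
    const L: usize = 8;
    let mut rng: u64 = 0x2545_F491;
    // Step 1: N random starting expressions.
    let mut pool: Vec<u32> = (0..N).map(|_| mutate(0, &mut rng)).collect();
    for _ in 0..generations {
        // Step 2: score the pool; best candidates first.
        pool.sort_by_key(|e| std::cmp::Reverse(score(*e, dataset)));
        // Step 3: the best K survive, and each produces L children.
        pool.truncate(K);
        let mut children = Vec::new();
        for parent in &pool {
            for _ in 0..L {
                children.push(mutate(*parent, &mut rng));
            }
        }
        pool.extend(children);
    }
    pool.into_iter()
        .max_by_key(|e| score(*e, dataset))
        .expect("pool is never empty")
}
```

<p>The property that makes this practical is the one The Ripper exploits: scoring a candidate is cheap, so the loop can chew through enormous numbers of them, and because the K parents survive into the next pool, the best score never decreases between generations.</p>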
<p>That's it!
We're essentially treating expressions as living creatures and subjecting them to artificial selection.
It works remarkably well.</p>
<p>Since these datasets are handcrafted (or generated by an LLM), they don't need to be large.
Plus, the expressions themselves are quite fast to generate and test, so we can do so at an exceptional rate.</p>
<p>My laptop is able to churn through about 90 thousand candidates per second, allowing us to converge on an acceptable result in just a few minutes.
Given more time, it's able to produce an expression rule that is more accurate than what I could write myself.</p>
<h2>What's Next?</h2>
<p>I intend to spend some time optimizing the process, particularly for the human element.
I'd like to be able to create batches of these datasets and let The Ripper take care of them all at once, overnight or on a beefy server in the cloud.</p>
<p>I'd also like to set up automated workflows for piping data from an LLM directly into the Ripper.
Ideally, I want this system to get to a point where I can feed information from a style guide into an LLM and get a guaranteed functioning Harper expression rule out of it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Art of the Talk]]></title>
            <link>https://elijahpotter.dev/articles/the_art_of_the_talk</link>
            <guid>https://elijahpotter.dev/articles/the_art_of_the_talk</guid>
            <pubDate>Fri, 15 Aug 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Art of the Talk</h1>
<img src="/images/mic.png" alt="A microphone sitting on a piano.">
<p>I've given dozens of talks over the years, yet I still feel like I have much to learn about public speaking.
The act of presenting an idea to a large audience in person is intimidating for me, as I imagine it is for most people.</p>
<p>Preparing for my upcoming talk at WordCamp US has me stopping to reflect on what I already know.
To recall the basic principles that turn a good talk into a great one.</p>
<p>Conveying an idea effectively is a powerful skill. It can help get people on your side and direct them to your point of view.
Humans are beings that thrive when they build and maintain strong relationships with each other.
Good communication supercharges that process.</p>
<p>Here are my bold ideas for how to present ideas well, in no particular order.</p>
<h2>Convey Exactly One Idea At a Time</h2>
<p>A common problem people run into when they draft a presentation is not a lack of ideas: it is that they have too many.</p>
<p>People adore talking about themselves or their ideas.
A natural consequence is that people have a predilection to stuff a presentation with more information than they need to make their point.
Make sure to allocate time to take stuff out.</p>
<p>The more streamlined and clear you make your point, the more receptive your audience will be.</p>
<h2>Use a Story</h2>
<p>The human brain is wired for stories.
We've been telling them for thousands of years.
People remember them far better than they do arbitrary information.
In fact, that is exactly how communities used to do recordkeeping: by telling stories.</p>
<p>By weaving a tale into your presentation, you can prime the audience to receive your idea, and make that idea more memorable.</p>
<h2>Explain the "Why"</h2>
<p>The "Why" should be the first thing your audience hears from you.
Why they should listen.
Why your idea is important.
Why your input can help them in their professional or personal lives.</p>
<p>Chris Anderson, the Head of TED, says, "Style without substance is awful".</p>
<p>In other words, you can have the fanciest, most visually appealing slide deck and all the confidence in the world, and you can still have a mediocre presentation.
Find a topic or idea that you genuinely care about, and explain why the audience should care about it too.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Art of the Talk</h1>
<img src="/images/mic.png" alt="A microphone sitting on a piano.">
<p>I've given dozens of talks over the years, yet I still feel like I have much to learn about public speaking.
The act of presenting an idea to a large audience in person is intimidating for me, as I imagine it is for most people.</p>
<p>Preparing for my upcoming talk at WordCamp US has me stopping to reflect on what I already know.
To recall the basic principles that turn a good talk into a great one.</p>
<p>Conveying an idea effectively is a powerful skill. It can help get people on your side and direct them to your point of view.
Humans are beings that thrive when they build and maintain strong relationships with each other.
Good communication supercharges that process.</p>
<p>Here are my bold ideas for how to present ideas well, in no particular order.</p>
<h2>Convey Exactly One Idea At a Time</h2>
<p>A common problem people run into when they draft a presentation is not a lack of ideas: It is that they have too many.</p>
<p>People adore talking about themselves or their ideas.
A natural consequence is that people have a predilection to stuff a presentation with more information than they need to make their point.
Make sure to allocate time to take stuff out.</p>
<p>The more streamlined and clear you make your point, the more receptive your audience will be.</p>
<h2>Use a Story</h2>
<p>The human brain is wired for stories.
We've been telling them for thousands of years.
People remember them far better than they do arbitrary information.
In fact, that is exactly how communities used to do recordkeeping: by telling stories.</p>
<p>By weaving a tale into your presentation, you can prime the audience to receive your idea, and make that idea more memorable.</p>
<h2>Explain the "Why"</h2>
<p>The "Why" should be the first thing your audience hears from you.
Why they should listen.
Why your idea is important.
Why your input can help them in their professional or personal lives.</p>
<p>Chris Anderson, the Head of TED, says, "Style without substance is awful".</p>
<p>In other words, you can have the fanciest, most visually appealing slide deck and all the confidence in the world, and you can still have a mediocre presentation.
Find a topic or idea that you genuinely care about, and explain why the audience should care about it too.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/mic.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Harper in the News]]></title>
            <link>https://elijahpotter.dev/articles/harper_in_the_news</link>
            <guid>https://elijahpotter.dev/articles/harper_in_the_news</guid>
            <pubDate>Tue, 12 Aug 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper in the News</h1>
<p>The biggest complaint about Harper is that the quality of grammar checking still needs some work.
I've got an exciting new project in that area that I'm preparing to discuss (hopefully) later this week.
It should be a step-function improvement over our previous methods for creating new rules.</p>
<p>In the meantime, I've collected some recent sightings of Harper in the wild:</p>
<ul>
<li><a href="https://obsidian.rocks/resource-harper/">Obsidian Rocks</a></li>
<li><a href="https://www.wearefounders.uk/the-grammar-checker-that-actually-gets-developers-meet-harper/">We Are Founders</a></li>
<li><a href="https://itsfoss.com/harper-grammar-checker/">It's FOSS</a></li>
<li><a href="https://www.xda-developers.com/ditched-grammarly-for-this-amazing-open-source-alternative/">XDA Developers</a></li>
<li><a href="https://www.maketecheasier.com/harper-vs-grammarly/">Make Tech Easier</a></li>
</ul>]]></description>
            <content:encoded><![CDATA[<h1>Harper in the News</h1>
<p>The biggest complaint about Harper is that the quality of grammar checking still needs some work.
I've got an exciting new project in that area that I'm preparing to discuss (hopefully) later this week.
It should be a step-function improvement over our previous methods for creating new rules.</p>
<p>In the meantime, I've collected some recent sightings of Harper in the wild:</p>
<ul>
<li><a href="https://obsidian.rocks/resource-harper/">Obsidian Rocks</a></li>
<li><a href="https://www.wearefounders.uk/the-grammar-checker-that-actually-gets-developers-meet-harper/">We Are Founders</a></li>
<li><a href="https://itsfoss.com/harper-grammar-checker/">It's FOSS</a></li>
<li><a href="https://www.xda-developers.com/ditched-grammarly-for-this-amazing-open-source-alternative/">XDA Developers</a></li>
<li><a href="https://www.maketecheasier.com/harper-vs-grammarly/">Make Tech Easier</a></li>
</ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Better Interfaces for Grammar Checking]]></title>
            <link>https://elijahpotter.dev/articles/better_interfaces_for_grammar_checking</link>
            <guid>https://elijahpotter.dev/articles/better_interfaces_for_grammar_checking</guid>
            <pubDate>Mon, 28 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Better UI for Grammar Checking</h1>
<p>Grammar checking can be cumbersome, especially when its sloth gets in the way of your thinking. That’s part of why so many prefer <a href="https://writewithharper.com">Harper</a> over Grammarly or LanguageTool. Waiting four seconds per edit can add up over long days of writing.</p>
<p>But there’s something else that can slow people down, especially power users. It’s the mouse. Moving one’s hand to the pointing device, moving it, and clicking can take a while—and that adds up too.</p>
<p>Neovim (and similar text editors like Obsidian) solve this problem by using another input device: the keyboard. Since you’re using the keyboard to write words to the page, why not also use it to correct grammatical mistakes?</p>
<p>In the latest release of Harper’s Chrome extension, that’s exactly what I’ve added. While writing, you’re bound to make (at the very least) a few typos. If that’s the case, you can now double-tap the shift key to open the suggestion popup from the error closest to your caret. From there, you can use tab and enter to select and apply a correction.</p>
<p>I’ve personally found that this single-handedly speeds up my writing process by at least 20%. I’m able to focus less on each word and more on the content.</p>]]></description>
            <content:encoded><![CDATA[<h1>Better UI for Grammar Checking</h1>
<p>Grammar checking can be cumbersome, especially when its sloth gets in the way of your thinking. That’s part of why so many prefer <a href="https://writewithharper.com">Harper</a> over Grammarly or LanguageTool. Waiting four seconds per edit can add up over long days of writing.</p>
<p>But there’s something else that can slow people down, especially power users. It’s the mouse. Moving one’s hand to the pointing device, moving it, and clicking can take a while—and that adds up too.</p>
<p>Neovim (and similar text editors like Obsidian) solve this problem by using another input device: the keyboard. Since you’re using the keyboard to write words to the page, why not also use it to correct grammatical mistakes?</p>
<p>In the latest release of Harper’s Chrome extension, that’s exactly what I’ve added. While writing, you’re bound to make (at the very least) a few typos. If that’s the case, you can now double-tap the shift key to open the suggestion popup from the error closest to your caret. From there, you can use tab and enter to select and apply a correction.</p>
<p>I’ve personally found that this single-handedly speeds up my writing process by at least 20%. I’m able to focus less on each word and more on the content.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Speaking at WordCamp U.S. in 2025]]></title>
            <link>https://elijahpotter.dev/articles/speaking_at_wordcamp_u.s._in_2025</link>
            <guid>https://elijahpotter.dev/articles/speaking_at_wordcamp_u.s._in_2025</guid>
            <pubDate>Fri, 25 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Speaking at WordCamp U.S. in August</h1>
<p>I was recently encouraged to submit a talk proposal to the AI track at WordCamp US.
I was more than happy to oblige, and to my surprise, I got in.
See description below for details. I hope to see some of y'all there!</p>
<img src="/images/wordcamp_us_speaker_2025.png" alt="I&#x27;ll be speaking at WordCamp US 2025">
<h2>Description</h2>
<p>Machine learning has been around for decades.
Until recently, running ML models typically required big centralized servers due to high computational demands.
With modern hardware and tools like Rust and Burn, we can flip the script.
When run on a user's device, trained models can be an incredible boost to human creativity and expression.</p>
<p>Harper is a grammar checker that is designed to improve your work without degrading your integrity—or the writing experience itself.
Our users generate and publish novel ideas faster because we focus on what's important: speed, privacy, and accuracy.</p>
<p>WordPress' mission is to democratize publishing.
This talk will explore how we can use AI to democratize writing too.
More importantly, how you can train and deploy specialized models at the edge to eliminate inference costs, lower latency, and improve user satisfaction.</p>]]></description>
            <content:encoded><![CDATA[<h1>Speaking at WordCamp U.S. in August</h1>
<p>I was recently encouraged to submit a talk proposal to the AI track at WordCamp US.
I was more than happy to oblige, and to my surprise, I got in.
See description below for details. I hope to see some of y'all there!</p>
<img src="/images/wordcamp_us_speaker_2025.png" alt="I&#x27;ll be speaking at WordCamp US 2025">
<h2>Description</h2>
<p>Machine learning has been around for decades.
Until recently, running ML models typically required big centralized servers due to high computational demands.
With modern hardware and tools like Rust and Burn, we can flip the script.
When run on a user's device, trained models can be an incredible boost to human creativity and expression.</p>
<p>Harper is a grammar checker that is designed to improve your work without degrading your integrity—or the writing experience itself.
Our users generate and publish novel ideas faster because we focus on what's important: speed, privacy, and accuracy.</p>
<p>WordPress' mission is to democratize publishing.
This talk will explore how we can use AI to democratize writing too.
More importantly, how you can train and deploy specialized models at the edge to eliminate inference costs, lower latency, and improve user satisfaction.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/wordcamp_us_speaker_2025.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Why I Talk to Myself]]></title>
            <link>https://elijahpotter.dev/articles/why_i_talk_to_myself</link>
            <guid>https://elijahpotter.dev/articles/why_i_talk_to_myself</guid>
            <pubDate>Mon, 21 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Why I Talk to Myself</h1>
<img src="/images/bike_sidewalk.webp" alt="My bike broke down. Luckily, I had my camera">
<p>Each week I sit down and reflect on what I've learned. I think about which of my efforts have helped the Harper endeavor, and which have harmed it. It's not easy, but I think it's one of the best habits I've built. More accurately, it's one of the best habits <a href="https://ma.tt">Matt</a> has encouraged me to build.</p>
<p>When I really think about it, I'm reminded of the original <a href="https://blog.afterthedeadline.com/">After the Deadline blog</a>, which coincidentally also had a category called "Talking to Myself". I wonder if Raphael Mudge (AtD's author) had a similar motivation.</p>
<p>I talk to myself to find the weaknesses in Harper's mission and focus my attention on what matters. It is an efficient way to get from <a href="https://ma.tt/2025/05/sharing-levels/">layer one to layer five</a>. It can help get other maintainers on the same page and build a body of knowledge that can be linked to from anywhere.</p>
<p>Not worrying about keeping information locked-down is one of the more beautiful parts of working in public. Open source code would be nothing without similarly open source knowledge. I talk to myself <em>online</em> so my users can understand my intentions.</p>
<p>In many senses, we are not unlike moths. We are drawn to the lights of distraction, hype, and procrastination. For me, setting up checkpoints to stop and reflect has been an effective deterrent from these lights. In fact, I think each minute for which I stop and think has paid itself back three times over in time savings. If I can better allocate my time and energy, I can avoid doing redundant work pursuing pointless goals.</p>
<h2>Bonus Points</h2>
<p>A <a href="./markov_chains_are_the_original_language_models">few</a> of my <a href="./local-first_software_is_easier_to_scale">articles</a> have become popular on sites like Hacker News and Reddit. We've gotten quite a few Harper installations from links on these articles. I would never rely on this kind of blogging as a source of advertising, but it is nice to see it help spread the word.</p>]]></description>
            <content:encoded><![CDATA[<h1>Why I Talk to Myself</h1>
<img src="/images/bike_sidewalk.webp" alt="My bike broke down. Luckily, I had my camera">
<p>Each week I sit down and reflect on what I've learned. I think about which of my efforts have helped the Harper endeavor, and which have harmed it. It's not easy, but I think it's one of the best habits I've built. More accurately, it's one of the best habits <a href="https://ma.tt">Matt</a> has encouraged me to build.</p>
<p>When I really think about it, I'm reminded of the original <a href="https://blog.afterthedeadline.com/">After the Deadline blog</a>, which coincidentally also had a category called "Talking to Myself". I wonder if Raphael Mudge (AtD's author) had a similar motivation.</p>
<p>I talk to myself to find the weaknesses in Harper's mission and focus my attention on what matters. It is an efficient way to get from <a href="https://ma.tt/2025/05/sharing-levels/">layer one to layer five</a>. It can help get other maintainers on the same page and build a body of knowledge that can be linked to from anywhere.</p>
<p>Not worrying about keeping information locked-down is one of the more beautiful parts of working in public. Open source code would be nothing without similarly open source knowledge. I talk to myself <em>online</em> so my users can understand my intentions.</p>
<p>In many senses, we are not unlike moths. We are drawn to the lights of distraction, hype, and procrastination. For me, setting up checkpoints to stop and reflect has been an effective deterrent from these lights. In fact, I think each minute for which I stop and think has paid itself back three times over in time savings. If I can better allocate my time and energy, I can avoid doing redundant work pursuing pointless goals.</p>
<h2>Bonus Points</h2>
<p>A <a href="./markov_chains_are_the_original_language_models">few</a> of my <a href="./local-first_software_is_easier_to_scale">articles</a> have become popular on sites like Hacker News and Reddit. We've gotten quite a few Harper installations from links on these articles. I would never rely on this kind of blogging as a source of advertising, but it is nice to see it help spread the word.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/bike_sidewalk.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Reflections on Expression Rules]]></title>
            <link>https://elijahpotter.dev/articles/reflections_on_expression_rules</link>
            <guid>https://elijahpotter.dev/articles/reflections_on_expression_rules</guid>
            <pubDate>Fri, 18 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Reflections on Expression Rules</h1>
<p>Just like grammar itself, Harper is rule-based.
These rules can be written by human or machine, and usually take the form of a "phrase correction" or an expression rule.
You can read more about both in <a href="./writing_a_grammatical_rule_for_harper">my guide</a>.</p>
<p>In essence, an expression rule is a small, declarative program that seeks out specific cases Harper can be certain are grammatically incorrect.
For example, we know redundant modal verbs are always wrong:</p>
<pre><code class="hljs language-plaintext">We could give it try attempt.
</code></pre>
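To make the idea of a "small, declarative program" concrete, here is a toy sketch of an expression-style matcher. This is illustrative only and is not Harper's actual expression API; the names (`Pred`, `is_modal`, `find_match`) are invented for the example:

```rust
/// A token predicate: one "primitive" in our toy expression language.
/// (Illustrative only; not Harper's real expression API.)
type Pred = fn(&str) -> bool;

/// Toy predicate: is this word a modal verb?
fn is_modal(word: &str) -> bool {
    matches!(
        word,
        "can" | "could" | "may" | "might" | "shall" | "should" | "will" | "would"
    )
}

/// Slide the expression over the sentence and return the index of the
/// first window where every predicate matches, if any. A rule like
/// "two modal verbs in a row is always wrong" becomes `[is_modal, is_modal]`.
fn find_match(words: &[&str], expr: &[Pred]) -> Option<usize> {
    words
        .windows(expr.len())
        .position(|window| window.iter().zip(expr).all(|(word, pred)| pred(word)))
}
```

Because the whole rule is a flat sequence of predicates, a reviewer can verify it at a glance.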
<h2>What's Great About Expression Rules</h2>
<p>As the maintainer of the project, I need to read through a decent number of pull requests each week.
Depending on their complexity, this can be time-consuming.
Because expressions are almost always written with the same five primitives, they make reviews way faster.
For many rules, I only need to skim the code and confirm they don't create false positives.</p>
<p>In addition to typos, the primitives underlying expression rules were designed to check agreement and word-ordering errors.
To that end, there are a number of assumptions baked in.
This makes some kinds of rules harder to write, but also makes all expression rules exceedingly easy to memoize.</p>
<h2>What's Not-So-Great About Expression Rules</h2>
<p>As I said before, expression rules were designed for a certain kind of error.
There are many grammatical or layout rules which are difficult or downright impossible to define with an expression.</p>
<p>This normally isn't a problem, since we can approach them in other ways.
On multiple occasions, however, I've seen potential contributors get frustrated at a perceived lack of flexibility.
We didn't make it clear enough that there were solutions <em>other</em> than expressions.</p>
<h2>How Can We Improve?</h2>
<p>Applying Amdahl's law, I think the most effective solution is to <a href="./writing_good_documentation">improve our documentation</a>, as I have been for the last week or so.</p>
<p>Beyond that, removing the need for boilerplate code and other causes of mental overhead (like registering our rules and waiting for a slow compilation process) might lower the perceived difficulty of writing new rules.</p>]]></description>
            <content:encoded><![CDATA[<h1>Reflections on Expression Rules</h1>
<p>Just like grammar itself, Harper is rule-based.
These rules can be written by human or machine, and usually take the form of a "phrase correction" or an expression rule.
You can read more about both in <a href="./writing_a_grammatical_rule_for_harper">my guide</a>.</p>
<p>In essence, an expression rule is a small, declarative program that seeks out specific cases Harper can be certain are grammatically incorrect.
For example, we know redundant modal verbs are always wrong:</p>
<pre><code class="hljs language-plaintext">We could give it try attempt.
</code></pre>
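To make the idea of a "small, declarative program" concrete, here is a toy sketch of an expression-style matcher. This is illustrative only and is not Harper's actual expression API; the names (`Pred`, `is_modal`, `find_match`) are invented for the example:

```rust
/// A token predicate: one "primitive" in our toy expression language.
/// (Illustrative only; not Harper's real expression API.)
type Pred = fn(&str) -> bool;

/// Toy predicate: is this word a modal verb?
fn is_modal(word: &str) -> bool {
    matches!(
        word,
        "can" | "could" | "may" | "might" | "shall" | "should" | "will" | "would"
    )
}

/// Slide the expression over the sentence and return the index of the
/// first window where every predicate matches, if any. A rule like
/// "two modal verbs in a row is always wrong" becomes `[is_modal, is_modal]`.
fn find_match(words: &[&str], expr: &[Pred]) -> Option<usize> {
    words
        .windows(expr.len())
        .position(|window| window.iter().zip(expr).all(|(word, pred)| pred(word)))
}
```

Because the whole rule is a flat sequence of predicates, a reviewer can verify it at a glance.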
<h2>What's Great About Expression Rules</h2>
<p>As the maintainer of the project, I need to read through a decent number of pull requests each week.
Depending on their complexity, this can be time-consuming.
Because expressions are almost always written with the same five primitives, they make reviews way faster.
For many rules, I only need to skim the code and confirm they don't create false positives.</p>
<p>In addition to typos, the primitives underlying expression rules were designed to check agreement and word-ordering errors.
To that end, there are a number of assumptions baked in.
This makes some kinds of rules harder to write, but also makes all expression rules exceedingly easy to memoize.</p>
<h2>What's Not-So-Great About Expression Rules</h2>
<p>As I said before, expression rules were designed for a certain kind of error.
There are many grammatical or layout rules which are difficult or downright impossible to define with an expression.</p>
<p>This normally isn't a problem, since we can approach them in other ways.
On multiple occasions, however, I've seen potential contributors get frustrated at a perceived lack of flexibility.
We didn't make it clear enough that there were solutions <em>other</em> than expressions.</p>
<h2>How Can We Improve?</h2>
<p>Applying Amdahl's law, I think the most effective solution is to <a href="./writing_good_documentation">improve our documentation</a>, as I have been for the last week or so.</p>
<p>Beyond that, removing the need for boilerplate code and other causes of mental overhead (like registering our rules and waiting for a slow compilation process) might lower the perceived difficulty of writing new rules.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Training a Chunker with Burn]]></title>
            <link>https://elijahpotter.dev/articles/training_a_chunker_with_burn</link>
            <guid>https://elijahpotter.dev/articles/training_a_chunker_with_burn</guid>
            <pubDate>Wed, 16 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Training a Chunker with Burn</h1>
<img src="/images/underpass.webp" alt="Graffiti in an Underpass">
<p>In <a href="./more_transformation-based_learning.md">a previous post</a>, I detailed how I implemented a basic nominal phrase chunker using Transformation-based learning (not to be confused with the transformer class of neural network models).
Since then, I've taken another crack at the problem.
My main goal: improve the accuracy.
The end result is a portable neural network model that achieves ~95% accuracy at identifying the nominal phrases in grammatically correct text.</p>
<h2>Our Goal</h2>
<p>We want Harper to be able to match against subjects and objects in sentences.
This is a prerequisite for checking a diverse array of grammatical rules.
For example, to catch the error in this sentence, we need to correctly identify which tokens represent our subject.</p>
<pre><code class="hljs language-plaintext">Neither of the big blue bottle would be broken by the fall.
</code></pre>
<p>In this case, our user has accidentally made the subject singular, while the word "neither" implies that the subject should be plural.
We call this an agreement error.
Because our subject, "big blue bottle", contains multiple tokens, we need a way to identify subjects at a higher level than per-token.
That is what a chunker does.</p>
<h2>The Failures of the Brill Chunker</h2>
<p>The Brill chunker was by many accounts a success.
It wasn't, however, a success in the main way that mattered: It wasn't reliable enough to be used in Harper's grammatical rule system.
While fast and small, it failed to catch most edge-cases in English text.
In some senses, it overfit its training dataset.</p>
<h2>Why Train Our Own?</h2>
<p>As our needs continue to expand alongside our user-base, I need the chunker to be flexible.
If its needed capabilities expand, I need to be able to retrain the model to meet them.
That would not be possible without having a deep understanding of how the system works.</p>
<h2>Building a Neural Net</h2>
<p>To build a new chunker, I just needed to implement the Harper <code>Chunker</code> trait.
Easy enough.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// An implementer of this trait is capable of identifying the noun phrases in a provided sentence.</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">trait</span> <span class="hljs-title class_">Chunker</span> {
    <span class="hljs-comment">/// Iterate over the sentence, identifying the noun phrases contained within.</span>
    <span class="hljs-comment">/// A token marked `true` is a component of a noun phrase.</span>
    <span class="hljs-comment">/// A token marked `false` is not.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">chunk_sentence</span>(&#x26;<span class="hljs-keyword">self</span>, sentence: &#x26;[<span class="hljs-type">String</span>], tags: &#x26;[<span class="hljs-type">Option</span>&#x3C;UPOS>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">bool</span>>;
}
</code></pre>
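As a minimal sketch of what an implementer looks like, here is a naive baseline that marks determiners, adjectives, and nouns as chunk members. The `UPOS` enum below is a pared-down stand-in for the real tag set, and `NaiveChunker` is invented for this example; the actual chunker has to learn far subtler boundaries:

```rust
/// Pared-down stand-in for the universal part-of-speech tag set
/// (assumption: the real `UPOS` type carries the full set of variants).
#[derive(Clone, Copy, Debug)]
pub enum UPOS {
    Det,
    Adj,
    Noun,
    Verb,
    Other,
}

/// The trait from the post: `true` marks a token inside a noun phrase.
pub trait Chunker {
    fn chunk_sentence(&self, sentence: &[String], tags: &[Option<UPOS>]) -> Vec<bool>;
}

/// Naive baseline for illustration: any determiner, adjective, or noun is
/// treated as part of a nominal phrase. A learned model is needed to handle
/// the real edge cases this heuristic misses.
pub struct NaiveChunker;

impl Chunker for NaiveChunker {
    fn chunk_sentence(&self, sentence: &[String], tags: &[Option<UPOS>]) -> Vec<bool> {
        sentence
            .iter()
            .zip(tags)
            .map(|(_word, tag)| matches!(tag, Some(UPOS::Det | UPOS::Adj | UPOS::Noun)))
            .collect()
    }
}
```

On "the big blue bottle fell", this marks the first four tokens as one nominal span and excludes the verb.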
<p>For the nerds in the crowd, I decided to use a <code>Word + POS embedding -> BiLSTM -> Linear</code> architecture.
To keep things portable and consistent with the rest of the Harper codebase, I used <a href="https://burn.dev/">Burn</a>, a Rust-native machine learning toolkit.
While I believe the BiLSTM to be good enough for this application, one advantage of Burn is the ability to easily swap it out for a transformer if the need arises.
It also makes it unbelievably easy to quantize models.</p>
<p>This architecture gives us some hyperparameters to tune against.
After dozens of experimental training runs, these values worked best:</p>
<table>
<thead>
<tr>
<th align="right">Dropout probability</th>
<th>Embedding dimensions</th>
<th align="right">Learning rate (I used Adam)</th>
<th>Dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td align="right">30%</td>
<td>16 Word Embeddings + 8 UPOS Embeddings</td>
<td align="right">0.003</td>
<td>GUM + EWT + LINES</td>
</tr>
</tbody>
</table>
<h2>What's Next?</h2>
<p>Similar to the Brill Chunker, I'll be trying to use this new system in our grammar checker.
From there, I'll know what additional information we'd like for it to infer.
Once I've gotten it to reliably work for >= 3 rules, I'll declare it ready to merge.</p>
<h2>Update as of December 2025</h2>
<p>I totally forgot to update this post.
I suppose it is better late than never.
Back in August I actually ended up giving <a href="./i_spoke_at_wordcamp_u.s._in_2025">a talk about our chunker at WordCamp U.S</a>.
It was a great experience, and I highly suggest you go watch it if you find this stuff interesting.</p>]]></description>
            <content:encoded><![CDATA[<h1>Training a Chunker with Burn</h1>
<img src="/images/underpass.webp" alt="Graffiti in an Underpass">
<p>In <a href="./more_transformation-based_learning.md">a previous post</a>, I detailed how I implemented a basic nominal phrase chunker using Transformation-based learning (not to be confused with the transformer class of neural network models).
Since then, I've taken another crack at the problem.
My main goal: improve the accuracy.
The end result is a portable neural network model that achieves ~95% accuracy at identifying the nominal phrases in grammatically correct text.</p>
<h2>Our Goal</h2>
<p>We want Harper to be able to match against subjects and objects in sentences.
This is a prerequisite for checking a diverse array of grammatical rules.
For example, to catch the error in this sentence, we need to correctly identify which tokens represent our subject.</p>
<pre><code class="hljs language-plaintext">Neither of the big blue bottle would be broken by the fall.
</code></pre>
<p>In this case, our user has accidentally made the subject singular, while the word "neither" implies that the subject should be plural.
We call this an agreement error.
Because our subject, "big blue bottle", contains multiple tokens, we need a way to identify subjects at a higher level than per-token.
That is what a chunker does.</p>
<h2>The Failures of the Brill Chunker</h2>
<p>The Brill chunker was by many accounts a success.
It wasn't, however, a success in the main way that mattered: It wasn't reliable enough to be used in Harper's grammatical rule system.
While fast and small, it failed to catch most edge-cases in English text.
In some senses, it overfit its training dataset.</p>
<h2>Why Train Our Own?</h2>
<p>As our needs continue to expand alongside our user-base, I need the chunker to be flexible.
If its needed capabilities expand, I need to be able to retrain the model to meet them.
That would not be possible without having a deep understanding of how the system works.</p>
<h2>Building a Neural Net</h2>
<p>To build a new chunker, I just needed to implement the Harper <code>Chunker</code> trait.
Easy enough.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// An implementer of this trait is capable of identifying the noun phrases in a provided sentence.</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">trait</span> <span class="hljs-title class_">Chunker</span> {
    <span class="hljs-comment">/// Iterate over the sentence, identifying the noun phrases contained within.</span>
    <span class="hljs-comment">/// A token marked `true` is a component of a noun phrase.</span>
    <span class="hljs-comment">/// A token marked `false` is not.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">chunk_sentence</span>(&#x26;<span class="hljs-keyword">self</span>, sentence: &#x26;[<span class="hljs-type">String</span>], tags: &#x26;[<span class="hljs-type">Option</span>&#x3C;UPOS>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">bool</span>>;
}
</code></pre>
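As a minimal sketch of what an implementer looks like, here is a naive baseline that marks determiners, adjectives, and nouns as chunk members. The `UPOS` enum below is a pared-down stand-in for the real tag set, and `NaiveChunker` is invented for this example; the actual chunker has to learn far subtler boundaries:

```rust
/// Pared-down stand-in for the universal part-of-speech tag set
/// (assumption: the real `UPOS` type carries the full set of variants).
#[derive(Clone, Copy, Debug)]
pub enum UPOS {
    Det,
    Adj,
    Noun,
    Verb,
    Other,
}

/// The trait from the post: `true` marks a token inside a noun phrase.
pub trait Chunker {
    fn chunk_sentence(&self, sentence: &[String], tags: &[Option<UPOS>]) -> Vec<bool>;
}

/// Naive baseline for illustration: any determiner, adjective, or noun is
/// treated as part of a nominal phrase. A learned model is needed to handle
/// the real edge cases this heuristic misses.
pub struct NaiveChunker;

impl Chunker for NaiveChunker {
    fn chunk_sentence(&self, sentence: &[String], tags: &[Option<UPOS>]) -> Vec<bool> {
        sentence
            .iter()
            .zip(tags)
            .map(|(_word, tag)| matches!(tag, Some(UPOS::Det | UPOS::Adj | UPOS::Noun)))
            .collect()
    }
}
```

On "the big blue bottle fell", this marks the first four tokens as one nominal span and excludes the verb.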
<p>For the nerds in the crowd, I decided to use a <code>Word + POS embedding -> BiLSTM -> Linear</code> architecture.
To keep things portable and consistent with the rest of the Harper codebase, I used <a href="https://burn.dev/">Burn</a>, a Rust-native machine learning toolkit.
While I believe the BiLSTM to be good enough for this application, one advantage of Burn is the ability to easily swap it out for a transformer if the need arises.
It also makes it unbelievably easy to quantize models.</p>
<p>This architecture gives us some hyperparameters to tune against.
After dozens of experimental training runs, these values worked best:</p>
<table>
<thead>
<tr>
<th align="right">Dropout probability</th>
<th>Embedding dimensions</th>
<th align="right">Learning rate (I used Adam)</th>
<th>Dataset</th>
</tr>
</thead>
<tbody>
<tr>
<td align="right">30%</td>
<td>16 Word Embeddings + 8 UPOS Embeddings</td>
<td align="right">0.003</td>
<td>GUM + EWT + LINES</td>
</tr>
</tbody>
</table>
<h2>What's Next?</h2>
<p>Similar to the Brill Chunker, I'll be trying to use this new system in our grammar checker.
From there, I'll know what additional information we'd like for it to infer.
Once I've gotten it to reliably work for >= 3 rules, I'll declare it ready to merge.</p>
<h2>Update as of December 2025</h2>
<p>I totally forgot to update this post.
I suppose it is better late than never.
Back in August I actually ended up giving <a href="./i_spoke_at_wordcamp_u.s._in_2025">a talk about our chunker at WordCamp U.S</a>.
It was a great experience, and I highly suggest you go watch it if you find this stuff interesting.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing Good Documentation]]></title>
            <link>https://elijahpotter.dev/articles/writing_good_documentation</link>
            <guid>https://elijahpotter.dev/articles/writing_good_documentation</guid>
            <pubDate>Tue, 15 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Writing Good Documentation</h1>
<p>I believe that good documentation is more important than ever. In an age where large language models have exceptional context windows, it's easy to fall into the trap of believing a system is simpler than it is. To test the limits of their abilities, I'll often feed Harper code in and ask for their explanation of its inner workings.
They've never gotten it anywhere close to correct.</p>
<p>That's because good documentation is more than the sum of its parts. It should contain all the relevant technical details needed to work on a repository, but it should also provide a sense of quality and direction. It should instill in the reader an ability to tell what good code and conduct look like.</p>
<p>I think this is an area where Harper could stand to improve.</p>
<h2>It's Possible to Write Too Much</h2>
<p>Too much text can be overwhelming.
I have met many people (myself included) who believe that more detail is always better.
That as long as the information exists somewhere, someone will be able to find it.
In reality, that's just not true.</p>
<p>That brings us to our first rule: <strong>good documentation should be simple</strong>.</p>
<p>The worst way to describe a complex system is with complex language.
It's much better to first break it down into simpler parts.
From there, you can build it back up again.</p>
<p>I've found it effective to start by erring on the side of "too much", only to pare it down later.</p>
<p>Needless to say, it's also possible to write too little.</p>
<h2>Focus on <em>Why</em></h2>
<p>The code itself is (or should be) the best description of <em>how</em> the software works.
Documentation should complement this reality by describing <em>why</em> the software works that way.</p>
<p>This might include a little history.
What steps did the original code's author take before arriving on the final solution?
<a href="https://elijahpotter.dev/">A blog</a> is a great way to answer that question.</p>
<p>When describing a bug relevant to some code (like in a pull request), a picture is worth a thousand words.
It's much easier to show a problem than to describe it.
That's why I highly suggest you take a screenshot of the problem.
Even better: record a video recreation of it.</p>]]></description>
            <content:encoded><![CDATA[<h1>Writing Good Documentation</h1>
<p>I believe that good documentation is more important than ever. In an age where large language models have exceptional context windows, it's easy to fall into the trap of believing a system is simpler than it is. To test the limits of their abilities, I'll often feed Harper code in and ask for their explanation of its inner workings.
They've never gotten it anywhere close to correct.</p>
<p>That's because good documentation is more than the sum of its parts. It should contain all the relevant technical details needed to work on a repository, but it should also provide a sense of quality and direction. It should instill in the reader an ability to tell what good code and conduct look like.</p>
<p>I think this is an area where Harper could stand to improve.</p>
<h2>It's Possible to Write Too Much</h2>
<p>Too much text can be overwhelming.
I have met many people (myself included) who believe that more detail is always better.
That as long as the information exists somewhere, someone will be able to find it.
In reality, that's just not true.</p>
<p>That brings us to our first rule: <strong>good documentation should be simple</strong>.</p>
<p>The worst way to describe a complex system is with complex language.
It's much better to first break it down into simpler parts.
From there, you can build it back up again.</p>
<p>I've found it effective to start by erring on the side of "too much", only to pare it down later.</p>
<p>Needless to say, it's also possible to write too little.</p>
<h2>Focus on <em>Why</em></h2>
<p>The code itself is (or should be) the best description of <em>how</em> the software works.
Documentation should complement this reality by describing <em>why</em> the software works that way.</p>
<p>This might include a little history.
What steps did the original code's author take before arriving on the final solution?
<a href="https://elijahpotter.dev/">A blog</a> is a great way to answer that question.</p>
<p>When describing a bug relevant to some code (like in a pull request), a picture is worth a thousand words.
It's much easier to show a problem than to describe it.
That's why I highly suggest you take a screenshot of the problem.
Even better: record a video recreation of it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing a Grammatical Rule for Harper]]></title>
            <link>https://elijahpotter.dev/articles/writing_a_grammatical_rule_for_harper</link>
            <guid>https://elijahpotter.dev/articles/writing_a_grammatical_rule_for_harper</guid>
            <pubDate>Wed, 09 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Writing a Grammatical Rule for Harper</h1>
<p><a href="https://writewithharper.com">Harper</a> is a grammar checker that relies on concrete, legible grammatical rules.
In doing so, we make Harper's inner workings fundamentally clear, which allows us to guarantee privacy and speed and, most importantly, to remain <strong>impartial</strong>.</p>
<p>Writing additional rules is one of the best (and <em><strong>easiest</strong></em>) ways you can contribute to the open source project.
Simple rules take just a few minutes and often don't require any understanding of Rust at all—a fact I only cite because it is a common point of concern.</p>
<p>Instead of throwing a wall of text in your face, I'm breaking this "guide" of sorts into three simple sections.
You don't need to read all three—in fact, I would recommend against it.</p>
<p>The only thing you need from here is an idea of the grammatical rule you want to add to Harper.
Don't have one in mind?
Visit our <a href="https://github.com/Automattic/harper/issues?q=is%3Aissue%20state%3Aopen%20label%3Aenhancement%20label%3Aharper-core%20label%3Alinting">issue board</a> to find a potential rule that piques your interest.</p>
<p>The three paths:</p>
<ol>
<li><a href="./writing_a_phrase_correction_for_harper"><strong>A "phrase correction"</strong></a>. These are for the simplest grammatical rules. Use one of these in cases where semantic meaning and context aren't important.</li>
<li><a href="./writing_an_expression_rule_for_harper"><strong>An <code>ExprLinter</code></strong></a>. These are for more complex rules. Use one of these in cases where semantic meaning or context <strong>are</strong> important, and you don't need access to information wider than clause-level. Takes a little bit to learn, but are extremely powerful.</li>
<li><strong>A plain <code>Linter</code></strong>. These are often used for rules that involve punctuation. It requires the most Rust knowledge but the least Harper-specific knowledge. I'm going to hold off on writing a guide for these until I hear a real desire to learn about them.</li>
</ol>
<p>These guides will focus more on the process of writing a rule for Harper, not the technical details of wiring it up.
For the latter, see our <a href="https://writewithharper.com/docs/contributors/author-a-rule">official documentation.</a></p>]]></description>
            <content:encoded><![CDATA[<h1>Writing a Grammatical Rule for Harper</h1>
<p><a href="https://writewithharper.com">Harper</a> is a grammar checker that relies on concrete, legible grammatical rules.
In doing so, we make Harper's inner workings fundamentally clear, which allows us to guarantee privacy and speed and, most importantly, to remain <strong>impartial</strong>.</p>
<p>Writing additional rules is one of the best (and <em><strong>easiest</strong></em>) ways you can contribute to the open source project.
Simple rules take just a few minutes and often don't require any understanding of Rust at all—a fact I only cite because it is a common point of concern.</p>
<p>Instead of throwing a wall of text in your face, I'm breaking this "guide" of sorts into three simple sections.
You don't need to read all three—in fact, I would recommend against it.</p>
<p>The only thing you need from here is an idea of the grammatical rule you want to add to Harper.
Don't have one in mind?
Visit our <a href="https://github.com/Automattic/harper/issues?q=is%3Aissue%20state%3Aopen%20label%3Aenhancement%20label%3Aharper-core%20label%3Alinting">issue board</a> to find a potential rule that piques your interest.</p>
<p>The three paths:</p>
<ol>
<li><a href="./writing_a_phrase_correction_for_harper"><strong>A "phrase correction"</strong></a>. These are for the simplest grammatical rules. Use one of these in cases where semantic meaning and context aren't important.</li>
<li><a href="./writing_an_expression_rule_for_harper"><strong>An <code>ExprLinter</code></strong></a>. These are for more complex rules. Use one of these in cases where semantic meaning or context <strong>are</strong> important, and you don't need access to information wider than clause-level. Takes a little bit to learn, but are extremely powerful.</li>
<li><strong>A plain <code>Linter</code></strong>. These are often used for rules that involve punctuation. It requires the most Rust knowledge but the least Harper-specific knowledge. I'm going to hold off on writing a guide for these until I hear a real desire to learn about them.</li>
</ol>
<p>These guides will focus more on the process of writing a rule for Harper, not the technical details of wiring it up.
For the latter, see our <a href="https://writewithharper.com/docs/contributors/author-a-rule">official documentation.</a></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing a Phrase Correction for Harper]]></title>
            <link>https://elijahpotter.dev/articles/writing_a_phrase_correction_for_harper</link>
            <guid>https://elijahpotter.dev/articles/writing_a_phrase_correction_for_harper</guid>
            <pubDate>Wed, 09 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Writing a Phrase Correction for Harper</h1>
<p><em>This is part of a series.
<a href="./writing_a_grammatical_rule_for_harper">Go to the start.</a></em></p>
<p>There are several ways to add a grammatical rule to Harper.
This post aims to outline the easiest (and most constrained): a "phrase correction".</p>
<p>Sometimes you'll see a simple but common grammatical error that doesn't have much to do with any broader context.
For example, I often mistype "in the" as "int he".
It happens quite often, and there isn't a broad pattern at play here.
This is a perfect candidate for a "phrase correction".</p>
<p>From a technical point of view, a "phrase correction" is just a mapping from one or more phrases to another set of phrases.
But they're more than a simple search-and-replace.
Under the hood, Harper will account for various capitalization and stylistic changes that can be difficult to cover manually.
Fortunately, they're pretty easy to add to Harper.</p>
<p>Before we begin, make sure you properly <a href="https://writewithharper.com/docs/contributors/environment">set up your environment</a>.</p>
<p>Open up the Harper <a href="https://github.com/automattic/harper">monorepo</a>.
In <code>harper-core/src/linting/phrase_corrections/mod.rs</code>, you'll find a list of entries that look somewhat like this:</p>
<pre><code class="hljs language-rust"><span class="hljs-string">"GildedAge"</span> => (
    [<span class="hljs-string">"guilded age"</span>],
    [<span class="hljs-string">"Gilded Age"</span>],
    <span class="hljs-string">"The period of economic prosperity is called the `Gilded Age`."</span>,
    <span class="hljs-string">"If referring to the period of economic prosperity, the correct term is `Gilded Age`."</span>
),
<span class="hljs-string">"GoingTo"</span> => (
    <span class="hljs-comment">// The value to map _from_.</span>
    [<span class="hljs-string">"gong to"</span>], 
    <span class="hljs-comment">// The suggestions to present to the user, which replace the problematic text.</span>
    [<span class="hljs-string">"going to"</span>], 
    <span class="hljs-comment">// The message for the user.</span>
    <span class="hljs-string">"Did you mean `going to`?"</span>,
    <span class="hljs-comment">// The rule description to be shown in settings pages.</span>
    <span class="hljs-string">"Corrects `gong to` to the intended phrase `going to`."</span>
),
<span class="hljs-string">"GotRidOff"</span> => (
    [<span class="hljs-string">"got rid off"</span>, <span class="hljs-string">"got ride of"</span>, <span class="hljs-string">"got ride off"</span>],
    [<span class="hljs-string">"got rid of"</span>],
    <span class="hljs-string">"Did you mean `got rid of`?"</span>,
    <span class="hljs-string">"Ensures `got rid of` is used instead of `got rid off`."</span>
),
</code></pre>
<p>Each of these is a "phrase correction".
To add one for the problem I outlined above, we just need to append to the end of the list and open <a href="./never_wait">a pull request</a>.</p>
<pre><code class="hljs language-rust"><span class="hljs-string">"InThe"</span> => (
    [<span class="hljs-string">"int he"</span>],
    [<span class="hljs-string">"in the"</span>],
    <span class="hljs-string">"Did you mean `in the`?"</span>,
    <span class="hljs-string">"Detects and corrects a spacing error where `in the` is mistakenly written as `int he`. Proper spacing is essential for readability and grammatical correctness in common phrases."</span>
),
</code></pre>
<p>If you want to go the extra mile, we'd really appreciate if you added one or two test cases to <code>harper-core/src/linting/phrase_corrections/tests.rs</code> to make sure everything works as expected:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">corrects_int_he</span>() {
    <span class="hljs-title function_ invoke__">assert_suggestion_result</span>(
        <span class="hljs-string">"That pizza stayed int he box."</span>,
        <span class="hljs-title function_ invoke__">lint_group</span>(),
        <span class="hljs-string">"That pizza stayed in the box."</span>,
    );
}
</code></pre>]]></description>
            <content:encoded><![CDATA[<h1>Writing a Phrase Correction for Harper</h1>
<p><em>This is part of a series.
<a href="./writing_a_grammatical_rule_for_harper">Go to the start.</a></em></p>
<p>There are several ways to add a grammatical rule to Harper.
This post aims to outline the easiest (and most constrained): a "phrase correction".</p>
<p>Sometimes you'll see a simple but common grammatical error that doesn't have much to do with any broader context.
For example, I often mistype "in the" as "int he".
It happens quite often, and there isn't a broad pattern at play here.
This is a perfect candidate for a "phrase correction".</p>
<p>From a technical point of view, a "phrase correction" is just a mapping from one or more phrases to another set of phrases.
But they're more than a simple search-and-replace.
Under the hood, Harper will account for various capitalization and stylistic changes that can be difficult to cover manually.
Fortunately, they're pretty easy to add to Harper.</p>
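<p>To picture the capitalization handling, here is a small, hypothetical sketch (illustrative only, not Harper's actual implementation): a correction applied at the start of a sentence should keep the original's capital letter.</p>

```rust
// Illustrative sketch only: not Harper's real code.
// Applies a replacement while copying the capitalization
// of the first character of the original match.
fn apply_preserving_case(original: &str, replacement: &str) -> String {
    let mut out = String::new();
    let mut repl_chars = replacement.chars();
    if let (Some(orig_first), Some(repl_first)) =
        (original.chars().next(), repl_chars.next())
    {
        if orig_first.is_uppercase() {
            out.extend(repl_first.to_uppercase());
        } else {
            out.extend(repl_first.to_lowercase());
        }
    }
    out.extend(repl_chars);
    out
}

fn main() {
    // "Int he" at a sentence start keeps its capital letter.
    assert_eq!(apply_preserving_case("Int he", "in the"), "In the");
    // Mid-sentence, the correction stays lowercase.
    assert_eq!(apply_preserving_case("int he", "in the"), "in the");
    println!("ok");
}
```

<p>This is only one of the adjustments a phrase correction makes for you automatically, which is why writing one by hand as a search-and-replace would be error-prone.</p>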
<p>Before we begin, make sure you properly <a href="https://writewithharper.com/docs/contributors/environment">set up your environment</a>.</p>
<p>Open up the Harper <a href="https://github.com/automattic/harper">monorepo</a>.
In <code>harper-core/src/linting/phrase_corrections/mod.rs</code>, you'll find a list of entries that look somewhat like this:</p>
<pre><code class="hljs language-rust"><span class="hljs-string">"GildedAge"</span> => (
    [<span class="hljs-string">"guilded age"</span>],
    [<span class="hljs-string">"Gilded Age"</span>],
    <span class="hljs-string">"The period of economic prosperity is called the `Gilded Age`."</span>,
    <span class="hljs-string">"If referring to the period of economic prosperity, the correct term is `Gilded Age`."</span>
),
<span class="hljs-string">"GoingTo"</span> => (
    <span class="hljs-comment">// The value to map _from_.</span>
    [<span class="hljs-string">"gong to"</span>], 
    <span class="hljs-comment">// The suggestions to present to the user, which replace the problematic text.</span>
    [<span class="hljs-string">"going to"</span>], 
    <span class="hljs-comment">// The message for the user.</span>
    <span class="hljs-string">"Did you mean `going to`?"</span>,
    <span class="hljs-comment">// The rule description to be shown in settings pages.</span>
    <span class="hljs-string">"Corrects `gong to` to the intended phrase `going to`."</span>
),
<span class="hljs-string">"GotRidOff"</span> => (
    [<span class="hljs-string">"got rid off"</span>, <span class="hljs-string">"got ride of"</span>, <span class="hljs-string">"got ride off"</span>],
    [<span class="hljs-string">"got rid of"</span>],
    <span class="hljs-string">"Did you mean `got rid of`?"</span>,
    <span class="hljs-string">"Ensures `got rid of` is used instead of `got rid off`."</span>
),
</code></pre>
<p>Each of these is a "phrase correction".
To add one for the problem I outlined above, we just need to append to the end of the list and open <a href="./never_wait">a pull request</a>.</p>
<pre><code class="hljs language-rust"><span class="hljs-string">"InThe"</span> => (
    [<span class="hljs-string">"int he"</span>],
    [<span class="hljs-string">"in the"</span>],
    <span class="hljs-string">"Did you mean `in the`?"</span>,
    <span class="hljs-string">"Detects and corrects a spacing error where `in the` is mistakenly written as `int he`. Proper spacing is essential for readability and grammatical correctness in common phrases."</span>
),
</code></pre>
<p>If you want to go the extra mile, we'd really appreciate if you added one or two test cases to <code>harper-core/src/linting/phrase_corrections/tests.rs</code> to make sure everything works as expected:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">corrects_int_he</span>() {
    <span class="hljs-title function_ invoke__">assert_suggestion_result</span>(
        <span class="hljs-string">"That pizza stayed int he box."</span>,
        <span class="hljs-title function_ invoke__">lint_group</span>(),
        <span class="hljs-string">"That pizza stayed in the box."</span>,
    );
}
</code></pre>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Writing an Expression Rule for Harper]]></title>
            <link>https://elijahpotter.dev/articles/writing_an_expression_rule_for_harper</link>
            <guid>https://elijahpotter.dev/articles/writing_an_expression_rule_for_harper</guid>
            <pubDate>Wed, 09 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Writing an Expression Rule for Harper</h1>
<p><em>This is part of a series.
<a href="./writing_a_grammatical_rule_for_harper">Go to the start.</a></em></p>
<p>Expression rules (or more commonly, <code>ExprLinter</code>s) are Harper rules that use declarative expressions to find and fix grammatical errors.
They're halfway between a "phrase correction" and manually implementing <code>Linter</code>.</p>
<p>Make sure you properly <a href="https://writewithharper.com/docs/contributors/environment">set up your environment</a>.</p>
<p>Before we get started, let's take a look at the <code>ExprLinter</code> trait.
Here's what it looks like at the time of writing this post.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// A trait that searches for tokens that fulfil [`Expr`]s in a [`Document`].</span>
<span class="hljs-comment">///</span>
<span class="hljs-comment">/// Makes use of [`TokenStringExt::iter_chunks`] to avoid matching across sentence or clause</span>
<span class="hljs-comment">/// boundaries.</span>
<span class="hljs-meta">#[blanket(derive(Box))]</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">trait</span> <span class="hljs-title class_">ExprLinter</span>: LSend {
    <span class="hljs-comment">/// A simple getter for the expression you want Harper to search for.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr;
    <span class="hljs-comment">/// If any portions of a [`Document`] match [`Self::expr`], they are passed through [`ExprLinter::match_to_lint`] to be</span>
    <span class="hljs-comment">/// transformed into a [`Lint`] for editor consumption.</span>
    <span class="hljs-comment">///</span>
    <span class="hljs-comment">/// This function may return `None` to elect _not_ to produce a lint.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint>;
    <span class="hljs-comment">/// A user-facing description of what kinds of grammatical errors this rule looks for.</span>
    <span class="hljs-comment">/// It is usually shown in settings menus.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-type">str</span>;
}
</code></pre>
<p>The structure of the trait reveals some of the behind-the-scenes work Harper is doing for you.
There are three phases:</p>
<ol>
<li>You provide Harper an <code>Expr</code>.
It will iterate through the document, looking for token sequences that match your expression.</li>
<li>Any and all matches are passed to <code>match_to_lint</code>.
From there, you can perform optional additional validation to confirm that the tokens really do represent a grammatical error.
If they don't, return <code>None</code>.
Otherwise, return a <code>Lint</code> with any suggestions that may fix the problem.</li>
<li>Harper will handle everything else. It will show UI and settings menus to the user and reformat text.
It will also perform aggressive caching on the first two steps, so any modifications to the document have a negligible performance impact.</li>
</ol>
<h2>Let's Get Started</h2>
<p>Now that we've reviewed the essentials, let's implement an <code>ExprLinter</code>.</p>
<p>Before we can write a single line of code, we need a grammatical rule of interest.
I'm going to pay a visit to the Harper <a href="https://github.com/Automattic/harper/issues?q=is%3Aissue%20state%3Aopen%20label%3Aenhancement%20label%3Aharper-core%20label%3Alinting">issue board</a>.</p>
<p>After looking through a few options, I think <a href="https://github.com/Automattic/harper/issues/1513">#1513</a> is a good candidate.
We are looking for missing prepositions between an adjective and a noun.</p>
<p>To get started, we'll create a file under <code>harper-core/src/linting</code> called <code>missing_preposition.rs</code> and add it to the parent Rust module.
I'll paste the template into the file:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">pub</span> <span class="hljs-keyword">struct</span> <span class="hljs-title class_">MissingPreposition</span> {
    expr: <span class="hljs-type">Box</span>&#x3C;<span class="hljs-keyword">dyn</span> Expr>,
}

<span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = todo!();

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}


<span class="hljs-keyword">impl</span> <span class="hljs-title class_">ExprLinter</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr {
        <span class="hljs-keyword">self</span>.expr.<span class="hljs-title function_ invoke__">as_ref</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], _source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint> {
        <span class="hljs-built_in">unimplemented!</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span> {
        <span class="hljs-built_in">unimplemented!</span>()
    }
}
</code></pre>
<p>I like to start by building out a few test cases before working on the actual code.
We get some for free from the GitHub issue:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">fixes_issue_1513</span>() {
    <span class="hljs-title function_ invoke__">assert_lint_count</span>(
        <span class="hljs-string">"The city is famous its beaches."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-number">1</span>,
    );
    <span class="hljs-title function_ invoke__">assert_lint_count</span>(
        <span class="hljs-string">"The students are interested learning."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-number">1</span>,
    );
}
</code></pre>
<p>Obviously, these tests will fail if we try to run <code>cargo test</code>, but at this point you should do so anyway to make sure your toolchain is working.</p>
<h2>Writing our Expression</h2>
<p>The heart of this grammatical rule is the <code>Expr</code> (pronounced <em>expression</em>).
There are a number of ways to go about making one of these.
The simplest (and most common by far) is to put together a <a href="https://docs.rs/harper-core/latest/harper_core/expr/struct.SequenceExpr.html"><code>SequenceExpr</code></a>.</p>
<p>In our case, we're looking for missing prepositions between an adjective and a noun.
A good expression to start with could look like:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::ADJ]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]));

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}
</code></pre>
<p>We're using a <code>UPOSSet</code> here, which is another kind of <code>Expr</code> that looks for specific parts of speech.
The name derives from the <a href="https://universaldependencies.org/u/pos/index.html">Universal Dependencies tag system</a>.
Any tokens tagged with any of the options we've provided to the <code>UPOSSet</code> will match.</p>
<p>However, it's easy to create an example that this expression matches but that doesn't contain a grammatical error.
We call this a false positive.
Let's write one and add it to our test suite.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">allows_terrible_stuff</span>() {
    <span class="hljs-title function_ invoke__">assert_no_lints</span>(
        <span class="hljs-string">"Either it was terrible stuff or the whiskey distorted things."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
    );
}
</code></pre>
<p>From here, you should use your brain to continuously refine the expression into something that
maintains a low false-positive rate while remaining useful.
Here's what I settled on:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
            .<span class="hljs-title function_ invoke__">then</span>(
                AnchorStart.<span class="hljs-title function_ invoke__">or</span>(SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
                    .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::DET]))
                    .<span class="hljs-title function_ invoke__">t_ws</span>()),
            )
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::AUX]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::ADJ]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]))
            .<span class="hljs-title function_ invoke__">then_optional</span>(AnyPattern)
            .<span class="hljs-title function_ invoke__">then_optional</span>(AnyPattern);

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}
</code></pre>
<p>Now that we have an effective expression as a base, let's fill out the remaining fields.
I found checking for an adposition reduced the false-positive rate, and it was easiest to add it to the <code>match_to_lint</code> function.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">ExprLinter</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr {
        <span class="hljs-keyword">self</span>.expr.<span class="hljs-title function_ invoke__">as_ref</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], _source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint> {
        <span class="hljs-keyword">if</span> matched_tokens.<span class="hljs-title function_ invoke__">last</span>()?.kind.<span class="hljs-title function_ invoke__">is_upos</span>(UPOS::ADP) {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>;
        }

        <span class="hljs-title function_ invoke__">Some</span>(Lint {
            span: matched_tokens[<span class="hljs-number">2</span>..<span class="hljs-number">4</span>].<span class="hljs-title function_ invoke__">span</span>()?,
            lint_kind: LintKind::Miscellaneous,
            suggestions: <span class="hljs-built_in">vec!</span>[],
            message: <span class="hljs-string">"You may be missing a preposition here."</span>.<span class="hljs-title function_ invoke__">to_owned</span>(),
            priority: <span class="hljs-number">31</span>,
        })
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span> {
        <span class="hljs-string">"Locates potentially missing prepositions."</span>
    }
}
</code></pre>
<p>That's it!
We've written our rule.</p>
<p>Don't forget to <a href="https://writewithharper.com/docs/contributors/author-a-rule#Register-Your-Rule">register your rule</a> and add some more tests before opening a PR.
Make sure you take a look at the <a href="https://github.com/Automattic/harper/pull/1530">pull request</a> to see the finished rule.</p>]]></description>
            <content:encoded><![CDATA[<h1>Writing an Expression Rule for Harper</h1>
<p><em>This is part of a series.
<a href="./writing_a_grammatical_rule_for_harper">Go to the start.</a></em></p>
<p>Expression rules (or more commonly, <code>ExprLinter</code>s) are Harper rules that use declarative expressions to find and fix grammatical errors.
They're halfway between a "phrase correction" and manually implementing <code>Linter</code>.</p>
<p>Make sure you properly <a href="https://writewithharper.com/docs/contributors/environment">set up your environment</a>.</p>
<p>Before we get started, let's take a look at the <code>ExprLinter</code> trait.
Here's what it looks like at the time of writing this post.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// A trait that searches for tokens that fulfil [`Expr`]s in a [`Document`].</span>
<span class="hljs-comment">///</span>
<span class="hljs-comment">/// Makes use of [`TokenStringExt::iter_chunks`] to avoid matching across sentence or clause</span>
<span class="hljs-comment">/// boundaries.</span>
<span class="hljs-meta">#[blanket(derive(Box))]</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">trait</span> <span class="hljs-title class_">ExprLinter</span>: LSend {
    <span class="hljs-comment">/// A simple getter for the expression you want Harper to search for.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr;
    <span class="hljs-comment">/// If any portions of a [`Document`] match [`Self::expr`], they are passed through [`ExprLinter::match_to_lint`] to be</span>
    <span class="hljs-comment">/// transformed into a [`Lint`] for editor consumption.</span>
    <span class="hljs-comment">///</span>
    <span class="hljs-comment">/// This function may return `None` to elect _not_ to produce a lint.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint>;
    <span class="hljs-comment">/// A user-facing description of what kinds of grammatical errors this rule looks for.</span>
    <span class="hljs-comment">/// It is usually shown in settings menus.</span>
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-type">str</span>;
}
</code></pre>
<p>The structure of the trait reveals some of the behind-the-scenes work Harper is doing for you.
There are three phases:</p>
<ol>
<li>You provide Harper an <code>Expr</code>.
It will iterate through the document, looking for token sequences that match your expression.</li>
<li>Any and all matches are passed to <code>match_to_lint</code>.
From there, you can perform optional additional validation to confirm that the tokens really do represent a grammatical error.
If not, return <code>None</code>.
Otherwise, return a <code>Lint</code> with any suggestions that may fix the problem.</li>
<li>Harper will handle everything else. It will show the lint in your editor's UI, reformat text, and present settings menus to the user.
It will also perform aggressive caching on the first two steps, so any modifications to the document have a negligible performance impact.</li>
</ol>
<h2>Let's Get Started</h2>
<p>Now that we've reviewed the essentials, let's implement an <code>ExprLinter</code>.</p>
<p>Before we can write a single line of code, we need a grammatical rule of interest.
I'm going to pay a visit to the Harper <a href="https://github.com/Automattic/harper/issues?q=is%3Aissue%20state%3Aopen%20label%3Aenhancement%20label%3Aharper-core%20label%3Alinting">issue board</a>.</p>
<p>After looking through a few options, I think <a href="https://github.com/Automattic/harper/issues/1513">#1513</a> is a good candidate.
We are looking for missing prepositions between an adjective and a subject.</p>
<p>To get started, we'll create a file under <code>harper-core/src/linting</code> called <code>missing_preposition.rs</code> and add it to the parent Rust module.
I'll paste the template into the file:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">pub</span> <span class="hljs-keyword">struct</span> <span class="hljs-title class_">MissingPreposition</span> {
    expr: <span class="hljs-type">Box</span>&#x3C;<span class="hljs-keyword">dyn</span> Expr>,
}

<span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = todo!();

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}


<span class="hljs-keyword">impl</span> <span class="hljs-title class_">ExprLinter</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr {
        <span class="hljs-keyword">self</span>.expr.<span class="hljs-title function_ invoke__">as_ref</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], _source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint> {
        <span class="hljs-built_in">unimplemented!</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span> {
        <span class="hljs-built_in">unimplemented!</span>()
    }
}
</code></pre>
<p>I like to start by building out a few test cases before working on the actual code.
We get some for free from the GitHub issue:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">fixes_issue_1513</span>() {
    <span class="hljs-title function_ invoke__">assert_lint_count</span>(
        <span class="hljs-string">"The city is famous its beaches."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-number">1</span>,
    );
    <span class="hljs-title function_ invoke__">assert_lint_count</span>(
        <span class="hljs-string">"The students are interested learning."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-number">1</span>,
    );
}
</code></pre>
<p>Obviously, these tests will fail if we try to run <code>cargo test</code>, but at this point you should do so anyway to make sure your toolchain is working.</p>
<h2>Writing our Expression</h2>
<p>The heart of this grammatical rule is the <code>Expr</code> (pronounced <em>expression</em>).
There are a number of ways to go about making one of these.
The simplest (and most common by far) is to put together a <a href="https://docs.rs/harper-core/latest/harper_core/expr/struct.SequenceExpr.html"><code>SequenceExpr</code></a>.</p>
<p>In our case, we're looking for missing prepositions between an adjective and a noun.
A good expression to start with could look like:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::ADJ]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]));

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}
</code></pre>
<p>We're using a <code>UPOSSet</code> here, which is another kind of <code>Expr</code> that looks for specific parts of speech.
The name derives from the <a href="https://universaldependencies.org/u/pos/index.html">Universal Dependencies tag system</a>.
Any tokens tagged with any of the options we've provided to the <code>UPOSSet</code> will match.</p>
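<p>To make the mechanics concrete, here is a self-contained toy sketch of the same idea. This is <em>not</em> Harper's API, just an illustration of matching an adjective directly followed by a noun-like token:</p>

```rust
// Toy model of the UPOSSet-based sequence (illustration only, not Harper's API).
#[derive(Clone, Copy, PartialEq, Debug)]
enum Upos {
    Adj,
    Noun,
    Pron,
    Propn,
    Adp,
    Other,
}

/// Returns the index of the first adjective immediately followed by a
/// noun-like token (noun, pronoun, or proper noun), if any.
fn adj_then_nounish(tags: &[Upos]) -> Option<usize> {
    tags.windows(2)
        .position(|w| w[0] == Upos::Adj && matches!(w[1], Upos::Noun | Upos::Pron | Upos::Propn))
}

fn main() {
    // "The city is famous its beaches." -> ADJ ("famous") then PRON ("its"): suspicious.
    let bad = [Upos::Other, Upos::Noun, Upos::Other, Upos::Adj, Upos::Pron, Upos::Noun];
    assert_eq!(adj_then_nounish(&bad), Some(3));

    // "The city is famous for its beaches." -> the ADP ("for") breaks the pair.
    let good = [
        Upos::Other,
        Upos::Noun,
        Upos::Other,
        Upos::Adj,
        Upos::Adp,
        Upos::Pron,
        Upos::Noun,
    ];
    assert_eq!(adj_then_nounish(&good), None);
    println!("toy matcher ok");
}
```

<p>The real <code>SequenceExpr</code> additionally accounts for whitespace tokens between words, which this toy version ignores.</p>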
<p>However, it's easy to create an example that this expression matches but that doesn't contain a grammatical error.
We call this a false positive.
Let's write one and add it to our test suite.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">allows_terrible_stuff</span>() {
    <span class="hljs-title function_ invoke__">assert_no_lints</span>(
        <span class="hljs-string">"Either it was terrible stuff or the whiskey distorted things."</span>,
        MissingPreposition::<span class="hljs-title function_ invoke__">default</span>(),
    );
}
</code></pre>
<p>From here, you should use your brain to continuously refine the expression into something that
maintains a low false-positive rate while remaining useful.
Here's what I settled on:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">Default</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">default</span>() <span class="hljs-punctuation">-></span> <span class="hljs-keyword">Self</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">expr</span> = SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
            .<span class="hljs-title function_ invoke__">then</span>(
                AnchorStart.<span class="hljs-title function_ invoke__">or</span>(SequenceExpr::<span class="hljs-title function_ invoke__">default</span>()
                    .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::DET]))
                    .<span class="hljs-title function_ invoke__">t_ws</span>()),
            )
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::AUX]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::ADJ]))
            .<span class="hljs-title function_ invoke__">t_ws</span>()
            .<span class="hljs-title function_ invoke__">then</span>(UPOSSet::<span class="hljs-title function_ invoke__">new</span>(&#x26;[UPOS::NOUN, UPOS::PRON, UPOS::PROPN]))
            .<span class="hljs-title function_ invoke__">then_optional</span>(AnyPattern)
            .<span class="hljs-title function_ invoke__">then_optional</span>(AnyPattern);

        <span class="hljs-keyword">Self</span> {
            expr: <span class="hljs-type">Box</span>::<span class="hljs-title function_ invoke__">new</span>(expr),
        }
    }
}
</code></pre>
<p>Now that we have an effective expression as a base, let's fill out the remaining fields.
I found checking for an adposition reduced the false-positive rate, and it was easiest to add it to the <code>match_to_lint</code> function.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">impl</span> <span class="hljs-title class_">ExprLinter</span> <span class="hljs-keyword">for</span> <span class="hljs-title class_">MissingPreposition</span> {
    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">expr</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-keyword">dyn</span> Expr {
        <span class="hljs-keyword">self</span>.expr.<span class="hljs-title function_ invoke__">as_ref</span>()
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">match_to_lint</span>(&#x26;<span class="hljs-keyword">self</span>, matched_tokens: &#x26;[Token], _source: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;Lint> {
        <span class="hljs-keyword">if</span> matched_tokens.<span class="hljs-title function_ invoke__">last</span>()?.kind.<span class="hljs-title function_ invoke__">is_upos</span>(UPOS::ADP) {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>;
        }

        <span class="hljs-title function_ invoke__">Some</span>(Lint {
            span: matched_tokens[<span class="hljs-number">2</span>..<span class="hljs-number">4</span>].<span class="hljs-title function_ invoke__">span</span>()?,
            lint_kind: LintKind::Miscellaneous,
            suggestions: <span class="hljs-built_in">vec!</span>[],
            message: <span class="hljs-string">"You may be missing a preposition here."</span>.<span class="hljs-title function_ invoke__">to_owned</span>(),
            priority: <span class="hljs-number">31</span>,
        })
    }

    <span class="hljs-keyword">fn</span> <span class="hljs-title function_">description</span>(&#x26;<span class="hljs-keyword">self</span>) <span class="hljs-punctuation">-></span> &#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span> {
        <span class="hljs-string">"Locates potentially missing prepositions."</span>
    }
}
</code></pre>
<p>That's it!
We've written our rule.</p>
<p>Don't forget to <a href="https://writewithharper.com/docs/contributors/author-a-rule#Register-Your-Rule">register your rule</a> and add some more tests before opening a PR.
Make sure you take a look at the <a href="https://github.com/Automattic/harper/pull/1530">pull request</a> to see the finished rule.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Adding a Programming Language to Harper]]></title>
            <link>https://elijahpotter.dev/articles/adding_a_programming_language_to_harper</link>
            <guid>https://elijahpotter.dev/articles/adding_a_programming_language_to_harper</guid>
            <pubDate>Mon, 07 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Adding a Programming Language to Harper</h1>
<p>When I started the <a href="https://github.com/automattic/harper">Harper project</a> I knew I wanted to be able to use it for the comments in my code.
First, because I knew these comments would become part of our official documentation over time, and second, because I hoped it would encourage me to write more.
Over time, this has become one of the most prized features of the software, attracting tens of thousands of developers.</p>
<p>The common problem, however, is that there have always been programming languages that our LSP doesn't support.
One of the <a href="https://github.com/Automattic/harper/issues/79">oldest issues</a> on GitHub is about this.</p>
<p>This post is a guide for adding a new programming language to the Harper language server.</p>
<p>Why isn't it in the official documentation?
While the information contained within this guide will remain relevant to the project for a long time, I don't expect each identifier or file path to remain the same.
If you think it <em>would</em> better serve potential contributors to place this guide on the <a href="https://writewithharper.com">main site</a>, let me know.</p>
<h2>Introduction to Tree-sitter</h2>
<p><a href="https://tree-sitter.github.io/tree-sitter/">Tree-sitter</a> is a fantastic framework for building fault-tolerant language parsers.
That means it is still able to parse the majority of a document, even if it contains portions of invalid syntax.</p>
<p>This is important for Harper, since we expect people to use Harper <em>while</em> they're programming.
It should be OK if some of their code is incorrect, since we only care about their comments.</p>
<p>There are also a wide variety of Tree-sitter parsers available on <a href="https://crates.io/">crates.io</a>, ripe for our consumption.
If you want to add a language to Harper, this is the easiest way to do so.</p>
<h2>Step 0: Avoid Duplicating Work</h2>
<p>You're interested in adding support for a programming language.
If that's the case, it's possible other people are too.
Make sure no one else has <a href="https://elijahpotter.dev/articles/never_wait">opened a PR</a> or <a href="https://writewithharper.com/docs/integrations/language-server#Supported-Languages">has already merged support</a> for the language you have in mind.</p>
<h2>Step 1: Find a Grammar</h2>
<p>Look for an existing grammar on <a href="https://crates.io">crates.io</a>.
By convention, they tend to be named <code>tree-sitter-&#x3C;language></code>, where <code>&#x3C;language></code> is the language you're looking for. For example, <a href="https://crates.io/crates/tree-sitter-java"><code>tree-sitter-java</code></a> is for Java and <a href="https://crates.io/crates/tree-sitter-rust"><code>tree-sitter-rust</code></a> is for Rust.</p>
<p>If you would rather write your own grammar, make sure it is eventually published on <code>crates.io</code>.
<code>harper-ls</code> binaries are often consumed from <code>crates.io</code>, which requires that all upstream dependencies come from the same source.</p>
<h2>Step 2: Import and Wire In</h2>
<p>Harper's comment support lies in the <code>harper-comments</code> crate in <a href="https://github.com/automattic/harper/">the monorepo</a>.
Import the grammar's crate into the project with Cargo.</p>
<pre><code class="hljs language-bash">cargo add &#x3C;CRATE-NAME>
</code></pre>
<p>Then, add lines to the relevant functions in <code>harper-comments/src/comment_parser.rs</code>.
Make sure you visit the <a href="https://microsoft.github.io/language-server-protocol/">Language Server Protocol Specification</a> to obtain the correct language ID.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">pub</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">new_from_language_id</span>(
    language_id: &#x26;<span class="hljs-type">str</span>,
    markdown_options: MarkdownOptions,
) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;<span class="hljs-keyword">Self</span>> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">language</span> = <span class="hljs-keyword">match</span> language_id {
        <span class="hljs-string">"cmake"</span> => tree_sitter_cmake::LANGUAGE,
        <span class="hljs-string">"cpp"</span> => tree_sitter_cpp::LANGUAGE,
        <span class="hljs-string">"csharp"</span> => tree_sitter_c_sharp::LANGUAGE,
        <span class="hljs-string">"c"</span> => tree_sitter_c::LANGUAGE,
        <span class="hljs-string">"dart"</span> => harper_tree_sitter_dart::LANGUAGE,
        <span class="hljs-string">"go"</span> => tree_sitter_go::LANGUAGE, <span class="hljs-comment">// Add a line here</span>
</code></pre>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// Convert a provided path to a corresponding Language Server Protocol file</span>
<span class="hljs-comment">/// type.</span>
<span class="hljs-comment">///</span>
<span class="hljs-comment">/// Note to contributors: try to keep this in sync with</span>
<span class="hljs-comment">/// [`Self::new_from_language_id`]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">filename_to_filetype</span>(path: &#x26;Path) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;&#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span>> {
    <span class="hljs-title function_ invoke__">Some</span>(<span class="hljs-keyword">match</span> path.<span class="hljs-title function_ invoke__">extension</span>()?.<span class="hljs-title function_ invoke__">to_str</span>()? {
        <span class="hljs-string">"bash"</span> => <span class="hljs-string">"shellscript"</span>,
        <span class="hljs-string">"c"</span> => <span class="hljs-string">"c"</span>,
        <span class="hljs-string">"cmake"</span> => <span class="hljs-string">"cmake"</span>,
        <span class="hljs-string">"cpp"</span> => <span class="hljs-string">"cpp"</span>,
        <span class="hljs-string">"cs"</span> => <span class="hljs-string">"csharp"</span>, <span class="hljs-comment">// Add a line here</span>
</code></pre>
<h2>Step 3: Testing</h2>
<p>To make sure everything behaves correctly, we need to add some integration tests.
You'll find all the existing ones under <code>harper-comments/tests/language_support_sources</code>.</p>
<p>Find or write several new files under this directory in the language you've added support for.
Add intentional grammatical errors to these files in syntactically relevant places.
We want to make sure that Harper can detect the errors we want and will ignore the errors we do not want.
For example, we might put an error inside an <code>@param</code> tag in JSDoc.
That way we'll know if Harper is not properly ignoring those elements.</p>
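<p>For instance, a minimal fixture might look like this (a hypothetical Rust example; for a new language you'd write the equivalent in that language):</p>

```rust
//! This doc comment contain a deliberate grammatical error for Harper to catch.
//! The code below is intentionally fine, so the expected lint count is exactly 1.

/// Returns a fixed value. (This sentence is grammatically correct.)
pub fn answer() -> u32 {
    42
}

fn main() {
    // The fixture still compiles and runs like any other source file.
    assert_eq!(answer(), 42);
    println!("fixture ok");
}
```

<p>Harper should report one lint for the doc comment and none for the code itself.</p>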
<p>Add new entries to the bottom of <code>harper-comments/tests/language_support.rs</code>.
The second parameter of the <code>create_test!</code> macro is the number of grammatical errors that Harper <strong>should</strong> detect in that file.</p>
<pre><code class="hljs language-rust">create_test!(ignore_shebang_3.sh, <span class="hljs-number">0</span>);
create_test!(ignore_shebang_4.sh, <span class="hljs-number">1</span>);
create_test!(common.mill, <span class="hljs-number">1</span>);
create_test!(basic_kotlin.kt, <span class="hljs-number">0</span>); <span class="hljs-comment">// Add a line here</span>
</code></pre>
<p>From there, you can run <code>cargo test</code> to make sure everything passes.</p>
<h2>Step 4: Document</h2>
<p>To advertise support for the language, there are a couple of additional places that need modification.
Notably:</p>
<ul>
<li>The supported languages table in <code>packages/web/src/routes/docs/integrations/language-server/+page.md</code></li>
<li>The <a href="https://github.com/Automattic/harper/issues/79">GitHub Issue</a></li>
<li>The <code>activationEvents</code> key in the VS Code plugin's manifest: <code>packages/vscode-plugin/package.json</code></li>
</ul>
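<p>For example, the VS Code manifest change is a single new entry in the <code>activationEvents</code> array (the entries shown here are illustrative; the language ID must match the one from the LSP spec):</p>

```json
{
  "activationEvents": [
    "onLanguage:rust",
    "onLanguage:kotlin"
  ]
}
```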
<h2>Done!</h2>
<p>That should be everything.
Open a draft pull request while you work and ping me (<a href="https://github.com/elijah-potter/">elijah-potter</a>) if you have any questions.</p>
<h2>Additional Resources</h2>
<ul>
<li><a href="https://github.com/Automattic/harper/pull/1443">PR for adding Solidity support</a></li>
<li><a href="https://github.com/Automattic/harper/pull/970">PR for adding Scala support</a></li>
</ul>]]></description>
            <content:encoded><![CDATA[<h1>Adding a Programming Language to Harper</h1>
<p>When I started the <a href="https://github.com/automattic/harper">Harper project</a> I knew I wanted to be able to use it for the comments in my code.
First, because I knew these comments would become part of our official documentation over time, and second, because I hoped it would encourage me to write more.
Over time, this has become one of the most prized features of the software, attracting tens of thousands of developers.</p>
<p>The common problem, however, is that there have always been programming languages that our LSP doesn't support.
One of the <a href="https://github.com/Automattic/harper/issues/79">oldest issues</a> on GitHub is about this.</p>
<p>This post is a guide for adding a new programming language to the Harper language server.</p>
<p>Why isn't it in the official documentation?
While the information contained within this guide will remain relevant to the project for a long time, I don't expect each identifier or file path to remain the same.
If you think it <em>would</em> better serve potential contributors to place this guide on the <a href="https://writewithharper.com">main site</a>, let me know.</p>
<h2>Introduction to Tree-sitter</h2>
<p><a href="https://tree-sitter.github.io/tree-sitter/">Tree-sitter</a> is a fantastic framework for building fault-tolerant language parsers.
That means it is still able to parse the majority of a document, even if it contains portions of invalid syntax.</p>
<p>This is important for Harper, since we expect people to use Harper <em>while</em> they're programming.
It should be OK if some of their code is incorrect, since we only care about their comments.</p>
<p>There are also a wide variety of Tree-sitter parsers available on <a href="https://crates.io/">crates.io</a>, ripe for our consumption.
If you want to add a language to Harper, this is the easiest way to do so.</p>
<h2>Step 0: Avoid Duplicating Work</h2>
<p>You're interested in adding support for a programming language.
If that's the case, it's possible other people are too.
Make sure no one else has <a href="https://elijahpotter.dev/articles/never_wait">opened a PR</a> or <a href="https://writewithharper.com/docs/integrations/language-server#Supported-Languages">has already merged support</a> for the language you have in mind.</p>
<h2>Step 1: Find a Grammar</h2>
<p>Look for an existing grammar on <a href="https://crates.io">crates.io</a>.
By convention, they tend to be named <code>tree-sitter-&#x3C;language></code>, where <code>&#x3C;language></code> is the language you're looking for. For example, <a href="https://crates.io/crates/tree-sitter-java"><code>tree-sitter-java</code></a> is for Java and <a href="https://crates.io/crates/tree-sitter-rust"><code>tree-sitter-rust</code></a> is for Rust.</p>
<p>If you would rather write your own grammar, make sure it is eventually published on <code>crates.io</code>.
<code>harper-ls</code> binaries are often consumed from <code>crates.io</code>, which requires that all upstream dependencies come from the same source.</p>
<h2>Step 2: Import and Wire In</h2>
<p>Harper's comment support lies in the <code>harper-comments</code> crate in <a href="https://github.com/automattic/harper/">the monorepo</a>.
Import the grammar's crate into the project with Cargo.</p>
<pre><code class="hljs language-bash">cargo add &#x3C;CRATE-NAME>
</code></pre>
<p>Then, add lines to the relevant functions in <code>harper-comments/src/comment_parser.rs</code>.
Make sure you visit the <a href="https://microsoft.github.io/language-server-protocol/">Language Server Protocol Specification</a> to obtain the correct language ID.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">pub</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">new_from_language_id</span>(
    language_id: &#x26;<span class="hljs-type">str</span>,
    markdown_options: MarkdownOptions,
) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;<span class="hljs-keyword">Self</span>> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">language</span> = <span class="hljs-keyword">match</span> language_id {
        <span class="hljs-string">"cmake"</span> => tree_sitter_cmake::LANGUAGE,
        <span class="hljs-string">"cpp"</span> => tree_sitter_cpp::LANGUAGE,
        <span class="hljs-string">"csharp"</span> => tree_sitter_c_sharp::LANGUAGE,
        <span class="hljs-string">"c"</span> => tree_sitter_c::LANGUAGE,
        <span class="hljs-string">"dart"</span> => harper_tree_sitter_dart::LANGUAGE,
        <span class="hljs-string">"go"</span> => tree_sitter_go::LANGUAGE, <span class="hljs-comment">// Add a line here</span>
</code></pre>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// Convert a provided path to a corresponding Language Server Protocol file</span>
<span class="hljs-comment">/// type.</span>
<span class="hljs-comment">///</span>
<span class="hljs-comment">/// Note to contributors: try to keep this in sync with</span>
<span class="hljs-comment">/// [`Self::new_from_language_id`]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">filename_to_filetype</span>(path: &#x26;Path) <span class="hljs-punctuation">-></span> <span class="hljs-type">Option</span>&#x3C;&#x26;<span class="hljs-symbol">'static</span> <span class="hljs-type">str</span>> {
    <span class="hljs-title function_ invoke__">Some</span>(<span class="hljs-keyword">match</span> path.<span class="hljs-title function_ invoke__">extension</span>()?.<span class="hljs-title function_ invoke__">to_str</span>()? {
        <span class="hljs-string">"bash"</span> => <span class="hljs-string">"shellscript"</span>,
        <span class="hljs-string">"c"</span> => <span class="hljs-string">"c"</span>,
        <span class="hljs-string">"cmake"</span> => <span class="hljs-string">"cmake"</span>,
        <span class="hljs-string">"cpp"</span> => <span class="hljs-string">"cpp"</span>,
        <span class="hljs-string">"cs"</span> => <span class="hljs-string">"csharp"</span>, <span class="hljs-comment">// Add a line here</span>
</code></pre>
<h2>Step 3: Testing</h2>
<p>To make sure everything behaves correctly, we need to add some integration tests.
You'll find all the existing ones under <code>harper-comments/tests/language_support_sources</code>.</p>
<p>Find or write several new files under this directory in the language you've added support for.
Add intentional grammatical errors to these files in syntactically relevant places.
We want to make sure that Harper can detect the errors we want and will ignore the errors we do not want.
For example, we might put an error inside an <code>@param</code> tag in JSDoc.
That way we'll know if Harper is not properly ignoring those elements.</p>
<p>Add new entries to the bottom of <code>harper-comments/tests/language_support.rs</code>.
The second parameter of the <code>create_test!</code> macro is the number of grammatical errors that Harper <strong>should</strong> detect in that file.</p>
<pre><code class="hljs language-rust">create_test!(ignore_shebang_3.sh, <span class="hljs-number">0</span>);
create_test!(ignore_shebang_4.sh, <span class="hljs-number">1</span>);
create_test!(common.mill, <span class="hljs-number">1</span>);
create_test!(basic_kotlin.kt, <span class="hljs-number">0</span>); <span class="hljs-comment">// Add a line here</span>
</code></pre>
<p>From there, you can run <code>cargo test</code> to make sure everything passes.</p>
<h2>Step 4: Document</h2>
<p>To advertise support for the language, there are a couple of additional places that need modification.
Notably:</p>
<ul>
<li>The supported languages table in <code>packages/web/src/routes/docs/integrations/language-server/+page.md</code></li>
<li>The <a href="https://github.com/Automattic/harper/issues/79">GitHub Issue</a></li>
<li>The <code>activationEvents</code> key in the VS Code plugin's manifest: <code>packages/vscode-plugin/package.json</code></li>
</ul>
<h2>Done!</h2>
<p>That should be everything.
Open a draft pull request while you work and ping me (<a href="https://github.com/elijah-potter/">elijah-potter</a>) if you have any questions.</p>
<h2>Additional Resources</h2>
<ul>
<li><a href="https://github.com/Automattic/harper/pull/1443">PR for adding Solidity support</a></li>
<li><a href="https://github.com/Automattic/harper/pull/970">PR for adding Scala support</a></li>
</ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Local-First Software Is Easier to Scale]]></title>
            <link>https://elijahpotter.dev/articles/local-first_software_is_easier_to_scale</link>
            <guid>https://elijahpotter.dev/articles/local-first_software_is_easier_to_scale</guid>
            <pubDate>Fri, 04 Jul 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Local-First Software is Easier to Scale</h1>
<img src="/images/flatirons.webp" alt="The Flatirons">
<p>The title of this post is somewhat misleading. Local-first software rarely needs to be scaled at all.</p>
<p><a href="https://writewithharper.com">Harper</a> recently received a massive increase in both traffic and user count.
How?
By making it to the front page of Hacker News.
If you couldn't tell by the extent to which I talk about this, I'm thrilled.
This amounted to a tremendous amount of free advertising, if nothing else.</p>
<p>I've been thinking a lot about the wondrous benefits of working at the edge.
That's just one term of many that people use to describe Harper: "edge-first".
Others include "local-first" or simply "on-device".
They are all just jargon for a simple idea: We run all the relevant grammar checking code as physically close to the users as possible.
That way, we can serve up suggestions faster than anyone else.
To achieve our goal, we make sure that everything is well-optimized and lean.
Harper barely uses any CPU or memory at all.</p>
<p>What does all this mean in the context of the recent uptick in user acquisition? Imagine, for a moment, we are LanguageTool.
Our software is written in Java, with questionable attention paid to the efficiency of our code.
Since our software requires a big server to run, we have a certain number of servers on standby to handle a certain amount of expected load.</p>
<p>If the number of requests we received were suddenly to double (as it did for the Chrome extension), we would be in big trouble.
To avoid increasing the latency on requests (or dropping requests altogether), we would need to scale up the number of running servers.
That requires not only hiring an expert in cloud architecture, but also additional funding to pay the AWS (or whatever) bill at the end of the month.</p>
<p>Because Harper runs at the edge (no server required), we don't have to worry about that.
In fact, I only noticed that our user count had spiked when I visited Hacker News the next morning.
No hiccups at all.</p>
<p>Lots of cloud providers like to brag about being able to scale with their users. I like to brag about not having to scale at all.</p>]]></description>
            <content:encoded><![CDATA[<h1>Local-First Software is Easier to Scale</h1>
<img src="/images/flatirons.webp" alt="The Flatirons">
<p>The title of this post is somewhat misleading. Local-first software rarely needs to be scaled at all.</p>
<p><a href="https://writewithharper.com">Harper</a> recently received a massive increase in both traffic and user count.
How?
By making it to the front page of Hacker News.
If you couldn't tell by the extent to which I talk about this, I'm thrilled.
This amounted to a tremendous amount of free advertising, if nothing else.</p>
<p>I've been thinking a lot about the wondrous benefits of working at the edge.
That's just one term of many that people use to describe Harper: "edge-first".
Others include "local-first" or simply "on-device".
They are all just jargon for a simple idea: We run all the relevant grammar checking code as physically close to the users as possible.
That way, we can serve up suggestions faster than anyone else.
To achieve our goal, we make sure that everything is well-optimized and lean.
Harper barely uses any CPU or memory at all.</p>
<p>What does all this mean in the context of the recent uptick in user acquisition? Imagine, for a moment, we are LanguageTool.
Our software is written in Java, with questionable attention paid to the efficiency of our code.
Since our software requires a big server to run, we have a certain number of servers on standby to handle a certain amount of expected load.</p>
<p>If the number of requests we received were suddenly to double (as it did for the Chrome extension), we would be in big trouble.
To avoid increasing the latency on requests (or dropping requests altogether), we would need to scale up the number of running servers.
That requires not only hiring an expert in cloud architecture, but also additional funding to pay the AWS (or whatever) bill at the end of the month.</p>
<p>Because Harper runs at the edge (no server required), we don't have to worry about that.
In fact, I only noticed that our user count had spiked when I visited Hacker News the next morning.
No hiccups at all.</p>
<p>Lots of cloud providers like to brag about being able to scale with their users. I like to brag about not having to scale at all.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/flatirons.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Code Ages like Milk]]></title>
            <link>https://elijahpotter.dev/articles/code_ages_like_milk</link>
            <guid>https://elijahpotter.dev/articles/code_ages_like_milk</guid>
            <pubDate>Fri, 27 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Code Ages Like Milk</h1>
<img src="/images/front_range.webp" alt="The Front Range in the summer is hard to beat">
<p>A bold title, no? But it’s true, and it’s something that I (and most other maintainers) have to deal with on a regular basis. Failing to account for this reality can slow down development and dissuade contributors from sticking around.</p>
<p>As the chief maintainer of Harper, one of my main jobs is to act as quality control. I receive <a href="https://github.com/Automattic/harper/pulse/monthly">a number of PRs each week</a>, both from new contributors and old. Given the diverse pool of contributors, it makes sense that I review a diverse pool of code of varying levels of quality.</p>
<p>When I have my head down on technical and attention-intense projects, I have less time to review code. When I have less time, I need to set priorities. The question becomes: Would I rather let code from high-quality, reliable sources pile up, or code from sources of unknown quality?</p>
<p>I usually end up reviewing the high-quality sources first, since they’ll likely require fewer revisions. Fewer revisions means quicker merge means improving the user experience faster. Whether or not this is a good decision is something I’ve been thinking about all week.</p>
<h2>Code Ages</h2>
<p>As code sits stagnant in a PR, it ages. You might say I’m crazy, that there is no way the code itself changes as it sits still in an unmerged and unmodified PR. I’d say you’re right, except for one teeny-tiny detail: Code only has meaning when it sits within a broader context. If that broader context changes (for example, when other PRs are merged) the meaning of the unmodified code does too.</p>
<p>Let me put it another way. If the patch from a PR remains stagnant, but the code it gets patched onto changes, the actual impact of the patch does too. This can cause all sorts of problems, from merge conflicts to erroneous test failures.</p>
<p>The longer code sits in a PR, the more time it usually ends up taking me to get it merged (when I finally have the time to get to it), just from the merge conflicts alone. It’s the Lindy effect rearing its ugly head once more.</p>
<h2>Features Age Too</h2>
<p>Not only does code age when left untouched and unused, so do the features it represents. If a user requests a button that marks some text as bold and they don’t receive that feature promptly, they’ll find an alternative solution, possibly from a competitor.</p>
<p>The rest of the app can evolve too. If you’re an individual contributor, it’s in your best interest for your code to make it to master. If that doesn’t happen quickly enough, another contributor (or, if the software is extensible like Obsidian or WordPress, a plugin author) might beat you to the punch.</p>
<h2>What Can I Do About It?</h2>
<p>I believe the way I’ve been handling this until now has been entirely wrong. I should be allocating more of my time to training new contributors and fielding PRs from established ones. Open source is a team effort.</p>
<p>As for you, dear reader, that’s for you to figure out. The worst way code can age is if it stays in your head. Don’t let your ideas go to waste. <a href="./never_wait">Open that PR</a>, or remind your reviewer to take another look if you haven’t already.</p>]]></description>
            <content:encoded><![CDATA[<h1>Code Ages Like Milk</h1>
<img src="/images/front_range.webp" alt="The Front Range in the summer is hard to beat">
<p>A bold title, no? But it’s true, and it’s something that I (and most other maintainers) have to deal with on a regular basis. Failing to account for this reality can slow down development and dissuade contributors from sticking around.</p>
<p>As the chief maintainer of Harper, one of my main jobs is to act as quality control. I receive <a href="https://github.com/Automattic/harper/pulse/monthly">a number of PRs each week</a>, both from new contributors and old. Given the diverse pool of contributors, it makes sense that I review a diverse pool of code of varying levels of quality.</p>
<p>When I have my head down on technical and attention-intense projects, I have less time to review code. When I have less time, I need to set priorities. The question becomes: Would I rather let code from high-quality, reliable sources pile up, or code from sources of unknown quality?</p>
<p>I usually end up reviewing the high-quality sources first, since they’ll likely require fewer revisions. Fewer revisions means quicker merge means improving the user experience faster. Whether or not this is a good decision is something I’ve been thinking about all week.</p>
<h2>Code Ages</h2>
<p>As code sits stagnant in a PR, it ages. You might say I’m crazy, that there is no way the code itself changes as it sits still in an unmerged and unmodified PR. I’d say you’re right, except for one teeny-tiny detail: Code only has meaning when it sits within a broader context. If that broader context changes (for example, when other PRs are merged) the meaning of the unmodified code does too.</p>
<p>Let me put it another way. If the patch from a PR remains stagnant, but the code it gets patched onto changes, the actual impact of the patch does too. This can cause all sorts of problems, from merge conflicts to erroneous test failures.</p>
<p>The longer code sits in a PR, the more time it usually ends up taking me to get it merged (when I finally have the time to get to it), just from the merge conflicts alone. It’s the Lindy effect rearing its ugly head once more.</p>
<h2>Features Age Too</h2>
<p>Not only does code age when left untouched and unused, so do the features it represents. If a user requests a button that marks some text as bold and they don’t receive that feature promptly, they’ll find an alternative solution, possibly from a competitor.</p>
<p>The rest of the app can evolve too. If you’re an individual contributor, it’s in your best interest for your code to make it to master. If that doesn’t happen quickly enough, another contributor (or, if the software is extensible like Obsidian or WordPress, a plugin author) might beat you to the punch.</p>
<h2>What Can I Do About It?</h2>
<p>I believe the way I’ve been handling this until now has been entirely wrong. I should be allocating more of my time to training new contributors and fielding PRs from established ones. Open source is a team effort.</p>
<p>As for you, dear reader, that’s for you to figure out. The worst way code can age is if it stays in your head. Don’t let your ideas go to waste. <a href="./never_wait">Open that PR</a>, or remind your reviewer to take another look if you haven’t already.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/front_range.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The Books I Have Read Since April 2025]]></title>
            <link>https://elijahpotter.dev/articles/the_books_i_have_read_since_april_2025</link>
            <guid>https://elijahpotter.dev/articles/the_books_i_have_read_since_april_2025</guid>
            <pubDate>Sun, 22 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Books I Have Read Since April 2025</h1>
<p>It has been a slow couple of months for my reading habit.
Although, now that I've said that out loud, I realize I say that every time.
I never quite read enough books by the time I feel ready to write one of these reading reviews.</p>
<p>In the interest of brevity, let's jump straight in.</p>
<h2><em>How to Stop Time</em> by Matt Haig</h2>
<p>After reading <em>The Midnight Library</em> in <a href="/the_books_i_read_in_january_2025">January</a>, I was thoroughly impressed by Haig's work.
I wanted more, but wasn't sure what to read next.
So, the next time I saw a book with an interesting title which happened to be his, I picked it up.</p>
<p>If <em>The Midnight Library</em> was a literary therapy session, <em>How to Stop Time</em> is a history lesson.
It follows a man who ages at one-tenth the rate of a normal human.
Despite being given the gift of extra life, he regularly falls into bouts of depression when his loved ones die before him.</p>
<p>While Haig's writing continues to be sentimental and appreciative of the small things in life (a trait few seem to possess these days), I ultimately found the book predictable and unengaging.
It was a good book, and I felt satisfied having read it, but I had to put it down for a few days at a time more than once to finish it.
If you're going to read just one of Haig's books, it shouldn't be this one.</p>
<h2><em>Everything is Tuberculosis</em> by John Green</h2>
<p>I've been a fan of John Green's work since I was a kid.
I remember reading <em>Looking for Alaska</em> as I cruised down the Mississippi River on a school trip.
It was one of the most memorable books (and expeditions) I've had the pleasure of experiencing.</p>
<p>Naturally, I was ecstatic to hear John Green published something new: <em>Everything is Tuberculosis</em>, a book about the facets of our lives (of which there are many) that we can thank TB for.</p>
<p>I liked this book, though I don't think it matches the reflective attitude about the human experience you find in his other books—and that's a good thing!
<em>Everything is Tuberculosis</em> is not a book you're supposed to relate to, but a book that tells a story about a life most in the Western world find completely unfamiliar.
One where each and every day is counted.
One where the price of a medication is the most important number.
One where you might not survive.</p>
<p>The fact that I couldn't relate to the events in <em>Everything is Tuberculosis</em> is the exact reason why I'm glad I read it.</p>
<h2><em>How to Win Friends and Influence People</em> by Dale Carnegie</h2>
<p>I read <em>How to Win Friends and Influence People</em> (a mouthful, I know) not because I needed advice on the subject.
I read it because I believe this book to be a telling window into the minds of the businessmen of the early 20th century.</p>
<p>I've been told that this book is still relevant today; that the advice still works.
I disagree.
The priorities, values, and culture are entirely different today than when this book was written.
I found it fascinating to compare and contrast between my own experience and the author's.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Books I Have Read Since April 2025</h1>
<p>It has been a slow couple of months for my reading habit.
Although, now that I've said that out loud, I realize I say that every time.
I never quite read enough books by the time I feel ready to write one of these reading reviews.</p>
<p>In the interest of brevity, let's jump straight in.</p>
<h2><em>How to Stop Time</em> by Matt Haig</h2>
<p>After reading <em>The Midnight Library</em> in <a href="/the_books_i_read_in_january_2025">January</a>, I was thoroughly impressed by Haig's work.
I wanted more, but wasn't sure what to read next.
So, the next time I saw a book with an interesting title which happened to be his, I picked it up.</p>
<p>If <em>The Midnight Library</em> was a literary therapy session, <em>How to Stop Time</em> is a history lesson.
It follows a man who ages at one-tenth the rate of a normal human.
Despite being given the gift of extra life, he regularly falls into bouts of depression when his loved ones die before him.</p>
<p>While Haig's writing continues to be sentimental and appreciative of the small things in life (a trait few seem to possess these days), I ultimately found the book predictable and unengaging.
It was a good book, and I felt satisfied having read it, but I had to put it down for a few days at a time more than once to finish it.
If you're going to read just one of Haig's books, it shouldn't be this one.</p>
<h2><em>Everything is Tuberculosis</em> by John Green</h2>
<p>I've been a fan of John Green's work since I was a kid.
I remember reading <em>Looking for Alaska</em> as I cruised down the Mississippi River on a school trip.
It was one of the most memorable books (and expeditions) I've had the pleasure of experiencing.</p>
<p>Naturally, I was ecstatic to hear John Green published something new: <em>Everything is Tuberculosis</em>, a book about the facets of our lives (of which there are many) that we can thank TB for.</p>
<p>I liked this book, though I don't think it matches the reflective attitude about the human experience you find in his other books—and that's a good thing!
<em>Everything is Tuberculosis</em> is not a book you're supposed to relate to, but a book that tells a story about a life most in the Western world find completely unfamiliar.
One where each and every day is counted.
One where the price of a medication is the most important number.
One where you might not survive.</p>
<p>The fact that I couldn't relate to the events in <em>Everything is Tuberculosis</em> is the exact reason why I'm glad I read it.</p>
<h2><em>How to Win Friends and Influence People</em> by Dale Carnegie</h2>
<p>I read <em>How to Win Friends and Influence People</em> (a mouthful, I know) not because I needed advice on the subject.
I read it because I believe this book to be a telling window into the minds of the businessmen of the early 20th century.</p>
<p>I've been told that this book is still relevant today; that the advice still works.
I disagree.
The priorities, values, and culture are entirely different today than when this book was written.
I found it fascinating to compare and contrast between my own experience and the author's.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper for Firefox]]></title>
            <link>https://elijahpotter.dev/articles/harper_for_firefox</link>
            <guid>https://elijahpotter.dev/articles/harper_for_firefox</guid>
            <pubDate>Fri, 20 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper for Firefox</h1>
<img src="/images/harper_chrome_ext.png" alt="The Harper Firefox Extension">
<p>There's a great deal of overlap between those who use Harper and those who use Firefox.
Foremost, they both value <em>privacy</em>.
Which is why I'm so thrilled to say that Harper has an official plugin for Firefox.
While there has been an unofficial one for <a href="./harper_for_firefox_through_spellbolt">a while now</a>, this is the one that fully deserves the Harper label. It's ours, through and through.</p>
<p>It's still early days, so if you run into problems, feel free to open <a href="https://github.com/Automattic/harper/issues/new/choose">an issue</a>.</p>
<p>If you'd like to install it, pay us a visit on the <a href="https://addons.mozilla.org/en-US/firefox/addon/private-grammar-checker-harper/">Firefox Add-on Store</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper for Firefox</h1>
<img src="/images/harper_chrome_ext.png" alt="The Harper Firefox Extension">
<p>There's a great deal of overlap between those who use Harper and those who use Firefox.
Foremost, they both value <em>privacy</em>.
Which is why I'm so thrilled to say that Harper has an official plugin for Firefox.
While there has been an unofficial one for <a href="./harper_for_firefox_through_spellbolt">a while now</a>, this is the one that fully deserves the Harper label. It's ours, through and through.</p>
<p>It's still early days, so if you run into problems, feel free to open <a href="https://github.com/Automattic/harper/issues/new/choose">an issue</a>.</p>
<p>If you'd like to install it, pay us a visit on the <a href="https://addons.mozilla.org/en-US/firefox/addon/private-grammar-checker-harper/">Firefox Add-on Store</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Refactoring More and Faster]]></title>
            <link>https://elijahpotter.dev/articles/refactoring_more_and_faster</link>
            <guid>https://elijahpotter.dev/articles/refactoring_more_and_faster</guid>
            <pubDate>Tue, 17 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Refactoring More and Faster</h1>
<img src="/images/clouds.webp" alt="Clouds in Colorado">
<p>I've been deep in the refactor rabbit-hole.
You know—that awful (but strangely satisfying) space where the majority of your commits are small pedantic edits that don't change the end-user experience at all?</p>
<p>These past few weeks, I've been working so hard on Harper's Chrome Extension and building systems for transformation-based learning that I haven't had a chance to touch any of the actual grammar rules the software is supposed to fix.
As such, a few sorely needed tools for rule authors have gone unimplemented.
These are tools that—because they're so tied into the core system—need a lot of expertise and fine-tuning to work properly.</p>
<p>If you take a look at the <a href="https://github.com/Automattic/harper/pull/1393">diff</a>, you'll find that there are a lot of small edits.
Since each one takes a minimal amount of cognitive effort, it is easy for me to slide into a trance.
A trance where I am technically moving towards my goal (in this case, creating an expression system that encompasses—but is more powerful than—our existing <code>Pattern</code> system), but I'm not doing so in a way that's truly productive.
I would like to cover some of the individual strategies I've found that have helped me speed up my refactoring process.
After all, if the process of refactoring is the process of paying back tech debt, refactoring is incredibly important to maintaining velocity.</p>
<p>I am doing this for two reasons.
First, because I believe this is valuable information for any developer.
Second, because I want to solidify these ideas in my mind to further improve my refactoring down the line.</p>
<h2>Use Your Tools, but Not Too Much</h2>
<p>In whatever language you're working with, there are likely specialized tools for refactoring.
For Java, I've used <a href="https://www.jetbrains.com/idea/">IntelliJ IDEA</a>.
For Harper, I'm using <a href="https://rust-analyzer.github.io/"><code>rust-analyzer</code></a>.
These provide neat functions for changing identifiers or moving modules, all while updating relevant references.</p>
<p>These tools are imperfect, however, and often fail to update references in parts of code that require a higher level of semantic understanding.</p>
<p>It's a common pattern for authors to create an instance of a class and assign it to a variable with the same name.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Default)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">Foo</span> {
    <span class="hljs-comment">// Implementation details</span>
}

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-comment">// The author of this code named the variable after the class</span>
    <span class="hljs-keyword">let</span> <span class="hljs-variable">foo</span> = Foo::<span class="hljs-title function_ invoke__">default</span>();
}
</code></pre>
<p>In these cases, when someone changes the name of the class, the associated variable name will not get updated.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Default)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">Bar</span> {
    <span class="hljs-comment">// Implementation details</span>
}

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-comment">// Variable name is not updated by our static analysis tool.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-variable">foo</span> = Bar::<span class="hljs-title function_ invoke__">default</span>();
}
</code></pre>
<p>When doing a refactor like this, you need to read <strong>every changed file</strong> to ensure the code still makes as much sense to a human as it does to the compiler.</p>
<h2>Plan It Out, but Not Too Much</h2>
<p>I can't believe I need to say this, but it's chronically under-discussed.
If you're doing a fundamental change to how a system works, it is vital that you read the code first and plan it out.
The plan doesn't need to be specific, nor does it need to be complete.</p>
<p>Having a good idea of what the start and end state should look like will save you hours of time.
While I am a proponent of <a href="https://jimmyhmiller.com/discovery-coding">discovery coding</a> for new features and code, I find that it is lackluster when it comes to major <em>re</em>-writing of code.</p>
<p>To instruct people to plan out their changes may feel juvenile or basic, but I think it's worth mentioning.</p>
<h2>The Joy</h2>
<p>Refactoring helps me write new features, faster.
Refactoring helps me find bugs in existing code.
Refactoring makes it easier for newcomers to join and contribute to a project.</p>
<p>It is one of my favorite pastimes.
I enjoy refactoring, but also want to do it well.
I hope you feel the same.</p>]]></description>
            <content:encoded><![CDATA[<h1>Refactoring More and Faster</h1>
<img src="/images/clouds.webp" alt="Clouds in Colorado">
<p>I've been deep in the refactor rabbit-hole.
You know—that awful (but strangely satisfying) space where the majority of your commits are small pedantic edits that don't change the end-user experience at all?</p>
<p>These past few weeks, I've been working so hard on Harper's Chrome Extension and building systems for transformation-based learning that I haven't had a chance to touch any of the actual grammar rules the software is supposed to fix.
As such, a few sorely needed tools for rule authors have gone unimplemented.
These are tools that—because they're so tied into the core system—need a lot of expertise and fine-tuning to work properly.</p>
<p>If you take a look at the <a href="https://github.com/Automattic/harper/pull/1393">diff</a>, you'll find that there are a lot of small edits.
Since each one takes a minimal amount of cognitive effort, it is easy for me to slide into a trance.
A trance where I am technically moving towards my goal (in this case, creating an expression system that encompasses—but is more powerful than—our existing <code>Pattern</code> system), but I'm not doing so in a way that's truly productive.
I would like to cover some of the individual strategies I've found that have helped me speed up my refactoring process.
After all, if the process of refactoring is the process of paying back tech debt, refactoring is incredibly important to maintaining velocity.</p>
<p>I am doing this for two reasons.
First, because I believe this is valuable information for any developer.
Second, because I want to solidify these ideas in my mind to further improve my refactoring down the line.</p>
<h2>Use Your Tools, but Not Too Much</h2>
<p>In whatever language you're working with, there are likely specialized tools for refactoring.
For Java, I've used <a href="https://www.jetbrains.com/idea/">IntelliJ IDEA</a>.
For Harper, I'm using <a href="https://rust-analyzer.github.io/"><code>rust-analyzer</code></a>.
These provide neat functions for changing identifiers or moving modules, all while updating relevant references.</p>
<p>These tools are imperfect, however, and often fail to update references in parts of code that require a higher level of semantic understanding.</p>
<p>It's a common pattern for authors to create an instance of a class and assign it to a variable with the same name.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Default)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">Foo</span> {
    <span class="hljs-comment">// Implementation details</span>
}

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-comment">// The author of this code named the variable after the class</span>
    <span class="hljs-keyword">let</span> <span class="hljs-variable">foo</span> = Foo::<span class="hljs-title function_ invoke__">default</span>();
}
</code></pre>
<p>In these cases, when someone changes the name of the class, the associated variable name will not get updated.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Default)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">Bar</span> {
    <span class="hljs-comment">// Implementation details</span>
}

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-comment">// Variable name is not updated by our static analysis tool.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-variable">foo</span> = Bar::<span class="hljs-title function_ invoke__">default</span>();
}
</code></pre>
<p>When doing a refactor like this, you need to read <strong>every changed file</strong> to ensure the code still makes as much sense to a human as it does to the compiler.</p>
<h2>Plan It Out, but Not Too Much</h2>
<p>I can't believe I need to say this, but it's chronically under-discussed.
If you're doing a fundamental change to how a system works, it is vital that you read the code first and plan it out.
The plan doesn't need to be specific, nor does it need to be complete.</p>
<p>Having a good idea of what the start and end state should look like will save you hours of time.
While I am a proponent of <a href="https://jimmyhmiller.com/discovery-coding">discovery coding</a> for new features and code, I find that it is lackluster when it comes to major <em>re</em>-writing of code.</p>
<p>Instructing people to plan out their changes may feel juvenile or basic, but I think it's worth mentioning.</p>
<h2>The Joy</h2>
<p>Refactoring helps me write new features faster.
Refactoring helps me find bugs in existing code.
Refactoring makes it easier for newcomers to join and contribute to a project.</p>
<p>It is one of my favorite pastimes.
I enjoy refactoring, but also want to do it well.
I hope you feel the same.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/clouds.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[More Transformation-Based Learning]]></title>
            <link>https://elijahpotter.dev/articles/more_transformation-based_learning</link>
            <guid>https://elijahpotter.dev/articles/more_transformation-based_learning</guid>
            <pubDate>Fri, 13 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Continuations on Transformation-based Learning</h1>
<img src="/images/clear_creek_steps.webp" alt="Steps near Clear Creek. Taken by me.">
<p>The most common type of machine learning out there takes the form of some kind of neural network. Inspired by how our own brains work, these systems act as function approximations. They are great, but they come with a few key pitfalls.</p>
<p>First and foremost, they start out with very little baked-in understanding of the context they live in. This is fine—usually enough data can be provided to bridge the gap. It does mean, however, that they spend an inordinate amount of time learning the fundamentals of their field. This translates to a larger model size and longer inference time (especially at <a href="https://en.wikipedia.org/wiki/Edge_computing">the edge</a>).</p>
<p>Secondly, most neural networks are initialized with randomness, which results in extremely high entropy. High entropy means that these models cannot be compressed easily (if at all).</p>
<p>This disqualifies neural networks from many aspects of <a href="https://writewithharper.com/">Harper's architecture</a>. Harper tries to be fast and small, so it can be shipped and run wherever our users are. Neural networks (especially in the world of natural language processing) are neither fast nor small.</p>
<p>This is why we've taken an alternative approach to machine learning, as evidenced by last week's post on <a href="https://elijahpotter.dev/articles/transformation-based_learning">transformation-based learning</a>.</p>
<h2>Transformation-Based Learning: A Refresher</h2>
<p>Transformation-based learning is remarkably simple. It boils down to just four steps:</p>
<ul>
<li>Use a simple, stochastic model to label your data. This can be as simple as tagging each token (or other discrete component) with that variant’s most common tag. It doesn’t need to be super accurate, just enough to establish a baseline.</li>
<li>Identify the errors between the tags in your canonical data and those produced by your baseline model.</li>
<li>Using a finite list of human-defined templates, generate candidate rules that transform the output of the baseline model into something else. This is where the term “transformation-based” comes from.</li>
<li>Apply each of the candidate rules to the baseline model’s output. Check if the result is more accurate than before. If so, save the rule for future use.</li>
</ul>
<p>These saved candidates become your model.</p>
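<p>As a rough illustration, the four steps above might look something like this in Rust. The tag type and the single "previous tag" rule template are toy stand-ins, not Harper's actual code:</p>

```rust
// A toy illustration of the four TBL steps. `Tag` and the single
// "previous tag" rule template are stand-ins, not Harper's actual code.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Tag {
    Det,
    Noun,
    Verb,
}

/// Candidate rule: if the previous token is tagged `prev`, retag `from` as `to`.
#[derive(Clone, Copy)]
pub struct Rule {
    pub prev: Tag,
    pub from: Tag,
    pub to: Tag,
}

pub fn apply(rule: Rule, tags: &mut [Tag]) {
    for i in 1..tags.len() {
        if tags[i - 1] == rule.prev && tags[i] == rule.from {
            tags[i] = rule.to;
        }
    }
}

fn errors(pred: &[Tag], gold: &[Tag]) -> usize {
    pred.iter().zip(gold).filter(|(p, g)| p != g).count()
}

/// Keep each candidate that strictly lowers the error count against the
/// gold tags; the kept rules, in order, are the model.
pub fn train(mut pred: Vec<Tag>, gold: &[Tag], candidates: &[Rule]) -> Vec<Rule> {
    let mut model = Vec::new();
    for &rule in candidates {
        let mut trial = pred.clone();
        apply(rule, &mut trial);
        if errors(&trial, gold) < errors(&pred, gold) {
            pred = trial;
            model.push(rule);
        }
    }
    model
}
```

<p>A production trainer would rescore the surviving candidates after each accepted rule rather than make one greedy pass, but the shape of the loop is the same.</p>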
<p>If you're interested in how this could be applied to POS tagging, I've since updated my original post on the subject to better explain the process. I'd recommend taking a look.</p>
<h2>Nominal Phrase Chunking</h2>
<p>It's often useful, especially when building a grammar checker, to be able to identify the subjects and objects of sentences. Suppose, for example, that we want to insert the missing Oxford comma in a list of fruits: "I like apples, bananas and oranges". In this trivial example, this can be done with POS tagging. If we have more complex subjects, like in the phrase "I like green apples, deliciously pernicious bananas and fresh oranges," POS tagging starts to fall apart. Identifying multi-token subjects is the job of a nominal phrase chunker.</p>
<p>I've been wanting to build a nominal phrase chunker for a while, but haven't had the tools to do so. Now that I have a pipeline in place (from last week), it should be relatively straightforward.</p>
<p>For the purposes of this model, we'll be tagging each token with a boolean; it is either a member of a noun phrase, or it is not.</p>
<p>I started by assigning each token to a nominal phrase if a POS tagger marks it as a noun. This is our baseline model. It performs poorly because the resulting nominal phrases do not include determiners or any adjectives.</p>
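<p>A minimal sketch of that baseline, assuming a toy tag type rather than Harper's real one:</p>

```rust
// Sketch of the baseline chunker: a token belongs to a nominal phrase only
// if the POS tagger called it a noun. `Upos` here is an illustrative stand-in.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Upos {
    Det,
    Adj,
    Noun,
    Verb,
}

/// One boolean per token: `true` iff the token is tagged as a noun.
pub fn baseline_chunks(tags: &[Upos]) -> Vec<bool> {
    tags.iter().map(|t| *t == Upos::Noun).collect()
}
```

<p>On a phrase like "the green apple", only "apple" gets marked, which is exactly the weakness the learned rules then patch.</p>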
<p>Similar to our POS tagging model, I used a Universal Dependencies treebank to determine the accuracy of our baseline. After generating candidate rules using the same patch templates as the POS tagging system and running them against the treebank, I have a model with 90% accuracy.</p>
<p>I feel as though there should be more details to share, but that was pretty much it. I spent a good amount of time optimizing the training code. There's still a lot of work left to do to incorporate it into the rest of Harper. I am also unsatisfied with the model's current accuracy. To get it closer to 100%, I suspect I'll need to do a good amount of data cleaning.</p>]]></description>
            <content:encoded><![CDATA[<h1>Continuations on Transformation-based Learning</h1>
<img src="/images/clear_creek_steps.webp" alt="Steps near Clear Creek. Taken by me.">
<p>The most common type of machine learning out there takes the form of some kind of neural network. Inspired by how our own brains work, these systems act as function approximations. They are great, but they come with a few key pitfalls.</p>
<p>First and foremost, they start out with very little baked-in understanding of the context they live in. This is fine—usually enough data can be provided to bridge the gap. It does mean, however, that they spend an inordinate amount of time learning the fundamentals of their field. This translates to a larger model size and longer inference time (especially at <a href="https://en.wikipedia.org/wiki/Edge_computing">the edge</a>).</p>
<p>Secondly, most neural networks are initialized with randomness, which results in extremely high entropy. High entropy means that these models cannot be compressed easily (if at all).</p>
<p>This disqualifies neural networks from many aspects of <a href="https://writewithharper.com/">Harper's architecture</a>. Harper tries to be fast and small, so it can be shipped and run wherever our users are. Neural networks (especially in the world of natural language processing) are neither fast nor small.</p>
<p>This is why we've taken an alternative approach to machine learning, as evidenced by last week's post on <a href="https://elijahpotter.dev/articles/transformation-based_learning">transformation-based learning</a>.</p>
<h2>Transformation-Based Learning: A Refresher</h2>
<p>Transformation-based learning is remarkably simple. It boils down to just four steps:</p>
<ul>
<li>Use a simple, stochastic model to label your data. This can be as simple as tagging each token (or other discrete component) with that variant’s most common tag. It doesn’t need to be super accurate, just enough to establish a baseline.</li>
<li>Identify the errors between the tags in your canonical data and those produced by your baseline model.</li>
<li>Using a finite list of human-defined templates, generate candidate rules that transform the output of the baseline model into something else. This is where the term “transformation-based” comes from.</li>
<li>Apply each of the candidate rules to the baseline model’s output. Check if the result is more accurate than before. If so, save the rule for future use.</li>
</ul>
<p>These saved candidates become your model.</p>
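<p>As a rough illustration, the four steps above might look something like this in Rust. The tag type and the single "previous tag" rule template are toy stand-ins, not Harper's actual code:</p>

```rust
// A toy illustration of the four TBL steps. `Tag` and the single
// "previous tag" rule template are stand-ins, not Harper's actual code.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Tag {
    Det,
    Noun,
    Verb,
}

/// Candidate rule: if the previous token is tagged `prev`, retag `from` as `to`.
#[derive(Clone, Copy)]
pub struct Rule {
    pub prev: Tag,
    pub from: Tag,
    pub to: Tag,
}

pub fn apply(rule: Rule, tags: &mut [Tag]) {
    for i in 1..tags.len() {
        if tags[i - 1] == rule.prev && tags[i] == rule.from {
            tags[i] = rule.to;
        }
    }
}

fn errors(pred: &[Tag], gold: &[Tag]) -> usize {
    pred.iter().zip(gold).filter(|(p, g)| p != g).count()
}

/// Keep each candidate that strictly lowers the error count against the
/// gold tags; the kept rules, in order, are the model.
pub fn train(mut pred: Vec<Tag>, gold: &[Tag], candidates: &[Rule]) -> Vec<Rule> {
    let mut model = Vec::new();
    for &rule in candidates {
        let mut trial = pred.clone();
        apply(rule, &mut trial);
        if errors(&trial, gold) < errors(&pred, gold) {
            pred = trial;
            model.push(rule);
        }
    }
    model
}
```

<p>A production trainer would rescore the surviving candidates after each accepted rule rather than make one greedy pass, but the shape of the loop is the same.</p>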
<p>If you're interested in how this could be applied to POS tagging, I've since updated my original post on the subject to better explain the process. I'd recommend taking a look.</p>
<h2>Nominal Phrase Chunking</h2>
<p>It's often useful, especially when building a grammar checker, to be able to identify the subjects and objects of sentences. Suppose, for example, that we want to insert the missing Oxford comma in a list of fruits: "I like apples, bananas and oranges". In this trivial example, this can be done with POS tagging. If we have more complex subjects, like in the phrase "I like green apples, deliciously pernicious bananas and fresh oranges," POS tagging starts to fall apart. Identifying multi-token subjects is the job of a nominal phrase chunker.</p>
<p>I've been wanting to build a nominal phrase chunker for a while, but haven't had the tools to do so. Now that I have a pipeline in place (from last week), it should be relatively straightforward.</p>
<p>For the purposes of this model, we'll be tagging each token with a boolean; it is either a member of a noun phrase, or it is not.</p>
<p>I started by assigning each token to a nominal phrase if a POS tagger marks it as a noun. This is our baseline model. It performs poorly because the resulting nominal phrases do not include determiners or any adjectives.</p>
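<p>A minimal sketch of that baseline, assuming a toy tag type rather than Harper's real one:</p>

```rust
// Sketch of the baseline chunker: a token belongs to a nominal phrase only
// if the POS tagger called it a noun. `Upos` here is an illustrative stand-in.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Upos {
    Det,
    Adj,
    Noun,
    Verb,
}

/// One boolean per token: `true` iff the token is tagged as a noun.
pub fn baseline_chunks(tags: &[Upos]) -> Vec<bool> {
    tags.iter().map(|t| *t == Upos::Noun).collect()
}
```

<p>On a phrase like "the green apple", only "apple" gets marked, which is exactly the weakness the learned rules then patch.</p>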
<p>Similar to our POS tagging model, I used a Universal Dependencies treebank to determine the accuracy of our baseline. After generating candidate rules using the same patch templates as the POS tagging system and running them against the treebank, I have a model with 90% accuracy.</p>
<p>I feel as though there should be more details to share, but that was pretty much it. I spent a good amount of time optimizing the training code. There's still a lot of work left to do to incorporate it into the rest of Harper. I am also unsatisfied with the model's current accuracy. To get it closer to 100%, I suspect I'll need to do a good amount of data cleaning.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/clear_creek_steps.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Transformation-Based Learning]]></title>
            <link>https://elijahpotter.dev/articles/transformation-based_learning</link>
            <guid>https://elijahpotter.dev/articles/transformation-based_learning</guid>
            <pubDate>Tue, 03 Jun 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Transformation-based Learning for POS Tagging</h1>
<img src="/images/ctlm.webp" alt="CTLM">
<p><a href="https://writewithharper.com/">Harper</a> is currently undergoing some pretty radical changes when it comes to its language analysis. These improvements will improve the output of our existing rule engine, in addition to making entirely new corrections possible. This post will cover our existing NLP pipeline, the recent changes and improvements to our machine learning approach, and what will come next.</p>
<p>While AI is a common topic of discussion online, I don’t hear much about actual machine learning. In that light, I hope this post piques someone’s interest.</p>
<h2>What is POS Tagging?</h2>
<p>POS (Part-of-speech) tagging is the first step of most NLP (Natural Language Processing) pipelines. For any grammar checker worth its salt, POS tagging is essential. Apart from the basic corrections you’re capable of doing with simple string manipulation, most grammar checking directly or indirectly depends on POS tagging. High-quality tagging results in high-quality suggestions.</p>
<p>What is POS tagging? It is the process of identifying which possible definition of a word is being used, based on the surrounding context. For those unfamiliar with the territory, I’m certain an example is the best way to explain.</p>
<blockquote>
<p>“I am going to go tan in the sun.”</p>
</blockquote>
<p>Here we have a simple English sentence. In this case, it is clear the word “tan” is being used as a verb. The linguists in the audience would point out that it is specifically in the first-person future tense. Consider this similar sentence:</p>
<blockquote>
<p>“I am already very tan, so I will stay inside.”</p>
</blockquote>
<p>In this sentence, the word “tan” is being used as an adjective. How can we tell?</p>
<p>As intelligent humans, some of whom have been speaking English their entire lives, it is easy for us to determine which words are serving which roles. It’s not as easy for a computer to do the same. From an algorithmic standpoint, there are a number of ways to go about it, each with differing levels of “machine learning” required.</p>
<p>Before this week, Harper primarily took a dictionary-based approach. In short: we ship a “dictionary” of English words to the user’s machine and use hash table lookups to determine the possible roles each word could assume. The authors of our rule engine could then use rudimentary deductive reasoning to narrow the possibilities down. This strategy is remarkably effective, and it has scaled to tens of thousands of users with surprisingly few hiccups.</p>
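<p>In sketch form, that lookup is just a hash map from words to candidate roles. The tiny two-word dictionary below is illustrative, not Harper's data:</p>

```rust
use std::collections::HashMap;

// Sketch of the dictionary-based approach: a hash table from each word to
// the roles it can assume. This two-word dictionary is a toy.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Role {
    Noun,
    Verb,
    Adjective,
}

pub fn dictionary() -> HashMap<&'static str, Vec<Role>> {
    HashMap::from([
        ("tan", vec![Role::Verb, Role::Adjective, Role::Noun]),
        ("sun", vec![Role::Noun]),
    ])
}

/// Every role a word could play; rule authors narrow these down from context.
pub fn possible_roles<'a>(dict: &'a HashMap<&'static str, Vec<Role>>, word: &str) -> &'a [Role] {
    dict.get(word).map(Vec::as_slice).unwrap_or(&[])
}
```
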
<p>That said, there are edge-cases and systems (which I’ll cover next week when I discuss chunking) which require extreme specificity from POS tags. My mission: improve our POS tagging to increase the confidence of Harper’s output and open the door for more advanced algorithms.</p>
<h2>Why Transformation-based Learning?</h2>
<p>The literature highlights four underlying machine learning model strategies that seem to work well for POS tagging.</p>
<ul>
<li>Hidden Markov Models (probabilistic models that predate the modern deep learning era)</li>
<li>Maximum Entropy Models (statistical models closely related to logistic regression)</li>
<li>Transformer-based Models (which use deep neural networks)</li>
<li>Transformation-based Rule Models (which are based on learned rules)</li>
</ul>
<p>While I heavily considered the statistical and neural approaches (an HMM, an MEM, or a transformer-based model), I chose transformation-based rule models for three reasons.</p>
<ul>
<li>TRMs are typically more accurate (barely; measured in basis points).</li>
<li>TRMs are more amenable to fine-tuning.</li>
<li>TRMs are exceptionally low-latency and can be compressed quite small.</li>
</ul>
<p>Transformation-based learning is remarkably simple. It boils down to just four steps:</p>
<ul>
<li>Use a simple, stochastic model to label your data. This can be as simple as tagging each token (or other discrete component) with that variant’s most common tag. It doesn’t need to be super accurate, just enough to establish a baseline.</li>
<li>Identify the errors between the tags in your canonical data and those produced by your baseline model.</li>
<li>Using a finite list of human-defined templates, generate candidate rules that transform the output of the baseline model into something else. This is where the term “transformation-based” comes from.</li>
<li>Apply each of the candidate rules to the baseline model’s output. Check if the result is <em>more</em> accurate than before. If so, save the rule for future use.</li>
</ul>
<p>These saved candidates become your model.</p>
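<p>Before walking through the concrete system, here is a rough Rust illustration of that training loop. The tag type and the single "previous tag" rule template are toy stand-ins, not Harper's actual code:</p>

```rust
// A toy illustration of the four TBL steps. `Tag` and the single
// "previous tag" rule template are stand-ins, not Harper's actual code.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Tag {
    Det,
    Noun,
    Verb,
}

/// Candidate rule: if the previous token is tagged `prev`, retag `from` as `to`.
#[derive(Clone, Copy)]
pub struct Rule {
    pub prev: Tag,
    pub from: Tag,
    pub to: Tag,
}

pub fn apply(rule: Rule, tags: &mut [Tag]) {
    for i in 1..tags.len() {
        if tags[i - 1] == rule.prev && tags[i] == rule.from {
            tags[i] = rule.to;
        }
    }
}

fn errors(pred: &[Tag], gold: &[Tag]) -> usize {
    pred.iter().zip(gold).filter(|(p, g)| p != g).count()
}

/// Keep each candidate that strictly lowers the error count against the
/// gold tags; the kept rules, in order, are the model.
pub fn train(mut pred: Vec<Tag>, gold: &[Tag], candidates: &[Rule]) -> Vec<Rule> {
    let mut model = Vec::new();
    for &rule in candidates {
        let mut trial = pred.clone();
        apply(rule, &mut trial);
        if errors(&trial, gold) < errors(&pred, gold) {
            pred = trial;
            model.push(rule);
        }
    }
    model
}
```

<p>A production trainer would rescore the surviving candidates after each accepted rule rather than make one greedy pass, but the shape of the loop is the same.</p>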
<h2>POS-Tagging using Transformation-based Learning</h2>
<p>Let’s apply these steps to build a POS-tagging system.</p>
<p>For our baseline model, we will just assign each word in our dataset the most common POS tag associated with that word. If the word is “tan”, we’ll assign its most common POS tag (verb). It will often be incorrect, but those cases will be handled by our rules.</p>
<p>To identify the baseline model’s errors, we’ll use an off-the-shelf treebank from the Universal Dependencies project.</p>
<p>Our rule templates will take the form of these <code>PatchCriteria</code>. By initializing our candidates with any one of these enum variants and initializing the child variables to random values, we can cover a good number of cases.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Debug, Clone, Serialize, Deserialize, Hash, PartialEq, Eq)]</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">enum</span> <span class="hljs-title class_">PatchCriteria</span> {
    WordIsTaggedWith {
        <span class="hljs-comment">/// Which token to inspect.</span>
        relative: <span class="hljs-type">isize</span>,
        is_tagged: UPOS,
    },
    AnyWordIsTaggedWith {
        <span class="hljs-comment">/// The farthest relative index to look</span>
        max_relative: <span class="hljs-type">isize</span>,
        is_tagged: UPOS,
    },
    SandwichTaggedWith {
        prev_word_tagged: UPOS,
        post_word_tagged: UPOS,
    },
    WordIs {
        relative: <span class="hljs-type">isize</span>,
        word: <span class="hljs-type">String</span>,
    },
    <span class="hljs-comment">/// Not applicable to the Brill Tagger, only the chunker</span>
    NounPhraseAt {
        is_np: <span class="hljs-type">bool</span>,
        relative: <span class="hljs-type">isize</span>,
    },
    Combined {
        a: <span class="hljs-type">Box</span>&#x3C;PatchCriteria>,
        b: <span class="hljs-type">Box</span>&#x3C;PatchCriteria>,
    },
}
</code></pre>
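<p>To make the templates concrete, here is a hypothetical checker for a small subset of these variants; Harper's real evaluator differs:</p>

```rust
// A hypothetical evaluator for a small subset of the criteria above;
// Harper's real matching code differs. `relative` offsets are resolved
// against the token under inspection.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Upos {
    Verb,
    Pron,
}

pub struct Token {
    pub text: &'static str,
    pub tag: Upos,
}

pub enum Criteria {
    WordIs { relative: isize, word: &'static str },
    WordIsTaggedWith { relative: isize, is_tagged: Upos },
    Combined { a: Box<Criteria>, b: Box<Criteria> },
}

pub fn matches(c: &Criteria, tokens: &[Token], at: usize) -> bool {
    // Turn a relative offset into an absolute index, failing out of bounds.
    let idx = |rel: isize| {
        usize::try_from(at as isize + rel)
            .ok()
            .filter(|i| *i < tokens.len())
    };
    match c {
        Criteria::WordIs { relative, word } => {
            idx(*relative).map_or(false, |i| tokens[i].text == *word)
        }
        Criteria::WordIsTaggedWith { relative, is_tagged } => {
            idx(*relative).map_or(false, |i| tokens[i].tag == *is_tagged)
        }
        Criteria::Combined { a, b } => matches(a, tokens, at) && matches(b, tokens, at),
    }
}
```

<p>The first learned candidate shown below, which retags “that” after a verb, is exactly a <code>Combined</code> check of a <code>WordIs</code> and a <code>WordIsTaggedWith</code> criterion.</p>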
<p>Finally, we’ll apply each of the hundreds of thousands of candidates to our treebank to see if the results of their transformations have a lower error rate than the baseline.</p>
<p>Here are a couple of the candidates we found:</p>
<pre><code class="hljs language-json">[
  {
    "from": "PRON",
    "to": "SCONJ",
    "criteria": {
      "Combined": {
        "a": {
          "WordIs": {
            "relative": 0,
            "word": "that"
          }
        },
        "b": {
          "WordIsTaggedWith": {
            "relative": -1,
            "is_tagged": "VERB"
          }
        }
      }
    }
  },
  {
    "from": "PART",
    "to": "ADP",
    "criteria": {
      "Combined": {
        "a": {
          "WordIs": {
            "relative": 1,
            "word": "there"
          }
        },
        "b": {
          "AnyWordIsTaggedWith": {
            "max_relative": -4,
            "is_tagged": "NOUN"
          }
        }
      }
    }
  }
]
</code></pre>
<p>That’s the whole process! With it, I was able to bring our previous accuracy all the way up to 95% (from 40%) without a meaningful change in linting latency or compiled binary size.</p>]]></description>
            <content:encoded><![CDATA[<h1>Transformation-based Learning for POS Tagging</h1>
<img src="/images/ctlm.webp" alt="CTLM">
<p><a href="https://writewithharper.com/">Harper</a> is currently undergoing some pretty radical changes when it comes to its language analysis. These improvements will improve the output of our existing rule engine, in addition to making entirely new corrections possible. This post will cover our existing NLP pipeline, the recent changes and improvements to our machine learning approach, and what will come next.</p>
<p>While AI is a common topic of discussion online, I don’t hear much about actual machine learning. In that light, I hope this post piques someone’s interest.</p>
<h2>What is POS Tagging?</h2>
<p>POS (Part-of-speech) tagging is the first step of most NLP (Natural Language Processing) pipelines. For any grammar checker worth its salt, POS tagging is essential. Apart from the basic corrections you’re capable of doing with simple string manipulation, most grammar checking directly or indirectly depends on POS tagging. High-quality tagging results in high-quality suggestions.</p>
<p>What is POS tagging? It is the process of identifying which possible definition of a word is being used, based on the surrounding context. For those unfamiliar with the territory, I’m certain an example is the best way to explain.</p>
<blockquote>
<p>“I am going to go tan in the sun.”</p>
</blockquote>
<p>Here we have a simple English sentence. In this case, it is clear the word “tan” is being used as a verb. The linguists in the audience would point out that it is specifically in the first-person future tense. Consider this similar sentence:</p>
<blockquote>
<p>“I am already very tan, so I will stay inside.”</p>
</blockquote>
<p>In this sentence, the word “tan” is being used as an adjective. How can we tell?</p>
<p>As intelligent humans, some of whom have been speaking English their entire lives, it is easy for us to determine which words are serving which roles. It’s not as easy for a computer to do the same. From an algorithmic standpoint, there are a number of ways to go about it, each with differing levels of “machine learning” required.</p>
<p>Before this week, Harper primarily took a dictionary-based approach. In short: we ship a “dictionary” of English words to the user’s machine and use hash table lookups to determine the possible roles each word could assume. The authors of our rule engine could then use rudimentary deductive reasoning to narrow the possibilities down. This strategy is remarkably effective, and it has scaled to tens of thousands of users with surprisingly few hiccups.</p>
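<p>In sketch form, that lookup is just a hash map from words to candidate roles. The tiny two-word dictionary below is illustrative, not Harper's data:</p>

```rust
use std::collections::HashMap;

// Sketch of the dictionary-based approach: a hash table from each word to
// the roles it can assume. This two-word dictionary is a toy.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Role {
    Noun,
    Verb,
    Adjective,
}

pub fn dictionary() -> HashMap<&'static str, Vec<Role>> {
    HashMap::from([
        ("tan", vec![Role::Verb, Role::Adjective, Role::Noun]),
        ("sun", vec![Role::Noun]),
    ])
}

/// Every role a word could play; rule authors narrow these down from context.
pub fn possible_roles<'a>(dict: &'a HashMap<&'static str, Vec<Role>>, word: &str) -> &'a [Role] {
    dict.get(word).map(Vec::as_slice).unwrap_or(&[])
}
```
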
<p>That said, there are edge-cases and systems (which I’ll cover next week when I discuss chunking) which require extreme specificity from POS tags. My mission: improve our POS tagging to increase the confidence of Harper’s output and open the door for more advanced algorithms.</p>
<h2>Why Transformation-based Learning?</h2>
<p>The literature highlights four underlying machine learning model strategies that seem to work well for POS tagging.</p>
<ul>
<li>Hidden Markov Models (probabilistic models that predate the modern deep learning era)</li>
<li>Maximum Entropy Models (statistical models closely related to logistic regression)</li>
<li>Transformer-based Models (which use deep neural networks)</li>
<li>Transformation-based Rule Models (which are based on learned rules)</li>
</ul>
<p>While I heavily considered the statistical and neural approaches (an HMM, an MEM, or a transformer-based model), I chose transformation-based rule models for three reasons.</p>
<ul>
<li>TRMs are typically more accurate (barely; measured in basis points).</li>
<li>TRMs are more amenable to fine-tuning.</li>
<li>TRMs are exceptionally low-latency and can be compressed quite small.</li>
</ul>
<p>Transformation-based learning is remarkably simple. It boils down to just four steps:</p>
<ul>
<li>Use a simple, stochastic model to label your data. This can be as simple as tagging each token (or other discrete component) with that variant’s most common tag. It doesn’t need to be super accurate, just enough to establish a baseline.</li>
<li>Identify the errors between the tags in your canonical data and those produced by your baseline model.</li>
<li>Using a finite list of human-defined templates, generate candidate rules that transform the output of the baseline model into something else. This is where the term “transformation-based” comes from.</li>
<li>Apply each of the candidate rules to the baseline model’s output. Check if the result is <em>more</em> accurate than before. If so, save the rule for future use.</li>
</ul>
<p>These saved candidates become your model.</p>
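<p>Before walking through the concrete system, here is a rough Rust illustration of that training loop. The tag type and the single "previous tag" rule template are toy stand-ins, not Harper's actual code:</p>

```rust
// A toy illustration of the four TBL steps. `Tag` and the single
// "previous tag" rule template are stand-ins, not Harper's actual code.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Tag {
    Det,
    Noun,
    Verb,
}

/// Candidate rule: if the previous token is tagged `prev`, retag `from` as `to`.
#[derive(Clone, Copy)]
pub struct Rule {
    pub prev: Tag,
    pub from: Tag,
    pub to: Tag,
}

pub fn apply(rule: Rule, tags: &mut [Tag]) {
    for i in 1..tags.len() {
        if tags[i - 1] == rule.prev && tags[i] == rule.from {
            tags[i] = rule.to;
        }
    }
}

fn errors(pred: &[Tag], gold: &[Tag]) -> usize {
    pred.iter().zip(gold).filter(|(p, g)| p != g).count()
}

/// Keep each candidate that strictly lowers the error count against the
/// gold tags; the kept rules, in order, are the model.
pub fn train(mut pred: Vec<Tag>, gold: &[Tag], candidates: &[Rule]) -> Vec<Rule> {
    let mut model = Vec::new();
    for &rule in candidates {
        let mut trial = pred.clone();
        apply(rule, &mut trial);
        if errors(&trial, gold) < errors(&pred, gold) {
            pred = trial;
            model.push(rule);
        }
    }
    model
}
```

<p>A production trainer would rescore the surviving candidates after each accepted rule rather than make one greedy pass, but the shape of the loop is the same.</p>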
<h2>POS-Tagging using Transformation-based Learning</h2>
<p>Let’s apply these steps to build a POS-tagging system.</p>
<p>For our baseline model, we will just assign each word in our dataset the most common POS tag associated with that word. If the word is “tan”, we’ll assign its most common POS tag (verb). It will often be incorrect, but those cases will be handled by our rules.</p>
<p>To identify the baseline model’s errors, we’ll use an off-the-shelf treebank from the Universal Dependencies project.</p>
<p>Our rule templates will take the form of these <code>PatchCriteria</code>. By initializing our candidates with any one of these enum variants and initializing the child variables to random values, we can cover a good number of cases.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[derive(Debug, Clone, Serialize, Deserialize, Hash, PartialEq, Eq)]</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">enum</span> <span class="hljs-title class_">PatchCriteria</span> {
    WordIsTaggedWith {
        <span class="hljs-comment">/// Which token to inspect.</span>
        relative: <span class="hljs-type">isize</span>,
        is_tagged: UPOS,
    },
    AnyWordIsTaggedWith {
        <span class="hljs-comment">/// The farthest relative index to look</span>
        max_relative: <span class="hljs-type">isize</span>,
        is_tagged: UPOS,
    },
    SandwichTaggedWith {
        prev_word_tagged: UPOS,
        post_word_tagged: UPOS,
    },
    WordIs {
        relative: <span class="hljs-type">isize</span>,
        word: <span class="hljs-type">String</span>,
    },
    <span class="hljs-comment">/// Not applicable to the Brill Tagger, only the chunker</span>
    NounPhraseAt {
        is_np: <span class="hljs-type">bool</span>,
        relative: <span class="hljs-type">isize</span>,
    },
    Combined {
        a: <span class="hljs-type">Box</span>&#x3C;PatchCriteria>,
        b: <span class="hljs-type">Box</span>&#x3C;PatchCriteria>,
    },
}
</code></pre>
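<p>To make the templates concrete, here is a hypothetical checker for a small subset of these variants; Harper's real evaluator differs:</p>

```rust
// A hypothetical evaluator for a small subset of the criteria above;
// Harper's real matching code differs. `relative` offsets are resolved
// against the token under inspection.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum Upos {
    Verb,
    Pron,
}

pub struct Token {
    pub text: &'static str,
    pub tag: Upos,
}

pub enum Criteria {
    WordIs { relative: isize, word: &'static str },
    WordIsTaggedWith { relative: isize, is_tagged: Upos },
    Combined { a: Box<Criteria>, b: Box<Criteria> },
}

pub fn matches(c: &Criteria, tokens: &[Token], at: usize) -> bool {
    // Turn a relative offset into an absolute index, failing out of bounds.
    let idx = |rel: isize| {
        usize::try_from(at as isize + rel)
            .ok()
            .filter(|i| *i < tokens.len())
    };
    match c {
        Criteria::WordIs { relative, word } => {
            idx(*relative).map_or(false, |i| tokens[i].text == *word)
        }
        Criteria::WordIsTaggedWith { relative, is_tagged } => {
            idx(*relative).map_or(false, |i| tokens[i].tag == *is_tagged)
        }
        Criteria::Combined { a, b } => matches(a, tokens, at) && matches(b, tokens, at),
    }
}
```

<p>The first learned candidate shown below, which retags “that” after a verb, is exactly a <code>Combined</code> check of a <code>WordIs</code> and a <code>WordIsTaggedWith</code> criterion.</p>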
<p>Finally, we’ll apply each of the hundreds of thousands of candidates to our treebank to see if the results of their transformations have a lower error rate than the baseline.</p>
<p>Here are a couple of the candidates we found:</p>
<pre><code class="hljs language-json">[
  {
    "from": "PRON",
    "to": "SCONJ",
    "criteria": {
      "Combined": {
        "a": {
          "WordIs": {
            "relative": 0,
            "word": "that"
          }
        },
        "b": {
          "WordIsTaggedWith": {
            "relative": -1,
            "is_tagged": "VERB"
          }
        }
      }
    }
  },
  {
    "from": "PART",
    "to": "ADP",
    "criteria": {
      "Combined": {
        "a": {
          "WordIs": {
            "relative": 1,
            "word": "there"
          }
        },
        "b": {
          "AnyWordIsTaggedWith": {
            "max_relative": -4,
            "is_tagged": "NOUN"
          }
        }
      }
    }
  }
]
</code></pre>
<p>That’s the whole process! With it, I was able to bring our previous accuracy all the way up to 95% (from 40%) without a meaningful change in linting latency or compiled binary size.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/ctlm.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Harper for Firefox Through Spellbolt]]></title>
            <link>https://elijahpotter.dev/articles/harper_for_firefox_through_spellbolt</link>
            <guid>https://elijahpotter.dev/articles/harper_for_firefox_through_spellbolt</guid>
            <pubDate>Mon, 19 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper in Firefox Through SpellBolt</h1>
<img src="/images/ford_street.webp" alt="Ford Street, Golden, Colorado">
<p>We designed Harper to be the ultimately portable grammar checker, but we're still working on living up to that promise.
As the Harper Chrome extension becomes more capable day by day, it has one critical weakness: it doesn't support Firefox.
Since Chrome has a much larger market share, Firefox support isn't a priority at the moment (although we accept contributions).</p>
<p>This is a wonderful example of where the open-source nature of Harper's core engine really shines.
Someone has taken <a href="https://www.npmjs.com/package/harper.js"><code>harper.js</code></a> and put it into their own extension.
<a href="https://addons.mozilla.org/en-US/firefox/addon/spellbolt/">SpellBolt</a> is a Firefox extension that delivers Harper's fantastic grammar checking straight into your Firefox browser.
My initial testing shows that it's a solid extension.</p>
<p>I am delighted to see people making use of the pluggable architecture Harper has adopted, and I can't wait to see what SpellBolt does next.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper in Firefox Through SpellBolt</h1>
<img src="/images/ford_street.webp" alt="Ford Street, Golden, Colorado">
<p>We designed Harper to be the ultimately portable grammar checker, but we're still working on living up to that promise.
As the Harper Chrome extension becomes more capable day by day, it has one critical weakness: it doesn't support Firefox.
Since Chrome has a much larger market share, Firefox support isn't a priority at the moment (although we accept contributions).</p>
<p>This is a wonderful example of where the open-source nature of Harper's core engine really shines.
Someone has taken <a href="https://www.npmjs.com/package/harper.js"><code>harper.js</code></a> and put it into their own extension.
<a href="https://addons.mozilla.org/en-US/firefox/addon/spellbolt/">SpellBolt</a> is a Firefox extension that delivers Harper's fantastic grammar checking straight into your Firefox browser.
My initial testing shows that it's a solid extension.</p>
<p>I am delighted to see people making use of the pluggable architecture Harper has adopted, and I can't wait to see what SpellBolt does next.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/ford_street.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Quality Is the Most Important Metric]]></title>
            <link>https://elijahpotter.dev/articles/quality_is_the_most_important_metric</link>
            <guid>https://elijahpotter.dev/articles/quality_is_the_most_important_metric</guid>
            <pubDate>Mon, 19 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Quality Is the Most Important Metric</h1>
<p>Harper's Chrome extension continues to come along beautifully. I'm actively working to make it more useful and responsive, slowly crushing bugs that I hear about from our users and contributors. I will not go over them here, since our <a href="https://github.com/Automattic/harper/pulls?q=sort%3Aupdated-desc+is%3Apr+is%3Aclosed">closed pull requests</a> should speak for themselves. I would, however, like to rapid-fire some of the small improvements and ideas about Quality that I've been brewing up this week.</p>
<img src="/images/guggenheim_2.png" alt="Living in a college town is strange. The whole place gets empty in the summer.">
<h2>Quality</h2>
<p>I first heard about big-Q Quality from <em>Zen and the Art of Motorcycle Maintenance</em> by Robert Pirsig. The book's length alone speaks to the complexity of its main subject: the idea of Quality. It's a great read, and I'd highly recommend it.</p>
<p>One important point the author makes is that Quality is inherently a human idea. It is hard, if not impossible, to define algorithmically or on paper. When one interacts with an object or a system, they come away with a sense of its Quality, without necessarily knowing exactly what led to that sense.</p>
<p>Some argue that in today's world of LLMs and fast iteration, the real value of a good software engineer derives from their sense of Quality. We know whether a design decision or change to the code is worth exploring. We know how it will affect users. I don't think this argument is relevant to the LLM discussion at all, but I do agree with it in spirit.</p>
<h2>The Most Important UI Element</h2>
<p>Harper's Chrome extension has one critical UI element that users should be interacting with more than any other: the suggestion box.</p>
<table>
<thead>
<tr>
<th align="left">Before</th>
<th align="right">After</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><img src="/images/before_suggestion_box.png" alt="Before the changes to the suggestion box"></td>
<td align="right"><img src="/images/after_suggestion_box.png" alt="After the changes to the suggestion box"></td>
</tr>
</tbody>
</table>
<p>When Harper locates a problem in a user's text, it underlines it and waits for them to notice. When they do, they have the option to click it and review Harper's suggestion. This is a flow typical of most spell and grammar-checking programs, so you're likely familiar with it. This process makes the contents and behavior of the suggestion box extremely important for user satisfaction.</p>
<p>After receiving feedback that the suggestion box was visually unpleasant and hard to understand intuitively, I've started making some modifications. Nothing drastic, since I don't want to confuse existing users. I've focused on making it more compact and on using more visual storytelling. Now, when a spelling mistake should be corrected, the box shows an icon rather than a full "Add to dictionary" label. I'm already getting positive feedback about the changes.</p>
<p>All in all, much of the design philosophy of the UI is being pushed further in the direction of Harper's core tenet: to get out of the way.</p>
<h2>Why Quality Is Important</h2>
<img src="/images/chrome_web_store_front_page.png" alt="The Chrome Web Store Front Page">
<p>Quality is important for the Chrome extension (more so than other integrations) because of the Chrome Web Store's front page.</p>
<p>Extensions that get featured on the front page get a significant amount of free advertising. Millions of users view this page monthly. Getting on the page all but guarantees tens of thousands of new users, if not more.</p>
<p>Here's the rub: you can't get on the front page by paying for it or by chance. An extension is placed on the front page if and only if it meets a high bar of value and quality. In other words: you have to earn it. This is the essential reason why I've been working so hard on Quality these past few weeks: I'm trying to get on the front page.</p>
<p>Most of the steps I've taken in this direction are small enough to not be worth mentioning here. If you're curious, feel free to reach out!</p>]]></description>
            <content:encoded><![CDATA[<h1>Quality Is the Most Important Metric</h1>
<p>Harper's Chrome extension continues to come along beautifully. I'm actively working to make it more useful and responsive, slowly crushing bugs that I hear about from our users and contributors. I will not go over them here, since our <a href="https://github.com/Automattic/harper/pulls?q=sort%3Aupdated-desc+is%3Apr+is%3Aclosed">closed pull requests</a> should speak for themselves. I would, however, like to rapid-fire some of the small improvements and ideas about Quality that I've been brewing up this week.</p>
<img src="/images/guggenheim_2.png" alt="Living in a college town is strange. The whole place gets empty in the summer.">
<h2>Quality</h2>
<p>I first heard about big-Q Quality from <em>Zen and the Art of Motorcycle Maintenance</em> by Robert Pirsig. The book, whose main subject is the idea of Quality, is long; that length speaks to the complexity of the topic. It's a great read, and I'd highly recommend it.</p>
<p>One important point the author makes is that Quality is inherently a human idea. It is hard, if not impossible, to define algorithmically or on paper. When one interacts with an object or a system, they come away with a sense of its Quality, without necessarily knowing exactly what led to that sense.</p>
<p>Some argue that in today's world of LLMs and fast iteration, the real value of a good software engineer derives from their sense of Quality. We know whether a design decision or change to the code is worth exploring. We know how it will affect users. I don't think this argument is relevant to the LLM discussion at all, but I do agree with it in spirit.</p>
<h2>The Most Important UI Element</h2>
<p>Harper's Chrome extension has one critical UI element that users should be interacting with more than any other: the suggestion box.</p>
<table>
<thead>
<tr>
<th align="left">Before</th>
<th align="right">After</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left"><img src="/images/before_suggestion_box.png" alt="Before the changes to the suggestion box"></td>
<td align="right"><img src="/images/after_suggestion_box.png" alt="After the changes to the suggestion box"></td>
</tr>
</tbody>
</table>
<p>When Harper locates a problem in a user's text, it underlines it and waits for them to notice. When they do, they have the option to click it and review Harper's suggestion. This is a flow typical of most spell and grammar-checking programs, so you're likely familiar with it. This process makes the contents and behavior of the suggestion box extremely important for user satisfaction.</p>
<p>After receiving feedback that the suggestion box was visually unpleasant and hard to understand intuitively, I've started making some modifications. Nothing drastic, since I don't want to confuse existing users. I've focused on making it more compact and on using more visual storytelling. Now, when a spelling mistake should be corrected, the box shows an icon rather than a full "Add to dictionary" label. I'm already getting positive feedback about the changes.</p>
<p>All in all, much of the design philosophy of the UI is being pushed further in the direction of Harper's core tenet: to get out of the way.</p>
<h2>Why Quality Is Important</h2>
<img src="/images/chrome_web_store_front_page.png" alt="The Chrome Web Store Front Page">
<p>Quality is important for the Chrome extension (more so than other integrations) because of the Chrome Web Store's front page.</p>
<p>Extensions that get featured on the front page get a significant amount of free advertising. Millions of users view this page monthly. Getting on the page all but guarantees tens of thousands of new users, if not more.</p>
<p>Here's the rub: you can't get on the front page by paying for it or by chance. An extension is placed on the front page if and only if it meets a high bar of value and quality. In other words: you have to earn it. This is the essential reason why I've been working so hard on Quality these past few weeks: I'm trying to get on the front page.</p>
<p>Most of the steps I've taken in this direction are small enough to not be worth mentioning here. If you're curious, feel free to reach out!</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/guggenheim_2.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Integration Testing Thousands of Sites with Playwright]]></title>
            <link>https://elijahpotter.dev/articles/integration_testing_thousands_of_sites_with_playwright</link>
            <guid>https://elijahpotter.dev/articles/integration_testing_thousands_of_sites_with_playwright</guid>
            <pubDate>Fri, 16 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Integration Testing Thousands of Websites with Playwright</h1>
<p>As I've recounted in previous posts, one of the hardest problems <a href="https://github.com/automattic/harper">Harper faces</a> is the great diversity of the internet. There is a great variety of text editors on the web, each used in a different context. That's a beautiful thing, but unfortunately our users expect Harper to work in all of these places seamlessly. I suppose you can view this post as a third part in this series where I talk about supporting thousands of websites for potentially millions of users (with zero server costs, I might add).</p>
<p>This journey started in Discord (as many do), with a user reporting an issue. I'm quite fortunate: they were technical. Their report was detailed and included some initial speculation on what the root cause could be.</p>
<img src="/images/why_so_serious_screenshot.png" alt="A small part of a larger conversation about the problem at hand.">
<p>While the actual underlying problem was complex and difficult to fix, that is not what this post is about. It is about Harper's strategy for doing end-to-end testing on the many sites we support.</p>
<p>It's relevant because this was a problem that could have been discovered through end-to-end testing. Since we have a diverse set of users already, they found the issue swiftly. That is far from ideal. Our testing suite should catch these problems <em>before the PR is merged.</em></p>
<h2>Why Playwright?</h2>
<p>"<a href="https://playwright.dev/">Playwright</a> enables reliable end-to-end testing for modern web apps." At least—that's what their site claims Playwright can do. I'm not sure if it lives up to this claim of reliability, at least not yet.</p>
<p>The decision to use Playwright over alternative choices came down to a few key points:</p>
<ul>
<li>It's quite polished and well supported (Microsoft seems to be the main player).</li>
<li>While it is more complex to load a Chrome extension in Playwright than Puppeteer, I am also given a lot more control.</li>
<li>We're already using it for other integrations (but not for end-to-end tests).</li>
</ul>
<h2>The Game Plan</h2>
<p>My goal is to build up a test suite comprehensive enough that I can catch integration problems in foreign text editors before I merge PRs for logic that interacts with them.</p>
<p>Step one was to get Playwright installed and running on my machine, reproducible with npm. Fortunately for me, this was as simple as: <code>pnpm create playwright</code>. Step two was a little more complex: get our extension loaded within the headless browser. I found this could be done by overriding Playwright's default Chrome properties, instructing it to only install the Harper plugin and nothing else:</p>
<pre><code class="hljs language-typescript">export const test = base.extend&#x3C;{
	context: BrowserContext;
	extensionId: string;
}>({
	// biome-ignore lint/correctness/noEmptyPattern: Playwright's fixture API passes fixtures as the first argument, even when none are used.
	context: async ({}, use) => {
		const pathToExtension = path.join(import.meta.dirname, '../build');
		console.log(`Loading extension from ${pathToExtension}`);
		const context = await chromium.launchPersistentContext('', {
			channel: 'chromium',
			args: [
				`--disable-extensions-except=${pathToExtension}`,
				`--load-extension=${pathToExtension}`,
			],
		});
		await use(context);
		await context.close();
	},
	extensionId: async ({ context }, use) => {
		let [background] = context.serviceWorkers();
		if (!background) background = await context.waitForEvent('serviceworker');

		const extensionId = background.url().split('/')[2];
		await use(extensionId);
	},
});
export const expect = test.expect;
</code></pre>
<p>From there, it was pretty trivial to build out assertions and tools for interacting with basic elements for the specific text editor I was interested in (<a href="https://www.slatejs.org/examples/richtext">Slate</a>).</p>
<h2>The Cool Part</h2>
<p>Most text editors on the web advertise themselves in the DOM, usually with a special attribute like <code>data-lexical-editor="true"</code> or <code>data-slate-editor="true"</code>. This even happens on world-class sites like LinkedIn or Instagram. I wonder if I can use this for something?</p>
<p>I believe this consistency in production code is intentional. Making our tests easier to write must be a side-effect of making the editor authors' own tests easier to write.</p>
<p>This is great news for me. With just a few tweaks, I can use the same code to test Harper on Discord, Medium, Notion, Desmos, Asana—you get the point. Since they use just a small set of rich text editors (which come pre-tagged), I can generate automated tests to determine whether Harper works properly on their sites.</p>
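<p>As a sketch of how that generation could work: the data attributes below are real, but the site list, helper names, and the test body in the comment are my own illustrative assumptions, not Harper's actual test code.</p>

```typescript
// Each rich-text framework advertises itself with a data attribute on its
// root element; map framework -> attribute once, reuse it for every site.
const editorAttributes = {
	slate: 'data-slate-editor',
	lexical: 'data-lexical-editor',
} as const;

type Framework = keyof typeof editorAttributes;

// Build the CSS selector that locates a given framework's editor root.
function selectorFor(framework: Framework): string {
	return `[${editorAttributes[framework]}="true"]`;
}

// One entry per site; every site using the same framework shares a test body.
const sites: { url: string; framework: Framework }[] = [
	{ url: 'https://www.slatejs.org/examples/richtext', framework: 'slate' },
	// ...the next thousand go here
];

// Inside the Playwright spec, this expands to one test per site:
//
//   for (const site of sites) {
//     test(`Harper works on ${site.url}`, async ({ page }) => {
//       await page.goto(site.url);
//       await page.locator(selectorFor(site.framework)).first().click();
//       // type a known mistake, then assert Harper's underline appears
//     });
//   }

console.log(selectorFor('slate')); // prints: [data-slate-editor="true"]
```

The point is that one selector table and one test body per framework scale to however many sites use that framework.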
<h2>The Not-So-Cool Part</h2>
<p>The bad news: this process is slow. Each page must be fetched from the network and operated on like a user would. We might be able to fix the first problem, but the second is unavoidable. My initial experiments put the runtime of each test case around thirty seconds. If I'm testing hundreds or thousands of sites, this is a real problem.</p>
<p>For the time being, I'll have these tests be their own workflow in GitHub Actions and only run them when the extension code changes. I don't see people other than myself messing with this code too much, so I am not worried.</p>
<h2>What Does Testing Thousands of Sites Look Like?</h2>
<p>I'll admit, I'm not quite in "thousands of sites" territory just yet. The most frustrating part is getting started. Now that I've got a couple of sites under my belt and the tools are ready, the next thousand will be a lot easier.</p>
            <content:encoded><![CDATA[<h1>Integration Testing Thousands of Websites with Playwright</h1>
<p>As I've recounted in previous posts, one of the hardest problems <a href="https://github.com/automattic/harper">Harper faces</a> is the great diversity of the internet. There is a great variety of text editors on the web, each used in a different context. That's a beautiful thing, but unfortunately our users expect Harper to work in all of these places seamlessly. I suppose you can view this post as a third part in this series where I talk about supporting thousands of websites for potentially millions of users (with zero server costs, I might add).</p>
<p>This journey started in Discord (as many do), with a user reporting an issue. I'm quite fortunate: they were technical. Their report was detailed and included some initial speculation on what the root cause could be.</p>
<img src="/images/why_so_serious_screenshot.png" alt="A small part of a larger conversation about the problem at hand.">
<p>While the actual underlying problem was complex and difficult to fix, that is not what this post is about. It is about Harper's strategy for doing end-to-end testing on the many sites we support.</p>
<p>It's relevant because this was a problem that could have been discovered through end-to-end testing. Since we have a diverse set of users already, they found the issue swiftly. That is far from ideal. Our testing suite should catch these problems <em>before the PR is merged.</em></p>
<h2>Why Playwright?</h2>
<p>"<a href="https://playwright.dev/">Playwright</a> enables reliable end-to-end testing for modern web apps." At least—that's what their site claims Playwright can do. I'm not sure if it lives up to this claim of reliability, at least not yet.</p>
<p>The decision to use Playwright over alternative choices came down to a few key points:</p>
<ul>
<li>It's quite polished and well supported (Microsoft seems to be the main player).</li>
<li>While it is more complex to load a Chrome extension in Playwright than Puppeteer, I am also given a lot more control.</li>
<li>We're already using it for other integrations (but not for end-to-end tests).</li>
</ul>
<h2>The Game Plan</h2>
<p>My goal is to build up a test suite comprehensive enough that I can catch integration problems in foreign text editors before I merge PRs for logic that interacts with them.</p>
<p>Step one was to get Playwright installed and running on my machine, reproducible with npm. Fortunately for me, this was as simple as: <code>pnpm create playwright</code>. Step two was a little more complex: get our extension loaded within the headless browser. I found this could be done by overriding Playwright's default Chrome properties, instructing it to only install the Harper plugin and nothing else:</p>
<pre><code class="hljs language-typescript">export const test = base.extend&#x3C;{
	context: BrowserContext;
	extensionId: string;
}>({
	// biome-ignore lint/correctness/noEmptyPattern: Playwright's fixture API passes fixtures as the first argument, even when none are used.
	context: async ({}, use) => {
		const pathToExtension = path.join(import.meta.dirname, '../build');
		console.log(`Loading extension from ${pathToExtension}`);
		const context = await chromium.launchPersistentContext('', {
			channel: 'chromium',
			args: [
				`--disable-extensions-except=${pathToExtension}`,
				`--load-extension=${pathToExtension}`,
			],
		});
		await use(context);
		await context.close();
	},
	extensionId: async ({ context }, use) => {
		let [background] = context.serviceWorkers();
		if (!background) background = await context.waitForEvent('serviceworker');

		const extensionId = background.url().split('/')[2];
		await use(extensionId);
	},
});
export const expect = test.expect;
</code></pre>
<p>From there, it was pretty trivial to build out assertions and tools for interacting with basic elements for the specific text editor I was interested in (<a href="https://www.slatejs.org/examples/richtext">Slate</a>).</p>
<h2>The Cool Part</h2>
<p>Most text editors on the web advertise themselves in the DOM, usually with a special attribute like <code>data-lexical-editor="true"</code> or <code>data-slate-editor="true"</code>. This even happens on world-class sites like LinkedIn or Instagram. I wonder if I can use this for something?</p>
<p>I believe this consistency in production code is intentional. Making our tests easier to write must be a side-effect of making the editor authors' own tests easier to write.</p>
<p>This is great news for me. With just a few tweaks, I can use the same code to test Harper on Discord, Medium, Notion, Desmos, Asana—you get the point. Since they use just a small set of rich text editors (which come pre-tagged), I can generate automated tests to determine whether Harper works properly on their sites.</p>
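<p>As a sketch of how that generation could work: the data attributes below are real, but the site list, helper names, and the test body in the comment are my own illustrative assumptions, not Harper's actual test code.</p>

```typescript
// Each rich-text framework advertises itself with a data attribute on its
// root element; map framework -> attribute once, reuse it for every site.
const editorAttributes = {
	slate: 'data-slate-editor',
	lexical: 'data-lexical-editor',
} as const;

type Framework = keyof typeof editorAttributes;

// Build the CSS selector that locates a given framework's editor root.
function selectorFor(framework: Framework): string {
	return `[${editorAttributes[framework]}="true"]`;
}

// One entry per site; every site using the same framework shares a test body.
const sites: { url: string; framework: Framework }[] = [
	{ url: 'https://www.slatejs.org/examples/richtext', framework: 'slate' },
	// ...the next thousand go here
];

// Inside the Playwright spec, this expands to one test per site:
//
//   for (const site of sites) {
//     test(`Harper works on ${site.url}`, async ({ page }) => {
//       await page.goto(site.url);
//       await page.locator(selectorFor(site.framework)).first().click();
//       // type a known mistake, then assert Harper's underline appears
//     });
//   }

console.log(selectorFor('slate')); // prints: [data-slate-editor="true"]
```

The point is that one selector table and one test body per framework scale to however many sites use that framework.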
<h2>The Not-So-Cool Part</h2>
<p>The bad news: this process is slow. Each page must be fetched from the network and operated on like a user would. We might be able to fix the first problem, but the second is unavoidable. My initial experiments put the runtime of each test case around thirty seconds. If I'm testing hundreds or thousands of sites, this is a real problem.</p>
<p>For the time being, I'll have these tests be their own workflow in GitHub Actions and only run them when the extension code changes. I don't see people other than myself messing with this code too much, so I am not worried.</p>
<h2>What Does Testing Thousands of Sites Look Like?</h2>
<p>I'll admit, I'm not quite in "thousands of sites" territory just yet. The most frustrating part is getting started. Now that I've got a couple of sites under my belt and the tools are ready, the next thousand will be a lot easier.</p>
        </item>
        <item>
            <title><![CDATA[Bypassing Hallucinations in LLMs]]></title>
            <link>https://elijahpotter.dev/articles/bypassing_hallucinations_in_llms</link>
            <guid>https://elijahpotter.dev/articles/bypassing_hallucinations_in_llms</guid>
            <pubDate>Wed, 14 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Bypassing Hallucinations in LLMs</h1>
<img src="/images/bike_sidewalk.webp" alt="My bike broke down. Luckily, I had my camera">
<p>Before I get too deep, I just want to get it out of the way: OpenAI's o3 model is impressive.
With its tool use and web search capabilities, it can do a lot more than most offerings out there.</p>
<p>That said, although I've found it to be quite a capable coder, I still don't trust it with anything important.
Once or twice, I've instructed it to outline a React component, only to rewrite most of it myself.</p>
<p>I also don't trust its factual accuracy at all.
After hallucinating a campground and several entire web APIs, I can't quite believe anything it says. Not that I've trusted any model that came before.</p>
<p>That said, there is one thing it is incredibly useful for: <strong>finding canonical documentation for complex subjects.</strong></p>
<p>I've found the greatest success, personally and professionally, when I am working with the most concrete and original source of information available.
When working with the web, the most canonical source is the W3C spec.
When working with compilers, it's The Dragon Book.
When researching the ins and outs of GNU/Linux systems, it's the <code>man</code> page.</p>
<p>This can't be a novel concept.
You (the reader) must see things the same way I do.
Even LinkedIn seems to agree that base-truth documentation is where we should be getting our information.</p>
<p>I've always wondered: if the W3C spec is the best place to find information about the web, why isn't it the first result on Google?
Why, after all these years, is W3Schools <em>still the first result 90% of the time?</em></p>
<p>I use o3 to find canonical sources of information.</p>
<p>I was recently looking to improve the dynamic range in my camera, but do so before my post-processing step. By improving dynamic range in-camera, I can avoid the pitfalls of certain kinds of compression. I asked o3: "find me the canonical guide for improving dynamic range on my D7100 from the most authoritative source."</p>
<p>I learnt more from the resulting guide (which was hosted on the Nikon website, NOT ChatGPT) than from the last three years of shooting combined.</p>]]></description>
            <content:encoded><![CDATA[<h1>Bypassing Hallucinations in LLMs</h1>
<img src="/images/bike_sidewalk.webp" alt="My bike broke down. Luckily, I had my camera">
<p>Before I get too deep, I just want to get it out of the way: OpenAI's o3 model is impressive.
With its tool use and web search capabilities, it can do a lot more than most offerings out there.</p>
<p>That said, although I've found it to be quite a capable coder, I still don't trust it with anything important.
Once or twice, I've instructed it to outline a React component, only to rewrite most of it myself.</p>
<p>I also don't trust its factual accuracy at all.
After hallucinating a campground and several entire web APIs, I can't quite believe anything it says. Not that I've trusted any model that came before.</p>
<p>That said, there is one thing it is incredibly useful for: <strong>finding canonical documentation for complex subjects.</strong></p>
<p>I've found the greatest success, personally and professionally, when I am working with the most concrete and original source of information available.
When working with the web, the most canonical source is the W3C spec.
When working with compilers, it's The Dragon Book.
When researching the ins and outs of GNU/Linux systems, it's the <code>man</code> page.</p>
<p>This can't be a novel concept.
You (the reader) must see things the same way I do.
Even LinkedIn seems to agree that base-truth documentation is where we should be getting our information.</p>
<p>I've always wondered: if the W3C spec is the best place to find information about the web, why isn't it the first result on Google?
Why, after all these years, is W3Schools <em>still the first result 90% of the time?</em></p>
<p>I use o3 to find canonical sources of information.</p>
<p>I was recently looking to improve the dynamic range in my camera, but do so before my post-processing step. By improving dynamic range in-camera, I can avoid the pitfalls of certain kinds of compression. I asked o3: "find me the canonical guide for improving dynamic range on my D7100 from the most authoritative source."</p>
<p>I learnt more from the resulting guide (which was hosted on the Nikon website, NOT ChatGPT) than from the last three years of shooting combined.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Putting Harper in Your Browser]]></title>
            <link>https://elijahpotter.dev/articles/putting_harper_in_your_browser</link>
            <guid>https://elijahpotter.dev/articles/putting_harper_in_your_browser</guid>
            <pubDate>Fri, 02 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Putting Harper in Your Browser</h1>
<p>When our users <a href="https://writewithharper.com">install Harper</a>, they should expect it to work anywhere they do.
Whether they're writing up a blog post in WordPress, leaving a comment on Facebook, or messaging a loved one on WhatsApp, Harper should be there.
Harper's core is extremely portable and it can run pretty much anywhere Rust can, so what's the big deal?</p>
<p>Why can't we just run Harper in the browser through a web extension?</p>
<h2>Running Locally</h2>
<p>There's a single complaint that I hear over and over again from people who use Grammarly or LanguageTool: they are both slow as molasses.
The process of writing has slowly evolved to be more complex than it needs to be.
With these tools, writing looks like this:</p>
<ol>
<li>Write a sentence.</li>
<li>Wait for the grammar checker to run (which takes as many as four seconds).</li>
<li>Fix the mistakes you made.</li>
<li>Go back to step one.</li>
</ol>
<p>The whole process reminds me of the <a href="https://thomasvogelaar.me/posts/the-copilot-pause/">copilot pause</a>.
This is part of why Harper is better than these other tools: it doesn't stop you from writing at the speed of thought.
Our most ardent users tell us this all the time: it feels great to <strong>just write</strong>, error free.</p>
<h4>How do we deliver grammar checking so quickly?</h4>
<p>Instead of hosting huge Java codebases in the cloud, we ship our software straight to the user's device.
Since there's no network request involved, we're able to put pixels on the screen faster than anyone else.
That's not to mention the privacy implications.</p>
<h4>Running Harper's engine locally in the browser presented some technical challenges.</h4>
<p>I'm quite proud that our JavaScript library can be installed as simply as <code>npm install harper.js</code>.
In order for that to work as well as it does, I needed to develop a system for:</p>
<ol>
<li>Compiling our engine to WebAssembly.</li>
<li>Shipping that engine to the browser.</li>
<li>Instantiating the WebAssembly code.</li>
<li>Building out the boilerplate necessary to make it feel native.</li>
</ol>
<p>Steps one and four were easy.
I just slapped <code>#[wasm_bindgen]</code> tags on a Rust library and put on a pot of coffee.
Steaming coffee is vital for writing tedious JavaScript.</p>
<p>Steps two and three, however, were a little more difficult.
The latest iteration of Google's extension standard, Manifest V3, places some heavy-handed rules on how executable code can be loaded.
I won't bore you with the details here.
Know that I spent many hours in JavaScript bundler hell.</p>
<h2>Running Everywhere</h2>
<p>Harper, nascent as it is, has the greatest market opportunity in the browser.
Over 3.5 <em>billion</em> people use Chrome on a weekly basis.
The plurality of knowledge workers spend most of their waking moments (as crushing as it sounds) in a web browser.
Half of the time spent at a desk today goes to writing.</p>
<p>In order to address this market segment, we need a Chrome extension.
To lint text in the browser, I need a way to:</p>
<ol>
<li>Cleanly read text from input fields.</li>
<li>Locate the pixel coordinates of grammatical errors.</li>
<li>Render suggestions in popups.</li>
<li>Cleanly replace text in input fields when a suggestion is selected.</li>
</ol>
<img src="/images/harper_chrome_ext_2.png" alt="An example of Harper&#x27;s suggestion box.">
<h4>Reading and Writing Text is Hard</h4>
<p>The web may have standards, but there is nothing standard about it.
The "standard" way to input text is with a <code>&#x3C;textarea /></code> element.
Even so, most high-traffic sites implement their own text editors from scratch, using <code>&#x3C;div contenteditable="true" /></code> as a base.
Each of these cases required special care.</p>
<h3><code>&#x3C;textarea /></code></h3>
<p><code>&#x3C;textarea /></code>s are hard for one reason: it is difficult to get a good understanding of what they look like.
I can obtain their content with <code>input.value</code>, but I can't directly infer the pixel coordinates of grammatical errors inside them.</p>
<p>When the Harper Chrome Extension is offered an <code>&#x3C;input /></code> or <code>&#x3C;textarea /></code> element to analyze, here's what it does.</p>
<ol>
<li>Creates a new <code>&#x3C;div /></code>.</li>
<li>Copies all styles from the provided element onto the <code>&#x3C;div /></code>.</li>
<li>Using <code>position: absolute;</code>, it moves this <code>&#x3C;div /></code> directly on top of the provided element.</li>
<li>Copies the content of the provided element into the <code>&#x3C;div /></code>.</li>
<li>Uses the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Range">Range API</a> to turn the text indices emitted by Harper's engine into pixel coordinates on the <code>&#x3C;div /></code>.</li>
</ol>
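<p>Concretely, the steps above might look something like this in the browser. This is a minimal sketch with assumed names, not Harper's actual implementation; it also includes a pure helper for the scroll bookkeeping.</p>

```typescript
// Turn character indices inside a <textarea> into pixel coordinates by
// mirroring the element into a styled <div> and using the Range API.
function coordinatesInTextarea(
	textarea: HTMLTextAreaElement,
	start: number,
	end: number,
): DOMRect {
	// 1. Create the mirror <div>.
	const mirror = document.createElement('div');

	// 2. Copy every computed style from the textarea onto the mirror.
	const styles = window.getComputedStyle(textarea);
	for (let i = 0; i < styles.length; i++) {
		const prop = styles.item(i);
		mirror.style.setProperty(prop, styles.getPropertyValue(prop));
	}

	// 3. Pin the mirror directly on top of the textarea.
	const box = textarea.getBoundingClientRect();
	mirror.style.position = 'absolute';
	mirror.style.top = `${box.top + window.scrollY}px`;
	mirror.style.left = `${box.left + window.scrollX}px`;

	// 4. Copy the content across so character indices line up one-to-one.
	mirror.textContent = textarea.value;
	document.body.appendChild(mirror);

	// 5. Use the Range API to get pixel coordinates for the error span.
	const range = document.createRange();
	range.setStart(mirror.firstChild!, start);
	range.setEnd(mirror.firstChild!, end);
	const rect = range.getBoundingClientRect();

	mirror.remove();
	return rect;
}

// Pure helper for the scrolling problem: shift mirror-derived coordinates
// by the textarea's internal scroll offset before drawing the underline.
function adjustForScroll(
	rect: { top: number; left: number },
	scroll: { top: number; left: number },
): { top: number; left: number } {
	return { top: rect.top - scroll.top, left: rect.left - scroll.left };
}
```

Because the mirror is a throwaway element, it can be created, measured, and removed on every lint pass, which is exactly where the performance cost comes from.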
<p>This mirroring strategy works, but it has performance costs and requires extra complexity to handle scrolling within the element.</p>
<h3><code>&#x3C;div contenteditable="true" /></code></h3>
<p>Since elements in a <code>contenteditable</code> text editor (like Trix, Lexical, Gutenberg, etc.) actually exist in the DOM, I can just use the <code>Range API</code> to get pixel coordinates.
The trouble this time comes when I try to write a suggestion back into the editor.</p>
<p>Most documentation I could find suggests that you:</p>
<ol>
<li>Select the content of the element you wish to edit.
This can be done using <code>window.getSelection().addRange()</code>.</li>
<li>Call <code>document.execCommand('insertText', null, "YOUR TEXT")</code>.</li>
</ol>
<p>Much of this documentation acknowledges that <code>document.execCommand</code> is deprecated, but instructs you to use it anyway.</p>
<p>This is bad advice. <strong>Do not do this.</strong>
I spent an embarrassing amount of time trying to get it to work consistently.</p>
<p>The better way to replace text programmatically comes directly from the W3C standard:</p>
<ol>
<li>Manually edit the DOM in the fashion outlined by the suggestion chosen by the user.</li>
<li>Fire input events to instruct WYSIWYG editors to synchronize their internal state to the DOM.</li>
</ol>
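<p>Sketched out, that standard-compliant flow looks roughly like this. It assumes, for simplicity, that the suggestion falls within a single flat text node; the type and function names are illustrative, not Harper's.</p>

```typescript
// The span of text Harper wants to change, as character indices.
interface Suggestion {
	start: number;       // index where the problem span begins
	end: number;         // exclusive end of the problem span
	replacement: string; // text to put in its place
}

// Pure helper: apply a suggestion to a plain string.
function applySuggestion(text: string, s: Suggestion): string {
	return text.slice(0, s.start) + s.replacement + text.slice(s.end);
}

// Browser-only: edit the DOM directly, then fire an input event so the
// WYSIWYG editor resynchronizes its internal state from the DOM.
function applyToContentEditable(textNode: Text, s: Suggestion): void {
	textNode.textContent = applySuggestion(textNode.textContent ?? '', s);
	textNode.parentElement?.dispatchEvent(
		new InputEvent('input', {
			bubbles: true,
			inputType: 'insertText',
			data: s.replacement,
		}),
	);
}
```

The DOM edit is the easy half; the dispatched event is what keeps editors like Lexical or Slate from clobbering the change on their next render.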
<p>I'll admit that this is an oversimplification.
Much of the complexity here lies in determining which DOM nodes to edit or fire events on.</p>
<h2>Read the Source of Truth</h2>
<p>If you take one thing away from this post, it should be this: <strong>always read from the source of truth.</strong>
There is a lot of faulty information out there, especially when it comes to creating complex interactions with opaque systems.
If there is a source of truth, read it.
It may look intimidating or seem unnecessarily verbose.
It is that way for a reason.</p>]]></description>
            <content:encoded><![CDATA[<h1>Putting Harper in Your Browser</h1>
<p>When our users <a href="https://writewithharper.com">install Harper</a>, they should expect it to work anywhere they do.
Whether they're writing up a blog post in WordPress, leaving a comment on Facebook, or messaging a loved one on WhatsApp, Harper should be there.
Harper's core is extremely portable and it can run pretty much anywhere Rust can, so what's the big deal?</p>
<p>Why can't we just run Harper in the browser through a web extension?</p>
<h2>Running Locally</h2>
<p>There's a single complaint that I hear over and over again from people who use Grammarly or LanguageTool: they are both slow as molasses.
The process of writing has slowly evolved to be more complex than it needs to be.
With these tools, writing looks like this:</p>
<ol>
<li>Write a sentence.</li>
<li>Wait for the grammar checker to run (which takes as many as four seconds).</li>
<li>Fix the mistakes you made.</li>
<li>Go back to step one.</li>
</ol>
<p>The whole process reminds me of the <a href="https://thomasvogelaar.me/posts/the-copilot-pause/">copilot pause</a>.
This is part of why Harper is better than these other tools: it doesn't stop you from writing at the speed of thought.
Our most ardent users tell us this all the time: it feels great to <strong>just write</strong>, error free.</p>
<h4>How do we deliver grammar checking so quickly?</h4>
<p>Instead of hosting huge Java codebases in the cloud, we ship our software straight to the user's device.
Since there's no network request involved, we're able to put pixels on the screen faster than anyone else.
That's not to mention the privacy implications.</p>
<h4>Running Harper's engine locally in the browser presented some technical challenges.</h4>
<p>I'm quite proud that our JavaScript library can be installed as simply as <code>npm install harper.js</code>.
In order for that to work as well as it does, I needed to develop a system for:</p>
<ol>
<li>Compiling our engine to WebAssembly.</li>
<li>Shipping that engine to the browser.</li>
<li>Instantiating the WebAssembly code.</li>
<li>Building out the boilerplate necessary to make it feel native.</li>
</ol>
<p>Steps one and four were easy.
I just slapped <code>#[wasm_bindgen]</code> tags on a Rust library and put on a pot of coffee.
Steaming coffee is vital for writing tedious JavaScript.</p>
<p>Steps two and three, however, were a little more difficult.
The latest iteration of Google's extension standard, Manifest V3, places some heavy-handed rules on how executable code can be loaded.
I won't bore you with the details here.
Know that I spent many hours in JavaScript bundler hell.</p>
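<p>To give a flavor of step three: <code>WebAssembly.instantiate</code> accepts raw bytes, which is what makes a Manifest V3-compatible loader possible at all, since the engine can be bundled with the extension instead of fetched as remote code. Here's a minimal sketch. The bytes below are the smallest valid (empty) wasm module, purely for illustration; this is not harper.js's actual loader.</p>

```javascript
// The smallest valid WebAssembly module: just the "\0asm" magic and version.
// harper.js bundles its real engine bytes the same way: locally, not fetched.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version: 1
]);

async function loadEngine(bytes) {
  // Instantiating from raw bytes involves no URL at all, which sidesteps
  // Manifest V3's restrictions on remotely hosted code.
  const { instance } = await WebAssembly.instantiate(bytes);
  return instance;
}
```

<p>In the real library, the instance's exports are then wrapped in the JavaScript boilerplate from step four.</p>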
<h2>Running Everywhere</h2>
<p>Harper, nascent as it is, has the greatest market opportunity in the browser.
Over 3.5 <em>billion</em> people use Chrome on a weekly basis.
The plurality of knowledge workers spend most of their waking moments (as crushing as it sounds) in a web browser.
Half of the time spent at desks today is spent writing.</p>
<p>In order to address this market segment, we need a Chrome extension.
To lint text in the browser, I need a way to:</p>
<ol>
<li>Cleanly read text from input fields.</li>
<li>Locate the pixel coordinates of grammatical errors.</li>
<li>Render suggestions in popups.</li>
<li>Cleanly replace text in input fields when a suggestion is selected.</li>
</ol>
<img src="/images/harper_chrome_ext_2.png" alt="An example of Harper&#x27;s suggestion box.">
<h4>Reading and Writing Text is Hard</h4>
<p>The web may have standards, but there is nothing standard about it.
The "standard" way to input text is with a <code>&#x3C;textarea /></code> element.
Even so, most high-traffic sites implement their own text editors from scratch, using <code>&#x3C;div contenteditable="true" /></code> as a base.
Each of these cases required special care.</p>
<h3><code>&#x3C;textarea /></code></h3>
<p><code>&#x3C;textarea /></code>s are hard for one reason: it is difficult to get a good understanding of what they look like.
I can obtain their content with <code>input.value</code>, but I can't directly infer the pixel coordinates of grammatical errors inside them.</p>
<p>When the Harper Chrome Extension is offered an <code>&#x3C;input /></code> or <code>&#x3C;textarea /></code> element to analyze, here's what it does:</p>
<ol>
<li>Creates a new <code>&#x3C;div /></code>.</li>
<li>Copies all styles from the provided element onto the <code>&#x3C;div /></code>.</li>
<li>Using <code>position: absolute;</code>, it moves this <code>&#x3C;div /></code> directly on top of the provided element.</li>
<li>Copies the content of the provided element into the <code>&#x3C;div /></code>.</li>
<li>Uses the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Range">Range API</a> to turn the text indices emitted by Harper's engine into pixel coordinates on the <code>&#x3C;div /></code>.</li>
</ol>
<p>This mirroring strategy works, but it comes with performance costs and extra complexity to keep up with scrolling within the element.</p>
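<p>The style-copying half of that strategy can be sketched as follows. The property list and the <code>getStyle</code> callback are illustrative, not the extension's actual code; <code>getStyle</code> just abstracts over <code>getComputedStyle</code> so the logic is testable outside a browser.</p>

```javascript
// Sketch of the style-copying step of the mirroring strategy (illustrative,
// not the extension's actual code).
const MIRRORED_PROPS = [
  "font-family", "font-size", "font-weight", "line-height", "letter-spacing",
  "padding", "border-width", "box-sizing", "white-space", "overflow-wrap",
];

function buildMirrorStyle(getStyle, rect) {
  // Pin the mirror div exactly over the textarea, but let clicks fall
  // through to the real element underneath.
  const style = {
    position: "absolute",
    top: `${rect.top}px`,
    left: `${rect.left}px`,
    width: `${rect.width}px`,
    height: `${rect.height}px`,
    "pointer-events": "none",
  };
  // Copy every style that affects text layout so that Range measurements on
  // the mirror line up with pixel positions in the textarea.
  for (const prop of MIRRORED_PROPS) {
    style[prop] = getStyle(prop);
  }
  return style;
}
```

<p>In a browser, <code>getStyle</code> would be <code>(prop) => getComputedStyle(textarea).getPropertyValue(prop)</code>, and the returned map would be assigned onto the mirror <code>&#x3C;div /></code>'s style.</p>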
<h3><code>&#x3C;div contenteditable="true" /></code></h3>
<p>Since elements in a <code>contenteditable</code> text editor (like Trix, Lexical, Gutenberg, etc.) actually exist in the DOM, I can just use the <code>Range API</code> to get pixel coordinates.
The trouble this time comes when I try to write a suggestion back into the editor.</p>
<p>Most documentation I could find suggests that you:</p>
<ol>
<li>Select the content of the element you wish to edit.
This can be done using <code>window.getSelection().addRange()</code>.</li>
<li>Call <code>document.execCommand('insertText', null, "YOUR TEXT")</code>.</li>
</ol>
<p>Much of this documentation acknowledges that <code>document.execCommand</code> is deprecated, but instructs you to use it anyway.</p>
<p>This is bad advice. <strong>Do not do this.</strong>
I spent an embarrassing amount of time trying to get it to work consistently.</p>
<p>The better way to replace text programmatically comes directly from the W3C standard:</p>
<ol>
<li>Manually edit the DOM in the fashion outlined by the suggestion chosen by the user.</li>
<li>Fire input events to instruct WYSIWYG editors to synchronize their internal state to the DOM.</li>
</ol>
<p>I'll admit that this is an oversimplification.
Much of the complexity here lies in determining which DOM nodes to edit or fire events on.</p>
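<p>A rough sketch of the approach, with the pure text computation separated from the DOM calls. The function name and event payloads here are illustrative, not Harper's actual code, and real editors may expect additional events beyond these two.</p>

```javascript
// Sketch: compute a text node's replacement text plus the input events that
// tell a WYSIWYG editor to resynchronize its internal state with the DOM.
// Kept pure so it runs anywhere; the DOM side is shown in comments below.
function planReplacement(nodeText, start, end, replacement) {
  const newText = nodeText.slice(0, start) + replacement + nodeText.slice(end);
  const events = ["beforeinput", "input"].map((type) => ({
    type,
    inputType: "insertReplacementText", // an inputType from the Input Events spec
    data: replacement,
  }));
  return { newText, events };
}

// In the browser, applying the plan looks roughly like:
//   textNode.data = plan.newText;
//   for (const e of plan.events) {
//     editorRoot.dispatchEvent(
//       new InputEvent(e.type, { inputType: e.inputType, data: e.data, bubbles: true })
//     );
//   }
```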
<h2>Read the Source of Truth</h2>
<p>If you take one thing away from this post, it should be this: <strong>always read from the source of truth.</strong>
There is a lot of faulty information out there, especially when it comes to creating complex interactions with opaque systems.
If there is a source of truth, read it.
It may look intimidating or seem unnecessarily verbose.
It is that way for a reason.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/harper_chrome_ext_2.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Always Think of the Hook First]]></title>
            <link>https://elijahpotter.dev/articles/always_think_of_the_hook_first</link>
            <guid>https://elijahpotter.dev/articles/always_think_of_the_hook_first</guid>
            <pubDate>Thu, 01 May 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Always Think of the Hook First</h1>
<img src="/images/log.webp" alt="A log I found in the Indian Peaks Wilderness">
<p>In his popular book, <em>Outliers</em>, Malcolm Gladwell made the case that there was one number that predicted your skill level in a field: the number of hours of dedicated practice.
In popular culture, this idea has been simplified.
Some think that if you just do something for 10,000 hours, then you'll become a world-class expert.
This is false.</p>
<p>What does the word "dedicated" mean in the phrase "dedicated" practice?
It means applying focused, deliberate thought to it.
It means reflecting on your own performance.
It means seeking help from masters in the field.
Above all, it means <strong>achieving mastery is not easy</strong>.</p>
<p>I aim to master the skill of writing.
In my free time this week, I've been working on my deliberate practice of writing.
In doing so, I've come to some conclusions.
If you'd like to join me in pursuing mastery of the written word, here's what I think you should do.</p>
<h2>Proof it Yourself</h2>
<p>Two-time Pulitzer Prize winner David McCullough says, "Writing is thinking. To write well is to think clearly. That's why it's so hard."</p>
<p>Because the act of writing is also the act of thinking, the process of revising text clarifies thought.
I personally find that my argument when I first touch my keyboard usually stands in stark contrast to when I hit "publish."</p>
<p>Heavy-handed revision tools like ChatGPT actively disrupt this great benefit of the revision process.
Sure, they can rewrite tone to be more attractive online, but they also rob you of the opportunity to develop your concept.
That's why I turn everything else off and only use Harper: because it gets out of the way of what I am trying to say.
<strong>You can't reflect on what you've written if you are not the one reflecting.</strong></p>
<h2>Read The Work of Great Writers</h2>
<p>When I sit down to write after spending time online, I feel deflated.
The competition of social media and the churn of the news cycle both lead to poor prose.</p>
<p>When I sit to write after reading Bonnie Garmus or Cal Newport, I feel invigorated.
I can imagine a shape for the words on the page which precisely captures the ideas I'm trying to communicate.
Reading great literature gives you an intuition for what your own work should look like.</p>
<h2>Think of the Hook First</h2>
<p>In rapper circles, it's a common point of debate whether to think of the verse first or the hook.</p>
<p>On one hand, the verse is the substance of the song.
It contains the message, the detail, and the personal meaning.
Music derives much of its complexity from the verse.
On the other hand, the hook is what gets people to listen.
It is what makes them stop flicking through Spotify and fall into the beat.</p>
<p>If you're writing for an audience, you have a lot of competition for their attention.
Spend more time on the introduction than anything else.
It should capture your readers, delight their imaginations, and put them in your shoes.</p>
<h2>Don't Get Attached</h2>
<p>I spent a long time on a post similar to this one.
I was quite proud of it, for I had spent hours adding layers of detail and quite a few case studies from my own work.
Alas, Gutenberg decided it was not meant to be and chucked it into oblivion.</p>
<p>Although I was initially crestfallen from the loss, I now believe the second round ended up even better.
To truly revise your work, you'll inevitably have to delete parts of it.
Get comfortable with that.</p>]]></description>
            <content:encoded><![CDATA[<h1>Always Think of the Hook First</h1>
<img src="/images/log.webp" alt="A log I found in the Indian Peaks Wilderness">
<p>In his popular book, <em>Outliers</em>, Malcolm Gladwell made the case that there was one number that predicted your skill level in a field: the number of hours of dedicated practice.
In popular culture, this idea has been simplified.
Some think that if you just do something for 10,000 hours, then you'll become a world-class expert.
This is false.</p>
<p>What does the word "dedicated" mean in the phrase "dedicated practice"?
It means applying focused, deliberate thought to it.
It means reflecting on your own performance.
It means seeking help from masters in the field.
Above all, it means <strong>achieving mastery is not easy</strong>.</p>
<p>I aim to master the skill of writing.
In my free time this week, I've been working on my deliberate practice of writing.
In doing so, I've come to some conclusions.
If you'd like to join me in pursuing mastery of the written word, here's what I think you should do.</p>
<h2>Proof it Yourself</h2>
<p>Two-time Pulitzer Prize winner David McCullough says, "Writing is thinking. To write well is to think clearly. That's why it's so hard."</p>
<p>Because the act of writing is also the act of thinking, the process of revising text clarifies thought.
I personally find that my argument when I first touch my keyboard usually stands in stark contrast to when I hit "publish."</p>
<p>Heavy-handed revision tools like ChatGPT actively disrupt this great benefit of the revision process.
Sure, they can rewrite tone to be more attractive online, but they also rob you of the opportunity to develop your concept.
That's why I turn everything else off and only use Harper: because it gets out of the way of what I am trying to say.
<strong>You can't reflect on what you've written if you are not the one reflecting.</strong></p>
<h2>Read The Work of Great Writers</h2>
<p>When I sit down to write after spending time online, I feel deflated.
The competition of social media and the churn of the news cycle both lead to poor prose.</p>
<p>When I sit to write after reading Bonnie Garmus or Cal Newport, I feel invigorated.
I can imagine a shape for the words on the page which precisely captures the ideas I'm trying to communicate.
Reading great literature gives you an intuition for what your own work should look like.</p>
<h2>Think of the Hook First</h2>
<p>In rapper circles, it's a common point of debate whether to think of the verse first or the hook.</p>
<p>On one hand, the verse is the substance of the song.
It contains the message, the detail, and the personal meaning.
Music derives much of its complexity from the verse.
On the other hand, the hook is what gets people to listen.
It is what makes them stop flicking through Spotify and fall into the beat.</p>
<p>If you're writing for an audience, you have a lot of competition for their attention.
Spend more time on the introduction than anything else.
It should capture your readers, delight their imaginations, and put them in your shoes.</p>
<h2>Don't Get Attached</h2>
<p>I spent a long time on a post similar to this one.
I was quite proud of it, for I had spent hours adding layers of detail and quite a few case studies from my own work.
Alas, Gutenberg decided it was not meant to be and chucked it into oblivion.</p>
<p>Although I was initially crestfallen from the loss, I now believe the second round ended up even better.
To truly revise your work, you'll inevitably have to delete parts of it.
Get comfortable with that.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/log.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[ChatGPT for the Moms]]></title>
            <link>https://elijahpotter.dev/articles/chatgpt_for_the_moms</link>
            <guid>https://elijahpotter.dev/articles/chatgpt_for_the_moms</guid>
            <pubDate>Wed, 23 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>ChatGPT for the Moms</h1>
<img src="/images/guggenheim.webp" alt="Spring has sprung, and Guggenheim shows it.">
<p>My mom is a violently intelligent woman.
But she lives in the unfortunate reality of not being a software engineer or mathematician.
While she is just as excited about ChatGPT (and all the other nascent AI technology) as the rest of us, she isn't aware of the pitfalls artificial intelligence still has these days.</p>
<p>If your mom is experimenting with ChatGPT, I'd highly recommend sending this list to her.
Powerful technology is in her hands, but I've got three things she should know before relying on it.</p>
<h2>Advice for Moms</h2>
<h3>Use Web Search</h3>
<p>My mother recently learned how to make cobbler.
She adored asking ChatGPT questions about the recipe and how she could improve it using the ingredients she had on hand.
Unfortunately, she was getting subtly wrong information from the bot, so the cobbler turned out "sub-par".
This can be solved by using ChatGPT's web search.</p>
<p>When you open up ChatGPT on a phone or on a computer, you'll be met with the familiar chat window.
If you ask it questions, it will give you an answer.
Sometimes that answer will be good, sometimes not.</p>
<p>You can generally get better information if you enable the web search tool.</p>
<img src="/images/chatgpt_web_search_tool.png" alt="A screenshot of the ChatGPT web search tool.">
<p>This will allow ChatGPT to do Google searches on your behalf.
It improves the quality of information it gives you, while only adding a second or two to its response.</p>
<h3>It Is Not Your Friend</h3>
<p>When we see a chat window, our first thought is that there's a person on the other end.
Someone with emotions and a life all their own.
That is not the case here.</p>
<p>ChatGPT, while an advanced machine learning model, is not a person.
It does not replace real human connection.
It can't watch your kids when you've got an emergency.
It won't show up to the school recital.
It doesn't care about you—despite what it says.</p>
<p>There's a lot that goes into a genuine social connection, all of which is missing from ChatGPT.</p>
<h3>Use Your Brain</h3>
<p>ChatGPT is no substitute for critical thought.</p>
<p>It cannot do most things, even if it claims it can.
You are a capable human being—able to accomplish anything you put your mind to.
While ChatGPT can be a helpful assistant or brainstorming partner, don't let it steer the ship.</p>
<h2>What to Remember</h2>
<p>While ChatGPT can be a powerful tool in the right hands, it is important to remember it is just that: a tool.
It can be enhanced with the right settings, it is not real human connection, and it can't replace your brain.
Use it well and use it wisely.</p>]]></description>
            <content:encoded><![CDATA[<h1>ChatGPT for the Moms</h1>
<img src="/images/guggenheim.webp" alt="Spring has sprung, and Guggenheim shows it.">
<p>My mom is a violently intelligent woman.
But she lives in the unfortunate reality of not being a software engineer or mathematician.
While she is just as excited about ChatGPT (and all the other nascent AI technology) as the rest of us, she isn't aware of the pitfalls artificial intelligence still has these days.</p>
<p>If your mom is experimenting with ChatGPT, I'd highly recommend sending this list to her.
Powerful technology is in her hands, but I've got three things she should know before relying on it.</p>
<h2>Advice for Moms</h2>
<h3>Use Web Search</h3>
<p>My mother recently learned how to make cobbler.
She adored asking ChatGPT questions about the recipe and how she could improve it using the ingredients she had on hand.
Unfortunately, she was getting subtly wrong information from the bot, so the cobbler turned out "sub-par".
This can be solved by using ChatGPT's web search.</p>
<p>When you open up ChatGPT on a phone or on a computer, you'll be met with the familiar chat window.
If you ask it questions, it will give you an answer.
Sometimes that answer will be good, sometimes not.</p>
<p>You can generally get better information if you enable the web search tool.</p>
<img src="/images/chatgpt_web_search_tool.png" alt="A screenshot of the ChatGPT web search tool.">
<p>This will allow ChatGPT to do Google searches on your behalf.
It improves the quality of information it gives you, while only adding a second or two to its response.</p>
<h3>It Is Not Your Friend</h3>
<p>When we see a chat window, our first thought is that there's a person on the other end.
Someone with emotions and a life all their own.
That is not the case here.</p>
<p>ChatGPT, while an advanced machine learning model, is not a person.
It does not replace real human connection.
It can't watch your kids when you've got an emergency.
It won't show up to the school recital.
It doesn't care about you—despite what it says.</p>
<p>There's a lot that goes into a genuine social connection, all of which is missing from ChatGPT.</p>
<h3>Use Your Brain</h3>
<p>ChatGPT is no substitute for critical thought.</p>
<p>It cannot do most things, even if it claims it can.
You are a capable human being—able to accomplish anything you put your mind to.
While ChatGPT can be a helpful assistant or brainstorming partner, don't let it steer the ship.</p>
<h2>What to Remember</h2>
<p>While ChatGPT can be a powerful tool in the right hands, it is important to remember it is just that: a tool.
It can be enhanced with the right settings, it is not real human connection, and it can't replace your brain.
Use it well and use it wisely.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/guggenheim.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Status of the Harper Chrome Extension]]></title>
            <link>https://elijahpotter.dev/articles/status_of_the_harper_chrome_extension</link>
            <guid>https://elijahpotter.dev/articles/status_of_the_harper_chrome_extension</guid>
            <pubDate>Wed, 23 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Status of the Harper Chrome Extension</h1>
<p>Elijah! Elijah! When will we have a Chrome extension for <a href="https://writewithharper.com">Harper</a>?</p>
<p>Although it's usually asked with a bit more panache, this is a question I've gotten a lot.
In fact, an author at Lifehacker <a href="https://lifehacker.com/tech/harper-offline-alternative-to-grammarly">wrote about it just yesterday</a>.</p>
<p>For the longest time, it was a project on the back-burner.
Something I intended to work on, but was always passed up for more important things.
It never saw the light of day.</p>
<p>Until now.</p>
<img src="/images/harper_chrome_ext.png" alt="The Harper Chrome Extension">
<h2>What is Harper?</h2>
<p>...and why do I need its Chrome extension?</p>
<p>Harper is a grammar checker that respects your privacy.
All language processing happens on-device, no matter where you're using it.</p>
<p>For most of Harper's history, it has only been available in one of many text editors as a plugin.
With the Chrome extension, you'll be able to get high-quality grammar checking anywhere on the web.</p>
<h2>What's the Status?</h2>
<p>I just marked the <a href="https://github.com/Automattic/harper/pull/1072">Chrome extension PR</a> as "ready for review".
Here's what that means.</p>
<p>If you're willing to get into the technical brambles, you can compile and run it yourself.
That will continue to be the case until the Chrome Web Store reviews and approves our submission, which could take a couple of weeks.</p>
<p>When installed, it will be enabled for a curated list of websites that I've personally reviewed to confirm the extension works well.
If you want to try using the extension on another site, you will be able to adventure into the world of unexpected behavior by hitting the popup menu.</p>
<p>Importantly, it is working exceptionally well on any site using the Gutenberg editor.
I will be backporting a lot of this code to our WordPress plugin to improve the experience there.</p>
<h3>Will it Support Firefox?</h3>
<p>As the PR stands today, the extension does not support Firefox.</p>
<p>I have a list of things that should get done whilst waiting for the Chrome Web Store to approve the submission.
Firefox support is on that list.</p>
<h2>Any Questions?</h2>
<p>If you've got questions for me about this, feel free to reach out via <a href="https://discord.com/invite/JBqcAaKrzQ">Discord</a> or on <a href="https://github.com/automattic/harper">GitHub</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Status of the Harper Chrome Extension</h1>
<p>Elijah! Elijah! When will we have a Chrome extension for <a href="https://writewithharper.com">Harper</a>?</p>
<p>Although it's usually asked with a bit more panache, this is a question I've gotten a lot.
In fact, an author at Lifehacker <a href="https://lifehacker.com/tech/harper-offline-alternative-to-grammarly">wrote about it just yesterday</a>.</p>
<p>For the longest time, it was a project on the back-burner.
Something I intended to work on, but was always passed up for more important things.
It never saw the light of day.</p>
<p>Until now.</p>
<img src="/images/harper_chrome_ext.png" alt="The Harper Chrome Extension">
<h2>What is Harper?</h2>
<p>...and why do I need its Chrome extension?</p>
<p>Harper is a grammar checker that respects your privacy.
All language processing happens on-device, no matter where you're using it.</p>
<p>For most of Harper's history, it has only been available in one of many text editors as a plugin.
With the Chrome extension, you'll be able to get high-quality grammar checking anywhere on the web.</p>
<h2>What's the Status?</h2>
<p>I just marked the <a href="https://github.com/Automattic/harper/pull/1072">Chrome extension PR</a> as "ready for review".
Here's what that means.</p>
<p>If you're willing to get into the technical brambles, you can compile and run it yourself.
That will continue to be the case until the Chrome Web Store reviews and approves our submission, which could take a couple of weeks.</p>
<p>When installed, it will be enabled for a curated list of websites that I've personally reviewed to confirm the extension works well.
If you want to try using the extension on another site, you will be able to adventure into the world of unexpected behavior by hitting the popup menu.</p>
<p>Importantly, it is working exceptionally well on any site using the Gutenberg editor.
I will be backporting a lot of this code to our WordPress plugin to improve the experience there.</p>
<h3>Will it Support Firefox?</h3>
<p>As the PR stands today, the extension does not support Firefox.</p>
<p>I have a list of things that should get done whilst waiting for the Chrome Web Store to approve the submission.
Firefox support is on that list.</p>
<h2>Any Questions?</h2>
<p>If you've got questions for me about this, feel free to reach out via <a href="https://discord.com/invite/JBqcAaKrzQ">Discord</a> or on <a href="https://github.com/automattic/harper">GitHub</a>.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/harper_chrome_ext.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Photography as Meditation]]></title>
            <link>https://elijahpotter.dev/articles/photography_as_meditation</link>
            <guid>https://elijahpotter.dev/articles/photography_as_meditation</guid>
            <pubDate>Sat, 19 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Photography as Meditation</h1>
<p>The goal of meditation is to become more mindful and aware.
For me, it is often an attempt to get an appreciation for my surroundings—to see how utterly complex the world is.</p>
<img src="/images/pink_flower_tree.webp" alt="Spring has sprung in Colorado. That means pink flowers everwhere.">
<p>Photography does something similar for me.
The heft of the camera in my hands reminds me to slow down and take in the view.</p>
<p>My mother is a photographer—a great one at that.
She's been in the business for at least as long as I've been alive.
She tells me something similar: the act of framing a picture is itself a way to understand your subject better.</p>
<img src="/images/puddle.webp" alt="Spring brings rain and rain brings puddles.">]]></description>
            <content:encoded><![CDATA[<h1>Photography as Meditation</h1>
<p>The goal of meditation is to become more mindful and aware.
For me, it is often an attempt to get an appreciation for my surroundings—to see how utterly complex the world is.</p>
<img src="/images/pink_flower_tree.webp" alt="Spring has sprung in Colorado. That means pink flowers everwhere.">
<p>Photography does something similar for me.
The heft of the camera in my hands reminds me to slow down and take in the view.</p>
<p>My mother is a photographer—a great one at that.
She's been in the business for at least as long as I've been alive.
She tells me something similar: the act of framing a picture is itself a way to understand your subject better.</p>
<img src="/images/puddle.webp" alt="Spring brings rain and rain brings puddles.">]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/pink_flower_tree.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The Art of Exception]]></title>
            <link>https://elijahpotter.dev/articles/the_art_of_exception</link>
            <guid>https://elijahpotter.dev/articles/the_art_of_exception</guid>
            <pubDate>Fri, 18 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Art of Exception</h1>
<p>English is an area of interest littered with edge cases.
In preparation for addressing similar problems in the development of the Harper Chrome extension, I'm spending some time here reflecting on what I've learned by tackling the complex maze of English edge cases.</p>
<p><a href="https://writewithharper.com">Harper</a> is not alone, and there's a real chance that you'll have to work on exception-tolerant code. In that case, you will need to understand the <em>why</em> for some of the design decisions you encounter in the wild.</p>
<img src="/images/harper_chrome_ext.png" alt="A screenshot of the Harper grammar checker web extension.">
<h2>What Do I Mean By "Edge Case"?</h2>
<p>An edge case is a situation (that is often context-dependent) which results in incorrect behavior from a model, program, or theoretical framework. In conversations about the Harper Chrome extension, an "edge case" is where the extension improperly reads from or writes to a site's embedded text editor.</p>
<p>Most sites use <code>&#x3C;textarea /></code> or <code>&#x3C;input /></code> elements for text editing, but a number of sites (including WordPress, as you know) have complex WYSIWYG editors. Each behaves differently, which can cause problems with the aforementioned read/write loop. The problem: our users expect us to support all major text editors.</p>
<h2>Err on the Side of Inaction</h2>
<p>In Harper's core algorithm, we err on the side of false-negatives. This decision was derived from an observation made early on in the project's life cycle: people usually blame themselves for their own writing mistakes, unless the error is truly trivial.</p>
<p>All in, we get far more complaints about false-positives than false-negatives, which is why we err on the side of inaction. If the algorithm thinks it's possible, but not certain, that an error was made, we suppress the report in case Harper is wrong.</p>
<p>In the Chrome extension, we will be exemplifying this rule by only enabling it on a curated list of domains by default.</p>
<img src="/images/site_pareto.png" alt="A graph of site visits vs rank">
<p>As site visits tend to follow a Pareto distribution, we can cover most traffic with just a few items in this list.</p>
<h2>Make Tweaks Easy</h2>
<p>When an edge case (which is almost always a false-positive) appears in Harper's core algorithm, it's usually in a pretty obvious spot. This is because we associate each lint output with a specific, easy-to-find module in the core code.</p>
<p>In most cases, this is not verbose, specialized Rust code. Rather, it is an LLM-friendly DSL that is legible to most with even beginner-level programming experience. This combination of easy-to-read and easy-to-edit makes contributions from the community (regarding edge-cases) commonplace.</p>
<p>I aim to replicate this success in the Chrome extension's read/write capabilities by carefully documenting the architecture and working with third-parties to make the contributing process clearer and easier.</p>
<h2>It Is an Art</h2>
<p>There is a reason I call this an art. Design decisions in exception-tolerant systems stem from these two simple ideas, but they grow from an intuition developed by their maintainers. I'm not exactly sure how the Chrome extension will grow to handle edge cases. I'll be sure to come back here and detail them when I do.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Art of Exception</h1>
<p>English is an area of interest littered with edge cases.
In preparation for addressing similar problems in the development of the Harper Chrome extension, I'm spending some time here reflecting on what I've learned by tackling the complex maze of English edge cases.</p>
<p><a href="https://writewithharper.com">Harper</a> is not alone, and there's a real chance that you'll have to work on exception-tolerant code. In that case, you will need to understand the <em>why</em> for some of the design decisions you encounter in the wild.</p>
<img src="/images/harper_chrome_ext.png" alt="A screenshot of the Harper grammar checker web extension.">
<h2>What Do I Mean By "Edge Case"?</h2>
<p>An edge case is a situation (often context-dependent) that results in incorrect behavior from a model, program, or theoretical framework. In conversations about the Harper Chrome extension, an "edge case" is where the extension improperly reads from or writes to a site's embedded text editor.</p>
<p>Most sites use <code>&#x3C;textarea /></code> or <code>&#x3C;input /></code> elements for text editing, but a number of sites (including WordPress, as you know) have complex WYSIWYG editors. Each behaves differently, which can cause problems with the aforementioned read/write loop. The problem: our users expect us to support all major text editors.</p>
<h2>Err on the Side of Inaction</h2>
<p>In Harper's core algorithm, we err on the side of false-negatives. This decision was derived from an observation made early on in the project's life cycle: people usually blame themselves for their own writing mistakes, unless the error is truly trivial.</p>
<p>All in all, we get far more complaints about false-positives than false-negatives, which is why we err on the side of inaction. If the algorithm thinks it's possible, but not certain, that an error was made, we suppress the report in case Harper is wrong.</p>
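<p>A minimal sketch of this suppression policy, using an illustrative <code>Lint</code> type and confidence threshold (not Harper's actual API):</p>

```rust
/// Illustrative stand-in for a lint report; not Harper's real type.
struct Lint {
    message: String,
    /// How certain the rule is that a genuine error occurred, in [0.0, 1.0].
    confidence: f32,
}

/// Suppress any report the engine is not highly certain about,
/// trading false-negatives for fewer false-positives.
fn filter_lints(lints: Vec<Lint>, threshold: f32) -> Vec<Lint> {
    lints
        .into_iter()
        .filter(|l| l.confidence >= threshold)
        .collect()
}

fn main() {
    let lints = vec![
        Lint { message: "Possible typo".into(), confidence: 0.55 },
        Lint { message: "Repeated word".into(), confidence: 0.99 },
    ];
    // With a high bar, only near-certain reports survive.
    let kept = filter_lints(lints, 0.9);
    assert_eq!(kept.len(), 1);
    println!("{}", kept[0].message);
}
```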
<p>In the Chrome extension, we will be exemplifying this rule by only enabling it on a curated list of domains by default.</p>
<img src="/images/site_pareto.png" alt="A graph of site visits vs rank">
<p>As site visits tend to follow a Pareto distribution, we can cover most traffic with just a few items in this list.</p>
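<p>As a rough illustration (assuming idealized Zipf-like traffic rather than real visit data), we can estimate the share of traffic covered by the top few sites:</p>

```rust
/// Fraction of total traffic covered by the top `k` of `n` ranked sites,
/// assuming visits follow a Zipf distribution (rank r receives weight 1/r).
fn coverage(k: usize, n: usize) -> f64 {
    let total: f64 = (1..=n).map(|r| 1.0 / r as f64).sum();
    let top: f64 = (1..=k).map(|r| 1.0 / r as f64).sum();
    top / total
}

fn main() {
    // Even with 10,000 ranked sites, a short allowlist goes a long way.
    println!("top 20:  {:.1}%", coverage(20, 10_000) * 100.0);
    println!("top 100: {:.1}%", coverage(100, 10_000) * 100.0);
}
```

<p>Under these assumptions, the top 20 of 10,000 sites already cover over a third of all visits.</p>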
<h2>Make Tweaks Easy</h2>
<p>When an edge case (which is almost always a false-positive) appears in Harper's core algorithm, it's usually in a pretty obvious spot. This is because we associate each lint output with a specific, easy-to-find module in the core code.</p>
<p>In most cases, this is not verbose, specialized Rust code. Rather, it is an LLM-friendly DSL that is legible to most people with even beginner-level programming experience. This combination of easy-to-read and easy-to-edit makes contributions from the community (regarding edge cases) commonplace.</p>
<p>I aim to replicate this success in the Chrome extension's read/write capabilities by carefully documenting the architecture and working with third-parties to make the contributing process clearer and easier.</p>
<h2>It Is an Art</h2>
<p>There is a reason I call this an art. Design decisions in exception-tolerant systems stem from these two simple ideas, but they grow from an intuition developed by their maintainers. I'm not exactly sure how the Chrome extension will grow to handle edge cases. I'll be sure to come back here and detail them when I do.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[On Linkarzu]]></title>
            <link>https://elijahpotter.dev/articles/on_linkarzu</link>
            <guid>https://elijahpotter.dev/articles/on_linkarzu</guid>
            <pubDate>Thu, 17 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>On Linkarzu's YouTube</h1>
<p>I consider myself an avid member of the Neovim community.
As a whole, they have a great pulse on what a good writing experience feels like.
Over time, this community has grown into a real force in the software engineering field.</p>
<p>Linkarzu has been a constant presence in this ecosystem for a while, giving a voice to the hidden
maintainers that make Neovim great.
I am honored to be counted among the guests he has hosted on his channel.</p>
<p>We talked at length about <a href="https://writewithharper.com">Harper</a>, open source maintenance, and the tooling we use in our development workflows.</p>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/l9D7M1gIY8I?si=dXhZKospDwyb6zmU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>]]></description>
            <content:encoded><![CDATA[<h1>On Linkarzu's YouTube</h1>
<p>I consider myself an avid member of the Neovim community.
As a whole, they have a great pulse on what a good writing experience feels like.
Over time, this community has grown into a real force in the software engineering field.</p>
<p>Linkarzu has been a constant presence in this ecosystem for a while, giving a voice to the hidden
maintainers that make Neovim great.
I am honored to be counted among the guests he has hosted on his channel.</p>
<p>We talked at length about <a href="https://writewithharper.com">Harper</a>, open source maintenance, and the tooling we use in our development workflows.</p>
<iframe width="100%" height="400" src="https://www.youtube.com/embed/l9D7M1gIY8I?si=dXhZKospDwyb6zmU" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Footguns of the Rust Webassembly Target]]></title>
            <link>https://elijahpotter.dev/articles/footguns_of_the_rust_webassembly_target</link>
            <guid>https://elijahpotter.dev/articles/footguns_of_the_rust_webassembly_target</guid>
            <pubDate>Tue, 08 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Footguns of the Rust WebAssembly Target</h1>
<img src="/images/rail.webp" alt="A trail in Golden, Colorado">
<p>WebAssembly—even after several years of standardization—is still a nascent technology.</p>
<p>I've been working with Rust and WebAssembly for nearly four years now.
This post is intended to be a distillate of that experience, formatted for developers who are interested in publishing WebAssembly code to npm.
Specifically, these are the footguns I've personally encountered while working on <a href="https://writewithharper.com/docs/harperjs/introduction"><code>harper.js</code></a>, a WebAssembly-powered package for grammar checking at the edge.
This page should be valuable if you are even speculating on the possibility of using WebAssembly in your codebase.</p>
<hr>
<h2>1. Exposing Only Synchronous Functions</h2>
<p>The Harper package exposes one interface that captures all of its WebAssembly interactions: the <strong>Linter</strong>. This is an object that handles downloading and compiling the Harper WebAssembly module, as well as invoking functions in it. Every function returns a <code>Promise</code>. There are several good reasons for that:</p>
<ul>
<li>WebAssembly modules larger than 4 kilobytes must be instantiated <strong>asynchronously</strong> to avoid blocking the event loop during download or compilation. This is a technical limitation that cannot be avoided with clever logic.</li>
<li>If most functions are asynchronous, you can centralize computation and caching into a single instance of the WebAssembly module, hiding the complexity of instantiation and making caches easier to build.</li>
<li>If your problem domain can be computationally intense, it might be prudent to offload jobs onto a <strong>web worker</strong>, which is easier if every function of your facade is asynchronous.</li>
<li>It will be easier for both you and your users to expose any important function as asynchronous out of the gate.</li>
</ul>
<h2>2. Assuming Rust Can Trivially Do IO</h2>
<p>Whatever you do: avoid assuming that Rust libraries (like <code>reqwest</code> or <code>rand</code>) will be able to perform IO without some work on your end. WebAssembly alone is <strong>not capable of IO</strong>, which means any function in that category will require some amount of JavaScript to work properly.</p>
<p>Rather than leaving that up to the Rust toolchain to figure out, save yourself the headache and <strong>inject the necessary JavaScript functions</strong> directly into the WebAssembly module by passing them through <code>wasm_bindgen</code>.</p>
<hr>
<h2>3. Just Inline the WebAssembly Module</h2>
<p><strong>Obsidian plugins</strong>, for example, must be composed of exactly one JavaScript file, which means everything must be inlined. In the interest of keeping the bundle size small, it's much easier for the package developer (of <code>harper.js</code>) to set up inlining than for the plugin developer.</p>
<p>On the other side of the spectrum, <strong>Chrome's Manifest V3</strong> disallows WebAssembly from being loaded inline.</p>
<p>If you plan for your package to be consumed by a variety of applications, know that it will also be consumed by a variety of <strong>bundlers</strong>. Both bundlers and applications are pretty inconsistent with their inlining and tree-shaking behavior. To avoid problems, you should provide <strong>two versions</strong> of your package:</p>
<ul>
<li>One where your WebAssembly module is already inlined.</li>
<li>One where it isn't.</li>
</ul>
<h2>Wrapping It Up</h2>
<p>Harper’s problem domain is not your problem domain.
We have to integrate with a variety of unique applications, which means we must keep our system flexible.
That may not be the case for you, which may mean these footguns do not apply.
If you have any questions about any other problems the Harper project might have faced, let me know.</p>]]></description>
            <content:encoded><![CDATA[<h1>Footguns of the Rust WebAssembly Target</h1>
<img src="/images/rail.webp" alt="A trail in Golden, Colorado">
<p>WebAssembly—even after several years of standardization—is still a nascent technology.</p>
<p>I've been working with Rust and WebAssembly for nearly four years now.
This post is intended to be a distillate of that experience, formatted for developers who are interested in publishing WebAssembly code to npm.
Specifically, these are the footguns I've personally encountered while working on <a href="https://writewithharper.com/docs/harperjs/introduction"><code>harper.js</code></a>, a WebAssembly-powered package for grammar checking at the edge.
This page should be valuable if you are even speculating on the possibility of using WebAssembly in your codebase.</p>
<hr>
<h2>1. Exposing Only Synchronous Functions</h2>
<p>The Harper package exposes one interface that captures all of its WebAssembly interactions: the <strong>Linter</strong>. This is an object that handles downloading and compiling the Harper WebAssembly module, as well as invoking functions in it. Every function returns a <code>Promise</code>. There are several good reasons for that:</p>
<ul>
<li>WebAssembly modules larger than 4 kilobytes must be instantiated <strong>asynchronously</strong> to avoid blocking the event loop during download or compilation. This is a technical limitation that cannot be avoided with clever logic.</li>
<li>If most functions are asynchronous, you can centralize computation and caching into a single instance of the WebAssembly module, hiding the complexity of instantiation and making caches easier to build.</li>
<li>If your problem domain can be computationally intense, it might be prudent to offload jobs onto a <strong>web worker</strong>, which is easier if every function of your facade is asynchronous.</li>
<li>It will be easier for both you and your users to expose any important function as asynchronous out of the gate.</li>
</ul>
<h2>2. Assuming Rust Can Trivially Do IO</h2>
<p>Whatever you do: avoid assuming that Rust libraries (like <code>reqwest</code> or <code>rand</code>) will be able to perform IO without some work on your end. WebAssembly alone is <strong>not capable of IO</strong>, which means any function in that category will require some amount of JavaScript to work properly.</p>
<p>Rather than leaving that up to the Rust toolchain to figure out, save yourself the headache and <strong>inject the necessary JavaScript functions</strong> directly into the WebAssembly module by passing them through <code>wasm_bindgen</code>.</p>
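<p>The underlying principle—hand the module its host capabilities explicitly rather than assuming the runtime provides them—can be sketched in plain Rust. This is an analogy for how JavaScript functions get injected through <code>wasm_bindgen</code>, not the actual bindings:</p>

```rust
/// Illustrative module that needs to fetch text but cannot do IO itself.
/// The host injects the capability as a function, mirroring how JavaScript
/// functions are passed into a WebAssembly module.
struct Module<F: Fn(&str) -> String> {
    fetch: F,
}

impl<F: Fn(&str) -> String> Module<F> {
    fn new(fetch: F) -> Self {
        Module { fetch }
    }

    /// Uses the injected capability instead of performing IO directly.
    fn word_count(&self, url: &str) -> usize {
        (self.fetch)(url).split_whitespace().count()
    }
}

fn main() {
    // The "host" supplies a stub fetch implementation.
    let module = Module::new(|_url: &str| "hello from the host".to_string());
    assert_eq!(module.word_count("https://example.com"), 4);
}
```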
<hr>
<h2>3. Just Inline the WebAssembly Module</h2>
<p><strong>Obsidian plugins</strong>, for example, must be composed of exactly one JavaScript file, which means everything must be inlined. In the interest of keeping the bundle size small, it's much easier for the package developer (of <code>harper.js</code>) to set up inlining than for the plugin developer.</p>
<p>On the other side of the spectrum, <strong>Chrome's Manifest V3</strong> disallows WebAssembly from being loaded inline.</p>
<p>If you plan for your package to be consumed by a variety of applications, know that it will also be consumed by a variety of <strong>bundlers</strong>. Both bundlers and applications are pretty inconsistent with their inlining and tree-shaking behavior. To avoid problems, you should provide <strong>two versions</strong> of your package:</p>
<ul>
<li>One where your WebAssembly module is already inlined.</li>
<li>One where it isn't.</li>
</ul>
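<p>One hypothetical way to ship both variants from a single npm package is conditional entry points. The paths and values below are purely illustrative, not <code>harper.js</code>'s actual manifest:</p>

```json
{
  "name": "example-wasm-package",
  "exports": {
    ".": "./dist/default/index.js",
    "./inlined": "./dist/inlined/index.js"
  }
}
```

<p>Consumers with strict bundling constraints can then opt into the inlined build explicitly, while everyone else gets the smaller default.</p>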
<h2>Wrapping It Up</h2>
<p>Harper’s problem domain is not your problem domain.
We have to integrate with a variety of unique applications, which means we must keep our system flexible.
That may not be the case for you, which may mean these footguns do not apply.
If you have any questions about any other problems the Harper project might have faced, let me know.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/rail.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The Books I Read in February and March 2025]]></title>
            <link>https://elijahpotter.dev/articles/the_books_i_read_in_february_and_march_2025</link>
            <guid>https://elijahpotter.dev/articles/the_books_i_read_in_february_and_march_2025</guid>
            <pubDate>Sat, 05 Apr 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Books I Read in February and March 2025</h1>
<p>In a world containing TikTok, YouTube and a vast array of other apps that profit off human attention, it takes effort to place our focus on things that truly benefit our minds.</p>
<p>At least, that is what I would say on LinkedIn as an explanation for why I read.
In reality: <strong>I simply enjoy it</strong>.</p>
<p>Each month I go back and summarize what I've learned from the books I've read.
In doing so, I've found that I get both deeper insights and greater joy from the literature.</p>
<h3>Wait! What's Up With February?</h3>
<p>While I only finished one book in February, I started reading several others that I ended up finishing in March.
It only felt right to combine the two months into a single post.</p>
<h2><em>The Dinner</em> by Herman Koch</h2>
<p>Someone convinced me to read this book based on its simple premise: the entire novel is an account of a single dinner.
I was skeptical.
It is a big book.
How could they possibly fit that much plot into a single dinner?</p>
<p>The answer: Koch drip-feeds you information about the eclectic characters in attendance.
They gain an additional dimension that you couldn't achieve in another, more action-packed book.</p>
<p>I won't spoil anything here, but the ending was a complete surprise to me.
I did <em>not</em> see that coming.</p>
<h2><em>Gold</em> by Isaac Asimov</h2>
<p>I remember reading <em>Gold</em> for the first time on a camping trip to the <em>Great Sand Dunes National Park</em>.
I was captivated by the dynamic short stories contained within.
At the time I believed them to be pure science fiction.</p>
<p>Reading <em>Gold</em> for the second time revealed how wrong I was.
More than one of the fictional ideas Asimov presents in the collection of short stories has since come to reality.</p>
<h2><em>My Grandma Asked Me to Tell You She's Sorry</em> by Fredrik Backman</h2>
<p>I'll admit: this novel was challenging to finish.</p>
<p><strong>Pros:</strong> the main character (an almost-eight-year-old girl) is expertly conveyed in the writing.
Backman makes clever use of run-on sentences to convey frustration, which is a literary device I've never seen before.
Further, the character has an unusual degree of insight into everyday aspects of life.
I found myself appreciating <em>my</em> every day more by placing myself in her shoes.</p>
<p><strong>Cons:</strong> it takes a long time for the insights to start coming.
An excessive degree of unimportant information is thrown at you right out of the gate.
I had difficulty maintaining my focus until the second half of the book (when stuff starts heating up).</p>
<h2><em>Lessons in Chemistry</em> by Bonnie Garmus</h2>
<p>Garmus knocked it out of the park for her first novel.
It follows Elizabeth Zott, who, in all respects, is the person I want to be when I grow up.</p>
<p><em>Lessons in Chemistry</em> is a testament to the value of subtraction.
The original manuscript of this book was about twice as long as the final published version.
The author was forced to condense it down after being rejected by several publishers.
The effort shows.</p>
<p>I cannot capture the novel's elegance of prose, creativity in fiction, or importance in morality here.
<strong>You need to read this book</strong>.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Books I Read in February and March 2025</h1>
<p>In a world containing TikTok, YouTube and a vast array of other apps that profit off human attention, it takes effort to place our focus on things that truly benefit our minds.</p>
<p>At least, that is what I would say on LinkedIn as an explanation for why I read.
In reality: <strong>I simply enjoy it</strong>.</p>
<p>Each month I go back and summarize what I've learned from the books I've read.
In doing so, I've found that I get both deeper insights and greater joy from the literature.</p>
<h3>Wait! What's Up With February?</h3>
<p>While I only finished one book in February, I started reading several others that I ended up finishing in March.
It only felt right to combine the two months into a single post.</p>
<h2><em>The Dinner</em> by Herman Koch</h2>
<p>Someone convinced me to read this book based on its simple premise: the entire novel is an account of a single dinner.
I was skeptical.
It is a big book.
How could they possibly fit that much plot into a single dinner?</p>
<p>The answer: Koch drip-feeds you information about the eclectic characters in attendance.
They gain an additional dimension that you couldn't achieve in another, more action-packed book.</p>
<p>I won't spoil anything here, but the ending was a complete surprise to me.
I did <em>not</em> see that coming.</p>
<h2><em>Gold</em> by Isaac Asimov</h2>
<p>I remember reading <em>Gold</em> for the first time on a camping trip to the <em>Great Sand Dunes National Park</em>.
I was captivated by the dynamic short stories contained within.
At the time I believed them to be pure science fiction.</p>
<p>Reading <em>Gold</em> for the second time revealed how wrong I was.
More than one of the fictional ideas Asimov presents in the collection of short stories has since come to reality.</p>
<h2><em>My Grandma Asked Me to Tell You She's Sorry</em> by Fredrik Backman</h2>
<p>I'll admit: this novel was challenging to finish.</p>
<p><strong>Pros:</strong> the main character (an almost-eight-year-old girl) is expertly conveyed in the writing.
Backman makes clever use of run-on sentences to convey frustration, which is a literary device I've never seen before.
Further, the character has an unusual degree of insight into everyday aspects of life.
I found myself appreciating <em>my</em> every day more by placing myself in her shoes.</p>
<p><strong>Cons:</strong> it takes a long time for the insights to start coming.
An excessive degree of unimportant information is thrown at you right out of the gate.
I had difficulty maintaining my focus until the second half of the book (when stuff starts heating up).</p>
<h2><em>Lessons in Chemistry</em> by Bonnie Garmus</h2>
<p>Garmus knocked it out of the park for her first novel.
It follows Elizabeth Zott, who, in all respects, is the person I want to be when I grow up.</p>
<p><em>Lessons in Chemistry</em> is a testament to the value of subtraction.
The original manuscript of this book was about twice as long as the final published version.
The author was forced to condense it down after being rejected by several publishers.
The effort shows.</p>
<p>I cannot capture the novel's elegance of prose, creativity in fiction, or importance in morality here.
<strong>You need to read this book</strong>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The One Hard Thing]]></title>
            <link>https://elijahpotter.dev/articles/the_one_hard_thing</link>
            <guid>https://elijahpotter.dev/articles/the_one_hard_thing</guid>
            <pubDate>Thu, 27 Mar 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The One Hard Thing</h1>
<p>People (myself included) love to make cascading lists of tasks.
Actions, each of which is needed to improve a product, release a feature, or manage people.
These lists tend to explode in size, as more is asked of you.</p>
<p>There's one problem: I have only eight hours in a working day.
With so little time, how can I be expected to maintain existing projects (bug fixes, etc.) while consistently delivering new functionality and keeping a fast iterative loop?</p>
<h2>Insight</h2>
<p>I've had two key insights that have led to some tremendous personal productivity gains.</p>
<p><strong>First:</strong> I've observed that the difficulty of my tasks tends to follow a <a href="https://en.wikipedia.org/wiki/Pareto_distribution">Pareto distribution</a>.
A small number of tasks are shockingly difficult to solve, while a much larger set boils down to communication.</p>
<p><strong>Second:</strong> the morning is my most productive time of day.
For some reason, I'm able to tackle problems with an otherwise unusual mental clarity.
I suspect it's something to do with the natural human circadian rhythm or digestion.</p>
<h2>Leveraging the Pattern</h2>
<p>Motivated by these two observations, I've developed a habit: I dedicate each morning to its own <strong>difficult problem</strong>.
This is a single task that is chosen specifically to stretch my limits.
I call this my daily <strong><em>one hard thing</em></strong>.</p>
<p>Previous examples:</p>
<ul>
<li>Developing the first version of <a href="https://writewithharper.com/docs/integrations/language-server"><code>harper-ls</code></a>, which is now used by thousands of developers daily.</li>
<li>Building the first 80% of the <a href="https://writewithharper.com/docs/integrations/obsidian">Harper Obsidian plugin</a>, which is also widely used.</li>
<li>Grokking how <a href="https://github.com/Automattic/jetpack">Jetpack's Write Brief with AI</a> feature works (before I joined Automattic).</li>
<li>Compiling Harper to WebAssembly and embedding it somewhere no one has put a grammar checker before: <em>inside the web browser</em>.</li>
</ul>
<h2>Think About It</h2>
<p>I implore you: think about how you work.
How is your energy affected by the things you do in the day-to-day?
If you enjoy your eight hours, maybe the rest will be better too.</p>]]></description>
            <content:encoded><![CDATA[<h1>The One Hard Thing</h1>
<p>People (myself included) love to make cascading lists of tasks.
Actions, each of which is needed to improve a product, release a feature, or manage people.
These lists tend to explode in size, as more is asked of you.</p>
<p>There's one problem: I have only eight hours in a working day.
With so little time, how can I be expected to maintain existing projects (bug fixes, etc.) while consistently delivering new functionality and keeping a fast iterative loop?</p>
<h2>Insight</h2>
<p>I've had two key insights that have led to some tremendous personal productivity gains.</p>
<p><strong>First:</strong> I've observed that the difficulty of my tasks tends to follow a <a href="https://en.wikipedia.org/wiki/Pareto_distribution">Pareto distribution</a>.
A small number of tasks are shockingly difficult to solve, while a much larger set boils down to communication.</p>
<p><strong>Second:</strong> the morning is my most productive time of day.
For some reason, I'm able to tackle problems with an otherwise unusual mental clarity.
I suspect it's something to do with the natural human circadian rhythm or digestion.</p>
<h2>Leveraging the Pattern</h2>
<p>Motivated by these two observations, I've developed a habit: I dedicate each morning to its own <strong>difficult problem</strong>.
This is a single task that is chosen specifically to stretch my limits.
I call this my daily <strong><em>one hard thing</em></strong>.</p>
<p>Previous examples:</p>
<ul>
<li>Developing the first version of <a href="https://writewithharper.com/docs/integrations/language-server"><code>harper-ls</code></a>, which is now used by thousands of developers daily.</li>
<li>Building the first 80% of the <a href="https://writewithharper.com/docs/integrations/obsidian">Harper Obsidian plugin</a>, which is also widely used.</li>
<li>Grokking how <a href="https://github.com/Automattic/jetpack">Jetpack's Write Brief with AI</a> feature works (before I joined Automattic).</li>
<li>Compiling Harper to WebAssembly and embedding it somewhere no one has put a grammar checker before: <em>inside the web browser</em>.</li>
</ul>
<h2>Think About It</h2>
<p>I implore you: think about how you work.
How is your energy affected by the things you do in the day-to-day?
If you enjoy your eight hours, maybe the rest will be better too.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[3 Traits of Good Test Suites]]></title>
            <link>https://elijahpotter.dev/articles/3_traits_of_good_test_suites</link>
            <guid>https://elijahpotter.dev/articles/3_traits_of_good_test_suites</guid>
            <pubDate>Sun, 23 Mar 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>3 Traits of Good Test Suites</h1>
<p>As evidenced by my previous posts on <a href="https://elijahpotter.dev/articles/LLM_assisted_fuzzing">LLM-Assisted Fuzzing</a>, I've been dedicating a lot of my mental bandwidth to maintaining a low false-positive rate while we improve <a href="https://github.com/automattic/harper">Harper's rule coverage</a>. Part of that comes from fuzzing and dogfooding, some from statistics, but the first line of defense will continue to be unit and integration testing. This past week particularly, I've been reading up on how other <a href="https://github.com/rust-lang/rust-analyzer">big linting programs</a> approach this problem.</p>
<h2>1. Test Features, Not Code</h2>
<p>I often ask myself: am I spending more time thinking or talking about the thing, or am I spending more time doing the thing? I've personally seen how projects fall into decline because their leaders are more interested in planning than doing.</p>
<p>In the context of software testing, this mantra is transformed into "test features, not code." To my eye, good code is flexible and self-explanatory. Tests that hook deeply into application or library internals make code less flexible and harder to read.</p>
<p>I especially like Alex Kladov's heuristic for this: the neural network test.</p>
<blockquote>
<p>"Can you re-use the test suite if your entire software is replaced with an opaque neural network?" - Alex Kladov</p>
</blockquote>
<p>It's not a question of whether a neural network would pass the test suite, only whether the test suite could work for it. If the answer is no, the tests are likely testing code, not features.</p>
<h2>2. Performance</h2>
<p>The speed at which you can build and run tests (unit, static, integration, etc.) is a force-multiplier for everything else. You can validate ideas sooner, run CI faster, and get contributors on-boarded in less time.</p>
<p>Our goal of being fast at runtime dovetails nicely with this, so it's something Harper already does quite well. Moving forward, we need to make sure that we don't rely on any kind of IO in our tests, since that continues to be the slowest part of most Harper integrations.</p>
<h2>3. Good Assertions</h2>
<p>We can simplify programs like Harper down into a single function which consumes text and returns a list of observed problems.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">fn</span> <span class="hljs-title function_">harper</span>(text: <span class="hljs-type">String</span>) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;Lint> {
    <span class="hljs-comment">// Implementation details...</span>
}
</code></pre>
<p>Most testing we are interested in can be done with assertion functions that declare what qualities the output should have for a specific input.</p>
<p>For example, we have a function called <code>assert_suggestion_result</code>, which runs a grammatically incorrect string through Harper, applies the first resulting suggestion, and checks whether the edited string matches a given value.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// An example of a test that uses assert_suggestion_result</span>
<span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">catches_less_then</span>() {
    <span class="hljs-title function_ invoke__">assert_suggestion_result</span>(
        <span class="hljs-string">"I eat less then you."</span>,
        ThenThan::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-string">"I eat less than you."</span>,
    );
}
</code></pre>
<p>It's also vital that these assertions show good, readable error messages when they fail. Each time I've improved their logs, I get unprompted positive feedback from contributors.</p>
<p>Moving forward, I'd like to create a more diverse array of assertions like this, as well as better document their use. A lot of the current back-and-forth on rule contributions is related to this.</p>
<h2>Wait! I Disagree</h2>
<p>I hope someone does. Good test suites are something I'm continuing <a href="https://automattic.com/creed/never-stop-learning/">to learn how to build</a>. I understand that a lot of what I've said here doesn't apply to other kinds of applications or codebases. If there's nuance I'm not covering here, let me know!</p>]]></description>
            <content:encoded><![CDATA[<h1>3 Traits of Good Test Suites</h1>
<p>As evidenced by my previous posts on <a href="https://elijahpotter.dev/articles/LLM_assisted_fuzzing">LLM-Assisted Fuzzing</a>, I've been dedicating a lot of my mental bandwidth to maintaining a low false-positive rate while we improve <a href="https://github.com/automattic/harper">Harper's rule coverage</a>. Part of that comes from fuzzing and dogfooding, some from statistics, but the first line of defense will continue to be unit and integration testing. This past week particularly, I've been reading up on how other <a href="https://github.com/rust-lang/rust-analyzer">big linting programs</a> approach this problem.</p>
<h2>1. Test Features, Not Code</h2>
<p>I often ask myself: am I spending more time thinking or talking about the thing, or am I spending more time doing the thing? I've personally seen how projects fall into decline because their leaders are more interested in planning than doing.</p>
<p>In the context of software testing, this mantra is transformed into "test features, not code." To my eye, good code is flexible and self-explanatory. Tests that hook deeply into application or library internals make code less flexible and harder to read.</p>
<p>I especially like Alex Kladov's heuristic for this: the neural network test.</p>
<blockquote>
<p>"Can you re-use the test suite if your entire software is replaced with an opaque neural network?" - Alex Kladov</p>
</blockquote>
<p>It's not a question of whether a neural network would pass the test suite, only whether the test suite could work for it. If the answer is no, the tests are likely testing code, not features.</p>
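<p>To make this concrete, here is a hypothetical illustration (not from Harper's actual suite): the test below only touches the public input/output boundary, so it would keep working even if the implementation behind <code>check</code> were swapped for an opaque neural network.</p>

```rust
// Hypothetical stand-in for a linter's public API. The test below never
// inspects internals, so any implementation with the same behavior passes.
fn check(text: &str) -> Vec<String> {
    // Toy implementation: flag the doubled article "a a".
    if text.contains(" a a ") {
        vec!["repeated word".to_string()]
    } else {
        Vec::new()
    }
}

fn main() {
    // Feature-level assertions: input in, expected lints out.
    assert!(check("Not a care in the world.").is_empty());
    assert_eq!(
        check("I have a a care in the world."),
        vec!["repeated word".to_string()]
    );
    println!("ok");
}
```

<p>A test that instead asserted on, say, an internal token stream would fail the neural-network question immediately.</p>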
<h2>2. Performance</h2>
<p>The speed at which you can build and run tests (unit, static, integration, etc.) is a force-multiplier for everything else. You can validate ideas sooner, run CI faster, and get contributors onboarded in less time.</p>
<p>Our goal to be fast at runtime dovetails nicely with this, so it's something Harper already does quite well. Moving forward, we need to make sure that we don't rely on any kind of IO in our tests, since that continues to be the slowest part of most Harper integrations.</p>
<h2>3. Good Assertions</h2>
<p>We can simplify programs like Harper down into a single function which consumes text and returns a list of observed problems.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">fn</span> <span class="hljs-title function_">harper</span>(text: <span class="hljs-type">String</span>) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;Lint> {
    <span class="hljs-comment">// Implementation details...</span>
}
</code></pre>
<p>Most of the testing we're interested in can be done with assertion functions that declare what qualities the output should have for a given input.</p>
<p>For example, we have a function called <code>assert_suggestion_result</code>, which runs a grammatically incorrect string through Harper, applies the first resulting suggestion, and checks whether the edited string matches a given value.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">/// An example of a test that uses assert_suggestion_result</span>
<span class="hljs-meta">#[test]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">catches_less_then</span>() {
    <span class="hljs-title function_ invoke__">assert_suggestion_result</span>(
        <span class="hljs-string">"I eat less then you."</span>,
        ThenThan::<span class="hljs-title function_ invoke__">default</span>(),
        <span class="hljs-string">"I eat less than you."</span>,
    );
}
</code></pre>
<p>It's also vital that these assertions show good, readable error messages when they fail. Each time I've improved their logs, I've gotten unprompted positive feedback from contributors.</p>
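<p>As a sketch of what such an assertion can look like with a readable failure message (this is not Harper's actual implementation; the <code>lint</code> function below is a toy stand-in that only knows one rule):</p>

```rust
// Each lint is (start, end, replacement). A toy stand-in for a real linter
// that only recognizes the "less then" error.
fn lint(text: &str) -> Vec<(usize, usize, String)> {
    match text.find("less then") {
        Some(i) => vec![(i, i + "less then".len(), "less than".to_string())],
        None => Vec::new(),
    }
}

/// Apply the first suggestion, then compare against the expected result,
/// panicking with a descriptive message on failure.
fn assert_suggestion_result(input: &str, expected: &str) {
    let lints = lint(input);
    let (start, end, replacement) = match lints.first() {
        Some(l) => l.clone(),
        None => panic!("no lints produced for {input:?}"),
    };
    let edited = format!("{}{}{}", &input[..start], replacement, &input[end..]);
    assert!(
        edited == expected,
        "after applying the first suggestion, got {edited:?}, expected {expected:?} (input: {input:?})"
    );
}

fn main() {
    assert_suggestion_result("I eat less then you.", "I eat less than you.");
    println!("ok");
}
```

<p>The point is the panic messages: when a contributor's rule misfires, they see the input, the edited output, and the expectation side by side, rather than a bare boolean failure.</p>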
<p>Moving forward, I'd like to create a more diverse array of assertions like this, as well as better document their use. A lot of the current back-and-forth on rule contributions is related to this.</p>
<h2>Wait! I Disagree</h2>
<p>I hope someone does. Good test suites are something I'm continuing <a href="https://automattic.com/creed/never-stop-learning/">to learn how to build</a>. I understand that a lot of what I've said here doesn't apply to other kinds of applications or codebases. If there's nuance I'm not covering here, let me know!</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[LLM Assisted Fuzzing]]></title>
            <link>https://elijahpotter.dev/articles/LLM_assisted_fuzzing</link>
            <guid>https://elijahpotter.dev/articles/LLM_assisted_fuzzing</guid>
            <pubDate>Fri, 21 Mar 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>LLM-Assisted Fuzzing: A New Approach to False-Positives</h1>
<blockquote>
<p><strong>Preface</strong>: this post was actually written on February 24th, despite being published today.</p>
</blockquote>
<p>I'd like to do some reflection on how we can improve <a href="https://github.com/automattic/Harper">Harper's</a> output. It is fundamentally a process problem. For context, let us go through a contrived example.</p>
<img src="/images/sumner_screenshot_error.png" alt="Sumner Evans reported a bug to me while writing a tech talk about CI and CD">
<ol>
<li>Somebody (usually me) discovers an error in their writing that Harper did not catch. This is a concrete string like "I have a a care in the world".</li>
<li>I look for other forms of the error to see if I can generalize it to more cases. Here, we see an indefinite article repeating itself. We could just look for the exact string "a a" and call it a day, but we can cover more cases by looking for &lt;indefinite article&gt; &lt;indefinite article&gt;. In fact, we can generalize it even further. In this case, writers want to avoid repetitions of any word that is not a homograph.</li>
<li>Once the code is written that defines the rule, I push it out, and wait to hear from users if any false-positives arise.</li>
</ol>
<p>This is a flawed system, particularly that last step. I've found something a bit better.</p>
<p>I've been experimenting more with LLM tooling, particularly Ollama. With it, I've set up a useful system for identifying false-positives before pushing changes out to users. I like to call it "LLM Assisted Fuzzing".</p>
<h2>LLM-Assisted Fuzzing</h2>
<p>Fuzzing is a common practice wherein engineers rapidly and continuously push random data through their system to deterministically check if it is behaving correctly (i.e. not crashing). If a particular batch of random data causes an issue, it is elevated to an engineer.</p>
<p>Harper as a program stands in a somewhat unique position: it analyzes natural language to identify grammatical errors. LLMs exist as a direct complement to it: they generate "natural" language. Do you see where I am going with this?</p>
<p>Let's be clear: language models fail to produce grammatically correct text a good amount of the time. That's exactly what makes their output a useful proxy for our users' written work. If we can implement a feature that reliably corrects an LLM's output without false-positives, we can reasonably expect it to work on our users' text.</p>
<h2>My Initial Attempt at LLM-Assisted Fuzzing</h2>
<p>I started working on this Friday morning, and finally have what I would call a "first draft" of an LLM-assisted fuzzing system for identifying false-positives. Here's how it works:</p>
<p>I start a local Ollama server, preloaded with several models (which I'll expand on more below).</p>
<p>I run a bash script that repeatedly feeds the same prompt into the models <a href="https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts">using a command-line client for Ollama</a>. This gives me a large dataset of responses to the prompt. I'm not looking for the semantic content of the response. I'm interested in extensive coverage of whichever linguistic domain I've defined in the prompt. I can run this as long as I like before going on to the next step.</p>
<p>Once I have a large dataset, a second script runs Harper over each response. If it finds an error, the script opens the response in <a href="https://writewithharper.com/docs/about">the Harper editor</a> for further inspection.</p>
<p>The biggest issue with this workflow right now: regardless of my prompts, the LLMs continue to emit actual grammatical errors. This means that when I go through this process, I get at least as many true-positives as false-positives. It also does nothing for false-negatives.</p>
<h2>Tweaks</h2>
<p>There are a couple parameters here that I'm going to continue to tweak as the workflow becomes more mature. So far, I've been using <code>deepseek-r1:32b</code> and <code>mistral-small</code>. DeepSeek tends to use more colorful language than Mistral, which runs much faster on my machine.</p>
<p>I've also been iterating on the prompt as I go. Here's the latest version:</p>
<pre><code class="hljs language-plaintext">Please write a long, drawn-out essay on a topic of your choosing.
Write out the essay multiple times before giving me your final draft.
Your final draft should be grammatically perfect, clearly demonstrating that you put time and effort into your submission.

I repeat: I do not want to see anything other than your final drafts.
Do not give me any intermediate work.

Furthermore, do not use any characters or words other than English. 
I do not want Chinese or anything else.

I have all the time in the world. I'm perfectly fine with waiting a little longer if it means you give me something you are proud of.
</code></pre>
<p>When I move on to tackling false-negatives with this, I'll be able to prompt the model to give me specific kinds of errors, which makes it significantly more useful than trying to track down existing work.</p>]]></description>
            <content:encoded><![CDATA[<h1>LLM-Assisted Fuzzing: A New Approach to False-Positives</h1>
<blockquote>
<p><strong>Preface</strong>: this post was actually written on February 24th, despite being published today.</p>
</blockquote>
<p>I'd like to do some reflection on how we can improve <a href="https://github.com/automattic/Harper">Harper's</a> output. It is fundamentally a process problem. For context, let us go through a contrived example.</p>
<img src="/images/sumner_screenshot_error.png" alt="Sumner Evans reported a bug to me while writing a tech talk about CI and CD">
<ol>
<li>Somebody (usually me) discovers an error in their writing that Harper did not catch. This is a concrete string like "I have a a care in the world".</li>
<li>I look for other forms of the error to see if I can generalize it to more cases. Here, we see an indefinite article repeating itself. We could just look for the exact string "a a" and call it a day, but we can cover more cases by looking for &lt;indefinite article&gt; &lt;indefinite article&gt;. In fact, we can generalize it even further. In this case, writers want to avoid repetitions of any word that is not a homograph.</li>
<li>Once the code is written that defines the rule, I push it out, and wait to hear from users if any false-positives arise.</li>
</ol>
<p>This is a flawed system, particularly that last step. I've found something a bit better.</p>
<p>I've been experimenting more with LLM tooling, particularly Ollama. With it, I've set up a useful system for identifying false-positives before pushing changes out to users. I like to call it "LLM Assisted Fuzzing".</p>
<h2>LLM-Assisted Fuzzing</h2>
<p>Fuzzing is a common practice wherein engineers rapidly and continuously push random data through their system to deterministically check if it is behaving correctly (i.e. not crashing). If a particular batch of random data causes an issue, it is elevated to an engineer.</p>
<p>Harper as a program stands in a somewhat unique position: it analyzes natural language to identify grammatical errors. LLMs exist as a direct complement to it: they generate "natural" language. Do you see where I am going with this?</p>
<p>Let's be clear: language models fail to produce grammatically correct text a good amount of the time. That's exactly what makes their output a useful proxy for our users' written work. If we can implement a feature that reliably corrects an LLM's output without false-positives, we can reasonably expect it to work on our users' text.</p>
<h2>My Initial Attempt at LLM-Assisted Fuzzing</h2>
<p>I started working on this Friday morning, and finally have what I would call a "first draft" of an LLM-assisted fuzzing system for identifying false-positives. Here's how it works:</p>
<p>I start a local Ollama server, preloaded with several models (which I'll expand on more below).</p>
<p>I run a bash script that repeatedly feeds the same prompt into the models <a href="https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts">using a command-line client for Ollama</a>. This gives me a large dataset of responses to the prompt. I'm not looking for the semantic content of the response. I'm interested in extensive coverage of whichever linguistic domain I've defined in the prompt. I can run this as long as I like before going on to the next step.</p>
<p>Once I have a large dataset, a second script runs Harper over each response. If it finds an error, the script opens the response in <a href="https://writewithharper.com/docs/about">the Harper editor</a> for further inspection.</p>
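<p>Sketched as a pipeline, the two scripts look roughly like this. (Hedged heavily: <code>ofc</code> is my Ollama client, but <code>harper-cli</code> and both invocations are illustrative stand-ins, not the exact commands I run.)</p>

```shell
#!/bin/bash
# Step 1: collect many responses to the same prompt.
# Step 2: lint each response and surface the ones needing inspection.
outdir="responses"
mkdir -p "$outdir"

generate() {
  local n="$1" prompt="$2"
  for i in $(seq 1 "$n"); do
    # `ofc` prints the model's response to stdout.
    ofc "$prompt" > "$outdir/response_$i.txt"
  done
}

triage() {
  for f in "$outdir"/*.txt; do
    # Any output from the linter means the response deserves a closer look.
    if [ -n "$(harper-cli lint "$f")" ]; then
      echo "needs inspection: $f"
    fi
  done
}
```

<p>In the real workflow, the "needs inspection" step opens the response in the Harper editor rather than just printing a line.</p>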
<p>The biggest issue with this workflow right now: regardless of my prompts, the LLMs continue to emit actual grammatical errors. This means that when I go through this process, I get at least as many true-positives as false-positives. It also does nothing for false-negatives.</p>
<h2>Tweaks</h2>
<p>There are a couple parameters here that I'm going to continue to tweak as the workflow becomes more mature. So far, I've been using <code>deepseek-r1:32b</code> and <code>mistral-small</code>. DeepSeek tends to use more colorful language than Mistral, which runs much faster on my machine.</p>
<p>I've also been iterating on the prompt as I go. Here's the latest version:</p>
<pre><code class="hljs language-plaintext">Please write a long, drawn-out essay on a topic of your choosing.
Write out the essay multiple times before giving me your final draft.
Your final draft should be grammatically perfect, clearly demonstrating that you put time and effort into your submission.

I repeat: I do not want to see anything other than your final drafts.
Do not give me any intermediate work.

Furthermore, do not use any characters or words other than English. 
I do not want Chinese or anything else.

I have all the time in the world. I'm perfectly fine with waiting a little longer if it means you give me something you are proud of.
</code></pre>
<p>When I move on to tackling false-negatives with this, I'll be able to prompt the model to give me specific kinds of errors, which makes it significantly more useful than trying to track down existing work.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Harper Is in Cursor and Visual Studio Code]]></title>
            <link>https://elijahpotter.dev/articles/harper_is_in_cursor_and_visual_studio_code</link>
            <guid>https://elijahpotter.dev/articles/harper_is_in_cursor_and_visual_studio_code</guid>
            <pubDate>Wed, 19 Mar 2025 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Harper Is in Cursor and Visual Studio Code</h1>
<p>Harper is a grammar checker for developers. Its roots are in code editors like Neovim, Helix, Zed, and you guessed it—Visual Studio Code. I don't think a lot of my followers know this, so I thought I'd give it a little shout-out here.</p>
<p>Since Cursor and Windsurf are forks of Visual Studio Code, Harper is available for both <a href="https://marketplace.visualstudio.com/items?itemName=elijah-potter.harper">via the Visual Studio Marketplace</a>. Give us a good rating over there if you install the plugin and end up liking it!</p>
<img src="/images/harper_vscode_screenshot.png" alt="A screenshot of the Harper Visual Studio Code extension checking its own source code">
<h2>How Does It Work?</h2>
<p>Harper works similarly to a number of other extensions you may have installed. Like Pylance or ESLint, it runs in the background as <a href="https://microsoft.github.io/language-server-protocol/">a language server</a>. Each time you modify your document, it checks your comments for grammatical mistakes and typos, displaying them as errors or warnings (the severity is configurable).</p>
<h2>Does It Support My Programming Language?</h2>
<p>The Harper language server, thanks to contributions from the community, supports a pretty <a href="https://writewithharper.com/docs/integrations/language-server#Supported-Languages">wide range of programming and markup languages</a>. We've only recently added support for PHP, so if that's your thing, know you're in somewhat uncharted territory. If you find issues, <a href="https://github.com/Automattic/harper/issues">let us know</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>Harper Is in Cursor and Visual Studio Code</h1>
<p>Harper is a grammar checker for developers. Its roots are in code editors like Neovim, Helix, Zed, and you guessed it—Visual Studio Code. I don't think a lot of my followers know this, so I thought I'd give it a little shout-out here.</p>
<p>Since Cursor and Windsurf are forks of Visual Studio Code, Harper is available for both <a href="https://marketplace.visualstudio.com/items?itemName=elijah-potter.harper">via the Visual Studio Marketplace</a>. Give us a good rating over there if you install the plugin and end up liking it!</p>
<img src="/images/harper_vscode_screenshot.png" alt="A screenshot of the Harper Visual Studio Code extension checking its own source code">
<h2>How Does It Work?</h2>
<p>Harper works similarly to a number of other extensions you may have installed. Like Pylance or ESLint, it runs in the background as <a href="https://microsoft.github.io/language-server-protocol/">a language server</a>. Each time you modify your document, it checks your comments for grammatical mistakes and typos, displaying them as errors or warnings (the severity is configurable).</p>
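<p>For example, in Visual Studio Code the severity can be adjusted in <code>settings.json</code>. (The key below is my best recollection and offered only as an illustration; confirm it against the Harper section of the settings UI.)</p>

```json
{
  // Assumed setting name, for illustration only.
  "harper-ls.diagnosticSeverity": "warning"
}
```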
<h2>Does It Support My Programming Language?</h2>
<p>The Harper language server, thanks to contributions from the community, supports a pretty <a href="https://writewithharper.com/docs/integrations/language-server#Supported-Languages">wide range of programming and markup languages</a>. We've only recently added support for PHP, so if that's your thing, know you're in somewhat uncharted territory. If you find issues, <a href="https://github.com/Automattic/harper/issues">let us know</a>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Never Wait]]></title>
            <link>https://elijahpotter.dev/articles/never_wait</link>
            <guid>https://elijahpotter.dev/articles/never_wait</guid>
            <pubDate>Sat, 01 Mar 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Never Wait</h1>
<blockquote>
<p><em>Preface:</em> This post is specifically about Pull Requests for <a href="https://github.com/Automattic/harper">Harper</a>.
Read the contributor guidelines for a project before opening a PR.</p>
</blockquote>
<p>I get it.
Opening a pull request is an intimidating proposition.</p>
<p>It is non-trivial to put your blood, sweat, and (let's be honest) tears into code and put that onto the internet for the world to see.
I remember my first pull request: it wasn't pretty either.</p>
<p>But I want to highlight a couple reasons why I want potential contributors to open their pull requests as early as possible.</p>
<h2>Drafts Reduce Duplicate Work</h2>
<p>When a contributor to a project is assessing the viability or usefulness of some work they're preparing to do,
most look at existing Pull Requests first.
Even if you're still working on the patch, it's a great idea to open a "draft" PR so no one starts work that could go to waste.</p>
<p>Further, if you have a draft open whose CI is failing, a maintainer may see it and fix whatever the problem may be.
I find myself frequently browsing <a href="https://github.com/Automattic/harper/pulls">Harper's Pull Requests</a> looking for failing builds, since I'm often the best equipped to find the issue.</p>
<h2>It Helps With Debugging</h2>
<p>I try to make myself available to contributors in case they have any questions regarding the architecture of Harper.
I'm also game to help debug their code, partly because I am of the opinion that debugging is a skill which is best learned by example.</p>
<p>But it is hard for me to debug code if I don't have it.
If a draft PR has been opened, I can usually see the problem without even cloning the code, since our CI is so comprehensive.</p>
<h2>Conclusion</h2>
<p>So please don't wait; open that PR.
I am thrilled each time I see a name I don't recognize on a GitHub notification.</p>]]></description>
            <content:encoded><![CDATA[<h1>Never Wait</h1>
<blockquote>
<p><em>Preface:</em> This post is specifically about Pull Requests for <a href="https://github.com/Automattic/harper">Harper</a>.
Read the contributor guidelines for a project before opening a PR.</p>
</blockquote>
<p>I get it.
Opening a pull request is an intimidating proposition.</p>
<p>It is non-trivial to put your blood, sweat, and (let's be honest) tears into code and put that onto the internet for the world to see.
I remember my first pull request: it wasn't pretty either.</p>
<p>But I want to highlight a couple reasons why I want potential contributors to open their pull requests as early as possible.</p>
<h2>Drafts Reduce Duplicate Work</h2>
<p>When a contributor to a project is assessing the viability or usefulness of some work they're preparing to do,
most look at existing Pull Requests first.
Even if you're still working on the patch, it's a great idea to open a "draft" PR so no one starts work that could go to waste.</p>
<p>Further, if you have a draft open whose CI is failing, a maintainer may see it and fix whatever the problem may be.
I find myself frequently browsing <a href="https://github.com/Automattic/harper/pulls">Harper's Pull Requests</a> looking for failing builds, since I'm often the best equipped to find the issue.</p>
<h2>It Helps With Debugging</h2>
<p>I try to make myself available to contributors in case they have any questions regarding the architecture of Harper.
I'm also game to help debug their code, partly because I am of the opinion that debugging is a skill which is best learned by example.</p>
<p>But it is hard for me to debug code if I don't have it.
If a draft PR has been opened, I can usually see the problem without even cloning the code, since our CI is so comprehensive.</p>
<h2>Conclusion</h2>
<p>So please don't wait; open that PR.
I am thrilled each time I see a name I don't recognize on a GitHub notification.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Prompting Large Language Models in Bash Scripts]]></title>
            <link>https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts</link>
            <guid>https://elijahpotter.dev/articles/prompting_large_language_models_in_bash_scripts</guid>
            <pubDate>Wed, 26 Feb 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Prompting Large Language Models In Bash Scripts</h1>
<p>I've been experimenting with using LLMs locally for generating datasets to test <a href="https://writewithharper.com">Harper</a> against.
I might write a blog post about the technique (which I am grandiosely calling "LLM-assisted fuzzing"), but I'm going to make you wait.</p>
<p>I've written a little tool called <a href="https://github.com/elijah-potter/ofc"><code>ofc</code> that lets you insert Ollama into your bash scripts</a>.
I think it's pretty neat, since it (very easily) lets you do some pretty cool things.</p>
<p>For example, you can swap out the system prompt, so if you want to compare behavior across prompts, you can just toss it in a loop:</p>
<pre><code class="hljs language-bash"><span class="hljs-meta">#!/bin/bash</span>

subreddits=(<span class="hljs-string">"r/vscode"</span> <span class="hljs-string">"r/neovim"</span> <span class="hljs-string">"r/wallstreetbets"</span>)

<span class="hljs-comment"># Loop over each item in the list</span>
<span class="hljs-keyword">for</span> subreddit <span class="hljs-keyword">in</span> <span class="hljs-string">"<span class="hljs-variable">${subreddits[@]}</span>"</span>; <span class="hljs-keyword">do</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"++++++++ BEGIN <span class="hljs-variable">$subreddit</span> ++++++++"</span>
  ofc --system-prompt <span class="hljs-string">"Assume the persona of a commenter of <span class="hljs-variable">$subreddit</span>"</span> <span class="hljs-string">"What is your opinion on pepperjack cheese?"</span>
  <span class="hljs-built_in">cat</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>Or, you can instruct a model to prompt itself:</p>
<pre><code class="hljs language-bash">ofc --system-prompt <span class="hljs-string">"<span class="hljs-subst">$(ofc <span class="hljs-string">"Write a prompt for a large language model that makes it think harder. "</span>)</span>"</span> <span class="hljs-string">"What is a while loop?"</span>
</code></pre>
<h1>Installation</h1>
<p><code>ofc</code> is installable from either crates.io or its repository.</p>
<pre><code class="hljs language-bash">cargo install ofc --locked

<span class="hljs-comment"># Or...</span>
cargo install --git https://github.com/elijah-potter/ofc --locked
</code></pre>]]></description>
            <content:encoded><![CDATA[<h1>Prompting Large Language Models In Bash Scripts</h1>
<p>I've been experimenting with using LLMs locally for generating datasets to test <a href="https://writewithharper.com">Harper</a> against.
I might write a blog post about the technique (which I am grandiosely calling "LLM-assisted fuzzing"), but I'm going to make you wait.</p>
<p>I've written a little tool called <a href="https://github.com/elijah-potter/ofc"><code>ofc</code> that lets you insert Ollama into your bash scripts</a>.
I think it's pretty neat, since it (very easily) lets you do some pretty cool things.</p>
<p>For example, you can swap out the system prompt, so if you want to compare behavior across prompts, you can just toss it in a loop:</p>
<pre><code class="hljs language-bash"><span class="hljs-meta">#!/bin/bash</span>

subreddits=(<span class="hljs-string">"r/vscode"</span> <span class="hljs-string">"r/neovim"</span> <span class="hljs-string">"r/wallstreetbets"</span>)

<span class="hljs-comment"># Loop over each item in the list</span>
<span class="hljs-keyword">for</span> subreddit <span class="hljs-keyword">in</span> <span class="hljs-string">"<span class="hljs-variable">${subreddits[@]}</span>"</span>; <span class="hljs-keyword">do</span>
  <span class="hljs-built_in">echo</span> <span class="hljs-string">"++++++++ BEGIN <span class="hljs-variable">$subreddit</span> ++++++++"</span>
  ofc --system-prompt <span class="hljs-string">"Assume the persona of a commenter of <span class="hljs-variable">$subreddit</span>"</span> <span class="hljs-string">"What is your opinion on pepperjack cheese?"</span>
  <span class="hljs-built_in">cat</span>
<span class="hljs-keyword">done</span>
</code></pre>
<p>Or, you can instruct a model to prompt itself:</p>
<pre><code class="hljs language-bash">ofc --system-prompt <span class="hljs-string">"<span class="hljs-subst">$(ofc <span class="hljs-string">"Write a prompt for a large language model that makes it think harder. "</span>)</span>"</span> <span class="hljs-string">"What is a while loop?"</span>
</code></pre>
<h1>Installation</h1>
<p><code>ofc</code> is installable from either crates.io or its repository.</p>
<pre><code class="hljs language-bash">cargo install ofc --locked

<span class="hljs-comment"># Or...</span>
cargo install --git https://github.com/elijah-potter/ofc --locked
</code></pre>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Harper Record]]></title>
            <link>https://elijahpotter.dev/articles/a_harper_record</link>
            <guid>https://elijahpotter.dev/articles/a_harper_record</guid>
            <pubDate>Thu, 06 Feb 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>A Harper Record</h1>
<p>When we started work on <a href="https://writewithharper.com/docs/harperjs/introduction"><code>harper.js</code></a>, our goal was simple.
We wanted it to take less than 24 hours for a developer to embed high-quality grammar checking in their app.</p>
<p>A friend of mine, <a href="https://lukaswerner.com/">Lukas Werner</a>, recently set a record for this.
In just over 15 minutes, he was able to embed Harper into Raycast, a command palette for macOS.</p>
<p>This is fantastic news.
It's always great to see your goals translate into a real impact on what someone can build.</p>
<p>I hope you take the time to look at <a href="https://github.com/lukasmwerner/harper-raycast">the project on GitHub</a>.</p>
<p>In other news, we have some other small projects using Harper:</p>
<ul>
<li><a href="https://github.com/lockbook/lockbook">Lockbook</a></li>
<li><a href="https://stubby.io/">Stubby CMS</a></li>
</ul>]]></description>
            <content:encoded><![CDATA[<h1>A Harper Record</h1>
<p>When we started work on <a href="https://writewithharper.com/docs/harperjs/introduction"><code>harper.js</code></a>, our goal was simple.
We wanted it to take less than 24 hours for a developer to embed high-quality grammar checking in their app.</p>
<p>A friend of mine, <a href="https://lukaswerner.com/">Lukas Werner</a>, recently set a record for this.
In just over 15 minutes, he was able to embed Harper into Raycast, a command palette for macOS.</p>
<p>This is fantastic news.
It's always great to see your goals translate into a real impact on what someone can build.</p>
<p>I hope you take the time to look at <a href="https://github.com/lukasmwerner/harper-raycast">the project on GitHub</a>.</p>
<p>In other news, we have some other small projects using Harper:</p>
<ul>
<li><a href="https://github.com/lockbook/lockbook">Lockbook</a></li>
<li><a href="https://stubby.io/">Stubby CMS</a></li>
</ul>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Books I Read in January 2025]]></title>
            <link>https://elijahpotter.dev/articles/the_books_i_read_in_january_2025</link>
            <guid>https://elijahpotter.dev/articles/the_books_i_read_in_january_2025</guid>
            <pubDate>Tue, 04 Feb 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Books I Read in January 2025</h1>
<p>I only got to read two full books this month.
The first was a short novel by Matt Haig with the title <a href="https://app.thestorygraph.com/books/d9c7ed04-6148-4e01-a118-d96cba16f507"><em>The Midnight Library</em></a>.</p>
<p>It reads and feels like a therapy session.
I quite enjoyed it, but I'm not sure it is for everyone.</p>
<p>The second book (and the one I got the most value from) was <a href="https://app.thestorygraph.com/books/2505f748-ac12-41fb-a411-0868cbe3694c"><em>Working in Public: the Making and Maintenance of Open Source Software</em></a> by Nadia Eghbal.
Until now, I've largely organized Harper in ways inspired by projects like Vite and Rust that I interact with on a daily basis.</p>
<p>Eghbal gives a more holistic view than mine, written for an external observer.
This book was especially interesting due to the comprehensive coverage of "open" projects with no code at all.
She seems to believe the OSS software management problem is one of social complexity, not technical complexity.</p>
<p>Furthermore, Nadia makes the case that synchronous communication, while often a burden on maintainer attention, is usually necessary.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Books I Read in January 2025</h1>
<p>I only got to read two full books this month.
The first was a short novel by Matt Haig with the title <a href="https://app.thestorygraph.com/books/d9c7ed04-6148-4e01-a118-d96cba16f507"><em>The Midnight Library</em></a>.</p>
<p>It reads and feels like a therapy session.
I quite enjoyed it, but I'm not sure it is for everyone.</p>
<p>The second book (and the one I got the most value from) was <a href="https://app.thestorygraph.com/books/2505f748-ac12-41fb-a411-0868cbe3694c"><em>Working in Public: the Making and Maintenance of Open Source Software</em></a> by Nadia Eghbal.
Until now, I've largely organized Harper in ways inspired by projects like Vite and Rust that I interact with on a daily basis.</p>
<p>Eghbal gives a more holistic view than mine, written for an external observer.
This book was especially interesting due to the comprehensive coverage of "open" projects with no code at all.
She seems to believe the OSS software management problem is one of social complexity, not technical complexity.</p>
<p>Furthermore, Nadia makes the case that synchronous communication, while often a burden on maintainer attention, is usually necessary.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Notifications]]></title>
            <link>https://elijahpotter.dev/articles/notifications</link>
            <guid>https://elijahpotter.dev/articles/notifications</guid>
            <pubDate>Sun, 02 Feb 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Notifications</h1>
<p>I have found that the first couple hours of the day are my most productive.
I try to move tedious tasks to later so I can focus on solving "hard problems" that need my full attention.</p>
<p>In the past, this has been hard to do for one reason: GitHub notifications.
They pull me in and consume hours of my time.
Responding to and working with OSS contributors is important to me, but that can happen later in the day.</p>
<p>So I wrote a little ViolentMonkey script to hide the notifications icon during these early morning hours.</p>
<pre><code class="hljs language-javascript"><span class="hljs-comment">// ==UserScript==</span>
<span class="hljs-comment">// @name        Hide Notification Indicator</span>
<span class="hljs-comment">// @namespace   Violentmonkey Scripts</span>
<span class="hljs-comment">// @match       https://github.com/*</span>
<span class="hljs-comment">// @grant       none</span>
<span class="hljs-comment">// @version     1.0</span>
<span class="hljs-comment">// @author      Elijah Potter</span>
<span class="hljs-comment">// @description 1/29/2025, 8:16:11 AM</span>
<span class="hljs-comment">// ==/UserScript==</span>

<span class="hljs-keyword">function</span> <span class="hljs-title function_">hideEm</span>(<span class="hljs-params"></span>) {
  <span class="hljs-keyword">const</span> matches = <span class="hljs-variable language_">document</span>.<span class="hljs-title function_">getElementsByClassName</span>(
    <span class="hljs-string">"AppHeader-button--hasIndicator"</span>,
  );
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> element <span class="hljs-keyword">of</span> matches) {
    element.<span class="hljs-title function_">remove</span>();
  }
}

<span class="hljs-keyword">new</span> <span class="hljs-title class_">MutationObserver</span>(hideEm).<span class="hljs-title function_">observe</span>(<span class="hljs-variable language_">document</span>, {
  <span class="hljs-attr">childList</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">subtree</span>: <span class="hljs-literal">true</span>,
});
</code></pre>]]></description>
            <content:encoded><![CDATA[<h1>Notifications</h1>
<p>I have found that the first couple hours of the day are my most productive.
I try to move tedious tasks to later so I can focus on solving "hard problems" that need my full attention.</p>
<p>In the past, this has been hard to do for one reason: GitHub notifications.
They pull me in and consume hours of my time.
Responding to and working with OSS contributors is important to me, but that can happen later in the day.</p>
<p>So I wrote a little ViolentMonkey script to hide the notifications icon during these early morning hours.</p>
<pre><code class="hljs language-javascript"><span class="hljs-comment">// ==UserScript==</span>
<span class="hljs-comment">// @name        Hide Notification Indicator</span>
<span class="hljs-comment">// @namespace   Violentmonkey Scripts</span>
<span class="hljs-comment">// @match       https://github.com/*</span>
<span class="hljs-comment">// @grant       none</span>
<span class="hljs-comment">// @version     1.0</span>
<span class="hljs-comment">// @author      Elijah Potter</span>
<span class="hljs-comment">// @description 1/29/2025, 8:16:11 AM</span>
<span class="hljs-comment">// ==/UserScript==</span>

<span class="hljs-keyword">function</span> <span class="hljs-title function_">hideEm</span>(<span class="hljs-params"></span>) {
  <span class="hljs-keyword">const</span> matches = <span class="hljs-variable language_">document</span>.<span class="hljs-title function_">getElementsByClassName</span>(
    <span class="hljs-string">"AppHeader-button--hasIndicator"</span>,
  );
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> element <span class="hljs-keyword">of</span> matches) {
    element.<span class="hljs-title function_">remove</span>();
  }
}

<span class="hljs-keyword">new</span> <span class="hljs-title class_">MutationObserver</span>(hideEm).<span class="hljs-title function_">observe</span>(<span class="hljs-variable language_">document</span>, {
  <span class="hljs-attr">childList</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">subtree</span>: <span class="hljs-literal">true</span>,
});
</code></pre>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Three Steps to an Apology]]></title>
            <link>https://elijahpotter.dev/articles/the_three_steps_to_an_apology</link>
            <guid>https://elijahpotter.dev/articles/the_three_steps_to_an_apology</guid>
            <pubDate>Thu, 30 Jan 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Three Steps to an Apology</h1>
<p>My grandmother is a wonderful woman. She never leaves the house without at least a little bit of purple. Maybe a keychain or a purse. Sometimes she won't leave the house without a lot of purple, head to toe. She also never forgets a smile and a good attitude.</p>
<p>She's a wise woman. With more than 70 years and five children under her belt, she's seen some stuff. All that experience comes with a lot of advice. The three steps to an apology are some of the best bits of advice she's ever given me.</p>
<h2>Step One: "I am sorry"</h2>
<p>Admit you've done something wrong. Recognize your fault and imagine what would have happened if you had done the right thing.</p>
<h2>Step Two: "It will not happen again"</h2>
<p>If you're actually sorry, you'll want to avoid replicating the same mistake at a later date. Figure out what you need to do to make sure this is a one-time error.</p>
<h2>Step Three: "What can I do to make it better?"</h2>
<p>Attempt repair. In a fantasy world, you could go back in time and stop yourself from doing whatever it is you did in the first place. But we don't live in a fantasy world, so you have to find a way to make it right, as soon as possible.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Three Steps to an Apology</h1>
<p>My grandmother is a wonderful woman. She never leaves the house without at least a little bit of purple. Maybe a keychain or a purse. Sometimes she won't leave the house without a lot of purple, head to toe. She also never forgets a smile and a good attitude.</p>
<p>She's a wise woman. With more than 70 years and five children under her belt, she's seen some stuff. All that experience comes with a lot of advice. The three steps to an apology are some of the best bits of advice she's ever given me.</p>
<h2>Step One: "I am sorry"</h2>
<p>Admit you've done something wrong. Recognize your fault and imagine what would have happened if you had done the right thing.</p>
<h2>Step Two: "It will not happen again"</h2>
<p>If you're actually sorry, you'll want to avoid replicating the same mistake at a later date. Figure out what you need to do to make sure this is a one-time error.</p>
<h2>Step Three: "What can I do to make it better?"</h2>
<p>Attempt repair. In a fantasy world, you could go back in time and stop yourself from doing whatever it is you did in the first place. But we don't live in a fantasy world, so you have to find a way to make it right, as soon as possible.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Best 25 Bucks I Ever Spent]]></title>
            <link>https://elijahpotter.dev/articles/the_best_25_bucks_i_ever_spent</link>
            <guid>https://elijahpotter.dev/articles/the_best_25_bucks_i_ever_spent</guid>
            <pubDate>Mon, 13 Jan 2025 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Best 25 Bucks I Ever Spent</h1>
<p>Earlier this year (I don't remember the exact day or month) I decided, screw it, I'm going to try using a flip phone.
For years I've wondered what it was like before the information revolution, but <a href="https://en.wikipedia.org/wiki/Generation_Z">as a Zoomer</a>, I've never had the opportunity.</p>
<p>I got the cheapest Verizon-compatible flip phone Amazon had to offer and I committed myself to using it exclusively for at least a week.
A week without the entirety of human knowledge in my pocket.
A week of using my voice to communicate, rather than a touch-screen keyboard.</p>
<img src="/images/flip_phone.jpg" alt="The phone in question">
<p>Besides two-factor authentication for GitHub, that first week was a breeze.
I found myself socializing more, since it turns out that most people in my social circle are just a phone call and a 10-minute walk away.
I also found myself reading better quality literature.
Rather than resorting to HackerNews to ease my boredom, I began devouring full books at a frightening pace.
That first week turned into two, then four, then twelve.
All the while, I was proving to myself that I didn't need a smartphone.</p>
<p>For a number of reasons, I'm back to using a smartphone.
But this experience has taught me some important lessons that I don't think can be taught any other way.</p>
<p><strong>Organizing social encounters over text just doesn't work.</strong></p>
<p>Some people don't check their phone too often.
Calling them makes it clear that their participation is important to you.
This may sound trivial or obvious to some of the older members of my audience, but to pretty much every Zoomer I've talked to about this, it is a revelation.</p>
<p><strong>Quiet is sacred, and should be protected.</strong></p>
<p>I see loads of people who walk around every day with their headphones in, barely paying attention to the music blaring out of them.
I've done this myself.
It closes you off from the world.
From the tapping of water off an icicle.
From the shaky but warm greeting from a colleague.
From the chortling roar of an engine.</p>
<p>But most of all, it closes you off from your own mind.
The best thinking and imagining happens in the quiet, when your brain has nothing more stimulating to do.</p>
<p>Overall, using a flip phone was a very quieting experience.
It helped the world feel larger and more magical.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Best 25 Bucks I Ever Spent</h1>
<p>Earlier this year (I don't remember the exact day or month) I decided, screw it, I'm going to try using a flip phone.
For years I've wondered what it was like before the information revolution, but <a href="https://en.wikipedia.org/wiki/Generation_Z">as a Zoomer</a>, I've never had the opportunity.</p>
<p>I got the cheapest Verizon-compatible flip phone Amazon had to offer and I committed myself to using it exclusively for at least a week.
A week without the entirety of human knowledge in my pocket.
A week of using my voice to communicate, rather than a touch-screen keyboard.</p>
<img src="/images/flip_phone.jpg" alt="The phone in question">
<p>Besides two-factor authentication for GitHub, that first week was a breeze.
I found myself socializing more, since it turns out that most people in my social circle are just a phone call and a 10-minute walk away.
I also found myself reading better quality literature.
Rather than resorting to HackerNews to ease my boredom, I began devouring full books at a frightening pace.
That first week turned into two, then four, then twelve.
All the while, I was proving to myself that I didn't need a smartphone.</p>
<p>For a number of reasons, I'm back to using a smartphone.
But this experience has taught me some important lessons that I don't think can be taught any other way.</p>
<p><strong>Organizing social encounters over text just doesn't work.</strong></p>
<p>Some people don't check their phone too often.
Calling them makes it clear that their participation is important to you.
This may sound trivial or obvious to some of the older members of my audience, but to pretty much every Zoomer I've talked to about this, it is a revelation.</p>
<p><strong>Quiet is sacred, and should be protected.</strong></p>
<p>I see loads of people who walk around every day with their headphones in, barely paying attention to the music blaring out of them.
I've done this myself.
It closes you off from the world.
From the tapping of water off an icicle.
From the shaky but warm greeting from a colleague.
From the chortling roar of an engine.</p>
<p>But most of all, it closes you off from your own mind.
The best thinking and imagining happens in the quiet, when your brain has nothing more stimulating to do.</p>
<p>Overall, using a flip phone was a very quieting experience.
It helped the world feel larger and more magical.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Why You Need Sccache]]></title>
            <link>https://elijahpotter.dev/articles/why_you_need_sccache</link>
            <guid>https://elijahpotter.dev/articles/why_you_need_sccache</guid>
            <pubDate>Sun, 29 Dec 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Why You Need <code>sccache</code></h1>
<p>As the maintainer of a <a href="https://writewithharper.com">reasonably popular open source project</a> written in Rust, I find myself cloning PRs and swapping between branches dozens of times a day.
In doing so, I often blow away Rust's compilation cache either via <code>git clean -xf .</code> or by changing dependency versions.</p>
<p>Having a build cache that is separate from the project is a huge win for compile times, which is why I use <a href="https://github.com/mozilla/sccache"><code>sccache</code></a>.</p>
<p>For those unaware, <code>sccache</code> is a build tool for C, C++ and Rust projects that stands in front of the compiler.
If a given file's hash is the same across calls to the compiler, <code>sccache</code> reuses the previous result.
This cache is compiler- and project-agnostic.
In my case, it sits in <code>$HOME/.sccache</code>, but you can set it up with AWS S3, Redis, or a myriad of other options.</p>
<p>If a dependency is cached from a compile in one project, it is available for a build in another.</p>
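<p>For the curious, here is what a minimal setup looks like (a sketch, assuming <code>sccache</code> is already installed, e.g. via <code>cargo install sccache</code>):</p>
<pre><code class="hljs language-shell"># Tell Cargo to invoke rustc through sccache for this shell session.
export RUSTC_WRAPPER=sccache

# Or make it permanent in ~/.cargo/config.toml:
#   [build]
#   rustc-wrapper = "sccache"

# After a build, inspect cache hit/miss statistics:
sccache --show-stats
</code></pre>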
<p>In short: you need <code>sccache</code> if you are frequently swapping between branches with similar (but not identical) dependencies or expect to frequently delete Rust's disk cache.</p>
<h2>Why You Don't Need <code>sccache</code></h2>
<p>If you are working in a single branch on a project whose dependencies remain somewhat stable, <code>sccache</code> might actually increase your compile time.
Since <code>sccache</code> has to hit the disk (or even the network) regardless of whether it is a cache hit, a little bit of latency is added.
If you expect nearly all of your compiles to hit <code>cargo</code>'s own cache anyway (or nearly all of them to miss every cache), <code>sccache</code> is no more useful than the built-in caching <code>cargo</code> provides.</p>
<p>In that case you should not use <code>sccache</code>.</p>
<h2>Wrap-Up</h2>
<p>Regardless of whether you install <code>sccache</code> on your machine, it is a neat little technology.
I would recommend the curious reader take a look for themselves.
You might be inspired.</p>]]></description>
            <content:encoded><![CDATA[<h1>Why You Need <code>sccache</code></h1>
<p>As the maintainer of a <a href="https://writewithharper.com">reasonably popular open source project</a> written in Rust, I find myself cloning PRs and swapping between branches dozens of times a day.
In doing so, I often blow away Rust's compilation cache either via <code>git clean -xf .</code> or by changing dependency versions.</p>
<p>Having a build cache that is separate from the project is a huge win for compile times, which is why I use <a href="https://github.com/mozilla/sccache"><code>sccache</code></a>.</p>
<p>For those unaware, <code>sccache</code> is a build tool for C, C++ and Rust projects that stands in front of the compiler.
If a given file's hash is the same across calls to the compiler, <code>sccache</code> reuses the previous result.
This cache is compiler- and project-agnostic.
In my case, it sits in <code>$HOME/.sccache</code>, but you can set it up with AWS S3, Redis, or a myriad of other options.</p>
<p>If a dependency is cached from a compile in one project, it is available for a build in another.</p>
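<p>For the curious, here is what a minimal setup looks like (a sketch, assuming <code>sccache</code> is already installed, e.g. via <code>cargo install sccache</code>):</p>
<pre><code class="hljs language-shell"># Tell Cargo to invoke rustc through sccache for this shell session.
export RUSTC_WRAPPER=sccache

# Or make it permanent in ~/.cargo/config.toml:
#   [build]
#   rustc-wrapper = "sccache"

# After a build, inspect cache hit/miss statistics:
sccache --show-stats
</code></pre>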
<p>In short: you need <code>sccache</code> if you are frequently swapping between branches with similar (but not identical) dependencies or expect to frequently delete Rust's disk cache.</p>
<h2>Why You Don't Need <code>sccache</code></h2>
<p>If you are working in a single branch on a project whose dependencies remain somewhat stable, <code>sccache</code> might actually increase your compile time.
Since <code>sccache</code> has to hit the disk (or even the network) regardless of whether it is a cache hit, a little bit of latency is added.
If you expect nearly all of your compiles to hit <code>cargo</code>'s own cache anyway (or nearly all of them to miss every cache), <code>sccache</code> is no more useful than the built-in caching <code>cargo</code> provides.</p>
<p>In that case you should not use <code>sccache</code>.</p>
<h2>Wrap-Up</h2>
<p>Regardless of whether you install <code>sccache</code> on your machine, it is a neat little technology.
I would recommend the curious reader take a look for themselves.
You might be inspired.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Best Books I Read This Year]]></title>
            <link>https://elijahpotter.dev/articles/the_best_books_i_read_this_year</link>
            <guid>https://elijahpotter.dev/articles/the_best_books_i_read_this_year</guid>
            <pubDate>Sat, 21 Dec 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Best Books I Read This Year</h1>
<p>This year I've been reading a lot more.
Is there a particular reason?
Other than simply admiring those that do, no.</p>
<p>I have found some gems of literature that I would like to share.</p>
<h2><em>Social Intelligence</em> by Daniel Goleman</h2>
<p><em>Social Intelligence</em> is truly a treasure.
I heard of Daniel Goleman's work first from an episode of <a href="https://www.youtube.com/watch?v=-hoo_dIOP8k">Talks at Google</a> and later from psychologists who raved about his previous book, <em>Emotional Intelligence</em>.</p>
<p>It gives a deep dive into the subconscious, lightning-fast systems that underlie every social interaction.
As I read it, I found myself analyzing my own behavior, as well as the behavior of others.
As someone who has been overthinking my own social life from a young age, Goleman's advice to let my brain go to its default mode was relieving.</p>
<p>If you're the type to feel anxious in social situations, this book is perfect for you.</p>
<h2><em>Tomorrow and Tomorrow and Tomorrow</em> by Gabrielle Zevin</h2>
<p><em>Tomorrow and Tomorrow and Tomorrow</em> is probably one of the best fiction books I've read.
<strong>Period.</strong></p>
<p>At first glance, it is just the story of the careers of two game developers and their producer.
But if you dig a little bit <strong>deeper</strong>, it reveals itself to be the story of how the relationships between people who work together can be incredibly deep and complex.
Specifically, how the passion to create something can both bond and break people.</p>
<p>I don't think it is possible for me to accurately describe the emotional rollercoaster this book took me on.
Which is why you should read it!</p>
<h2><em>Why We Sleep</em> by Matthew Walker</h2>
<p>I was curious why some people in my life sleep very little and others sleep quite a bit, both with varying levels of general wakefulness.
So, I ran a search: "Why do we sleep?"</p>
<p>To my surprise, I discovered someone wrote a book with almost that exact title.
Since I've read it, I've increased the duration of my sleep by nearly two hours.
This practice has had an outsize impact on my health and happiness.
I could not recommend it more.</p>
<p>I've <a href="./the_optimal_workspace">written about this book extensively</a>, so I won't repeat myself further.</p>
<h2>That's It!</h2>
<p>I could wax poetic about these books and how they've changed me—but I won't.
In reality, I read them because I enjoyed reading them.
In an age where people <a href="https://www.youtube.com/watch?v=A3wJcF0t0bQ">read less than ever</a>, I implore you: try picking up a book, <strong>you might just like it</strong>.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Best Books I Read This Year</h1>
<p>This year I've been reading a lot more.
Is there a particular reason?
Other than simply admiring those that do, no.</p>
<p>I have found some gems of literature that I would like to share.</p>
<h2><em>Social Intelligence</em> by Daniel Goleman</h2>
<p><em>Social Intelligence</em> is truly a treasure.
I heard of Daniel Goleman's work first from an episode of <a href="https://www.youtube.com/watch?v=-hoo_dIOP8k">Talks at Google</a> and later from psychologists who raved about his previous book, <em>Emotional Intelligence</em>.</p>
<p>It gives a deep dive into the subconscious, lightning-fast systems that underlie every social interaction.
As I read it, I found myself analyzing my own behavior, as well as the behavior of others.
As someone who has been overthinking my own social life from a young age, Goleman's advice to let my brain go to its default mode was relieving.</p>
<p>If you're the type to feel anxious in social situations, this book is perfect for you.</p>
<h2><em>Tomorrow and Tomorrow and Tomorrow</em> by Gabrielle Zevin</h2>
<p><em>Tomorrow and Tomorrow and Tomorrow</em> is probably one of the best fiction books I've read.
<strong>Period.</strong></p>
<p>At first glance, it is just the story of the careers of two game developers and their producer.
But if you dig a little bit <strong>deeper</strong>, it reveals itself to be the story of how the relationships between people who work together can be incredibly deep and complex.
Specifically, how the passion to create something can both bond and break people.</p>
<p>I don't think it is possible for me to accurately describe the emotional rollercoaster this book took me on.
Which is why you should read it!</p>
<h2><em>Why We Sleep</em> by Matthew Walker</h2>
<p>I was curious why some people in my life sleep very little and others sleep quite a bit, both with varying levels of general wakefulness.
So, I ran a search: "Why do we sleep?"</p>
<p>To my surprise, I discovered someone wrote a book with almost that exact title.
Since I've read it, I've increased the duration of my sleep by nearly two hours.
This practice has had an outsize impact on my health and happiness.
I could not recommend it more.</p>
<p>I've <a href="./the_optimal_workspace">written about this book extensively</a>, so I won't repeat myself further.</p>
<h2>That's It!</h2>
<p>I could wax poetic about these books and how they've changed me—but I won't.
In reality, I read them because I enjoyed reading them.
In an age where people <a href="https://www.youtube.com/watch?v=A3wJcF0t0bQ">read less than ever</a>, I implore you: try picking up a book, <strong>you might just like it</strong>.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[For the Love of Iframes]]></title>
            <link>https://elijahpotter.dev/articles/for_the_love_of_iframes</link>
            <guid>https://elijahpotter.dev/articles/for_the_love_of_iframes</guid>
            <pubDate>Sun, 13 Oct 2024 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>For the Love of <code>iframe</code>s</h1>
<p>I adore a good <code>iframe</code>.
They're so elegant as a web component.</p>
<p>Just expose an endpoint, say <a href="https://writewithharper.com/editor?initialText=This%20is%20an%20interactive%20buffer%20you%20can%20use%20to%20to%20check%20your%20work"><code>https://writewithharper.com/editor</code></a>, set it up to accept some query parameters and get it hosted.
Now you can access this web-component from any page that has a loose enough Content Security Policy.
For me, that means my <a href="https://elijahpotter.dev/articles/the_simplest_neovim_markdown_setup">school assignments</a> and other assorted documentation.</p>
<p>It also means that I can avoid setting up a complex build system for MDX, while still being able to include interactive components.</p>
<p>The example from earlier:</p>
<pre><code class="hljs language-html">&#x3C;iframe
  src="https://writewithharper.com/editor?initialText=See, we can now embed the the Harper editor%0Arght into this document!%0A%0AIt's a little too easy."
  width="100%"
  height="400px"
  style="border-radius:10px;border:0px"
>&#x3C;/iframe>
</code></pre>
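<p>One small note: the <code>src</code> above leans on the browser to percent-encode the spaces. If you build the URL in code, it is safer to encode the parameter explicitly (a hypothetical snippet, not part of the embed above):</p>
<pre><code class="hljs language-javascript">// Build the editor URL with the initial text percent-encoded.
const base = "https://writewithharper.com/editor";
const initialText = "See, we can now embed the the Harper editor\nrght into this document!";
const src = `${base}?initialText=${encodeURIComponent(initialText)}`;
// Newlines become %0A, spaces %20, and so on.
</code></pre>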
<p>One major caveat though: when we pass our arguments to the component through the query URL, this gets sent to the component's server as well.
<strong>I</strong> certainly trust the Harper website's server, since I maintain it and the code is <a href="https://github.com/elijah-potter/harper/tree/master/packages/web">open source</a>, but that isn't always the case.</p>
<iframe src="https://writewithharper.com/editor?initialText=See, we can now embed the the Harper editor%0Arght into this document!%0A%0AIt's a little too easy." width="100%" height="400px" style="border-radius:10px;border:0px" ></iframe>
<p>You should also probably avoid doing this <em>too</em> much.
Most browsers spawn a whole new process for every <code>iframe</code>, so if you want things to stay snappy it is best to limit yourself to just one (<strong>maybe</strong> two) per page.</p>]]></description>
            <content:encoded><![CDATA[<h1>For the Love of <code>iframe</code>s</h1>
<p>I adore a good <code>iframe</code>.
They're so elegant as a web component.</p>
<p>Just expose an endpoint, say <a href="https://writewithharper.com/editor?initialText=This%20is%20an%20interactive%20buffer%20you%20can%20use%20to%20to%20check%20your%20work"><code>https://writewithharper.com/editor</code></a>, set it up to accept some query parameters and get it hosted.
Now you can access this web-component from any page that has a loose enough Content Security Policy.
For me, that means my <a href="https://elijahpotter.dev/articles/the_simplest_neovim_markdown_setup">school assignments</a> and other assorted documentation.</p>
<p>It also means that I can avoid setting up a complex build system for MDX, while still being able to include interactive components.</p>
<p>The example from earlier:</p>
<pre><code class="hljs language-html">&#x3C;iframe
  src="https://writewithharper.com/editor?initialText=See, we can now embed the the Harper editor%0Arght into this document!%0A%0AIt's a little too easy."
  width="100%"
  height="400px"
  style="border-radius:10px;border:0px"
>&#x3C;/iframe>
</code></pre>
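<p>One small note: the <code>src</code> above leans on the browser to percent-encode the spaces. If you build the URL in code, it is safer to encode the parameter explicitly (a hypothetical snippet, not part of the embed above):</p>
<pre><code class="hljs language-javascript">// Build the editor URL with the initial text percent-encoded.
const base = "https://writewithharper.com/editor";
const initialText = "See, we can now embed the the Harper editor\nrght into this document!";
const src = `${base}?initialText=${encodeURIComponent(initialText)}`;
// Newlines become %0A, spaces %20, and so on.
</code></pre>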
<p>One major caveat though: when we pass our arguments to the component through the query URL, this gets sent to the component's server as well.
<strong>I</strong> certainly trust the Harper website's server, since I maintain it and the code is <a href="https://github.com/elijah-potter/harper/tree/master/packages/web">open source</a>, but that isn't always the case.</p>
<iframe src="https://writewithharper.com/editor?initialText=See, we can now embed the the Harper editor%0Arght into this document!%0A%0AIt's a little too easy." width="100%" height="400px" style="border-radius:10px;border:0px" ></iframe>
<p>You should also probably avoid doing this <em>too</em> much.
Most browsers spawn a whole new process for every <code>iframe</code>, so if you want things to stay snappy it is best to limit yourself to just one (<strong>maybe</strong> two) per page.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Naming Harper]]></title>
            <link>https://elijahpotter.dev/articles/naming_harper</link>
            <guid>https://elijahpotter.dev/articles/naming_harper</guid>
            <pubDate>Tue, 06 Aug 2024 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Naming Harper</h1>
<p><a href="https://github.com/elijah-potter/harper/issues/103">Someone</a> recently asked me where the name <a href="https://writewithharper.com">Harper</a> came from.</p>
<p>When I first sat down to start work on Harper, I had one goal in mind.
I wanted to create a grammar checker that wouldn't impinge on the artistic freedom of authors.
Part of that meant that I did not want literary devices to be flagged as grammatical errors.
Harper should speed up the writing process rather than slow it down as Grammarly tends to.</p>
<p>To embody this original goal, I wanted to make the project a namesake of an author known for their literary devices and prose. I also had three other (practical) criteria:</p>
<ol>
<li>It could not be occupied on <a href="https://crates.io/">crates.io</a>.</li>
<li>It could not be occupied on GitHub.</li>
<li>It should not be more than two syllables.</li>
</ol>
<p>I asked around for some ideas and someone suggested I name the project after <a href="https://en.wikipedia.org/wiki/Harper_Lee">Harper Lee</a>.
It fit the bill, so I went with it.</p>]]></description>
            <content:encoded><![CDATA[<h1>Naming Harper</h1>
<p><a href="https://github.com/elijah-potter/harper/issues/103">Someone</a> recently asked me where the name <a href="https://writewithharper.com">Harper</a> came from.</p>
<p>When I first sat down to start work on Harper, I had one goal in mind.
I wanted to create a grammar checker that wouldn't impinge on the artistic freedom of authors.
Part of that meant that I did not want literary devices to be flagged as grammatical errors.
Harper should speed up the writing process rather than slow it down as Grammarly tends to.</p>
<p>To embody this original goal, I wanted to make the project a namesake of an author known for their literary devices and prose. I also had three other (practical) criteria:</p>
<ol>
<li>It could not be occupied on <a href="https://crates.io/">crates.io</a>.</li>
<li>It could not be occupied on GitHub.</li>
<li>It should not be more than two syllables.</li>
</ol>
<p>I asked around for some ideas and someone suggested I name the project after <a href="https://en.wikipedia.org/wiki/Harper_Lee">Harper Lee</a>.
It fit the bill, so I went with it.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Simplest Neovim Markdown Setup]]></title>
            <link>https://elijahpotter.dev/articles/the_simplest_neovim_markdown_setup</link>
            <guid>https://elijahpotter.dev/articles/the_simplest_neovim_markdown_setup</guid>
            <pubDate>Sat, 13 Jul 2024 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Simplest Neovim Markdown Setup</h1>
<p>I am not one who enjoys complexity.
I am also someone who likes to make their <a href="https://github.com/elijah-potter/harper">own tools</a>.</p>
<p>As a student, I write a lot.
That includes notes, papers, and documentation for my wide variety of projects.</p>
<p>A lot of classes ask that assignments be submitted as PDF, HTML, or <code>.docx</code>.
For a while, I submitted PDFs.
I had a whole orchestrated setup.</p>
<p>I would write everything in <a href="https://neovim.io/">Neovim</a>, save it, and <a href="https://github.com/frabjous/knap">KNAP</a> would render it using <a href="https://pandoc.org/">Pandoc</a>.
Finally, it would be rendered to my screen using <a href="https://sioyek.info/">Sioyek</a>.</p>
<p>This worked fine, I guess, but it was far from perfect.</p>
<ol>
<li><strong>It was slow.</strong> Each edit I made in Markdown could take as many as 10 seconds to show up in Sioyek.</li>
<li>It wasn't interesting. While I made modifications to my Pandoc settings, my PDFs still looked like every other <code>pdflatex</code> document ever made.</li>
<li>I couldn't make my documents interactive if I wanted to.</li>
</ol>
<p>Markdown was designed to be turned into HTML, I reasoned.
So why not just do that?</p>
<img src="/images/tatum_screenshot.webp" alt="A screenshot of Tatum at work">
<p>That's why I created <a href="https://github.com/elijah-potter/tatum">Tatum</a>.
It does one thing, and does it really well.
Point it at a Markdown file, and it will run a tiny web server that serves the rendered HTML.
If the file changes, a WebSocket connection tells the browser to refresh.</p>
<p>Tatum renders in milliseconds and creates beautiful pages with <a href="https://katex.org/"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>KaTeX</mtext></mrow><annotation encoding="application/x-tex">\KaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">K</span><span class="mspace" style="margin-right:-0.17em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span></a>, <a href="https://simplecss.org/">Simple.css</a> and <a href="https://highlightjs.org/"><code>highlight.js</code></a>.
I can embed interactive HTML, CSS and JavaScript elements directly into my Markdown to get the interactivity and aesthetics I desire.</p>
<p>Once I'm done working, I just run <code>tatum render &#x3C;file.md></code> and I get a single file (images and all) that I can submit for my assignments.</p>
<p>Tatum isn't for you to use.
Feel free to poke around at how it works, or even fork it and make your own modifications.
<strong>It fits my use case perfectly.</strong></p>]]></description>
            <content:encoded><![CDATA[<h1>The Simplest Neovim Markdown Setup</h1>
<p>I am not one who enjoys complexity.
I am also someone who likes to make their <a href="https://github.com/elijah-potter/harper">own tools</a>.</p>
<p>As a student, I write a lot.
That includes notes, papers, and documentation for my wide variety of projects.</p>
<p>A lot of classes request that assignments be submitted as PDF, HTML, or <code>.docx</code>.
For a while, I submitted PDFs.
I had a whole orchestrated setup.</p>
<p>I would write everything in <a href="https://neovim.io/">Neovim</a>, save it, and <a href="https://github.com/frabjous/knap">KNAP</a> would render it using <a href="https://pandoc.org/">Pandoc</a>.
Finally, it would be rendered to my screen using <a href="https://sioyek.info/">Sioyek</a>.</p>
<p>This worked fine, I guess, but it was far from perfect.</p>
<ol>
<li><strong>It was slow.</strong> Each edit I made in Markdown could take as many as 10 seconds to show up in Sioyek.</li>
<li>It wasn't interesting. While I made modifications to my Pandoc settings, my PDFs still looked like every other <code>pdflatex</code> document ever made.</li>
<li>I couldn't make my documents interactive if I wanted to.</li>
</ol>
<p>Markdown was designed to be turned into HTML, I reasoned.
So why not just do that?</p>
<img src="/images/tatum_screenshot.webp" alt="A screenshot of Tatum at work">
<p>That's why I created <a href="https://github.com/elijah-potter/tatum">Tatum</a>.
It does one thing, and does it really well.
Point it at a Markdown file, and it will run a tiny web server that serves the rendered HTML.
If the file changes, a WebSocket connection tells the browser to refresh.</p>
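<p>Tatum's internals aren't shown here, but the watch-and-refresh loop it describes is easy to sketch. Below is a minimal, hypothetical Python version (an illustration, not Tatum's actual code) that polls a file's modification time and fires a callback when it changes; the real tool pushes a refresh to the browser over a WebSocket instead of invoking a local callback:</p>

```python
import os
import time

def watch(path, on_change, poll_interval=0.05, max_polls=None):
    """Poll `path`'s mtime and call `on_change(path)` whenever it changes.

    A live-reload tool would re-render the Markdown and push a "reload"
    message over its WebSocket here instead of calling a local function.
    `max_polls` is only so this sketch can terminate; a real watcher
    would loop forever.
    """
    last = os.path.getmtime(path)
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll_interval)
        current = os.path.getmtime(path)
        if current != last:
            last = current
            on_change(path)
        polls += 1
```

<p>Pair a loop like this with a tiny HTTP server and a WebSocket (so the browser reloads itself) and you get the same edit-to-render cycle described above.</p>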
<p>Tatum renders in milliseconds and creates beautiful pages with <a href="https://katex.org/"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>KaTeX</mtext></mrow><annotation encoding="application/x-tex">\KaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">K</span><span class="mspace" style="margin-right:-0.17em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span></a>, <a href="https://simplecss.org/">Simple.css</a> and <a href="https://highlightjs.org/"><code>highlight.js</code></a>.
I can embed interactive HTML, CSS and JavaScript elements directly into my Markdown to get the interactivity and aesthetics I desire.</p>
<p>Once I'm done working, I just run <code>tatum render &#x3C;file.md></code> and I get a single file (images and all) that I can submit for my assignments.</p>
<p>Tatum isn't for you to use.
Feel free to poke around at how it works, or even fork it and make your own modifications.
<strong>It fits my use case perfectly.</strong></p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/tatum_screenshot.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[What Blasterhacks Taught Me About Leadership]]></title>
            <link>https://elijahpotter.dev/articles/what_blasterhacks_taught_me_about_leadership</link>
            <guid>https://elijahpotter.dev/articles/what_blasterhacks_taught_me_about_leadership</guid>
            <pubDate>Fri, 19 Apr 2024 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>What Blasterhacks Taught Me About Leadership</h1>
<h2>The Beginning</h2>
<p>Although it was several months ago, I remember it like it was yesterday.
It was what I would call a <em>dusty</em> Colorado Friday.
The sky was overcast and the trees depressingly barren.
The air smelled of cow farts, which was typical when the winds changed, blowing in cold air from the beef farms in the nearby town of Greeley.
In my mind, this signaled snow.</p>
<p>I walked into the Green Center, the building housing the opening ceremony of BlasterHacks, the hackathon run by the Colorado School of Mines.
I signed in at the entrance and walked down into the auditorium.</p>
<p>As I greeted my teammates, <a href="https://grantlemons.com/">Grant Lemons</a>, <a href="https://lukaswerner.com/">Lukas Werner</a> and <a href="https://b-sharman.dev/">Byron Sharman</a>, I noticed the nervous smiles on their faces.
You know the kind, the smile with the inner eyebrows slightly raised, shoulders tense.
We were excited, but we also knew we had 36 solid hours of programming ahead of us.</p>
<p>After collectively downing several whole pizzas and listening to the rules of the competition, we made our way over to Labriola Innovation Complex, the hours-old manufacturing building the competition would take place in.</p>
<h2>The Plan</h2>
<p>We brainstormed some ideas under the overcast sky:</p>
<ul>
<li>A Bennett Foddy-inspired battle-royale game where all the controls were <code>vim</code>-keybindings.</li>
<li>A health insurance app that estimates the cost of <em>your</em> care at all local hospitals.</li>
<li>An app that encouraged you to stand up and move every thirty minutes.</li>
</ul>
<p>It was that last one that we all got excited about.
We imagined it to be social: at a random interval (which would average to thirty minutes), the app would simultaneously send out a notification to every user.
Should the user choose to participate in the StandUp moment, the app would guide them all through the same short workout activity.
Finally, the user had the choice to upload "evidence" of the workout, which could be a short caption about what they were doing, or a photo.</p>
<p>The idea of StandUp (as we would go on to name our project) was inspired simultaneously by <a href="https://bereal.com/en/">BeReal</a>, recent research into the detrimental health effects of remote work, and Strava's use of social accountability for comparatively more extreme workouts.</p>
<img src="/images/screenshot_standup.webp" alt="The Feed page of standup after a user has posted">
<p>Of course, most of this had yet to be nailed down, for we were just entering the Labriola Innovation Complex.
I remember asking myself if I would leave before the competition was over.
Although I was tired from the day's classes and homework, I felt a drive to dedicate my whole self to this project.</p>
<h2>The Board</h2>
<p>We sat down and created a new <a href="https://github.com/grantlemons/blasterhacks2024">git repository</a>.
That's when I started thinking about how we were going to organize our work.
I proposed we set up a Kanban board, a project management technique I had picked up at <a href="https://archytasinc.com">Archytas Automation</a>.</p>
<img src="/images/kanban_blasterhacks.jpg" alt="Our Kanban board near the end of the hackathon">
<p>If you've never heard of (or used) a Kanban board, here's the gist.
There are three buckets:</p>
<ol>
<li>To Do.</li>
<li>Doing.</li>
<li>Done.</li>
</ol>
<p>The second bucket is divided up by person, so in our case, it was really four separate buckets.
When someone needed something new to work on, they would go to the "To Do" bucket and select a new task (which in our case, was a sticky note).
They would place the task into their corresponding "Doing" bucket.
Once the task was done, they would move the sticky note into the "Done" bucket and repeat.</p>
<p>As a gag, we added an additional (superfluous) bucket: IDGAF.
If a task was unanimously deemed no longer important or relevant to the project, that's where we would put it.</p>
<p>So, we got to work.
We had decided to use <a href="https://firebase.google.com/">Firebase</a>, <a href="https://kit.svelte.dev/">SvelteKit</a> and <a href="https://go.dev">Go</a>.
I started setting up Firebase, Grant started researching the notification service, Lukas started working on a circular status page and Byron started researching the Camera API.</p>
<p>As I look back at this moment today, I realize something important about all of these tasks that ended up benefiting us in the long term: they were all independent.
None of these original tasks relied on any other to get done.
We could work in parallel.
Down the road, this would change, but the fact that we were parallel at the beginning set us up for success.</p>
<h2>The Snow and the Crash</h2>
<p>We kept working through the night, downing energy drinks and laughing as we worked. We got the important services and UI elements up and running.
Around midnight, as my nose had predicted several hours earlier, it started snowing.</p>
<p>As the snowflakes grew larger and ever more fluffy, I felt the team's energy begin to dip.
We were encountering roadblocks, one-by-one, as our motivation waned and the threat of sleep became more prominent.</p>
<p>Then, someone in the main atrium (who, I don't remember) announced, "we are going outside."
We ran down the halls like children spreading the news, and soon enough, there were dozens of tired computer science students throwing snowballs in their jeans and t-shirts.</p>
<p>As I threw (and received) a number of snowballs, I realized something important: we were making an app about taking care of yourself physically, but had neglected to do so ourselves.
As important as the project may be, it won't get done when everyone is tired, muscles cramped, and unhappy.
To keep the momentum going, breaks are required.</p>
<p>As we filed back inside to warm our hands under tap water, I swore to make sure the rest of the team would take care of themselves--and each other.</p>
<h2>The Conclusion</h2>
<p>The rest of the hackathon was uneventful.
We sat at our laptops, cranking out line after line of code.
We took breaks and slept when we were blocked by other tasks.</p>
<p>Although it was our first hackathon, we decided to enter our project in the <em>advanced</em> track, normally reserved for hackathon veterans.
It paid off: we won second place!</p>
<p>Look at us: haggard but satisfied.</p>
<img src="/images/secondplace_blasterhacks.png" alt="The team receiving our second-place win.">]]></description>
            <content:encoded><![CDATA[<h1>What Blasterhacks Taught Me About Leadership</h1>
<h2>The Beginning</h2>
<p>Although it was several months ago, I remember it like it was yesterday.
It was what I would call a <em>dusty</em> Colorado Friday.
The sky was overcast and the trees depressingly barren.
The air smelled of cow farts, which was typical when the winds changed, blowing in cold air from the beef farms in the nearby town of Greeley.
In my mind, this signaled snow.</p>
<p>I walked into the Green Center, the building housing the opening ceremony of BlasterHacks, the hackathon run by the Colorado School of Mines.
I signed in at the entrance and walked down into the auditorium.</p>
<p>As I greeted my teammates, <a href="https://grantlemons.com/">Grant Lemons</a>, <a href="https://lukaswerner.com/">Lukas Werner</a> and <a href="https://b-sharman.dev/">Byron Sharman</a>, I noticed the nervous smiles on their faces.
You know the kind, the smile with the inner eyebrows slightly raised, shoulders tense.
We were excited, but we also knew we had 36 solid hours of programming ahead of us.</p>
<p>After collectively downing several whole pizzas and listening to the rules of the competition, we made our way over to Labriola Innovation Complex, the hours-old manufacturing building the competition would take place in.</p>
<h2>The Plan</h2>
<p>We brainstormed some ideas under the overcast sky:</p>
<ul>
<li>A Bennett Foddy-inspired battle-royale game where all the controls were <code>vim</code>-keybindings.</li>
<li>A health insurance app that estimates the cost of <em>your</em> care at all local hospitals.</li>
<li>An app that encouraged you to stand up and move every thirty minutes.</li>
</ul>
<p>It was that last one that we all got excited about.
We imagined it to be social: at a random interval (which would average to thirty minutes), the app would simultaneously send out a notification to every user.
Should the user choose to participate in the StandUp moment, the app would guide them all through the same short workout activity.
Finally, the user had the choice to upload "evidence" of the workout, which could be a short caption about what they were doing, or a photo.</p>
<p>The idea of StandUp (as we would go on to name our project) was inspired simultaneously by <a href="https://bereal.com/en/">BeReal</a>, recent research into the detrimental health effects of remote work, and Strava's use of social accountability for comparatively more extreme workouts.</p>
<img src="/images/screenshot_standup.webp" alt="The Feed page of standup after a user has posted">
<p>Of course, most of this had yet to be nailed down, for we were just entering the Labriola Innovation Complex.
I remember asking myself if I would leave before the competition was over.
Although I was tired from the day's classes and homework, I felt a drive to dedicate my whole self to this project.</p>
<h2>The Board</h2>
<p>We sat down and created a new <a href="https://github.com/grantlemons/blasterhacks2024">git repository</a>.
That's when I started thinking about how we were going to organize our work.
I proposed we set up a Kanban board, a project management technique I had picked up at <a href="https://archytasinc.com">Archytas Automation</a>.</p>
<img src="/images/kanban_blasterhacks.jpg" alt="Our Kanban board near the end of the hackathon">
<p>If you've never heard of (or used) a Kanban board, here's the gist.
There are three buckets:</p>
<ol>
<li>To Do.</li>
<li>Doing.</li>
<li>Done.</li>
</ol>
<p>The second bucket is divided up by person, so in our case, it was really four separate buckets.
When someone needed something new to work on, they would go to the "To Do" bucket and select a new task (which in our case, was a sticky note).
They would place the task into their corresponding "Doing" bucket.
Once the task was done, they would move the sticky note into the "Done" bucket and repeat.</p>
<p>As a gag, we added an additional (superfluous) bucket: IDGAF.
If a task was unanimously deemed no longer important or relevant to the project, that's where we would put it.</p>
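<p>The whole system is simple enough to model in a few lines. Here is a small, hypothetical Python sketch of the board described above (buckets as lists, tasks as strings; the names are illustrative, not from our actual repo):</p>

```python
class KanbanBoard:
    """A minimal model of the board: one shared "To Do" bucket,
    a "Doing" bucket per person, plus shared "Done" and "IDGAF" buckets."""

    def __init__(self, people):
        self.todo = []
        self.doing = {person: [] for person in people}
        self.done = []
        self.idgaf = []  # the gag bucket: unanimously-dropped tasks

    def add(self, task):
        """A new sticky note lands in "To Do"."""
        self.todo.append(task)

    def claim(self, person, task):
        """Someone needing work moves a task into their "Doing" bucket."""
        self.todo.remove(task)
        self.doing[person].append(task)

    def finish(self, person, task):
        """A completed task moves from "Doing" to "Done"."""
        self.doing[person].remove(task)
        self.done.append(task)

    def drop(self, task):
        """Unanimously deemed irrelevant: straight to IDGAF."""
        self.todo.remove(task)
        self.idgaf.append(task)
```

<p>The point of the structure is visible even in this toy version: any free person can pull the next task without asking anyone, which is exactly what kept us working in parallel.</p>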
<p>So, we got to work.
We had decided to use <a href="https://firebase.google.com/">Firebase</a>, <a href="https://kit.svelte.dev/">SvelteKit</a> and <a href="https://go.dev">Go</a>.
I started setting up Firebase, Grant started researching the notification service, Lukas started working on a circular status page and Byron started researching the Camera API.</p>
<p>As I look back at this moment today, I realize something important about all of these tasks that ended up benefiting us in the long term: they were all independent.
None of these original tasks relied on any other to get done.
We could work in parallel.
Down the road, this would change, but the fact that we were parallel at the beginning set us up for success.</p>
<h2>The Snow and the Crash</h2>
<p>We kept working through the night, downing energy drinks and laughing as we worked. We got the important services and UI elements up and running.
Around midnight, as my nose had predicted several hours earlier, it started snowing.</p>
<p>As the snowflakes grew larger and ever more fluffy, I felt the team's energy begin to dip.
We were encountering roadblocks, one-by-one, as our motivation waned and the threat of sleep became more prominent.</p>
<p>Then, someone in the main atrium (who, I don't remember) announced, "we are going outside."
We ran down the halls like children spreading the news, and soon enough, there were dozens of tired computer science students throwing snowballs in their jeans and t-shirts.</p>
<p>As I threw (and received) a number of snowballs, I realized something important: we were making an app about taking care of yourself physically, but had neglected to do so ourselves.
As important as the project may be, it won't get done when everyone is tired, muscles cramped, and unhappy.
To keep the momentum going, breaks are required.</p>
<p>As we filed back inside to warm our hands under tap water, I swore to make sure the rest of the team would take care of themselves--and each other.</p>
<h2>The Conclusion</h2>
<p>The rest of the hackathon was uneventful.
We sat at our laptops, cranking out line after line of code.
We took breaks and slept when we were blocked by other tasks.</p>
<p>Although it was our first hackathon, we decided to enter our project in the <em>advanced</em> track, normally reserved for hackathon veterans.
It paid off: we won second place!</p>
<p>Look at us: haggard but satisfied.</p>
<img src="/images/secondplace_blasterhacks.png" alt="The team receiving our second-place win.">]]></content:encoded>
<enclosure url="https://elijahpotter.dev/images/kanban_blasterhacks.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Optimal Workspace]]></title>
            <link>https://elijahpotter.dev/articles/the_optimal_workspace</link>
            <guid>https://elijahpotter.dev/articles/the_optimal_workspace</guid>
            <pubDate>Fri, 29 Mar 2024 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Optimal Workspace</h1>
<p>There are a number of grand challenges my generation faces.
Some threaten the way we live, like the housing crisis.
Others are difficult to solve, like space travel.
More still both threaten the way we live <strong>and</strong> are difficult to tackle, like climate change and pollution.</p>
<p>But of all the discussion of the problems of the modern world, one is often overlooked: how to live happier, more meaningful lives.</p>
<p>I've found that there are two main potential answers, which involve the way we sleep and the way we work.
They are related to the ongoing brain damage of millions of Americans, to social media, and to lighting.
By attempting to solve this problem (that of happiness and productivity), we make all the other grand challenges easier to solve.</p>
<p>Unlike other grand challenges, like space travel, whose solutions lie in the hands of relatively few scientists and administrators, this problem of how to live well is the responsibility of <strong>both</strong> individuals and businesses.</p>
<h2>Step 1: Sleep</h2>
<img src="/images/sleeping_fox.webp" alt="A fox sleeping on pavement">
<p>As a high school student, I had an adversarial relationship with sleep.
The way I saw it, every minute asleep was a minute of homework not done, a minute of friendship not cherished.
A minute of life lost.</p>
<p>I went many nights completely forgoing it entirely, believing that it was merely a preview of death.
In hindsight, I think it reflects well on my work ethic, but poorly on my work technique.
The ancient Greeks seemed to agree with me.
According to them, the god of sleep was Hypnos, twin brother of death.</p>
<p>I couldn't have been more wrong, for sleep is incredibly important.
In fact, I may have received mild brain damage as a result of those decisions.
Sleep is <strong>not</strong> mere lack of activity.
In <em>Why We Sleep</em>, Matthew Walker discusses why sleep is critical for living a life containing depth of work and depth of being.</p>
<p>There are two types of sleep, and each are equally important.</p>
<h3>NREM (Non-Rapid Eye Movement) Sleep</h3>
<p>NREM sleep is critical in the integration and solidification of memory.
This is the portion of sleep where your eyes are stationary, thus the name.
In fact, NREM sleep seems to be the only time the brain actively works to retain memory.
Since NREM sleep is more prevalent in the earlier hours of the night, going to bed two hours later than usual results in a 70% reduction in retention of factual information the next day.</p>
<h3>REM Sleep</h3>
<p>REM sleep is pivotal in creative thinking and emotional processing.</p>
<p>Walker models ideas as a knowledge graph.
During wakefulness, traversal across this graph for the purpose of problem solving is somewhat isolated.
In other words, people who are awake are only thinking of solutions or making connections that are conceptually similar to the existing solutions and ideas.
During REM sleep, however, the brain finds longer-distance relationships, between much further-apart sections of the graph.</p>
<p>PTSD patients always have poor-quality sleep due to nightmares.
It isn't just a correlation; it is literally required for the diagnosis.
Walker claims this is because the brain is attempting to <em>process</em> the negative emotions associated with trauma, but, since the emotions are so strong, wakes itself up.
One of the main goals of cognitive behavioral therapy for PTSD patients is to get the patient through a full night of sleep.
By giving the brain the opportunity to properly process the negative experience (in REM sleep), they are able to dramatically speed up recovery.</p>
<p>Main takeaway: sleep more and you will live a longer, healthier, more impactful life.</p>
<p>The need for sleep varies across your lifetime less than many people think.
The most desperate need is during infancy, with babies and toddlers requiring as much as 18 hours of sleep per day.
As you get older, the need decreases until leveling off at eight and a half hours of time in bed.
No, that is not <em>time asleep</em>; it is <em>time in bed</em>.
It is a myth that seniors require less sleep.
In truth, they require just as much sleep as in middle age, but have greater difficulty achieving it.
This is primarily due to the correlated difficulty breathing, and secondarily due to the earlier bed-time seniors require.</p>
<p>There are a number of traits of the modern world that have resulted in the WHO declaring sleep deprivation a pandemic.
More than 28% of Americans are chronically sleep deprived.</p>
<h3>The Danger of Sleep Deprivation</h3>
<p>Chronic sleep deprivation can result in brain damage.
While you are awake, the <a href="https://en.wikipedia.org/wiki/Glymphatic_system">glymphatic system</a> (which is similar in function, different in implementation to the <a href="https://en.wikipedia.org/wiki/Lymphatic_system">lymphatic system</a>) is inactive.
As a result, there is a gradual buildup of garbage in the brain, blocking neurotransmitters and generally disrupting the ability for neurons to communicate with each other.
One protein that builds up: beta amyloid, the same protein that causes Alzheimer's disease.
When you sleep, the glymphatic system starts up and cleans up the garbage wakefulness accumulated.
However, if not given enough time to do its work (by sleeping), the glymphatic system is not able to do its job entirely.
This is why your likelihood of getting Alzheimer's increases the less you sleep.</p>
<p>Sleep deprivation also causes increased concentrations of cannabinoids.
Just like marijuana, these increase appetite for unhealthy foods.
Additionally, sleep is an energy intensive process.
The body burns more calories at night during sleep than during restful wakefulness.
All in all, there is a very strong connection between sleep deprivation and obesity.</p>
<p>Matthew Walker highlights one especially critical time when physical and mental health (especially memory retention) are important: school.
The primary goal of school is to learn, to which NREM sleep is critical (due to its role in memory retention).</p>
<p>Starting in middle school, children's circadian rhythms start to shift later.
In other words, teenagers naturally fall asleep and wake up later than other age groups.
Walker attributes this to the need for teens to become more independent.
Evolution pushes them to operate by themselves by forcing their parents to go to bed earlier.</p>
<p>Schools operate under the circadian rhythms of parents and teachers.
7 o'clock in the morning is the ideal time for them, but not for the students.
But we find that students don't go to bed any earlier, even though they are forced to wake at the same time as their parents.
The end result: millions of middle and high school students are currently experiencing chronic sleep deprivation, just because school starts too early.</p>
<p>The eating and physical habits developed during one's school years serve as the template for the remainder of their lives.
If one spends their developing years under sleep deprivation (which is the case for millions of students in the U.S.) they are learning how to live their life under the influence of cannabinoids, all the while getting gradual brain damage from the buildup of beta amyloid.</p>
<h3>Sleeping Pills</h3>
<p>As an aside, I feel the need to address sleeping pills.
Do not take them.</p>
<p>There are currently <strong>no</strong> medications on the market that induce real sleep.
There <em>are</em> medications that slow and stop brain function, like Ambien.
They do not provide real sleep, and as such do not result in memory integration, emotional processing or any of the other benefits of sleep.
In fact, studies with Ambien show that induced "sleep" actually results in a net loss of memory, compared with not sleeping at all.
Not only does Ambien stop natural memory integration, it actively causes forgetfulness.</p>
<h3>Phones and Blue LEDs</h3>
<img src="/images/phone_in_bed.jpg" alt="A person looking a phone in bed">
<p>The part of the brain that runs your circadian rhythm is called the <a href="https://en.wikipedia.org/wiki/Suprachiasmatic_nucleus">suprachiasmatic nucleus</a>.
This is an extremely small cluster of neurons that sits on the intersection of your two optic nerves.
It watches for signals of blue light.
If any is detected (since the only significant natural source of blue light is the sky), it can safely assume it is day and will slowly adjust itself under that assumption.</p>
<p>Phones and other sources of blue light mess with this process.
If your eyes detect <em>any</em> at night, you are guaranteed to sleep worse.</p>
<p>There are other neurological effects at play.
Most activities that take place on a phone activate the part of the brain called the amygdala, which we most commonly associate with the fight or flight response.
When activated, your body prepares itself for survival.
It slows or halts digestion, dilates the pupils and increases metabolism.
In other words, it prepares your body for action, not rest.</p>
<p>The issue: one of the most common staples of modern office spaces is the rows upon rows of fluorescent lights.
People working in buildings outfitted with these lights may be experiencing worse quality and quantity of sleep as a result,
especially those working later or longer hours.</p>
<p>In the search for producing deep work and living a full life, getting enough sleep is paramount.
It is the single most effective move one can make, since it makes all other actions more effective.</p>
<p>However, as I previously pointed out, success in the new economy relies particularly on deep work, which requires deep focus.</p>
<h2>Step 2: Deep Focus</h2>
<p>At the beginning, I mentioned there are <strong>two</strong> problems restricting the ability for people to live happy, effective lives.
The first was related to sleep, the second was related to deep focus.</p>
<p>Before I can explain how to achieve deep focus (according to Cal Newport), I first need to explain why it is so critical.</p>
<h2>The Information Age</h2>
<img src="/images/moons.webp" alt="The stages of the moon">
<p>We like to separate history into ages.
The age of the dinosaurs was followed by the ice age, which was followed by the human age (the Anthropocene).
Granted, that's an oversimplification, but it highlights the point: we delineate swaths of time with the most important and unique variable at play.</p>
<p>The industrial age began with the widespread use of the steam engine.
With steam engines came factories, and with factories came factory workers.</p>
<p>We live in a new age, identified by the integration and advancement of computer and communication technology.
We call it the <strong>information age.</strong>
With the information age comes information or <strong>knowledge workers.</strong></p>
<p>This is the subject of Cal Newport's fantastic book, <em>Deep Work: Rules for Focused Success in a Distracted World.</em></p>
<h2>Knowledge Work</h2>
<p>Knowledge work is defined by the need for a depth and breadth of knowledge, and a high ability to manipulate it.
Programming, writing, accounting, and teaching are all examples of knowledge work.
They require enormous amounts of knowledge, and the value they provide is created through thought.</p>
<p>Cal Newport makes the case that the globalization of knowledge (via the internet) has initiated an economic shift that will benefit a small set of knowledge workers: the most skilled will be rewarded superlinearly relative to the rest.</p>
<p>Before the internet, any given company had a limited pool of potential workers: those in the geographic vicinity.
This meant, if a company wanted to hire <em>the best</em> programmer for a job, they would have to make do with what was available.</p>
<p>Today, however, if a company wants to hire the best programmer, they have the entire world's pool.
Knowledge work doesn't typically have a strict on-site requirement.
Companies also have much more to gain from getting <em>the best</em> than getting <em>pretty good</em>, so they're willing to pay much more.
The end result: the people at the top of their field are in higher demand now than ever before.</p>
<p>Cal Newport identifies two key traits in these "superstars" at the top of their field.
First, "the ability to quickly master hard things."
In other words, to quickly learn complex, applicable things.</p>
<p>Second, "the ability to produce at an elite level, in terms of both quality and speed."</p>
<p>All of this is to say: to be successful in the changing economy, people need to become better at deep, meaningful work.
Since the most valuable resources are the result of deep thought, it makes sense that the highest achievers are those who practice deep focus.</p>
<p>In <em>Deep Work</em>, Cal Newport highlights a number of strategies for achieving true, deep focus.
However, only one is broadly applicable for most people.
The others require setting aside an often unreasonable amount of time, something that most employers do not allow, unfortunately.</p>
<h3>Rhythmic Deep Work Scheduling</h3>
<p>This is the most effective strategy for achieving deep focus for most people.
There are others Newport highlights in his book, but this is the most important.
Particularly, most people have <em>daily</em> obligations that keep them from focusing deeply for long stretches at a time.
This could be due to your job, family, or school.</p>
<p>Either way, a number of people have centered on a seemingly simple solution: make a habit.
Instead of forcing yourself to make a decision to move into deep work, regularly schedule it.</p>
<p>The brain has a limited number of deep concentration hours per day (I'm not sure why, but I suspect it is related to the previously described buildup of proteins in the brain).
Use them up front, when you are least likely to be distracted: in the morning.</p>
<p>This is something I've adopted.
I schedule all my classes in the afternoon and most social encounters naturally happen in the evening.
This leaves the entire morning dedicated to deep focus.
I put my phone out of sight and work, undistracted, from 5:45 AM to 11:00 AM.</p>
<p>Since implementing this habit, I've found that I'm able to complete the majority of my work twice as fast,
leaving more time for other pursuits, like <a href="https://writewithharper.com">Harper</a>.</p>
<h3>The Open Office Plan</h3>
<p>The open office plan is an example of a general trend in many modern companies.
Managers believe the largest barrier to productivity is a lack of communication.
In theory, knowledge workers will be able to complete work faster if they are able to share knowledge faster.</p>
<p>For example, a junior accountant needs to perform an operation in Excel to tabulate a company's expenses for this year.
This process, with the knowledge they currently have, will take three hours to perform.
They have to write the formulas, import the data, and present it in a graph.</p>
<p>A senior accountant, however, if given the same job, would know that there is a single button in Excel that does all the same
operations in five minutes.</p>
<p>The friction at play is that the senior accountant knows more about Excel's capabilities than the junior.
If they could communicate this knowledge, the junior's work could be completed much faster.
This is why managers believe that increasing communication increases productivity.</p>
<p>However, most attempts to increase communication have a negative effect on knowledge workers' ability to focus,
which results in a net <em>decrease</em> of productivity in the team.</p>
<p>For example: the open office plan.
Sure, there may be instances where a senior accountant sees that a junior needs help on something, but
if any two people are talking elsewhere in the room, their noise disrupts the focus of every single worker present.</p>
<h3>Hub and Spoke Model</h3>
<img src="/images/bell_labs.webp" alt="The Bell Labs Holmdel Complex">
<p>There is one tried and true method of increasing deep focus without sacrificing communication: changing the way our buildings are designed.</p>
<p>The Bell Labs Holmdel Complex is a fantastic example of this.
At its peak, it was filled with some of the greatest minds of a generation.
Ten Nobel Prizes, five Turing Awards, and twenty-two IEEE Medals of Honor were awarded for work done in this building.</p>
<p>The Holmdel Complex was designed like a hub with spokes.
Each worker had their own private office, so they could work completely uninterrupted.
You can think of each office as a spoke on a wheel.
However, if they needed to go anywhere else in the building, they had to enter the hub: a large hallway that connected all rooms in the building.
This meant that chance encounters with other workers were commonplace, but only when a worker wasn't actively trying to focus.
They retained the benefits of both deep work and communication.</p>
<p>These chance encounters gave rise to unexpected applications of each individual researcher's work.
The people working on Unix discovered they could help the information theory researchers and vice versa.</p>
<h2>Conclusion</h2>
<p>Let us return to the great challenge I'm trying to highlight: the question of how to live happier and more effectively.
The answers are surprisingly simple.</p>
<p>We need to schedule our schools and workplaces to account for the sleep needs of our workers and students.
Students that remember more material become more effective problem solvers later on, which makes all other problems easier to solve.</p>
<p>The people who are attempting to solve other great issues, like unequal access to education, climate change, and pollution will be able to
work more effectively given the opportunity to enter deep focus.</p>
<p>So please, sleep well, and don't be distracted by the things that don't matter.</p>
<img src="/images/prod_map.png" alt="A map of variables that impact your productivity">
<h2>Works Cited</h2>
<p>[1] M. Walker, <em>Why We Sleep: Unlocking the Power of Sleep and Dreams</em>. Simon and Schuster, 2017.</p>
<p>[2] C. Newport, <em>Deep Work: Rules for Focused Success in a Distracted World</em>. Hachette UK, 2016.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Optimal Workspace</h1>
<p>There are a number of grand challenges my generation faces.
Some threaten the way we live, like the housing crisis.
Others are difficult to solve, like space travel.
More still both threaten the way we live <strong>and</strong> are difficult to tackle, like climate change and pollution.</p>
<p>But of all the discussion of the problems of the modern world, one is often overlooked: how do we live happier, more meaningful lives.</p>
<p>I've found that there are two main potential answers, involving the way we sleep and the way we work.
They are related to the ongoing brain damage of millions of Americans, to social media, and to lighting.
By attempting to solve this problem (that of happiness and productivity), we make all the other grand challenges easier to solve.</p>
<p>Unlike other grand challenges, like space travel, whose solutions lie in the hands of relatively few scientists and administrators, this problem of how to live well is the responsibility of <strong>both</strong> individuals and businesses.</p>
<h2>Step 1: Sleep</h2>
<img src="/images/sleeping_fox.webp" alt="A fox sleeping on pavement">
<p>As a high school student, I had an adversarial relationship with sleep.
The way I saw it, every minute asleep was a minute of homework not done, a minute of friendship not cherished.
A minute of life lost.</p>
<p>I went many nights completely forgoing it entirely, believing that it was merely a preview of death.
In hindsight, I think it reflects well on my work ethic, but poorly on my work technique.
The ancient Greeks seemed to agree with me.
According to them, the god of sleep was Hypnos, twin brother of death.</p>
<p>I couldn't have been more wrong, for sleep is incredibly important.
In fact, I may have sustained mild brain damage as a result of my decisions.
Sleep is <strong>not</strong> mere lack of activity.
In the book <em>Why We Sleep</em> by Matthew Walker, he discusses why sleep is critical for living a life containing depth of work and depth of being.</p>
<p>There are two types of sleep, and each is equally important.</p>
<h3>NREM (Non-Rapid Eye Movement) Sleep</h3>
<p>NREM sleep is critical in the integration and solidification of memory.
This is the portion of sleep where your eyes are stationary, thus the name.
In fact, NREM sleep seems to be the only time the brain actively works to retain memory.
Since NREM sleep is more prevalent in the earlier hours of the night, going to bed two hours later than usual results in a 70% reduction in retention of factual information the next day.</p>
<h3>REM Sleep</h3>
<p>REM sleep is pivotal in creative thinking and emotional processing.</p>
<p>Walker models ideas as a knowledge graph.
During wakefulness, traversal across this graph for the purpose of problem solving is somewhat isolated.
In other words, people who are awake are only thinking of solutions or making connections that are conceptually similar to the existing solutions and ideas.
During REM sleep, however, the brain finds longer-distance relationships, drawn from much further-apart sections of the graph.</p>
<p>PTSD patients have poor-quality sleep due to nightmares.
It isn't just a correlation; nightmares are literally required for the diagnosis.</p>
Walker claims this is because the brain is attempting to <em>process</em> the negative emotions associated with trauma, but, since the emotions are so strong, wakes itself up.
One of the main goals of cognitive behavioral therapy for PTSD patients is to get the patient through a full night of sleep.
By giving the brain the opportunity to properly process the negative experience (in REM sleep), they are able to dramatically speed up recovery.</p>
<p>Main takeaway: sleep more and you will live a longer, healthier, more impactful life.</p>
<p>The need for sleep varies across your lifetime less than many people think.
The most desperate need is during infancy, with babies and toddlers requiring as much as 18 hours of sleep per day.
As you get older, the need decreases until leveling off at eight and a half hours of time in bed.
No, that is not <em>time asleep</em>; it is <em>time in bed.</em>
It is a myth that seniors require less sleep.
In truth, they require just as much sleep as in middle age, but have greater difficulty achieving it.
This is primarily due to correlated breathing difficulties, and secondarily due to the earlier bedtime seniors require.</p>
<p>There are a number of traits of the modern world that have resulted in the WHO describing sleep deprivation as an epidemic.
More than 28% of Americans are chronically sleep deprived.</p>
<h3>The Danger of Sleep Deprivation</h3>
<p>Chronic sleep deprivation can result in brain damage.
While you are awake, the <a href="https://en.wikipedia.org/wiki/Glymphatic_system">glymphatic system</a> (which is similar in function, different in implementation to the <a href="https://en.wikipedia.org/wiki/Lymphatic_system">lymphatic system</a>) is inactive.
As a result, there is a gradual buildup of garbage in the brain, blocking neurotransmitters and generally disrupting the ability for neurons to communicate with each other.
One protein that builds up: beta amyloid, the same protein that causes Alzheimer's disease.
When you sleep, the glymphatic system starts up and cleans out the garbage that accumulated during wakefulness.
However, if not given enough time (by sleeping), the glymphatic system cannot finish the job.
This is why your likelihood of developing Alzheimer's increases the less you sleep.</p>
<p>Sleep deprivation also causes increased concentrations of endocannabinoids.
Just like marijuana, these increase appetite for unhealthy foods.
Additionally, sleep is an energy intensive process.
The body burns more calories during sleep than during restful wakefulness.
All in all, there is a very strong connection between sleep deprivation and obesity.</p>
<p>Matthew Walker highlights one especially critical time when physical and mental health (especially memory retention) are important: school.
The primary goal of school is to learn, to which NREM sleep is critical (due to its role in memory retention).</p>
<p>Starting in middle school, children's circadian rhythms start to shift later.
In other words, teenagers naturally fall asleep and wake up later than other age groups.
Walker attributes this to the need for teens to become more independent.
Evolution pushes them to operate by themselves by forcing their parents to go to bed earlier.</p>
<p>Schools operate under the circadian rhythms of parents and teachers.
7 o'clock in the morning is the ideal time for them, but not for the students.
Yet students don't go to bed any earlier, even though they are forced to wake at the same time as their parents.
The end result: millions of middle and high school students are currently experiencing chronic sleep deprivation, just because school starts too early.</p>
<p>The eating and physical habits developed during one's school years serve as the template for the remainder of their lives.
If one spends their developing years under sleep deprivation (which is the case for millions of students in the U.S.) they are learning how to live their life under the influence of cannabinoids, all the while getting gradual brain damage from the buildup of beta amyloid.</p>
<h3>Sleeping Pills</h3>
<p>As an aside, I feel the need to address sleeping pills.
Do not take them.</p>
<p>There are currently <strong>no</strong> medications on the market that induce real sleep.
There <em>are</em> medications that slow and stop brain function, like Ambien.
They do not provide real sleep, and as such do not result in memory integration, emotional processing or any of the other benefits of sleep.
In fact, studies with Ambien show that induced "sleep" actually results in a net loss of memory, compared with not sleeping at all.
Not only does Ambien stop natural memory integration, it actively causes forgetfulness.</p>
<h3>Phones and Blue LEDs</h3>
<img src="/images/phone_in_bed.jpg" alt="A person looking a phone in bed">
<p>The part of the brain that runs your circadian rhythm is called the <a href="https://en.wikipedia.org/wiki/Suprachiasmatic_nucleus">suprachiasmatic nucleus</a>.
This is an extremely small cluster of neurons that sits at the intersection of your two optic nerves.
It watches for signals of blue light.
If any is detected (since the only significant natural source of blue light is the sky), it can safely assume it is day and will slowly adjust itself under that assumption.</p>
<p>Phones and other sources of blue light mess with this process.
If your eyes detect <em>any</em> at night, you will almost certainly sleep worse.</p>
<p>There are other neurological effects at play.
Most activities that take place on a phone activate the part of the brain called the amygdala, which we most commonly associate with the fight or flight response.
When activated, your body prepares itself for survival.
It slows or halts digestion, dilates the pupils and increases metabolism.
In other words, it prepares your body for action, not rest.</p>
<p>The issue: among the most common staples of modern office spaces are rows upon rows of fluorescent lights.
All people working in buildings outfitted with these lights may be experiencing worse quality and quantity of sleep as a result,
especially those working later or longer hours.</p>
<p>In the pursuit of deep work and a full life, getting enough sleep is paramount.
It is the single most effective move one can make, since it makes all other actions more effective.</p>
<p>However, as I previously pointed out, success in the new economy relies particularly on deep work, which requires deep focus.</p>
<h2>Step 2: Deep Focus</h2>
<p>At the beginning, I mentioned there are <strong>two</strong> problems restricting people's ability to live happy, effective lives.
The first was related to sleep, the second was related to deep focus.</p>
<p>Before I can explain how to achieve deep focus (according to Cal Newport), I first need to explain why it is so critical.</p>
<h2>The Information Age</h2>
<img src="/images/moons.webp" alt="The stages of the moon">
<p>We like to separate history into ages.
The age of the dinosaurs was followed by the ice age which was followed by the human age (the Anthropocene).
Granted, that's an oversimplification, but it highlights the point: we delineate swaths of time with the most important and unique variable at play.</p>
<p>The industrial age began with the widespread use of the steam engine.
With steam engines came factories, and with factories came factory workers.</p>
<p>We live in a new age, identified by the integration and advancement of computer and communication technology.
We call it the <strong>information age.</strong>
With the information age comes information or <strong>knowledge workers.</strong></p>
<p>This is the subject of Cal Newport's fantastic book, <em>Deep Work: Rules for Focused Success in a Distracted World.</em></p>
<h2>Knowledge Work</h2>
<p>Knowledge work is defined by the need for a depth and breadth of knowledge, and a high ability to manipulate it.
Programming, writing, accounting, and teaching are all examples of knowledge work.
They require enormous amounts of knowledge, and the value they provide is produced through thought.</p>
<p>Cal Newport makes the case that the globalization of knowledge (via the internet) has initiated an economic shift that will benefit a small set of knowledge workers: the most skilled will be rewarded superlinearly relative to the rest.</p>
<p>Before the internet, any given company had a limited pool of potential workers: those in the geographic vicinity.
This meant, if a company wanted to hire <em>the best</em> programmer for a job, they would have to make do with what was available.</p>
<p>Today, however, if a company wants to hire the best programmer, they have the entire world's pool.
Knowledge work doesn't typically have a strict on-site requirement.
Companies also have much more to gain from getting <em>the best</em> than getting <em>pretty good</em>, so they're willing to pay much more.
The end result: the people at the top of their field are in higher demand now than ever before.</p>
<p>Cal Newport identifies two key traits in these "superstars" at the top of their field.
First, "the ability to quickly master hard things."
In other words, to quickly learn complex, applicable things.</p>
<p>Second, "the ability to produce at an elite level, in terms of both quality and speed."</p>
<p>All of this is to say: to be successful in the changing economy, people need to become better at deep, meaningful work.
Since the most valuable resources are the result of deep thought, it makes sense that the highest achievers are those who practice deep focus.</p>
<p>In <em>Deep Work</em>, Cal Newport highlights a number of strategies for achieving true, deep focus.
However, only one is broadly applicable for most people.
The others require setting aside an often unreasonable amount of time, something that most employers do not allow, unfortunately.</p>
<h3>Rhythmic Deep Work Scheduling</h3>
<p>This is the most effective strategy for achieving deep focus for most people.
There are others Newport highlights in his book, but this is the most important.
Particularly, most people have <em>daily</em> obligations that keep them from focusing deeply for long stretches at a time.
This could be due to your job, family, or school.</p>
<p>Either way, a number of people have centered on a seemingly simple solution: make a habit.
Instead of forcing yourself to make a decision to move into deep work, regularly schedule it.</p>
<p>The brain has a limited number of deep concentration hours per day (I'm not sure why, but I suspect it is related to the previously described buildup of proteins in the brain).
Use them up front, when you are least likely to be distracted: in the morning.</p>
<p>This is something I've adopted.
I schedule all my classes in the afternoon and most social encounters naturally happen in the evening.
This leaves the entire morning dedicated to deep focus.
I put my phone out of sight and work, undistracted, from 5:45 AM to 11:00 AM.</p>
<p>Since implementing this habit, I've found that I'm able to complete the majority of my work twice as fast,
leaving more time for other pursuits, like <a href="https://writewithharper.com">Harper</a>.</p>
<h3>The Open Office Plan</h3>
<p>The open office plan is an example of a general trend in many modern companies.
Managers believe the largest barrier to productivity is a lack of communication.
In theory, knowledge workers will be able to complete work faster if they are able to share knowledge faster.</p>
<p>For example, a junior accountant needs to perform an operation in Excel to tabulate a company's expenses for this year.
This process, with the knowledge they currently have, will take three hours to perform.
They have to write the formulas, import the data, and present it in a graph.</p>
<p>A senior accountant, however, if given the same job, would know that there is a single button in Excel that does all the same
operations in five minutes.</p>
<p>The friction at play is that the senior accountant knows more about Excel's capabilities than the junior.
If they could communicate this knowledge, the junior's work could be completed much faster.
This is why managers believe that increasing communication increases productivity.</p>
<p>However, most attempts to increase communication have a negative effect on knowledge workers' ability to focus,
which results in a net <em>decrease</em> of productivity in the team.</p>
<p>For example: the open office plan.
Sure, there may be instances where a senior accountant sees that a junior needs help on something, but
if any two people are talking elsewhere in the room, their noise disrupts the focus of every single worker present.</p>
<h3>Hub and Spoke Model</h3>
<img src="/images/bell_labs.webp" alt="The Bell Labs Holmdel Complex">
<p>There is one tried and true method of increasing deep focus without sacrificing communication: changing the way our buildings are designed.</p>
<p>The Bell Labs Holmdel Complex is a fantastic example of this.
At its peak, it was filled with some of the greatest minds of a generation.
Ten Nobel Prizes, five Turing Awards, and twenty-two IEEE Medals of Honor were awarded for work done in this building.</p>
<p>The Holmdel Complex was designed like a hub with spokes.
Each worker had their own private office, so they could work completely uninterrupted.
You can think of each office as a spoke on a wheel.
However, if they needed to go anywhere else in the building, they had to enter the hub: a large hallway that connected all rooms in the building.
This meant that chance encounters with other workers were commonplace, but only when a worker wasn't actively trying to focus.
They retained the benefits of both deep work and communication.</p>
<p>These chance encounters gave rise to unexpected applications of each individual researcher's work.
The people working on Unix discovered they could help the information theory researchers and vice versa.</p>
<h2>Conclusion</h2>
<p>Let us return to the great challenge I'm trying to highlight: the question of how to live happier and more effectively.
The answers are surprisingly simple.</p>
<p>We need to schedule our schools and workplaces to account for the sleep needs of our workers and students.
Students that remember more material become more effective problem solvers later on, which makes all other problems easier to solve.</p>
<p>The people who are attempting to solve other great issues, like unequal access to education, climate change, and pollution will be able to
work more effectively given the opportunity to enter deep focus.</p>
<p>So please, sleep well, and don't be distracted by the things that don't matter.</p>
<img src="/images/prod_map.png" alt="A map of variables that impact your productivity">
<h2>Works Cited</h2>
<p>[1] M. Walker, <em>Why We Sleep: Unlocking the Power of Sleep and Dreams</em>. Simon and Schuster, 2017.</p>
<p>[2] C. Newport, <em>Deep Work: Rules for Focused Success in a Distracted World</em>. Hachette UK, 2016.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/prod_map.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Stupid Simple Spell Check]]></title>
            <link>https://elijahpotter.dev/articles/stupid_simple_spell_check</link>
            <guid>https://elijahpotter.dev/articles/stupid_simple_spell_check</guid>
            <pubDate>Mon, 04 Mar 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Stupid-Simple Spell-Check</h1>
<img src="/images/antiques.webp" alt="A sign advertising for antiques, but misspelled.">
<p>For the last month, I've been spending a lot of time replacing one key component
of my writing and programming environment: my grammar checker.</p>
<p>Up until now, I've been using the eponymous <a href="https://languagetool.org/">LanguageTool</a>
via <a href="https://github.com/valentjn/ltex-ls"><code>ltex-ls</code></a> and <a href="https://github.com/neovim/nvim-lspconfig"><code>nvim-lspconfig</code></a>.
Don't get me wrong, these tools are <em>really good,</em> and I would recommend them to anyone and everyone.
However, they come with a few key annoyances.</p>
<h2>LanguageTool Grievances</h2>
<h3>Performance</h3>
<p>LanguageTool is slow.
I'm not exactly sure why.
Every time I would run LanguageTool over my Markdown or <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> documents (which are reasonably sized),
I would have to wait several seconds before even the rudimentary spell-checking would show up.</p>
<p>Additionally, I would find <code>ltex-ls</code> regularly becoming the most memory-hungry application on my laptop,
often exceeding 4 gigabytes.</p>
<p>After hours of scouring their code base, I have come to no better explanation
than that it is written in Java.
There are a couple of questionable algorithmic decisions in there as well.</p>
<h3>Download Size</h3>
<p>As I said: LanguageTool is really quite good.
However, to get everything it can offer, you need to install not only a Java Runtime
Environment (>150 MB on my system) and the actual <code>.jar</code> file (>300 MB), but also
a 16 GB n-gram dataset.</p>
<h2>Grammarly Grievances</h2>
<p>"But Elijah," I hear you say, "just use Grammarly!"</p>
<p><strong>No.</strong> I'm not going to drop $12 a month for something even slower and worse.
Not to mention how they are likely going to use my work to train their large language models.
Grammarly is a great product, just not for me.</p>
<h2>The Algorithm</h2>
<p>Now that I've thoroughly explained my reasoning for implementing a new grammar checker (one that I'm calling <a href="https://github.com/elijah-potter/harper">Harper</a>), I'd like to recount
my first, admittedly naive, attempt at spellchecking.</p>
<p>The first idea we need to get a grip on is <em>Levenshtein edit distance.</em>
In essence, edit distance is the least number of single-character edits (insertions, deletions or replacements) necessary to turn one word into another.
For example, the edit distance between "cat" and "bat" is one; the only edit involves replacing the "c" with a "b".</p>
<p>Similarly, the edit distance between "kitten" and "sitting" is three: remove the "g", replace the second "i" with an "e" and replace the "s" with a "k".
For this naive spellchecking, we aren't too concerned with the exact edits (atomic errors) that occur in a given misspelling, only the magnitude of the error.</p>
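<p>As an aside, the definition above can be transcribed almost directly into code. The following is a minimal sketch of my own (the name <code>naive_edit_distance</code> is not from Harper's codebase): it recurses on the first character of each word, and it runs in exponential time, which is precisely why we'll want a smarter algorithm for the real thing.</p>

```rust
// A direct, exponential-time transcription of the recursive definition of
// Levenshtein edit distance. Illustrative only.
fn naive_edit_distance(source: &[char], target: &[char]) -> usize {
    match (source, target) {
        // Transforming to or from the empty word costs one insertion or
        // deletion per remaining character.
        ([], t) => t.len(),
        (s, []) => s.len(),
        ([s_head, s_rest @ ..], [t_head, t_rest @ ..]) => {
            if s_head == t_head {
                // Matching leading characters cost nothing.
                naive_edit_distance(s_rest, t_rest)
            } else {
                // Try a deletion, an insertion, and a replacement;
                // keep whichever path is cheapest.
                let deletion = naive_edit_distance(s_rest, target);
                let insertion = naive_edit_distance(source, t_rest);
                let replacement = naive_edit_distance(s_rest, t_rest);
                1 + deletion.min(insertion).min(replacement)
            }
        }
    }
}

fn main() {
    let kitten: Vec<char> = "kitten".chars().collect();
    let sitting: Vec<char> = "sitting".chars().collect();
    println!("{}", naive_edit_distance(&kitten, &sitting)); // 3
}
```

<p>This recomputes the same subproblems over and over; the Wagner-Fischer algorithm avoids that by filling in a table instead.</p>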
<p>From a high level view here's how the algorithm is going to work:</p>
<ol>
<li>Determine <em>whether</em> a given word is misspelled.
If not, exit.</li>
<li>Calculate the Levenshtein edit distance between the misspelled word and all valid English words.</li>
<li>Pick the three words with the shortest edit distance and present them to the user as alternative
spelling options.</li>
</ol>
<h3>Step 1.</h3>
<p>To determine whether a given word is misspelled, we will need a list of all the valid words in the English language.
Turns out, this isn't too easy.
For today, we will just use a subset of the English language with this short list:</p>
<pre><code>into
the
a
cat
tree
jumped
</code></pre>
<p>To check if a given word is within the list, we can place the list into a hash set
and query its contents.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">words</span>: <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">String</span>> = <span class="hljs-built_in">vec!</span>[<span class="hljs-string">"into"</span>, <span class="hljs-string">"the"</span>, <span class="hljs-string">"a"</span>, <span class="hljs-string">"cat"</span>, <span class="hljs-string">"tree"</span>, <span class="hljs-string">"jumped"</span>]
    .<span class="hljs-title function_ invoke__">into_iter</span>()
    .<span class="hljs-title function_ invoke__">map</span>(|s| s.<span class="hljs-title function_ invoke__">to_string</span>())
    .<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">let</span> <span class="hljs-variable">word_set</span>: HashSet&#x3C;<span class="hljs-type">String</span>> = words.<span class="hljs-title function_ invoke__">iter</span>().<span class="hljs-title function_ invoke__">cloned</span>().<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">let</span> <span class="hljs-variable">word</span> = <span class="hljs-string">"thw"</span>;
<span class="hljs-keyword">let</span> <span class="hljs-variable">word_chars</span>: <span class="hljs-type">Vec</span>&#x3C;_> = word.<span class="hljs-title function_ invoke__">chars</span>().<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">if</span> word_set.<span class="hljs-title function_ invoke__">contains</span>(word) {
    <span class="hljs-built_in">println!</span>(<span class="hljs-string">"It is a valid English word!"</span>);
    <span class="hljs-keyword">return</span>;
}

<span class="hljs-built_in">println!</span>(<span class="hljs-string">"Are you sure you meant to spell \"{}\" that way?"</span>, word);
</code></pre>
<h3>The Wagner-Fischer Algorithm</h3>
<p>Now that we know our word is actually misspelled, we can move on to finding the correct spelling.
We need to find the edit distance between the misspelled word and all the words in our set.</p>
<p>To do this, we will be using the <a href="https://en.wikipedia.org/wiki/Wagner%E2%80%93Fischer_algorithm">Wagner-Fischer</a> algorithm.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">// Computes the Levenshtein edit distance between two patterns.</span>
<span class="hljs-comment">// This is accomplished via the Wagner-Fischer algorithm.</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">edit_distance</span>(source: &#x26;[<span class="hljs-type">char</span>], target: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">u8</span> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">m</span> = source.<span class="hljs-title function_ invoke__">len</span>();
    <span class="hljs-keyword">let</span> <span class="hljs-variable">n</span> = target.<span class="hljs-title function_ invoke__">len</span>();

    <span class="hljs-comment">// Create an (m + 1)-by-(n + 1) matrix.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">d</span> = <span class="hljs-title function_ invoke__">create_empty_matrix</span>(m + <span class="hljs-number">1</span>, n + <span class="hljs-number">1</span>);

    <span class="hljs-comment">// Since we know we can transform each word into the other by replacing</span>
    <span class="hljs-comment">// successive characters (or deleting them), we can fill the first column and</span>
    <span class="hljs-comment">// row with values from 0..=m and 0..=n, respectively.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..=m {
        d[i][<span class="hljs-number">0</span>] = i <span class="hljs-keyword">as</span> <span class="hljs-type">u8</span>;
    }

    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..=n {
        d[<span class="hljs-number">0</span>][i] = i <span class="hljs-keyword">as</span> <span class="hljs-type">u8</span>;
    }

    <span class="hljs-keyword">for</span> <span class="hljs-variable">j</span> <span class="hljs-keyword">in</span> <span class="hljs-number">1</span>..=n {
        <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">1</span>..=m {
            <span class="hljs-comment">// The edit distance at letter indices i and j, one from each word,</span>
            <span class="hljs-comment">// is the minimum over a deletion, an insertion, and a substitution,</span>
            <span class="hljs-comment">// where the substitution is free if the two characters are equal.</span>

            <span class="hljs-keyword">let</span> <span class="hljs-variable">cost</span> = <span class="hljs-keyword">if</span> source[i - <span class="hljs-number">1</span>] == target[j - <span class="hljs-number">1</span>] { <span class="hljs-number">0</span> } <span class="hljs-keyword">else</span> { <span class="hljs-number">1</span> };
            d[i][j] = (d[i - <span class="hljs-number">1</span>][j] + <span class="hljs-number">1</span>)
                .<span class="hljs-title function_ invoke__">min</span>(d[i][j - <span class="hljs-number">1</span>] + <span class="hljs-number">1</span>)
                .<span class="hljs-title function_ invoke__">min</span>(d[i - <span class="hljs-number">1</span>][j - <span class="hljs-number">1</span>] + cost);
        }
    }

    <span class="hljs-comment">// After all possible edits have been explored and minimized</span>
    <span class="hljs-comment">// the resulting minimum edit distance will be in the final item in the matrix.</span>
    d[m][n]
}

<span class="hljs-comment">// Create an empty matrix of size [m, n]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">create_empty_matrix</span>(m: <span class="hljs-type">usize</span>, n: <span class="hljs-type">usize</span>) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">u8</span>>> {
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">d</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">with_capacity</span>(m);

    <span class="hljs-keyword">for</span> <span class="hljs-variable">_</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..m {
        d.<span class="hljs-title function_ invoke__">push</span>(<span class="hljs-built_in">vec!</span>[<span class="hljs-number">0u8</span>; n]);
    }

    d
}
</code></pre>
<p>This works pretty well.
There are a number of optimizations we could apply to this function alone.
I'll leave those as an exercise for the reader, since they aren't particularly relevant to the meat of the larger algorithm.</p>
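<p>One such optimization, sketched here as an aside (my addition, not part of the original program): each row of the Wagner-Fischer matrix depends only on the row above it, so the full matrix can be replaced with two rows, cutting memory from O(m·n) to O(n).</p>

```rust
// Wagner-Fischer with two rows instead of a full matrix.
// Each DP row depends only on the previous row, so O(n) memory suffices.
fn edit_distance_two_rows(source: &[char], target: &[char]) -> usize {
    let n = target.len();
    let mut prev: Vec<usize> = (0..=n).collect();
    let mut curr = vec![0usize; n + 1];

    for (i, &sc) in source.iter().enumerate() {
        curr[0] = i + 1;
        for (j, &tc) in target.iter().enumerate() {
            let cost = if sc == tc { 0 } else { 1 };
            curr[j + 1] = (prev[j + 1] + 1) // deletion
                .min(curr[j] + 1) // insertion
                .min(prev[j] + cost); // substitution
        }
        std::mem::swap(&mut prev, &mut curr);
    }

    prev[n]
}

fn main() {
    let thw: Vec<char> = "thw".chars().collect();
    let the: Vec<char> = "the".chars().collect();
    assert_eq!(edit_distance_two_rows(&thw, &the), 1);
    println!("ok");
}
```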
<h3>Steps 2 + 3</h3>
<p>Now that we can determine the edit distance between two words, we can perform a brute-force search.
In this short example, we're going to use <code>sort_by_key</code> to do this, since our data set is so small.
If we were working with a larger dictionary (say, the entire English language), there would be a number of things we would need to do to reduce time and memory consumption.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">suggestions</span>: <span class="hljs-type">Vec</span>&#x3C;(<span class="hljs-type">String</span>, <span class="hljs-type">u8</span>)> = words
    .<span class="hljs-title function_ invoke__">into_iter</span>()
    .<span class="hljs-title function_ invoke__">filter_map</span>(|possible_word| {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">possible_chars</span>: <span class="hljs-type">Vec</span>&#x3C;_> = possible_word.<span class="hljs-title function_ invoke__">chars</span>().<span class="hljs-title function_ invoke__">collect</span>();

        <span class="hljs-keyword">let</span> <span class="hljs-variable">dist</span> = <span class="hljs-title function_ invoke__">edit_distance</span>(word_chars.<span class="hljs-title function_ invoke__">as_slice</span>(), &#x26;possible_chars);

        <span class="hljs-keyword">if</span> dist &#x3C;= <span class="hljs-number">2</span> {
            <span class="hljs-title function_ invoke__">Some</span>((possible_word, dist))
        } <span class="hljs-keyword">else</span> {
            <span class="hljs-literal">None</span>
        }
    })
    .<span class="hljs-title function_ invoke__">collect</span>();

suggestions.<span class="hljs-title function_ invoke__">sort_by_key</span>(|(_, d)| *d);

<span class="hljs-built_in">println!</span>(<span class="hljs-string">"Possible alternatives: "</span>);

suggestions.<span class="hljs-title function_ invoke__">iter</span>().<span class="hljs-title function_ invoke__">for_each</span>(|(s, _)| <span class="hljs-built_in">println!</span>(<span class="hljs-string">"- {}"</span>, s));
</code></pre>
<p>If we run the whole program again, we get an output something like:</p>
<pre><code class="hljs language-output">Are you sure you meant to spell "thw" that way?
Possible alternatives:
- the
</code></pre>
<p>That looks pretty good to me!</p>
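<p>As an aside on scaling (my addition, not part of the original program): the edit distance between two words is at least the difference in their lengths, so a large dictionary can be pre-filtered by length before computing any distances at all.</p>

```rust
// Edit distance is bounded below by the difference in word lengths,
// so words whose lengths differ from the query by more than the
// threshold can be skipped without running Wagner-Fischer.
fn length_prefilter<'a>(word: &str, dictionary: &[&'a str], max_dist: usize) -> Vec<&'a str> {
    let word_len = word.chars().count() as i64;
    dictionary
        .iter()
        .filter(|candidate| {
            let diff = (candidate.chars().count() as i64 - word_len).abs();
            diff as usize <= max_dist
        })
        .copied()
        .collect()
}

fn main() {
    let dict = ["into", "the", "a", "cat", "tree", "jumped"];
    let survivors = length_prefilter("thw", &dict, 2);
    // "jumped" (length 6) is ruled out without any distance computation.
    assert!(!survivors.contains(&"jumped"));
    assert!(survivors.contains(&"the"));
    println!("ok");
}
```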
<p>If you would like to look at the whole program, and maybe try out your own inputs, <a href="https://play.rust-lang.org/?version=stable&#x26;mode=debug&#x26;edition=2021&#x26;gist=fb7910ad1fb3a6c944cbc2ae8659bb31">go right on ahead</a>.</p>]]></description>
            <content:encoded><![CDATA[<h1>Stupid-Simple Spell-Check</h1>
<img src="/images/antiques.webp" alt="A sign advertising for antiques, but misspelled.">
<p>For the last month, I've been spending a lot of time replacing one key component
of my writing and programming environment: my grammar checker.</p>
<p>Up until now, I've been using the eponymous <a href="https://languagetool.org/">LanguageTool</a>
via <a href="https://github.com/valentjn/ltex-ls"><code>ltex-ls</code></a> and <a href="https://github.com/neovim/nvim-lspconfig"><code>nvim-lspconfig</code></a>.
Don't get me wrong, these tools are <em>really good,</em> and I would recommend them to anyone and everyone.
However, they come with a few key annoyances.</p>
<h2>LanguageTool Grievances</h2>
<h3>Performance</h3>
<p>LanguageTool is slow.
I'm not exactly sure why.
Every time I would run LanguageTool over my Markdown or <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> documents (which are reasonably sized),
I would have to wait several seconds before even the rudimentary spell-checking would show up.</p>
<p>Additionally, I would find <code>ltex-ls</code> regularly becoming the most memory-hungry application on my laptop,
often exceeding 4 gigabytes.</p>
<p>After hours of scouring their code base, I have come to no better explanation
than that it is written in Java.
There are a couple questionable algorithmic decisions in there as well.</p>
<h3>Download Size</h3>
<p>As I said: LanguageTool is really quite good.
However, to get everything it can offer, you need to install not only a Java Runtime
Environment (>150 MB on my system) and the actual <code>.jar</code> file (>300 MB), but also
a 16 GB n-gram dataset.</p>
<h2>Grammarly Grievances</h2>
<p>"But Elijah," I hear you say, "just use Grammarly!"</p>
<p><strong>No.</strong> I'm not going to drop $12 a month for something even slower and worse.
Not to mention how they are likely going to use my work to train their large language models.
Grammarly is a great product, just not for me.</p>
<h2>The Algorithm</h2>
<p>Now that I've thoroughly explained my reasoning for implementing a new grammar checker (one that I'm calling <a href="https://github.com/elijah-potter/harper">Harper</a>), I'd like to recount
my first, admittedly naive, attempt at spellchecking.</p>
<p>The first idea we need to get a grip on is <em>Levenshtein edit distance.</em>
In essence, edit distance is the least number of single-character edits (insertions, deletions or replacements) necessary to turn one word into another.
For example, the edit distance between "cat" and "bat" is one; the only edit involves replacing the "c" with a "b".</p>
<p>Similarly, the edit distance between "kitten" and "sitting" is three: remove the "g", replace the second "i" with an "e" and replace the "s" with a "k".
For this naive spellchecking, we aren't too concerned with the exact edits (atomic errors) that occur in a given misspelling, only the magnitude of the error.</p>
<p>From a high-level view, here's how the algorithm is going to work:</p>
<ol>
<li>Determine <em>whether</em> a given word is misspelled.
If not, exit.</li>
<li>Calculate the Levenshtein edit distance between the misspelled word and all valid English words.</li>
<li>Pick the three words with the shortest edit distance and present them to the user as alternative
spelling options.</li>
</ol>
<h3>Step 1.</h3>
<p>To determine whether a given word is misspelled, we will need a list of all the valid words in the English language.
Turns out, this isn't too easy.
For today, we will just use a subset of the English language with this short list:</p>
<pre><code>into
the
a
cat
tree
jumped
</code></pre>
<p>To check whether a given word is in the list, we can place the list into a hash set
and query its contents.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">words</span>: <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">String</span>> = <span class="hljs-built_in">vec!</span>[<span class="hljs-string">"into"</span>, <span class="hljs-string">"the"</span>, <span class="hljs-string">"a"</span>, <span class="hljs-string">"cat"</span>, <span class="hljs-string">"tree"</span>, <span class="hljs-string">"jumped"</span>]
    .<span class="hljs-title function_ invoke__">into_iter</span>()
    .<span class="hljs-title function_ invoke__">map</span>(|s| s.<span class="hljs-title function_ invoke__">to_string</span>())
    .<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">let</span> <span class="hljs-variable">word_set</span>: HashSet&#x3C;<span class="hljs-type">String</span>> = words.<span class="hljs-title function_ invoke__">iter</span>().<span class="hljs-title function_ invoke__">cloned</span>().<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">let</span> <span class="hljs-variable">word</span> = <span class="hljs-string">"thw"</span>;
<span class="hljs-keyword">let</span> <span class="hljs-variable">word_chars</span>: <span class="hljs-type">Vec</span>&#x3C;_> = word.<span class="hljs-title function_ invoke__">chars</span>().<span class="hljs-title function_ invoke__">collect</span>();

<span class="hljs-keyword">if</span> word_set.<span class="hljs-title function_ invoke__">contains</span>(word) {
    <span class="hljs-built_in">println!</span>(<span class="hljs-string">"It is a valid English word!"</span>);
    <span class="hljs-keyword">return</span>;
}

<span class="hljs-built_in">println!</span>(<span class="hljs-string">"Are you sure you meant to spell \"{}\" that way?"</span>, word);
</code></pre>
<h3>The Wagner-Fischer Algorithm</h3>
<p>Now that we know our word is actually misspelled, we can move on to finding the correct spelling.
We need to find the edit distance between the misspelled word and all the words in our set.</p>
<p>To do this, we will be using the <a href="https://en.wikipedia.org/wiki/Wagner%E2%80%93Fischer_algorithm">Wagner-Fischer</a> algorithm.</p>
<pre><code class="hljs language-rust"><span class="hljs-comment">// Computes the Levenshtein edit distance between two patterns.</span>
<span class="hljs-comment">// This is accomplished via the Wagner-Fischer algorithm.</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">edit_distance</span>(source: &#x26;[<span class="hljs-type">char</span>], target: &#x26;[<span class="hljs-type">char</span>]) <span class="hljs-punctuation">-></span> <span class="hljs-type">u8</span> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">m</span> = source.<span class="hljs-title function_ invoke__">len</span>();
    <span class="hljs-keyword">let</span> <span class="hljs-variable">n</span> = target.<span class="hljs-title function_ invoke__">len</span>();

    <span class="hljs-comment">// Create an (m + 1)-by-(n + 1) matrix.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">d</span> = <span class="hljs-title function_ invoke__">create_empty_matrix</span>(m + <span class="hljs-number">1</span>, n + <span class="hljs-number">1</span>);

    <span class="hljs-comment">// Since we know we can transform each word into the other by replacing</span>
    <span class="hljs-comment">// successive characters (or deleting them), we can fill the first column and</span>
    <span class="hljs-comment">// row with values from 0..=m and 0..=n, respectively.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..=m {
        d[i][<span class="hljs-number">0</span>] = i <span class="hljs-keyword">as</span> <span class="hljs-type">u8</span>;
    }

    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..=n {
        d[<span class="hljs-number">0</span>][i] = i <span class="hljs-keyword">as</span> <span class="hljs-type">u8</span>;
    }

    <span class="hljs-keyword">for</span> <span class="hljs-variable">j</span> <span class="hljs-keyword">in</span> <span class="hljs-number">1</span>..=n {
        <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">1</span>..=m {
            <span class="hljs-comment">// The edit distance at letter indices i and j, one from each word,</span>
            <span class="hljs-comment">// is the minimum over a deletion, an insertion, and a substitution,</span>
            <span class="hljs-comment">// where the substitution is free if the two characters are equal.</span>

            <span class="hljs-keyword">let</span> <span class="hljs-variable">cost</span> = <span class="hljs-keyword">if</span> source[i - <span class="hljs-number">1</span>] == target[j - <span class="hljs-number">1</span>] { <span class="hljs-number">0</span> } <span class="hljs-keyword">else</span> { <span class="hljs-number">1</span> };
            d[i][j] = (d[i - <span class="hljs-number">1</span>][j] + <span class="hljs-number">1</span>)
                .<span class="hljs-title function_ invoke__">min</span>(d[i][j - <span class="hljs-number">1</span>] + <span class="hljs-number">1</span>)
                .<span class="hljs-title function_ invoke__">min</span>(d[i - <span class="hljs-number">1</span>][j - <span class="hljs-number">1</span>] + cost);
        }
    }

    <span class="hljs-comment">// After all possible edits have been explored and minimized</span>
    <span class="hljs-comment">// the resulting minimum edit distance will be in the final item in the matrix.</span>
    d[m][n]
}

<span class="hljs-comment">// Create an empty matrix of size [m, n]</span>
<span class="hljs-keyword">fn</span> <span class="hljs-title function_">create_empty_matrix</span>(m: <span class="hljs-type">usize</span>, n: <span class="hljs-type">usize</span>) <span class="hljs-punctuation">-></span> <span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">u8</span>>> {
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">d</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">with_capacity</span>(m);

    <span class="hljs-keyword">for</span> <span class="hljs-variable">_</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..m {
        d.<span class="hljs-title function_ invoke__">push</span>(<span class="hljs-built_in">vec!</span>[<span class="hljs-number">0u8</span>; n]);
    }

    d
}
</code></pre>
<p>This works pretty well.
There are a number of optimizations we could apply to this function alone.
I'll leave those as an exercise for the reader, since they aren't particularly relevant to the meat of the larger algorithm.</p>
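<p>One such optimization, sketched here as an aside (my addition, not part of the original program): each row of the Wagner-Fischer matrix depends only on the row above it, so the full matrix can be replaced with two rows, cutting memory from O(m·n) to O(n).</p>

```rust
// Wagner-Fischer with two rows instead of a full matrix.
// Each DP row depends only on the previous row, so O(n) memory suffices.
fn edit_distance_two_rows(source: &[char], target: &[char]) -> usize {
    let n = target.len();
    let mut prev: Vec<usize> = (0..=n).collect();
    let mut curr = vec![0usize; n + 1];

    for (i, &sc) in source.iter().enumerate() {
        curr[0] = i + 1;
        for (j, &tc) in target.iter().enumerate() {
            let cost = if sc == tc { 0 } else { 1 };
            curr[j + 1] = (prev[j + 1] + 1) // deletion
                .min(curr[j] + 1) // insertion
                .min(prev[j] + cost); // substitution
        }
        std::mem::swap(&mut prev, &mut curr);
    }

    prev[n]
}

fn main() {
    let thw: Vec<char> = "thw".chars().collect();
    let the: Vec<char> = "the".chars().collect();
    assert_eq!(edit_distance_two_rows(&thw, &the), 1);
    println!("ok");
}
```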
<h3>Steps 2 + 3</h3>
<p>Now that we can determine the edit distance between two words, we can perform a brute-force search.
In this short example, we're going to use <code>sort_by_key</code> to do this, since our data set is so small.
If we were working with a larger dictionary (say, the entire English language), there would be a number of things we would need to do to reduce time and memory consumption.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">suggestions</span>: <span class="hljs-type">Vec</span>&#x3C;(<span class="hljs-type">String</span>, <span class="hljs-type">u8</span>)> = words
    .<span class="hljs-title function_ invoke__">into_iter</span>()
    .<span class="hljs-title function_ invoke__">filter_map</span>(|possible_word| {
        <span class="hljs-keyword">let</span> <span class="hljs-variable">possible_chars</span>: <span class="hljs-type">Vec</span>&#x3C;_> = possible_word.<span class="hljs-title function_ invoke__">chars</span>().<span class="hljs-title function_ invoke__">collect</span>();

        <span class="hljs-keyword">let</span> <span class="hljs-variable">dist</span> = <span class="hljs-title function_ invoke__">edit_distance</span>(word_chars.<span class="hljs-title function_ invoke__">as_slice</span>(), &#x26;possible_chars);

        <span class="hljs-keyword">if</span> dist &#x3C;= <span class="hljs-number">2</span> {
            <span class="hljs-title function_ invoke__">Some</span>((possible_word, dist))
        } <span class="hljs-keyword">else</span> {
            <span class="hljs-literal">None</span>
        }
    })
    .<span class="hljs-title function_ invoke__">collect</span>();

suggestions.<span class="hljs-title function_ invoke__">sort_by_key</span>(|(_, d)| *d);

<span class="hljs-built_in">println!</span>(<span class="hljs-string">"Possible alternatives: "</span>);

suggestions.<span class="hljs-title function_ invoke__">iter</span>().<span class="hljs-title function_ invoke__">for_each</span>(|(s, _)| <span class="hljs-built_in">println!</span>(<span class="hljs-string">"- {}"</span>, s));
</code></pre>
<p>If we run the whole program again, we get an output something like:</p>
<pre><code class="hljs language-output">Are you sure you meant to spell "thw" that way?
Possible alternatives:
- the
</code></pre>
<p>That looks pretty good to me!</p>
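<p>As an aside on scaling (my addition, not part of the original program): the edit distance between two words is at least the difference in their lengths, so a large dictionary can be pre-filtered by length before computing any distances at all.</p>

```rust
// Edit distance is bounded below by the difference in word lengths,
// so words whose lengths differ from the query by more than the
// threshold can be skipped without running Wagner-Fischer.
fn length_prefilter<'a>(word: &str, dictionary: &[&'a str], max_dist: usize) -> Vec<&'a str> {
    let word_len = word.chars().count() as i64;
    dictionary
        .iter()
        .filter(|candidate| {
            let diff = (candidate.chars().count() as i64 - word_len).abs();
            diff as usize <= max_dist
        })
        .copied()
        .collect()
}

fn main() {
    let dict = ["into", "the", "a", "cat", "tree", "jumped"];
    let survivors = length_prefilter("thw", &dict, 2);
    // "jumped" (length 6) is ruled out without any distance computation.
    assert!(!survivors.contains(&"jumped"));
    assert!(survivors.contains(&"the"));
    println!("ok");
}
```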
<p>If you would like to look at the whole program, and maybe try out your own inputs, <a href="https://play.rust-lang.org/?version=stable&#x26;mode=debug&#x26;edition=2021&#x26;gist=fb7910ad1fb3a6c944cbc2ae8659bb31">go right on ahead</a>.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/antiques.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Building a Software Render Engine from Scratch]]></title>
            <link>https://elijahpotter.dev/articles/building_a_software_render_engine_from_scratch</link>
            <guid>https://elijahpotter.dev/articles/building_a_software_render_engine_from_scratch</guid>
            <pubDate>Wed, 31 Jan 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>How I Built a Software Render Engine from Scratch</h1>
<blockquote>
<p><strong>Heads Up:</strong> This article is a republished (with some tweaks on spelling, grammar and layout) version of
an article I wrote in my senior year of high school for my Linear Algebra class.
As such, the publish date is not quite correct.</p>
</blockquote>
<p>Computers are fantastic at processing, producing, and consuming data.
But I've often found that the most difficult part of the pipeline is representing the data to a user.
This can often take the form of bar charts or scatter plots, but there are situations where they just don't fit the bill.</p>
<p>3D graphics enable developers to create interactive programs that appear most similar to the natural world.
Because they present a three-dimensional space, the barrier to entry drops.
I wanted to learn more about how this worked.</p>
<figure><img src="/images/star_fox.png" alt="Star Fox, one of the earliest major successes of 3D graphics in the gaming industry." title="Star Fox, one of the earliest major successes of 3D graphics in the gaming industry."><figcaption>Star Fox, one of the earliest major successes of 3D graphics in the gaming industry.</figcaption></figure>
<h2>Inspiration</h2>
<p>For a long time, I've been told that the most prevalent application of linear algebra was computer graphics.
Before I even began my study on linear algebra, I knew I wanted to get into software rendering.</p>
<p>One of the big roadblocks was the amount of technical know-how I thought it required.
You see, most 3D programs do all the number-crunching on the specially designed <strong>graphics processing unit</strong> that is readily available on most modern computers.
From my previous attempts to use GPUs, I knew setting up the pipeline is quite involved.
If I went that route again, I know I would likely spend most of my time dealing with vendor-specific APIs.</p>
<p>Since I wanted to focus on the math, I postponed the project.
That is, until it occurred to me that I could simply <em>not use the GPU.</em>
I know it might sound obvious, but it felt so freeing at the time.</p>
<h3>What Is a Software Render Engine?</h3>
<p>A render engine is a piece of software that takes a set of triangles in space and projects them onto a 2D grid that can be displayed on a computer screen.</p>
<p>A software render engine is, as it may sound, a render engine that does all computation in software.
No specialized hardware is utilized <strong>at all.</strong></p>
<h2>Demo</h2>
<p>Before I get into <strong>how</strong> it works, I want to give you the chance to try it out yourself.
I've created a <strong>very</strong> simple scene to demonstrate.</p>
<table>
<thead>
<tr>
<th>Function</th>
<th align="right">Key</th>
</tr>
</thead>
<tbody>
<tr>
<td>Look Around</td>
<td align="right">Left Mouse Click</td>
</tr>
<tr>
<td>Toggle Depth Buffer</td>
<td align="right">R</td>
</tr>
<tr>
<td>Toggle Face Sorting</td>
<td align="right">O</td>
</tr>
<tr>
<td>Toggle Backface Culling</td>
<td align="right">B</td>
</tr>
<tr>
<td>Increase FOV</td>
<td align="right">Arrow Key Up 🔼</td>
</tr>
<tr>
<td>Decrease FOV</td>
<td align="right">Arrow Key Down 🔽</td>
</tr>
<tr>
<td>Move View</td>
<td align="right">W, A, S, D</td>
</tr>
</tbody>
</table>
<iframe frameBorder="0" style="width: 100%; aspect-ratio: 16/9;" allowfullscreen src="/standalonerenderer.html"></iframe>
<h2>Explanation</h2>
<h3>Note</h3>
<p>In this article, I intend only to talk about the math related to the problem.
If you are interested in the nitty-gritty of <em>how lines and shapes get drawn to the screen,</em>
I suggest you read up on <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm">Bresenham's line algorithm</a>,
<a href="https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm">Xiaolin Wu's line algorithm</a>, and
<a href="https://www.gabrielgambetta.com/computer-graphics-from-scratch/07-filled-triangles.html">Drawing Filled Triangles</a>.</p>
<h3>Projection</h3>
<p>With a perspective camera, projection happens from the world, towards the "sensor," which is a defined point.
For the sake of argument, let's say that point is at the origin and the camera is facing the positive <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis.</p>
<p>We want all other points to be projected onto a plane <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> distance away.
If you are a photographer, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> will be the focal length of the camera.</p>
<figure><img src="/images/projection_comparison.gif" alt="A graphic demonstrating the different projection types" title="Perspective Projection (left) vs Orthographic Projection (right)"><figcaption>Perspective Projection (left) vs Orthographic Projection (right)</figcaption></figure>
<h3>Orthographic Projection</h3>
<p>In the case of orthographic projection, this is easy.
Because we have placed the camera on the origin, facing the positive <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis, we can draw the coordinates of any given point directly to the screen.
The only consideration necessary pertains to the points <em>behind</em> the camera, which we can skip by checking the sign of the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> component.</p>
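<p>A minimal sketch of this orthographic step (my illustration, not from the original engine; the function name is my own): keep the x and y components directly, and skip points with a negative z component, since they lie behind the camera.</p>

```rust
// Orthographically project a 3D point onto the screen plane,
// assuming the camera sits at the origin facing the positive z axis.
// Returns None for points behind the camera (negative z).
fn project_orthographic(p: [f64; 3]) -> Option<[f64; 2]> {
    let [x, y, z] = p;
    if z < 0.0 {
        return None;
    }
    Some([x, y])
}

fn main() {
    assert_eq!(project_orthographic([1.0, 2.0, 3.0]), Some([1.0, 2.0]));
    assert_eq!(project_orthographic([1.0, 2.0, -3.0]), None);
    println!("ok");
}
```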
<h3>Homogeneous Coordinates</h3>
<p>This is where <em>homogeneous coordinates</em> come in.
When working in euclidean space, we represent a given vector or coordinate using three components:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
  x \\
  y \\
  z
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>In projective space, by contrast, we represent any given vector or coordinate using <em>four</em> components:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>w</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
  x \\
  y \\
  z \\
  w
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>We can convert freely between the two representations.
To convert a homogeneous coordinate to Euclidean form, divide the other components by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span>:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>euclidean coordinate</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>x</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>y</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>z</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{euclidean coordinate} = \begin{bmatrix}
  x / w \\
  y / w \\
  z / w
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord text"><span class="mord">euclidean coordinate</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
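<p>As a quick sketch of that division (plain Python lists; no particular math library assumed):</p>

```python
def to_euclidean(h):
    """Convert a homogeneous coordinate [x, y, z, w] to Euclidean [x/w, y/w, z/w]."""
    x, y, z, w = h
    return [x / w, y / w, z / w]

# Scaling every component by the same factor describes the same Euclidean point:
to_euclidean([2.0, 4.0, 6.0, 2.0])  # [1.0, 2.0, 3.0]
to_euclidean([1.0, 2.0, 3.0, 1.0])  # [1.0, 2.0, 3.0]
```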
<p>We can convert any Euclidean coordinate to a homogeneous coordinate by setting <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">w = 1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">1</span></span></span></span>:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>homogenous coordinate</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{homogenous coordinate} = \begin{bmatrix}
  x \\
  y \\
  z \\
  1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">homogenous coordinate</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Just as we can perform various transformations on Euclidean coordinates, we can perform the same transformations on homogeneous coordinates.
The major difference: translation, which requires component-wise addition in Euclidean coordinates, becomes a matrix multiplication in homogeneous coordinates.</p>
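<p>A minimal sketch of a translation expressed as a matrix product (plain Python, 4&#215;4 row-major lists; the helper names are illustrative):</p>

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component column vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def translation(tx, ty, tz):
    """Build the 4x4 matrix that translates a homogeneous point by (tx, ty, tz)."""
    return [
        [1, 0, 0, tx],
        [0, 1, 0, ty],
        [0, 0, 1, tz],
        [0, 0, 0, 1],
    ]

# Translating the origin (with w = 1) by (1, 2, 0) via multiplication alone:
mat_vec(translation(1, 2, 0), [0, 0, 0, 1])  # [1, 2, 0, 1]
```

<p>Because the translation lives in the matrix itself, it can be chained with rotations and scales by multiplying the matrices together before ever touching a point.</p>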
<p>For example, let's say we have a point at the origin, and we want to perform both a translation and a rotation.
If we were using Euclidean coordinates, we would have to translate the point via addition, then rotate it in a separate step.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mrow><mo fence="true">(</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo>+</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo fence="true">)</mo></mrow><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" 
columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">
\begin{bmatrix}
  0 &#x26; 1 &#x26; 0 \\
  -1 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 \\
\end{bmatrix}

\left(
  \begin{bmatrix}
    0 \\
    0 \\
    0
  \end{bmatrix}
  +
  \begin{bmatrix}
    1 \\
    2 \\
    0
  \end{bmatrix}
\right)

=

\begin{bmatrix}
  2 \\
  -1 \\
  0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span 
style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.875em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,84c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-92c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.875em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,9
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-144c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
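<p>Numerically, the two-step Euclidean version above can be sketched like this (plain Python; the quarter-turn matrix is the one from the example):</p>

```python
def add3(a, b):
    """Component-wise addition of two 3-vectors."""
    return [a[i] + b[i] for i in range(3)]

def mat3_vec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-component column vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# The rotation matrix from the example above: a quarter-turn about the z axis.
rotate = [
    [0, 1, 0],
    [-1, 0, 0],
    [0, 0, 1],
]

# Translate the origin by (1, 2, 0), then rotate -- two separate operations.
mat3_vec(rotate, add3([0, 0, 0], [1, 2, 0]))  # [2, -1, 0]
```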
<p>In homogeneous coordinates, we can do both with a single matrix operation by composing the transformation matrix ahead of time:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T = \begin{bmatrix}
  0 &#x26; 1 &#x26; 0 &#x26; 0 \\
  -1 &#x26; 0 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}

\begin{bmatrix}
  1 &#x26; 0 &#x26; 0 &#x26; 1 \\
  0 &#x26; 1 &#x26; 0 &#x26; 2 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t 
vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t 
vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T \begin{bmatrix}
  0 \\
  0 \\
  0 \\
  1
\end{bmatrix}

=

\begin{bmatrix}
  2 \\
  -1 \\
  0 \\
  1
\end{bmatrix}

</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>By condensing the entire transformation into a single matrix, we save a ton of computing time: each point needs only one matrix-vector multiplication, no matter how many transformations we compose.</p>
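<p>To make that concrete, here is a minimal sketch (in Python, with illustrative names; this is not the demo's actual code) of precomposing the rotation and translation above and applying the result to the origin:</p>

```python
# Precompose a rotation and a translation into one matrix T,
# then apply T to a homogeneous point. Row-major nested lists.
def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_apply(m, v):
    """Apply a 4x4 matrix to a homogeneous 4-vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

rotation = [[0, 1, 0, 0],
            [-1, 0, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
translation = [[1, 0, 0, 1],
               [0, 1, 0, 2],
               [0, 0, 1, 0],
               [0, 0, 0, 1]]

# Prepare T once, ahead of time...
T = mat_mul(rotation, translation)
# ...then every point costs a single matrix-vector multiplication.
print(mat_apply(T, [0, 0, 0, 1]))  # [2, -1, 0, 1], as in the worked example
```

<p>Preparing <code>T</code> once keeps the per-point cost constant regardless of how many transformations are chained together.</p>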
<h3>Perspective Projection</h3>
<p>The essential idea of perspective projection is simple: the further a point is from the camera, the closer it should appear to the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis.
Remember that <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> is the distance to the surface we are projecting onto.
Homogeneous coordinates make this possible with the following matrix:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>perspective projection matrix</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn><mi mathvariant="normal">/</mi><mi>d</mi></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{perspective projection matrix} =

\begin{bmatrix}
  1 &#x26; 0 &#x26; 0 &#x26; 0 \\
  0 &#x26; 1 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; -1 / d &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">perspective projection matrix</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1/</span><span class="mord mathnormal">d</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span 
class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Assuming you have an understanding of matrix multiplication, it should be apparent why this works.
When the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span> component of the resulting vector is computed, the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> component is negated and divided by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span>.
The result then becomes the new <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span> component, which divides every other component of the resulting vector due to the nature of homogeneous coordinates.
In short: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi><mo>←</mo><mi>w</mi><mo>−</mo><mi>z</mi><mi mathvariant="normal">/</mi><mi>d</mi></mrow><annotation encoding="application/x-tex">w \leftarrow w - z / d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">←</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span><span class="mord">/</span><span class="mord mathnormal">d</span></span></span></span></p>
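<p>As a sketch of the mechanics (illustrative Python, not the demo's code; it assumes the convention implied by the matrix's last row, with the camera looking down the negative z axis):</p>

```python
# Illustrative sketch of the perspective projection step.
d = 1.0  # distance from the camera to the projection surface (assumed value)

def project(x, y, z, w=1.0):
    """Apply the perspective matrix's last row, then the homogeneous divide."""
    # The last row [0, 0, -1/d, 1] gives w' = (-1/d) * z + w.
    w_out = w - z / d
    # Converting back from homogeneous coordinates divides by w',
    # pulling distant points toward the z axis.
    return (x / w_out, y / w_out, z / w_out)

# A point three units in front of the camera (negative z is "forward" here):
print(project(2.0, 4.0, -3.0))  # (0.5, 1.0, -0.75)
```

<p>Note how the same x and y shrink with distance: at z = -1 the point projects to (1.0, 2.0, -0.5), while at z = -3 it projects to (0.5, 1.0, -0.75).</p>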
<h3>Color</h3>
<p>Now that we've established exactly how to project points in space onto the screen, we need to start coloring in triangles.
As I said before, I am not going to go into the algorithms that do this.
I want to discuss how to determine the color to fill in.</p>
<p>We could just choose one solid color.
As you can see from the demo (by pressing <code>R</code>), this doesn't lead to a particularly impressive or visually pleasing result.
I want an additional way to convey depth.</p>
<p>Given the three points that make up a triangle <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>1</mn></msub></mrow><annotation encoding="application/x-tex">p_1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>2</mn></msub></mrow><annotation encoding="application/x-tex">p_2</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, and <span class="katex"><span class="katex-mathml"><math 
xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>3</mn></msub></mrow><annotation encoding="application/x-tex">p_3</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, we can find its normal vector (the vector perpendicular to its surface), <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>n</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{n}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> fairly easily.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>h</mi><mo>⃗</mo></mover><mo>=</mo><mfrac><mrow><msub><mi>p</mi><mn>2</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub></mrow><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><msub><mi>p</mi><mn>2</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac><mo>×</mo><mfrac><mrow><msub><mi>p</mi><mn>3</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub></mrow><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><msub><mi>p</mi><mn>3</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac></mrow><annotation encoding="application/x-tex">  \vec{h} = \frac{p_2 - p_1}{||p_2 - p_1||} \times \frac{p_3 - p_1}{||p_3 - p_1||}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.9774em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.9774em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">h</span></span><span style="top:-3.2634em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.1963em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span 
class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:2.1963em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span 
class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 
size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>n</mi><mo>⃗</mo></mover><mo>=</mo><mfrac><mi>h</mi><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mi>h</mi><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac></mrow><annotation encoding="application/x-tex">  \vec{n} = \frac{h}{||h||}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.3074em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3714em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord mathnormal">h</span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></span>
<blockquote>
<p><strong>Note:</strong> the vertical bars around a vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mover accent="true"><mi>v</mi><mo>⃗</mo></mover><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">||\vec{v}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mord">∣∣</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal" style="margin-right:0.03588em;">v</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2077em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span></span> signify getting the vector's length.</p>
</blockquote>
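<p>The two equations above translate almost directly into code. Below is a minimal sketch in Python using plain tuples; the helper names (<code>cross</code>, <code>normalize</code>, <code>triangle_normal</code>) are my own and are not taken from the demo's source.</p>

```python
def sub(a, b):
    # Component-wise vector subtraction.
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    # Cross product of two 3D vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    # Scale a vector to unit length.
    n = (a[0] ** 2 + a[1] ** 2 + a[2] ** 2) ** 0.5
    return (a[0] / n, a[1] / n, a[2] / n)

def triangle_normal(p1, p2, p3):
    # h: cross product of the two normalized edge vectors,
    # normalized once more to give the unit normal n.
    h = cross(normalize(sub(p2, p1)), normalize(sub(p3, p1)))
    return normalize(h)
```

<p>For a triangle lying flat in the <code>xy</code> plane, this yields a unit normal pointing along <code>z</code>, as expected.</p>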
<p>Now that we have the triangle's normal, we can fill the face in more brightly depending on how directly it faces the camera.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>brightness</mtext><mo>=</mo><mover accent="true"><mi>n</mi><mo>⃗</mo></mover><mo>⋅</mo><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mfrac><mrow><msub><mi>p</mi><mn>1</mn></msub><mo>+</mo><msub><mi>p</mi><mn>2</mn></msub><mo>+</mo><msub><mi>p</mi><mn>3</mn></msub></mrow><mn>3</mn></mfrac><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">\text{brightness} = \vec{n} \cdot ||\frac{p_1 + p_2 + p_3}{3}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">brightness</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">⋅</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.9463em;vertical-align:-0.686em;"></span><span class="mord">∣∣</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">3</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord 
mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mord">∣∣</span></span></span></span></span>
<p>The resulting shading is the default in the demo.</p>
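<p>In code, this shading step might look like the sketch below. Note that I read the formula as dotting the normal with the <em>normalized</em> camera-to-face vector (the centroid, since the camera sits at the origin); the helper names are my own.</p>

```python
def dot(a, b):
    # Dot product of two 3D vectors.
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def normalize(a):
    # Scale a vector to unit length.
    n = (a[0] ** 2 + a[1] ** 2 + a[2] ** 2) ** 0.5
    return (a[0] / n, a[1] / n, a[2] / n)

def centroid(p1, p2, p3):
    # Average of the triangle's three corners.
    return ((p1[0] + p2[0] + p3[0]) / 3,
            (p1[1] + p2[1] + p3[1]) / 3,
            (p1[2] + p2[2] + p3[2]) / 3)

def face_brightness(normal, p1, p2, p3):
    # With the camera at the origin, the centroid doubles as the
    # camera-to-face vector; clamp so faces angled away stay at zero.
    return max(0.0, dot(normal, normalize(centroid(p1, p2, p3))))
```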
<p>Alternatively, we can color each face based on its distance from the camera.
The resulting image is called a <em>depth map.</em></p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>brightness</mtext><mo>=</mo><mtext>view distance</mtext><mo>−</mo><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mfrac><mrow><msub><mi>p</mi><mn>1</mn></msub><mo>+</mo><msub><mi>p</mi><mn>2</mn></msub><mo>+</mo><msub><mi>p</mi><mn>3</mn></msub></mrow><mn>3</mn></mfrac><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">\text{brightness} = \text{view distance} - ||\frac{p_1 + p_2 + p_3}{3}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">brightness</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.7778em;vertical-align:-0.0833em;"></span><span class="mord text"><span class="mord">view distance</span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.9463em;vertical-align:-0.686em;"></span><span class="mord">∣∣</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">3</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" 
style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span 
class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mord">∣∣</span></span></span></span></span>
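<p>The depth-map variant swaps the dot product for a distance from the camera. A sketch, with a hypothetical <code>view_distance</code> parameter standing in for the furthest visible distance:</p>

```python
def depth_brightness(view_distance, p1, p2, p3):
    # Distance from the camera (at the origin) to the face's centroid.
    cx = (p1[0] + p2[0] + p3[0]) / 3
    cy = (p1[1] + p2[1] + p3[1]) / 3
    cz = (p1[2] + p2[2] + p3[2]) / 3
    distance = (cx * cx + cy * cy + cz * cz) ** 0.5
    # Nearer faces come out brighter; faces beyond the view
    # distance clamp to zero.
    return max(0.0, view_distance - distance)
```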
<h3>Sorting</h3>
<p>When the program is supplied a mesh, the faces are not in any specific order.
If we just drew each face in the order it arrived, far faces could be painted over near ones, and the image would make no sense.</p>
<p>To solve this, we simply sort the faces by their distance from the camera, then draw the furthest faces first.</p>
<p>If you want to see what it would look like, go to the demo and press <code>O</code> to toggle face sorting.</p>
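<p>This far-to-near ordering is commonly called the painter's algorithm. A minimal sketch (function names are mine, not the demo's):</p>

```python
def face_distance(face):
    # Distance from the camera at the origin to the face's centroid.
    p1, p2, p3 = face
    cx = (p1[0] + p2[0] + p3[0]) / 3
    cy = (p1[1] + p2[1] + p3[1]) / 3
    cz = (p1[2] + p2[2] + p3[2]) / 3
    return (cx * cx + cy * cy + cz * cz) ** 0.5

def sort_faces(faces):
    # Furthest first, so nearer faces are drawn last and paint over them.
    return sorted(faces, key=face_distance, reverse=True)
```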
<h3>Optimization</h3>
<p>There are countless ways to optimize a renderer like this.
They all involve work-avoidance.
The one I want to discuss is often referred to as <em>backface culling.</em></p>
<p>In most situations, there is no need to draw the inside of a mesh, so skipping those faces avoids a lot of work for very little effort.
By checking the alignment of the camera-to-face vector with a face's normal, we can determine whether a given face is pointing toward us or not.</p>
<p>In the demo, you can toggle backface culling with <code>B</code>.</p>
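<p>The culling test itself is a single dot product. A sketch, assuming the camera sits at the origin and normals point out of the mesh (so a front-facing face's normal points back toward the camera); names are mine:</p>

```python
def dot(a, b):
    # Dot product of two 3D vectors.
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def is_front_facing(normal, p1, p2, p3):
    # With the camera at the origin, the camera-to-face vector is
    # just the face's centroid.
    view = ((p1[0] + p2[0] + p3[0]) / 3,
            (p1[1] + p2[1] + p3[1]) / 3,
            (p1[2] + p2[2] + p3[2]) / 3)
    # A face pointing back toward the camera has a negative dot
    # product with the view direction; those are the faces we keep.
    return dot(normal, view) < 0
```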
<h2>Conclusion</h2>
<p>When I initially designed this project, I hoped it would allow me to apply some of the more advanced linear algebra concepts I learned in the second trimester.
In this regard, it did not live up to my expectations.</p>
<p>While the project did let me explore some concepts, like orthogonality, it was not quite as satisfying as I had hoped.</p>
<p>It was not for naught, though.
I learned a lot about how the fields of math and computing integrate, as well as how to more effectively turn mathematical concepts into a working prototype.
I want to continue doing projects like this, and cannot wait to re-take Linear Algebra when I go to college.</p>]]></description>
            <content:encoded><![CDATA[<h1>How I Built a Software Render Engine from Scratch</h1>
<blockquote>
<p><strong>Heads Up:</strong> This article is a republished (with some tweaks on spelling, grammar and layout) version of
an article I wrote in my senior year of high school for my Linear Algebra class.
As such, the publish date is not quite correct.</p>
</blockquote>
<p>Computers are fantastic at processing, producing, and consuming data.
But I've often found that the most difficult part of the pipeline is representing the data to a user.
This can often take the form of bar charts or scatter plots, but there are situations where they just don't fit the bill.</p>
<p>3D graphics enable developers to create interactive programs that appear most similar to the natural world.
By presenting a three-dimensional space, they lower the barriers to entry.
I wanted to learn more about how this worked.</p>
<figure><img src="/images/star_fox.png" alt="Star Fox, one of the earliest major successes of 3D graphics in the gaming industry." title="Star Fox, one of the earliest major successes of 3D graphics in the gaming industry."><figcaption>Star Fox, one of the earliest major successes of 3D graphics in the gaming industry.</figcaption></figure>
<h2>Inspiration</h2>
<p>For a long time, I've been told that the most prevalent application of linear algebra was computer graphics.
Before I even began my study on linear algebra, I knew I wanted to get into software rendering.</p>
<p>One of the big roadblocks was the amount of technical know-how I thought it required.
You see, most 3D programs do all the number-crunching on the specially designed <strong>graphics processing unit</strong> that is readily available on most modern computers.
From my previous attempts to use GPUs, I knew setting up the pipeline was quite involved.
If I went that route again, I knew I would likely spend most of my time dealing with vendor-specific APIs.</p>
<p>Since I wanted to focus on the math, I postponed the project.
That is, until it occurred to me that I could simply <em>not use the GPU.</em>
I know it might sound obvious, but it felt so freeing at the time.</p>
<h3>What Is a Software Render Engine?</h3>
<p>A render engine is a piece of software that takes a set of triangles in space and projects them onto a 2D grid that can be displayed on a computer screen.</p>
<p>A software render engine is, as it may sound, a render engine that does all computation in software.
No specialized hardware is utilized <strong>at all.</strong></p>
<h2>Demo</h2>
<p>Before I get into <strong>how</strong> it works, I want to give you the chance to try it out yourself.
I've created a <strong>very</strong> simple scene to demonstrate.</p>
<table>
<thead>
<tr>
<th>Function</th>
<th align="right">Key</th>
</tr>
</thead>
<tbody>
<tr>
<td>Look Around</td>
<td align="right">Left Mouse Click</td>
</tr>
<tr>
<td>Toggle Depth Buffer</td>
<td align="right">R</td>
</tr>
<tr>
<td>Toggle Face Sorting</td>
<td align="right">O</td>
</tr>
<tr>
<td>Toggle Backface Culling</td>
<td align="right">B</td>
</tr>
<tr>
<td>Increase FOV</td>
<td align="right">Arrow Key Up 🔼</td>
</tr>
<tr>
<td>Decrease FOV</td>
<td align="right">Arrow Key Down 🔽</td>
</tr>
<tr>
<td>Move View</td>
<td align="right">W, A, S, D</td>
</tr>
</tbody>
</table>
<iframe frameBorder="0" style="width: 100%; aspect-ratio: 16/9;" allowfullscreen src="/standalonerenderer.html"></iframe>
<h2>Explanation</h2>
<h3>Note</h3>
<p>In this article, I intend only to talk about the math related to the problem.
If you are interested in the nitty-gritty of <em>how lines and shapes get drawn to the screen,</em>
I suggest you read up on <a href="https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm">Bresenham's line algorithm</a>,
<a href="https://en.wikipedia.org/wiki/Xiaolin_Wu%27s_line_algorithm">Xiaolin Wu's line algorithm</a>, and
<a href="https://www.gabrielgambetta.com/computer-graphics-from-scratch/07-filled-triangles.html">Drawing Filled Triangles</a>.</p>
<h3>Projection</h3>
<p>With a perspective camera, points in the world are projected toward the "sensor," a single defined point.
For the sake of argument, let's say that point is at the origin and the camera is facing the positive <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis.</p>
<p>We want all other points to be projected onto a plane <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> distance away.
If you are a photographer, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> will be the focal length of the camera.</p>
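<p>To make the projection concrete, here is a minimal sketch of the perspective divide (in Rust; this is an illustration, not the demo's actual source). By similar triangles, a point's screen position is its <em>x</em> and <em>y</em> scaled by <em>d / z</em>.</p>

```rust
/// Project a 3D point onto a screen plane `d` units in front of a
/// camera sitting at the origin and facing the positive z axis.
/// Returns `None` for points at or behind the camera.
fn project_perspective(x: f64, y: f64, z: f64, d: f64) -> Option<(f64, f64)> {
    if z <= 0.0 {
        return None; // behind the camera: nothing to draw
    }
    // Similar triangles: the farther away a point is, the closer
    // its projection sits to the center of the screen.
    Some((x * d / z, y * d / z))
}
```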
<figure><img src="/images/projection_comparison.gif" alt="A graphic demonstrating the different projection types" title="Perspective Projection (left) vs Orthographic Projection (right)"><figcaption>Perspective Projection (left) vs Orthographic Projection (right)</figcaption></figure>
<h3>Orthographic Projection</h3>
<p>In the case of orthographic projection, this is easy.
Because we have placed the camera on the origin, facing the positive <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis, we can draw the coordinates of any given point directly to the screen.
The only consideration necessary pertains to the points <em>behind</em> the camera, which we can skip by checking the sign of the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> component.</p>
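<p>In other words, orthographic projection amounts to dropping the <em>z</em> component entirely. A minimal sketch (again in Rust, again not the demo's actual source):</p>

```rust
/// Orthographic projection: with the camera at the origin facing the
/// positive z axis, a point's screen position is just its x and y
/// components. Points behind the camera (negative z) are skipped.
fn project_orthographic(x: f64, y: f64, z: f64) -> Option<(f64, f64)> {
    if z < 0.0 {
        return None; // behind the camera
    }
    Some((x, y))
}
```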
<h3>Homogeneous Coordinates</h3>
<p>This is where <em>homogeneous coordinates</em> come in.
When working in Euclidean space, we represent a given vector or coordinate using three components:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
  x \\
  y \\
  z
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>When working in projective space, we represent any given vector or coordinate using <em>four</em> components.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>w</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
  x \\
  y \\
  z \\
  w
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>We can convert freely between these two representations.
To convert a homogeneous coordinate to a Euclidean one, divide the other components by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span>:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>euclidean coordinate</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>x</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>y</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mi>z</mi><mi mathvariant="normal">/</mi><mi>w</mi></mrow></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{euclidean coordinate} = \begin{bmatrix}
  x / w \\
  y / w \\
  z / w
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord text"><span class="mord">euclidean coordinate</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span><span class="mord">/</span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>We can transform any Euclidean coordinate into a homogeneous coordinate by setting <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi><mo>=</mo><mn>1</mn></mrow><annotation encoding="application/x-tex">w = 1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">1</span></span></span></span>:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>homogenous coordinate</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>z</mi></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{homogenous coordinate} = \begin{bmatrix}
  x \\
  y \\
  z \\
  1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">homogenous coordinate</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
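<p>Both conversions are one-liners in practice. A minimal Rust sketch of the round trip (not the demo's actual source):</p>

```rust
/// Convert a Euclidean coordinate into a homogeneous one by
/// appending w = 1.
fn to_homogeneous(p: [f64; 3]) -> [f64; 4] {
    [p[0], p[1], p[2], 1.0]
}

/// Convert a homogeneous coordinate back to Euclidean space by
/// dividing the other components by w.
fn to_euclidean(p: [f64; 4]) -> [f64; 3] {
    let w = p[3];
    [p[0] / w, p[1] / w, p[2] / w]
}
```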
<p>Just as we can perform various transformations on Euclidean coordinates, we can perform similar ones on homogeneous coordinates.
The major difference: where translating a Euclidean coordinate requires conventional addition or subtraction, a homogeneous coordinate can be translated via matrix multiplication.</p>
<p>For example, let's say we have a point at the origin, and we want to perform both a translation and a rotation.
If we were using Euclidean coordinates, we would have to translate it via addition, then rotate it separately.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mrow><mo fence="true">(</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo>+</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo fence="true">)</mo></mrow><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" 
columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">
\begin{bmatrix}
  0 &#x26; 1 &#x26; 0 \\
  -1 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 \\
\end{bmatrix}

\left(
  \begin{bmatrix}
    0 \\
    0 \\
    0
  \end{bmatrix}
  +
  \begin{bmatrix}
    1 \\
    2 \\
    0
  \end{bmatrix}
\right)

=

\begin{bmatrix}
  2 \\
  -1 \\
  0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span 
style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.875em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M863,9c0,-2,-2,-5,-6,-9c0,0,-17,0,-17,0c-12.7,0,-19.3,0.3,-20,1
c-5.3,5.3,-10.3,11,-15,17c-242.7,294.7,-395.3,682,-458,1162c-21.3,163.3,-33.3,349,
-36,557 l0,84c0.2,6,0,26,0,60c2,159.3,10,310.7,24,454c53.3,528,210,
949.7,470,1265c4.7,6,9.7,11.7,15,17c0.7,0.7,7,1,19,1c0,0,18,0,18,0c4,-4,6,-7,6,-9
c0,-2.7,-3.3,-8.7,-10,-18c-135.3,-192.7,-235.5,-414.3,-300.5,-665c-65,-250.7,-102.5,
-544.7,-112.5,-882c-2,-104,-3,-167,-3,-189
l0,-92c0,-162.7,5.7,-314,17,-454c20.7,-272,63.7,-513,129,-723c65.3,
-210,155.3,-396.3,270,-559c6.7,-9.3,10,-15.3,10,-18z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.875em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.875em" height="3.600em" viewBox="0 0 875 3600"><path d="M76,0c-16.7,0,-25,3,-25,9c0,2,2,6.3,6,13c21.3,28.7,42.3,60.3,
63,95c96.7,156.7,172.8,332.5,228.5,527.5c55.7,195,92.8,416.5,111.5,664.5
c11.3,139.3,17,290.7,17,454c0,28,1.7,43,3.3,45l0,9
c-3,4,-3.3,16.7,-3.3,38c0,162,-5.7,313.7,-17,455c-18.7,248,-55.8,469.3,-111.5,664
c-55.7,194.7,-131.8,370.3,-228.5,527c-20.7,34.7,-41.7,66.3,-63,95c-2,3.3,-4,7,-6,11
c0,7.3,5.7,11,17,11c0,0,11,0,11,0c9.3,0,14.3,-0.3,15,-1c5.3,-5.3,10.3,-11,15,-17
c242.7,-294.7,395.3,-681.7,458,-1161c21.3,-164.7,33.3,-350.7,36,-558
l0,-144c-2,-159.3,-10,-310.7,-24,-454c-53.3,-528,-210,-949.7,
-470,-1265c-4.7,-6,-9.7,-11.7,-15,-17c-0.7,-0.7,-6.7,-1,-18,-1z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:3.6em;vertical-align:-1.55em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M403 1759 V84 H666 V0 H319 V1759 v0 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-1.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.05em;"><span style="top:-4.05em;"><span class="pstrut" style="height:5.6em;"></span><span style="width:0.667em;height:3.600em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="3.600em" viewBox="0 0 667 3600"><path d="M347 1759 V0 H0 V84 H263 V1759 v0 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v0 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:1.55em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>In homogeneous coordinates, we can do it in a single matrix operation by composing the matrices ahead of time:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T = \begin{bmatrix}
  0 &#x26; 1 &#x26; 0 &#x26; 0 \\
  -1 &#x26; 0 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}

\begin{bmatrix}
  1 &#x26; 0 &#x26; 0 &#x26; 1 \\
  0 &#x26; 1 &#x26; 0 &#x26; 2 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t 
vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t 
vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn></mrow></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T \begin{bmatrix}
  0 \\
  0 \\
  0 \\
  1
\end{bmatrix}

=

\begin{bmatrix}
  2 \\
  -1 \\
  0 \\
  1
\end{bmatrix}

</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>By condensing the entire transformation into a single matrix, we save a significant amount of computation: no matter how many transformations we compose, each point needs only one matrix-vector product.</p>
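<p>To make the composition concrete, here is a minimal sketch in Rust (my choice of language for illustration; matrices are stored as plain row-major arrays, with no linear-algebra crate assumed) that builds the combined matrix above and applies it to the origin:</p>

```rust
// 4x4 matrices stored row-major; points are [x, y, z, w].
type Mat4 = [[f64; 4]; 4];

fn mat_mul(a: &Mat4, b: &Mat4) -> Mat4 {
    let mut out = [[0.0; 4]; 4];
    for i in 0..4 {
        for j in 0..4 {
            for k in 0..4 {
                out[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    out
}

fn mat_apply(m: &Mat4, v: [f64; 4]) -> [f64; 4] {
    let mut out = [0.0; 4];
    for i in 0..4 {
        for k in 0..4 {
            out[i] += m[i][k] * v[k];
        }
    }
    out
}

fn main() {
    // 90-degree rotation about the z axis.
    let rotate: Mat4 = [
        [0.0, 1.0, 0.0, 0.0],
        [-1.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    // Translation by (1, 2, 0).
    let translate: Mat4 = [
        [1.0, 0.0, 0.0, 1.0],
        [0.0, 1.0, 0.0, 2.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    // Prepare the combined transform once...
    let t = mat_mul(&rotate, &translate);
    // ...then every point needs only a single matrix-vector product.
    let p = mat_apply(&t, [0.0, 0.0, 0.0, 1.0]);
    println!("{:?}", p); // [2.0, -1.0, 0.0, 1.0]
}
```

<p>Because the rotation matrix sits on the left of the product, the translation is applied to the point first, just as in the worked example above.</p>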
<h3>Perspective Projection</h3>
<p>The essential idea of perspective projection is simple: the further a point is from the camera, the closer it should appear to the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> axis.
Remember that <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span> is the distance to the surface we are projecting onto.
Homogeneous coordinates make this possible with the following matrix:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>perspective projection matrix</mtext><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mrow><mo>−</mo><mn>1</mn><mi mathvariant="normal">/</mi><mi>d</mi></mrow></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\text{perspective projection matrix} =

\begin{bmatrix}
  1 &#x26; 0 &#x26; 0 &#x26; 0 \\
  0 &#x26; 1 &#x26; 0 &#x26; 0 \\
  0 &#x26; 0 &#x26; 1 &#x26; 0 \\
  0 &#x26; 0 &#x26; -1 / d &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">perspective projection matrix</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:4.8em;vertical-align:-2.15em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M403 1759 V84 H666 V0 H319 V1759 v1200 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" 
style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">−</span><span class="mord">1/</span><span class="mord mathnormal">d</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span><span 
class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:2.65em;"><span style="top:-4.65em;"><span class="pstrut" style="height:6.8em;"></span><span style="width:0.667em;height:4.800em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="4.800em" viewBox="0 0 667 4800"><path d="M347 1759 V0 H0 V84 H263 V1759 v1200 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v1200 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:2.15em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Assuming you have an understanding of matrix multiplication, it should be apparent why this works.
When the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span> component of the matrix is being computed, the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>z</mi></mrow><annotation encoding="application/x-tex">z</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span></span></span></span> component will be divided by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>d</mi></mrow><annotation encoding="application/x-tex">d</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord mathnormal">d</span></span></span></span>.
The result then becomes a divisor of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi></mrow><annotation encoding="application/x-tex">w</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span></span></span></span>, which affects all components of the resulting vector due to the nature of homogeneous coordinates.
In short: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>w</mi><mo>←</mo><mi>w</mi><mo>∗</mo><mo stretchy="false">(</mo><mo>−</mo><mi>z</mi><mi mathvariant="normal">/</mi><mi>d</mi><mo stretchy="false">)</mo></mrow><annotation encoding="application/x-tex">w \leftarrow w * (-z / d)</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">←</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.4653em;"></span><span class="mord mathnormal" style="margin-right:0.02691em;">w</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mopen">(</span><span class="mord">−</span><span class="mord mathnormal" style="margin-right:0.04398em;">z</span><span class="mord">/</span><span class="mord mathnormal">d</span><span class="mclose">)</span></span></span></span></p>
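<p>The perspective divide can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not the demo's actual code; <code>d</code> is the distance from the camera to the projection plane:</p>

```python
def project(x, y, z, d):
    # The projection matrix's last row computes w = -z / d; the
    # homogeneous divide then shrinks x and y as the point recedes.
    w = -z / d
    if w == 0:
        raise ValueError("point lies on the camera plane")
    return (x / w, y / w, z / w)
```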
<h3>Color</h3>
<p>Now that we've established exactly how to project points in space onto the screen, we need to start coloring in triangles.
As I said before, I am not going to go into the algorithms that do this.
I want to discuss how to determine the color to fill in.</p>
<p>We could just choose one solid color.
As you can see from the demo (by pressing <code>R</code>), this doesn't lead to a particularly impressive or visually pleasing result.
I want an additional way to convey depth.</p>
<p>Given the three points that make up a triangle <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>1</mn></msub></mrow><annotation encoding="application/x-tex">p_1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>2</mn></msub></mrow><annotation encoding="application/x-tex">p_2</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, and <span class="katex"><span class="katex-mathml"><math 
xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mi>p</mi><mn>3</mn></msub></mrow><annotation encoding="application/x-tex">p_3</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.625em;vertical-align:-0.1944em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span>, we can find its normal vector (the vector perpendicular to its surface), <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>n</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{n}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> fairly easily.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>h</mi><mo>⃗</mo></mover><mo>=</mo><mfrac><mrow><msub><mi>p</mi><mn>2</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub></mrow><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><msub><mi>p</mi><mn>2</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac><mo>×</mo><mfrac><mrow><msub><mi>p</mi><mn>3</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub></mrow><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><msub><mi>p</mi><mn>3</mn></msub><mo>−</mo><msub><mi>p</mi><mn>1</mn></msub><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac></mrow><annotation encoding="application/x-tex">  \vec{h} = \frac{p_2 - p_1}{||p_2 - p_1||} \times \frac{p_3 - p_1}{||p_3 - p_1||}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.9774em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.9774em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">h</span></span><span style="top:-3.2634em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.1963em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span 
class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:2.1963em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span 
class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 
size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>n</mi><mo>⃗</mo></mover><mo>=</mo><mfrac><mi>h</mi><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mi>h</mi><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow></mfrac></mrow><annotation encoding="application/x-tex">  \vec{n} = \frac{h}{||h||}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.3074em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.3714em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">∣∣</span><span class="mord mathnormal">h</span><span class="mord">∣∣</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></span>
<blockquote>
<p><strong>Note:</strong> the vertical bars around a vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mover accent="true"><mi>v</mi><mo>⃗</mo></mover><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">||\vec{v}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mord">∣∣</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal" style="margin-right:0.03588em;">v</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2077em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mord">∣∣</span></span></span></span> signify getting the vector's length.</p>
</blockquote>
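<p>The two formulas above translate directly into code. A minimal sketch in Python (the helper names are hypothetical, not the demo's code; points are <code>(x, y, z)</code> tuples):</p>

```python
import math

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    # Divide a vector by its length to get a unit vector.
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def cross(a, b):
    # Cross product of two 3D vectors.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def face_normal(p1, p2, p3):
    # h is the cross product of the two normalized edge vectors;
    # normalizing h gives the unit normal n.
    h = cross(normalize(sub(p2, p1)), normalize(sub(p3, p1)))
    return normalize(h)
```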
<p>Now that we have the triangle's normal, we can fill the triangle in more brightly the more directly it faces the camera.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>brightness</mtext><mo>=</mo><mover accent="true"><mi>n</mi><mo>⃗</mo></mover><mo>⋅</mo><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mfrac><mrow><msub><mi>p</mi><mn>1</mn></msub><mo>+</mo><msub><mi>p</mi><mn>2</mn></msub><mo>+</mo><msub><mi>p</mi><mn>3</mn></msub></mrow><mn>3</mn></mfrac><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">\text{brightness} = \vec{n} \cdot ||\frac{p_1 + p_2 + p_3}{3}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">brightness</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">n</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2355em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">⋅</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.9463em;vertical-align:-0.686em;"></span><span class="mord">∣∣</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">3</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord 
mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mord">∣∣</span></span></span></span></span>
<p>The resulting shading is the default in the demo.</p>
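<p>One way to sketch that shading rule in Python, assuming the camera sits at the origin and reading the bars around the centroid as normalizing it into a unit view direction (the helper names are hypothetical, not the demo's code):</p>

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def face_brightness(p1, p2, p3, normal):
    # With the camera at the origin, the face's centroid doubles as
    # the camera-to-face direction once normalized.
    centroid = tuple((a + b + c) / 3 for a, b, c in zip(p1, p2, p3))
    return dot(normal, normalize(centroid))
```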
<p>Alternatively, we can color each face based on its distance from the camera.
The resulting image is called a <em>depth map.</em></p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>brightness</mtext><mo>=</mo><mtext>view distance</mtext><mo>−</mo><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi><mfrac><mrow><msub><mi>p</mi><mn>1</mn></msub><mo>+</mo><msub><mi>p</mi><mn>2</mn></msub><mo>+</mo><msub><mi>p</mi><mn>3</mn></msub></mrow><mn>3</mn></mfrac><mi mathvariant="normal">∣</mi><mi mathvariant="normal">∣</mi></mrow><annotation encoding="application/x-tex">\text{brightness} = \text{view distance} - ||\frac{p_1 + p_2 + p_3}{3}||</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">brightness</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.7778em;vertical-align:-0.0833em;"></span><span class="mord text"><span class="mord">view distance</span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1.9463em;vertical-align:-0.686em;"></span><span class="mord">∣∣</span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">3</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" 
style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal">p</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span 
class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.686em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mord">∣∣</span></span></span></span></span>
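<p>A corresponding sketch for the depth map, under the same camera-at-origin assumption (a hypothetical helper, not the demo's code):</p>

```python
def depth_brightness(p1, p2, p3, view_distance):
    # Faces close to the camera come out bright; faces at the far
    # plane (view_distance away) fade to zero.
    centroid = tuple((a + b + c) / 3 for a, b, c in zip(p1, p2, p3))
    dist = sum(x * x for x in centroid) ** 0.5
    return view_distance - dist
```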
<h3>Sorting</h3>
<p>When the program is supplied a mesh, the faces are not in any specific order.
If we drew each face in the order it arrives, distant faces could be painted over nearby ones, and the result would make no sense.</p>
<p>To solve this, we simply sort the faces by their distance from the camera, then draw the furthest faces first (the classic painter's algorithm).</p>
<p>If you want to see what it would look like, go to the demo and press <code>O</code> to toggle face sorting.</p>
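<p>This sort is a one-liner in most languages. A sketch in Python (hypothetical names, not the demo's code; a face is a tuple of three points and the camera sits at the origin):</p>

```python
def face_distance(face):
    # Distance from the camera (at the origin) to the face's centroid.
    centroid = tuple(sum(p[i] for p in face) / 3 for i in range(3))
    return sum(x * x for x in centroid) ** 0.5

def sort_faces(faces):
    # Draw the furthest faces first so nearer faces paint over them.
    return sorted(faces, key=face_distance, reverse=True)
```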
<h3>Optimization</h3>
<p>There are countless ways to optimize a renderer like this.
They all involve work-avoidance.
The one I want to discuss is often referred to as <em>backface culling.</em></p>
<p>In most situations, there is no need to render the inside of a mesh, so faces pointing away from the camera can be skipped entirely, which avoids a lot of work for very little effort.
By checking the alignment of the point-to-face vector with the face's normal, we can determine whether a given face is pointing toward us.</p>
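<p>The backface test itself is a single dot product. A sketch under the same camera-at-origin assumption (hypothetical names, and note that sign conventions vary between renderers):</p>

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_backface(centroid, normal):
    # With the camera at the origin, the centroid doubles as the
    # camera-to-face vector. A face whose normal aligns with that
    # vector points away from us and can be skipped.
    return dot(centroid, normal) > 0
```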
<p>In the demo, you can toggle backface culling with <code>B</code>.</p>
<h2>Conclusion</h2>
<p>When I initially designed this project, I hoped it would allow me to apply some of the more advanced linear algebra concepts that I learned in the second trimester.
In this regard, it did not live up to my expectations.</p>
<p>While the project did let me explore some concepts, like orthogonality, the experience was not quite as satisfying as I had hoped.</p>
<p>It was not for naught, though.
I learned a lot about how the fields of math and computing fit together, as well as how to more effectively turn mathematical concepts into a working prototype.
I want to continue doing projects like this, and cannot wait to re-take Linear Algebra when I go to college.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/star_fox.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Markov Chains Are the Original Language Models]]></title>
            <link>https://elijahpotter.dev/articles/markov_chains_are_the_original_language_models</link>
            <guid>https://elijahpotter.dev/articles/markov_chains_are_the_original_language_models</guid>
            <pubDate>Wed, 31 Jan 2024 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Markov Chains are the Original Language Models</h1>
<img src="/images/old_car.webp" alt="An old car interior">
<blockquote>
<p><strong>Heads Up:</strong> This article is a republished (with some tweaks on spelling, grammar and layout) version of
an article I wrote in my senior year of high school for my Linear Algebra class.
As such, the publish date is not quite correct.</p>
</blockquote>
<h2>The AI Buzz is Boring Now</h2>
<p>I've come to the conclusion that there are four stages to the current AI hype cycle in an individual person's brain, at least as it pertains to large language models.
At the very least, these are the stages I went through.</p>
<h3>Stage One: Amazement</h3>
<p>"Wow! This is so cool! I can converse with a computer just like a real person!"</p>
<p>This is where all the science fiction fantasies come to fruition.
The possibilities seem endless.
We can all kick back and relax now, right?</p>
<h3>Stage Two: Frustration</h3>
<p>"Hmm... This isn't as effective as I originally thought."</p>
<p>It seems like the brand-new technology is really only applicable to the kinds of work no one wants to do anyway.
What it <strong>is</strong> able to do doesn't provide too much value to you.
It gets information and logic wrong often enough that it cannot be trusted with much of anything.</p>
<h3>Stage Three: Confusion</h3>
<p>After stage two, you start to forget about it.
But the hype is inescapable.
Your friends bring it up.
Your parents ask you about it when you go home for the holidays.
Even your dentist tries to extol its virtues.</p>
<p>Even if you moved on from it, no one else did.
Could that mean that you were wrong?</p>
<h3>Stage Four: Boredom</h3>
<p>At this point the rate of new language models appearing has become faster than the rate of new JavaScript frameworks (and just as annoying).
You want to go back to your roots and start from scratch.
You want the freedom of knowing the whole stack from start to finish.
You don't want any of the ineffective magic.</p>
<p>This is where I am right now.
I want to go back to my roots.
Some people work on old cars, even though they are less efficient.
At the same time though, they are more fun to work on than new cars.
I've decided to look into Markov chains.</p>
<h2>Markov Chains</h2>
<p>Below is a demonstration of my implementation of auto-completion using Markov Chains.</p>
<p>Though it is written in Rust and compiled to WebAssembly, it is not particularly efficient. To find out why, continue down the page to my detailed explanation of the implementation.</p>
<h2>Controls</h2>
<p>You may use either "Choose Word" or your right arrow key [→] to let the system choose the next word. Alternatively, you can tap any of the [Possible Next Words] to do so yourself.</p>
<iframe frameBorder="0" style="width: 100%; height: 1100px;" allowfullscreen src="/standalonemarkov.html"></iframe>
<h1>Explanation</h1>
<p>Markov chains, named after their inventor, Andrey Markov, are often used to model sequences of probabilistic events. That is, systems that cannot be modeled deterministically.</p>
<h2>Example</h2>
<p>Alice is at the grocery store. For every hour she is there, she has a 70% chance of leaving and going to the planetarium. Conversely, she has a 30% chance of staying.
If Alice is already at the planetarium, she has a 10% chance of leaving and going to the grocery store and a 90% chance of staying.
We can represent these probabilities as a table, where each column belongs to a start location, and each row belongs to an end location:</p>
<table>
<thead>
<tr>
<th></th>
<th align="right"></th>
<th align="right"></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td align="right">Start at Grocery Store</td>
<td align="right">Start at Planetarium</td>
</tr>
<tr>
<td>End at Grocery Store</td>
<td align="right">30%</td>
<td align="right">10%</td>
</tr>
<tr>
<td>End at Planetarium</td>
<td align="right">70%</td>
<td align="right">90%</td>
</tr>
</tbody>
</table>
<p>If we already know Alice's location for sure, we can simply perform table lookups to predict her most likely next move.
For example, we <em>know</em> she is at the grocery store right now. So by looking at row 2, column 1, we can be 70% confident she will be at the planetarium next hour.
However, this doesn't work if we aren't sure of her location, or we want to predict more than one hour in advance. How do we predict her next move if we aren't certain of her current location?
In the latter case, we might express her current location as another table.</p>
<table>
<thead>
<tr>
<th>Location</th>
<th align="right">% Alice Present</th>
</tr>
</thead>
<tbody>
<tr>
<td>Grocery Store</td>
<td align="right">25%</td>
</tr>
<tr>
<td>Planetarium</td>
<td align="right">75%</td>
</tr>
</tbody>
</table>
<p>How do we estimate Alice's location in this new plane of possibility? In particular, how likely is it that Alice will be at the Planetarium next hour?
Since there is a 25% probability Alice is at the grocery store, we multiply that by the probability of her transitioning to the Planetarium: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">70%</span></span></span></span>. Next, we add the result to the probability of her being at the Planetarium multiplied by the probability of her staying: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">75\% * 90\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span></span></span></span>.
In full, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi><mo>=</mo><mn>85</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\% + 75\% * 90\% = 85\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">70%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">85%</span></span></span></span>.
To see the probabilities as a table:</p>
<table>
<thead>
<tr>
<th>Next Location</th>
<th>Calculation</th>
<th align="right">% Alice Present</th>
</tr>
</thead>
<tbody>
<tr>
<td>Grocery Store</td>
<td><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>30</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>10</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 30\% + 75\% * 10\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">30%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">10%</span></span></span></span></td>
<td align="right">15%</td>
</tr>
<tr>
<td>Planetarium</td>
<td><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\% + 75\% * 90\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">70%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span></span></span></span></td>
<td align="right">85%</td>
</tr>
</tbody>
</table>
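<p>Each row of the table above is just a weighted sum, which we can verify directly:</p>

```rust
fn main() {
    // Current probabilities of Alice's location.
    let at_store = 0.25_f64;
    let at_planetarium = 0.75_f64;

    // Next-hour probabilities, read off the transition table.
    let next_store = at_store * 0.30 + at_planetarium * 0.10;
    let next_planetarium = at_store * 0.70 + at_planetarium * 0.90;

    // Matches the table: 15% and 85%, still summing to 100%.
    assert!((next_store - 0.15).abs() < 1e-12);
    assert!((next_planetarium - 0.85).abs() < 1e-12);
    assert!((next_store + next_planetarium - 1.0).abs() < 1e-12);
    println!("store: {next_store:.2}, planetarium: {next_planetarium:.2}");
}
```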
<p>The keen-eyed among you may have noticed that these operations look a lot like matrix multiplication.
Instead of a table, we may represent these possible transitions as a matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi></mrow><annotation encoding="application/x-tex">T</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span></span></span></span>, and Alice's current location as a vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.3</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.7</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.9</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T = \begin{bmatrix}
  0.3 &#x26; 0.1 \\
  0.7 &#x26; 0.9
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.3</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.7</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.9</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">]</span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>.25</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>.75</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\vec{s} = \begin{bmatrix}
  .25 \\
  .75 \\
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">.25</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">.75</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">]</span></span></span></span></span></span></span>
<blockquote>
<p><strong>Note:</strong> The location of each element remains the same as the table, even if we aren't explicitly labeling the rows and columns.</p>
</blockquote>
<p>Finding the next state matrix becomes as easy as multiplying the current location vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi></mrow><annotation encoding="application/x-tex">T</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span></span></span></span>. To find further hours in the future, we do it more than once. For example, to estimate three hours in the future: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi><mi>T</mi><mi>T</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">TTT\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">TTT</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>. We can condense this with an exponent: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msup><mi>T</mi><mn>3</mn></msup><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">T^3\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8141em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8141em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span></span></span></span></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> or generalize it to <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>n</mi></mrow><annotation encoding="application/x-tex">n</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">n</span></span></span></span> hours with: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msup><mi>T</mi><mi>n</mi></msup><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">T^n\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6644em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">n</span></span></span></span></span></span></span></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 
1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>.</p>
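<p>The matrix form of that update can be sketched in a few lines of Rust, with the 2×2 arithmetic hand-rolled to keep it dependency-free:</p>

```rust
type Mat2 = [[f64; 2]; 2];
type Vec2 = [f64; 2];

// next = T * s: each output entry is a row of T dotted with s.
fn step(t: Mat2, s: Vec2) -> Vec2 {
    [
        t[0][0] * s[0] + t[0][1] * s[1],
        t[1][0] * s[0] + t[1][1] * s[1],
    ]
}

// T^n * s: apply the transition n times.
fn step_n(t: Mat2, s: Vec2, n: u32) -> Vec2 {
    (0..n).fold(s, |state, _| step(t, state))
}

fn main() {
    // Columns are start locations, rows are end locations.
    let t: Mat2 = [[0.3, 0.1], [0.7, 0.9]];
    // 25% grocery store, 75% planetarium.
    let s: Vec2 = [0.25, 0.75];

    let one_hour = step(t, s);
    println!("one hour: {:.2?}", one_hour); // [0.15, 0.85]

    let three_hours = step_n(t, s, 3);
    println!("three hours: {:.3?}", three_hours);
}
```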
<h2>Application to Text-Completion</h2>
<p>The principles above can be applied to a variety of probabilistic situations. Most relevant to this particular webpage is text completion.
We want to suggest the most likely next word to the user. Given the last word, what are the most likely next words? First, we need a dictionary.</p>
<h3>The Dictionary</h3>
<p>It is trivial to build a dictionary from sample text. For the purposes of the explanation, we are going to start with an arbitrary dictionary.</p>
<table>
<thead>
<tr>
<th>Index</th>
<th>Word</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>orange</td>
</tr>
<tr>
<td>1</td>
<td>fruit</td>
</tr>
<tr>
<td>2</td>
<td>passion</td>
</tr>
<tr>
<td>3</td>
<td>cheese</td>
</tr>
<tr>
<td>4</td>
<td>not</td>
</tr>
<tr>
<td>5</td>
<td>is</td>
</tr>
</tbody>
</table>
<h3>Building the Transition Matrix</h3>
<p>To build our transition matrix, we need to count all the transitions that occur between possible words in our dictionary.
In the interest of performance, my implementation converts the dictionary into a <code>HashMap&#x3C;String, usize></code>.
Next, I go through the training text and match each word to its index in the dictionary, effectively transforming the <code>String</code> into a <code>Vec&#x3C;usize></code>.
For example, the phrase, "passion fruit is not orange, cheese is orange," becomes, <code>[ 2, 1, 5, 4, 0, 3, 5, 0 ]</code>.
Next, the implementation iterates through each element in this vector, counting each transition. The counts are stored in another <code>HashMap</code> in the interest of performance, but are eventually converted into a matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi></mrow><annotation encoding="application/x-tex">C</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span></span></span></span>. Each row is the output word's index, and each column is the input word's index.
For example, the transition <code>"fruit" (index 1) -> "is" (index 5)</code> occurs exactly once, so we record <code>1</code> in column 1, row 5.</p>
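<p>The counting step described above can be condensed into a short sketch (this mirrors the approach, not the exact implementation):</p>

```rust
use std::collections::HashMap;

// Count word-to-word transitions in an index sequence, then expand
// the sparse counts into a dense matrix where matrix[row][col] is
// the number of times word `col` was followed by word `row`.
fn count_transitions(tokens: &[usize], dict_len: usize) -> Vec<Vec<u32>> {
    let mut counts: HashMap<(usize, usize), u32> = HashMap::new();
    for pair in tokens.windows(2) {
        // pair[0] is the input word (column), pair[1] the output (row).
        *counts.entry((pair[1], pair[0])).or_insert(0) += 1;
    }

    let mut matrix = vec![vec![0u32; dict_len]; dict_len];
    for ((row, col), n) in counts {
        matrix[row][col] = n;
    }
    matrix
}

fn main() {
    // "passion fruit is not orange, cheese is orange"
    let tokens = [2, 1, 5, 4, 0, 3, 5, 0];
    let c = count_transitions(&tokens, 6);
    // "fruit" (index 1) -> "is" (index 5): recorded in column 1, row 5.
    assert_eq!(c[5][1], 1);
    println!("{:?}", c);
}
```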
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>C</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">C = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 &#x26; 1 \\
    0 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    1 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 \\
    0 &#x26; 1 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:7.2001em;vertical-align:-3.35em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M403 1759 V84 H666 V0 H319 V1759 v3600 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span 
style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span 
style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M347 1759 V0 H0 V84 H263 V1759 v3600 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Not a very interesting matrix, is it?</p>
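<p>As a sketch of how such a count matrix could be assembled, here is some plain Python. The <code>build_count_matrix</code> helper and the toy three-word vocabulary are hypothetical, purely for illustration; the convention matches the matrix above, where the column is the previous word and the row is the next word.</p>

```python
# Hypothetical sketch: build a transition-count matrix from a token stream.
# counts[row][col] counts transitions col -> row (column = from, row = to),
# matching the convention used in the article's matrix.

def build_count_matrix(tokens, vocab):
    index = {word: i for i, word in enumerate(vocab)}
    n = len(vocab)
    counts = [[0] * n for _ in range(n)]
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[index[nxt]][index[prev]] += 1
    return counts

# Toy example (not the article's corpus): the stream "a b a c"
vocab = ["a", "b", "c"]
C = build_count_matrix(["a", "b", "a", "c"], vocab)
# "a" -> "b" puts a 1 in column 0, row 1; "b" -> "a" in column 1, row 0;
# "a" -> "c" in column 0, row 2.
```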
<p>Each element needs to be converted into a probability, so that every column sums to one. Start by taking the sum of each column:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
    1 &#x26; 1 &#x26; 1 &#x26; 1 &#x26; 1 &#x26; 2
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" 
style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>Create a diagonal matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>D</mi></mrow><annotation encoding="application/x-tex">D</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">D</span></span></span></span> composed of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mfrac><mn>1</mn><mtext>column sum</mtext></mfrac></mrow><annotation encoding="application/x-tex">\frac{1}{\text{column sum}}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.1901em;vertical-align:-0.345em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8451em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord text mtight"><span class="mord mtight">column sum</span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">1</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>C</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">C = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 &#x26; 0.5 \\
    0 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    1 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0.5 \\
    0 &#x26; 1 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:7.2001em;vertical-align:-3.35em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M403 1759 V84 H666 V0 H319 V1759 v3600 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span 
style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span 
style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M347 1759 V0 H0 V84 H263 V1759 v3600 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>The normalized matrix above is our finished Markov (a.k.a. transition) matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>M</mi></mrow><annotation encoding="application/x-tex">M</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span></span></span></span>, which we obtain by performing:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>M</mi><mo>=</mo><mi>C</mi><mi>D</mi></mrow><annotation encoding="application/x-tex">M = CD</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mord mathnormal" style="margin-right:0.02778em;">D</span></span></span></span></span>
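<p>In plain Python, this normalization amounts to dividing each column of the count matrix by its sum, which is exactly what right-multiplying by the diagonal matrix of reciprocal column sums does. This minimal sketch uses the count matrix from above:</p>

```python
# Normalize the article's count matrix so each column sums to 1.
C = [
    [0, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 1, 0, 1, 0, 0],
]
n = len(C)
col_sums = [sum(row[j] for row in C) for j in range(n)]
# Dividing column j by col_sums[j] is the same as C times diag(1 / col_sums).
M = [[C[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
# Column 5 had sum 2, so its two 1s become 0.5; every column of M now sums to 1.
```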
<h3>Using the transition matrix</h3>
<p>There are two possible situations: the user is in the process of typing a word, or they have just finished one.
The latter is the easier to implement.
Scan the user's text and isolate the last word. Look it up in the word list to find its index. Create a new vector containing <code>0</code>s, except for that index, which should contain a <code>1</code>.
For example, if the last word was 'is',</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\vec{s} = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span 
class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>Run it through our transition matrix:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>M</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">M\vec{s} = \begin{bmatrix}
  0.5 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0.5 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span 
class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>This means the most probable next choices are at indices <code>0</code> and <code>4</code>, which correspond to "orange" and "not" respectively.
This is great for autocomplete: we can simply list the most probable options to the user.</p>
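<p>To make the lookup concrete, here is a minimal sketch in Rust (the language the demo is written in). The word list and matrix below are toy stand-ins, not the demo's actual data:</p>

```rust
// Toy sketch: rank candidate next words by multiplying the transition
// matrix with a one-hot state vector for the last word typed.
fn next_word_ranking(words: &[&str], m: &[Vec<f64>], last: &str) -> Vec<(String, f64)> {
    // m[row][col] holds P(next = words[row] | current = words[col]).
    let idx = words
        .iter()
        .position(|w| *w == last)
        .expect("word not in list");
    // Multiplying M by a one-hot vector just selects column `idx` of M.
    let mut ranked: Vec<(String, f64)> = words
        .iter()
        .enumerate()
        .map(|(row, w)| (w.to_string(), m[row][idx]))
        .collect();
    // Sort descending by probability (stable sort keeps ties in order).
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked
}

fn main() {
    let words = ["orange", "not", "is"];
    // Hypothetical probabilities: after "is", "orange" and "not" are
    // equally likely; every column sums to 1.
    let m = vec![
        vec![0.0, 0.0, 0.5],
        vec![0.0, 0.0, 0.5],
        vec![1.0, 1.0, 0.0],
    ];
    for (word, p) in next_word_ranking(&words, &m, "is") {
        println!("{word}: {p}");
    }
}
```

<p>Note that because the one-hot vector selects a single column, a real implementation can skip the full multiplication and read the column directly.</p>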
<h3>Text-Generation and Steady State</h3>
<p>It would be pretty neat if we could use this method to automagically generate text, right?</p>
<h4>The Naive Solution</h4>
<p>Each iteration, choose the most likely word from the set. Maybe randomize it a bit: choose a random word from the top five options.
Unfortunately, there is an issue. A Markov chain like this one is guaranteed to converge on a specific probabilistic state (its steady state) given enough iterations. To get text generation that stays unpredictable rather than converging, we need something a bit more complex.</p>
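<p>The convergence is easy to see numerically. Here is a hedged sketch using a toy two-state chain rather than a full word matrix: repeatedly applying the transition matrix drives any starting vector toward the same steady state.</p>

```rust
// Demonstrates convergence: iterate s <- T*s on a small two-state chain.
fn step(t: &[[f64; 2]; 2], s: [f64; 2]) -> [f64; 2] {
    [
        t[0][0] * s[0] + t[0][1] * s[1],
        t[1][0] * s[0] + t[1][1] * s[1],
    ]
}

fn main() {
    // Columns sum to 1: each column says where we go from that state.
    let t = [[0.3, 0.1], [0.7, 0.9]];
    let mut s = [1.0, 0.0]; // start entirely in state 0
    for _ in 0..50 {
        s = step(&t, s);
    }
    // s has converged to the steady state satisfying T*s = s,
    // which for this chain is [0.125, 0.875].
    println!("{s:?}");
}
```

<p>No matter which starting vector you pick, after enough iterations the output is the same, which is exactly why the naive generator's word choices stop varying.</p>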
<h4>My Solution</h4>
<p>Create a square diagonal matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi></mrow><annotation encoding="application/x-tex">R</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.00773em;">R</span></span></span></span> with a side length equal to the length of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>. Fill the diagonal elements with random numbers between <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>0</mn></mrow><annotation encoding="application/x-tex">0</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">0</span></span></span></span> and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>1</mn></mrow><annotation encoding="application/x-tex">1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">1</span></span></span></span>. Then choose the word whose index corresponds with the highest value of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">R\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.00773em;">R</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" 
preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span></p>]]></description>
            <content:encoded><![CDATA[<h1>Markov Chains are the Original Language Models</h1>
<img src="/images/old_car.webp" alt="An old car interior">
<blockquote>
<p><strong>Heads Up:</strong> This article is a republished version (with some tweaks to spelling, grammar, and layout) of
an article I wrote in my senior year of high school for my Linear Algebra class.
As such, the publish date is not quite correct.</p>
</blockquote>
<h2>The AI Buzz is Boring Now</h2>
<p>I've come to the conclusion that there are four stages to the current AI hype cycle in an individual person's brain, at least as it pertains to large language models.
At the very least, these are the stages I went through.</p>
<h3>Stage One: Amazement</h3>
<p>"Wow! This is so cool! I can converse with a computer just like a real person!"</p>
<p>This is where all the science fiction fantasies come to fruition.
The possibilities seem endless.
We can all kick back and relax now, right?</p>
<h3>Stage Two: Frustration</h3>
<p>"Hmm... This isn't as effective as I originally thought."</p>
<p>It seems like the brand-new technology is really only applicable to the kinds of work no one wants to do anyway.
What it <strong>is</strong> able to do doesn't provide much value to you.
It gets information and logic wrong often enough that it can't be trusted with much of anything.</p>
<h3>Stage Three: Confusion</h3>
<p>After stage two, you start to forget about it.
But the hype is inescapable.
Your friends bring it up.
Your parents ask you about it when you go home for the holidays.
Even your dentist tries to extol its virtues.</p>
<p>Even if you moved on, no one else did.
Could that mean that you were wrong?</p>
<h3>Stage Four: Boredom</h3>
<p>At this point, the rate of new language models appearing has become faster than the rate of new JavaScript frameworks (and just as annoying).
You want to go back to your roots and start from scratch.
You want the freedom of knowing the whole stack from start to finish.
You don't want any of the ineffective magic.</p>
<p>This is where I am right now.
I want to go back to my roots.
Some people work on old cars, even though they are less efficient.
At the same time, though, they are more fun to work on than new cars.
I've decided to look into Markov chains.</p>
<h2>Markov Chains</h2>
<p>Below is a demonstration of my implementation of auto-completion using Markov Chains.</p>
<p>Though it is written in Rust and compiled to WebAssembly, it is not particularly efficient. To find out why, continue down the page to my detailed explanation of the implementation.</p>
<h2>Controls</h2>
<p>You may use either "Choose Word" or your right arrow key [→] to let the system choose the next word. Alternatively, you can tap any of the [Possible Next Words] to do so yourself.</p>
<iframe frameBorder="0" style="width: 100%; height: 1100px;" allowfullscreen src="/standalonemarkov.html"></iframe>
<h1>Explanation</h1>
<p>Markov chains, named after their inventor, Andrey Markov, are often used to model sequences of probabilistic events. That is, systems that cannot be modeled deterministically.</p>
<h2>Example</h2>
<p>Alice is at the grocery store. For every hour she is there, she has a 70% chance of leaving and going to the planetarium. Conversely, she has a 30% chance of staying.
If Alice is already at the planetarium, she has a 10% chance of leaving and going to the grocery store and a 90% chance of staying.
We can represent these probabilities as a table, where each column belongs to a start location and each row belongs to an end location:</p>
<table>
<thead>
<tr>
<th></th>
<th align="right"></th>
<th align="right"></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td align="right">Start at Grocery Store</td>
<td align="right">Start at Planetarium</td>
</tr>
<tr>
<td>End at Grocery Store</td>
<td align="right">30%</td>
<td align="right">10%</td>
</tr>
<tr>
<td>End at Planetarium</td>
<td align="right">70%</td>
<td align="right">90%</td>
</tr>
</tbody>
</table>
<p>If we already know Alice's location for sure, we can simply perform table lookups to predict her most likely next move.
For example, we <em>know</em> she is at the grocery store right now. So by looking at row 2, column 1, we can be 70% confident she will be at the planetarium next hour.
However, this doesn't work if we aren't sure of her location, or if we want to predict more than one hour in advance. How do we predict her next move if we aren't certain of her current location?
In that case, we might express her current location as another table.</p>
<table>
<thead>
<tr>
<th>Location</th>
<th align="right">% Alice Present</th>
</tr>
</thead>
<tbody>
<tr>
<td>Grocery Store</td>
<td align="right">25%</td>
</tr>
<tr>
<td>Planetarium</td>
<td align="right">75%</td>
</tr>
</tbody>
</table>
<p>How do we estimate Alice's location in this new plane of possibility? In particular, how likely will Alice be at the Planetarium next hour?
Since there is a 25% probability Alice is at the grocery store, we multiply that by the probability of her transitioning to the Planetarium: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">70%</span></span></span></span>. Next, we add the result to the probability of her being at the Planetarium multiplied by the probability of her staying: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">75\% * 90\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span></span></span></span>.
In full, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi><mo>=</mo><mn>85</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\% + 75\% * 90\% = 85\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">70%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">85%</span></span></span></span>.
To see the probabilities as a table:</p>
<table>
<thead>
<tr>
<th>Next Location</th>
<th>Calculation</th>
<th align="right">% Alice Present</th>
</tr>
</thead>
<tbody>
<tr>
<td>Grocery Store</td>
<td><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>30</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>10</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 30\% + 75\% * 10\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">30%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">10%</span></span></span></span></td>
<td align="right">15%</td>
</tr>
<tr>
<td>Planetarium</td>
<td><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>25</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>70</mn><mi mathvariant="normal">%</mi><mo>+</mo><mn>75</mn><mi mathvariant="normal">%</mi><mo>∗</mo><mn>90</mn><mi mathvariant="normal">%</mi></mrow><annotation encoding="application/x-tex">25\% * 70\% + 75\% * 90\%</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">25%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.0833em;"></span><span class="mord">70%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">75%</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8056em;vertical-align:-0.0556em;"></span><span class="mord">90%</span></span></span></span></td>
<td align="right">85%</td>
</tr>
</tbody>
</table>
<p>The keen-eyed among you may have noticed that these operations look a lot like matrix multiplication.
Instead of a table, we may represent these possible transitions as a matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi></mrow><annotation encoding="application/x-tex">T</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span></span></span></span>, and Alice's current location as a vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>T</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.3</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.7</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.9</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">T = \begin{bmatrix}
  0.3 &#x26; 0.1 \\
  0.7 &#x26; 0.9
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.3</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.7</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.1</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.9</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">]</span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>.25</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>.75</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\vec{s} = \begin{bmatrix}
  .25 \\
  .75 \\
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.45em;"><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">.25</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">.75</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.95em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">]</span></span></span></span></span></span></span>
<blockquote>
<p><strong>Note:</strong> The location of each element remains the same as in the table, even though we aren't explicitly labeling the rows and columns.</p>

</blockquote>
<p>Finding the next state matrix becomes as easy as multiplying the current location vector <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi></mrow><annotation encoding="application/x-tex">T</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span></span></span></span>. To find further hours in the future, we do it more than once. For example, to estimate three hours in the future: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi><mi>T</mi><mi>T</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">TTT\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">TTT</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>. We can condense this with an exponent: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msup><mi>T</mi><mn>3</mn></msup><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">T^3\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8141em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8141em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">3</span></span></span></span></span></span></span></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span> or generalize it to <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>n</mi></mrow><annotation encoding="application/x-tex">n</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">n</span></span></span></span> hours with: <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msup><mi>T</mi><mi>n</mi></msup><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">T^n\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6644em;"><span style="top:-3.063em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">n</span></span></span></span></span></span></span></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 
1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>.</p>
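<p>The arithmetic above is easy to check in code. The following is a minimal, standalone Rust sketch (not part of any library) that applies the weather example's transition matrix to the state vector <code>n</code> times:</p>

```rust
// One application of the transition matrix: next[i] = sum_j T[i][j] * s[j].
// Column j of `t` holds the outgoing probabilities of state j.
fn step(t: &[[f64; 2]; 2], s: [f64; 2]) -> [f64; 2] {
    [
        t[0][0] * s[0] + t[0][1] * s[1],
        t[1][0] * s[0] + t[1][1] * s[1],
    ]
}

// Computes T^n * s by applying T to s a total of n times.
fn forecast(t: &[[f64; 2]; 2], mut s: [f64; 2], n: u32) -> [f64; 2] {
    for _ in 0..n {
        s = step(t, s);
    }
    s
}

fn main() {
    // Values from the weather example above.
    let t = [[0.3, 0.1], [0.7, 0.9]];
    let s = [0.25, 0.75];
    println!("one hour ahead:    {:?}", forecast(&t, s, 1));
    println!("three hours ahead: {:?}", forecast(&t, s, 3));
}
```

One step gives <code>[0.15, 0.85]</code>; note that each result still sums to 1, as any probability vector must.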
<h2>Application to Text-Completion</h2>
<p>The principles above can be applied to a variety of probabilistic situations. Most relevant to this particular webpage is text completion.
We want to suggest the most likely next word to the user. Given the last word, what are the most likely next words? First, we need a dictionary.</p>
<h3>The Dictionary</h3>
<p>It is trivial to build a dictionary from sample text. For the purposes of this explanation, we will start with an arbitrary dictionary.</p>
<table>
<thead>
<tr>
<th>Index</th>
<th>Word</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>orange</td>
</tr>
<tr>
<td>1</td>
<td>fruit</td>
</tr>
<tr>
<td>2</td>
<td>passion</td>
</tr>
<tr>
<td>3</td>
<td>cheese</td>
</tr>
<tr>
<td>4</td>
<td>not</td>
</tr>
<tr>
<td>5</td>
<td>is</td>
</tr>
</tbody>
</table>
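<p>For completeness, here is a hypothetical sketch of how such a dictionary could be built from sample text (not the article's actual implementation; note that indices would then be assigned in order of first appearance rather than matching the arbitrary table above):</p>

```rust
use std::collections::HashMap;

// Assigns each distinct word an index in order of first appearance.
// Punctuation is trimmed and words are lowercased so "orange," and
// "orange" map to the same entry.
fn build_dictionary(text: &str) -> HashMap<String, usize> {
    let mut dict = HashMap::new();
    for word in text
        .split_whitespace()
        .map(|w| w.trim_matches(|c: char| !c.is_alphanumeric()).to_lowercase())
        .filter(|w| !w.is_empty())
    {
        let next = dict.len();
        dict.entry(word).or_insert(next);
    }
    dict
}

fn main() {
    let dict = build_dictionary("passion fruit is not orange, cheese is orange");
    println!("{dict:?}");
}
```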
<h3>Building the Transition Matrix</h3>
<p>To build our transition matrix, we need to count all the transitions that occur between possible words in our dictionary.
In the interest of performance, my implementation converts the dictionary into a <code>HashMap&#x3C;String, usize></code>.
Next, I go through the training text and match each word to its index in the dictionary, effectively transforming the <code>String</code> into a <code>Vec&#x3C;usize></code>.
For example, the phrase, "passion fruit is not orange, cheese is orange," becomes, <code>[ 2, 1, 5, 4, 0, 3, 5, 0 ]</code>.
Then the implementation iterates through each element in this vector, counting each transition. The counts are stored in another <code>HashMap</code> in the interest of performance, but they are eventually converted into a matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi></mrow><annotation encoding="application/x-tex">C</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span></span></span></span>. Each row is the output word's index, and the column is the input word's index.
For example, the transition <code>"fruit" (index 1) -> "is" (index 5)</code> occurs exactly once, so we record <code>1</code> in column 1, row 5.</p>
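<p>The counting step can be sketched like this (a simplified stand-in for the <code>HashMap</code>-based implementation described above, using a dense matrix directly):</p>

```rust
// Tallies consecutive transitions from an index sequence into a count
// matrix C, where the column is the input word's index and the row is
// the output word's index.
fn count_transitions(seq: &[usize], n: usize) -> Vec<Vec<u32>> {
    let mut c = vec![vec![0u32; n]; n];
    // windows(2) yields each consecutive (prev, next) pair exactly once.
    for pair in seq.windows(2) {
        let (prev, next) = (pair[0], pair[1]);
        c[next][prev] += 1; // row = output index, column = input index
    }
    c
}

fn main() {
    // "passion fruit is not orange, cheese is orange" under the
    // dictionary table above.
    let seq = [2, 1, 5, 4, 0, 3, 5, 0];
    let c = count_transitions(&seq, 6);
    // "fruit" (1) -> "is" (5): recorded in row 5, column 1.
    println!("C[5][1] = {}", c[5][1]);
}
```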
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>C</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">C = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 &#x26; 1 \\
    0 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    1 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 \\
    0 &#x26; 1 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:7.2001em;vertical-align:-3.35em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M403 1759 V84 H666 V0 H319 V1759 v3600 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span 
style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span 
style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M347 1759 V0 H0 V84 H263 V1759 v3600 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>Not a very interesting matrix, is it?</p>
<p>Each element needs to be converted into a probability. Take the sum of each column:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>2</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix}
    1 &#x26; 1 &#x26; 1 &#x26; 1 &#x26; 1 &#x26; 2
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" 
style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>Create a diagonal matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>D</mi></mrow><annotation encoding="application/x-tex">D</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">D</span></span></span></span> composed of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mfrac><mn>1</mn><mtext>column sum</mtext></mfrac></mrow><annotation encoding="application/x-tex">\frac{1}{\text{column sum}}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.1901em;vertical-align:-0.345em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8451em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord text mtight"><span class="mord mtight">column sum</span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">1</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span></span></span></span></p>
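<p>In code, multiplying by <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>D</mi></mrow><annotation encoding="application/x-tex">D</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.02778em;">D</span></span></span></span> amounts to dividing each column by its sum. A sketch (not the article's actual implementation):</p>

```rust
// Converts a count matrix into transition probabilities by dividing each
// column by its sum; equivalent to multiplying by the diagonal matrix of
// reciprocal column sums. Columns that never occur as inputs stay zero.
fn normalize_columns(c: &[Vec<u32>]) -> Vec<Vec<f64>> {
    let n = c.len();
    let mut t = vec![vec![0.0f64; n]; n];
    for col in 0..n {
        let sum: u32 = (0..n).map(|row| c[row][col]).sum();
        if sum == 0 {
            continue;
        }
        for row in 0..n {
            t[row][col] = c[row][col] as f64 / sum as f64;
        }
    }
    t
}

fn main() {
    // The count matrix from the example above.
    let c = vec![
        vec![0, 0, 0, 0, 1, 1],
        vec![0, 0, 1, 0, 0, 0],
        vec![0, 0, 0, 0, 0, 0],
        vec![1, 0, 0, 0, 0, 0],
        vec![0, 0, 0, 0, 0, 1],
        vec![0, 1, 0, 1, 0, 0],
    ];
    let t = normalize_columns(&c);
    // Column 5 ("is") has two outgoing transitions, so each becomes 0.5.
    println!("T[0][5] = {}", t[0][5]);
}
```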
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>C</mi><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" 
displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd></mtr><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">C = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1 &#x26; 0.5 \\
    0 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    1 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 \\
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0.5 \\
    0 &#x26; 1 &#x26; 0 &#x26; 1 &#x26; 0 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:7.2001em;vertical-align:-3.35em;"></span><span class="minner"><span class="mopen"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M403 1759 V84 H666 V0 H319 V1759 v3600 v1759 h347 v-84
H403z M403 1759 V0 H319 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span 
style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span 
style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span 
class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-6.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span><span style="top:-4.81em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-3.61em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-2.41em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span><span style="top:-1.21em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span><span style="top:-0.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span><span class="mclose"><span class="delimsizing mult"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:3.85em;"><span style="top:-5.8499em;"><span class="pstrut" style="height:9.2em;"></span><span style="width:0.667em;height:7.200em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.667em" height="7.200em" viewBox="0 0 667 7200"><path d="M347 1759 V0 H0 V84 H263 V1759 v3600 v1759 H0 v84 H347z
M347 1759 V0 H263 V1759 v3600 v1759 h84z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:3.35em;"><span></span></span></span></span></span></span></span></span></span></span></span>
<p>To finalize our Markov (a.k.a. transition) matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>M</mi></mrow><annotation encoding="application/x-tex">M</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span></span></span></span>, we simply perform:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>M</mi><mo>=</mo><mi>C</mi><mi>D</mi></mrow><annotation encoding="application/x-tex">M = CD</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mord mathnormal" style="margin-right:0.02778em;">D</span></span></span></span></span>
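<p>The normalization step can be sketched in plain Rust (a toy illustration, not Harper's actual code; the count matrix is the <code>C</code> from above, and the function name is my own). Since <code>D</code> holds the reciprocal column sums, applying it amounts to dividing each column of <code>C</code> by its sum:</p>

```rust
// Build the transition matrix M by dividing each column of the count
// matrix C by that column's sum. This is what multiplying by the
// diagonal matrix D of reciprocal column sums accomplishes.
fn transition_matrix(c: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let n = c.len();
    // Column sums of C.
    let sums: Vec<f64> = (0..n).map(|j| (0..n).map(|i| c[i][j]).sum()).collect();
    // M[i][j] = C[i][j] / column_sum(j); all-zero columns are left as zeros.
    (0..n)
        .map(|i| {
            (0..n)
                .map(|j| if sums[j] > 0.0 { c[i][j] / sums[j] } else { 0.0 })
                .collect()
        })
        .collect()
}

fn main() {
    // The count matrix C from the example above.
    let c = vec![
        vec![0.0, 0.0, 0.0, 0.0, 1.0, 0.5],
        vec![0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
        vec![0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        vec![1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        vec![0.0, 0.0, 0.0, 0.0, 0.0, 0.5],
        vec![0.0, 1.0, 0.0, 1.0, 0.0, 0.0],
    ];
    let m = transition_matrix(&c);
    // Every nonzero column of M is now a probability distribution.
    for j in 0..6 {
        let s: f64 = (0..6).map(|i| m[i][j]).sum();
        println!("column {j} sums to {s}");
    }
}
```

<p>In this particular example every column of <code>C</code> already sums to 1, so <code>D</code> happens to be the identity and <code>M</code> equals <code>C</code>; with real word counts the normalization matters.</p>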
<h3>Using the transition matrix</h3>
<p>There are two possible situations: the user is in the process of typing, or they have finished their last word.
The latter is the easier to implement.
Scan the user's text and isolate the last word. Look it up in the word list to find its index. Then create a vector of <code>0</code>s, with a <code>1</code> at that index.
For example, if the last word was 'is',</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>1</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">\vec{s} = \begin{bmatrix}
    0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 1
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span 
class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>Run it through our transition matrix:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mi>M</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover><mo>=</mo><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center center center center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0.5</mn></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mn>0</mn></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow></mrow><annotation encoding="application/x-tex">M\vec{s} = \begin{bmatrix}
  0.5 &#x26; 0 &#x26; 0 &#x26; 0 &#x26; 0.5 &#x26; 0
\end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.10903em;">M</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span 
class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0.5</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span></span>
<p>This means the most probable next choices are at indices <code>0</code> and <code>4</code>, which correspond to "orange" and "not" respectively.
This is great for autocomplete: we can simply list the most probable options to the user.</p>
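<p>The lookup-and-multiply step above can be sketched as follows (a toy illustration using the matrix from the running example; the helper name is my own). Multiplying <code>M</code> by a one-hot vector just selects the corresponding column:</p>

```rust
// The next-word distribution after the word at `idx` is M times the
// one-hot vector for `idx` -- which is simply column `idx` of M.
fn next_word_probs(m: &[Vec<f64>], idx: usize) -> Vec<f64> {
    m.iter().map(|row| row[idx]).collect()
}

fn main() {
    // The transition matrix from the article ("is" sits at index 5;
    // indices 0 and 4 correspond to "orange" and "not").
    let m = vec![
        vec![0.0, 0.0, 0.0, 0.0, 1.0, 0.5],
        vec![0.0, 0.0, 1.0, 0.0, 0.0, 0.0],
        vec![0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        vec![1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
        vec![0.0, 0.0, 0.0, 0.0, 0.0, 0.5],
        vec![0.0, 1.0, 0.0, 1.0, 0.0, 0.0],
    ];
    let probs = next_word_probs(&m, 5); // last word was "is"
    // Rank word indices by probability, highest first (sort_by is stable).
    let mut ranked: Vec<usize> = (0..probs.len()).collect();
    ranked.sort_by(|&a, &b| probs[b].partial_cmp(&probs[a]).unwrap());
    println!("top suggestions: {:?}", &ranked[..2]); // indices 0 and 4
}
```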
<h3>Text-Generation and Steady State</h3>
<p>It would be pretty neat if we could use this method to automagically generate text, right?</p>
<h4>The Naive Solution</h4>
<p>On each iteration, choose the most likely word from the set. Maybe randomize it a bit: pick a random word from the top five options.
Unfortunately, there is an issue. A Markov chain like this one converges toward a specific probabilistic state given enough iterations, so naive sampling soon settles into repetitive output. To get text generation that stays unpredictable, without converging, we need something a bit more complex.</p>
<h4>My Solution</h4>
<p>Create a square diagonal matrix <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi></mrow><annotation encoding="application/x-tex">R</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.00773em;">R</span></span></span></span> with a side length equal to the length of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span>. Fill the diagonal elements with random numbers between <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>0</mn></mrow><annotation encoding="application/x-tex">0</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">0</span></span></span></span> and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mn>1</mn></mrow><annotation encoding="application/x-tex">1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">1</span></span></span></span>. Then choose the word whose index corresponds with the highest value of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>R</mi><mover accent="true"><mi>s</mi><mo>⃗</mo></mover></mrow><annotation encoding="application/x-tex">R\vec{s}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.714em;"></span><span class="mord mathnormal" style="margin-right:0.00773em;">R</span><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.714em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">s</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.1799em;"><span class="overlay" style="height:0.714em;width:0.471em;"><svg xmlns="http://www.w3.org/2000/svg" width="0.471em" height="0.714em" style="width:0.471em" viewBox="0 0 471 714" 
preserveAspectRatio="xMinYMin"><path d="M377 20c0-5.333 1.833-10 5.5-14S391 0 397 0c4.667 0 8.667 1.667 12 5
3.333 2.667 6.667 9 10 19 6.667 24.667 20.333 43.667 41 57 7.333 4.667 11
10.667 11 18 0 6-1 10-3 12s-6.667 5-14 9c-28.667 14.667-53.667 35.667-75 63
-1.333 1.333-3.167 3.5-5.5 6.5s-4 4.833-5 5.5c-1 .667-2.5 1.333-4.5 2s-4.333 1
-7 1c-4.667 0-9.167-1.833-13.5-5.5S337 184 337 178c0-12.667 15.667-32.333 47-59
H213l-171-1c-8.667-6-13-12.333-13-19 0-4.667 4.333-11.333 13-20h359
c-16-25.333-24-45-24-59z"></path></svg></span></span></span></span></span></span></span></span></span></span></p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/andrei_markov.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Easiest Way to Run LLMs Locally]]></title>
            <link>https://elijahpotter.dev/articles/the_easiest_way_to_run_llms_locally</link>
            <guid>https://elijahpotter.dev/articles/the_easiest_way_to_run_llms_locally</guid>
            <pubDate>Mon, 18 Dec 2023 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Easiest Way to Run LLMs Locally</h1>
<img src="/images/llama.webp" alt="A Goofy Lookin&#x27; Llama">
<h2>LLMs</h2>
<p>Unless you've been living under a rock for the past year, you already know what LLMs are.
If you <em>do</em> happen to be one of the lucky few unaware of the current hype around these things, I'll go through it real quick.</p>
<p>A large language model (or LLM) is a statistical model capable of "predicting" a subsequent word or letter, given a body of text.
Essentially, it is a computer program capable of filling in the blank.
If you let it predict the next word, then feed the result back in, you can get some pretty human-looking text.</p>
<h2>Let's Be Clear</h2>
<p>I hold a lot of skepticism about the practical applications of LLMs as a tool.
As a blanket rule, I never use LLMs or any similar technology in my education.</p>
<p>I know some people ask LLMs questions like "explain the fundamental theorem of calculus to me like I'm five."
While they may get good results for such questions, I do not want to lean on them as a crutch.
College is not only an opportunity to learn the raw material, but also an opportunity to learn how to learn.
If we know anything about LLMs, it's that their ability to answer complex questions breaks down as you move into more specialized classes.</p>
<p>Which is all to say: I did not investigate this with the intention of using it as a tool, I just wanted to play around.</p>
<h2>My Circumstance</h2>
<p>I use <a href="https://archlinux.org/">arch, btw.</a>
While I enjoy the level of control it provides, I don't think it's for everybody.
This is partly because some things are quite difficult to set up.</p>
<p>For example, GPU support is limited and finicky, especially if you run an Intel Arc card, like I do.
While it works perfectly for some apps, like <a href="https://www.blender.org/">blender</a>, it doesn't work so well for other things.
My card only has 3 GB of VRAM, so it wouldn't be able to fit most models anyway.</p>
<p>So when I took on the task of running an LLM on my local machine, I started by looking at CPU-only solutions.</p>
<p>Initially, I tried to raw-dog <a href="https://github.com/ggerganov/llama.cpp">llama.cpp</a>.
That <em>worked</em>, but only just.
The command-line interface left a lot to be desired, and the process of downloading and loading various models was tedious and confusing.</p>
<h2>Ollama</h2>
<p>That's when I discovered <a href="https://ollama.ai/">Ollama</a>.
Installing it was as easy as running:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">sudo</span> pacman -S ollama
</code></pre>
<p>To avoid wasting resources on multiple instances of each model, Ollama uses a server architecture.
You can start the server by running:</p>
<pre><code class="hljs language-bash">ollama serve
</code></pre>
<p>Then, you can download and start chatting with a model with:</p>
<pre><code class="hljs language-bash">ollama run llama2
<span class="hljs-comment"># Or:</span>
ollama run mistral
</code></pre>
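<p>Once the server is running, you can also talk to it programmatically over its HTTP API. As a rough sketch (assuming Ollama's default port of 11434 and that you've already pulled <code>llama2</code>), something like this should work:</p>
<pre><code class="hljs language-bash"># Ask the local Ollama server for a completion over its HTTP API.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
</code></pre>
<p>The response streams back as JSON, one chunk per token, which makes it easy to pipe into other tools.</p>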
<h2>That's It</h2>
<p>That's it!
It really is that simple.</p>
<p>Again, you might have no reason to do any of this.
Especially if you are happy with the privacy nightmare that is OpenAI, Google or Anthropic, or if you already have a system that works for you.</p>]]></description>
            <content:encoded><![CDATA[<h1>The Easiest Way to Run LLMs Locally</h1>
<img src="/images/llama.webp" alt="A Goofy Lookin&#x27; Llama">
<h2>LLMs</h2>
<p>Unless you've been living under a rock for the past year, you already know what LLMs are.
If you <em>do</em> happen to be one of the lucky few unaware of the current hype around these things, I'll go through it real quick.</p>
<p>A large language model (or LLM) is a statistical model capable of "predicting" a subsequent word or letter, given a body of text.
Essentially, it is a computer program capable of filling in the blank.
If you let it predict the next word, then feed the result back in, you can get some pretty human-looking text.</p>
<h2>Let's Be Clear</h2>
<p>I hold a lot of skepticism about the practical applications of LLMs as a tool.
As a blanket rule, I never use LLMs or any similar technology in my education.</p>
<p>I know some people ask LLMs questions like "explain the fundamental theorem of calculus to me like I'm five."
While they may get good results for such questions, I do not want to lean on them as a crutch.
College is not only an opportunity to learn the raw material, but also an opportunity to learn how to learn.
If we know anything about LLMs, it's that their ability to answer complex questions breaks down as you move into more specialized classes.</p>
<p>Which is all to say: I did not investigate this with the intention of using it as a tool, I just wanted to play around.</p>
<h2>My Circumstance</h2>
<p>I use <a href="https://archlinux.org/">arch, btw.</a>
While I enjoy the level of control it provides, I don't think it's for everybody.
This is partly because some things are quite difficult to set up.</p>
<p>For example, GPU support is limited and finicky, especially if you run an Intel Arc card, like I do.
While it works perfectly for some apps, like <a href="https://www.blender.org/">blender</a>, it doesn't work so well for other things.
My card only has 3 GB of VRAM, so it wouldn't be able to fit most models anyway.</p>
<p>So when I took on the task of running an LLM on my local machine, I started by looking at CPU-only solutions.</p>
<p>Initially, I tried to raw-dog <a href="https://github.com/ggerganov/llama.cpp">llama.cpp</a>.
That <em>worked</em>, but only just.
The command-line interface left a lot to be desired, and the process of downloading and loading various models was tedious and confusing.</p>
<h2>Ollama</h2>
<p>That's when I discovered <a href="https://ollama.ai/">Ollama</a>.
Installing it was as easy as running:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">sudo</span> pacman -S ollama
</code></pre>
<p>To avoid wasting resources on multiple instances of each model, Ollama uses a server architecture.
You can start the server by running:</p>
<pre><code class="hljs language-bash">ollama serve
</code></pre>
<p>Then, you can download and start chatting with a model with:</p>
<pre><code class="hljs language-bash">ollama run llama2
<span class="hljs-comment"># Or:</span>
ollama run mistral
</code></pre>
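<p>Once the server is running, you can also talk to it programmatically over its HTTP API. As a rough sketch (assuming Ollama's default port of 11434 and that you've already pulled <code>llama2</code>), something like this should work:</p>
<pre><code class="hljs language-bash"># Ask the local Ollama server for a completion over its HTTP API.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
</code></pre>
<p>The response streams back as JSON, one chunk per token, which makes it easy to pipe into other tools.</p>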
<h2>That's It</h2>
<p>That's it!
It really is that simple.</p>
<p>Again, you might have no reason to do any of this.
Especially if you are happy with the privacy nightmare that is OpenAI, Google or Anthropic, or if you already have a system that works for you.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/llama.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Do Not Type Your Notes]]></title>
            <link>https://elijahpotter.dev/articles/do_not_type_your_notes</link>
            <guid>https://elijahpotter.dev/articles/do_not_type_your_notes</guid>
            <pubDate>Sun, 29 Oct 2023 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Do <em>Not</em> Type Your Notes</h1>
<img src="/images/pen_paper.webp" alt="A man taking notes on pen and paper">
<p>I feel it necessary to make it clear who I am speaking to.
First and foremost, I am speaking to anyone who is considering switching to a typed note-taking system for school.
If you already value handwritten notes, none of what I say will likely apply to you.</p>
<p>For the last two years, I've typed my notes.
It has worked well for me.
<em>Really well.</em>
For a time, I believed that it gave me a leg-up compared to my peers who handwrote their notes.
I still think that is true, for that specific context.
A context I no longer reside in.</p>
<p>This article is also a reflection.
I will return to the fact that any kind of note-taking is a personal, customized process.
I want to look back at what worked, why, and why it doesn't work anymore.</p>
<p>In his books, Cal Newport has recommended that students taking non-technical courses type their notes.
This is for a number of reasons.
For one, typing is pretty much universally faster than handwriting, to the point that <a href="https://radiolab.org/podcast/wubi-effect">some believe</a> it was a major economic advantage to cultures who spoke languages with smaller character sets.
If I can write down more information, I will have a better record of the lecture, and thus an easier time studying later.
If I were taking history, creative writing, psychology, or business, and I intentionally paraphrased all my notes, I could paste them into my essay outline and use them as a framework.
Further, getting good at <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> and edit-focused text editors like <a href="https://neovim.io/">NeoVim</a>, I found I could decimate my revision time.</p>
<p>Unfortunately, this only applies to humanities classes.
If you are focused on any of the classes mentioned above, this article doesn't apply to you.</p>
<h2>It's Hard to Keep Up</h2>
<p>Yes.
I just said that typed notes allow me to record more information.
This is true for humanities classes that are working primarily in English.
This is <em>not</em> true for technical classes that are working primarily in <em>math notation.</em></p>
<p>Pretty much the only way to reliably record mathematical expressions is via <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>.
Sound simple?
<strong>No.</strong></p>
<h3>An Example</h3>
<p>You can perform ideal gas computations using the formula <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>P</mi><mi>V</mi><mo>=</mo><mi>n</mi><mi>R</mi><mi>T</mi></mrow><annotation encoding="application/x-tex">PV = nRT</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">P</span><span class="mord mathnormal" style="margin-right:0.22222em;">V</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal">n</span><span class="mord mathnormal" style="margin-right:0.13889em;">RT</span></span></span></span>.
If you want to find the total weight of a gas given the volume, temperature and pressure, you can rearrange to form the expression <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mfrac><mrow><mi>P</mi><mi>V</mi></mrow><mrow><mi>R</mi><mi>T</mi></mrow></mfrac><mo>∗</mo><mtext>molecular weight</mtext><mo>=</mo><mtext>total weight</mtext></mrow><annotation encoding="application/x-tex">\frac{PV}{RT} * \text{molecular weight} = \text{total weight}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2173em;vertical-align:-0.345em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8723em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight" style="margin-right:0.13889em;">RT</span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight" style="margin-right:0.13889em;">P</span><span class="mord mathnormal mtight" style="margin-right:0.22222em;">V</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" 
style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">molecular weight</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">total weight</span></span></span></span></span>.</p>
<p>So far these expressions have been pretty simple.
The one above looks like:</p>
<pre><code class="hljs language-latex">\frac{PV}{RT} * \text{molecular weight} = \text{total weight}
</code></pre>
<p>Using a combination of manic typing and <a href="https://github.com/L3MON4D3/LuaSnip">LuaSnip</a> shortcuts, I can manage to generate the formulas during the lecture with a little time to spare.
Once you start plugging in values, however, it gets difficult.</p>
<p>If we are looking at hydrogen, and we use values of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>V</mi><mo>=</mo><mn>27.0</mn><mtext> liters</mtext></mrow><annotation encoding="application/x-tex">V = 27.0 \text{ liters}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.22222em;">V</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord">27.0</span><span class="mord text"><span class="mord"> liters</span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>P</mi><mo>=</mo><mn>755</mn><mtext> torr</mtext></mrow><annotation encoding="application/x-tex">P = 755 \text{ torr}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">P</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">755</span><span class="mord text"><span class="mord"> torr</span></span></span></span></span>, and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi><mo>=</mo><mn>31.3</mn><mtext> celsius</mtext></mrow><annotation encoding="application/x-tex">T = 31.3 \text{ celsius}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span 
class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord">31.3</span><span class="mord text"><span class="mord"> celsius</span></span></span></span></span>, the full expression looks like:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mfrac><mrow><mo stretchy="false">(</mo><mn>27.0</mn><mo stretchy="false">)</mo><mo stretchy="false">(</mo><mfrac><mn>755</mn><mn>760</mn></mfrac><mo stretchy="false">)</mo></mrow><mrow><mo stretchy="false">(</mo><mn>31.3</mn><mo>+</mo><mn>273.15</mn><mo stretchy="false">)</mo><mo stretchy="false">(</mo><mn>0.08206</mn><mo stretchy="false">)</mo></mrow></mfrac><mo>×</mo><mo stretchy="false">(</mo><mn>2.016</mn><mo stretchy="false">)</mo><mo>=</mo><mn>2.16</mn></mrow><annotation encoding="application/x-tex">\frac{(27.0)(\frac{755}{760})}{(31.3 + 273.15)(0.08206)} \times (2.016) = 2.16</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:2.5161em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.5801em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mopen">(</span><span class="mord">31.3</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord">273.15</span><span class="mclose">)</span><span class="mopen">(</span><span class="mord">0.08206</span><span class="mclose">)</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.735em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mopen">(</span><span class="mord">27.0</span><span class="mclose">)</span><span class="mopen">(</span><span class="mord"><span class="mopen nulldelimiter"></span><span 
class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8451em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">760</span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">755</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose">)</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mopen">(</span><span class="mord">2.016</span><span class="mclose">)</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">2.16</span></span></span></span></span>
<p>If that still looks simple to you, take a look at the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>:</p>
<pre><code class="hljs language-latex">\frac{(27.0)(\frac{755}{760})}{(31.3 + 273.15)(0.08206)} * (2.016) = 2.16
</code></pre>
<p>There are two separate problems here.</p>
<ol>
<li>The <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> is not quite self describing.
In order to get a good grasp of what it represents, I have to render it out to HTML or a <code>pdf</code>.</li>
<li>It is extraordinarily difficult to write and revise.
Even with snippets and a good grasp of the keyboard, minor edits are laborious.</li>
</ol>
<p>The net result: lectures become a manic, unsuccessful attempt at replicating the chalkboard in <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>, not an actual learning experience.</p>
<h2>Computers are Distracting</h2>
<p>I've <a href="/videos">spoken</a> on the negative effects of social media before.
Needless to say, social media has a <a href="https://doi.org/10.5817/CP2019-1-4">significant effect</a> on academic performance across the board.
An enormous pile of scientific evidence suggests that reducing social media use makes you a happier, healthier, more productive human being.
Taking notes on the same device I use for entertainment is a recipe for distraction.
This is why I don't like iPads.</p>
<p>There are distractions on laptops too.
I found myself Googling my questions instead of asking the professor.
Email was a constant disturbance, interrupting at what seemed like the most inopportune times.
If I had a stray thought about any of my side projects, I invariably found myself working on them instead of participating in the lecture.</p>
<p>These are extreme examples, and they didn't occur often, but when they did, they were significant.</p>
<h2>Customize</h2>
<p>Books like <em>Deep Work</em> by Cal Newport and <em>The Happiness Advantage</em> by Shawn Achor claim to be able to help readers become happier, more productive individuals.
Both books start by acknowledging that the most successful people customize their work strategies to what is optimal for <em>them</em>.</p>
<p>When I asked other members of the NeoVim community, their responses were pretty similar.
By creating flexibility in my study and work process, I can shift around and find what works best for me.
Typed notes do not easily provide this flexibility.</p>
<h3>The Flexibility of Handwritten Notes</h3>
<p>Personally, whenever I type my notes, they become a form of graph, almost always a tree.</p>
<img src="/images/china_history_graph.webp" alt="A subset of Chinese history expressed as knowledge graph">
<p>The above is what I intend to write, as I picture it in my mind.
But given that I am working in unformatted text, it ends up looking like:</p>
<pre><code class="hljs language-markdown"># Chinese History

## Warring States Period

### Lao Tzu

This is some information on Lao Tzu

### Confucius

This is some information on Confucius

## Civil Service Exam

The civil service exam was initially based on Confucian values, and resulted in a highly educated government.
</code></pre>
<p>While this is a non-technical example, it does well to represent the problem I am speaking to.
With a pen and paper, you are physically writing out the associations between concepts.
When typing notes, you must do so linguistically.
When handwriting notes, you have the opportunity to include navigational components.
You can physically place different concepts in different places, and interrelate them similarly.
By transcribing your full two-dimensional mental map, you are starting to utilize the parts of your brain normally reserved for navigation.</p>
<p>I suspect this is why retention is often so much higher with handwritten notes.
Human navigational memory is the most accurate and long-standing kind.
This likely comes down to the navigational challenges of early hunter-gatherers.
Sport memorizers invariably turn to this kind of memory to remember long sequences of seemingly meaningless information.</p>
<p>By associating the physical location of information in my notes with the information itself, I am able to retain the information more accurately.</p>
<h2>Final Thoughts</h2>
<p>Do I regret not switching to handwritten notes sooner?
<strong>No.</strong>
Will I continue typing my notes in humanities classes?
<strong>Yes.</strong></p>
<p>I know I phrased the title in a primarily instructive way, but there is only one thing I want you to remember: taking notes is an ultimately personal process.
<strong>Do what works for you.</strong>
Similarly, I am just now entering what feels like a new world of possibility.
It may <em>not</em> end up going as well as I theorize.
Either way, you will hear from me again.</p>]]></description>
            <content:encoded><![CDATA[<h1>Do <em>Not</em> Type Your Notes</h1>
<img src="/images/pen_paper.webp" alt="A man taking notes on pen and paper">
<p>I feel it necessary to make it clear who I am speaking to.
First and foremost, I am speaking to anyone who is considering switching to a typed note-taking system for school.
If you already value handwritten notes, none of what I say will likely apply to you.</p>
<p>For the last two years, I've typed my notes.
It has worked well for me.
<em>Really well.</em>
For a time, I believed that it gave me a leg-up compared to my peers who handwrote their notes.
I still think that is true, for that specific context.
A context I no longer reside in.</p>
<p>This article is also a reflection.
I will return to the fact that any kind of note-taking is a personal, customized process.
I want to look back at what worked, why, and why it doesn't work anymore.</p>
<p>In his books, Cal Newport has recommended that students taking non-technical courses type their notes.
This is for a number of reasons.
For one, typing is pretty much universally faster than handwriting, to the point that <a href="https://radiolab.org/podcast/wubi-effect">some believe</a> it was a major economic advantage for cultures whose languages used smaller character sets.
If I can write down more information, I will have a better record of the lecture, and thus an easier time studying later.
If I were taking history, creative writing, psychology, or business, and I intentionally paraphrased all my notes, I could paste them into my essay outline and use them as a framework.
Further, getting good at <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> and edit-focused text editors like <a href="https://neovim.io/">NeoVim</a>, I found I could decimate my revision time.</p>
<p>Unfortunately, this only applies to humanities classes.
If you are focused on any of the classes mentioned above, this article doesn't apply to you.</p>
<h2>It's Hard to Keep Up</h2>
<p>Yes.
I just said that typed notes allow me to record more information.
This is true for humanities classes that are working primarily in English.
This is <em>not</em> true for technical classes that are working primarily in <em>math notation.</em></p>
<p>Pretty much the only way to reliably record mathematical expressions is via <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>.
Sound simple?
<strong>No.</strong></p>
<h3>An Example</h3>
<p>You can perform ideal gas computations using the formula <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>P</mi><mi>V</mi><mo>=</mo><mi>n</mi><mi>R</mi><mi>T</mi></mrow><annotation encoding="application/x-tex">PV = nRT</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">P</span><span class="mord mathnormal" style="margin-right:0.22222em;">V</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal">n</span><span class="mord mathnormal" style="margin-right:0.13889em;">RT</span></span></span></span>.
If you want to find the total weight of a gas given the volume, temperature and pressure, you can rearrange to form the expression <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mfrac><mrow><mi>P</mi><mi>V</mi></mrow><mrow><mi>R</mi><mi>T</mi></mrow></mfrac><mo>∗</mo><mtext>molecular weight</mtext><mo>=</mo><mtext>total weight</mtext></mrow><annotation encoding="application/x-tex">\frac{PV}{RT} * \text{molecular weight} = \text{total weight}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2173em;vertical-align:-0.345em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8723em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight" style="margin-right:0.13889em;">RT</span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mathnormal mtight" style="margin-right:0.13889em;">P</span><span class="mord mathnormal mtight" style="margin-right:0.22222em;">V</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" 
style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">molecular weight</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">total weight</span></span></span></span></span>.</p>
<p>So far these expressions have been pretty simple.
The one above looks like:</p>
<pre><code class="hljs language-latex">\frac{PV}{RT} * \text{molecular weight} = \text{total weight}
</code></pre>
<p>Using a combination of manic typing and <a href="https://github.com/L3MON4D3/LuaSnip">LuaSnip</a> shortcuts, I can manage to generate the formulas during the lecture with a little time to spare.
Once you start plugging in values, however, it gets difficult.</p>
<p>If we are looking at hydrogen, and we use values of <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>V</mi><mo>=</mo><mn>27.0</mn><mtext> liters</mtext></mrow><annotation encoding="application/x-tex">V = 27.0 \text{ liters}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.22222em;">V</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord">27.0</span><span class="mord text"><span class="mord"> liters</span></span></span></span></span>, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>P</mi><mo>=</mo><mn>755</mn><mtext> torr</mtext></mrow><annotation encoding="application/x-tex">P = 755 \text{ torr}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">P</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">755</span><span class="mord text"><span class="mord"> torr</span></span></span></span></span>, and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>T</mi><mo>=</mo><mn>31.3</mn><mtext> celsius</mtext></mrow><annotation encoding="application/x-tex">T = 31.3 \text{ celsius}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span 
class="base"><span class="strut" style="height:0.6833em;"></span><span class="mord mathnormal" style="margin-right:0.13889em;">T</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6944em;"></span><span class="mord">31.3</span><span class="mord text"><span class="mord"> celsius</span></span></span></span></span>, the full expression looks like:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mfrac><mrow><mo stretchy="false">(</mo><mn>27.0</mn><mo stretchy="false">)</mo><mo stretchy="false">(</mo><mfrac><mn>755</mn><mn>760</mn></mfrac><mo stretchy="false">)</mo></mrow><mrow><mo stretchy="false">(</mo><mn>31.3</mn><mo>+</mo><mn>273.15</mn><mo stretchy="false">)</mo><mo stretchy="false">(</mo><mn>0.08206</mn><mo stretchy="false">)</mo></mrow></mfrac><mo>×</mo><mo stretchy="false">(</mo><mn>2.016</mn><mo stretchy="false">)</mo><mo>=</mo><mn>2.16</mn></mrow><annotation encoding="application/x-tex">\frac{(27.0)(\frac{755}{760})}{(31.3 + 273.15)(0.08206)} \times (2.016) = 2.16</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:2.5161em;vertical-align:-0.936em;"></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.5801em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mopen">(</span><span class="mord">31.3</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord">273.15</span><span class="mclose">)</span><span class="mopen">(</span><span class="mord">0.08206</span><span class="mclose">)</span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.735em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mopen">(</span><span class="mord">27.0</span><span class="mclose">)</span><span class="mopen">(</span><span class="mord"><span class="mopen nulldelimiter"></span><span 
class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.8451em;"><span style="top:-2.655em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">760</span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.394em;"><span class="pstrut" style="height:3em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight"><span class="mord mtight">755</span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.345em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose">)</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.936em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">×</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:1em;vertical-align:-0.25em;"></span><span class="mopen">(</span><span class="mord">2.016</span><span class="mclose">)</span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">2.16</span></span></span></span></span>
<p>If that still looks simple to you, take a look at the <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>:</p>
<pre><code class="hljs language-latex">\frac{(27.0)(\frac{755}{760})}{(31.3 + 273.15)(0.08206)} * (2.016) = 2.16
</code></pre>
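<p>If you'd like to verify that arithmetic, here is a quick sanity check in Rust. This is only an illustrative sketch of the computation above, not part of my notes; the variable names are my own, and the conversions (torr to atmospheres, Celsius to kelvin) mirror the values plugged into the expression.</p>
<pre><code class="hljs language-rust">fn main() {
    // Values from the example above, converted to units matching R.
    let p_atm = 755.0 / 760.0; // torr -> atm
    let v_l = 27.0; // liters
    let t_k = 31.3 + 273.15; // celsius -> kelvin
    let r = 0.08206; // L·atm/(mol·K)

    // n = PV / RT, then multiply by the molecular weight of hydrogen gas.
    let moles = p_atm * v_l / (r * t_k);
    let grams = moles * 2.016;

    println!("{:.2} g", grams); // prints "2.16 g"
}
</code></pre>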
<p>There are two separate problems here.</p>
<ol>
<li>The <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span> is not quite self describing.
In order to get a good grasp of what it represents, I have to render it out to either HTML or a <code>pdf</code>.</li>
<li>It is extraordinarily difficult to write and revise.
Even with snippets and a good grasp of the keyboard, minor edits are laborious.</li>
</ol>
<p>The net result: lectures become a manic, unsuccessful attempt at replicating the chalkboard in <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mtext>LaTeX</mtext></mrow><annotation encoding="application/x-tex">\LaTeX</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8988em;vertical-align:-0.2155em;"></span><span class="mord text"><span class="mord textrm">L</span><span class="mspace" style="margin-right:-0.36em;"></span><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.6833em;"><span style="top:-2.905em;"><span class="pstrut" style="height:2.7em;"></span><span class="mord"><span class="mord textrm mtight sizing reset-size6 size3">A</span></span></span></span></span></span><span class="mspace" style="margin-right:-0.15em;"></span><span class="mord text"><span class="mord textrm">T</span><span class="mspace" style="margin-right:-0.1667em;"></span><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.4678em;"><span style="top:-2.7845em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord textrm">E</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2155em;"><span></span></span></span></span><span class="mspace" style="margin-right:-0.125em;"></span><span class="mord textrm">X</span></span></span></span></span></span>, not an actual learning experience.</p>
<h2>Computers are Distracting</h2>
<p>I've <a href="/videos">spoken</a> on the negative effects of social media before.
Needless to say, social media has a <a href="https://doi.org/10.5817/CP2019-1-4">significant effect</a> on academic performance across the board.
An enormous pile of scientific evidence suggests that reducing social media use makes you a happier, healthier, more productive human being.
Taking notes on the same device I use for entertainment is a recipe for distraction.
This is why I don't like iPads.</p>
<p>There are distractions on laptops too.
I found myself Googling my questions instead of asking the professor.
Email was a constant disturbance, interrupting at what seemed like the most inopportune times.
If I had a stray thought about any of my side projects, I invariably found myself working on them instead of participating in the lecture.</p>
<p>These are extreme examples, and they didn't occur often, but when they did, they were significant.</p>
<h2>Customize</h2>
<p>Books like <em>Deep Work</em> by Cal Newport and <em>The Happiness Advantage</em> by Shawn Achor claim to help readers become happier, more productive individuals.
Both books start by acknowledging that the most successful people customize their work strategies to what is optimal for <em>them</em>.</p>
<p>When I asked other members of the NeoVim community, their responses were pretty similar.
By creating flexibility in my study and work process, I can shift around and find what works best for me.
Typed notes do not easily provide this flexibility.</p>
<h3>The Flexibility of Handwritten Notes</h3>
<p>Personally, whenever I type my notes, they become a form of graph, almost always a tree.</p>
<img src="/images/china_history_graph.webp" alt="A subset of Chinese history expressed as knowledge graph">
<p>The above is what, in my mind, I intend to write.
But since I am working in unformatted text, it ends up looking like:</p>
<pre><code class="hljs language-markdown"># Chinese History

## Warring States Period

### Lao Tzu

This is some information on Lao Tzu

### Confucius

This is some information on Confucius

## Civil Service Exam

The civil service exam was initially based on Confucian values, and resulted in a highly educated government.
</code></pre>
<p>While this is a non-technical example, it represents the problem well.
With a pen and paper, you are physically writing out the associations between concepts.
When typing notes, you must do so linguistically.
When handwriting notes, you have the opportunity to include navigational components.
You can physically place different concepts in different places, and interrelate them similarly.
By transcribing your full two-dimensional mental map, you begin to utilize the parts of your brain normally reserved for navigation.</p>
<p>I suspect this is why retention is often so much higher with handwritten notes.
Human navigational memory is the most accurate and long-standing kind.
This likely comes down to the navigational challenges of early hunter-gatherers.
Sport memorizers invariably turn to this kind of memory to remember long sequences of seemingly meaningless information.</p>
<p>By associating the physical location of information in my notes with the information itself, I am able to retain the information more accurately.</p>
<h2>Final Thoughts</h2>
<p>Do I regret not switching to handwritten notes sooner?
<strong>No.</strong>
Will I continue typing my notes in humanities classes?
<strong>Yes.</strong></p>
<p>I know I phrased the title in a primarily instructive way, but there is only one thing I want you to remember: taking notes is an ultimately personal process.
<strong>Do what works for you.</strong>
Similarly, I am just now entering what feels like a new world of possibility.
It may <em>not</em> end up going as well as I theorize.
Either way, you will hear from me again.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/pen_paper.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[Quantifying Hope on a Global Scale]]></title>
            <link>https://elijahpotter.dev/articles/quantifying_hope_on_a_global_scale</link>
            <guid>https://elijahpotter.dev/articles/quantifying_hope_on_a_global_scale</guid>
            <pubDate>Fri, 20 Oct 2023 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Quantifying Hope on a Global Scale</h1>
<p><strong><em>Hope</em></strong> is a somewhat nebulous word.</p>
<p>For some, it is an expectation of what the future will be.
For others, it is a goal for what the future <em>should</em> be.
The word itself means something different to each person.</p>
<p>Most generally though, hope describes how one perceives the momentum of the world.
Do things seem to be generally getting better, or do they generally seem to be getting worse?
Personally, I think things are rapidly getting <strong>much, much</strong> better.</p>
<h2>We <em>Should</em> Be Hopeful</h2>
<p>First, let's define our metric.
The <strong>world</strong> is better off when <strong>people</strong> are better off.
The more people who are able to live long, meaningful lives, the better.</p>
<p>In his posthumously published book, <a href="https://www.amazon.com/Factfulness-Reasons-World-Things-Better/dp/1250123828/"><em>Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think</em></a>, Hans Rosling made the case that the quality and length of human life has been trending upward for more than a century.
Life expectancy at birth has <a href="https://ourworldindata.org/life-expectancy">more than doubled</a> in the last century alone.
The portion of the global population living in poverty is lower than <a href="https://ourworldindata.org/poverty">any point in history</a>.</p>
<blockquote>
<p><strong>Side note:</strong> a <em>great</em> resource to learn about this is Our World In Data's <a href="https://ourworldindata.org/extreme-poverty-in-brief">comprehensive, yet approachable report</a>.</p>
</blockquote>
<p>All of this progress does <strong>not</strong> mean we should just stop here and say the work is "done."
However, I believe that often the best approaches to solving world-scale problems (like poverty) start by first looking at the solutions that already work.</p>
<p>In the book, Rosling tells of how he was often described as an "optimist."
He thought the label a misnomer, since optimists are often portrayed as foolish and misinformed.
He, however, was likely more qualified than anyone to speak on the possibility of a good future, so his positive outlook was entirely justified.
For this reason, he described himself as a "possibilist."
While I don't even approach his qualifications, I would similarly describe my worldview as "possibilist."</p>
<h2>The Solutions That Exist</h2>
<p>I've spoken on the topic of solution-focused thinking <a href="/articles/the_climate_change_progress_bar">before</a>, but I think it warrants revisiting.
I don't want this to be the focus of my writing today, but I do want to provide a brief example of the kind of solutions that exist <strong>today</strong> that are having profound, real-world impacts.</p>
<p>One of the critical environmental problems is related to recycling.
Every year, <a href="https://ourworldindata.org/how-much-plastic-waste-ends-up-in-the-ocean">18 million tonnes</a> of trash are leaked into the environment, including our oceans and rivers.
According to <a href="https://www.amprobotics.com/">AMP Robotics</a> founder Matanya Horowitz, this trash can actually be a hugely profitable commodity.
Depending on market conditions, one ton of recyclable material can be worth up to $100.
The issue: the cost of sorting the material into a usable form is often also around $100, most of which is spent on human labor.
Since the cost of processing the material to <em>prepare</em> it for sale is about the same as its sale price, there is little economic incentive to do much recycling.
In the case of plastics recycling, the issue is <strong>not</strong> that people don't recycle enough.
It is that the cost of recycling the resulting material is too high.</p>
<p>So what does AMP Robotics do?
They have automated the process of recycling, bringing down the costs precipitously.
With robotics, the cost of recycling (to the waste company) is now much lower than the actual sold price of material, making it an extremely profitable business.</p>
<p>I think this is a great example for two reasons.
First and foremost, we are often told, without much substantiating evidence, that the problems of ocean plastics and pollution are the result of individual people choosing not to recycle.
In reality, however, the limiting factor is not the individual, but the efficiency of the waste management companies.</p>
<h2>Quantifying Hope</h2>
<p>I've established that the world <strong>is</strong> becoming a better place to live for almost everybody, all the time.
I've also made clear that this positive change is the result of solutions that get reported on (by the news) less frequently than the problems they solve.
With that done, I think it is time to return to the topic of hope.</p>
<p>My end goal was to automate the process of finding solutions that worked (like AMP Robotics).
I understood that this would be an enormously difficult natural language processing task, so I decided to start simple.
The plan: I would first quantify how hopeful a given news article was, then work from there.</p>
<h3>Web Scraping</h3>
<p>The first step was to write a system that could retrieve recent, important news articles reliably.
There is already infrastructure on most news websites that makes this pretty easy: RSS.
By regularly iterating through a list of RSS feeds, we can get pretty good news coverage for most of the world.
Additionally, news organizations prioritize the content of their RSS feeds based on what <em>they</em> think is important.</p>
<p>My initial implementation was in Rust, which I will show here.
As you can see, I spent some time parallelizing the scrape job.
This was primarily to reduce the amount of time it would take to iterate on the hope quantifying algorithm, so I could get close to real-time feedback on it when I made changes.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> rss::{Channel, Item};
<span class="hljs-keyword">use</span> tokio::sync::mpsc::unbounded_channel;

<span class="hljs-comment">/// Download all items from an RSS Feed</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">get_feed_url_items</span>(url: &#x26;<span class="hljs-type">str</span>) <span class="hljs-punctuation">-></span> anyhow::<span class="hljs-type">Result</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;Item>> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">content</span> = reqwest::<span class="hljs-title function_ invoke__">get</span>(url).<span class="hljs-keyword">await</span>?.<span class="hljs-title function_ invoke__">bytes</span>().<span class="hljs-keyword">await</span>?;

    <span class="hljs-keyword">let</span> <span class="hljs-variable">channel</span> = Channel::<span class="hljs-title function_ invoke__">read_from</span>(&#x26;content[..])?;
    <span class="hljs-title function_ invoke__">Ok</span>(channel.<span class="hljs-title function_ invoke__">into_items</span>())
}

<span class="hljs-comment">/// Download all items from a list of RSS feeds, in parallel.</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">get_all_feed_urls</span>(urls: <span class="hljs-keyword">impl</span> <span class="hljs-title class_">IntoIterator</span>&#x3C;Item = &#x26;<span class="hljs-type">str</span>>) <span class="hljs-punctuation">-></span> anyhow::<span class="hljs-type">Result</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;Item>> {
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">items_receiver</span> = {
        <span class="hljs-keyword">let</span> (items_sender, items_receiver) = <span class="hljs-title function_ invoke__">unbounded_channel</span>();

        <span class="hljs-keyword">for</span> <span class="hljs-variable">url</span> <span class="hljs-keyword">in</span> urls {
            <span class="hljs-keyword">let</span> <span class="hljs-variable">url</span> = url.<span class="hljs-title function_ invoke__">to_string</span>();
            <span class="hljs-keyword">let</span> <span class="hljs-variable">items_sender</span> = items_sender.<span class="hljs-title function_ invoke__">clone</span>();
            tokio::<span class="hljs-title function_ invoke__">spawn</span>(<span class="hljs-keyword">async</span> <span class="hljs-keyword">move</span> {
                items_sender.<span class="hljs-title function_ invoke__">send</span>(<span class="hljs-title function_ invoke__">get_feed_url_items</span>(&#x26;url).<span class="hljs-keyword">await</span>).<span class="hljs-title function_ invoke__">unwrap</span>();
            });
        }
        items_receiver
    };

    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">res</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();

    <span class="hljs-keyword">while</span> <span class="hljs-keyword">let</span> <span class="hljs-variable">Some</span>(items_res) = items_receiver.<span class="hljs-title function_ invoke__">recv</span>().<span class="hljs-keyword">await</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">items</span> = items_res?;
        res.<span class="hljs-title function_ invoke__">append</span>(&#x26;<span class="hljs-keyword">mut</span> items);
    }

    <span class="hljs-title function_ invoke__">Ok</span>(res)
}
</code></pre>
<h3>Rewriting into .NET Core</h3>
<p>Now that I had found a relatively high-quality source of raw data for estimating public opinion, I needed to condense it into a single number and track how it changes over time.
It was at this point that I rewrote most of the code in C# and <a href="https://dotnet.microsoft.com/en-us/apps/aspnet">ASP.NET Core</a>.
I had four reasons for this:</p>
<ol>
<li>While I've used ASP.NET Core and Entity Framework Core in the past, I haven't used them recently.
I wanted to see how the ecosystem had changed in the intervening time.</li>
<li>I wanted to evaluate the tech stack for <a href="/articles/the_climate_change_progress_bar">other projects</a>.</li>
<li>Entity Framework Core is a great ORM, and I may be able to apply its ideologies to other technologies, even if I don't continue using it directly.</li>
<li>In order to evaluate positivity over time, I need as much historical data as possible, which should be stored in a persistent database.</li>
</ol>
<p>One main drawback of ASP.NET Core is the amount of boilerplate necessary.
As a result, I am hesitant to show any clippings from the rewrite, as it takes a lot of space without adding much value to the discussion here.
However, all the code for this project is open source and available on <a href="https://github.com/elijah-potter/hope-meter">GitHub</a>.</p>
<p>Here is the gist of the overall process the rewritten app goes through:</p>
<ol>
<li>Scrape a number of RSS feeds with the method already discussed.</li>
<li>Store the headlines in a table in an SQLite database, along with the timestamp of retrieval and the source URL.</li>
<li>Repeat the first two steps every two minutes, checking and removing duplicates.</li>
</ol>
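<p>Step 2's storage and step 3's deduplication are easy to sketch. The snippet below is only an illustration (the real project is C# with Entity Framework Core; here I use Python's built-in <code>sqlite3</code>, and the table schema is my own invention): a <code>UNIQUE</code> constraint plus <code>INSERT OR IGNORE</code> makes re-inserting a known headline a no-op.</p>

```python
import sqlite3
import time

def store_headlines(conn, headlines):
    """Store (title, source_url) pairs, silently skipping duplicates."""
    conn.execute(
        """CREATE TABLE IF NOT EXISTS headlines (
               title TEXT NOT NULL,
               source_url TEXT NOT NULL,
               retrieved_at REAL NOT NULL,
               UNIQUE (title, source_url)
           )"""
    )
    now = time.time()
    for title, url in headlines:
        # The UNIQUE constraint drops duplicates at the database layer.
        conn.execute(
            "INSERT OR IGNORE INTO headlines VALUES (?, ?, ?)",
            (title, url, now),
        )
    conn.commit()
```

<p>Pushing deduplication into the database keeps the scrape loop itself stateless.</p>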
<h3>Finally: The Hope Algorithm</h3>
<p>Finally, we can discuss the actual algorithm.</p>
<p>First, I ran the <a href="https://doi.org/10.1609/icwsm.v8i1.14550">VADER sentiment analysis algorithm</a> over each headline from the last month to evaluate its positivity.
Since VADER returns three values (a "positivity" rating, a "negativity" rating, and a "neutral" rating), I consolidated them into a single number by subtracting negativity from positivity.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>headline rating</mtext><mo>=</mo><mtext>positivity</mtext><mo>−</mo><mtext>negativity</mtext></mrow><annotation encoding="application/x-tex">\text{headline rating} = \text{positivity} - \text{negativity}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">headline rating</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">positivity</span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">negativity</span></span></span></span></span></span>
<p>Next, I take the mean headline rating from the past two hours and subtract from it the mean from the last month.
Finally, I use the month's standard deviation to place it on a scale of 0-100, where 50 is average.
Here is the formula, where <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>h</mi></msub></mrow><annotation encoding="application/x-tex">\bar{x}_h</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.7178em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is the positivity mean from the last two hours and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>m</mi></msub></mrow><annotation encoding="application/x-tex">\bar{x}_m</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.7178em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" 
style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is the positivity mean from the last month.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>hope score</mtext><mo>=</mo><mn>200</mn><mrow><mo fence="true">(</mo><mfrac><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>h</mi></msub><mo>−</mo><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>m</mi></msub></mrow><msub><mi>σ</mi><mi>m</mi></msub></mfrac><mo fence="true">)</mo></mrow><mo>+</mo><mn>50</mn></mrow><annotation encoding="application/x-tex">\text{hope score} = 200\left(\frac{\bar{x}_h - \bar{x}_m}{\sigma_m} \right) + 50</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">hope score</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="mord">200</span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">σ</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:-0.0359em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal 
mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span 
class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.836em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">)</span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">50</span></span></span></span></span>
<p>Naturally, there are a few things to note about this formula.
I assumed that the positivity score is a normal distribution among news headlines (I did not have time to evaluate whether this is actually true).
The score will also often fall <em>outside</em> the range of 0-100.
This is expected, since values outside that range will be less accurate anyway.
In those situations, I simply clamp the score to the range 0-100.</p>
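<p>Put together, the whole scoring step fits in a few lines. This is a sketch in Python rather than the project's actual C# (the function and variable names are mine), but it follows the formula and the clamping described above:</p>

```python
import statistics

def hope_score(two_hour_ratings, month_ratings):
    """Compute the 0-100 hope score from per-headline ratings."""
    xh = statistics.mean(two_hour_ratings)  # mean rating, past two hours
    xm = statistics.mean(month_ratings)     # mean rating, past month
    sm = statistics.stdev(month_ratings)    # standard deviation, past month
    score = 200 * ((xh - xm) / sm) + 50
    # Scores outside 0-100 are less meaningful anyway, so clamp them.
    return max(0.0, min(100.0, score))
```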
<h2>Communicating Hope</h2>
<p>I wanted to explore and get feedback on a couple different ways of communicating abstract concepts of hope.
First, I threw together a quick website using <a href="https://svelte.dev/">Svelte</a> and <a href="https://tailwindcss.com/">TailwindCSS</a>.</p>
<img src="/images/hope_screenshot.webp" alt="A screenshot of my website prototype">
<p>I ran the website for a couple of days and had friends, family, and peers test it out to get feedback on the concept.</p>
<h3>Feedback</h3>
<p>The people I spoke to had a lot to say on the dial itself.
Notably, the lack of +/- signaling was confusing.
People couldn't tell what a "good" or a "bad" reading on the dial looked like.
One person suggested I include a happy face on the hopeful side and a sad face on the despair side.</p>
<p>We brainstormed the different places we could put a hope-meter.
iOS and Android both support home-screen widgets.
Maybe we could put a global hope-meter on one of those, right next to the clock?
Maybe you could click on a specific spot on the hope meter, and you would be served a news article with that amount of hope?
The possibilities were endless.
I was pushed to look into what a physical hope meter would look like, so I sketched it out.</p>
<img src="/images/hope_sketch.webp" alt="A quick little sketch of a physical hope meter">
<h3>Moving to the Real World</h3>
<p>I got hooked on the physical dial.
I wanted something tangible that I could put somewhere.
Something whose physical weight would give an amount of substance to the thing it measured.</p>
<p>I decided to keep riffing on the sketches I had drawn out.
I loaded the web server I used for the original site onto a Raspberry Pi 3, planning to use the existing HTTP endpoint to control a servo, which, in turn, moved a hand on a dial.</p>
<p>The only material I needed to buy was a <a href="https://www.adafruit.com/product/154">FS5103R continuous rotation servo</a>.
I already had access to everything else.
While I was waiting on that part to arrive, I modeled out what the physical device <em>could</em> look like.
Initially, I imagined something like this:</p>
<img src="/images/hope_model.jpg" alt="A Blender render of the initial Hope meter 3D model">
<p>All the electronics would be placed inside the bottle, and the servo would be attached to the cap.
The entire contraption would rest on its side.
In hindsight, there are a couple of issues with this concept.
The circular body would cause it to roll all over the place, and the complete lack of access to the inside would make maintenance quite difficult.
That isn't even to mention that the model has no hole for the electronics' power cord.</p>
<img src="/images/hope_model_face.webp" alt="Final Hope Meter Face Model">
<p>In the final model (pictured above), I included facial expressions to differentiate hope and despair on the dial.
I also completely removed the enclosure, instead planning on directly attaching the Raspberry Pi and servo to the back of the dial.</p>
<h4>The Print</h4>
<p>With the model complete, I went to my local makerspace and printed it out.
How did it go?
Not so well.</p>
<img src="/images/failed_hope_print.webp" alt="My first (failed) 3D print">
<p>The first print failed spectacularly.
Parts were coming off, and it looked like it was drawn with one of those 3D pens.
Turns out, the issue was that the underlying 3D model had not been exported out of Blender (the 3D modeling software I used) properly, and the geometry was screwed up before it even entered the slicing software.
Garbage in, garbage out, I suppose.</p>
<video controls>
  <source src="/videos/hope_demo.webm" type="video/webm">
</video>
<p>The second print, however, turned out great.
I hot-glued the Raspberry Pi to the back, plugged in the servo, and we were off to the races.
As you can see above, I programmed a little servo control script in Python, which includes a startup motion.
Once that motion completes, the hand moves to the current hope value.</p>
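<p>The control script itself isn't shown here, but its core is just a mapping from the hope score to a dial position. As a hypothetical sketch (the function and angle range are my own invention; actually driving the FS5103R, a continuous-rotation servo, additionally means timing rotations rather than commanding absolute angles):</p>

```python
def hope_to_angle(score, min_angle=-90.0, max_angle=90.0):
    """Map a 0-100 hope score onto a dial angle in degrees."""
    score = max(0.0, min(100.0, score))  # clamp, matching the scoring step
    return min_angle + (score / 100.0) * (max_angle - min_angle)
```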
<p>The video was recorded right after the <a href="https://www.nytimes.com/live/2023/10/11/world/israel-news-hamas-war">Israel-Hamas War</a> began, so global hope values were not doing so hot.
Although it truly is a tragedy, if anything, this is evidence that the system works.</p>
<h2>Reflection</h2>
<p>This project stretched my abilities in a couple key ways.</p>
<p>After the initial prototyping phase, <em>all</em> the code was in programming environments I am relatively unfamiliar with.
While I have a lot of experience with Node.js, Rust, and React, I had some difficulty working with ASP.NET Core and Python.
The difficulties were exacerbated when trying to run everything on a Raspberry Pi, which I had never worked with in this capacity.</p>
<p>Additionally, I found it pretty difficult to get the servo working reliably.
Apparently, servos rely on a potentiometer to stay calibrated.
If the potentiometer is moved to an ambient environment with a different temperature than the one it was calibrated in, the servo starts to move in unexpected ways.
This resulted in frequently inaccurate readings.
If I could go back and re-do this project, I would have used a different kind of motor, maybe a stepper motor.</p>
<p>Part of my motivation for doing this project was to gather more information on the best way to implement a <a href="/articles/the_climate_change_progress_bar">climate change progress bar</a>.
To that end, I found this quite successful.
I now have a better understanding of what kind of server and database architecture I want to use.
I also now know in which formats people are most likely to see global metrics like hope or climate progress (home-screen widgets and push notifications are at the top of the list).</p>]]></description>
            <content:encoded><![CDATA[<h1>Quantifying Hope on a Global Scale</h1>
<p><strong><em>Hope</em></strong> is a somewhat nebulous word.</p>
<p>For some, it is an expectation of what the future will be.
For others, it is a goal for what the future <em>should</em> be.
The word itself means something different to each person.</p>
<p>Most generally though, hope describes how one perceives the momentum of the world.
Do things seem to be generally getting better, or do they generally seem to be getting worse?
Personally, I think things are rapidly getting <strong>much, much</strong> better.</p>
<h2>We <em>Should</em> Be Hopeful</h2>
<p>First, let's define our metric.
The <strong>world</strong> is better off when <strong>people</strong> are better off.
The more people who are able to live long, meaningful lives, the better.</p>
<p>In his posthumously published book, <a href="https://www.amazon.com/Factfulness-Reasons-World-Things-Better/dp/1250123828/"><em>Factfulness: Ten Reasons We're Wrong About the World – and Why Things Are Better Than You Think</em></a>, Hans Rosling made the case that the quality and length of human life has been trending upward for more than a century.
Life expectancy at birth has <a href="https://ourworldindata.org/life-expectancy">more than doubled</a> in the last century alone.
The portion of the global population living in poverty is lower than <a href="https://ourworldindata.org/poverty">any point in history</a>.</p>
<blockquote>
<p><strong>Side note:</strong> a <em>great</em> resource to learn about this is Our World In Data's <a href="https://ourworldindata.org/extreme-poverty-in-brief">comprehensive, yet approachable report</a>.</p>
</blockquote>
<p>All of this progress does <strong>not</strong> mean we should just stop here and say the work is "done."
However, I believe that often the best approaches to solving world-scale problems (like poverty) start by first looking at the solutions that already work.</p>
<p>In the book, Rosling tells how he is often described as an "optimist."
He thinks it is a misnomer, since optimists are often portrayed as foolish and misinformed.
He, however, was likely more qualified than anyone to speak on the possibility of a good future, so his positive outlook was entirely justified.
For this reason, he described himself as a "possibilist".
While I don't even approach his qualifications, I would similarly describe my worldview as "possibilist."</p>
<h2>The Solutions That Exist</h2>
<p>I've spoken on the topic of solution-focused thinking <a href="/articles/the_climate_change_progress_bar">before</a>, but I think it warrants revisiting.
I don't want this to be the focus of my writing today, but I do want to provide a brief example of the kind of solutions that exist <strong>today</strong> that are having profound, real-world impacts.</p>
<p>One critical environmental problem is recycling.
Every year, <a href="https://ourworldindata.org/how-much-plastic-waste-ends-up-in-the-ocean">18 million tonnes</a> of trash are leaked into the environment, including our oceans and rivers.
According to <a href="https://www.amprobotics.com/">AMP Robotics</a> founder Matanya Horowitz, this trash can actually be a hugely profitable commodity.
Depending on market conditions, one ton of recyclable material can be worth up to $100.
The issue: the cost of sorting the material into a usable form is often also around $100, most of which is spent on human labor.
Since the cost of preparing the material for sale is about the same as its sale price, there is little economic incentive to do much recycling.
In the case of plastics recycling, the issue is <strong>not</strong> that people don't recycle enough.
It is that the cost of recycling the resulting material is too high.</p>
<p>So what does AMP Robotics do?
They have automated the process of recycling, bringing down the costs precipitously.
With robotics, the cost of recycling (to the waste company) is now much lower than the sale price of the material, making it an extremely profitable business.</p>
<p>I think this is a great example for two reasons.
First and foremost, we are often told, without much substantiating evidence, that ocean plastics and pollution are the result of individual people choosing not to recycle.
In reality, however, the limiting factor lies not with individuals, but with the efficiency of waste management companies.</p>
<h2>Quantifying Hope</h2>
<p>I've established that the world <strong>is</strong> becoming a better place to live for almost everybody, all the time.
I've also made clear that this positive change is the result of solutions that get reported on (by the news) less frequently than the problems they solve.
With that done, I think it is time to return to the topic of hope.</p>
<p>My end goal was to automate the process of finding solutions that worked (like AMP Robotics).
I understood that this would be an enormously difficult natural language processing task, so I decided to start simple.
The plan: I would first quantify how hopeful a given news article was, then work from there.</p>
<h3>Web Scraping</h3>
<p>The first step was to write a system that could retrieve recent, important news articles reliably.
There is already infrastructure on most news websites that makes this pretty easy: RSS.
By regularly iterating through a list of RSS feeds, we can get pretty good news coverage for most of the world.
Additionally, news organizations prioritize the content of their RSS feeds based on what <em>they</em> think is important.</p>
<p>My initial implementation was in Rust, which I will show here.
As you can see, I spent some time parallelizing the scrape job.
This was primarily to reduce the amount of time it would take to iterate on the hope quantifying algorithm, so I could get close to real-time feedback on it when I made changes.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> rss::{Channel, Item};
<span class="hljs-keyword">use</span> tokio::sync::mpsc::unbounded_channel;

<span class="hljs-comment">/// Download all items from an RSS Feed</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">get_feed_url_items</span>(url: &#x26;<span class="hljs-type">str</span>) <span class="hljs-punctuation">-></span> anyhow::<span class="hljs-type">Result</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;Item>> {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">content</span> = reqwest::<span class="hljs-title function_ invoke__">get</span>(url).<span class="hljs-keyword">await</span>?.<span class="hljs-title function_ invoke__">bytes</span>().<span class="hljs-keyword">await</span>?;

    <span class="hljs-keyword">let</span> <span class="hljs-variable">channel</span> = Channel::<span class="hljs-title function_ invoke__">read_from</span>(&#x26;content[..])?;
    <span class="hljs-title function_ invoke__">Ok</span>(channel.<span class="hljs-title function_ invoke__">into_items</span>())
}

<span class="hljs-comment">/// Download all items from a list of RSS feeds, in parallel.</span>
<span class="hljs-keyword">pub</span> <span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">get_all_feed_urls</span>(urls: <span class="hljs-keyword">impl</span> <span class="hljs-title class_">IntoIterator</span>&#x3C;Item = &#x26;<span class="hljs-type">str</span>>) <span class="hljs-punctuation">-></span> anyhow::<span class="hljs-type">Result</span>&#x3C;<span class="hljs-type">Vec</span>&#x3C;Item>> {
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">items_receiver</span> = {
        <span class="hljs-keyword">let</span> (items_sender, items_receiver) = <span class="hljs-title function_ invoke__">unbounded_channel</span>();

        <span class="hljs-keyword">for</span> <span class="hljs-variable">url</span> <span class="hljs-keyword">in</span> urls {
            <span class="hljs-keyword">let</span> <span class="hljs-variable">url</span> = url.<span class="hljs-title function_ invoke__">to_string</span>();
            <span class="hljs-keyword">let</span> <span class="hljs-variable">items_sender</span> = items_sender.<span class="hljs-title function_ invoke__">clone</span>();
            tokio::<span class="hljs-title function_ invoke__">spawn</span>(<span class="hljs-keyword">async</span> <span class="hljs-keyword">move</span> {
                items_sender.<span class="hljs-title function_ invoke__">send</span>(<span class="hljs-title function_ invoke__">get_feed_url_items</span>(&#x26;url).<span class="hljs-keyword">await</span>).<span class="hljs-title function_ invoke__">unwrap</span>();
            });
        }
        items_receiver
    };

    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">res</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();

    <span class="hljs-keyword">while</span> <span class="hljs-keyword">let</span> <span class="hljs-variable">Some</span>(items_res) = items_receiver.<span class="hljs-title function_ invoke__">recv</span>().<span class="hljs-keyword">await</span> {
        <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">items</span> = items_res?;
        res.<span class="hljs-title function_ invoke__">append</span>(&#x26;<span class="hljs-keyword">mut</span> items);
    }

    <span class="hljs-title function_ invoke__">Ok</span>(res)
}
</code></pre>
<h3>Rewriting into .NET Core</h3>
<p>Now that I had found a relatively high-quality source of raw data for estimating public opinion, I needed to condense it into a single number and track how it changes over time.
It was at this point that I rewrote most of the code in C# and <a href="https://dotnet.microsoft.com/en-us/apps/aspnet">ASP.NET Core</a>.
I had four reasons for this:</p>
<ol>
<li>While I've used ASP.NET Core and Entity Framework Core in the past, I haven't used them recently.
I wanted to see how the ecosystem had changed in the intervening time.</li>
<li>I wanted to evaluate the tech stack for <a href="/articles/the_climate_change_progress_bar">other projects</a>.</li>
<li>Entity Framework Core is a great ORM, and I may be able to apply its ideologies to other technologies, even if I don't continue using it directly.</li>
<li>In order to evaluate positivity over time, I need as much historical data as possible, which should be stored in a persistent database.</li>
</ol>
<p>One main drawback of ASP.NET Core is the amount of boilerplate necessary.
As a result, I am hesitant to show any clippings from the rewrite, as it takes a lot of space without adding much value to the discussion here.
However, all the code for this project is open source and available on <a href="https://github.com/elijah-potter/hope-meter">GitHub</a>.</p>
<p>Here is the gist of the overall process the rewritten app goes through:</p>
<ol>
<li>Scrape a number of RSS feeds with the method already discussed.</li>
<li>Store the headlines in a table in an SQLite database, along with the timestamp of retrieval and the source URL.</li>
<li>Repeat the first two steps every two minutes, checking and removing duplicates.</li>
</ol>
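<p>The steps above amount to a simple polling loop. As a hypothetical sketch (the real service is ASP.NET Core; the helper below is my own invention, shown with a finite iteration count so it terminates):</p>

```python
import time

def run_scraper(scrape_once, interval_seconds=120.0, iterations=3):
    """Run one scrape-and-store pass (steps 1 and 2) per interval."""
    for i in range(iterations):
        scrape_once()  # fetch the feeds and store deduplicated headlines
        if i + 1 != iterations:
            time.sleep(interval_seconds)  # step 3: wait, then repeat
    return iterations
```

<p>A long-running deployment would loop indefinitely (or use a hosted background service) rather than a fixed iteration count.</p>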
<h3>Finally: The Hope Algorithm</h3>
<p>Finally, we can discuss the actual algorithm.</p>
<p>First, I ran the <a href="https://doi.org/10.1609/icwsm.v8i1.14550">VADER sentiment analysis algorithm</a> over each headline from the last month to evaluate its positivity.
Since VADER returns three values (a "positivity" rating, a "negativity" rating, and a "neutral" rating), I consolidated them into a single number by subtracting negativity from positivity.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>headline rating</mtext><mo>=</mo><mtext>positivity</mtext><mo>−</mo><mtext>negativity</mtext></mrow><annotation encoding="application/x-tex">\text{headline rating} = \text{positivity} - \text{negativity}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">headline rating</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">positivity</span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8623em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">negativity</span></span></span></span></span></span>
<p>Next, I take the mean headline rating from the past two hours and subtract the last month's mean from it.
Finally, I use the month's standard deviation to place it on a scale of 0-100, where 50 is average.
Here is the formula, where <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>h</mi></msub></mrow><annotation encoding="application/x-tex">\bar{x}_h</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.7178em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is the positivity mean from the last two hours and <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>m</mi></msub></mrow><annotation encoding="application/x-tex">\bar{x}_m</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.7178em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" 
style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is the positivity mean from the last month.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>hope score</mtext><mo>=</mo><mn>200</mn><mrow><mo fence="true">(</mo><mfrac><mrow><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>h</mi></msub><mo>−</mo><msub><mover accent="true"><mi>x</mi><mo>ˉ</mo></mover><mi>m</mi></msub></mrow><msub><mi>σ</mi><mi>m</mi></msub></mfrac><mo fence="true">)</mo></mrow><mo>+</mo><mn>50</mn></mrow><annotation encoding="application/x-tex">\text{hope score} = 200\left(\frac{\bar{x}_h - \bar{x}_m}{\sigma_m} \right) + 50</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">hope score</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:2.4em;vertical-align:-0.95em;"></span><span class="mord">200</span><span class="mspace" style="margin-right:0.1667em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size3">(</span></span><span class="mord"><span class="mopen nulldelimiter"></span><span class="mfrac"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.2603em;"><span style="top:-2.314em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">σ</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:-0.0359em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal 
mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span><span style="top:-3.23em;"><span class="pstrut" style="height:3em;"></span><span class="frac-line" style="border-bottom-width:0.04em;"></span></span><span style="top:-3.677em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3361em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">h</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord accent"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.5678em;"><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="mord mathnormal">x</span></span><span style="top:-3em;"><span class="pstrut" style="height:3em;"></span><span class="accent-body" style="left:-0.2222em;"><span class="mord">ˉ</span></span></span></span></span></span></span><span class="msupsub"><span 
class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.1514em;"><span style="top:-2.55em;margin-left:0em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mathnormal mtight">m</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.836em;"><span></span></span></span></span></span><span class="mclose nulldelimiter"></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing size3">)</span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.6444em;"></span><span class="mord">50</span></span></span></span></span>
<p>Naturally, there are a few things to note about this formula.
I assumed that the positivity score is a normal distribution among news headlines (I did not have time to evaluate whether this is actually true).
The score will also often fall <em>outside</em> the range of 0-100.
This is acceptable, since readings that extreme are less accurate anyway.
In those situations, I simply clamp the score to the range 0-100.</p>
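The whole scoring step fits in a few lines of Python; this is a sketch using the standard library, and the sample ratings are made up:

```python
import statistics

def hope_score(hour_ratings, month_ratings):
    """Scale the last two hours' mean against the month's mean and standard
    deviation, per the formula above, then clamp the result to 0-100."""
    mean_h = statistics.mean(hour_ratings)
    mean_m = statistics.mean(month_ratings)
    sigma_m = statistics.stdev(month_ratings)
    raw = 200 * (mean_h - mean_m) / sigma_m + 50
    return max(0.0, min(100.0, raw))

# Made-up ratings: the last two hours were more positive than the month's baseline,
# so the raw score shoots well past 100 and gets clamped.
score = hope_score([0.30, 0.35, 0.32], [0.25, 0.20, 0.30, 0.22, 0.28])
```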
<h2>Communicating Hope</h2>
<p>I wanted to explore and get feedback on a couple different ways of communicating abstract concepts of hope.
First, I threw together a quick website using <a href="https://svelte.dev/">Svelte</a> and <a href="https://tailwindcss.com/">TailwindCSS</a>.</p>
<img src="/images/hope_screenshot.webp" alt="A screenshot of my website prototype">
<p>I ran the website for a couple of days and had friends, family, and peers test it out to get feedback on the concept.</p>
<h3>Feedback</h3>
<p>The people I spoke to had a lot to say about the dial itself.
Notably, the lack of +/- signaling was confusing.
People couldn't tell what a "good" or a "bad" reading on the dial looked like.
One person suggested I include a happy face on the hopeful side and a sad face on the despair side.</p>
<p>We brainstormed the different places we could put a hope-meter.
iOS and Android both support home-screen widgets.
Maybe we could put a global hope-meter on one of those, right next to the clock?
Maybe you could click on a specific spot on the hope meter, and you would be served a news article with that amount of hope?
The possibilities were endless.
I was pushed to look into what a physical hope meter would look like, so I sketched it out.</p>
<img src="/images/hope_sketch.webp" alt="A quick little sketch of a physical hope meter">
<h3>Moving to the Real World</h3>
<p>I got hooked on the physical dial.
I wanted something tangible that I could put somewhere.
Something whose physical weight would give an amount of substance to the thing it measured.</p>
<p>I decided to keep riffing on the sketches I had drawn out.
I loaded the web server I used for the original site onto a Raspberry Pi 3, planning on using the existing HTTP endpoint to control a servo, which in turn moved a hand on a dial.</p>
<p>The only material I needed to buy was a <a href="https://www.adafruit.com/product/154">FS5103R continuous rotation servo</a>.
I already had access to everything else.
While I was waiting on that part to arrive, I modeled out what the physical device <em>could</em> look like.
Initially, I imagined something like this:</p>
<img src="/images/hope_model.jpg" alt="A Blender render of the initial Hope meter 3D model">
<p>All the electronics would be placed inside the bottle, and the servo would be attached to the cap.
The entire contraption would rest on its side.
In hindsight, there are a couple of issues with this concept.
The circular body would cause it to roll all over the place, and the complete lack of access to the inside would make maintenance quite difficult.
That isn't even to mention that the model has no hole at all for a power cord to reach the electronics.</p>
<img src="/images/hope_model_face.webp" alt="Final Hope Meter Face Model">
<p>In the final model (pictured above), I included facial expressions to differentiate hope and despair on the dial.
I also completely removed the enclosure, instead planning on directly attaching the Raspberry Pi and servo to the back of the dial.</p>
<h4>The Print</h4>
<p>With the model complete, I went to my local makerspace and printed it out.
How did it go?
Not so well.</p>
<img src="/images/failed_hope_print.webp" alt="My first (failed) 3D print">
<p>The first print failed spectacularly.
Parts were coming off, and it looked like it was drawn with one of those 3D pens.
Turns out, the issue was that the underlying 3D model had not been exported out of Blender (the 3D modeling software I used) properly, and the geometry was screwed up before it even entered the slicing software.
Garbage in, garbage out, I suppose.</p>
<video controls>
  <source src="/videos/hope_demo.webm" type="video/webm">
</video>
<p>The second print, however, turned out great.
I hot-glued the Raspberry Pi to the back, plugged in the servo, and we were off to the races.
As you can see above, I programmed a small servo control script in Python, which includes a little startup motion.
After the motion is completed, it moves to the current hope value.</p>
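The script itself isn't included in the post, but its core is just a mapping from the 0-100 hope score to a servo position. Here's a hedged sketch: the -1 to 1 output range matches what gpiozero's <code>Servo</code> class expects, and the function name is my own:

```python
def hope_to_servo_value(score: float) -> float:
    """Map a 0-100 hope score onto the -1..1 position range used by
    libraries like gpiozero; out-of-range scores are clamped first."""
    clamped = max(0.0, min(100.0, score))
    return (clamped / 100.0) * 2.0 - 1.0

# A score of 50 (average hope) should land the hand dead-center on the dial.
center = hope_to_servo_value(50)
```

On the Pi, that value could then be assigned to something like `Servo(17).value`, though note that a continuous rotation servo like the FS5103R interprets the value as speed rather than position.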
<p>The video was recorded right after the <a href="https://www.nytimes.com/live/2023/10/11/world/israel-news-hamas-war">Israel-Hamas War</a> began, so global hope values were not doing so hot.
Although it truly is a tragedy, if anything, this is evidence that the system works.</p>
<h2>Reflection</h2>
<p>This project stretched my abilities in a couple key ways.</p>
<p>After the initial prototyping phase, <em>all</em> the code was in programming environments I am relatively unfamiliar with.
While I have a lot of experience with Node.js, Rust, and React, I had some difficulty working with ASP.NET Core and Python.
The difficulties were exacerbated when trying to run everything on a Raspberry Pi, which I have never worked with in this capacity.</p>
<p>Additionally, I found it pretty difficult to get the servo working reliably.
Apparently, servos rely on a potentiometer to stay calibrated.
If the potentiometer moves to an ambient environment with a different temperature than it was calibrated in, the servo starts to move in unexpected ways.
This resulted in frequently inaccurate readings.
If I could go back and re-do this project, I would have used a different kind of motor, maybe a stepper motor.</p>
<p>Part of my motivation for doing this project was to gather more information on the best way to implement a <a href="/articles/the_climate_change_progress_bar">climate change progress bar</a>.
To that end, I found this quite successful.
I now have a better understanding of what kind of server and database architecture I want to use.
I also now know in which formats people are most likely to see global metrics like hope or climate progress (home-screen widgets and push notifications are at the top of the list).</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/hope_sketch.webp" length="0" type="image/webp"/>
        </item>
        <item>
            <title><![CDATA[The Climate Change Progress Bar]]></title>
            <link>https://elijahpotter.dev/articles/the_climate_change_progress_bar</link>
            <guid>https://elijahpotter.dev/articles/the_climate_change_progress_bar</guid>
            <pubDate>Thu, 12 Oct 2023 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>The Climate Change Progress Bar</h1>
<img src="/images/blue_marble.jpg" alt="The Blue Marble">
<h2>The Problem</h2>
<p>Over the last couple years, I've noticed a growing distinction between two groups of people.</p>
<p>On one hand, we have those who deny climate change is a problem.
Some believe it is a <a href="https://en.wikipedia.org/wiki/Climate_change_conspiracy_theory">hoax</a>, while others merely don't grasp the gravity of the situation.
This perspective is misinformed.</p>
<p>On the other hand, there are some who believe climate change is a problem beyond their ability to impact.
They believe that, as mere individuals, there is not a single thing they can do that could have a meaningful effect toward reducing carbon emissions.
This perspective is equally misinformed.</p>
<p>In reality, the climate problem is <em>huge.</em>
It <em>will</em> result in massive habitat losses for human life and wildlife.
In all likelihood, hundreds of millions of people will lose the ability to farm their land, whether from drought, floods, lost soil fertility, or wildfires.</p>
<blockquote>
<p><strong>Side Note:</strong> If you want to learn more about human migration in the 21st century, I highly recommend the documentary <em>Climate Refugees</em>.</p>
</blockquote>
<p>The point of this article is <em>not</em> to convince you that climate change is a real problem that has real solutions.
At this point, I don't need to tell you that the climate problem is actively being solved by some of the most courageous, influential, intelligent minds in the world.
There <em>are</em> solutions out there that are having a <a href="https://www.ted.com/talks/al_gore_the_case_for_optimism_on_climate_change"><strong>huge</strong> impact</a>, but people don't have a straightforward way to discover them.</p>
<p>The problem <strong>I</strong> want to solve is this: there is no single metric or dashboard to see the progress of climate change and its solutions.</p>
<h2>What's the Pitch?</h2>
<p>As I am sure you figured out from the title, I want to make a progress bar for climate change.
A single place people can go to see where we, as a planet and as a species, stand.
Functionally, I want to replicate how we can view the <a href="https://www.usdebtclock.org/">federal government's debt in real time</a>, but for climate change.</p>
<p>I've conversed with a friend of mine, Anyll Markevich, who knows much more about climate issues than I do.
While we would obviously need multiple rounds of peer review before we can put anything of substance out, Anyll has recommended I start my research in a couple specific areas.</p>
<h2>Planetary Boundaries</h2>
<p>There exist some projects and tools that hold a similar goal as the climate change progress bar.
The closest I could find was the Planetary Boundaries project.</p>
<p>When having an argument with someone, there are some lines you just cannot cross.
Some lines are unavoidable.
Godwin's law states that "as an online discussion grows longer, the probability of a comparison to Hitler approaches 1."
While comparing your argumentative opponent to a genocidal dictator is almost never necessary or useful, the line gets crossed eventually.</p>
<p>The planet's ecosystem is similar.
There are certain ecological lines that cannot be crossed, like having more than 10 extinctions per million species in a year, without significant impact on global ecological health.
The planet can handle a certain amount of carbon dioxide in the air, but it cannot handle more than 350 parts per million without significant damage occurring. At the time of writing, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi><msub><mi>O</mi><mn>2</mn></msub></mrow><annotation encoding="application/x-tex">CO_2</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.15em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">O</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is currently at 418 ppm.</p>
<img src="/images/planetary_boundaries.webp" alt="The Planetary Boundaries Graph, Credit Azote for Stockholm Resilience Centre, based on analysis in Richardson et al 2023.">
<p>The above graph shows where the planet is on several planetary boundaries.
As you can see, we've exceeded most of them.</p>
<p>While planetary boundaries are a great way of communicating where we stand on many environmental metrics, the project falls short of our goal in a few key ways.
For one, it doesn't specialize in climate change.
While it <em>does</em> show current carbon dioxide levels, it doesn't include any other greenhouse gases, or how deeply they are affecting global temperatures.
Finally, it is only updated annually.
The Climate Change Progress Bar's key trait is that it is real-time, so people can see how we stand in the climate battle <em>right now.</em></p>
<h2>Possible Data Sources</h2>
<p>This is something I need to speak to more climate experts on.
However, at the bare minimum there are a few elements our technology can look at.</p>
<h3>Real Time Carbon Data</h3>
<p>Anyll suggested I look into the <a href="https://www.nature.com/articles/s41597-020-00708-7">Carbon Monitor</a>, a real-time carbon emissions dataset.
Their API doesn't appear to be publicly accessible, but their code <em>is</em>, so it looks like we can just run it on our own infrastructure.
I need to look into the technical implementation details a bit more, but initial prospects are good.</p>
<h3><a href="https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index">Normalized Difference Vegetation Index</a></h3>
<p>NDVI is an industry-standard way of measuring the photosynthetic productivity of plants around the globe.
It works by applying some simple math on two bands of satellite imagery, both of which are widely available.
The comparatively low computational cost of getting the NDVI of an area is a boon for our real-time goals.
It seems like its main drawback is that it doesn't give us many metrics on biodiversity.</p>
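That "simple math" is a single per-pixel formula, NDVI = (NIR − Red) / (NIR + Red). A quick sketch with made-up band values (NumPy assumed):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI: values near +1 suggest dense, healthy vegetation,
    values near 0 bare ground, and negative values water or clouds."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Tiny mock near-infrared and red reflectance bands (real ones would come
# from satellite imagery such as Landsat or Sentinel-2):
nir_band = np.array([[0.8, 0.6], [0.3, 0.5]])
red_band = np.array([[0.1, 0.2], [0.3, 0.1]])
result = ndvi(nir_band, red_band)
```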
<h2>Getting Started</h2>
<p>We want to get started on an MVP as quickly as possible.
The relative simplicity of the NDVI gives us a great way to set up and test a real-time data processing pipeline.
We aim to include other metrics, like population growth and clean investing, but we want to prepare a pipeline first.</p>
<h2>Feedback</h2>
<p>This post, as well as all other posts related to this project, exist to serve two purposes:</p>
<ol>
<li>To serve as a bookkeeping and whiteboarding device.</li>
<li>To get feedback on every stage of the process.</li>
</ol>
<p>In other words, I want to hear from you!
If you have any ideas for how we can make the Climate Change Progress Bar better, let us know by emailing <a href="mailto:feedback@elijahpotter.dev">feedback@elijahpotter.dev</a>.
If there are <strong>better</strong> metrics we should be including in our index, tell us!</p>]]></description>
            <content:encoded><![CDATA[<h1>The Climate Change Progress Bar</h1>
<img src="/images/blue_marble.jpg" alt="The Blue Marble">
<h2>The Problem</h2>
<p>Over the last couple years, I've noticed a growing distinction between two groups of people.</p>
<p>On one hand, we have those who deny climate change is a problem.
Some believe it is a <a href="https://en.wikipedia.org/wiki/Climate_change_conspiracy_theory">hoax</a>, while others merely don't grasp the gravity of the situation.
This perspective is misinformed.</p>
<p>On the other hand, there are some who believe climate change is a problem beyond their ability to impact.
They believe that, as mere individuals, there is not a single thing they can do that could have a meaningful effect toward reducing carbon emissions.
This perspective is equally misinformed.</p>
<p>In reality, the climate problem is <em>huge.</em>
It <em>will</em> result in massive habitat losses for human life and wildlife.
In all likelihood, hundreds of millions of people will lose the ability to farm their land, whether from drought, floods, lost soil fertility, or wildfires.</p>
<blockquote>
<p><strong>Side Note:</strong> If you want to learn more about human migration in the 21st century, I highly recommend the documentary <em>Climate Refugees</em>.</p>
</blockquote>
<p>The point of this article is <em>not</em> to convince you that climate change is a real problem that has real solutions.
At this point, I don't need to tell you that the climate problem is actively being solved by some of the most courageous, influential, intelligent minds in the world.
There <em>are</em> solutions out there that are having a <a href="https://www.ted.com/talks/al_gore_the_case_for_optimism_on_climate_change"><strong>huge</strong> impact</a>, but people don't have a straightforward way to discover them.</p>
<p>The problem <strong>I</strong> want to solve is this: there is no single metric or dashboard to see the progress of climate change and its solutions.</p>
<h2>What's the Pitch?</h2>
<p>As I am sure you figured out from the title, I want to make a progress bar for climate change.
A single place people can go to see where we, as a planet and as a species, stand.
Functionally, I want to replicate how we can view the <a href="https://www.usdebtclock.org/">federal government's debt in real time</a>, but for climate change.</p>
<p>I've conversed with a friend of mine, Anyll Markevich, who knows much more about climate issues than I do.
While we would obviously need multiple rounds of peer review before we can put anything of substance out, Anyll has recommended I start my research in a couple specific areas.</p>
<h2>Planetary Boundaries</h2>
<p>There exist some projects and tools that hold a similar goal as the climate change progress bar.
The closest I could find was the Planetary Boundaries project.</p>
<p>When having an argument with someone, there are some lines you just cannot cross.
Some lines are unavoidable.
Godwin's law states that "as an online discussion grows longer, the probability of a comparison to Hitler approaches 1."
While comparing your argumentative opponent to a genocidal dictator is almost never necessary or useful, the line gets crossed eventually.</p>
<p>The planet's ecosystem is similar.
There are certain ecological lines that cannot be crossed, like having more than 10 extinctions per million species in a year, without significant impact on global ecological health.
The planet can handle a certain amount of carbon dioxide in the air, but it cannot handle more than 350 parts per million without significant damage occurring. At the time of writing, <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>C</mi><msub><mi>O</mi><mn>2</mn></msub></mrow><annotation encoding="application/x-tex">CO_2</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8333em;vertical-align:-0.15em;"></span><span class="mord mathnormal" style="margin-right:0.07153em;">C</span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">O</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span> is currently at 418 ppm.</p>
<img src="/images/planetary_boundaries.webp" alt="The Planetary Boundaries Graph, Credit Azote for Stockholm Resilience Centre, based on analysis in Richardson et al 2023.">
<p>The above graph shows where the planet is on several planetary boundaries.
As you can see, we've exceeded most of them.</p>
<p>While planetary boundaries are a great way of communicating where we stand on many environmental metrics, the project falls short of our goal in a few key ways.
For one, it doesn't specialize in climate change.
While it <em>does</em> show current carbon dioxide levels, it doesn't include any other greenhouse gases, or how deeply they are affecting global temperatures.
Finally, it is only updated annually.
The Climate Change Progress Bar's key trait is that it is real-time, so people can see how we stand in the climate battle <em>right now.</em></p>
<h2>Possible Data Sources</h2>
<p>This is something I need to speak to more climate experts on.
However, at the bare minimum there are a few elements our technology can look at.</p>
<h3>Real Time Carbon Data</h3>
<p>Anyll suggested I look into the <a href="https://www.nature.com/articles/s41597-020-00708-7">Carbon Monitor</a>, a real-time carbon emissions dataset.
Their API doesn't appear to be publicly accessible, but their code <em>is</em>, so it looks like we can just run it on our own infrastructure.
I need to look into the technical implementation details a bit more, but initial prospects are good.</p>
<h3><a href="https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index">Normalized Difference Vegetation Index</a></h3>
<p>NDVI is an industry-standard way of measuring the photosynthetic productivity of plants around the globe.
It works by applying some simple math on two bands of satellite imagery, both of which are widely available.
The comparatively low computational cost of getting the NDVI of an area is a boon for our real-time goals.
It seems like its main drawback is that it doesn't give us many metrics on biodiversity.</p>
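That "simple math" is a single per-pixel formula, NDVI = (NIR − Red) / (NIR + Red). A quick sketch with made-up band values (NumPy assumed):

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Per-pixel NDVI: values near +1 suggest dense, healthy vegetation,
    values near 0 bare ground, and negative values water or clouds."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Tiny mock near-infrared and red reflectance bands (real ones would come
# from satellite imagery such as Landsat or Sentinel-2):
nir_band = np.array([[0.8, 0.6], [0.3, 0.5]])
red_band = np.array([[0.1, 0.2], [0.3, 0.1]])
result = ndvi(nir_band, red_band)
```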
<h2>Getting Started</h2>
<p>We want to get started on an MVP as quickly as possible.
The relative simplicity of the NDVI gives us a great way to set up and test a real-time data processing pipeline.
We aim to include other metrics, like population growth and clean investing, but we want to prepare a pipeline first.</p>
<h2>Feedback</h2>
<p>This post, as well as all other posts related to this project, exist to serve two purposes:</p>
<ol>
<li>To serve as a bookkeeping and whiteboarding device.</li>
<li>To get feedback on every stage of the process.</li>
</ol>
<p>In other words, I want to hear from you!
If you have any ideas for how we can make the Climate Change Progress Bar better, let us know by emailing <a href="mailto:feedback@elijahpotter.dev">feedback@elijahpotter.dev</a>.
If there are <strong>better</strong> metrics we should be including in our index, tell us!</p>]]></content:encoded>
<enclosure url="https://elijahpotter.dev/images/blue_marble.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[A Case for Procrastination]]></title>
            <link>https://elijahpotter.dev/articles/a_case_for_procrastination</link>
            <guid>https://elijahpotter.dev/articles/a_case_for_procrastination</guid>
            <pubDate>Wed, 27 Sep 2023 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>A Case for Procrastination</h1>
<img src="/images/model_t.jpg" alt="A lineup of Model T Fords">
<p>The most valuable, unique aspect of software development is the speed at which we can iterate.
Software projects that don't iterate quickly and frequently usually stagnate and fail, while projects that work with user feedback on an almost real-time basis are the most successful.
I've noticed the difference personally, with my work at <a href="https://archytasinc.com/">Archytas Automation</a>.
Most recently, I've read <em>The Lean Startup</em>, <em>Zen and the Art of Motorcycle Maintenance</em>, and <em>The Pragmatic Programmer</em>.
While each uses its own terminology, all three go into depth on the topic of fast iteration cycles.</p>
<h2>Lean Manufacturing and Lean Thinking</h2>
<p>Lean manufacturing, as explained in <em>The Lean Startup</em>, is a production methodology focused on avoiding waste.
The book's primary example is the Toyota Production System.</p>
<p>In the mid-1930s, the world's largest car manufacturer was Ford.
Ford's factories produced thousands of unique parts, which together could form a car.
They produced these parts in bulk, and each distinct part had an entire machine dedicated to producing it.
There was a machine that made springs, a machine that made nuts, and a machine that made bolts.
No machine could make any part other than the one it was designed for.</p>
<p>This was great: it meant that cars would be cheaper than ever, and more people could experience car ownership.</p>
<p>Around this time, Toyota entered the automotive industry.
However, they had neither the time nor the money to design and build the same kind of hyper-specific manufacturing machines that Ford used.
Instead, they chose to purchase comparatively few hyper-<strong>generalized</strong> machines.
Their strategy was to build one car at a time, and sell it as soon as possible (ideally the moment it rolled off the factory floor).
This way, if there was an issue with the vehicle, the design could be modified before too many faulty cars were assembled.</p>
<p>The Toyota Production System emphasized flexibility over efficiency.
By staying flexible, Toyota gave itself the ability to optimize all aspects of the business in the middle of a production run.</p>
<p>On one hand, the Ford process looked like this:</p>
<ul>
<li>Get materials for one hundred cars.</li>
<li>Fabricate over a thousand distinct parts required for each car (likely more than a hundred thousand total parts).</li>
<li>Assemble one hundred cars.</li>
<li>Sell one hundred cars.</li>
<li>Receive feedback on vehicle and production process.</li>
<li>Perform all possible changes to the vehicle's design and production process (which would be very few, given the rigidity of both).</li>
</ul>
<p>With this many steps, each taking place over a comparatively long time, the iteration cycles are few and far between.
On the other hand, the Toyota Production System looked like this:</p>
<ul>
<li>Get materials for one car.</li>
<li>Fabricate the relatively few distinct parts required for the car.</li>
<li>If there is an issue with the production process (a machine breaks, someone gets hurt, etc.), stop the whole process and fix the issue, so it will not affect future cars.</li>
<li>Assemble one car.</li>
<li>Sell one car.</li>
<li>Receive feedback on vehicle and production process.</li>
<li>Perform all possible changes to the vehicle's design and production process (which could be quite a lot, given the flexibility of both).</li>
</ul>
<p>Counterintuitively, performing each step with a batch size of one actually ended up being <em>more</em> efficient.
Even though Ford experienced a higher level of efficiency within each batch, their process led to many more wasted parts being manufactured,
as well as a lack of communication between steps, which dramatically increased the amount of wasted effort.
Additionally, because the Toyota Production System allowed comparatively rapid iteration, Toyota vehicles were simply better: more durable, and more useful to their customers.</p>
<h3>Lean Thinking In Software</h3>
<p>In software development, it can be tempting to go the Ford route:
create a detailed plan of all the features you wish to add to your service,
then, develop the features, and deliver them all at once.
Only then do you accept feedback and evaluate the usefulness of each feature.</p>
<p>In <em>The Lean Startup</em>, however, Eric Ries suggests an alternative methodology, inspired by the Toyota Production System.
Before deciding on which features to include in your service in an all-at-once, shot-in-the-dark manner, perform experiments.
Create an absolutely minimal prototype of the feature, give it to a customer, and observe how they use it.
It is possible they find the feature unhelpful, or they use it in a way you didn't expect.
By running the experiment early on, there are far fewer unhelpful features introduced, and the customer gets a product more closely tied to their needs.</p>
<p>Further, it is possible that your understanding of the problem domain is misinformed.
Planning out the entire product release months in advance is problematic when the requirements of the project change (and they will, <strong>often</strong>).</p>
<h2>Why Interpreted Languages Can Be So Productive</h2>
<p>Interpreted programming languages are almost laughably pervasive.
Of the top ten most popular programming languages on the 2023 Stack Overflow Developer Survey, six were interpreted.
That count does not include TypeScript, which can be used in a semi-interpreted manner.
Interpreted language popularity is often chalked up to their ease-of-use, abstraction, and simple, English-like syntax.
However, I believe there is an additional reason: they allow one to iterate much, much faster.</p>
<h2>Micro-iterations</h2>
<p>Iterations can happen on any timescale.
<strong>Micro-iterations</strong> occur over the shortest timescale.
These are small, incremental changes made (usually) while debugging or optimizing code.
The key cost of each micro-iteration is the amount of time (and, by extension, mental overhead) required to evaluate a change.
The longer it takes for tests to compile and run, the longer it takes to debug.</p>
<p>I believe you can model it like so:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>time to debug</mtext><mo>=</mo><mo stretchy="false">(</mo><mtext>time to test</mtext><msup><mo stretchy="false">)</mo><mn>2</mn></msup><mo>∗</mo><mtext>text editing</mtext></mrow><annotation encoding="application/x-tex">\text{time to debug} = (\text{time to test})^2 * \text{text editing}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">time to debug</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.1141em;vertical-align:-0.25em;"></span><span class="mopen">(</span><span class="mord text"><span class="mord">time to test</span></span><span class="mclose"><span class="mclose">)</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8641em;"><span style="top:-3.113em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">text editing</span></span></span></span></span></span>
<blockquote>
<p>This is <strong>not</strong> an exact, representative equation, merely an abstraction to illustrate my point, based on personal experience.</p>
</blockquote>
<p>In other words, the total debugging time increases super-linearly as the time it takes to <em>test</em> a change to the program increases.
Notice how, as the <code>time to test</code> increases, the actual amount of time <code>text editing</code> falls to a negligible ratio of the total.
I believe this comes from the fact that as compilation time increases, the amount of information gleaned per micro-iteration tends to decrease.</p>
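<p>As a toy sketch of this model (the function name and all of the numbers below are mine, purely illustrative):</p>

```python
# Toy model of the claim above: time_to_debug = (time_to_test)^2 * text_editing.
# All numbers are illustrative, not measurements.

def time_to_debug(time_to_test: float, text_editing: float) -> float:
    """Modeled total debugging time, in minutes."""
    return time_to_test ** 2 * text_editing

editing = 2.0  # minutes of hands-on-keyboard time per change

fast = time_to_debug(0.5, editing)   # quick test cycle (e.g. hot reload)
slow = time_to_debug(10.0, editing)  # slow test cycle (e.g. a long compile)

print(fast)  # 0.5
print(slow)  # 200.0
```

<p>Note how, under this model, a 20x slower test cycle yields a 400x longer debugging time for the same amount of editing.</p>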
<h3>JIT Compilation and Hot Reload</h3>
<p>This is where we get to the title of the article.</p>
<p>The concepts of lean thinking extend into compiler technology.
The Toyota Production System includes the concept of "Just-in-time", meaning,
"to only produce exactly what is needed for the product, exactly when in the process it is required."
In other words: avoid planning resource requirements ahead of time, and build systems that only produce what is actually needed.</p>
<p>While the meaning of "Just-in-time" <strong>is different</strong> when we refer to compilers, the line of thinking is ultimately the same.
Instead of wasting valuable, human-scale time on compilation and optimization of irrelevant code, just compile the code you need, when you need it.
This drastically reduces the amount of time to test, and ultimately, the time to debug.</p>
<p>Further, technologies like React and Flutter's Hot Reload make it possible to complete these micro-iterations without having to restart your program.
I believe this is partly why interpreted languages are so beginner-friendly (other than the high level of abstraction they typically provide).
Newcomers to programming spend most of their time on syntactic issues and simple runtime errors.
By making micro-iterations happen as fast as possible, you make these issues less annoying to deal with.</p>
<h3>Procrastination</h3>
<p>If you are in university (like I currently am), or any other level of education, it can actually be beneficial to hold off on doing your homework.
It is not uncommon (at least in my experience) for the project or paper requirements to change at the last minute.
Some of your peers may choose to complete the project or paper as soon as possible.
If this is the case, you may benefit from their experience.
If a certain chain of logic or analysis is confusing or misleading, they can warn you to avoid losing time.
Assuming they are willing, they can also provide insightful feedback on your work mid-way through your writing or study process.</p>
<h2>Conclusion</h2>
<p>Iteration is incredibly important to any design and development cycle, especially in software.
Since I have embraced iteration, I've seen a dramatic improvement in my productivity and effectiveness.
So I encourage you: if your build process takes 30 minutes, spend some time improving it.
If you are working on a project that hasn't been put in front of a customer, try showing it to a family member or non-technical friend.
Ask them for brutal honesty, and your project will be <em>much</em> better off.</p>]]></description>
            <content:encoded><![CDATA[<h1>A Case for Procrastination</h1>
<img src="/images/model_t.jpg" alt="A lineup of Model T Fords">
<p>The most valuable, unique aspect of software development is the speed at which we can iterate.
Software projects that don't iterate quickly and frequently usually stagnate and fail, while projects that work with user feedback on an almost real-time basis are the most successful.
I've noticed the difference personally, with my work at <a href="https://archytasinc.com/">Archytas Automation</a>.
Most recently, I've read <em>The Lean Startup</em>, <em>Zen and the Art of Motorcycle Maintenance</em>, and <em>The Pragmatic Programmer</em>.
While each uses its own terminology, all three explore the topic of fast iteration cycles in depth.</p>
<h2>Lean Manufacturing and Lean Thinking</h2>
<p>Lean manufacturing, as explained in <em>The Lean Startup</em>, is a production methodology focused on avoiding waste.
The book's primary example is the Toyota Production System.</p>
<p>In the mid-1930s, the world's largest car manufacturer was Ford.
Ford's factories produced thousands of unique parts, which together could form a car.
They produced these parts in bulk, and each distinct part had an entire machine dedicated to producing it.
There was a machine that made springs, a machine that made nuts, and a machine that made bolts.
No machine could make any part other than the one it was designed for.</p>
<p>This was great: it meant that cars would be cheaper than ever, and more people could experience car ownership.</p>
<p>Around this time, Toyota entered the automotive industry.
However, they had neither the time nor the money to design and build the same kind of hyper-specific manufacturing machines that Ford used.
Instead, they chose to purchase comparatively few hyper-<strong>generalized</strong> machines.
Their strategy was to build one car at a time, and sell it as soon as possible (ideally the moment it rolled off the factory floor).
This way, if there was an issue with the vehicle, the design could be modified before too many faulty cars were assembled.</p>
<p>The Toyota Production System emphasized flexibility over efficiency.
By staying flexible, Toyota gave itself the ability to optimize all aspects of the business in the middle of a production run.</p>
<p>On one hand, the Ford process looked like this:</p>
<ul>
<li>Get materials for one hundred cars.</li>
<li>Fabricate over a thousand distinct parts required for each car (likely more than a hundred thousand total parts).</li>
<li>Assemble one hundred cars.</li>
<li>Sell one hundred cars.</li>
<li>Receive feedback on vehicle and production process.</li>
<li>Perform all possible changes to the vehicle's design and production process (which would be very few, given the rigidity of both).</li>
</ul>
<p>With this many steps, each taking place over a comparatively long time, the iteration cycles are few and far between.
On the other hand, the Toyota Production System looked like this:</p>
<ul>
<li>Get materials for one car.</li>
<li>Fabricate the relatively few distinct parts required for the car.</li>
<li>If there is an issue with the production process (a machine breaks, someone gets hurt, etc.), stop the whole process and fix the issue, so it will not affect future cars.</li>
<li>Assemble one car.</li>
<li>Sell one car.</li>
<li>Receive feedback on vehicle and production process.</li>
<li>Perform all possible changes to the vehicle's design and production process (which could be quite a lot, given the flexibility of both).</li>
</ul>
<p>Counterintuitively, performing each step with a batch size of one actually ended up being <em>more</em> efficient.
Even though Ford experienced a higher level of efficiency within each batch, their process led to many more wasted parts being manufactured,
as well as a lack of communication between steps, which dramatically increased the amount of wasted effort.
Additionally, because the Toyota Production System allowed comparatively rapid iteration, Toyota vehicles were simply better: more durable, and more useful to their customers.</p>
<h3>Lean Thinking In Software</h3>
<p>In software development, it can be tempting to go the Ford route:
create a detailed plan of all the features you wish to add to your service,
then, develop the features, and deliver them all at once.
Only then do you accept feedback and evaluate the usefulness of each feature.</p>
<p>In <em>The Lean Startup</em>, however, Eric Ries suggests an alternative methodology, inspired by the Toyota Production System.
Before deciding on which features to include in your service in an all-at-once, shot-in-the-dark manner, perform experiments.
Create an absolutely minimal prototype of the feature, give it to a customer, and observe how they use it.
It is possible they find the feature unhelpful, or they use it in a way you didn't expect.
By running the experiment early on, there are far fewer unhelpful features introduced, and the customer gets a product more closely tied to their needs.</p>
<p>Further, it is possible that your understanding of the problem domain is misinformed.
Planning out the entire product release months in advance is problematic when the requirements of the project change (and they will, <strong>often</strong>).</p>
<h2>Why Interpreted Languages Can Be So Productive</h2>
<p>Interpreted programming languages are almost laughably pervasive.
Of the top ten most popular programming languages on the 2023 Stack Overflow Developer Survey, six were interpreted.
That count does not include TypeScript, which can be used in a semi-interpreted manner.
Interpreted language popularity is often chalked up to their ease-of-use, abstraction, and simple, English-like syntax.
However, I believe there is an additional reason: they allow one to iterate much, much faster.</p>
<h2>Micro-iterations</h2>
<p>Iterations can happen on any timescale.
<strong>Micro-iterations</strong> occur over the shortest timescale.
These are small, incremental changes made (usually) while debugging or optimizing code.
The key cost of each micro-iteration is the amount of time (and, by extension, mental overhead) required to evaluate a change.
The longer it takes for tests to compile and run, the longer it takes to debug.</p>
<p>I believe you can model it like so:</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><mtext>time to debug</mtext><mo>=</mo><mo stretchy="false">(</mo><mtext>time to test</mtext><msup><mo stretchy="false">)</mo><mn>2</mn></msup><mo>∗</mo><mtext>text editing</mtext></mrow><annotation encoding="application/x-tex">\text{time to debug} = (\text{time to test})^2 * \text{text editing}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">time to debug</span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:1.1141em;vertical-align:-0.25em;"></span><span class="mopen">(</span><span class="mord text"><span class="mord">time to test</span></span><span class="mclose"><span class="mclose">)</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.8641em;"><span style="top:-3.113em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">∗</span><span class="mspace" style="margin-right:0.2222em;"></span></span><span class="base"><span class="strut" style="height:0.8889em;vertical-align:-0.1944em;"></span><span class="mord text"><span class="mord">text editing</span></span></span></span></span></span>
<blockquote>
<p>This is <strong>not</strong> an exact, representative equation, merely an abstraction to illustrate my point, based on personal experience.</p>
</blockquote>
<p>In other words, the total debugging time increases super-linearly as the time it takes to <em>test</em> a change to the program increases.
Notice how, as the <code>time to test</code> increases, the actual amount of time <code>text editing</code> falls to a negligible ratio of the total.
I believe this comes from the fact that as compilation time increases, the amount of information gleaned per micro-iteration tends to decrease.</p>
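<p>As a toy sketch of this model (the function name and all of the numbers below are mine, purely illustrative):</p>

```python
# Toy model of the claim above: time_to_debug = (time_to_test)^2 * text_editing.
# All numbers are illustrative, not measurements.

def time_to_debug(time_to_test: float, text_editing: float) -> float:
    """Modeled total debugging time, in minutes."""
    return time_to_test ** 2 * text_editing

editing = 2.0  # minutes of hands-on-keyboard time per change

fast = time_to_debug(0.5, editing)   # quick test cycle (e.g. hot reload)
slow = time_to_debug(10.0, editing)  # slow test cycle (e.g. a long compile)

print(fast)  # 0.5
print(slow)  # 200.0
```

<p>Note how, under this model, a 20x slower test cycle yields a 400x longer debugging time for the same amount of editing.</p>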
<h3>JIT Compilation and Hot Reload</h3>
<p>This is where we get to the title of the article.</p>
<p>The concepts of lean thinking extend into compiler technology.
The Toyota Production System includes the concept of "Just-in-time", meaning,
"to only produce exactly what is needed for the product, exactly when in the process it is required."
In other words: avoid planning resource requirements ahead of time, and build systems that only produce what is actually needed.</p>
<p>While the meaning of "Just-in-time" <strong>is different</strong> when we refer to compilers, the line of thinking is ultimately the same.
Instead of wasting valuable, human-scale time on compilation and optimization of irrelevant code, just compile the code you need, when you need it.
This drastically reduces the amount of time to test, and ultimately, the time to debug.</p>
<p>Further, technologies like React and Flutter's Hot Reload make it possible to complete these micro-iterations without having to restart your program.
I believe this is partly why interpreted languages are so beginner-friendly (other than the high level of abstraction they typically provide).
Newcomers to programming spend most of their time on syntactic issues and simple runtime errors.
By making micro-iterations happen as fast as possible, you make these issues less annoying to deal with.</p>
<h3>Procrastination</h3>
<p>If you are in university (like I currently am), or any other level of education, it can actually be beneficial to hold off on doing your homework.
It is not uncommon (at least in my experience) for the project or paper requirements to change at the last minute.
Some of your peers may choose to complete the project or paper as soon as possible.
If this is the case, you may benefit from their experience.
If a certain chain of logic or analysis is confusing or misleading, they can warn you to avoid losing time.
Assuming they are willing, they can also provide insightful feedback on your work mid-way through your writing or study process.</p>
<h2>Conclusion</h2>
<p>Iteration is incredibly important to any design and development cycle, especially in software.
Since I have embraced iteration, I've seen a dramatic improvement in my productivity and effectiveness.
So I encourage you: if your build process takes 30 minutes, spend some time improving it.
If you are working on a project that hasn't been put in front of a customer, try showing it to a family member or non-technical friend.
Ask them for brutal honesty, and your project will be <em>much</em> better off.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/model_t.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[I Designed My Own Pen Plotter]]></title>
            <link>https://elijahpotter.dev/articles/i_designed_my_own_pen_plotter</link>
            <guid>https://elijahpotter.dev/articles/i_designed_my_own_pen_plotter</guid>
            <pubDate>Tue, 18 Oct 2022 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>How I Designed (and built) My Own Pen Plotter</h1>
<p>For the last few months, after reading Preslav Rachev's book <a href="https://p5v.gumroad.com/l/generative-art-in-golang">Generative Art in Go</a>, I have been playing around with writing algorithms that create interesting graphics. I eventually ended up building a <a href="https://github.com/elijah-potter/denim">modular canvas library</a> for Rust to make that easier to do, for me and other people.</p>
<p>After making a few algorithms and posting them to the web, I started looking around for more inspiration. I happened upon a video with this pen plotter:</p>
<img src="/images/reddit_pen_plotter.png" alt="A screencap of a pen plotter from Reddit">
<p>I was amazed. A robot that could draw <em>vector graphics</em>? I had been nerd sniped.</p>
<img src="/images/xkcd_nerd_sniping.png" alt="XKCD Comic #356">
<h2>The Math</h2>
<p>The mathematics involved interested me especially. How do you translate <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix} x &#x26; y \end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing 
size1">]</span></span></span></span></span></span> to rotations of motors? It intimidated me a little at first, but once I sat down and worked it out, I realized it was surprisingly simple.</p>
<img src="/images/plotter_math_diagram.png" alt="A diagram of the math involved to create the plotter">
<p>We consider that the two motors are simply lengthening or shortening each string a specific amount. Here are the equations that dictate the length of each string, given a point <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix} x &#x26; y \end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span 
class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span>, and a distance <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>m</mi></mrow><annotation encoding="application/x-tex">m</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">m</span></span></span></span> between the two motors.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><msqrt><mrow><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup></mrow></msqrt><mo>=</mo><msub><mi>r</mi><mn>0</mn></msub></mrow><annotation encoding="application/x-tex">\sqrt{x^2+y^2} = r_0</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.24em;vertical-align:-0.2333em;"></span><span class="mord sqrt"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.0067em;"><span class="svg-align" style="top:-3.2em;"><span class="pstrut" style="height:3.2em;"></span><span class="mord" style="padding-left:1em;"><span class="mord"><span class="mord mathnormal">x</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span><span style="top:-2.9667em;"><span class="pstrut" style="height:3.2em;"></span><span class="hide-tail" style="min-width:1.02em;height:1.28em;"><svg xmlns="http://www.w3.org/2000/svg" width="400em" 
height="1.28em" viewBox="0 0 400000 1296" preserveAspectRatio="xMinYMin slice"><path d="M263,681c0.7,0,18,39.7,52,119
c34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120
c340,-704.7,510.7,-1060.3,512,-1067
l0 -0
c4.7,-7.3,11,-11,19,-11
H40000v40H1012.3
s-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232
c-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1
s-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26
c-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z
M1001 80h400000v40h-400000z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2333em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.5806em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><msqrt><mrow><mo stretchy="false">(</mo><mi>x</mi><mo>−</mo><mi>m</mi><msup><mo stretchy="false">)</mo><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup></mrow></msqrt><mo>=</mo><msub><mi>r</mi><mn>1</mn></msub></mrow><annotation encoding="application/x-tex">\sqrt{(x-m)^2+y^2} = r_1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.24em;vertical-align:-0.2561em;"></span><span class="mord sqrt"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.9839em;"><span class="svg-align" style="top:-3.2em;"><span class="pstrut" style="height:3.2em;"></span><span class="mord" style="padding-left:1em;"><span class="mopen">(</span><span class="mord mathnormal">x</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord mathnormal">m</span><span class="mclose"><span class="mclose">)</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 
size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span><span style="top:-2.9439em;"><span class="pstrut" style="height:3.2em;"></span><span class="hide-tail" style="min-width:1.02em;height:1.28em;"><svg xmlns="http://www.w3.org/2000/svg" width="400em" height="1.28em" viewBox="0 0 400000 1296" preserveAspectRatio="xMinYMin slice"><path d="M263,681c0.7,0,18,39.7,52,119
c34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120
c340,-704.7,510.7,-1060.3,512,-1067
l0 -0
c4.7,-7.3,11,-11,19,-11
H40000v40H1012.3
s-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232
c-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1
s-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26
c-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z
M1001 80h400000v40h-400000z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2561em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.5806em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span></span>
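<p>Translated into code, these equations are only a couple of lines. A minimal Rust sketch (the names are illustrative, not the actual renderer):</p>

```rust
/// Belt lengths (r0, r1) for a pen at (x, y), with the two motors
/// mounted a distance `m` apart along the top edge: the left motor
/// sits at the origin, the right motor at (m, 0).
fn belt_lengths(x: f64, y: f64, m: f64) -> (f64, f64) {
    let r0 = (x * x + y * y).sqrt();
    let r1 = ((x - m) * (x - m) + y * y).sqrt();
    (r0, r1)
}
```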
<h2>Is This Actually Going to Happen?</h2>
<p>I initially sat down and did the math out of curiosity. I wasn't planning on building anything. After realizing the elegant simplicity of the math, I felt empowered to go through with it, so I threw together a parts list.</p>
<img src="/images/pen_plotter_parts.png" alt="A visual of all the parts needed for the plotter">
<h3>Materials</h3>
<ul>
<li>Whiteboard</li>
<li>Expo Marker</li>
<li>2 Stepper Motors</li>
<li>3D Printer Timing Belt</li>
<li>Arduino</li>
<li>Adafruit Motor Shield</li>
<li>12V 2A Power Adapter</li>
<li>Duct Tape (as any project requires)</li>
</ul>
<p>I wanted to build this from scratch, without using anyone else's designs or software.</p>
<p>I ordered the materials off Amazon, and soon enough I had everything I needed.</p>
<h2>Putting It All Together</h2>
<p>The first step was to solder the motor shield onto the Arduino and wire up the stepper motors. I used my school's laser cutter to make a little bracket.</p>
<img src="/images/mounted_motors_pen_plotter.jpeg" alt="The stepper motors mounted to a bracket">
<p>I am using weights to keep the belts taut on the motors. I mounted everything to a whiteboard so I could iterate on the software faster and more easily.</p>
<img src="/images/mounted_bracket_on_whiteboard_pen_plotter.jpeg" alt="Everything taped onto an old whiteboard. It&#x27;s super rough, but that&#x27;s OK">
<p>I initially didn't have the timing belt, so I prototyped with an old length of wire. This is the last picture before I wrote all the software, which I want to talk about before I show you that version.</p>
<h2>The Software</h2>
<p>There are two separate programs that together make the plotter work. The first runs on the Arduino, accepting commands over serial from the second, which runs on a USB-connected computer (my laptop).</p>
<p>There are three main reasons there needs to be a laptop in the system:</p>
<ol>
<li>The Arduino doesn't have enough program memory</li>
<li>The Arduino cannot easily accept files (like SVGs)</li>
<li>It takes forever for an Arduino program to compile and upload, which makes iteration frustrating.</li>
</ol>
<h3>Arduino</h3>
<p>The Arduino is running a very simple loop:</p>
<ol>
<li>Receive two 32-bit signed integers over serial (one for each motor)</li>
<li>Linearly interpolate the stepper motors to the positions described by the received integers</li>
</ol>
<p>That's it. It's important that the time it takes for each motor to reach its destination is the same.</p>
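<p>The timing constraint above can be sketched as follows. This is a hypothetical Rust model of the loop (the real firmware is Arduino code that steps physical motors rather than returning positions):</p>

```rust
/// One move: both motors start and finish at the same time.
/// `from` and `to` are step counts for (motor 0, motor 1); the motor
/// with the longer travel sets the tick count, so the other motor
/// spreads its fewer steps over the same duration.
fn interpolate(from: (i32, i32), to: (i32, i32)) -> Vec<(i32, i32)> {
    let d0 = to.0 - from.0;
    let d1 = to.1 - from.1;
    let ticks = d0.abs().max(d1.abs()).max(1);
    (1..=ticks)
        .map(|t| (from.0 + d0 * t / ticks, from.1 + d1 * t / ticks))
        .collect()
}
```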
<h3>Laptop</h3>
<p>The laptop is doing all the math.</p>
<p>I used my canvas library <a href="https://github.com/elijah-potter/denim">Denim</a> to do all the virtual drawing. I just added a renderer that:</p>
<ol>
<li>Converts all points to a sequence of belt-lengths</li>
<li>Converts belt-lengths to a sequence of motor movements</li>
</ol>
<p>It then sends the resulting motor movements over serial to the Arduino.</p>
<p>I also wrote a quick little parser using <a href="https://github.com/Geal/nom">nom</a> to parse and execute the math commands you can find inside SVG <code>&#x3C;path></code> elements.</p>
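<p>To give a flavor of what such a parser does, here is a hand-rolled Rust sketch (no nom, and only the absolute <code>M</code> and <code>L</code> commands; a real SVG path parser handles far more):</p>

```rust
/// Parse the absolute `M` (move-to) and `L` (line-to) commands of an
/// SVG path `d` string into (command, x, y) triples. A toy sketch.
fn parse_path(d: &str) -> Vec<(char, f64, f64)> {
    let mut out = Vec::new();
    let mut tokens = d
        .split(|c: char| c.is_whitespace() || c == ',')
        .filter(|s| !s.is_empty());
    while let Some(tok) = tokens.next() {
        let cmd = tok.chars().next().unwrap();
        if cmd == 'M' || cmd == 'L' {
            // The first coordinate may be glued to the command ("M10 20").
            let rest = &tok[1..];
            let x: f64 = if rest.is_empty() {
                tokens.next().unwrap().parse().unwrap()
            } else {
                rest.parse().unwrap()
            };
            let y: f64 = tokens.next().unwrap().parse().unwrap();
            out.push((cmd, x, y));
        }
    }
    out
}
```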
<h2>Final Results</h2>
<img src="/images/pen_plotter_drawing_a.jpeg" alt="The first drawing on the plotter: the letter A">
<p>As you can see, I also made a big, rectangular box to hold the marker. It worked alright, but not great. You also probably noticed the main limitation of my design: it cannot lift the marker off the whiteboard.</p>
<p>I figured the roughness was mainly due to unbalanced weights and the slant of the whiteboard. I adjusted everything and tried again.</p>
<h3>Take Two</h3>
<img src="/images/pen_plotter_drawing_a2.jpeg" alt="A slightly improved letter A">
<p>This one is much better. The lines are crisp and exactly where they are supposed to be. At this point I wanted to try out something a little more advanced. Something a little more <strong>generative</strong>.</p>
<h3>The First Hilbert Curve</h3>
<img src="/images/pen_plotter_drawing_hilbert.jpeg" alt="An attempt at drawing a hilbert curve">
<p>After seeing the plotter draw a Hilbert Curve, I felt proud <em>and</em> a little disappointed. The corners aren't crisp, and the lines aren't straight. Frankly, it looks like it was drawn by a two-year-old.</p>
<h3>Solving the Issue</h3>
<img src="/images/pen_plotter_drawing_hilbert2.jpeg" alt="The final attempt at a Hilbert Curve">
<p>It worked! Now all the lines are precise, crisp, and smooth. The biggest remaining issue is that the marker itself rotates as it moves. That wouldn't normally be a problem, but the marker has a chisel tip, so the rotation causes varying levels of pressure on the whiteboard.</p>
<h2>Conclusion</h2>
<p>This was a really cool project, and I learned a lot. I am not very experienced with robotics, so it really challenged me. I intend to continue working on it: I want to add the ability for the pen to lift off the canvas, and to make the marker holder more rigid. Most of all, I look forward to taking my projects into the <em>real</em> world.</p>]]></description>
            <content:encoded><![CDATA[<h1>How I Designed (and built) My Own Pen Plotter</h1>
<p>For the last few months, after reading Preslav Rachev's book <a href="https://p5v.gumroad.com/l/generative-art-in-golang">Generative Art in Go</a>, I have been playing around with writing algorithms that create interesting graphics. I eventually ended up building a <a href="https://github.com/elijah-potter/denim">modular canvas library</a> for Rust to make that easier to do, for me and other people.</p>
<p>After making a few algorithms and posting them to the web, I started looking around for more inspiration. I happened upon a video with this pen plotter:</p>
<img src="/images/reddit_pen_plotter.png" alt="A screencap of a pen plotter from Reddit">
<p>I was amazed. A robot that could draw <em>vector graphics</em>? I had been nerd sniped.</p>
<img src="/images/xkcd_nerd_sniping.png" alt="XKCD Comic #356">
<h2>The Math</h2>
<p>The mathematics involved interested me especially. How do you translate <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix} x &#x26; y \end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span class="mclose delimcenter" style="top:0em;"><span class="delimsizing
size1">]</span></span></span></span></span></span> to rotations of motors? It intimidated me a little at first, but once I sat down and worked it out, I realized it was surprisingly simple.</p>
<img src="/images/plotter_math_diagram.png" alt="A diagram of the math involved to create the plotter">
<p>We consider that the two motors are simply lengthening or shortening each string a specific amount. Here are the equations that dictate the length of each string, given a point <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mo fence="true">[</mo><mtable rowspacing="0.16em" columnalign="center center" columnspacing="1em"><mtr><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>x</mi></mstyle></mtd><mtd><mstyle scriptlevel="0" displaystyle="false"><mi>y</mi></mstyle></mtd></mtr></mtable><mo fence="true">]</mo></mrow><annotation encoding="application/x-tex">\begin{bmatrix} x &#x26; y \end{bmatrix}</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.2em;vertical-align:-0.35em;"></span><span class="minner"><span class="mopen delimcenter" style="top:0em;"><span class="delimsizing size1">[</span></span><span class="mord"><span class="mtable"><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal">x</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span><span class="arraycolsep" style="width:0.5em;"></span><span class="arraycolsep" style="width:0.5em;"></span><span class="col-align-c"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.85em;"><span style="top:-3.01em;"><span class="pstrut" style="height:3em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.35em;"><span></span></span></span></span></span></span></span><span 
class="mclose delimcenter" style="top:0em;"><span class="delimsizing size1">]</span></span></span></span></span></span>, and a distance <span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML"><semantics><mrow><mi>m</mi></mrow><annotation encoding="application/x-tex">m</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:0.4306em;"></span><span class="mord mathnormal">m</span></span></span></span> between the two motors.</p>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><msqrt><mrow><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup></mrow></msqrt><mo>=</mo><msub><mi>r</mi><mn>0</mn></msub></mrow><annotation encoding="application/x-tex">\sqrt{x^2+y^2} = r_0</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.24em;vertical-align:-0.2333em;"></span><span class="mord sqrt"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:1.0067em;"><span class="svg-align" style="top:-3.2em;"><span class="pstrut" style="height:3.2em;"></span><span class="mord" style="padding-left:1em;"><span class="mord"><span class="mord mathnormal">x</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span><span style="top:-2.9667em;"><span class="pstrut" style="height:3.2em;"></span><span class="hide-tail" style="min-width:1.02em;height:1.28em;"><svg xmlns="http://www.w3.org/2000/svg" width="400em" 
height="1.28em" viewBox="0 0 400000 1296" preserveAspectRatio="xMinYMin slice"><path d="M263,681c0.7,0,18,39.7,52,119
c34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120
c340,-704.7,510.7,-1060.3,512,-1067
l0 -0
c4.7,-7.3,11,-11,19,-11
H40000v40H1012.3
s-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232
c-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1
s-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26
c-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z
M1001 80h400000v40h-400000z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2333em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.5806em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">0</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span></span>
<span class="katex-display"><span class="katex"><span class="katex-mathml"><math xmlns="http://www.w3.org/1998/Math/MathML" display="block"><semantics><mrow><msqrt><mrow><mo stretchy="false">(</mo><mi>x</mi><mo>−</mo><mi>m</mi><msup><mo stretchy="false">)</mo><mn>2</mn></msup><mo>+</mo><msup><mi>y</mi><mn>2</mn></msup></mrow></msqrt><mo>=</mo><msub><mi>r</mi><mn>1</mn></msub></mrow><annotation encoding="application/x-tex">\sqrt{(x-m)^2+y^2} = r_1</annotation></semantics></math></span><span class="katex-html" aria-hidden="true"><span class="base"><span class="strut" style="height:1.24em;vertical-align:-0.2561em;"></span><span class="mord sqrt"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.9839em;"><span class="svg-align" style="top:-3.2em;"><span class="pstrut" style="height:3.2em;"></span><span class="mord" style="padding-left:1em;"><span class="mopen">(</span><span class="mord mathnormal">x</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">−</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord mathnormal">m</span><span class="mclose"><span class="mclose">)</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mbin">+</span><span class="mspace" style="margin-right:0.2222em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.03588em;">y</span><span class="msupsub"><span class="vlist-t"><span class="vlist-r"><span class="vlist" style="height:0.7401em;"><span style="top:-2.989em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 
size3 mtight"><span class="mord mtight">2</span></span></span></span></span></span></span></span></span></span><span style="top:-2.9439em;"><span class="pstrut" style="height:3.2em;"></span><span class="hide-tail" style="min-width:1.02em;height:1.28em;"><svg xmlns="http://www.w3.org/2000/svg" width="400em" height="1.28em" viewBox="0 0 400000 1296" preserveAspectRatio="xMinYMin slice"><path d="M263,681c0.7,0,18,39.7,52,119
c34,79.3,68.167,158.7,102.5,238c34.3,79.3,51.8,119.3,52.5,120
c340,-704.7,510.7,-1060.3,512,-1067
l0 -0
c4.7,-7.3,11,-11,19,-11
H40000v40H1012.3
s-271.3,567,-271.3,567c-38.7,80.7,-84,175,-136,283c-52,108,-89.167,185.3,-111.5,232
c-22.3,46.7,-33.8,70.3,-34.5,71c-4.7,4.7,-12.3,7,-23,7s-12,-1,-12,-1
s-109,-253,-109,-253c-72.7,-168,-109.3,-252,-110,-252c-10.7,8,-22,16.7,-34,26
c-22,17.3,-33.3,26,-34,26s-26,-26,-26,-26s76,-59,76,-59s76,-60,76,-60z
M1001 80h400000v40h-400000z"></path></svg></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.2561em;"><span></span></span></span></span></span><span class="mspace" style="margin-right:0.2778em;"></span><span class="mrel">=</span><span class="mspace" style="margin-right:0.2778em;"></span></span><span class="base"><span class="strut" style="height:0.5806em;vertical-align:-0.15em;"></span><span class="mord"><span class="mord mathnormal" style="margin-right:0.02778em;">r</span><span class="msupsub"><span class="vlist-t vlist-t2"><span class="vlist-r"><span class="vlist" style="height:0.3011em;"><span style="top:-2.55em;margin-left:-0.0278em;margin-right:0.05em;"><span class="pstrut" style="height:2.7em;"></span><span class="sizing reset-size6 size3 mtight"><span class="mord mtight">1</span></span></span></span><span class="vlist-s">​</span></span><span class="vlist-r"><span class="vlist" style="height:0.15em;"><span></span></span></span></span></span></span></span></span></span></span>
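<p>Translated into code, these equations are only a couple of lines. A minimal Rust sketch (the names are illustrative, not the actual renderer):</p>

```rust
/// Belt lengths (r0, r1) for a pen at (x, y), with the two motors
/// mounted a distance `m` apart along the top edge: the left motor
/// sits at the origin, the right motor at (m, 0).
fn belt_lengths(x: f64, y: f64, m: f64) -> (f64, f64) {
    let r0 = (x * x + y * y).sqrt();
    let r1 = ((x - m) * (x - m) + y * y).sqrt();
    (r0, r1)
}
```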
<h2>Is This Actually Going to Happen?</h2>
<p>I initially sat down and did the math out of curiosity. I wasn't planning on building anything. After realizing the elegant simplicity of the math, I felt empowered to go through with it, so I threw together a parts list.</p>
<img src="/images/pen_plotter_parts.png" alt="A visual of all the parts needed for the plotter">
<h3>Materials</h3>
<ul>
<li>Whiteboard</li>
<li>Expo Marker</li>
<li>2 Stepper Motors</li>
<li>3D Printer Timing Belt</li>
<li>Arduino</li>
<li>Adafruit Motor Shield</li>
<li>12V 2A Power Adapter</li>
<li>Duct Tape (as any project requires)</li>
</ul>
<p>I wanted to build this from scratch, without using anyone else's designs or software.</p>
<p>I ordered the materials off Amazon, and soon enough I had everything I needed.</p>
<h2>Putting It All Together</h2>
<p>The first step was to solder the motor shield onto the Arduino and wire up the stepper motors. I used my school's laser cutter to make a little bracket.</p>
<img src="/images/mounted_motors_pen_plotter.jpeg" alt="The stepper motors mounted to a bracket">
<p>I am using weights to keep the belts taut on the motors. I mounted everything to a whiteboard so I could iterate on the software faster and more easily.</p>
<img src="/images/mounted_bracket_on_whiteboard_pen_plotter.jpeg" alt="Everything taped onto an old whiteboard. It&#x27;s super rough, but that&#x27;s OK">
<p>I initially didn't have the timing belt, so I prototyped with an old length of wire. This is the last picture before I wrote all the software, which I want to talk about before I show you that version.</p>
<h2>The Software</h2>
<p>There are two separate programs that together make the plotter work. The first runs on the Arduino, accepting commands over serial from the second, which runs on a USB-connected computer (my laptop).</p>
<p>There are three main reasons there needs to be a laptop in the system:</p>
<ol>
<li>The Arduino doesn't have enough program memory</li>
<li>The Arduino cannot easily accept files (like SVGs)</li>
<li>It takes forever for an Arduino program to compile and upload, which makes iteration frustrating.</li>
</ol>
<h3>Arduino</h3>
<p>The Arduino is running a very simple loop:</p>
<ol>
<li>Receive two 32-bit signed integers over serial (one for each motor)</li>
<li>Linearly interpolate the stepper motors to the positions described by the received integers</li>
</ol>
<p>That's it. It's important that the time it takes for each motor to reach its destination is the same.</p>
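<p>The timing constraint above can be sketched as follows. This is a hypothetical Rust model of the loop (the real firmware is Arduino code that steps physical motors rather than returning positions):</p>

```rust
/// One move: both motors start and finish at the same time.
/// `from` and `to` are step counts for (motor 0, motor 1); the motor
/// with the longer travel sets the tick count, so the other motor
/// spreads its fewer steps over the same duration.
fn interpolate(from: (i32, i32), to: (i32, i32)) -> Vec<(i32, i32)> {
    let d0 = to.0 - from.0;
    let d1 = to.1 - from.1;
    let ticks = d0.abs().max(d1.abs()).max(1);
    (1..=ticks)
        .map(|t| (from.0 + d0 * t / ticks, from.1 + d1 * t / ticks))
        .collect()
}
```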
<h3>Laptop</h3>
<p>The laptop is doing all the math.</p>
<p>I used my canvas library <a href="https://github.com/elijah-potter/denim">Denim</a> to do all the virtual drawing. I just added a renderer that:</p>
<ol>
<li>Converts all points to a sequence of belt-lengths</li>
<li>Converts belt-lengths to a sequence of motor movements</li>
</ol>
<p>It then sends the resulting motor movements over serial to the Arduino.</p>
<p>I also wrote a quick little parser using <a href="https://github.com/Geal/nom">nom</a> to parse and execute the math commands you can find inside SVG <code>&#x3C;path></code> elements.</p>
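<p>To give a flavor of what such a parser does, here is a hand-rolled Rust sketch (no nom, and only the absolute <code>M</code> and <code>L</code> commands; a real SVG path parser handles far more):</p>

```rust
/// Parse the absolute `M` (move-to) and `L` (line-to) commands of an
/// SVG path `d` string into (command, x, y) triples. A toy sketch.
fn parse_path(d: &str) -> Vec<(char, f64, f64)> {
    let mut out = Vec::new();
    let mut tokens = d
        .split(|c: char| c.is_whitespace() || c == ',')
        .filter(|s| !s.is_empty());
    while let Some(tok) = tokens.next() {
        let cmd = tok.chars().next().unwrap();
        if cmd == 'M' || cmd == 'L' {
            // The first coordinate may be glued to the command ("M10 20").
            let rest = &tok[1..];
            let x: f64 = if rest.is_empty() {
                tokens.next().unwrap().parse().unwrap()
            } else {
                rest.parse().unwrap()
            };
            let y: f64 = tokens.next().unwrap().parse().unwrap();
            out.push((cmd, x, y));
        }
    }
    out
}
```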
<h2>Final Results</h2>
<img src="/images/pen_plotter_drawing_a.jpeg" alt="The first drawing on the plotter: the letter A">
<p>As you can see, I also made a big, rectangular box to hold the marker. It worked alright, but not great. You also probably noticed the main limitation of my design: it cannot lift the marker off the whiteboard.</p>
<p>I figured the roughness was mainly due to unbalanced weights and the slant of the whiteboard. I adjusted everything and tried again.</p>
<h3>Take Two</h3>
<img src="/images/pen_plotter_drawing_a2.jpeg" alt="A slightly improved letter A">
<p>This one is much better. The lines are crisp and exactly where they are supposed to be. At this point I wanted to try out something a little more advanced. Something a little more <strong>generative</strong>.</p>
<h3>The First Hilbert Curve</h3>
<img src="/images/pen_plotter_drawing_hilbert.jpeg" alt="An attempt at drawing a hilbert curve">
<p>After seeing the plotter draw a Hilbert Curve, I felt proud <em>and</em> a little disappointed. The corners aren't crisp, and the lines aren't straight. Frankly, it looks like it was drawn by a two-year-old.</p>
<h3>Solving the Issue</h3>
<img src="/images/pen_plotter_drawing_hilbert2.jpeg" alt="The final attempt at a Hilbert Curve">
<p>It worked! Now all the lines are precise, crisp, and smooth. The biggest remaining issue is that the marker itself rotates as it moves. That wouldn't normally be a problem, but the marker has a chisel tip, so the rotation causes varying levels of pressure on the whiteboard.</p>
<h2>Conclusion</h2>
<p>This was a really cool project, and I learned a lot. I am not very experienced with robotics, so it really challenged me. I intend to continue working on it: I want to add the ability for the pen to lift off the canvas, and to make the marker holder more rigid. Most of all, I look forward to taking my projects into the <em>real</em> world.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/pen_plotter_drawing_hilbert2.jpeg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[3 Awesome Ways Computers Generate Randomness]]></title>
            <link>https://elijahpotter.dev/articles/3_awesome_ways_computers_generate_randomness</link>
            <guid>https://elijahpotter.dev/articles/3_awesome_ways_computers_generate_randomness</guid>
            <pubDate>Fri, 04 Mar 2022 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>What Is Randomness?</h1>
<img src="/images/numbers_on_a_screen.jpg" alt="Numbers on a screen">
<p>In case you were born yesterday, let's go over it.</p>
<p>Randomness is, at the most basic level, something that cannot be predicted.
In computer science, when we talk about randomness, we are usually talking about random numbers and the tools we use to get them, random number generators (RNGs).</p>
<h2>Pseudorandomness</h2>
<p>We say an RNG is “pseudorandom” when we have to give it a fixed seed, and it generates random numbers based on that seed.
If we give it the same seed, we will get the same numbers.
It is deterministic.</p>
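<p>A tiny example makes this concrete. The following Rust sketch is a linear congruential generator, one of the simplest pseudorandom algorithms (the constants are the well-known Numerical Recipes parameters); the same seed always yields the same sequence:</p>

```rust
/// A minimal linear congruential generator. Deterministic: two
/// generators created with the same seed produce identical output.
struct Lcg(u32);

impl Lcg {
    fn next(&mut self) -> u32 {
        // state = state * a + c (mod 2^32)
        self.0 = self.0.wrapping_mul(1664525).wrapping_add(1013904223);
        self.0
    }
}
```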
<p>I thought, for a very long time, that computers could only create pseudorandom numbers.
The main reason I thought this, and why you might too, is that utilities like the Random class in .NET are pseudorandom and are seeded by some arbitrary piece of information, like the current time.</p>
<p>Computers aren't useful if they aren't deterministic.
In a perfect, enclosed system, it would be impossible for a computer to generate truly random numbers.</p>
<h2>True Randomness</h2>
<p>The phrase “true randomness” is used to describe things that are impossible to predict, provably so. This is different from chaotic systems, which are predictable over short spans of time.
True randomness is impossible to predict on all scales.</p>
<h2>The Everyday Method</h2>
<p>Allow me to introduce you to <a href="https://en.wikipedia.org/wiki/RDRAND">RDRAND</a> and <a href="https://en.wikipedia.org/wiki/RDRAND#RDSEED">RDSEED</a>, two CPU instructions originally introduced by Intel, that allow programs access to truly random numbers.
These instructions gather data from an on-chip entropy source to provide random numbers.</p>
<p>These instructions use thermal noise to produce white noise, which is then used to generate the random numbers.
The advantage of thermal noise is that it produces genuinely random values.
It's also fast, which means it can be used for everyday things, like generating SSL and TLS keys.</p>
<h3>Do It Yourself</h3>
<p>If you happen to be on a Linux system, you can actually use these instructions yourself. Just run this command:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">dd</span> <span class="hljs-keyword">if</span>=/dev/random count=4 bs=1 status=none | <span class="hljs-built_in">od</span> -An --format=dI
</code></pre>
<p>This command uses <code>dd</code> to read 4 random bytes from <code>/dev/random</code>.
Then it pipes those bytes into <code>od</code>, which formats them as a human-readable signed 32-bit integer.</p>
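<p>The same read can be done from inside a program. A Rust sketch (Linux-specific; it reads <code>/dev/urandom</code>, which draws from the same kernel pool but never blocks):</p>

```rust
use std::fs::File;
use std::io::Read;

/// Read 4 bytes from the kernel's entropy pool and interpret them as a
/// signed 32-bit integer, mirroring the `dd | od` pipeline above.
fn random_i32() -> std::io::Result<i32> {
    let mut buf = [0u8; 4];
    File::open("/dev/urandom")?.read_exact(&mut buf)?;
    Ok(i32::from_ne_bytes(buf))
}
```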
<h2>The Nuclear Method</h2>
<img src="/images/nuclear_plant.jpg" alt="A Nuclear Plant">
<p>One of the big advancements of the 20th Century was the creation of a branch of research called quantum physics.
One of the things we learned was that on the scale of individual particles, it becomes provably impossible to predict state.
Every time a measurement is taken of a subatomic particle, its state changes randomly.</p>
<p>One way we could sample subatomic particles is by placing a Geiger counter next to a bit of radioactive ore.
Because whether an individual atom will decay at any given moment is truly random, we know that the time between ticks of the Geiger counter is also random.</p>
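<p>Turning those tick times into bits can be done by comparing the lengths of consecutive intervals, skipping ties to avoid bias. A hypothetical Rust sketch:</p>

```rust
/// Convert Geiger-counter tick timestamps into random bits: for each
/// pair of consecutive inter-tick intervals, a shorter-then-longer
/// pair yields 0, longer-then-shorter yields 1, and equal pairs are
/// discarded so the output is unbiased.
fn ticks_to_bits(ticks: &[u64]) -> Vec<u8> {
    let intervals: Vec<u64> = ticks.windows(2).map(|w| w[1] - w[0]).collect();
    intervals
        .chunks_exact(2)
        .filter_map(|pair| match pair[0].cmp(&pair[1]) {
            std::cmp::Ordering::Less => Some(0),
            std::cmp::Ordering::Greater => Some(1),
            std::cmp::Ordering::Equal => None,
        })
        .collect()
}
```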
<h2>The Cloudflare Method</h2>
<img src="/images/lava_lamp.jpg" alt="A Lava Lamp">
<p>Cloudflare uses some especially interesting sources of entropy to seed its random number generators.
One way is via lava lamps.
In their lobby, they have a large array of lava lamps.
There is a camera running a live feed of the lava lamps to their servers.
There are two main sources of randomness.</p>
<p><strong>First</strong>, the lava lamps themselves.
Even setting aside the second law of thermodynamics, the movement of a single lava lamp is far too chaotic to predict.
Once the second law of thermodynamics is taken into account, it becomes truly random.</p>
<p><strong>Second</strong>, the camera sensor's noise.
Thanks to the photoelectric effect, the noise that appears on a camera's sensor, however unnoticeable, is truly random.</p>
<p>By combining these two sources, you get a pool of truly random numbers ~60 times a second.
If that doesn't fit your needs, you can use those numbers to feed a cryptographically secure pseudorandom number generator (CSPRNG) to get as many as you want.</p>]]></description>
            <content:encoded><![CDATA[<h1>What Is Randomness?</h1>
<img src="/images/numbers_on_a_screen.jpg" alt="Numbers on a screen">
<p>In case you were born yesterday, let's go over it.</p>
<p>Randomness is, at the most basic level, something that cannot be predicted.
In computer science, when we talk about randomness, we are usually talking about random numbers and the tools we use to get them, random number generators (RNGs).</p>
<h2>Pseudorandomness</h2>
<p>We say an RNG is “pseudorandom” when we have to give it a fixed seed, and it generates random numbers based on that seed.
If we give it the same seed, we will get the same numbers.
It is deterministic.</p>
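<p>A tiny example makes this concrete. The following Rust sketch is a linear congruential generator, one of the simplest pseudorandom algorithms (the constants are the well-known Numerical Recipes parameters); the same seed always yields the same sequence:</p>

```rust
/// A minimal linear congruential generator. Deterministic: two
/// generators created with the same seed produce identical output.
struct Lcg(u32);

impl Lcg {
    fn next(&mut self) -> u32 {
        // state = state * a + c (mod 2^32)
        self.0 = self.0.wrapping_mul(1664525).wrapping_add(1013904223);
        self.0
    }
}
```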
<p>I thought, for a very long time, that computers could only create pseudorandom numbers.
The main reason I thought this, and why you might too, is that utilities like the Random class in .NET are pseudorandom and are seeded by some arbitrary piece of information, like the current time.</p>
<p>Computers aren't useful if they aren't deterministic.
In a perfect, enclosed system, it would be impossible for a computer to generate truly random numbers.</p>
<h2>True Randomness</h2>
<p>The phrase “true randomness” is used to describe things that are impossible to predict, provably so. This is different from chaotic systems, which are predictable over short spans of time.
True randomness is impossible to predict on all scales.</p>
<h2>The Everyday Method</h2>
<p>Allow me to introduce you to <a href="https://en.wikipedia.org/wiki/RDRAND">RDRAND</a> and <a href="https://en.wikipedia.org/wiki/RDRAND#RDSEED">RDSEED</a>, two CPU instructions originally introduced by Intel, that allow programs access to truly random numbers.
These instructions gather data from an on-chip entropy source to provide random numbers.</p>
<p>These instructions utilize thermal noise within the chip to produce white noise, which is then used to generate the random numbers.
The advantage of thermal noise is that it produces genuinely random values.
It's also fast, which means it can be used for everyday tasks, like generating SSL/TLS keys.</p>
<h3>Do It Yourself</h3>
<p>If you happen to be on a Linux system, you can actually use these instructions yourself. Just run this command:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">dd</span> <span class="hljs-keyword">if</span>=/dev/random count=4 bs=1 status=none | <span class="hljs-built_in">od</span> -An --format=dI
</code></pre>
<p>This command uses <code>dd</code> to read 4 random bytes from <code>/dev/random</code>.
Then it pipes those bytes into <code>od</code>, which formats them as a human-readable signed 32-bit integer.</p>
<h2>The Nuclear Method</h2>
<img src="/images/nuclear_plant.jpg" alt="A Nuclear Plant">
<p>One of the big advancements of the 20th Century was the creation of a branch of research called quantum physics.
One of the things we learned was that on the scale of individual particles, it becomes provably impossible to predict state.
Every time a measurement is taken of a subatomic particle, its state changes randomly.</p>
<p>One way we could sample subatomic particles is by placing a Geiger counter next to a bit of radioactive ore.
Because it is impossible to predict whether an individual atom will decay at any given moment, we know that the time between ticks of the Geiger counter is truly random.</p>
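<p>One simple scheme for turning those ticks into bits (similar in spirit to the classic HotBits generator) compares successive inter-tick intervals and emits one bit per pair. The timestamps below are made up; real ones would come from the counter hardware:</p>
<pre><code class="hljs language-rust">fn main() {
    // Hypothetical decay timestamps (in microseconds) from the counter.
    let ticks = [0u64, 9, 12, 14, 17, 60, 70, 95];
    let mut bits = Vec::new();
    let mut i = 0;
    while ticks.len() > i + 2 {
        let first = ticks[i + 1] - ticks[i];
        let second = ticks[i + 2] - ticks[i + 1];
        // Unequal adjacent intervals yield one bit; equal pairs are skipped.
        if first != second {
            bits.push(if first > second { 1 } else { 0 });
        }
        i += 2;
    }
    println!("{:?}", bits);
}
</code></pre>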
<h2>The Cloudflare Method</h2>
<img src="/images/lava_lamp.jpg" alt="A Lava Lamp">
<p>Cloudflare uses some especially interesting sources of entropy to seed its random number generators.
One way is via lava lamps.
In their lobby, they keep a large array of lava lamps, and a camera streams a live feed of them to their servers.
This setup provides two main sources of randomness.</p>
<p><strong>First</strong>, the lava lamps themselves.
The movement of even a single lava lamp, setting aside the second law of thermodynamics, is far too chaotic to predict.
Once the second law of thermodynamics is taken into account, it becomes true randomness.</p>
<p><strong>Second</strong>, the camera sensor's noise.
Thanks to the photoelectric effect, the noise that appears on a camera's sensor, however unnoticeable, is truly random.</p>
<p>By combining these two sources, you get a pool of truly random numbers ~60 times a second.
If that doesn't fit your needs, you can use those numbers to feed a cryptographically secure pseudorandom number generator (CSPRNG) to get as many as you want.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/numbers_on_a_screen.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Build a Wordle Solver Using Rust]]></title>
            <link>https://elijahpotter.dev/articles/build_a_wordle_solver_using_rust</link>
            <guid>https://elijahpotter.dev/articles/build_a_wordle_solver_using_rust</guid>
            <pubDate>Tue, 01 Mar 2022 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Build a Wordle Solver Using Rust</h1>
<h2>The Game</h2>
<p><a href="https://www.nytimes.com/games/wordle/index.html">Wordle</a> is a relatively simple game. If you have ever played Mastermind, it should sound familiar. The goal is to figure out a mystery word with as few guesses as possible. The mystery word changes each day. Here are two example guesses.</p>
<img src="/images/wordle_example.png" alt="Two example guesses from the game Wordle">
<p>After a guess, each letter’s color changes.</p>
<p>Green — The letter is correct.
Yellow — The letter exists in the word, but not in that space.
Gray — The letter does not exist in the word.</p>
<p>As you can see, you get a maximum of six guesses. If you cannot find the mystery word within six guesses, you lose. I have been competing with my grandmother each day to find the word in as few guesses as possible.</p>
<img src="/images/xkcd_nerd_sniping.png" alt="XKCD Comic #356">
<p>This totally nerd-sniped me. I felt an overwhelming urge to build an app that could, conceivably, find the mystery word in as few guesses as possible.</p>
<h2>How I Did It</h2>
<p>First things first, we need a list of English words. I initially used the <a href="http://www.mieliestronk.com/wordlist.html">corncob</a> list, but I found greater success with <a href="https://github.com/dwyl/english-words/">dwyl’s</a> list.</p>
<p>For this project, I decided to use Rust, just because I felt most confident in my ability to make an MVP quickly.</p>
<p>Both the word lists I used are formatted as a sequence of individual words, separated by <code>\n</code> characters. On Windows (which is what I am using), they also have those pesky little <code>\r</code> characters.</p>
<p>Wordle is heavily focused on letters. I can remove items from the word list based on what letters I know aren’t in the mystery word (these are gray letters in-game), and I can remove items based on what letters I know are in the mystery word (the orange or green letters), but in a lot of cases that still leaves a lot of possible words. I need a way to sort words based on how likely their letters are.</p>
<p>To do this, I count how frequently each letter appears in the word list, and give each word a score based on how frequently its components appear.</p>
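<p>As a quick sketch of that scoring idea (the function and frequency numbers below are illustrative, not taken from the original program), a word's score is just the sum of its letters' frequencies:</p>
<pre><code class="hljs language-rust">// Frequencies live in a 26-slot array indexed by letter position.
fn score_word(word: String, freq: [u32; 26]) -> u32 {
    word.chars()
        .map(|c| freq[(c as usize) - ('a' as usize)])
        .sum()
}

fn main() {
    // Made-up frequencies for a few letters; a real run would count
    // occurrences across the whole word list.
    let mut freq = [0u32; 26];
    freq[0] = 8;   // 'a'
    freq[4] = 10;  // 'e'
    freq[19] = 6;  // 't'
    println!("{}", score_word("eat".to_string(), freq));
}
</code></pre>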
<p>The first step in the program is to load the word list and count the letters:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> std::collections::HashMap;

<span class="hljs-comment">// Store the total number of times a letter appears.</span>
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">letter_scores</span> = HashMap::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-comment">// The final list of words. It will make life easier later in the program to store the words as Vec&#x3C;char>.</span>
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">word_list</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">last_word</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-keyword">let</span> <span class="hljs-variable">file</span> = std::fs::<span class="hljs-title function_ invoke__">read</span>(<span class="hljs-string">"corncob_lowercase.txt"</span>)?;

<span class="hljs-comment">// Iterate through all the bytes in the wordlist file, ignoring all `\r` instances.</span>
<span class="hljs-keyword">for</span> <span class="hljs-variable">letter</span> <span class="hljs-keyword">in</span> file {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">letter</span> = letter <span class="hljs-keyword">as</span> <span class="hljs-type">char</span>;
    <span class="hljs-keyword">match</span> letter {
        <span class="hljs-string">'<span class="hljs-char escape_">\n</span>'</span> => {
            word_list.<span class="hljs-title function_ invoke__">push</span>(last_word);
            last_word = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
        }
        <span class="hljs-string">'<span class="hljs-char escape_">\r</span>'</span> => (),
        _ => {
            <span class="hljs-keyword">let</span> <span class="hljs-variable">entry</span> = letter_scores.<span class="hljs-title function_ invoke__">entry</span>(letter).<span class="hljs-title function_ invoke__">or_default</span>();
            *entry += <span class="hljs-number">1</span>;
            last_word.<span class="hljs-title function_ invoke__">push</span>(letter);
        }
    }
}
</code></pre>
<p>Using the default <code>HashMap</code> (which uses <code>SipHash</code>, which isn’t great for single-character keys) probably isn’t the best choice performance-wise, but this is just a toy program and doesn’t need to be the fastest thing in the world.</p>
<p>Next, we need to go through the word list, and eliminate words that contain gray letters. Here is a function that helps do that:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> std::cmp::min;

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">matches_found</span>(
    word: &#x26;[<span class="hljs-type">char</span>],
    found: &#x26;[<span class="hljs-type">char</span>],
    not: &#x26;[<span class="hljs-type">char</span>],
    must: &#x26;[<span class="hljs-type">char</span>],
    masks: &#x26;[<span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">char</span>>],
) <span class="hljs-punctuation">-></span> <span class="hljs-type">bool</span> {
    <span class="hljs-comment">// Check if the word contains a letter we know *isn't* in the mystery. &#x3C;-- The gray letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">c</span> <span class="hljs-keyword">in</span> not {
        <span class="hljs-keyword">if</span> word.<span class="hljs-title function_ invoke__">contains</span>(c) {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
    }

    <span class="hljs-comment">// Check if the word contains the letters we don't know the positions of, but know they are in the mystery word.. &#x3C;-- The orange letters.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">found_letters</span> = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">for</span> <span class="hljs-variable">c</span> <span class="hljs-keyword">in</span> must {
        <span class="hljs-keyword">if</span> word.<span class="hljs-title function_ invoke__">contains</span>(c) {
            found_letters += <span class="hljs-number">1</span>;
        }
    }
    <span class="hljs-keyword">if</span> found_letters &#x3C; must.<span class="hljs-title function_ invoke__">len</span>() {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
    }

    <span class="hljs-comment">// Check if the word has letters we know exist in the word, but not at the right spots. &#x3C;-- The orange letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">mask</span> <span class="hljs-keyword">in</span> masks {
        <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..<span class="hljs-title function_ invoke__">min</span>(word.<span class="hljs-title function_ invoke__">len</span>(), mask.<span class="hljs-title function_ invoke__">len</span>()) {
            <span class="hljs-keyword">if</span> word[i] == mask[i] {
                <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
            }
        }
    }

    <span class="hljs-comment">// Check if the word contains the already found (green) letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..<span class="hljs-title function_ invoke__">min</span>(word.<span class="hljs-title function_ invoke__">len</span>(), found.<span class="hljs-title function_ invoke__">len</span>()) {
        <span class="hljs-keyword">if</span> found[i] != <span class="hljs-string">' '</span> &#x26;&#x26; word[i] != found[i] {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
    }

    <span class="hljs-literal">true</span>
}
</code></pre>
<p>It accepts a few different char slices:</p>
<ul>
<li><strong><em>Word</em></strong>: the word we want to check.</li>
<li><strong><em>Found</em></strong>: this is a slice containing the letters we have found (the green ones), with “ ” (space) characters in the locations we don’t know the character of.</li>
<li><strong><em>Not</em></strong>: this is a slice containing the letters we know aren’t in the mystery word.</li>
<li><strong><em>Must</em></strong>: this is a slice containing the letters we know are in the mystery word, but whose positions we don’t know.</li>
<li><strong><em>Masks</em></strong>: this is a series of masks. We remove every word that has letters that match any mask here. This is useful for eliminating words in the wordlist that contain punctuation and for eliminating words that contain orange letters, but in positions we know they aren’t.</li>
</ul>
<p>Now all we have to do is run each word in the word list and see if it matches our already known characters, updating the contents of each slice with new information after each guess.</p>
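<p>A minimal sketch of that loop, with made-up candidates and a toy score standing in for the real letter-frequency one:</p>
<pre><code class="hljs language-rust">fn main() {
    // Hypothetical remaining candidates after a couple of guesses.
    let mut words = vec!["crane", "slate", "trace", "quick", "stone"];
    // Gray letter: drop anything containing a 'q'.
    words.retain(|w| !w.contains('q'));
    // Orange letter: keep only words containing an 'a'.
    words.retain(|w| w.contains('a'));
    // Rank the survivors: here, by how many letters come from "etaoin".
    words.sort_by_key(|w| {
        std::cmp::Reverse(w.chars().filter(|c| "etaoin".contains(*c)).count())
    });
    println!("{:?}", words);
}
</code></pre>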
<h2>Why You Should Care</h2>
<p>This sounds like a useless problem. It is. There is no way this will benefit anyone other than me, and I definitely won’t use this when I’m actually competing with my grandmother.</p>
<p>Then why did you do it?</p>
<p>Useless answers to useless problems are useful. They teach us how to improve, without the pressure of real stakes. They are also just plain fun.</p>
<p>It’s also a reflection. How would you have approached this problem in the past? How has your thinking improved? Maybe it’s a bit grandiose to say this little Wordle solver is the key to self-reflection, but I don’t think it’s that far off.</p>
<hr>
<h2>A Reflection From Months Later</h2>
<p>Hi! I am returning to this project months later with a few thoughts.</p>
<p>When I first wrote this article, I completely neglected to share my fitness test for each word. In hindsight, it's a good thing I didn't. It was the exact method 3Blue1Brown described as "naive" in his (fantastic) <a href="https://youtu.be/v68zYyaEmEA">video</a> on this very topic.</p>]]></description>
            <content:encoded><![CDATA[<h1>Build a Wordle Solver Using Rust</h1>
<h2>The Game</h2>
<p><a href="https://www.nytimes.com/games/wordle/index.html">Wordle</a> is a relatively simple game. If you have ever played Mastermind, it should sound familiar. The goal is to figure out a mystery word with as few guesses as possible. The mystery word changes each day. Here are two example guesses.</p>
<img src="/images/wordle_example.png" alt="Two example guesses from the game Wordle">
<p>After a guess, each letter’s color changes.</p>
<p>Green — The letter is correct.
Yellow — The letter exists in the word, but not in that space.
Gray — The letter does not exist in the word.</p>
<p>As you can see, you get a maximum of six guesses. If you cannot find the mystery word within six guesses, you lose. I have been competing with my grandmother each day to find the word in as few guesses as possible.</p>
<img src="/images/xkcd_nerd_sniping.png" alt="XKCD Comic #356">
<p>This totally nerd-sniped me. I felt an overwhelming urge to build an app that could, conceivably, find the mystery word in as few guesses as possible.</p>
<h2>How I Did It</h2>
<p>First things first, we need a list of English words. I initially used the <a href="http://www.mieliestronk.com/wordlist.html">corncob</a> list, but I found greater success with <a href="https://github.com/dwyl/english-words/">dwyl’s</a> list.</p>
<p>For this project, I decided to use Rust, just because I felt most confident in my ability to make an MVP quickly.</p>
<p>Both the word lists I used are formatted as a sequence of individual words, separated by <code>\n</code> characters. On Windows (which is what I am using), they also have those pesky little <code>\r</code> characters.</p>
<p>Wordle is heavily focused on letters. I can remove items from the word list based on what letters I know aren’t in the mystery word (these are gray letters in-game), and I can remove items based on what letters I know are in the mystery word (the orange or green letters), but in a lot of cases that still leaves a lot of possible words. I need a way to sort words based on how likely their letters are.</p>
<p>To do this, I count how frequently each letter appears in the word list, and give each word a score based on how frequently its components appear.</p>
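<p>As a quick sketch of that scoring idea (the function and frequency numbers below are illustrative, not taken from the original program), a word's score is just the sum of its letters' frequencies:</p>
<pre><code class="hljs language-rust">// Frequencies live in a 26-slot array indexed by letter position.
fn score_word(word: String, freq: [u32; 26]) -> u32 {
    word.chars()
        .map(|c| freq[(c as usize) - ('a' as usize)])
        .sum()
}

fn main() {
    // Made-up frequencies for a few letters; a real run would count
    // occurrences across the whole word list.
    let mut freq = [0u32; 26];
    freq[0] = 8;   // 'a'
    freq[4] = 10;  // 'e'
    freq[19] = 6;  // 't'
    println!("{}", score_word("eat".to_string(), freq));
}
</code></pre>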
<p>The first step in the program is to load the word list and count the letters:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> std::collections::HashMap;

<span class="hljs-comment">// Store the total number of times a letter appears.</span>
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">letter_scores</span> = HashMap::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-comment">// The final list of words. It will make life easier later in the program to store the words as Vec&#x3C;char>.</span>
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">word_list</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">last_word</span> = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
<span class="hljs-keyword">let</span> <span class="hljs-variable">file</span> = std::fs::<span class="hljs-title function_ invoke__">read</span>(<span class="hljs-string">"corncob_lowercase.txt"</span>)?;

<span class="hljs-comment">// Iterate through all the bytes in the wordlist file, ignoring all `\r` instances.</span>
<span class="hljs-keyword">for</span> <span class="hljs-variable">letter</span> <span class="hljs-keyword">in</span> file {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">letter</span> = letter <span class="hljs-keyword">as</span> <span class="hljs-type">char</span>;
    <span class="hljs-keyword">match</span> letter {
        <span class="hljs-string">'<span class="hljs-char escape_">\n</span>'</span> => {
            word_list.<span class="hljs-title function_ invoke__">push</span>(last_word);
            last_word = <span class="hljs-type">Vec</span>::<span class="hljs-title function_ invoke__">new</span>();
        }
        <span class="hljs-string">'<span class="hljs-char escape_">\r</span>'</span> => (),
        _ => {
            <span class="hljs-keyword">let</span> <span class="hljs-variable">entry</span> = letter_scores.<span class="hljs-title function_ invoke__">entry</span>(letter).<span class="hljs-title function_ invoke__">or_default</span>();
            *entry += <span class="hljs-number">1</span>;
            last_word.<span class="hljs-title function_ invoke__">push</span>(letter);
        }
    }
}
</code></pre>
<p>Using the default <code>HashMap</code> (which uses <code>SipHash</code>, which isn’t great for single-character keys) probably isn’t the best choice performance-wise, but this is just a toy program and doesn’t need to be the fastest thing in the world.</p>
<p>Next, we need to go through the word list, and eliminate words that contain gray letters. Here is a function that helps do that:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> std::cmp::min;

<span class="hljs-keyword">fn</span> <span class="hljs-title function_">matches_found</span>(
    word: &#x26;[<span class="hljs-type">char</span>],
    found: &#x26;[<span class="hljs-type">char</span>],
    not: &#x26;[<span class="hljs-type">char</span>],
    must: &#x26;[<span class="hljs-type">char</span>],
    masks: &#x26;[<span class="hljs-type">Vec</span>&#x3C;<span class="hljs-type">char</span>>],
) <span class="hljs-punctuation">-></span> <span class="hljs-type">bool</span> {
    <span class="hljs-comment">// Check if the word contains a letter we know *isn't* in the mystery. &#x3C;-- The gray letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">c</span> <span class="hljs-keyword">in</span> not {
        <span class="hljs-keyword">if</span> word.<span class="hljs-title function_ invoke__">contains</span>(c) {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
    }

    <span class="hljs-comment">// Check if the word contains the letters we don't know the positions of, but know they are in the mystery word.. &#x3C;-- The orange letters.</span>
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">found_letters</span> = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">for</span> <span class="hljs-variable">c</span> <span class="hljs-keyword">in</span> must {
        <span class="hljs-keyword">if</span> word.<span class="hljs-title function_ invoke__">contains</span>(c) {
            found_letters += <span class="hljs-number">1</span>;
        }
    }
    <span class="hljs-keyword">if</span> found_letters &#x3C; must.<span class="hljs-title function_ invoke__">len</span>() {
        <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
    }

    <span class="hljs-comment">// Check if the word has letters we know exist in the word, but not at the right spots. &#x3C;-- The orange letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">mask</span> <span class="hljs-keyword">in</span> masks {
        <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..<span class="hljs-title function_ invoke__">min</span>(word.<span class="hljs-title function_ invoke__">len</span>(), mask.<span class="hljs-title function_ invoke__">len</span>()) {
            <span class="hljs-keyword">if</span> word[i] == mask[i] {
                <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
            }
        }
    }

    <span class="hljs-comment">// Check if the word contains the already found (green) letters.</span>
    <span class="hljs-keyword">for</span> <span class="hljs-variable">i</span> <span class="hljs-keyword">in</span> <span class="hljs-number">0</span>..<span class="hljs-title function_ invoke__">min</span>(word.<span class="hljs-title function_ invoke__">len</span>(), found.<span class="hljs-title function_ invoke__">len</span>()) {
        <span class="hljs-keyword">if</span> found[i] != <span class="hljs-string">' '</span> &#x26;&#x26; word[i] != found[i] {
            <span class="hljs-keyword">return</span> <span class="hljs-literal">false</span>;
        }
    }

    <span class="hljs-literal">true</span>
}
</code></pre>
<p>It accepts a few different char slices:</p>
<ul>
<li><strong><em>Word</em></strong>: the word we want to check.</li>
<li><strong><em>Found</em></strong>: this is a slice containing the letters we have found (the green ones), with “ ” (space) characters in the locations we don’t know the character of.</li>
<li><strong><em>Not</em></strong>: this is a slice containing the letters we know aren’t in the mystery word.</li>
<li><strong><em>Must</em></strong>: this is a slice containing the letters we know are in the mystery word, but whose positions we don’t know.</li>
<li><strong><em>Masks</em></strong>: this is a series of masks. We remove every word that has letters that match any mask here. This is useful for eliminating words in the wordlist that contain punctuation and for eliminating words that contain orange letters, but in positions we know they aren’t.</li>
</ul>
<p>Now all we have to do is run each word in the word list and see if it matches our already known characters, updating the contents of each slice with new information after each guess.</p>
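<p>A minimal sketch of that loop, with made-up candidates and a toy score standing in for the real letter-frequency one:</p>
<pre><code class="hljs language-rust">fn main() {
    // Hypothetical remaining candidates after a couple of guesses.
    let mut words = vec!["crane", "slate", "trace", "quick", "stone"];
    // Gray letter: drop anything containing a 'q'.
    words.retain(|w| !w.contains('q'));
    // Orange letter: keep only words containing an 'a'.
    words.retain(|w| w.contains('a'));
    // Rank the survivors: here, by how many letters come from "etaoin".
    words.sort_by_key(|w| {
        std::cmp::Reverse(w.chars().filter(|c| "etaoin".contains(*c)).count())
    });
    println!("{:?}", words);
}
</code></pre>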
<h2>Why You Should Care</h2>
<p>This sounds like a useless problem. It is. There is no way this will benefit anyone other than me, and I definitely won’t use this when I’m actually competing with my grandmother.</p>
<p>Then why did you do it?</p>
<p>Useless answers to useless problems are useful. They teach us how to improve, without the pressure of real stakes. They are also just plain fun.</p>
<p>It’s also a reflection. How would you have approached this problem in the past? How has your thinking improved? Maybe it’s a bit grandiose to say this little Wordle solver is the key to self-reflection, but I don’t think it’s that far off.</p>
<hr>
<h2>A Reflection From Months Later</h2>
<p>Hi! I am returning to this project months later with a few thoughts.</p>
<p>When I first wrote this article, I completely neglected to share my fitness test for each word. In hindsight, it's a good thing I didn't. It was the exact method 3Blue1Brown described as "naive" in his (fantastic) <a href="https://youtu.be/v68zYyaEmEA">video</a> on this very topic.</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/wordle_example.png" length="0" type="image/png"/>
        </item>
        <item>
            <title><![CDATA[Why Rust May Be More Attractive than JavaScript]]></title>
            <link>https://elijahpotter.dev/articles/why_rust_may_be_more_attractive_than_javascript</link>
            <guid>https://elijahpotter.dev/articles/why_rust_may_be_more_attractive_than_javascript</guid>
            <pubDate>Fri, 25 Feb 2022 07:00:00 GMT</pubDate>
            <description><![CDATA[<h1>Why Rust Might Attract More Developers Than Java and JavaScript</h1>
<p>Rust is undeniably a well-loved language; after all, it has placed as the "#1 Most Loved Programming Language" in the StackOverflow Developer Survey for the sixth year running.</p>
<p>Reading this, you are probably very aware of why so many people like it, but I will be going over those reasons anyway, at least before I reveal its secret weapon.</p>
<p><strong>It’s fast, very fast</strong>. While it may not be a perfect benchmark, The Computer Language Benchmarks Game measures Rust as falling just a hair behind the fastest language, <strong>C</strong>.</p>
<p>That makes sense, given that Rust was meant to replace C/C++. Even better than the benchmark: in many situations, the Rust compiler can optimize code significantly better than C or C++ compilers, simply because it has more information about the developer’s intent.</p>
<p><strong>It’s consistent</strong>. More than in pretty much any other language, Rust developers can expect that, if their code compiles, it works. When it doesn’t compile, the compiler gives better feedback and advice than any other language’s. It can even check if your arithmetic is wrong.</p>
<p><strong>It’s complicated</strong>, but only where it needs to be. The language, as well as its standard library, is built very consciously by its users. In most places where you would otherwise write tedious boilerplate, Rust makes it easy. It also gives you the freedom to write code that does exactly what you want, exactly how you want. We will get back to this later.</p>
<h2>The Problem</h2>
<p>When starting a new project, what is the first thing you are going to do? It depends on what stack you are using.</p>
<p>If you want to use NodeJS, it might look something like:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">mkdir</span> new_project
<span class="hljs-built_in">touch</span> index.js
npm init
</code></pre>
<p>That is just the bare minimum. If you want to add any dependencies, you have to jump through an unreasonable number of hoops, and the standard package registry, npm, is known for being quite insecure, not to mention controlled by a private corporation.</p>
<p>If you want to use Java, you have to use some kind of template, otherwise it’s impossible to create a project from scratch. Even then, you have to decide between two or three package managers, and deal with it when libraries don’t support your choice.</p>
<p>If you want to use C or C++, forget it. As a beginner, you are going to be stuck figuring out how to use the local linker and compiler to get any library to work.</p>
<h2>The Rusty Solution</h2>
<p><strong>Cargo</strong> is one of the simplest, easiest-to-use build tools out there. It does everything you might need, and it may be the reason Rust soon becomes a major player. 99% of the time, you can just copy a given library’s dependency line directly from <a href="https://crates.io">crates.io</a> (a registry owned and operated by the Rust Foundation) into your <code>Cargo.toml</code>, and you are off to the races.</p>
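<p>For comparison with the NodeJS setup above, a new Rust project with a dependency looks like this (the <code>rand</code> crate is just an example):</p>
<pre><code class="hljs language-bash"># One command scaffolds everything: the directory, a git repo, the
# Cargo.toml manifest, and a src/main.rs that already compiles.
cargo new new_project
cd new_project
# A dependency is a single line under [dependencies] in Cargo.toml, e.g.:
#   rand = "0.8"
cargo run
</code></pre>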
<p>Testing is easy too: just run <code>cargo test</code>, and you're done. It covers both unit tests and integration tests if you need them.</p>
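<p>A minimal sketch of what <code>cargo test</code> picks up, with nothing extra to install or configure (the <code>add</code> function is just an example):</p>
<pre><code class="hljs language-rust">pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

// `cargo test` finds and runs every #[test] function automatically;
// the module below is compiled only when testing.
#[cfg(test)]
mod tests {
    use super::add;

    #[test]
    fn it_adds() {
        assert_eq!(add(2, 2), 4);
    }
}

fn main() {
    println!("{}", add(2, 2));
}
</code></pre>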
<p><code>cargo doc</code> provides a consistent and easy way to document code. It’s also great for users, giving developers access to a straightforward representation of what a library looks like.</p>
<p><strong>It’s extensible</strong>: some of the greatest binary crates out there exist to extend Cargo’s functionality. So, in the few areas where <code>cargo</code> on its own doesn’t cut it, there are additional crates to fill the gaps.</p>
<h2>Closing</h2>
<p>At least for the moment, of all the great reasons to use Rust, <code>cargo</code> will be the most significant driver of its rise in popularity.</p>
<img src="/images/most_loved_stackoverflow_2021.png" alt="Most loved programming languages, according to the 2021 StackOverflow Developer Survey">
<p>Rust can be tough to learn, especially for people who are used to dynamic languages like JavaScript. But where Rust beats most other languages is the build system. <code>cargo</code> alone makes Rust more approachable than C/C++, Java, and many others.</p>
<p>More and more new developers will choose Rust over those other languages, because it is easier to start with, easier to learn, and has a great community to ask for help.</p>
            <content:encoded><![CDATA[<h1>Why Rust Might Attract More Developers Than Java and JavaScript</h1>
<p>Rust is undeniably a well-loved language; after all, it has placed as the "#1 Most Loved Programming Language" in the StackOverflow Developer Survey for the sixth year running.</p>
<p>Reading this, you are probably very aware of why so many people like it, but I will be going over those reasons anyway, at least before I reveal its secret weapon.</p>
<p><strong>It’s fast, very fast</strong>. While it may not be a perfect benchmark, The Computer Language Benchmarks Game measures Rust as falling just a hair behind the fastest language, <strong>C</strong>.</p>
<p>That makes sense, given that Rust was meant to replace C/C++. Even better than the benchmark: in many situations, the Rust compiler can optimize code significantly better than C or C++ compilers, simply because it has more information about the developer’s intent.</p>
<p><strong>It’s consistent</strong>. More than in pretty much any other language, Rust developers can expect that, if their code compiles, it works. When it doesn’t compile, the compiler gives better feedback and advice than any other language’s. It can even check if your arithmetic is wrong.</p>
<p><strong>It’s complicated</strong>, but only where it needs to be. The language, as well as its standard library, is built very consciously by its users. In most places where you would otherwise write tedious boilerplate, Rust makes it easy. It also gives you the freedom to write code that does exactly what you want, exactly how you want. We will get back to this later.</p>
<h2>The Problem</h2>
<p>When starting a new project, what is the first thing you are going to do? It depends on what stack you are using.</p>
<p>If you want to use NodeJS, it might look something like:</p>
<pre><code class="hljs language-bash"><span class="hljs-built_in">mkdir</span> new_project
<span class="hljs-built_in">touch</span> index.js
npm init
</code></pre>
<p>That is just the bare minimum. If you want to add any dependencies, you have to jump through an unreasonable number of hoops, and the standard package registry, npm, is known for being quite insecure, not to mention controlled by a private corporation.</p>
<p>If you want to use Java, you have to use some kind of template, otherwise it’s impossible to create a project from scratch. Even then, you have to decide between two or three package managers, and deal with it when libraries don’t support your choice.</p>
<p>If you want to use C or C++, forget it. As a beginner, you are going to be stuck figuring out how to drive the local linker and compiler just to get any library to work.</p>
<h2>The Rusty Solution</h2>
<p><strong>Cargo</strong> is one of the simplest, easiest-to-use build tools out there. It does everything you might need, and it may be the reason Rust soon becomes a major player. 99% of the time, you can just copy and paste a given library’s dependency line directly from <a href="https://crates.io">crates.io</a>, a repository owned and operated by the Rust Foundation, and you are off to the races.</p>
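<p>For example, pulling in a library can be as simple as pasting one line into <code>Cargo.toml</code> (the crate and version here are just an illustration):</p>

```toml
[dependencies]
# Copied straight from the crate's page on crates.io.
serde = "1.0"
```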
<p>Testing is easy too: just run <code>cargo test</code> and you’re done. It covers both unit tests and integration tests, if you need them.</p>
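<p>As a sketch, a unit test that <code>cargo test</code> picks up automatically might look like this (the <code>add</code> function is purely illustrative):</p>

```rust
// An ordinary function we want to test.
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}

#[cfg(test)]
mod tests {
    use super::add;

    // `cargo test` discovers and runs every function marked #[test].
    #[test]
    fn adds_two_numbers() {
        assert_eq!(add(2, 2), 4);
    }
}
```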
<p><code>cargo doc</code> provides a consistent and easy way to document code. It’s also great for users, giving developers access to a straightforward representation of what a library looks like.</p>
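<p>The input to <code>cargo doc</code> is just ordinary <code>///</code> doc comments; a minimal sketch (the <code>square</code> function is a placeholder):</p>

```rust
/// Returns the square of `n`.
///
/// `cargo doc` renders comments like this one into browsable HTML
/// for every public item in the crate.
pub fn square(n: i64) -> i64 {
    n * n
}
```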
<p><strong>It’s extensible</strong>: some of the greatest binary crates out there exist to extend <code>cargo</code>’s functionality. So, in the few areas where <code>cargo</code> on its own doesn’t cut it, there are additional crates to fill the gaps.</p>
<h2>Closing</h2>
<p>At least for the moment, of all the great reasons to use Rust, <code>cargo</code> will be the most significant driver of its rise in popularity.</p>
<img src="/images/most_loved_stackoverflow_2021.png" alt="Most loved programming languages, according to the 2021 StackOverflow Developer Survey">
<p>Rust can be tough to learn, especially for people who are used to dynamic languages like JavaScript. But where Rust beats most languages is the build system. <code>cargo</code> alone makes Rust more approachable than C/C++, Java, and many others.</p>
<p>More and more new developers will turn to Rust instead of those other languages, because it is easier to get started with, easier to learn, and has a great community to ask for help.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Write a Discord Bot in Rust]]></title>
            <link>https://elijahpotter.dev/articles/how_to_write_a_discord_bot_in_rust</link>
            <guid>https://elijahpotter.dev/articles/how_to_write_a_discord_bot_in_rust</guid>
            <pubDate>Wed, 02 Jun 2021 06:00:00 GMT</pubDate>
            <description><![CDATA[<h1>How to Write a Discord Bot in Rust</h1>
<p>Discord is an instant messaging platform with more than 150 million monthly active users. The main appeal seems to be a combination of the wide reach of platforms like StackOverflow and the immediacy of instant messaging. One of the best features of Discord is the seemingly unlimited hackability of the platform via its “Bot” system, which allows developers to add functionality to Discord communities by writing software that interacts with the platform much the way people do. I want to give you an introduction to doing that in Rust.</p>
<img src="/images/flat_ferris.png" alt="A Flat Render of Rust&#x27;s Mascot, Ferris">
<h2>Interaction</h2>
<p>Most interaction with Discord bots happens via commands, not dissimilar to terminal applications. Commands may look like <code>!play Eat it by Weird Al</code>.</p>
<img src="/images/ping_pong.png" alt="Ping! Pong!">
<p>The bot we are going to make now will simply respond to <code>!ping</code> with "Pong!".</p>
<h2>Template</h2>
<p>A <a href="https://github.com/elijah-potter/discord-bot-template">cargo generate template</a> is available if you would like to skip the setup below.</p>
<h2>Setup the Project</h2>
<p>From a working Rust environment, use cargo to create a new project, e.g.:</p>
<pre><code class="hljs language-bash">cargo new tutorial-bot
</code></pre>
<p>Next, we have to add <a href="https://github.com/serenity-rs/serenity">Serenity</a>, the library for creating Discord bots in Rust. We also have to drop in Tokio, because Serenity takes advantage of its async runtime.</p>
<p>You can do this either via cargo-edit:</p>
<pre><code class="hljs language-bash">cargo add serenity
cargo add tokio --features full
</code></pre>
<p>or by just adding them to <code>Cargo.toml</code>:</p>
<pre><code class="hljs language-toml">[dependencies]
serenity = "0.10.5"
tokio = { version = "1.5.0", features = ["full"] }
</code></pre>
<h2>Setting up the Standard Framework</h2>
<p>Serenity has a lot of flexibility. You have access to an event handler that allows fine-grained control over events. You also have access to a standard framework that makes it ridiculously easy to respond to commands.
Before we do anything else, we have to make our main function async. Doing so is as simple as replacing it with:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[tokio::main]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
}
</code></pre>
<p>First, we want to get our bot token into the program. In a real bot, please obtain it via an environment variable or some other secure method; we are only hard-coding it here for simplicity.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">token</span> = <span class="hljs-string">"{your bot token}"</span>;
</code></pre>
<p>If you do not know how to get a bot token, please follow <a href="https://www.getdroidtips.com/discord-bot-token/">this tutorial</a>.</p>
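<p>For instance, a sketch of the environment-variable approach (the variable name <code>DISCORD_TOKEN</code> is my own choice here, not something Serenity requires):</p>

```rust
use std::env;

// Read the bot token from the environment instead of hard-coding it.
// Returns None when the variable is not set.
fn read_token() -> Option<String> {
    env::var("DISCORD_TOKEN").ok()
}

fn main() {
    match read_token() {
        // In the bot, this value replaces the hard-coded `token` string.
        Some(token) => println!("token loaded ({} characters)", token.len()),
        None => eprintln!("DISCORD_TOKEN is not set"),
    }
}
```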
<p>The Serenity Standard Framework splits your bot's commands into groups. Each group can have multiple commands. For example, a bot might have two groups: one focused on fun and one focused on math. The former has commands like <code>!meme</code>, while the latter may have various math functions, like <code>!sin</code>. This is also how we will add commands to our bot.</p>
<p>First, add the needed structs and macros to the file:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> serenity::{Client, client::Context, framework::{StandardFramework, standard::{CommandResult, macros::{group, command}}}, model::channel::Message};
</code></pre>
<p>Next, create a struct that we will attach our commands to:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands()]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
</code></pre>
<p>As we add commands, we will list them inside the <code>commands</code> sub-macro.</p>
<p>Create an instance of <code>StandardFramework</code> and add our group to it. We can also configure our command prefix now.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">framework</span> = StandardFramework::<span class="hljs-title function_ invoke__">new</span>()
    .<span class="hljs-title function_ invoke__">configure</span>(|c|{
        c.<span class="hljs-title function_ invoke__">prefix</span>(<span class="hljs-string">"!"</span>)
    })
    .<span class="hljs-title function_ invoke__">group</span>(&#x26;HELLOWORLD_GROUP);
</code></pre>
<p>Notice that we used a reference to a static struct called <code>HELLOWORLD_GROUP</code> instead of just adding our group directly. This static is generated by the <code>#[group]</code> macro.</p>
<p>Now that we have created our framework, we have to attach it to a Discord client.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">client</span> = Client::<span class="hljs-title function_ invoke__">builder</span>(token).<span class="hljs-title function_ invoke__">framework</span>(framework).<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"Could not start Discord"</span>);
</code></pre>
<p>Start it.</p>
<pre><code class="hljs language-rust">client.<span class="hljs-title function_ invoke__">start</span>().<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"The bot stopped"</span>);
</code></pre>
<h2>Adding the Command</h2>
<p>Now that we have the framework set up, let's add a command.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[command]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">ping</span>(ctx: &#x26;Context, msg: &#x26;Message) <span class="hljs-punctuation">-></span> CommandResult{
    <span class="hljs-comment">// We will fill this in next; returning Ok(()) keeps it compiling.</span>
    <span class="hljs-title function_ invoke__">Ok</span>(())
}
</code></pre>
<p>This command takes the Discord client's <code>Context</code> and the message that contains the command, and returns a <code>CommandResult</code>.
Do not forget to add the command to the group:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands(ping)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
</code></pre>
<p>If you don't, the command will simply never run.</p>
<p>We want the bot to reply to the <code>!ping</code> command with "Pong!", so let's add that to the body of the <code>ping</code> function:</p>
<pre><code class="hljs language-rust">msg.<span class="hljs-title function_ invoke__">reply</span>(ctx, <span class="hljs-string">"Pong!"</span>).<span class="hljs-keyword">await</span>?;
<span class="hljs-title function_ invoke__">Ok</span>(())
</code></pre>
<h2>Full Code</h2>
<p>Here is the full code for the bot:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> serenity::{Client, client::Context, framework::{StandardFramework, standard::{CommandResult, macros::{group, command}}}, model::channel::Message};
<span class="hljs-meta">#[tokio::main]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">token</span> = <span class="hljs-string">"Your bot token"</span>;
    <span class="hljs-keyword">let</span> <span class="hljs-variable">framework</span> = StandardFramework::<span class="hljs-title function_ invoke__">new</span>()
    .<span class="hljs-title function_ invoke__">configure</span>(|c|{
        c.<span class="hljs-title function_ invoke__">prefix</span>(<span class="hljs-string">"!"</span>)
    })
    .<span class="hljs-title function_ invoke__">group</span>(&#x26;HELLOWORLD_GROUP);
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">client</span> = Client::<span class="hljs-title function_ invoke__">builder</span>(token).<span class="hljs-title function_ invoke__">framework</span>(framework).<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"Could not start Discord"</span>);
    client.<span class="hljs-title function_ invoke__">start</span>().<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"The bot stopped"</span>);
}
<span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands(ping)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
<span class="hljs-meta">#[command]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">ping</span>(ctx: &#x26;Context, msg: &#x26;Message) <span class="hljs-punctuation">-></span> CommandResult{
    msg.<span class="hljs-title function_ invoke__">reply</span>(ctx, <span class="hljs-string">"Pong!"</span>).<span class="hljs-keyword">await</span>?;
    <span class="hljs-title function_ invoke__">Ok</span>(())
}
</code></pre>
<h2>That's it</h2>
<p>If you build and run your app, you should have a functioning Discord bot!
Serenity is an amazing crate and an absolute joy to work with. I hope you learned something. There is a ton more that I did not cover here. Feel free to look at the <a href="https://docs.rs/serenity/latest/serenity/">Serenity docs</a> and examples to learn more!</p>]]></description>
            <content:encoded><![CDATA[<h1>How to Write a Discord Bot in Rust</h1>
<p>Discord is an instant messaging platform with more than 150 million monthly active users. The main appeal seems to be a combination of the wide reach of platforms like StackOverflow and the immediacy of instant messaging. One of the best features of Discord is the seemingly unlimited hackability of the platform via its “Bot” system, which allows developers to add functionality to Discord communities by writing software that interacts with the platform much the way people do. I want to give you an introduction to doing that in Rust.</p>
<img src="/images/flat_ferris.png" alt="A Flat Render of Rust&#x27;s Mascot, Ferris">
<h2>Interaction</h2>
<p>Most interaction with Discord bots happens via commands, not dissimilar to terminal applications. Commands may look like <code>!play Eat it by Weird Al</code>.</p>
<img src="/images/ping_pong.png" alt="Ping! Pong!">
<p>The bot we are going to make now will simply respond to <code>!ping</code> with "Pong!".</p>
<h2>Template</h2>
<p>A <a href="https://github.com/elijah-potter/discord-bot-template">cargo generate template</a> is available if you would like to skip the setup below.</p>
<h2>Setup the Project</h2>
<p>From a working Rust environment, use cargo to create a new project, e.g.:</p>
<pre><code class="hljs language-bash">cargo new tutorial-bot
</code></pre>
<p>Next, we have to add <a href="https://github.com/serenity-rs/serenity">Serenity</a>, the library for creating Discord bots in Rust. We also have to drop in Tokio, because Serenity takes advantage of its async runtime.</p>
<p>You can do this either via cargo-edit:</p>
<pre><code class="hljs language-bash">cargo add serenity
cargo add tokio --features full
</code></pre>
<p>or by just adding them to <code>Cargo.toml</code>:</p>
<pre><code class="hljs language-toml">[dependencies]
serenity = "0.10.5"
tokio = { version = "1.5.0", features = ["full"] }
</code></pre>
<h2>Setting up the Standard Framework</h2>
<p>Serenity has a lot of flexibility. You have access to an event handler that allows fine-grained control over events. You also have access to a standard framework that makes it ridiculously easy to respond to commands.
Before we do anything else, we have to make our main function async. Doing so is as simple as replacing it with:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[tokio::main]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
}
</code></pre>
<p>First, we want to get our bot token into the program. In a real bot, please obtain it via an environment variable or some other secure method; we are only hard-coding it here for simplicity.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">token</span> = <span class="hljs-string">"{your bot token}"</span>;
</code></pre>
<p>If you do not know how to get a bot token, please follow <a href="https://www.getdroidtips.com/discord-bot-token/">this tutorial</a>.</p>
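<p>For instance, a sketch of the environment-variable approach (the variable name <code>DISCORD_TOKEN</code> is my own choice here, not something Serenity requires):</p>

```rust
use std::env;

// Read the bot token from the environment instead of hard-coding it.
// Returns None when the variable is not set.
fn read_token() -> Option<String> {
    env::var("DISCORD_TOKEN").ok()
}

fn main() {
    match read_token() {
        // In the bot, this value replaces the hard-coded `token` string.
        Some(token) => println!("token loaded ({} characters)", token.len()),
        None => eprintln!("DISCORD_TOKEN is not set"),
    }
}
```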
<p>The Serenity Standard Framework splits your bot's commands into groups. Each group can have multiple commands. For example, a bot might have two groups: one focused on fun and one focused on math. The former has commands like <code>!meme</code>, while the latter may have various math functions, like <code>!sin</code>. This is also how we will add commands to our bot.</p>
<p>First, add the needed structs and macros to the file:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> serenity::{Client, client::Context, framework::{StandardFramework, standard::{CommandResult, macros::{group, command}}}, model::channel::Message};
</code></pre>
<p>Next, create a struct that we will attach our commands to:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands()]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
</code></pre>
<p>As we add commands, we will list them inside the <code>commands</code> sub-macro.</p>
<p>Create an instance of <code>StandardFramework</code> and add our group to it. We can also configure our command prefix now.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-variable">framework</span> = StandardFramework::<span class="hljs-title function_ invoke__">new</span>()
    .<span class="hljs-title function_ invoke__">configure</span>(|c|{
        c.<span class="hljs-title function_ invoke__">prefix</span>(<span class="hljs-string">"!"</span>)
    })
    .<span class="hljs-title function_ invoke__">group</span>(&#x26;HELLOWORLD_GROUP);
</code></pre>
<p>Notice that we used a reference to a static struct called <code>HELLOWORLD_GROUP</code> instead of just adding our group directly. This static is generated by the <code>#[group]</code> macro.</p>
<p>Now that we have created our framework, we have to attach it to a Discord client.</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">client</span> = Client::<span class="hljs-title function_ invoke__">builder</span>(token).<span class="hljs-title function_ invoke__">framework</span>(framework).<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"Could not start Discord"</span>);
</code></pre>
<p>Start it.</p>
<pre><code class="hljs language-rust">client.<span class="hljs-title function_ invoke__">start</span>().<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"The bot stopped"</span>);
</code></pre>
<h2>Adding the Command</h2>
<p>Now that we have the framework set up, let's add a command.</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[command]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">ping</span>(ctx: &#x26;Context, msg: &#x26;Message) <span class="hljs-punctuation">-></span> CommandResult{
    <span class="hljs-comment">// We will fill this in next; returning Ok(()) keeps it compiling.</span>
    <span class="hljs-title function_ invoke__">Ok</span>(())
}
</code></pre>
<p>This command takes the Discord client's <code>Context</code> and the message that contains the command, and returns a <code>CommandResult</code>.
Do not forget to add the command to the group:</p>
<pre><code class="hljs language-rust"><span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands(ping)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
</code></pre>
<p>If you don't, the command will simply never run.</p>
<p>We want the bot to reply to the <code>!ping</code> command with "Pong!", so let's add that to the body of the <code>ping</code> function:</p>
<pre><code class="hljs language-rust">msg.<span class="hljs-title function_ invoke__">reply</span>(ctx, <span class="hljs-string">"Pong!"</span>).<span class="hljs-keyword">await</span>?;
<span class="hljs-title function_ invoke__">Ok</span>(())
</code></pre>
<h2>Full Code</h2>
<p>Here is the full code for the bot:</p>
<pre><code class="hljs language-rust"><span class="hljs-keyword">use</span> serenity::{Client, client::Context, framework::{StandardFramework, standard::{CommandResult, macros::{group, command}}}, model::channel::Message};
<span class="hljs-meta">#[tokio::main]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">main</span>() {
    <span class="hljs-keyword">let</span> <span class="hljs-variable">token</span> = <span class="hljs-string">"Your bot token"</span>;
    <span class="hljs-keyword">let</span> <span class="hljs-variable">framework</span> = StandardFramework::<span class="hljs-title function_ invoke__">new</span>()
    .<span class="hljs-title function_ invoke__">configure</span>(|c|{
        c.<span class="hljs-title function_ invoke__">prefix</span>(<span class="hljs-string">"!"</span>)
    })
    .<span class="hljs-title function_ invoke__">group</span>(&#x26;HELLOWORLD_GROUP);
    <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut </span><span class="hljs-variable">client</span> = Client::<span class="hljs-title function_ invoke__">builder</span>(token).<span class="hljs-title function_ invoke__">framework</span>(framework).<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"Could not start Discord"</span>);
    client.<span class="hljs-title function_ invoke__">start</span>().<span class="hljs-keyword">await</span>.<span class="hljs-title function_ invoke__">expect</span>(<span class="hljs-string">"The bot stopped"</span>);
}
<span class="hljs-meta">#[group]</span>
<span class="hljs-meta">#[commands(ping)]</span>
<span class="hljs-keyword">struct</span> <span class="hljs-title class_">HelloWorld</span>;
<span class="hljs-meta">#[command]</span>
<span class="hljs-keyword">async</span> <span class="hljs-keyword">fn</span> <span class="hljs-title function_">ping</span>(ctx: &#x26;Context, msg: &#x26;Message) <span class="hljs-punctuation">-></span> CommandResult{
    msg.<span class="hljs-title function_ invoke__">reply</span>(ctx, <span class="hljs-string">"Pong!"</span>).<span class="hljs-keyword">await</span>?;
    <span class="hljs-title function_ invoke__">Ok</span>(())
}
</code></pre>
<h2>That's it</h2>
<p>If you build and run your app, you should have a functioning Discord bot!
Serenity is an amazing crate and an absolute joy to work with. I hope you learned something. There is a ton more that I did not cover here. Feel free to look at the <a href="https://docs.rs/serenity/latest/serenity/">Serenity docs</a> and examples to learn more!</p>]]></content:encoded>
            <enclosure url="https://elijahpotter.dev/images/flat_ferris.png" length="0" type="image/png"/>
        </item>
    </channel>
</rss>