-
On Measuring Strategic Work
KPIs are simple. That is their virtue and their failure mode. The tension is real: simple becomes simplistic, complex becomes complicated. The discipline is staying in the useful middle between the two.
-
On Critical Thinking in a Multi-Model World
This is not a post about prompting. It is a post about thinking. The techniques I use with AI models are not new. Adversarial analysis, dialectical argument, Socratic questioning, pre-mortems, steelmanning, red teaming: these are critical thinking disciplines that predate computers.
-
On a Year of Multi-Model Assisted Development
In February 2025, Claude Code launched and I switched back to CLI and git. Within months, Codex CLI and Gemini CLI followed. The question shifted from “which model?” to “how do I use all of them together?” Here is what a year of that looks like.
-
GPU Failure Rates and the Vocabulary Problem
The number is always wrong in the same way. Burn-in failure rates are 3–8%. Total manufacturing attrition is 15–25%. In-service failure runs ~9% per year. Three different numbers measuring three different things.
-
What Jensen Actually Said at GTC 2026, and What It Means
I attended the GTC 2026 keynote in person. At the highest level, the keynote made explicit what had been implicit rather than announcing anything fundamentally new. Here is what I think matters most, and why.
-
The Cognitive Root: Compensatory Control and Ambiguity Intolerance
Part 12 in the series. The hardest skill in engineering is not technical. It’s emotional. Most people cannot sit with not knowing the full state of things. The desire for control is the root of most bad architecture — in software, in organizations, and in life.
-
Agents Are Agents: A Computer Science Reading List — 53 Years in the Making
A companion reading list to Agents Are Agents (Part 11). The AI agent discourse treats ‘agent’ as if it were coined last year. It wasn’t. 21 papers from Hewitt’s Actor model (1973) through Erlang, BDI, SmartOS, and MCP to Claude Code. The substrate changed. The architecture didn’t.
-
Agents Are Agents
Part 11 in the series. The AI agent discourse acts like ‘agent’ is a new concept invented by the LLM era. It wasn’t. Erlang called them agents in 1986. The architecture requirements haven’t changed. If an AI agent is a problem in your system, your system has problems.
-
The Hammer Problem
Part 10 in the series. Previously: Computational Strategy (Part 9), Model Eats Software (Part 8), On Keeping AI in the Critical Path (Part 7), The Confident Incompetence Problem (Part 6), The Disintermediation Principle (Part 5), Zen of Unix Tools (Part 4). You want to build a home. But then you start building a house, which…
-
On Computational Strategy: The Transition from Narrative to Computation
Part 9 in the series. Previously: Model Eats the Software (Part 8), On Keeping AI in the Critical Path (Part 7), Confidence All the Way Down (Part 6b), The Confident Incompetence Problem (Part 6), The Disintermediation Principle (Part 5), Zen of Unix Tools (Part 4). The Transition Strategy has been narrative. Consultants build slide decks.…