Linkfest #7

Every couple of weeks I send you a curated stack of Internet reading that stayed with me. Culture, technology, science, software. Things that made me pause and think.

This round is heavy on AI and engineering judgment, but from very different angles: a deep history lesson on Ada and what modern languages quietly borrowed, a call to merge design and engineering again, and a sharp note on "verification debt." There's also a long meditation on why today's AI systems feel both impressive and unreliable at the same time, a practical look at reading a codebase through git, a CLI that runs Apple's on‑device model locally, a personal story about building serious tooling with AI, and a surprising evolutionary story about how our eyes might have been lost and rebuilt hundreds of millions of years ago.

Enjoy! -- Christoph (CTO @ Basilicom)

The Quiet Colossus: Ada and the Language That Built the Languages

https://www.iqiipi.com/the-quiet-colossus.html

A long and careful essay on Ada, the language the US Department of Defense commissioned in the late 1970s. It walks through packages, generics, discriminated unions, strong typing, concurrency, and contracts, and shows how many of the ideas we now call modern were already there in 1983. Rust, Go, TypeScript, and C# are all converging on ground Ada covered decades ago.

I like this kind of historical reset. It cuts through the hype cycle and reminds us that language design is often rediscovery under new constraints. It also raises an uncomfortable question: if we already knew how to build safer systems languages, why did the industry choose differently for so long?

Design and Engineering, As One

https://matthiasott.com/articles/design-and-engineering-as-one

This piece traces the split between design and engineering back to industrial management theory, then argues that digital product teams inherited a model that no longer fits. Designers think in perception and narrative. Engineers think in structure and constraints. When those lenses only meet at handoff, translation losses pile up. The proposed fix is simple and hard: work in the real material early, prototype in the browser, and cultivate people who can hold both perspectives at once.

As an agency CTO, this hits home. Many delivery problems are not skill gaps but process artifacts. Fewer handoffs and more shared ownership usually beat better documentation. The argument for design engineering as a first-class role feels pragmatic, not romantic.

Verification Debt Is Your Next Headache

https://leadership.garden/verification-debt/

If AI makes every engineer 50 percent more productive at writing code, review and validation become the new bottleneck. This post calls the gap between generation speed and validation speed "verification debt." Green builds and clean diffs create a sense of progress, while understanding lags behind. The risk is not sloppy code but unexamined assumptions shipped at scale.

I think this is one of the sharper management takes on AI right now. Measuring output is easy. Measuring understanding is not. Leaders who keep celebrating velocity without investing in review capacity will eventually pay for it.

Our Modern Vision Evolved from an Ancient One-Eyed Worm Creature

https://theconversation.com/our-modern-vision-evolved-from-an-ancient-one-eyed-worm-creature-278120

Researchers propose that a worm-like ancestor first lost its paired steering eyes when it adopted a stationary lifestyle, then later re-evolved paired eyes from midline light-sensing structures when movement became important again. Some remnants of the old system may live on as the pineal organ.

Evolution stories are always provisional, but I enjoy how this reframes complexity. Losing capabilities can be adaptive. Regaining them can build on leftover parts. It is a useful metaphor for technology as well.

It Has Never Been About Code

https://www.ufried.com/blog/never_been_about_code/

This essay steps back and asks two questions. First, is writing third-generation languages really the best abstraction we have for telling computers what to do? Second, now that we have AI agents as a different kind of tool, does every solution still need to be traditional software? It contrasts deterministic, precise software with probabilistic, flexible AI systems and argues we should treat them as different tools with different trade-offs.

I appreciate the framing. Too many debates reduce to "AI is good" or "AI is bad." This piece instead asks where each tool fits. That is a more useful lens for architects making real decisions.

The Future of Everything Is Lies, I Guess

https://aphyr.com/posts/411-the-future-of-everything-is-lies-i-guess

Aphyr describes large language models as improv machines that constantly say "yes, and." They are impressive and unreliable at the same time. He explores hallucinations, fake reasoning traces, the jagged edge between competence and absurdity, and the cultural consequences of systems that sound authoritative while being wrong.

This is not a balanced take, and that is fine. It captures a feeling many engineers have: awe mixed with unease. Even if models improve, the social systems around them will struggle with trust and verification.

The Git Commands I Run Before Reading Any Code

https://piechowski.io/post/git-commands-before-reading-code/

Five simple git commands to surface churn hotspots, bug clusters, bus factor, velocity changes, and firefighting patterns before opening a single file. The idea is that commit history reveals where a codebase hurts.
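
The post's own five commands are worth reading in full; as a sketch of the genre (assuming a checkout and a POSIX shell, with my own choice of one-liners rather than the author's exact ones), they look like this:

```shell
# Churn hotspots: files touched by the most commits
git log --format= --name-only | sed '/^$/d' | sort | uniq -c | sort -rn | head -20

# Bug clusters: commits whose messages mention fixes (case-insensitive OR)
git log -i --oneline --grep=fix --grep=bug | head -20

# Bus factor: how concentrated authorship is
git shortlog -sn --no-merges HEAD | head -10

# Velocity changes: commit volume per month
git log --date=format:'%Y-%m' --format='%ad' | sort | uniq -c | tail -12

# Firefighting patterns: commits by day of week and hour
git log --date=format:'%u %H' --format='%ad' | sort | uniq -c | sort -rn | head -10
```

Each one runs in seconds and needs nothing beyond git and coreutils, which is exactly the appeal.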

This is practical and easy to adopt. I like tools that give you a first diagnostic view in minutes. It also connects nicely with the verification debt theme. History often tells you more than the current snapshot.

apfel: Apple Intelligence from the Command Line

https://github.com/Arthur-Ficial/apfel

Apfel wraps Apple's on-device foundation model in a CLI and an OpenAI-compatible local server. No API keys, no cloud, no subscriptions. It runs entirely on Apple Silicon hardware and exposes chat, streaming, tool calling, and JSON output.
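
Because the server speaks the standard OpenAI chat completions schema, any generic client works against it. A minimal sketch with nothing but the Python standard library (the port and model name here are my placeholders, not apfel's documented values; check its README for the real ones):

```python
import json
import urllib.request

def chat(prompt, base_url="http://localhost:11435/v1"):
    """Send one chat turn to an OpenAI-compatible local server
    and return the assistant's reply text."""
    payload = {
        "model": "apple-on-device",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice, message content
    return body["choices"][0]["message"]["content"]
```

The point is less the code than the absence of an API key anywhere in it.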

This feels like an early sign of a shift. When models are local and cheap, the design space changes. Privacy, latency, and cost constraints look different. It will be interesting to see what kinds of developer workflows emerge when AI is just another local UNIX tool.

Eight Years of Wanting, Three Months of Building with AI

https://lalitm.com/post/building-syntaqlite-ai/

A detailed account of building serious SQLite developer tooling with AI agents. The author describes an initial month of loose "vibe coding" that produced working but unmaintainable code, then a restart with tighter design control, heavy refactoring, and deliberate scaffolding. AI shines at implementation and repetitive rule generation. It struggles with architecture and API design.

This is the kind of grounded experience report we need more of. Not magic. Not doom. Just a clear look at where the leverage is and where judgment still matters most.

Before You Fire All Your Glue People Because of AI

https://cutlefish.substack.com/p/tbm-417-before-you-fire-all-your

This essay argues that the easy heuristic for AI adoption, "use AI to do what you know you should be doing," only works when the real barrier is time and friction. It breaks down when the missing behavior depends on trust, judgment, informal coordination, and people who connect teams. The so-called glue people often carry invisible context, political awareness, and timing. Replacing their outputs with AI-generated documents can preserve the artifacts while destroying the social system that made those artifacts useful.

I see this risk in real projects. The damage does not show up immediately. Things look fine for a quarter, then alignment drifts and nobody quite knows why. The piece is a good reminder that automation decisions are organizational decisions, not just tooling upgrades.

I love your feedback! If you've got a comment, want to discuss one of the items, or want to suggest something interesting for the next edition of the Linkfest, please reach out and contact me.