Linkfest #4

Every couple of weeks I pick a list of links that felt worth my time. Culture, software, management, the strange corners of the internet. Things that made me pause.

This round moves from RSS readers and curated PHP blogs to AI code rewrites and simple logic tests that stump large models. There is a sharp look at why simplicity rarely gets rewarded, a careful takedown of the "Europe is stagnating" story, and a long, playful walk through 1,000 years of English. We also get a sober take on AI panic and a useful model for why online communities keep splitting instead of merging.

Enjoy! -- Christoph (CTO @ Basilicom)

The View From RSS

https://www.carolinecrampton.com/the-view-from-rss/

Caroline Crampton reads nearly the whole web through 2,000 RSS feeds and almost never visits homepages. In her reader, everything is chronological, stripped down, and mixed together. That gives her a different view of online publishing. She sees the SEO filler, the affiliate bait, the AI summaries written for other AI summaries. She also sees the messy human side: draft headlines, hidden Substack threads, small experiments made just for RSS subscribers.

I like this piece because it shows how much the shape of a tool changes what the web looks like. RSS feels old, but it gives you leverage. It removes ranking games and puts you back in control. In a time when feeds are tuned for engagement, that feels quietly radical.
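The core of that reader view is simple to sketch: pull items from several feeds, merge them into one chronological stream, and drop all ranking. A minimal illustration in Python (the feed contents here are invented; a real reader would fetch the XML over HTTP):

```python
# Merge items from multiple RSS feeds into a single chronological
# timeline, the way a stripped-down feed reader presents them.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Invented sample feeds standing in for fetched RSS documents.
FEEDS = [
    """<rss><channel><title>Blog A</title>
        <item><title>Post A1</title>
              <pubDate>Mon, 02 Mar 2026 09:00:00 +0000</pubDate></item>
    </channel></rss>""",
    """<rss><channel><title>Blog B</title>
        <item><title>Post B1</title>
              <pubDate>Tue, 03 Mar 2026 08:30:00 +0000</pubDate></item>
        <item><title>Post B2</title>
              <pubDate>Sun, 01 Mar 2026 18:00:00 +0000</pubDate></item>
    </channel></rss>""",
]

def merged_timeline(feeds):
    items = []
    for xml in feeds:
        channel = ET.fromstring(xml).find("channel")
        source = channel.findtext("title")
        for item in channel.iter("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "published": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest first, regardless of which feed an item came from:
    # no ranking, no engagement signals, just time.
    return sorted(items, key=lambda i: i["published"], reverse=True)

for entry in merged_timeline(FEEDS):
    print(entry["published"].date(), entry["source"], "-", entry["title"])
```

That `sorted` call is the whole editorial policy, which is exactly the point of the piece.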

AI And The Ship of Theseus

https://lucumr.pocoo.org/2026/3/5/theseus/

Armin Ronacher looks at what happens when an open source library gets reimplemented with AI, using only the API and test suite as a guide. The result behaves the same but is written from scratch, sometimes faster and with a different design. That raises licensing questions. Is it a derived work or a new one? And if AI generated most of it, is it even protected by copyright?

This is not just a legal curiosity. If code can be cheaply reimplemented from tests, copyleft loses some of its force. So might proprietary advantages. The fights around this will be messy, and few people will want a court to set a clear precedent. It feels like an early signal of deeper shifts in how software ownership works.

Nobody Gets Promoted for Simplicity

https://terriblesoftware.org/2026/03/03/nobody-gets-promoted-for-simplicity/

This short essay argues that organizations reward visible complexity. Shipping a big framework, a new architecture, or a bold rewrite looks impressive. Quietly removing code, cutting scope, or choosing the boring option does not. Over time, that incentive leads to bloated systems that are hard to maintain.

It is uncomfortable because it is familiar. In many teams, the cleanest solution is also the least career friendly. If AI now makes it easier to produce large amounts of code, this bias toward complexity could get worse. That makes cultural guardrails more important, not less.

PHP Reads - Curated PHP writing worth your time

https://phpreads.com/why

PHP Reads is a curated project that picks three thoughtful PHP articles each week. The founders argue that blogging is coming back, but signal is buried under AI generated noise. Their answer is manual selection, signed recommendations, no ads, no tracking, and soon an RSS feed and plain text newsletter.

This is a modest idea, but it points in the right direction. In a world of infinite content, curation becomes infrastructure. It also pairs nicely with the RSS piece above. We may not need more content. We need better filters. Or just keep reading curated content like my Linkfest!

European Economies Are Not Stagnating

https://jacobin.com/2026/03/us-europe-comparative-productivity-statistics

This deep dive challenges the common claim that Europe has fallen far behind the US in GDP and productivity. The key is how purchasing power parity data is constructed. The widely cited constant price series extrapolates backwards using national growth rates that are not fully comparable. An alternative dataset suggests Europe has kept pace more closely than the narrative implies.

It is a technical argument, but an important one. Big stories about decline or dominance often rest on fragile statistical choices. Before we redesign whole economies around a headline, it is worth checking the footnotes.

Car Wash Test on 53 leading AI models

https://opper.ai/blog/car-wash-test

Opper tested 53 AI models with a simple question: if the car wash is 50 meters away and you want to wash your car, should you walk or drive? Most models said walk. Only a handful consistently answered drive, and even some top tier models failed several times out of ten. A human baseline of 10,000 people scored about 71 percent correct.

It is a toy problem, but it shows how brittle reasoning can be. Many models latch onto the "short distance equals walk" pattern and miss the basic constraint that the car must be present. In production systems, this kind of flakiness matters. Reliability, not just capability, is the real test.
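The "failed several times out of ten" finding is the interesting part: you only see it by asking the same question repeatedly. A minimal reliability harness might look like this, with a random stub standing in for a real LLM call (the stub and its probabilities are invented for illustration, not Opper's methodology):

```python
# Measure answer flakiness by repeating one question many times.
import random
from collections import Counter

def flaky_model(question, rng):
    # Stub in place of a real LLM call: biased toward the pattern
    # "short distance => walk", only sometimes noticing that the car
    # has to be at the wash. Weights are invented.
    return rng.choices(["walk", "drive"], weights=[0.7, 0.3])[0]

def reliability(model, question, expected, trials=10, seed=42):
    # Ask the same question `trials` times and report the pass rate.
    # A model that is capable but only sometimes correct is the kind
    # of flakiness that matters in production systems.
    rng = random.Random(seed)
    answers = Counter(model(question, rng) for _ in range(trials))
    return answers[expected] / trials, answers

rate, answers = reliability(
    flaky_model,
    "The car wash is 50 m away and you want to wash your car. Walk or drive?",
    expected="drive",
)
print(f"pass rate: {rate:.0%}, answers: {dict(answers)}")
```

A single correct answer tells you little; the pass rate over repeated trials is what separates capability from reliability.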

How far back in time can you understand English?

https://www.deadlanguagesociety.com/p/how-far-back-in-time-understand-english

This playful essay walks a fictional blog post back from modern English to the year 1000, shifting spelling, grammar and vocabulary century by century. At some point most readers hit a wall. The piece then explains what changed, from the loss of inflections to the flood of French and Latin words.

It is a reminder that language drift is slow until it is not. What feels stable over decades looks alien over centuries. That perspective helps when we talk about how AI might change writing or coding. Change accumulates quietly before it becomes obvious.

You are not left behind

https://www.ufried.com/blog/not_left_behind/

Uwe Friedrichsen pushes back against the constant warning that if you do not master AI right now, you will be left behind. He argues that current tools are immature and full of quirks. Much of today's hard won prompt knowledge may be obsolete soon. The real risk is not missing a trick, but missing the inflection point where the market truly shifts.

This is a useful counterweight to the hype cycle. Panic leads to bad decisions. Ignoring change is risky too. The balanced position is harder, but usually more sustainable.

The online community trilemma

https://pluralistic.net/2026/02/16/fast-good-cheap/

Cory Doctorow highlights research on why similar online communities keep forming instead of merging into one big group. The authors describe a trilemma between scale, trust and useful information. Any one community can usually satisfy two of these, but not all three. So people join multiple overlapping spaces.

This model explains a lot of internet history, from Usenet splits to Reddit subgroups. It also suggests that fragmentation is not always a bug. Sometimes it is how people manage tradeoffs. As platforms centralize and chase scale, remembering that limit matters.

THE Future of Software Development

https://martinfowler.com/fragments/2026-02-18.html

Martin Fowler shares notes from a Thoughtworks retreat on the future of software development with AI. Themes include a new "middle loop" of supervisory engineering, risk tiering, and the role of TDD as a way to guide LLMs. There is also an undercurrent of uncertainty. No one has it fully figured out yet.

I appreciate the tone here. Less manifesto, more questions. The idea that good tests and healthy code bases make AI more reliable feels grounded. It suggests that solid engineering practice still matters, maybe more than ever.
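One concrete way to read the "TDD guiding LLMs" theme: the tests define the contract, and any implementation, hand-written or generated, is acceptable only if it passes them. A toy sketch (`slugify` is an invented example, not something from Fowler's notes):

```python
# Toy illustration of tests as a contract for AI-assisted coding:
# the assertions pin the behavior we want; the implementation is the
# part you might delegate to an LLM and then verify.
import re

def slugify(title: str) -> str:
    # Candidate implementation (could be human- or AI-written).
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    assert slugify("The View From RSS") == "the-view-from-rss"
    assert slugify("  Hello,   World!  ") == "hello-world"
    assert slugify("PHP Reads #4") == "php-reads-4"

test_slugify()
print("all contract tests passed")
```

The tests carry the intent; the implementation is replaceable. That is also why a healthy existing test suite makes AI assistance less risky.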

I love your feedback! If you've got a comment, want to discuss one of the items, or even want to suggest something interesting to add to the next edition of the Linkfest, please reach out and contact me.