Linkfest #6
Every couple of weeks I pick a list of links that felt worth my time. Culture, software, management, the strange corners of the internet. Things that made me pause.
This time there is a clear theme: structure. We look at Markdown and why it refuses to stay simple, at how oxygen once rewired the planet, at what AI might be doing to scientific training, at the hidden checks behind ChatGPT's input box, at the real cost of running your own models, at a hackathon where domain experts built software in days, at seafoam green control rooms, and at a Chinese AI framework modeled after imperial bureaucracy. Different fields, same question: what happens when a simple tool grows into infrastructure?
Enjoy! -- Christoph (CTO @ Basilicom)
The Machines Are Fine. I'm Worried About Us.
https://ergosphere.blog/posts/the-machines-are-fine/
This essay imagines two PhD students. One does the hard work herself. The other uses an AI agent for reading, coding and writing. Both publish a paper. On paper they look identical. The difference is invisible: one built intuition, the other shipped output. The author argues that the real risk of AI in science is not bad results, but hollowed-out training. If students outsource the struggle, they may never develop the instinct that lets them judge when a model is wrong.
I think this hits a nerve. Tools are not the problem. Incentives are. If institutions reward output over understanding, people will optimize for output. The piece is worth reading because it shifts the focus from model capability to human formation.
Why So Many Control Rooms Were Seafoam Green
https://bethmathews.substack.com/p/why-so-many-control-rooms-were-seafoam
An exploration of industrial color theory through the Manhattan Project. The author traces the seafoam green used in control rooms back to Faber Birren, a color theorist who helped standardize safety colors in the 1940s. Light green was meant to reduce eye fatigue and create a calm, non-distracting environment in high-risk spaces. The piece connects design decisions to psychology and industrial safety.
This is the kind of niche history I enjoy. It shows how even something as simple as wall paint can be part of a larger system of risk management. Design is rarely neutral.
The Lawyer Who Won
https://hadleylab.org/blogs/2026-03-22-the-lawyer-who-won/
A reflection on an Anthropic hackathon where non-programmers built working AI tools. A lawyer created a permit assistant, a cardiologist built a patient follow-up system, a road technician built an appraisal pipeline. The takeaway is that domain expertise now matters more than coding skill. But the author adds a twist: demos are not products. Without governance, audit trails and long-term maintenance, many AI apps will fail once they leave the hackathon stage.
I like this balanced view. It is easy to celebrate democratization. It is harder to think about compliance, liability and maintenance. Software that touches real-world processes needs more than a clever prompt.
OpenClaw Emperors
https://www.chinatalk.media/p/taking-the-throne-as-openclaw-emperors
A vivid essay about China's latest AI craze. From aluminum pots in the 1990s Qigong fever to lobster claw headbands in 2026, the author draws parallels between spiritual and technological manias. The focus is an open source multi-agent framework modeled on the Tang dynasty's Three Departments and Six Ministries. Instead of flat agent chats, it enforces a strict bureaucratic structure with planning, auditing and execution roles.
It is both funny and sharp. The historical analogy works because governance is the hidden layer of AI systems. As agents grow more capable, the real question becomes how we design their institutions. The emperor fantasy is appealing. The bureaucracy is unavoidable.
Why the Heck Are We Still Using Markdown?
https://bgslabs.org/blog/why-are-we-using-markdown/
This is a long, slightly chaotic rant about Markdown. The author argues that Markdown started as a minimal text-to-HTML convention, but we now treat it like a programming language without giving it real structure. Multiple ways to write bold text, inline HTML everywhere, footnotes that change parsing rules, endless extensions, and security issues from half-baked parsers. The core point is simple: we want a lightweight markup tool, but we keep piling features on top until it behaves like a fragile compiler.
I share some of the frustration. Markdown works because it is small and readable. The moment you bolt on math, diagrams, custom components and executable snippets, you are building a build system in disguise. The piece is interesting because it shows how technical debt can hide inside something that feels simple.
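The ambiguity is easy to see on paper. A hedged illustration (behavior varies by dialect; the CommonMark cases noted below are the ones I'm confident about):

```markdown
**bold** and __bold__ and <b>bold</b>   <!-- three spellings, one rendering -->

a*b*c   <!-- CommonMark emphasizes the "b" -->
a_b_c   <!-- CommonMark does not; underscores inside words are literal -->
```

Two markers, inline HTML, and different intra-word rules per marker: every parser has to agree on all of it, and historically they have not.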
Markdown Ate the World
https://matduggan.com/markdown-ate-the-world/
A nostalgic tour from WordPerfect and fragile .doc files to the rise of .docx and finally Markdown. The author recalls the corruption nightmares of early Word formats and the massive complexity of OOXML. Markdown won not because it is powerful, but because it is simple, portable and almost impossible to break. Plain text survives where complex containers fail.
This pairs nicely with the earlier Markdown critique. One complains about feature creep, the other celebrates minimalism. Both circle the same truth: durability often beats capability.
The Oxygen Apocalypse: How Symmetry Saved Us
https://keiran-rowell.github.io/oxygen/2026-04-02-the-oxygen-apocalypse/
A deep dive into the Great Oxidation Event. The author explains how early cyanobacteria learned to split water using a manganese cluster, releasing oxygen into an atmosphere that had never seen it. Oxygen, normally restrained by quantum spin rules, slowly poisoned ancient life, rusted the oceans, triggered climate shifts, and eventually enabled complex organisms. The essay mixes chemistry, geology and a bit of drama to show how one molecular trick changed everything.
It is a reminder that what feels stable today is often the result of a past catastrophe. Oxygen is both fuel and poison. The balance is fragile. Reading this makes current tech revolutions feel less unique. Life has been disrupted before, at planetary scale.
ChatGPT Won't Let You Type Until Cloudflare Reads Your React State
A detailed reverse engineering of Cloudflare Turnstile as used by ChatGPT. The author decrypted hundreds of fingerprinting programs and found that they check not only browser properties like GPU and screen size, but also React application state such as router context and loader data. In short, it verifies that you are running a fully booted ChatGPT app, not just a spoofed browser. The encryption, based on XOR keys embedded in the payload, hides complexity but is not cryptographically strong.
This is fascinating because it shows how bot detection has moved up the stack. It is no longer enough to fake a browser. You have to fake an entire application runtime. It also shows how much infrastructure sits between you and a simple text box.
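The "hides complexity but is not cryptographically strong" point is worth spelling out. A minimal sketch of why repeating-key XOR is obfuscation rather than encryption (the key and payload below are hypothetical, not taken from Turnstile): the operation is its own inverse, so anyone who extracts the embedded key can decode the payload with one line of code.

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key. Applying it twice
    with the same key returns the original bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"k3y"  # hypothetical key embedded in the payload itself
obfuscated = xor_bytes(b"fingerprint-program", key)

# Decoding is just re-applying the same function:
assert xor_bytes(obfuscated, key) == b"fingerprint-program"
```

Since the key ships inside the payload, the scheme only raises the cost of casual inspection, which is exactly what the author exploited.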
Self-Hosted LLM Costs: Complete 2026 Pricing Guide
https://www.sitepoint.com/self-hosted-llm-costs-2026/
A practical breakdown of what it costs to run your own large models in 2026. Cloud GPUs, dedicated servers, colocation, electricity, staffing, amortization. The article includes formulas for total cost of ownership and compares self-hosting to API pricing at different token volumes. The conclusion is not ideological. At high sustained volume, self-hosting can beat frontier APIs. Against cheap open-model APIs, the break-even point is much higher.
This is useful because it replaces vibes with math. Running your own models sounds empowering, but power bills and salaries exist. Decisions here are less about belief in open source and more about workload shape and risk tolerance.
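The break-even logic reduces to one division. A toy sketch with invented numbers (all figures below are placeholders, not the article's): if self-hosting is mostly fixed cost and the API charges per token, the crossover is fixed cost divided by API price.

```python
# Hypothetical monthly figures for illustration only.
gpu_server_per_month = 2500.0   # amortized hardware, colo, power
staff_per_month = 4000.0        # fraction of an engineer's time
self_hosted_fixed = gpu_server_per_month + staff_per_month

api_price_per_m_tokens = 3.0    # blended $/million tokens

def break_even_tokens_per_month() -> float:
    """Monthly token volume above which self-hosting wins, assuming
    its marginal cost per token is negligible next to the fixed cost."""
    return self_hosted_fixed / api_price_per_m_tokens * 1_000_000

print(f"break-even: {break_even_tokens_per_month():,.0f} tokens/month")
```

With these placeholder numbers the crossover sits around two billion tokens a month against a $3 API, and drops fast against pricier frontier APIs. That is the article's point: the answer is workload shape, not ideology.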
Is the Future of AI Local?
https://tombedor.dev/open-source-models/
An argument that the AI buildout story is missing a third path. Instead of endless datacenter expansion or stalled adoption, open source models running locally could become dominant. The author points to shrinking performance gaps, rising API prices, specialized small models, and Apple's bet on powerful local hardware. The appeal is simple: fast, private, and free.
I am not fully convinced, but the scenario is plausible. If local hardware keeps improving and open models stay close to frontier performance, cost and privacy could tip the balance for many use cases.
I love your feedback! If you've got a comment, want to discuss one of the items, or want to suggest something interesting for the next edition of the Linkfest, please reach out.
Christoph Lühr - CTO
christoph.luehr@basilicom.de