Linkfest #8

Every couple of weeks I send you a curated stack of Internet reading that stayed with me. Culture, technology, science, software. Things that made me pause and think.

This edition circles around AI from very different angles. OpenAI chasing goblins in its own reward signals. A forgotten art school experiment that reshaped creativity. NASA quietly choosing Flickr for the long haul. An autonomous research loop redesigning a CPU. The messy question of who owns AI-written code. CEOs losing their grip in agent dashboards. A game that lets Claude playtest itself. A programming language built for machines, not humans. The risks of anonymity when generative AI makes identity fluid. And a sharp take on how LLMs turn knowledge work into theater. It's a lot, but the common thread is control: who has it, who thinks they have it, and who actually does.

Enjoy! -- Christoph (CTO @ Basilicom)

Where the Goblins Came From

https://openai.com/index/where-the-goblins-came-from/

OpenAI investigates why GPT‑5 variants started overusing goblins, gremlins, and other creatures in metaphors. The culprit was a reward signal tied to a "Nerdy" personality that favored playful, creature-heavy language. Reinforcement learning amplified the quirk, and supervised fine-tuning spread it beyond that personality. The team eventually removed the reward and filtered the data.

This is a small story with a big lesson. Tiny incentives can shape model behavior in ways that leak across contexts. If goblins can spread through feedback loops, so can more serious biases. The write-up is a rare look at how personality tuning interacts with RL and data reuse.

The 1960s Art School Experiment That Redefined Creativity

https://thereader.mitpress.mit.edu/the-1960s-art-school-experiment-that-redefined-creativity/

This piece looks back at a radical art school program in the 1960s that broke down boundaries between disciplines. Students were pushed to collaborate across art, design, technology, and theory. The goal was not mastery of a single craft but learning how to think, critique, and build in context. The experiment influenced generations of artists and designers, even if the school itself did not last in its original form.

I like this as a reminder that "interdisciplinary" is not a buzzword invented by startups. We have been here before. When tools change, education follows. Right now we are watching that shift again with AI, and it is useful to remember that creativity often grows out of institutional risk.

Why Are the Artemis II Photos on Flickr?

https://www.anildash.com/2026/04/30/artemis-photos-flickr/

Anil Dash explains why NASA posts its high-resolution Artemis mission photos on Flickr in 2026. The short version: Flickr was built in an era that cared about ownership, licensing, metadata, and long-term access. It supports public-domain licensing and full-resolution downloads, and it has a foundation committed to preserving images for a hundred years. That makes it a better archival home than most modern social platforms.

This is a quiet but important point. Public institutions need infrastructure that is stable and boring. In a moment when official records can be edited or erased, independent archives matter. It is also a small case study in how design values from the early web still shape what is possible today.

Auto-Architecture: Karpathy's Loop, Pointed at a CPU

https://github.com/FeSens/auto-arch-tournament/blob/main/docs/auto-arch-tournament-blog-post.md

This GitHub write-up takes Andrej Karpathy's autonomous research loop and applies it to CPU design. The system proposes microarchitectural changes, implements them, runs formal verification and FPGA synthesis, and keeps the winners. From 73 hypotheses evaluated in under ten hours, it arrives at a set of improvements that nearly doubles CoreMark performance over the baseline design.

The key argument is not that the loop is magic. It is that the verifier is everything. Most ideas fail. Without strict formal checks, performance gates, and schema validation, the agent would ship broken silicon. That framing feels right. The interesting companies in this space will not just build agents. They will build the measurement harness around them.
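The shape of that loop is worth making concrete. Here is a minimal sketch of a verifier-gated search, in Python rather than hardware tooling; the names (propose, verify, benchmark) and the toy "design" dictionary are my own illustration, not taken from the actual repo, and the real system replaces these stubs with formal checks, synthesis, and CoreMark runs.

```python
import random

def propose(design):
    """Mutate one parameter of the current design (a toy stand-in)."""
    key = random.choice(list(design))
    candidate = dict(design)
    candidate[key] = design[key] + random.choice([-1, 1])
    return candidate

def verify(design):
    """Stand-in for strict formal checks: reject invalid configurations."""
    return all(value > 0 for value in design.values())

def benchmark(design):
    """Stand-in for a CoreMark-style performance score."""
    return sum(design.values())

def search(baseline, hypotheses=73):
    """Keep a candidate only if it passes the verifier AND improves the score."""
    best, best_score = baseline, benchmark(baseline)
    for _ in range(hypotheses):
        candidate = propose(best)
        if not verify(candidate):       # most ideas die here
            continue
        score = benchmark(candidate)
        if score > best_score:          # measured winners only
            best, best_score = candidate, score
    return best, best_score

random.seed(0)
design, score = search({"issue_width": 2, "btb_entries": 4})
```

The point of the sketch: the agent in the `propose` step can be as creative as it likes, because the `verify` and `benchmark` gates decide what survives.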

Who Owns the Code Claude Wrote?

https://legallayer.substack.com/p/who-owns-the-claude-code-wrote

This legal deep dive walks through three issues: whether AI-generated code is copyrightable, whether your employment contract already assigns it to your employer, and whether model output can quietly pull in copyleft-licensed code. It cites current US Copyright Office guidance, ongoing cases, and the messy edge cases around "meaningful human authorship."

If you are shipping AI-assisted software, this is not theoretical. Ownership questions show up in acquisitions and due diligence long before they reach court. The practical advice is simple: document your human decisions, run license scans, and read your employment contract. That is not glamorous, but it is how you avoid surprises later.

Your CEO Is Suffering from AI Psychosis

https://handyai.substack.com/p/your-ceo-is-suffering-from-ai-psychosis

This essay argues that some executives are mistaking activity for output. Agent dashboards, token leaderboards, and stories about sleepless nights shipping tens of thousands of lines of code create the feeling of productivity. Meanwhile, studies show limited measurable gains, and heavy AI use can correlate with more bugs and slower delivery.

The sharpest point here is about sycophancy. Models are trained to affirm and assist. That can reinforce overconfidence, especially at the top. The fix is not to ban agents. It is to keep boring disciplines in place: specs, acceptance criteria, real metrics. Without that, you get theater.

Letting AI Play My Game

https://blog.jeffschomay.com/letting-ai-play-my-game

A solo developer building a crossword dungeon crawler wires up a text-based interface so Claude can playtest the game through HTTP calls. The model navigates rooms, triggers traps, finds bugs, writes minimal fixtures to reproduce them, fixes the code, and validates the fix. One milestone with five new features is implemented and playtested in about twelve minutes.

This is a concrete version of the verifier story. The AI is useful because it can interact with a real system, not just generate code in a vacuum. It still misses feel issues, and the human still matters. But the workflow shifts from "write everything" to "supervise and refine," which feels closer to where many dev teams are headed.
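To make the workflow tangible, here is a toy version of an automated playtest loop. This is my own illustration, not the author's code: the blog post has the model talk to the game over HTTP, while this sketch uses an in-process text interface so it stays self-contained, and the game logic is deliberately trivial.

```python
class DungeonGame:
    """A minimal game with a text command interface an agent can drive."""

    def __init__(self):
        self.room, self.hp, self.log = 0, 10, []

    def command(self, cmd):
        """Accept a text command, mutate state, return a status line."""
        self.log.append(cmd)
        if cmd == "move":
            self.room += 1
            return f"room {self.room}"
        if cmd == "trap":
            self.hp -= 3
            return f"hp {self.hp}"
        return "unknown command"

def playtest(game, script):
    """Run a scripted session and flag states that look like bugs."""
    issues = []
    for cmd in script:
        result = game.command(cmd)
        if game.hp <= 0:
            issues.append(f"player dead after '{cmd}' ({result})")
    return issues

game = DungeonGame()
issues = playtest(game, ["move", "trap", "trap", "trap", "trap"])
```

In the real setup the scripted session is replaced by a model deciding what to try next, and the `issues` list becomes bug reports with reproduction fixtures. The structure, though, is the same: a machine-readable interface plus a loop that records what actually happened.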

Vera - A Language Designed for Machines to Write

https://veralang.dev/

Vera is a programming language built around the idea that LLMs, not humans, are the primary authors. It removes variable names in favor of typed positional references, makes contracts mandatory, and pushes verification through an SMT solver. The thesis is that models struggle with coherence and naming, so the language should constrain expressiveness and maximize checkability.

I am not sure how far this specific language will go, but the direction is interesting. If models generate more code, languages may evolve to reduce ambiguity and reward formal structure. It is a reminder that tooling is not fixed. It adapts to whoever is holding the keyboard.
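The mandatory-contracts idea is easy to show outside Vera itself. Below is a sketch in Python (I will not guess at Vera's actual syntax): every function carries machine-checkable pre- and postconditions, so generated code can be rejected mechanically when it violates its own stated contract. The `contract` decorator and the `mean` example are my own illustration.

```python
def contract(pre, post):
    """Wrap a function with a precondition and a postcondition check."""
    def wrap(fn):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: min(xs) <= r <= max(xs))
def mean(xs):
    """Average of a non-empty list; result must lie within the input range."""
    return sum(xs) / len(xs)

value = mean([1, 2, 3])
```

In Python these checks run at call time; Vera's pitch is to push them to an SMT solver so violations are caught before the code ever runs.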

The Risks of Anonymity in the Age of Generative AI

https://www.techdirt.com/2026/04/27/the-risks-of-anonymity-in-the-age-of-generative-ai/

This Techdirt piece explores how generative AI complicates online anonymity. When text, images, and even video can be fabricated at scale, anonymous speech becomes harder to trust and easier to weaponize. At the same time, anonymity still protects vulnerable voices and whistleblowers. The tension is not new, but the cost of faking identity has dropped sharply.

Simulacrum of Knowledge Work

https://blog.happyfellow.dev/simulacrum-of-knowledge-work/

This essay argues that knowledge work has always relied on proxy signals like polish and structure to judge quality. LLMs are very good at producing those signals without necessarily producing truth. As more output is generated and skimmed, the system optimizes for looking right rather than being right.

I love your feedback! If you've got a comment, want to discuss one of the items, or want to suggest something interesting for the next edition of the Linkfest, please reach out and contact me.