Linkfest #5
Every couple of weeks I pick a list of links that felt worth my time. Culture, software, management, the strange corners of the internet. Things that made me pause.
This edition circles around AI and agency. Autonomous weapons, humanoid soldiers, and nuclear escalation scenarios sit next to essays about AI code quality and the economics of software work. There's also a sharp critique of pasting raw LLM output into chats, a tool that tells you which models your laptop can actually run, a PHP framework that skips JavaScript entirely, a brain-in-a-box fruit fly, Pokémon Go players training delivery robots without knowing it, and a thoughtful case for generating systems instead of single artifacts. The common thread is simple: once software starts acting in the world, not just talking, the stakes change.
Enjoy! -- Christoph (CTO @ Basilicom)
One Step from Skynet
https://nickjmilani.substack.com/p/one-step-from-skynet
Nick Milani argues that the Skynet scenario no longer feels like pure fiction. His core point is straightforward: AI systems now see, decide, and act through APIs. The same mechanism that lets an AI book a restaurant can, in principle, connect to a weapons system. Step one, connecting AI to military hardware, has already happened in various forms. Step two, removing human oversight, is under pressure. He walks through simulations where general purpose models escalate nuclear conflicts at high rates and highlights the risk of autonomous feedback loops between opposing systems.
I appreciate that he avoids the Hollywood angle and focuses on process and incentives. The plausible deniability problem he describes feels especially real. Even if you think full autonomy is far off, the pressure to move humans out of the loop is visible today. This is less about rogue superintelligence and more about ordinary systems operating at machine speed in high stakes contexts. That alone is unsettling.
Rise of the AI Soldiers
https://time.com/article/2026/03/09/ai-robots-soldiers-war/
This TIME feature looks at humanoid robots built for military use, including the Phantom platform tested with US forces and in Ukraine. Founders frame them as a moral upgrade: send machines instead of young soldiers. Critics warn that removing humans from the battlefield lowers the political cost of conflict and blurs accountability. The article also covers autonomous drone use in Ukraine and the race among US, Chinese, and Russian actors.
The most striking detail is how normal this already sounds. Contracts, pilot programs, product roadmaps. The debate has shifted from whether to build these systems to how much autonomy they should have. The tension between human-in-the-loop and human-on-the-loop is not abstract. It is policy and procurement. That is where this will be decided.
GenAI and the Software Engineering Economy
https://alexey-pelykh.com/blog/genai-swe-economics/
Alexey Pelykh breaks the AI and coding debate into three variables: number of engineers, output volume, and total market spend. He surveys productivity studies that show both speedups and slowdowns, defect data that shows AI code is often buggier, and labor market signals like collapsing junior hiring. He outlines four scenarios from displacement to expansion and argues that transformation with polarization is the most likely path.
I like the framing because it forces you to look at tradeoffs. Faster generation does not mean lower total cost if verification becomes the bottleneck. The data around review time and code churn is worth watching. If writing code becomes cheap but trusting code becomes expensive, the center of gravity in software teams will shift. We are already seeing that.
Faster than Understanding
https://phpunit.expert/articles/faster-than-understanding.html
Sebastian Bergmann describes using an AI agent to implement a complex software metric from an academic paper. The code was produced in minutes, with tests and visualizations. The problem is verification. He is not an expert in the metric and cannot easily confirm whether the implementation is correct. The AI was faster than his ability to understand the domain.
This captures a real tension. AI can reduce typing effort, but it cannot remove the need for comprehension. If you cannot judge the output, you inherit risk. The danger is not bad code that is obviously wrong. It is plausible code that looks fine. That feels like the right caution for teams rushing to automate deep domain work.
Stop Sloppypasta
This short manifesto argues that pasting raw LLM output into chats or emails is rude and counterproductive. Writing used to be expensive, which created a balance between author and reader. With AI, producing text is cheap but reading is still costly. The result is effort asymmetry and a loss of trust. The piece links to research suggesting that outsourcing thinking to models can reduce comprehension.
I find the social angle convincing. Even if the content is correct, an untouched AI block feels off. It signals that the sender did not do the work of editing and owning the message. In teams, that erodes trust quickly. AI can help draft, but responsibility for the final words should stay human.
php-via - Real-time PHP Without JavaScript
php-via is a framework that brings server-driven reactivity to PHP using OpenSwoole and Server-Sent Events. The pitch is simple: no JavaScript to author, no build step, scoped state that can be private to a tab or shared globally with one line. Signals, actions, and views form the whole API.
I enjoy seeing this pattern outside the usual ecosystems. Server-centric reactivity keeps complexity in one place and can be a good fit for certain products. The tradeoffs are clear, especially around scaling and ecosystem maturity, but the idea of multiplayer by default with minimal tooling is attractive for internal tools and focused apps.
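For readers unfamiliar with the pattern php-via builds on: this is not php-via code, just a minimal Python sketch of the Server-Sent Events wire format that server-driven reactivity relies on. The function names and the counter example are my own illustration.

```python
import json

def sse_event(name, payload):
    """Format one SSE message. The browser's EventSource API parses
    'event:' and 'data:' lines and fires an event at each blank line."""
    return f"event: {name}\ndata: {json.dumps(payload)}\n\n"

def counter_stream(limit=3):
    """Server-side state (here, a counter) pushed to the client as it
    changes -- the client only renders, it never computes the state."""
    for n in range(1, limit + 1):
        yield sse_event("state", {"counter": n})

# A real server would write these chunks to an open HTTP response.
for chunk in counter_stream():
    print(chunk, end="")
```

The appeal is that all state lives on the server; the client is a dumb renderer subscribed to a stream, which is what makes "shared globally with one line" plausible.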
Generator Generator
https://dev.to/sebs/generator-generator-1m0h
This essay argues that instead of asking AI to generate single artifacts, we should ask it to generate the systems that produce them. The example is a set of Agile workshop tokens. Rather than one image of a coin, the AI outputs a procedural grammar that can generate a full, coherent set of printable tokens with constraints like Fibonacci values baked in.
The broader idea is useful. When you encode rules and constraints, you get leverage. A system can produce many valid outputs, not one brittle sample. In product work, that mindset often separates prototypes from platforms. AI can help draft systems, not just assets, if we ask it the right way.
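To make the "generate the system, not the artifact" idea concrete, here is a hedged sketch in Python. The shapes, the five-value set, and the function names are illustrative assumptions, not the essay's actual grammar; the point is that the constraints (Fibonacci values, every shape/value pairing) are encoded once and every output is valid by construction.

```python
def fibonacci(n):
    """First n Fibonacci numbers starting 1, 2 -- a common value
    scale for estimation tokens."""
    a, b = 1, 2
    out = []
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

def token_set(shapes=("circle", "hexagon"), values=None):
    """A tiny 'grammar': the full cross product of shapes and values,
    so the printed set stays coherent no matter how it is regenerated."""
    values = values or fibonacci(5)
    return [{"shape": s, "value": v} for s in shapes for v in values]

tokens = token_set()  # 2 shapes x 5 values = 10 coherent tokens
```

Change a rule and the whole set regenerates consistently; change one artifact and you have to hand-fix the rest.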
Researchers Upload Fly's Brain to Matrix
https://futurism.com/science-energy/research-fly-brain-matrix
Eon Systems claims to have simulated the full connectome of a fruit fly and placed it inside a physics-based virtual body. The system uses a detailed wiring diagram and produces multiple behaviors like grooming and feeding. The long-term ambition is scaling from flies to mice and eventually to human-scale emulation.
Pokémon Go Players Have Been Training Delivery Robots
https://www.popsci.com/technology/pokemon-go-delivery-robots-crowdsourcing/
Niantic trained its visual positioning system on billions of images collected from Pokémon Go players scanning real-world locations. That mapping data is now being used to help delivery robots navigate sidewalks with higher precision than GPS alone. What started as a game mechanic has quietly turned into infrastructure.
CanIRun.ai - Can Your Machine Run AI Models?
This small web tool inspects your hardware through browser APIs and estimates which open models you can run locally. It lists memory requirements, architecture types, and context windows across a wide range of models, from tiny edge variants to massive mixture-of-experts models.
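The core estimate behind tools like this is simple enough to sketch. This is not CanIRun.ai's actual method; it is a back-of-the-envelope Python version under my own assumptions (weights at a given bit width, plus roughly 20% overhead for KV cache and runtime).

```python
def model_memory_gb(params_billion, bits_per_weight=4, overhead=1.2):
    """Rough rule of thumb: weight bytes = params * bits / 8,
    inflated by ~20% for KV cache and runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

def can_run(params_billion, available_ram_gb, bits_per_weight=4):
    """True if the estimated footprint fits in available memory."""
    return model_memory_gb(params_billion, bits_per_weight) <= available_ram_gb

# A 7B model at 4-bit quantization needs roughly 4.2 GB;
# a 70B model at the same quantization needs roughly 42 GB.
```

On this estimate, a 16 GB laptop handles a 4-bit 7B model comfortably but not a 70B one, which matches the broad strokes of what such tools report.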
I love your feedback! If you have a comment, want to discuss one of the items, or want to suggest something interesting for the next edition of the Linkfest, please reach out and contact me.
Christoph Lühr - CTO
christoph.luehr@basilicom.de