The Theorist by Force

The Zynux specification is 3,300 lines of state-machine bookkeeping. I drafted it in an afternoon. A decade ago, that document would have been the start of a three-year project. Today it is the project.

For most of the history of enterprise software, the architect was a theorist by force. The cost of being an experimentalist was too high. We drew diagrams. We wrote architectural decision records. We waited for the Factory to translate intent into binary, knowing the translation would cost us most of the meaning we started with.

The Apparatus

Generative AI has changed the physics. The cost of running an experiment has collapsed. The apparatus exists now. Call it a Desktop Supercollider.

The metaphor is not loose. The senior technologist now writes the experiment itself, not instructions for someone else to interpret. The specification is the source code of intent. The state machine is, at once, the theory and the apparatus. Guard conditions are the equations. The runtime is the laboratory. A failed transition is a measurement against the data.
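The picture above can be made concrete. Below is a minimal sketch, in Python, of a state machine as apparatus: guarded transitions, and a fail signal when the data contradicts a guard. The invoice states, events, and guard rules are hypothetical, invented for illustration; nothing here is taken from the Zynux specification.

```python
from dataclasses import dataclass, field

class GuardFailure(Exception):
    """A failed transition: the measurement disagreed with the theory."""

@dataclass
class Machine:
    state: str
    # transitions: (from_state, event) -> (guard_predicate, to_state)
    transitions: dict = field(default_factory=dict)

    def fire(self, event, data):
        key = (self.state, event)
        if key not in self.transitions:
            raise GuardFailure(f"no transition for {key}: hole in the topology")
        guard, target = self.transitions[key]
        if not guard(data):
            raise GuardFailure(f"guard rejected {event!r} in state {self.state!r}")
        self.state = target
        return self.state

# A hypothetical invoice lifecycle: the guards are the "equations".
m = Machine(state="draft", transitions={
    ("draft", "submit"): (lambda d: d["amount"] > 0, "pending"),
    ("pending", "approve"): (lambda d: d["approver"] != d["author"], "posted"),
})
m.fire("submit", {"amount": 120.0})  # draft -> pending
```

Running an "experiment" is then just firing events at the machine with real data; a `GuardFailure` is the measurement that falsifies the hypothesis.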

The implementation is a side effect: a build artifact, as temporary and disposable as the opcodes a C++ compiler produces. We don't debug the binary. We are rapidly approaching the point where we stop debugging the Java or the Kotlin. We will debug the theory.

The theoretical physicist can now become an experimentalist. You can boil any system, regardless of complexity, down to a hypothesis. You no longer need a department to prove it. You can prove it yourself.

The Fail Signal

The beauty of experimentalism is the fail signal. In a traditional project, a missing VAT requirement is a disaster discovered months late, after the team has built around the gap. In the Zynux experiment, the AI caught the missing VAT logic immediately. Not because it was clever. It noticed a hole in the topology of the system. It saw an incomplete logical model and refused to proceed.

This is what the apparatus does. Not generate code. Detect inconsistency. The formalisms (Harel state machines, invariants, guards) are not academic. They are the only way to specify intent at the speed of thought. When intent is precise and the production of code is cheap, the craft shifts from execution to hypothesis.
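One simple form of "detecting inconsistency" is a completeness check over the transition table: enumerate every (state, event) pair and report the ones with no defined outcome. This is a deliberately naive sketch (real machines legitimately leave many pairs undefined, so a practical checker would take a list of pairs that must be covered), and the states, events, and the missing tax-handling event are all hypothetical stand-ins for the missing VAT logic in the story.

```python
# Declared vocabulary of a hypothetical document lifecycle.
states = {"draft", "posted", "refunded"}
events = {"post", "refund", "apply_tax"}

transitions = {
    ("draft", "post"): "posted",
    ("posted", "refund"): "refunded",
    # note: no state handles "apply_tax" -- the hole in the topology
}

def holes(states, events, transitions):
    """Return every (state, event) pair with no defined transition."""
    return sorted((s, e) for s in states for e in events
                  if (s, e) not in transitions)
```

A checker like this is mechanical, not clever, which is the point: it refuses to proceed because the logical model is incomplete, not because it understands taxation.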

Where the IP Lives

What survives this transition? Not the code. The code is noise produced by a machine that finally understands what we meant.

The institutional memory of why a ledger is append-only, why a credit note must use the original exchange rate, why a posting cannot be deleted: that is the true IP of a company. It does not live in PowerPoint. It does not live in the LLM. It lives in the meta-information layer: the ontology, the state models, the invariants, the process grammar.
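Rules like these become durable IP precisely when they are written as executable invariants rather than prose. A minimal sketch, with hypothetical ledger and credit-note fields (`fx_rate`, entry lists) invented for illustration:

```python
def check_append_only(old_entries, new_entries):
    """A posting cannot be deleted or mutated:
    the old ledger must be a strict prefix of the new one."""
    return new_entries[:len(old_entries)] == old_entries

def check_credit_note_rate(credit_note, original_invoice):
    """A credit note must use the original exchange rate,
    not the rate on the day it was issued."""
    return credit_note["fx_rate"] == original_invoice["fx_rate"]

# Usage: the apparatus runs these after every proposed state change
# and refuses any transition that violates an invariant.
ledger_before = [{"id": 1, "amount": 100}]
ledger_after = ledger_before + [{"id": 2, "amount": -40}]
```

The prose "why" still matters, but once the rule is an invariant, every future hypothesis is automatically tested against it.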

This is what the apparatus runs experiments against. Without it, the architect is back to chalkboards. With it, every hypothesis becomes testable in an afternoon.

Theory used to be where architects hid. Now it's what they debug. The apparatus is on your desk. Run the experiment.