For decades, building enterprise software began with understanding intent. Not features. Not user stories. Intent: why a system existed, what obligations it was designed to satisfy, which business rules it was meant to enforce. That understanding lived in specifications, design documents, and long discussions before a single line of code was written. It was far from perfect. But it reflected a belief that clarity up front reduced risk later.

Then the industry accelerated. Release cycles shortened. Customers expected continuous improvement. Agile practices emerged to help teams adapt, and "working software over comprehensive documentation" became a defining principle. The focus shifted toward learning through delivery rather than planning through paperwork. In many ways, that was a necessary correction.

But something subtle happened along the way.

In too many organizations, "working software" quietly became "the code is the documentation." As if the implementation itself were sufficient to explain what a system was meant to do and why it existed in the first place.

It is not.

Code tells us how a problem is solved. It rarely tells us why that problem mattered. These are fundamentally different forms of knowledge. One is mechanical. The other is contextual.

Across long-lived enterprise systems, this gap becomes visible after a decade or two. Millions of lines of logic implement tax rules, reporting formats, compliance workflows, customer-specific agreements, and historical exceptions. The behavior is still there. The rationale often is not. We know that something happens. We can see how it happens. We increasingly struggle to explain why.

This is not a documentation problem. It is an institutional memory problem.

Intent lives outside the code: in regulations, contracts, emails, negotiations, audits, and design decisions made under pressure. As people move on and systems evolve, that context slowly disappears. What remains is behavior without explanation. The software keeps running. But confidence in changing it declines. Every unusual rule might be compliance. Every exception might be contractual. Every workaround might be critical. Or it might not. We no longer know.

This challenge is becoming more visible now, partly because of the tools we are beginning to rely on.

AI-assisted development, code generation, and agentic systems are exceptionally good at optimizing behavior. They are far less capable of preserving intent. If we ask machines to refactor systems whose rationale is undocumented, they will simplify away rules that exist for non-obvious reasons. Not out of malice. Out of ignorance. The intent was never encoded in a form that inference systems could access.

Without an explicit why layer, automation becomes structurally risky. Not because the tools are immature, but because the foundation is incomplete. This connects directly to a broader architectural challenge: the absence of a meta-information layer that captures not just what data exists and how it flows, but what it means and why it matters.

We are starting to see a quiet return to intent-aware practices. Architecture Decision Records. Obligation registers. Specification by example. Domain models tied to regulatory sources. Lightweight governance artifacts that explain not just what a system does, but what it must never forget. These are not bureaucratic overhead. They are infrastructure. They create a shared understanding that survives individual careers and organizational changes. They allow both humans and machines to reason about boundaries, not just mechanics.
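An obligation register, for instance, need not be elaborate. A minimal sketch in Python of what one entry might look like; every name, rule, and source below is invented for illustration, not drawn from any real system:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class Obligation:
    """One entry in a hypothetical obligation register:
    a business rule tied to the reason it exists."""
    rule_id: str      # identifier referenced from code and tests
    summary: str      # what the system must do (or must never do)
    source: str       # regulation, contract clause, or decision record
    owner: str        # who can answer questions about it
    review_by: str    # date the obligation should be revisited


# A toy register with a single invented entry.
REGISTER = [
    Obligation(
        rule_id="TAX-ROUNDING-17",
        summary="Round tax per line item, never per invoice total.",
        source="Hypothetical Regulation X, Art. 12 (agreed in 2014 audit)",
        owner="finance-domain-team",
        review_by="2026-01-01",
    ),
]


def lookup(rule_id: str) -> Optional[Obligation]:
    """Answer 'why does this rule exist?' from the register."""
    return next((o for o in REGISTER if o.rule_id == rule_id), None)
```

The point is not the data structure but the discipline: the rule identifier appears in the code, and the register answers the question the code cannot.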

Recovering intent in legacy systems is rarely elegant. It involves mining code for related behavior, tracing dependencies, finding overlaps, and then doing the human work: searching archives, reviewing contracts, talking to domain experts, mapping rules to regulations. It is slow. It is unglamorous. It is strategic. You do not discover intent. You rebuild it. And once rebuilt, you preserve it deliberately.
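The mechanical half of that recovery, finding every place a suspicious rule is touched, can be partially automated before the human work begins. A hedged sketch using Python's ast module to locate functions that reference a given attribute; the sample source and the flag name are invented:

```python
import ast

# Invented legacy code: the flag's purpose is exactly what we are trying to recover.
SOURCE = '''
def apply_discount(order):
    if order.legacy_rate_flag:   # why does this flag exist?
        return order.total * 0.97
    return order.total

def report(order):
    return {"total": order.total}
'''


def functions_touching(source: str, attr: str) -> list:
    """Return names of functions whose body references the given attribute."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for sub in ast.walk(node):
                if isinstance(sub, ast.Attribute) and sub.attr == attr:
                    hits.append(node.name)
                    break
    return hits


print(functions_touching(SOURCE, "legacy_rate_flag"))
```

A list like this is only a starting point: it tells you where to look, not what you will find in the archives and contracts.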

This is another example of our industry moving in cycles. From heavy upfront specification, to rapid delivery, to realizing that meaning cannot remain implicit forever. From assuming code is enough, to recognizing that systems need memory as much as they need logic.

For those of us building AI-supported enterprise systems, this realization carries operational weight. Inference capability alone is not enough. There must be something meaningful to reason about. An ontology, a state model, a set of defined relationships: these form the meta-information layer. But without captured intent, even the most sophisticated reasoning layer operates on mechanics alone, optimizing processes it cannot fully understand.
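One minimal reading of such a meta-information layer, sketched in Python with invented concepts: each domain entity carries not just its structure and relationships, but a pointer to why it matters. This is an assumption-laden toy, not a prescription:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Concept:
    name: str
    meaning: str     # what this data represents in the domain
    rationale: str   # why the system must track it


@dataclass(frozen=True)
class Relation:
    subject: str
    predicate: str
    obj: str


# A toy ontology: structure plus intent, not structure alone.
CONCEPTS = {
    "Invoice": Concept(
        "Invoice",
        "A billable claim against a customer",
        "Required for revenue recognition and audit trails",
    ),
    "RetentionFlag": Concept(
        "RetentionFlag",
        "Marks records kept past their normal deletion date",
        "Legal hold obligations override routine cleanup",
    ),
}
RELATIONS = [Relation("Invoice", "may_carry", "RetentionFlag")]


def why(name: str) -> str:
    """The question legacy systems cannot answer: why does this exist?"""
    concept = CONCEPTS.get(name)
    return concept.rationale if concept else "unknown -- intent was never captured"
```

A reasoning layer, human or machine, can query this structure before touching behavior; without it, the fallback answer is the last line of `why`.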

Writing down why something exists is not a step backward. It is what allows systems to evolve without losing themselves along the way. Speed still matters. But clarity is what makes speed trustworthy.

In a world where humans and machines increasingly build and operate software together, shared intent is not a nice-to-have. It is architectural infrastructure.