AI, Architecture, and the Human in the Loop 

Harsha Kodnad

Updated on Apr 2, 2026

30 second summary | AI acts like a mirror of your thinking—it amplifies clarity if your mental model is strong, and confusion if it’s weak. It excels at execution and repetitive tasks (heavy lifting) but should never be trusted with architecture, intent, or long-term decisions (high art). As systems evolve from prompting to agentic AI, the role of engineers shifts from doing to defining goals, constraints, and boundaries. While AI accelerates development, it also accelerates technical debt and architectural entropy if not guided properly. The real advantage in the AI era is not speed, but deep thinking, judgment, and ownership, which remain uniquely human.

(Reflections from 25 years of building systems) 

Part 1: The Mirror Effect — Why AI is only as good as your mental model 

I’ve spent close to 25 years building, using, and watching software tools evolve — from text editors to IDEs, debuggers to profilers, build systems to automation pipelines. Over the last few years, that evolution has taken a sharp turn, with chat-based, assistive, and now agentic AI tools becoming part of everyday engineering work. 

After living with these tools deeply, not casually, one uncomfortable truth has become clear to me. 

AI hasn’t eliminated technical debt caused by weak architectural decisions. 
It has accelerated how quickly that debt accumulates. 

This isn’t a cynical statement. It’s an architectural one. 

AI is not a magic box 

AI is often spoken about as if it were a new form of intelligence. It isn’t. AI does not understand systems, intend outcomes, or care whether a design will survive its second year in production. What it does exceptionally well is pattern completion, language synthesis, and, most importantly, the plausible expansion of incomplete intent. 

That last point is critical. AI does not create intent. It amplifies whatever intent already exists. 

The Mirror Effect 

I’ve come to think of modern AI as a high‑fidelity mirror. If your mental model of the system is clear, grounded in first principles, and shaped by experience, the AI reflects that clarity back to you — often at breathtaking speed. If your mental model is fuzzy, fragmented, or internally inconsistent, the AI reflects that instead — confidently, fluently, and at scale. 

This explains a phenomenon many teams are observing but struggling to articulate:  

Two people can use the same AI tool and walk away with wildly different outcomes. The tool didn’t change. The person did. 

This is a very important aspect to inspect while using these tools: deep, structural, first-principles context becomes critical for the person. AIs have extensive knowledge of the world of technological domains, solutions, and patterns. But they have no idea what a person intends to do, or what design lies behind the code. Design is the invisible part of code: the expert compression of years of experience, emotional intelligence, pain, and failed approaches, distilled into principles, patterns, and constraints. 

The master craftsman and the apprentice 

This metaphor is personal for me. After 25 years in engineering, I still see myself as an apprentice — not because mastery is unreachable, but because craftsmanship demands lifelong pursuit, not arrival. AI, in this framing, is the world’s fastest apprentice: tireless, well‑read, and capable of instant execution. 

But like every apprentice in history, it still depends on a master craftsman — someone who understands the blueprint, knows where the load‑bearing walls are, can tell the difference between elegance and coincidence, and has lived through failure modes rather than just reading about them. Without that mastery, the apprentice doesn’t become autonomous. It becomes dangerous. 

Knowing where the Sun is 

Every non‑trivial system has a center of gravity — a core idea, an organizing principle, a ‘Sun’ around which everything else orbits. Good architects know where that Sun is. They may disagree on implementation details, but they are rarely confused about what the system is fundamentally about, what must never be compromised, and which trade‑offs are acceptable. 

If you don’t know where the Sun is in your system, AI will not find it for you. It will simply give you a darker sky — filled with more stars. 

The real question, then, is not ‘What can AI do?’ but ‘What do you bring to the table that AI can amplify?’  

Depth matters again. Architecture matters again. First principles matter again. And paradoxically, that is the most optimistic thing about the AI era. 

Part 2: Heavy lifting vs high art — What to delegate, what to guard religiously 

One of the most dangerous mistakes we can make with AI is to use it everywhere equally. Not all work is equal. Not all thinking is equal. And not all delegation is harmless. 

After working deeply with AI across code, architecture notes, documentation, and reviews, one principle has become clear to me: if you’re using human cognition for boilerplate, you’re wasting time. If you’re outsourcing reasoning, you’re accumulating debt. The art is knowing the difference. 

Heavy lifting: Where AI is unambiguously good 

There is a large class of work where AI is not just useful — it is objectively better than humans. This includes repetitive, mechanical, well‑bounded tasks with low ambiguity and high effort: boilerplate code, migrations, unit tests once intent is clear, refactoring for consistency, and converting scattered notes into structured documents. 

This is heavy lifting. Using senior engineering time for this kind of work is not craftsmanship; it is inefficiency. Delegating it to AI is not laziness — it is focus. 

High art: Where AI must never be left alone 

There is another class of work that looks similar on the surface but is fundamentally different underneath. This work involves intent, judgment, first principles, trade‑offs, and long‑term consequences: choosing architectures, defining boundaries, establishing invariants, and owning the ‘why’ behind decisions. 

This is high art. Here, AI can assist, but it must never decide. It optimizes for plausibility, not truth, and has no instinct for maintenance, entropy, or future pain. Outsource this, and you don’t get leverage — you get a very confident form of technical debt. 

The fast car analogy 

I often visualize AI in the hands of an apprentice as an F1 car handed to someone still learning to drive on a racetrack. Speed is intoxicating — it feels powerful and progressive. But the car does not choose the destination, understand the terrain, or know who is holding the steering wheel. It simply accelerates faster. 

Garbage in, garbage out. If you point it in the wrong direction, it won’t correct you — it will just help you arrive there faster.  

AI gives us feedback loops faster, which is a gift, but only if someone is still holding the map. 

Why this matters for architects 

As AI takes over more surface‑level work, something interesting happens: the center of gravity shifts. What becomes scarce is not code, but clarity. Architects benefit disproportionately because they can separate signals from noise, recognize bad abstractions early, and understand the cost of being ‘almost right’ or ‘chasing perfection’. AI doesn’t replace this. It rewards it. 

Part 3: From prompting to agency — When tools start feeling like teams 

For a long time, our interaction with AI looked deceptively simple: you typed a prompt, the system responded, and you edited the output. But as systems began holding context, executing sequences, consuming feedback, and revisiting decisions, we crossed an invisible line — from prompting tools to directing agents. 

Prompting is transactional and local; agency is systemic. This shift quietly but fundamentally changes the architect’s role. 

Prompting is local. Agency is systemic. 

Prompting is transactional: You ask a question. You get an answer. The interaction ends. 

Agency is systemic: You define goals, constraints, boundaries, evaluation criteria. And the system figures out how to move within that space.  

This difference matters.  

A prompt is about what to say next. Agency is about what to do next. 
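To make the distinction concrete, here is a purely illustrative sketch of that shift: the architect declares the goal, the hard constraints, and the evaluation criteria once, and the system moves within that space on its own. Every name here (`Charter`, `run_agent`, `propose`) is an assumption for illustration, not a real agent framework.

```python
# Illustrative sketch only: the architect declares intent up front;
# the "agent" explores strictly within the declared boundaries.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Charter:
    goal: str                                   # the starting intent
    constraints: list[Callable[[str], bool]]    # hard boundaries, never crossed
    evaluate: Callable[[str], float]            # evaluation criteria for success
    max_steps: int = 10                         # when to stop exploring


def run_agent(charter: Charter, propose: Callable[[str], str]) -> Optional[str]:
    """Explore candidates, discard anything outside the boundaries,
    and keep the best candidate according to the evaluation criteria."""
    best, best_score = None, float("-inf")
    candidate = charter.goal
    for _ in range(charter.max_steps):
        candidate = propose(candidate)          # the system moves on its own
        if not all(ok(candidate) for ok in charter.constraints):
            continue                            # outside a boundary: never acted on
        score = charter.evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best
```

The design point is that control lives in the declared constraints and evaluation function, not in supervising each individual step.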

Why agentic systems feel uncomfortable (at first) 

When people first encounter agentic AI, the reaction is often uneasy. 

  • “It’s doing too much.” 
  • “I don’t know what it’s thinking.” 
  • “I’ve lost control.” 

This discomfort is not accidental. Agentic systems expose something we often avoid confronting: 

“Control does not come from micromanagement. It comes from clear intent, first principles, direction and strong boundaries” 

When intent is vague, agency feels dangerous. When boundaries are weak, autonomy feels chaotic. 

This is not an AI problem. It’s an architectural one. 

The architect as orchestrator 

Architects have always done deep work: exploring alternatives, testing patterns, running proofs, and converging slowly on decisions that would shape systems for years. 

Agentic systems don’t change that work; they accelerate it. What changes is where the effort goes. 

Instead of personally traversing the entire solution space, architects now explicitly define: the intent, constraints, invariants, and what success must look like. 

The system explores faster; the architect decides more deliberately. That is the shift. 

Autonomy without this orchestration is not intelligence. It is fast chaos. 

Experience matters here because knowing what to explore, what to forbid, and when to stop is not theoretical knowledge. It is earned judgment. 

Part 4: AI and architectural entropy — Why speed without structure destroys systems 

Most systems fail not because of lack of intelligence, but because of entropy — small, compounding decisions whose cost is deferred.  

AI does not change this law. It accelerates it. 

Creation is easy. Maintenance is brutal. 

AI is exceptionally good at creation. It can generate code, assemble architectures that look coherent, and produce systems that run and scale — initially. But maintenance is where truth emerges. 

Maintenance is not about changing code; it is about preserving invisible design truths: intent, invariants, and expert compression that never fully lives in documentation. AI can read design rationales, but it cannot grasp their spirit. 

The most dangerous phrase: ‘AI built it’ 

This is a dangerous phrase. AI builds nothing on its own. Every system it produces reflects the constraints it was given, the intent it inferred, and the architecture it was allowed to assume. 

A common misunderstanding 

We often cite examples where AI has “built” impressive things — for instance, a C compiler capable of compiling a large system like Linux. 

But let’s be honest about what really happened. 

The C language is decades old. Its semantics are deeply understood. Its design trade-offs have been explored, debated, and paid for by generations of human experts. 

Enormous expert compression, decades of pain and operational learning, mature, battle-tested compiler codebases, and a rich ecosystem of prior art all already existed. 

AI did not invent that knowledge. It understood it and stood on top of it. 

The real question 

It is not whether AI can generate a compiler. 

The real question is this: 

  • Would you trust that compiler in a product that millions of customers already use? 
  • Would you bet your system’s correctness, performance, and long-term maintainability on it? 
  • What was the architectural intent of building that C compiler? To replace existing ones, or to showcase the power of the tool that created it? 

That hesitation — the pause before answering — is the point. 

Architecture is not about producing artefacts. It is about owning consequences. 

When systems collapse under maintenance load, the cause is almost never the tool. It is missing architectural ownership, unclear invariants, unexamined assumptions, and deferred decisions disguised as flexibility. 

“AI can reproduce the letter of past success. It cannot yet carry the responsibility of the future” 

AI can help us build faster than ever before. It can also help us decay faster than ever before. Both outcomes are possible. Both will happen. 

The difference will not be the model. It will not be the tooling. It will not even be talent. 

It will be architecture — quiet, opinionated, and deeply human. 

Part 5: The human edge — Growing 10× better (Not faster) in the age of AI 

The most important question is no longer: What can AI do? 

The real question is: What should humans become in an AI-accelerated world? 

“Paradoxically, as AI becomes more autonomous, the value of deeply human qualities increases — not decreases” 

Creativity, judgment, taste, and responsibility only grow in value. The opportunity is not to move faster, but to grow better: in depth, clarity, and accountability. 

A note to students and new‑age engineers 

Do not compete with AI on execution — it will always be faster. Instead, invest in mental models, chase responsibility early, and learn to own systems over time. Use AI as a multiplier, never as a substitute for understanding. 

A satisfying, durable career 

A career built on shallow wins, hopping between tools or programming languages (will they even exist?), and output without ownership looks good early and collapses later. 

A career built on judgment, taste, and responsibility compounds quietly and lasts. 

Closing note 

Every generation of engineers is shaped by their tools. But remembered engineers are shaped by something else: 

  • The systems they stood behind 
  • The consequences they owned 
  • The judgment they exercised when it mattered most 

AI will change how we build. It will not change who must answer when things go wrong. 

That responsibility — and the growth that comes with it — remains deeply, stubbornly human. 

Published on April 2, 2026
