The Architecture of Intelligence: John McCarthy and the Foundations We Build Upon
How a 1956 proposition shaped the AI systems we design today—and what his methods still teach us about building intelligent experiences
Every time you interact with an AI system—whether you’re asking a conversational interface to summarize a document, watching a recommendation algorithm surface relevant content, or relying on automated systems to flag potential risks—you’re standing on scaffolding built nearly seventy years ago by a man who never witnessed any of these implementations.
In 1955, when programmable computers were still scarce enough that “computer” could also mean a person who computed, John McCarthy and his colleagues drafted the proposal for a summer workshop at Dartmouth College, convened in 1956, around a conjecture that seemed almost absurd: “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
That sentence didn’t just name a field. It set a direction that would eventually lead to the entire ecosystem of artificial intelligence we interact with today—from the large language models that power conversational interfaces like ChatGPT, Claude, and Gemini, to the computer vision systems that help diagnose medical images, to the predictive algorithms that optimize supply chains. ChatGPT and similar chat interfaces are simply one visible layer—a frontend that makes it easy for humans to communicate with these underlying AI systems. The real intelligence lives in the models, the architectures, and the computational approaches that McCarthy helped establish.
The Invisible Architecture We Still Build Upon
McCarthy’s influence extends far beyond coining the term “Artificial Intelligence.” He constructed the conceptual and technical infrastructure that makes modern AI possible, though most designers and developers today wouldn’t recognize his fingerprints on their work.
Consider LISP, the programming language he created in the late 1950s. While few people write LISP today, its core philosophy—symbolic reasoning, recursive thinking, treating code as data—shaped how we conceptualize intelligent systems. When you design a chatbot that reasons through nested conditional logic, you’re following mental models McCarthy established decades ago.
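That “code as data” idea is easiest to see in miniature. The sketch below, in Python rather than LISP, represents a program as a nested list and evaluates it recursively; the operator table and names are my own illustration, not McCarthy's original design.

```python
from math import prod

# Operators the toy evaluator understands; purely illustrative.
OPS = {
    "+": lambda *args: sum(args),
    "*": lambda *args: prod(args),
}

def evaluate(expr):
    """Recursively evaluate a LISP-style nested-list expression."""
    if not isinstance(expr, list):   # an atom (a number) evaluates to itself
        return expr
    op, *args = expr                 # the first element names the operation
    return OPS[op](*(evaluate(a) for a in args))

program = ["+", 1, ["*", 2, 3]]      # the program is itself ordinary data
print(evaluate(program))             # -> 7
```

Because the program is just a list, it can be inspected, transformed, or generated by other code before it runs, which is the property that made LISP such a natural fit for symbolic AI.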
Then there’s time-sharing, his vision of multiple users accessing a single powerful computer simultaneously. In 1960, this was revolutionary. Today, we call it cloud computing, and it’s the foundation on which every generative AI model runs. The architecture that lets millions of people query GPT-4 simultaneously? McCarthy sketched that pattern when mainframes filled entire rooms.
But perhaps his most enduring contribution was philosophical: he saw AI not as replacement but as augmentation. Machines assisting humans in reasoning, planning, and decision-making. This human-centered view of automation feels remarkably prescient when we’re now grappling with how to design AI that collaborates rather than dictates.
What McCarthy’s Legacy Means for Modern Design Practice
I’ve spent seventeen years moving from visual design to digital experience architecture, and I’ve noticed something curious. Tools change constantly—the design software I used in 2007 is obsolete, the prototyping methods have evolved three times over—but the underlying patterns of systems thinking remain remarkably stable. McCarthy’s work reveals three of these durable patterns that matter as much today as they did in 1956.
First, naming crystallizes thinking. When McCarthy called it “Artificial Intelligence,” he gave researchers and builders a shared frame of reference. This wasn’t just semantics; it was strategic clarity. In my projects, I’ve learned that naming the problem space crisply creates alignment faster than any tool choice. The difference between “we need AI” and “we need to reduce cognitive load in approval workflows by surfacing risk signals early” is the difference between wandering and shipping.
Second, design for collaborative scale from the start. McCarthy’s time-sharing wasn’t merely technical innovation; it was a collaboration pattern. Modern AI products must consider multi-user environments where people and models co-create. Who sees what? Who can override the AI’s suggestion? How do teams review and refine outputs collectively? These aren’t implementation details—they’re foundational design decisions.
Third, treat intelligent systems as capable colleagues, not oracles. The most successful AI features I’ve architected behave like good teammates: they explain their reasoning, accept feedback, adapt to corrections, and visibly improve over time. This is fundamentally a design challenge. An AI that generates a perfect output but can’t explain its logic is less useful than one that produces good work and shows its reasoning.
From SAIL to SAILs: Testing Ideas in the Real World
McCarthy didn’t just theorize about artificial intelligence from a distance. In 1963, he helped establish SAIL—the Stanford Artificial Intelligence Laboratory—where abstract ideas collided with the messy reality of systems, tools, and actual people using them. SAIL became legendary not because it produced perfect systems, but because it created an environment where researchers could fail fast, learn deeply, and refine their thinking through direct contact with problems.
That experimental spirit finds a direct descendant in my current work at Aramco’s SAIL, the Saudi Accelerated Innovation Lab. The name is no accident—it’s an intentional nod to Stanford’s legacy and an aspiration to build the same culture of rigorous experimentation for Saudi Arabia. Like its Stanford predecessor, our lab operates as a proving ground where we validate what genuinely improves decisions and reduces friction, rather than chasing whatever feels novel this quarter.
Our work at SAIL operates through a decentralized model where our Digital Experience Design Architect team collaborates across organizational boundaries—with designers embedded in business units, external vendors building our tools, and internal product teams shipping features. We don’t wait for problems to arrive at our door; we take initiative, moving between discovery research, rapid prototyping, and AI-assisted analysis to spot opportunities and build innovative solutions before they become urgent requests.
A typical week might involve working with a procurement team to understand workflow friction, prototyping an AI-assisted approval interface with an external vendor, and collaborating with an internal product team to instrument decision-clarity metrics across their existing tools. The work is about turning fuzzy problems—“approvals take too long” or “we miss critical risks”—into testable flows that we can measure, refine, and scale across the enterprise.
What makes this feel connected to McCarthy’s legacy is our insistence on certain practices. We measure usability not through subjective satisfaction scores but through task success rates, time-to-clarity metrics, and error-recovery patterns. When users stumble, we want to know precisely where and why. We design prototypes that directly inform governance policies, ensuring that questions about roles, permissions, and audit trails get answered during design rather than after deployment. We build AI-assisted research tools that provide summaries with full citations and visible rationale, always asking “Why this recommendation?” and creating correction loops so the system learns from its mistakes.
Perhaps most importantly, we design within enterprise constraints from day one. Security boundaries, data governance requirements, and on-premises integration needs aren’t obstacles we work around—they’re design parameters we work within. This constraint-aware approach prevents the common tragedy where a brilliant prototype dies in the gap between lab and production.
The goal remains what McCarthy championed seven decades ago: augmentation, not replacement. We’re building systems that explain their reasoning, adapt based on feedback, and scale responsibly across teams. Systems that make expertise more accessible without pretending to replace the judgment that comes from years of experience in a domain.
From Philosophy to Practice: A Working Framework
When I design AI-assisted experiences, whether in our SAIL lab or in broader consulting engagements, I use what I think of as the McCarthy Method—a framework inspired by his approach to breaking down intelligence into describable, machine-simulatable components.
I start by defining what decision or task we’re actually trying to clarify or accelerate. Not “add AI to the dashboard” but “help product managers identify which customer requests signal market shifts versus individual edge cases.” Specificity forces clarity.
Then I decompose the problem. What inputs, constraints, and context does the model need to do quality work? If a human expert would need customer segment data, usage patterns, and competitive intelligence to make this call, the AI needs structured access to the same information.
Next comes dialogue design. How will the system explain its output? Not just what it concluded, but why? How will users correct it when it’s wrong? This correction loop isn’t a nice-to-have—it’s how the system learns what “good” means in your specific context.
Governance follows naturally. What’s the review loop? When must a human make the final call? What’s the audit trail? These questions feel bureaucratic until something goes wrong, and then they’re the only questions that matter.
Finally, evolution. Where will the system learn over time? Are we refining prompts, adding examples, or updating policies? Making this explicit prevents the common trap where AI features slowly degrade because no one owns their continued improvement.
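One way to make the five stages tangible is to treat them as a design brief with a field per question, where a blank field is an unanswered design decision. This is a hypothetical sketch of my own; the class and field names are illustrative, not an established schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIFeatureBrief:
    """A hypothetical one-page brief mirroring the five framework stages."""
    decision: str                                       # 1. what are we clarifying?
    inputs: list[str] = field(default_factory=list)     # 2. decomposed context the model needs
    explanation_pattern: str = ""                       # 3. how outputs explain themselves
    human_override: str = ""                            # 4. governance: who makes the final call
    improvement_owner: str = ""                         # 5. evolution: who owns refinement

    def gaps(self) -> list[str]:
        """Return the stages still left blank -- the open design questions."""
        return [name for name, value in vars(self).items() if not value]

brief = AIFeatureBrief(
    decision="Flag customer requests that signal market shifts, not edge cases"
)
print(brief.gaps())  # stages 2-5 are still unanswered
```

Running `gaps()` in a design review makes the conversation concrete: the team either fills a field or consciously defers it, rather than discovering the omission after deployment.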
This framework turns AI from an inscrutable black box into a transparent collaborator. It also keeps teams aligned on intent, not just interface—a distinction that saves weeks of rework.
The Unfinished Nature of Intelligence
In my book Unfinished: Notes on Designing Experience in a World That Never Stops Changing, I explore the tension between our desire for permanence and the reality of continuous evolution. AI embodies this tension perfectly. Like design itself, intelligence is never finished. It’s a living system that learns, forgets, and relearns.
McCarthy understood this. He didn’t present AI as a solved problem in 1956; he presented it as a research program that would unfold over decades. That patience, that comfort with incompleteness, is something we’ve lost in our rush to ship. The best AI products I’ve seen embrace their unfinished nature. They ship with clear limitations, obvious feedback mechanisms, and visible improvement over time.
Imagination, McCarthy reminds us, is a design material as real as code or pixels. The systems that endure are those that remain adaptable, explainable, and fundamentally centered on human needs—even as the underlying technology transforms.
Five Moves You Can Make This Quarter
If you’re designing or building AI-powered products, here are five concrete actions that embody McCarthy’s principles:
Ship an explanation pattern for every AI output. Add “Why this recommendation?” to your interface. Users don’t need to understand transformer architecture, but they deserve to know why the system suggested prioritizing bug A over bug B. Clarity earns trust faster than accuracy alone.
Create a genuine correction loop. When users edit an AI-generated summary or adjust an automated decision, capture that feedback as training data. Design the interaction so corrections feel natural, not like filing a bug report. “Accept with changes” should be as easy as “Accept.”
Measure outcomes, not outputs. Stop counting how many AI suggestions users accepted. Start measuring time to clarity, rework avoided, and friction removed. These are the metrics users actually feel, and they’ll tell you whether your AI is truly helpful or just novel.
Build a versioned prompt library for your team’s top workflows. When someone crafts a prompt that consistently produces quality results, capture it. Version it. Share it. This scales quality across your team and creates a learning artifact that improves over time.
Draft a single-page ethics and limitations document. Be explicit about data boundaries, known model weaknesses, and when humans must intervene. This isn’t legal coverage—it’s design honesty. It also prevents painful conversations six months from now.
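The correction loop in the second move can be sketched in a few lines: when a user accepts an AI output with edits, store both versions and a timestamp so the delta becomes reviewable feedback later. All names here are illustrative assumptions, not a real product schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CorrectionEvent:
    """A hypothetical record of one 'Accept with changes' interaction."""
    model_output: str    # what the AI suggested
    user_edit: str       # what the user actually shipped
    timestamp: str       # when the correction happened (UTC)

LOG: list[CorrectionEvent] = []   # stands in for a real feedback store

def accept_with_changes(model_output: str, user_edit: str) -> str:
    """Capture the suggestion/edit pair as feedback, then return the final text."""
    LOG.append(CorrectionEvent(
        model_output=model_output,
        user_edit=user_edit,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return user_edit

final = accept_with_changes(
    "Approvals averaged nine days.",
    "Approvals averaged nine days in Q2, driven by two bottleneck steps.",
)
print(len(LOG))  # -> 1
```

The design point is that the capture happens inside the accept action itself, so giving feedback costs the user nothing beyond the edit they were already making.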
The Long View
Before ChatGPT could respond in seconds, John McCarthy spent years asking better questions. He died in 2011, never witnessing the explosion of generative AI that now dominates our industry. Yet his fingerprints are everywhere.
Every time we discuss designing intelligence, crafting human-machine harmony, or building systems that augment rather than replace human capability, we’re walking a path he sketched more than half a century ago. We’re standing on scaffolding he built before most of us were born.
That might be the most beautiful form of design: work that outlives its designer, infrastructure that enables futures the creator never saw, questions that remain relevant across generations of answers.
The tools McCarthy used are obsolete. The problems he identified are still here. And the methods he pioneered—clear problem definition, collaborative architecture, human-centered automation—remain as relevant as ever.
Perhaps that’s the real lesson. In a field obsessed with disruption and novelty, the most powerful contributions are those that establish enduring patterns. McCarthy didn’t just predict the future. He drafted its architecture.
Haider Ali is a Digital Experience Design Architect exploring the intersection of design, technology, and human experience. His work focuses on building AI-powered systems that augment human capability rather than replace human judgment.
For more writing on design systems and intelligent interfaces, subscribe to User First Insight or explore Black & White. Connect on LinkedIn, follow on Medium, or visit haiderali.co.