IA, AI, O
Part II of The Root of It: The Intelligence Continuum
This is the second piece in our The Root of It series: ETPEO (pronounced like C3PO) – Everyone Talking Past Each Other (… Again).
In our first article, we explored how excessive labels cloud our understanding of capitalism. But when you dig past the surface down into the foundation, the complexity fades and the simplicity comes through:
We didn’t break capitalism – we broke the connection between value creation (labor) and asset ownership (capital), between those who do the work (labor) and those who reap the benefit (capital).
Now, we turn to technology – specifically, the “AI” debate that has everyone from practitioners to policymakers talking past each other.
The problem isn’t the technology. It’s the language. Everyone uses the same two letters while meaning wildly different things. Layer on the infinite acronyms — AGI, ASI, API — and we’re all sure to be thoroughly confused.
The goal of this series is to strip away complexity and connotation and go back to simplicity and first principles – describing things in a direct way to make sure everyone is talking about the same thing. Rather than promoting polarity, this process helps streamline constructive discourse – to help accelerate building a better future for everyone.
Let’s get to the root of it…
The Label Problem
“AI” is over-used and under-understood.
A recent example: Andrej Karpathy and Rich Sutton on the Dwarkesh podcast. Both discussing AI. Perceived as disagreement. Classic ETPEO. Listen closely and you hear something else: two people describing different things with the same “word.”
Karpathy takes a practical, near-term view. He talks about how to use current systems to augment what humans do today. These systems are not ready to fully operate by themselves. They require human judgment for anything creative or ambiguous. They are powerful tools in the world as it exists today – limited by energy efficiency and foundational architecture, but practical and useful.
Sutton takes a purist, long-term view. When he says “AI,” he effectively means: a system that can fully replace human cognitive work across many domains. The kind that replaces humans. The kind that learns from experience the way nature does. In that frame, he discounts today’s systems as not “real” AI. He argues that to get there, we likely need a fundamentally different approach and likely a different physical substrate – something that mirrors how nature computes and learns.
From the outside, it looks like disagreement. In reality, it is ETPEO via labels. They are not arguing about the same object.
Karpathy sees an incredibly powerful tool today that still has a lot of room for optimization and improvement, while Sutton points toward nature-like computation and embodied learning as the future path. In the short run, this is impractical for production use – it does not exist. In the long run, it may be the path toward ‘real’ AI.
This isn’t just semantic hairsplitting. The confusion shapes investment decisions, policy debates, workforce planning, and research priorities. Companies race toward “AGI” without agreeing what they’re racing toward. Workers fear “AI” will take their jobs without understanding which AI, which jobs, and when.
Everyone’s talking. No one’s communicating.
The Continuum
CrowdCent’s view is simple: both are right… about different parts of the same continuum.
Karpathy is right about what we have now, how to use it, and how to improve it. Sutton is likely right about what “true AI” would require and why today’s systems don’t qualify. The confusion comes from compressing this entire spectrum into one overloaded label: AI.
So instead of a binary (“we have AI” / “we don’t have AI”), think of a continuum:
On one end: IA – Intelligence Augmentation
Pattern recognition with encyclopedic memory, trained on human-created and machine-generated data, running on binary hardware. Extremely good at well-defined tasks, poor at genuine novelty. A powerful tool for humans-in-the-loop.
On the other end: AI – Artificial Intelligence
Systems that learn, adapt, and reason in a way closer to how nature does it. Less memorization of the past, more construction of new internal models of the world. Likely needing new architectures and substrates. A powerful system potentially replacing humans.
Today’s frontier models are firmly on the IA side of this continuum, no matter what the marketing says. There are many practical ways to keep moving toward AI (from IA) – in particular, Omar Khattab’s suggestions (discussed below) – but a true jump likely requires a nonlinear paradigm shift for the whole stack (substrate to models).
Let’s dig into where we are today…
IA: Intelligence Augmentation
Today, we have IA.
Current IA systems excel at:
Well-defined, narrow tasks with well-documented and clear answers
Examples: math, repetitive coding, summarization, classification, and pattern recognition in large datasets
This has immediate labor implications:
The most “at risk” jobs are entry-level roles (where judgment is shallow and process-heavy) and codified roles (where work is documented, rules-based, and outcome-defined).
Anywhere the job is “follow the playbook” rather than “write the playbook,” IA will compress the value of pure execution.
But IA is bad at creativity and open-ended, ill-defined problems. It struggles when the domain lacks clear rules or an agreed-upon notion of a “right” answer. It repackages, blends, and interpolates. It does not ‘create’ in the sense humans mean when they talk about “truly new.”
Even what looks creative – songs, images, essays – is usually a recombination of known motifs, tuned to what past data suggests people will like. Algorithms optimized for virality, not quality. That can be commercially useful (and also dangerous). But it is not the same as an agent forming its own goals, models, and abstractions.
IA is an incredible tool we must learn how to wield – opening up massive opportunities for the young and curious. But it requires adaptability and thinking. The world we live in today is not like the past – whole institutions, from higher education to central banking, need to be re-imagined and re-built. The point of this piece is not to argue whether it is good or bad, but rather to point out what it is versus what it isn’t – to get to the essence and build better.
Under the hood, there are some clear fundamental limitations.
1. Data and Model Design
Frontier models today are mostly trained on raw internet data, much of which is low-quality, redundant, wrong, biased and/or adversarial. That means:
Huge capacity is spent memorizing garbage, not learning general principles.
The system’s behavior inherits all the biases, inconsistencies, and noise of that data.
Improvement here is straightforward in concept (a toy sketch follows this list):
Filter and build higher-quality, curated, unbiased training data.
Fine-tune on aligned, high-signal data.
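To make the filtering idea concrete, here is a deliberately crude sketch (our own toy illustration, not any lab’s actual pipeline): exact-duplicate removal plus a simple repetition heuristic, applied before training data is assembled.

```python
# Toy illustration of pre-training data filtering (hypothetical heuristics,
# not a production pipeline): drop exact duplicates and highly repetitive text.
import hashlib

def quality_score(text: str) -> float:
    """Crude proxy for quality: share of unique words (penalizes repetition)."""
    words = text.split()
    if len(words) < 5:          # too short to judge
        return 0.0
    return len(set(words)) / len(words)

def filter_corpus(docs: list[str], min_score: float = 0.5) -> list[str]:
    seen, kept = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()   # exact dedup
        if digest in seen:
            continue
        seen.add(digest)
        if quality_score(doc) >= min_score:                 # quality gate
            kept.append(doc)
    return kept

corpus = [
    "buy now buy now buy now " * 50,                        # spammy, repetitive
    "Gradient descent updates parameters in the direction that reduces loss.",
]
print(filter_corpus(corpus))   # keeps only the substantive sentence
```

Real pipelines use far more sophisticated signals (classifiers, near-duplicate detection, provenance), but the compounding effect is the same: less garbage in, less capacity wasted.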
Quality data compounds; filtering needs to happen before training. Higher-quality foundation data means less wasted capacity, more robustness, and more ‘space’ for reasoning – which leads nicely into the next area.
2. Memory vs. Thinking
Today’s models are memory-heavy, thinking-light. They behave like someone who has memorized an enormous library but has limited working memory and open space to reason.
This is partially because of the training data decisions noted above – throwing more data (ignoring quality) and more compute at the problem has been the path thus far: a brute-force scaling method.
To push further on IA without changing the substrate (i.e., current path), you want:
Less raw memorization of noisy, low-quality data.
More effective “DRAM for thought”: space and time to run internal computations.
Training regimes that reward reasoning per step along the path, not just prediction accuracy on static text.
In other words: fewer, better parameters doing more actual work. At first, the system is overly complex – you don’t know what you don’t need. Over time, as with any good design (e.g., successive Raptor engine designs), you simplify – removing the unnecessary – while improving performance.
Further, the optimization process itself deserves more thought. Many systems are optimized for virality, not quality. Relatedly, most systems appear to be focused on token volume – not quality – as an output.
Today’s models are implicitly optimized for volume (tokens generated, prompts answered, clicks captured).
They should instead be optimized for quality per token – how much useful, accurate, novel “thinking” is done for each unit of compute and time.
Optimizing for volume and virality over quality and accuracy is dangerous – it exacerbates polarity, while ‘hiding’ the balanced majority views.
That shifts incentives (a toy scoring sketch follows this list):
Constrain time and/or tokens.
Contemplate both energy and money efficiency.
Reward depth, quality, and correctness under constraint.
Penalize cheap verbosity and mind-numbing addictiveness.
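As a toy sketch of what “quality per token” could mean in practice (the function name, weights, and budget below are entirely illustrative, not a real training objective):

```python
# Hypothetical reward sketch: favor correct, novel answers per token spent,
# and penalize output that runs past a token budget. All weights illustrative.
def quality_per_token(correct: bool, novelty: float, tokens_used: int,
                      token_budget: int = 256) -> float:
    if tokens_used <= 0:
        return 0.0
    base = (1.0 if correct else 0.0) + 0.5 * novelty                  # depth + accuracy
    overrun_penalty = 0.002 * max(0, tokens_used - token_budget)      # verbosity cost
    return base / tokens_used - overrun_penalty

# A short correct answer outscores a long-winded correct one.
print(quality_per_token(correct=True, novelty=0.2, tokens_used=80))   # ~0.0138
print(quality_per_token(correct=True, novelty=0.2, tokens_used=900))  # negative
```

Under a scoring rule like this, verbosity is no longer free – which is exactly the incentive shift the list above describes.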
Karpathy essentially lays out all of the above as practical steps toward improving current systems on their current paths (while also contemplating the future). There are levers for improvement in each area – from the infrastructure, to the data, to the algorithms. But none of these linear optimizations are likely to lead to non-linear (paradigm-shifting) advancements – like going from what we had 5 years ago to what we have today.
Even with those improvements, this entire regime – binary chips, static-data training, pattern recognition at scale – is still IA, not AI. Mechanistic, fundamentally limited by its foundational substrate and architecture — we’re building better tools, not unified consciousness. It augments humans; it does not wholly replace humans.
AI: Artificial Intelligence
Meanwhile, many claim to be racing toward AGI (what we call “AI”) – framing it as if scaling up IA will eventually “cross a line” into AI.
From a first-principles lens, that is unlikely. You do not get to a fundamentally different kind of system – a new paradigm – just by making the same kind of system larger or better optimized (i.e., brute force). Ilya Sutskever — another recent Dwarkesh guest — essentially agrees on this point as well.
Sutton’s argument is that to get true AI, you need a different approach. Something closer to how nature learns:
Continuous learning from interaction and observation, not periodic retraining on static dumps of text and images.
Systems that build world models, not just token predictions.
Algorithms designed to be adaptable and resilient in pursuit of intelligence, not just good at compressing patterns in historical data.
Yann LeCun makes a related point: you want embodied or environment-grounded systems that can observe, act, and update their internal models, rather than spinning on offline static archives. Today’s models are trapped in what he calls the “symbol cage” – powerful text databases without world understanding.
And underneath all of this, there is a substrate question:
Binary semiconductors and today’s architectures are excellent at matrix multiplications and static pattern learning.
Genuinely general, energy-efficient, nature-mirroring intelligence likely requires new hardware and physical principles – mimicking the incredibly energy efficient human brain.
This is the nonlinear jump: from IA that scales linearly with more data and compute, to AI that behaves like a fundamentally different kind of system. Building something like a human with depth of consciousness is a herculean task and unlikely to manifest from our current approach.
The IA Progression – Practical Next Steps
Between the current reality of IA and the future dream of AI, there is a bridge. A practical progression for the next several years.
Omar Khattab calls it API – Artificial Programmable Intelligence. We simply call it moving along the continuum of IA toward AI (or maybe Programmable IA).
The thesis is simple: AGI is the wrong goal right now. We don’t need vague, autonomous general intelligence. It’s not practical; it’s vanity. We need reliable systems we can actually direct and control. The bottleneck isn’t that models aren’t smart enough – it’s that we can’t tell them what we want; we can’t effectively communicate.
Natural language prompts are a terrible interface for engineering. They’re brittle. You change one word, the whole system breaks. It’s “guess and check,” not engineering.
The solution isn’t better prompting; it’s better programming. Khattab’s framework, DSPy, moves us from “prompting” to “compiling.”
You write code that defines the intent – what you want the system to do. “Is this spam?” “Summarize this legal brief.” You define the input and the output. Then, an optimizer (a compiler for AI) figures out the perfect prompt to get that result. It treats the AI system like a machine learning problem to be optimized, not a chatbot to be coerced.
This solves the “intent expression” problem. It lets you be precise where you need to be (code) and fuzzy where you want to be (language). It turns “magic strings” into modular, maintainable software.
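To make that concrete, here is a minimal sketch of the pattern using DSPy (the model name, example text, and training examples are illustrative; check DSPy’s documentation for current APIs):

```python
import dspy

# Illustrative model choice; any LM supported by DSPy works here.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class SpamCheck(dspy.Signature):
    """Decide whether an email is spam."""
    email: str = dspy.InputField()
    is_spam: bool = dspy.OutputField()

classify = dspy.Predict(SpamCheck)          # intent defined in code, not a prompt
result = classify(email="You won a FREE cruise! Reply with your card number.")
print(result.is_spam)

# The "compiling" step: an optimizer searches for prompts/demos that maximize
# a metric on labeled examples, instead of a human hand-tuning magic strings.
# trainset = [dspy.Example(email=..., is_spam=...).with_inputs("email"), ...]
# optimizer = dspy.BootstrapFewShot(
#     metric=lambda gold, pred, trace=None: gold.is_spam == pred.is_spam)
# compiled_classify = optimizer.compile(classify, trainset=trainset)
```

The important part is the division of labor: the programmer specifies inputs, outputs, and a metric; the optimizer handles the brittle prompt details.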
Purpose-built end-to-end systems with intentional design are absolutely feasible in our current paradigm. You ‘simply’ need the right data and the right tools.
This is what the next several years have in store. This is what CrowdCent is building on right now. Not waiting for a super-brain to wake up, but building on programming languages that let us reliably direct the intelligence we already have.
The Opportunity Ahead
Put it together and the picture is clear:
IA today is transformative. It will reshape workflows, compress certain job categories while opening up others, and unlock real value across many industries.
But it is not strung-together consciousness. It is not general, interconnected intelligence. It is constrained by how it is built – from the hardware to the data to the algorithms.
The next major nonlinear move – toward something that looks like true AI – likely requires mirroring nature, embodied learning, new substrates, and incentives centered on quality of thought, not volume of output.
So the right mental model is not “AI: yes or no,” but:
IA →→→ (nonlinear jump) →→→ AI
And it is also important to note that these paths are not mutually exclusive. They can and will happen in parallel.
In the pursuit of AI, we continue to move along the continuum – pursuing paths until another, better one reveals itself. That is the art of discovery.
In the meantime, the opportunity is enormous on the IA side:
For builders, IA is the most powerful tooling layer ever created.
For society, it is a forcing function: anything that is purely rote, process-based, and documented will be automated or commoditized.
For individuals, it raises the bar on what “entry level” even means.
The way to avoid ETPEO is to be explicit about where on the continuum any claim is being made:
If someone says “AI will take all the jobs,” clarify whether they’re talking about today’s IA or tomorrow’s potential AI.
If someone dismisses current systems as “just autocomplete,” clarify that even autocomplete at global scale can be economically transformative.
If someone claims AGI is “almost here,” ask: on what substrate, with what learning mechanism, and what incentive structure?
Once you remove the labels and go back to first principles, the confusion drops away. What remains is a simple picture:
IA is here and already changing everything.
AI will probably require us to mirror nature and is a lot further off than many think.
And the underlying intention of the technology is critical.
Do we really want technology that completely replaces humans in every way?
We need to have open, constructive discourse around topics like this, rather than blaming and arguing past one another. And to do that, first we must all be on the same page.
Let’s get to the root of it and stop arguing past each other.
Time to build a better future for everyone.
A preview for our 3rd piece in The Root of It. What is a key missing element of the AI computing paradigm? The network. Which, as Ben Horowitz nicely lays out, is blockchain… our next topic.
Want to be part of building the next generation of capitalism using cutting edge, purpose built technology? Join the CrowdCent Community and the CrowdCent Challenge to help create investment systems where merit determines rewards, not legacy position.


