Modern education is quietly training students for a world that no longer exists.
In classrooms across the globe, young people are learning to compete with machines on the one dimension machines are rapidly mastering: the ability to reproduce known answers quickly and flawlessly. Homework is graded for correctness, tests reward memorization under time pressure, and “success” is defined as efficient performance on standardized tasks. For most of the industrial era, that made sense.
In the age of artificial intelligence, it is a losing strategy.
Po‑Shen Loh, a Carnegie Mellon mathematician, Olympiad coach, and educational innovator, has spent the past several years on the front lines of this transition.[1][2][3][4][7] He watched as AI systems went from bungling basic fractions to solving Olympiad‑level problems and explaining them with elegant insight.[1][4] What unnerved him was not just that AI could now do sophisticated math, but that it could communicate the *ideas* behind the math—territory he once assumed was safely, and permanently, human.[1]
At the same time, he noticed another pattern: students outsourcing not just their homework, but their *thinking*, to tools like ChatGPT.[5][6] The very cognitive muscles education once aimed to strengthen are atrophying precisely when we need them most.
This, Loh argues, is the core failure of modern education in the AI era: it still acts as if the ultimate exam is a timed test, when in reality the test now is civilization itself.
When thinking becomes optional
The first place AI struck, Loh observes, was schoolwork.[5][6] Any assignment that could be completed by following a template, replicating an example, or doing standard computations became instantly automatable. Not just math worksheets, but essays, lab reports, and even project ideas can now be produced in seconds.
On paper, this looks like a revolution in efficiency. In practice, it is a crisis in *mental fitness*.
When a student lets an AI draft every essay, compute every solution, or outline every argument, they lose the repeated struggle that forges genuine understanding and originality.[3][7] The brain adapts to what it is asked to do most often. If it is asked to delegate rather than to wrestle with problems, it becomes extraordinarily good at delegating, and correspondingly bad at thinking.
Loh’s concern is not nostalgia for “doing it the hard way.” It is survival. In a world where AI can handle execution, the remaining uniquely human value resides upstream—in deciding *what* to do, *why* to do it, and *for whom*.[4][6] That requires more than creativity in the romantic sense. It requires something deeper and more demanding.
The one trait machines can’t fake
When asked what will matter most in the AI era, Loh does not begin with coding, or even creativity as it is typically taught. He points to a trait that is at once cognitive and moral: the ability to **simulate the world from other perspectives**.[3][4][6]
Humans, at their best, can hold multiple viewpoints in mind, imagine how decisions ripple through complex social systems, and anticipate how different people will feel and respond. AI can model patterns from data; it does not, in any meaningful sense, *care* what happens.
Loh argues that this capacity to “run simulations” of the human world—to empathize, reason about other minds, and foresee consequences—is both our competitive advantage and our ethical obligation.[3][4] It is how we avoid building powerful tools that harm the very people they are meant to help. It is also how we remain irreplaceable when machines are better than us at almost everything else.
This kind of simulation is not abstract. It starts with simple questions:
– If I release this product, who is most likely to benefit—and who might be hurt?
– If I phrase this message in this way, how will different audiences interpret it?
– If I design an algorithm like this, what incentives will it create in the real world?
To answer such questions well, students must learn to step outside their own heads and *live inside* other perspectives, then integrate what they see into better decisions. That is not a side effect of education; in Loh’s vision, it *is* education.
How school dulls the very skills we now need
Paradoxically, the structure of modern schooling often suppresses the exact traits Loh believes we must amplify.
Students are sorted, ranked, and rewarded for individual performance on narrowly defined tasks.[3][7] The social message is clear: stay in your lane, master the rubric, avoid mistakes. Curiosity becomes dangerous if it leads away from what will be on the test. Empathy becomes optional if collaboration looks like “cheating.”
It is not surprising, then, that many students emerge from years of schooling anxious, cynical, and disconnected. They have been trained to see learning as a private arms race rather than a collective project. They have also been given little practice in the messy, improvisational work of solving real problems with real people—precisely the work that remains after AI automates the rest.[7]
Loh sees this as a recipe not just for personal dissatisfaction, but for social fragility. A civilization that optimizes its schools for standardized test scores while AI eats standardized tasks is like a navy proudly building faster rowboats in the age of nuclear submarines.
Redefining success: from scores to civilization
What would it mean to design education around the world we are actually entering?
Loh’s answer begins with a reframing of the ultimate goal. The purpose of education, in an AI‑saturated society, is not merely to produce employable workers or impressive résumés. It is to cultivate **people capable of sustaining and improving civilization itself**.[3][4]
That sounds grandiose until you consider the stakes. AI systems are amplifiers: they magnify the intentions and competence of the people who deploy them.[3][6][9] In the hands of thoughtful, empathetic, and intellectually rigorous humans, they can accelerate progress. In the hands of shallow, impulsive, or indifferent actors, they can scale harm at unprecedented speed.
So the basic questions of education change:
– Not “Can you solve this problem we’ve shown you before?” but “Can you tackle a problem you’ve *never* seen, with incomplete information, in collaboration with others?”[2][3][7]
– Not “Can you out‑compute a machine?” but “Can you direct machines toward goals that are wise, humane, and sustainable?”[4][6]
– Not “Can you get into a good university?” but “Can you become the kind of person whose presence makes society more resilient?”
Under this lens, the metrics that dominate school brochures—acceptance rates, average scores—look strangely small.
A different kind of classroom
Loh is not content to critique; he is building alternatives.[1][2][3][7]
One of his most striking innovations is a learning ecosystem that fuses **math stars with professional actors** to teach problem solving in a live, highly interactive format reminiscent of Twitch or Instagram Live.[1][2] Instead of a lone teacher lecturing from the front of a room, students join dynamic sessions where ideas are explored collaboratively, mistakes are made publicly, and questions drive the flow.
The design is deliberate:
– **Previously unseen problems** are the norm, not the exception, forcing students to practice genuine reasoning rather than pattern recognition.[2][3][7]
– **Storytelling and performance** are built into the teaching, so that abstract ideas are anchored in vivid narratives, and students experience what it feels like to *care* about a problem.[1][2]
– **Peer instructors** (high school students skilled in math) teach younger students with live coaching from professional actors, simultaneously growing their communication skills, empathy, and leadership.[1][2]
The result is a scalable model that develops not only IQ but also EQ—intellectual sharpness paired with emotional intelligence.[1][2][3] Importantly, it targets students in middle and high school, precisely the age when habits of thinking, feeling, and relating are being set.
This is not “AI literacy” in the narrow sense of learning to prompt a chatbot. It is training in how to be the kind of human who can use AI wisely.
Networks as the new curriculum
There is another dimension to Loh’s vision that is easy to overlook but critical to understand: **no one navigates the AI era alone**.
He argues that the most valuable asset in this new world is not just personal brilliance, but membership in a **network of kind, capable, and thoughtful people**.[2][4][6] In such a network, ideas are challenged from multiple angles, blind spots are revealed, and individuals are held to higher standards than they might set for themselves.
This is an antidote to one of AI’s subtler dangers: the illusion of total perspective. When a single system can synthesize viewpoints from across the globe, it becomes tempting to treat its output as the final word. But, as Loh points out, no model, however vast, truly captures the “7.5 billion perspectives” that make up our world.[5][6]
Human networks matter because they are grounded in trust, shared experience, and the willingness to disagree in good faith. They also matter because they shape what AI is trained on and how it is used. Diverse, principled communities will push these tools toward different objectives than will isolated individuals optimizing for short‑term gain.
Thus, one job of education in the AI era is to **engineer serendipity**: to intentionally connect young people with peers and mentors who are not only smart, but generous and ethically grounded. Loh’s programs reflect this, emphasizing collaboration across ages and backgrounds rather than solitary achievement.[1][2][7]
Taste and truth in a world of infinite copies
Loh has a particularly sharp way of describing what AI quietly steals from us if we are not careful: first **taste**, then **truth**.[5]
When a model can instantly generate a hundred plausible designs, melodies, or paragraphs, it becomes trivial to produce content and increasingly difficult to discern what is *good*.[6] The risk is that our own sense of taste—our hard‑won ability to distinguish the meaningful from the merely impressive—atrophies. We start to like what the algorithm serves us simply because it is easy.
Next, as AI systems remix the world’s information at scale, they blur the line between original and derivative, signal and noise, fact and fabrication.[6][9] Without strong internal compasses, students can drown in persuasive but shallow outputs, or worse, build careers on top of misunderstandings they never discovered because the answers were always pre‑packaged.
Education, in this light, must reassert itself as a **forge for taste and a laboratory for truth**. Students need constant practice asking:
– *Is this argument coherent?*
– *Is this source trustworthy?*
– *Does this solution actually help the humans it claims to help?*
AI can assist in that inquiry, but it cannot replace the human responsibility to care whether the answer is right in any deeper sense than “statistically likely.”
Five perspectives versus 7.5 billion
Most of us grow up inside a tiny cone of experience: our family, our school, our local culture. Even when we consume media from around the world, we do so filtered through familiar lenses. Loh emphasizes that in a world of global AI, this narrowness is dangerous.[5][6]
If our serious thinking never stretches beyond a handful of perspectives, we will unconsciously build tools that work for “people like us” and fail, or harm, everyone else. The point is not that any one person must personally understand 7.5 billion lives, but that we must cultivate the **habit of expanding our frame**.
In practical terms, that means seeking out:
– People who disagree with us in principled ways.
– Problems that force us to confront realities unlike our own.
– Collaborations that cross disciplines, cultures, and comfort zones.[3][4][7]
Education can normalize this kind of cognitive stretching. Or, if it remains narrowly focused on local competitions and standardized metrics, it can make such stretching feel radical and risky. Loh’s work is a bet that we cannot afford the latter.
Destroying your best ideas
One of the more counterintuitive habits Loh teaches aspiring problem solvers is to **try to destroy their own ideas**.[5][6]
In an age when AI can instantly generate solutions, it is easy to fall in love with the first elegant answer that appears. But genuine robustness demands the opposite impulse: taking your favorite concept and attacking it from every angle, searching for flaws, blind spots, and unintended consequences.
This is not self‑sabotage; it is intellectual humility made practical. If your idea survives your best attempt to kill it, it is more likely to withstand reality.
Training students to think this way serves two purposes:
– It inoculates them against over‑trusting AI outputs, which often appear more confident than they are correct.
– It prepares them to operate in high‑stakes domains—healthcare, governance, infrastructure—where mistakes can cause real harm at scale.[3][4][9]
Schools could bake this habit into every project: not just “present your solution,” but “present the three strongest arguments against your solution, and what you would change in response.”
Building a hopeful future that pays its own way
Despite his stark warnings, Loh is not a pessimist. He insists that it is possible to stay hopeful, not by denying the magnitude of the change, but by confronting it honestly and then **building ventures that both solve real problems and sustain themselves financially**.[3][4][6]
Part of his own journey has been launching educational and technological initiatives that generate revenue while advancing his broader mission of cultivating thoughtful, empathetic problem solvers.[1][2][3] This matters: models that depend solely on philanthropy are vulnerable to fashion and fatigue. Models anchored in real value exchanged in the marketplace can endure.
For students, this suggests a new kind of ambition. Instead of aspiring only to “get a good job,” they can aim to **identify neglected human problems**, design AI‑augmented solutions, and structure those solutions so that they reward everyone involved—users, creators, and the wider society.
Such a path is not easy. It demands precisely the mix of traits Loh champions: technical fluency, deep empathy, global perspective, critical self‑questioning, and the ability to attract and work with other capable humans.[3][4][6][7] But it is a path that leads somewhere more satisfying than endlessly optimizing for rankings in systems designed for a previous century.
Rethinking what it means to be educated
The AI era does not make education obsolete; it makes shallow education indefensible.
If schools cling to their old functions—distributing information, drilling procedures, sorting students by test scores—they will increasingly be competing with systems that can do those things faster, cheaper, and at scale. Many will lose, and their students will lose with them.
If, instead, education embraces a new mandate—to cultivate humans who can *think deeply, feel widely, and act wisely* in partnership with machines—it can become more vital than ever.
In Loh’s emerging blueprint, a truly educated person in the age of AI is someone who:
– Has the **mental fitness** to tackle new, hard problems without flinching.
– Possesses the **empathic imagination** to simulate lives and worlds beyond their own.
– Participates in **networks of kind, capable people** who challenge and support them.
– Uses AI as a **tool for amplification**, not a crutch for avoidance.
– Measures success not only in personal gains, but in their contribution to the ongoing project of civilization.
That is a standard high enough to be worthy of the moment we are in.
Whether our institutions rise to meet it will depend, in no small part, on whether we listen to voices like Loh’s—and whether we are willing, as he is, to rebuild education from the ground up for a future where being human is no longer about out‑performing machines, but about out‑growing what we thought humans could be.