Every January, the global elite gather in Davos to declare the future of work. This year, the refrain was familiar: artificial intelligence will transform everything. Productivity will soar. Efficiency will reign. The implicit message, as always, was that the nations best positioned for this transformation are those with the capital to build the largest models and the infrastructure to deploy them at scale. What follows are my reflections on the narratives that emerged from the recent World Economic Forum gathering, and on what I believe those narratives leave out.
I find myself unconvinced. Not because the technology is not powerful—it is. And I should say plainly: the boundary between what AI can and cannot do is not fixed. It is shifting, and it is shifting fast. Tasks that required human judgment five years ago are now automated. Tasks we consider irreducibly human today may not remain so. I am not naive about this. But precisely because that boundary is moving, it becomes all the more urgent to understand what lies on the human side of it—and why. Because these conversations consistently mistake one kind of intelligence for the whole of it. They celebrate the ability to process, to predict, to optimize. And they overlook the kind of intelligence that has always mattered most in consequential decisions: the capacity to sense what is not yet visible, to read what has not been said, to act on understanding that no dataset contains.
In an economy increasingly mediated by AI, the most consequential decisions will not be made by the person with the fastest processor or the largest dataset. They will be made by the person who can hold technical fluency and human understanding in the same hand—who can read a situation the way a musician reads a room, sensing what the data does not say, what the model has not been trained to notice, what the stakeholders have not yet found the words to express. This is not intuition in the romantic sense. It is a disciplined synthesis of analysis and lived experience, of first principles and felt consequence. It is the form of intelligence that societies have always depended on to navigate complexity, and it is precisely the form of intelligence that the human-in-the-loop economy will reward.
Machines can process what is directly in front of them—what the training data has prepared them for, what the patterns suggest will come next. But the world does not move in straight lines. Problems arrive from the periphery. Meaning shifts beneath the surface. Context changes in ways that no dataset fully captures. And in those moments, what matters is not computational speed but something older, deeper, and irreducibly human: the capacity to sense what the situation demands before anyone has articulated it.
The Human-in-the-Loop Economy
This is the foundation of what I call the human-in-the-loop economy—an economic paradigm in which AI systems perform computational work, but humans remain essential to guide, correct, contextualize, and improve them. This is not a transitional phase on the way to full automation. It is, I believe, the durable structure of the economy we are building. And its most valuable currency is not data. It is judgment.
In this economy, AI proposes and humans decide. The machine generates; the person evaluates. The algorithm suggests; the community accepts, rejects, or refines. But the human role in this loop is not mechanical. It is not simply pressing “approve” or “reject.” It is the act of sensing whether the output is right—not just statistically, but morally, culturally, contextually. It is the judgment that no model can exercise on its own behalf.
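For readers who think in code, the loop described above can be sketched in a few lines. This is an illustrative toy, not a real system: every name here (the `propose` and `review` callables, the `Decision` enum, the reviewer's free-text `note`) is my own invention, chosen only to make one point concrete, that the human contribution is a judgment with context attached, not a binary button press.

```python
# Illustrative sketch only: a minimal human-in-the-loop review cycle.
# All names (propose, review, Decision, Review) are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Optional

class Decision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    REFINE = "refine"

@dataclass
class Review:
    decision: Decision
    note: str = ""  # the reviewer's context: why, not just what

def run_loop(propose: Callable[[str], str],
             review: Callable[[str], Review],
             prompt: str,
             max_rounds: int = 3) -> Optional[str]:
    """The machine proposes; the person decides. The reviewer's note,
    not a yes/no click, is what shapes the next round."""
    for _ in range(max_rounds):
        draft = propose(prompt)
        verdict = review(draft)
        if verdict.decision is Decision.ACCEPT:
            return draft
        if verdict.decision is Decision.REJECT:
            return None
        # REFINE: feed the human's contextual judgment back into the prompt
        prompt = f"{prompt}\n[reviewer note: {verdict.note}]"
    return None
```

Even in this toy, the asymmetry is visible: the generator only ever sees text, while the reviewer carries everything the essay calls judgment, the moral, cultural, and contextual sense of whether the output is right.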
Consider the domains where this pattern already prevails. In healthcare, diagnostic AI systems can analyze imaging data with remarkable accuracy, yet physicians remain responsible for interpreting results within the context of a patient’s history, values, and circumstances. The best physician is not the one who reads the scan most accurately; it is the one who senses, from the way a patient hesitates before answering, that the real problem has not been stated yet. In content moderation, automated systems flag potentially harmful material, but human reviewers make the final determination—because context, intent, and cultural meaning cannot be captured in training data alone. In education, an AI tutor can deliver instruction with precision, but the teacher who notices that a student has gone quiet, who reads the room and feels that something has shifted, who adjusts not the lesson but the tone—that teacher is exercising a form of intelligence that no algorithm can replicate.
These are not edge cases. They are the norm. And they reveal something fundamental: intelligence, the kind that actually matters in consequential decisions, is not pattern recognition. It is pattern recognition plus context plus empathy plus the accumulated weight of having lived in the world as a sensing, feeling, socially embedded human being.
Why the Global South Holds the Advantage
Here is where the conventional Davos narrative inverts. If human judgment becomes the scarce input in an AI-saturated economy—and I believe it will—then the question of who possesses that judgment, and what forms of knowledge it draws upon, becomes economically significant.
I grew up in a culture where sensing the unspoken was not a leadership competency to be trained. It was survival. It was how you navigated complex family structures, communal obligations, institutional ambiguity, and social systems that operated on implicit rather than explicit rules. Millions of people across the Global South grow up developing precisely this form of intelligence—not because they are taught it in school, but because their environments demand it.
The Global South holds something that Western AI models cannot easily replicate: cultural diversity at scale. Hundreds of languages. Thousands of local knowledge traditions. Context-specific practices in agriculture, medicine, education, and governance that have evolved over generations. These are not inefficiencies to be optimized away. In the human-in-the-loop economy, they are sources of irreplaceable value—the kind of knowledge that is not stored on servers but carried in people, shaped by generations of navigating complexity without the luxury of simplification.
When AI systems are deployed in contexts they were not designed for—and most systems are designed in a handful of wealthy nations—they require human intermediaries who understand local conditions. They require people who can translate between algorithmic output and community meaning. They require the person who can sense the texture of a room, a village, a classroom, a market, and know that the model’s recommendation, however statistically sound, will not land. That person is not a bottleneck. That person is the point.
This is not merely a matter of localization or translation. It is a matter of epistemic authority. The question is not only whether an AI system works, but whether it works for whom, according to whose values, and in service of what ends. These are questions that only humans embedded in specific communities can answer. And answering them requires not just knowledge but the kind of wisdom that comes from having lived inside a context long enough to sense its textures, its tensions, and its unspoken rules.
A New Definition of Work—and of Intelligence
The human-in-the-loop economy creates new forms of labor. Data labelers. Output reviewers. Prompt designers. AI auditors. Feedback specialists. Human verifiers. These roles exist because AI systems, despite their sophistication, cannot self-correct without human input. They cannot know when they are wrong in ways that matter. They cannot understand when they have caused harm. They cannot calibrate their outputs to shifting social norms.
But I want to push further than the labor argument. What interests me is not just that these jobs exist. It is what they reveal about intelligence itself.
We have spent decades defining smart as a function of processing speed, technical fluency, and domain expertise. IQ tests. Standardized scores. The ability to solve well-defined problems quickly. AI now matches or exceeds most humans on tasks like these. If that is what smart means, then we are already being surpassed.
But I do not think that is what smart means. Smart is the doctor who orders the test no one else thought to order—not because the data pointed there, but because something in the patient’s voice did. The teacher who changes the lesson mid-sentence because she read something in a student’s eyes. The engineer who pauses before deploying because something feels off, even though every metric says go. The community leader who knows that the technically optimal policy will fail because it ignores how people actually live. In each case, the intelligence at work is a synthesis of data, analysis, first principles, life experience, wisdom, and the felt sense of other people. It is the kind of knowing that cannot be extracted, labeled, and fed into a model—because it was never separable from the person who carries it.
That kind of intelligence—situated, embodied, empathetic, anticipatory—is what the human-in-the-loop economy runs on. And it is the one kind of intelligence that AI cannot replicate, because it requires not just information but inhabitation. You have to have been somewhere, felt something, known someone, to understand what the data leaves out.
The Risk of Removing the Human
I should be clear about what is at stake. When humans are removed from the loop entirely—when we optimize away the person whose judgment holds the system accountable—the consequences are not merely technical. Bias goes unchecked. Errors scale instantly. Accountability evaporates. Systems fail silently, and no one is positioned to notice until the damage is done.
We have already seen this pattern. Automated hiring systems that discriminate. Content moderation algorithms that suppress legitimate speech. Predictive policing tools that reinforce historical injustices. In each case, the failure was not that the AI made a mistake. The failure was that no human was positioned to sense that the system was about to produce harm before the harm arrived.
The human-in-the-loop economy is not simply more effective. It is more trustworthy. People trust systems more when they know a human is involved—not because humans are infallible, but because humans can be held accountable. Accountability requires presence. It requires judgment. It requires someone who can say: I was responsible for this decision, and I will answer for its consequences. A machine cannot say that. A machine does not feel the weight of consequence. A person does. And that weight—that embodied sense of what it means for a decision to matter—is itself a form of intelligence.
What Comes Next
The automation discourse tends toward binaries: humans versus machines, jobs lost versus jobs gained, winners versus losers. But the human-in-the-loop economy suggests a different possibility—one in which the question is not whether humans will be replaced, but where human judgment remains indispensable.
The answer, I believe, is everywhere that context matters. Everywhere that meaning is contested. Everywhere that judgment must be exercised and accountability must be borne. Everywhere that the stakes are high enough that we cannot afford to let the algorithm decide alone. Everywhere that the situation requires not just analysis but presence—the full, embodied, historically informed, emotionally attuned presence of a person who has lived enough to know what the data does not say.
In such an economy, the Global South’s diversity is not a liability to be overcome. It is a resource to be cultivated. The future of work is not humans against AI. It is humans with AI—and crucially, humans who know things that AI does not. Humans who can sense things that AI cannot. Humans whose understanding of the world was not learned from text but earned through living in it.
That capacity is worth more than we have been led to believe. It may, in fact, be the most valuable form of intelligence the 21st century has to offer.
I will end with a concession. It is possible that I am wrong—that the frontier of machine capability will eventually absorb even the forms of judgment I have described here. That contextual reasoning, cultural fluency, and embodied wisdom will one day be simulated convincingly enough to render the human in the loop redundant. I cannot prove otherwise. But I notice that every previous prediction of full automation has underestimated the same thing: the sheer density of what it means to be human in a specific place, at a specific time, among specific people. The world keeps turning out to be more textured than the models expect. And every time it does, it is a person—not a system—who notices first.
