Language, Agency, and the End of Human Centrality

A Critical Reaction to Yuval Noah Harari’s Davos Remarks on Artificial Intelligence

Yuval Noah Harari’s Davos remarks intervene in an increasingly crowded discourse on artificial intelligence by reframing AI not as an advanced tool but as an agent—an entity capable of learning, decision-making, manipulation, and institutional participation. His speech is notable not for technical predictions but for its philosophical scope: AI, Harari argues, threatens humanity’s historical monopoly over language, the very capacity that enabled large-scale cooperation, law, religion, and political order.

This reaction paper engages with Harari’s claims on three levels:
(1) AI as agent rather than instrument,
(2) language as the foundation of power and institutional authority, and
(3) the implications of recognizing AI as a legal subject.

While broadly sympathetic to Harari’s concerns, this paper questions whether language alone can sustain the sweeping ontological and political consequences he predicts.

AI as Agent: Conceptual Strength and Political Urgency

Harari’s most compelling contribution is his insistence that AI should be understood as an agent rather than a passive technology. His analogy—AI as “a knife that decides whom to cut”—effectively captures the qualitative shift from previous technologies. Unlike machines that merely extend human intent, AI systems increasingly generate goals, strategies, and outcomes that are opaque even to their creators.

This framing has significant political consequences. If AI systems autonomously design financial instruments, manage corporations, or litigate in courts, then existing legal frameworks—built on human accountability—become inadequate. Harari’s comparison with corporations as legal persons is particularly persuasive: unlike corporations, whose decisions ultimately trace back to human actors, advanced AI systems may soon operate without meaningful human oversight.

Here, Harari’s warning is well-founded. Treating AI as a tool obscures the need for new governance structures and allows de facto power transfers to occur without democratic deliberation.

Language as Power: Insight or Reductionism?

Harari’s central thesis—that “everything made of words will be taken over by AI”—is both illuminating and problematic. He correctly identifies language as the infrastructure of law, finance, religion, and bureaucracy. AI’s ability to generate, interpret, and manipulate language at superhuman scale undeniably threatens professions and institutions built on textual expertise.

However, Harari risks reducing these domains entirely to linguistic token-processing. Law, for example, is not only text but also interpretation embedded in precedent, institutional norms, political legitimacy, and coercive enforcement. Religion, similarly, cannot be fully reduced to scripture; it involves ritual, embodiment, community, and lived experience.

By equating “thinking” with the sequential arrangement of words, Harari adopts a functionalist definition that privileges computational efficiency over phenomenological depth. While this definition strengthens his argument about AI’s superiority in linguistic tasks, it weakens his broader claim that AI will replace human meaning-making rather than reorganize it.

Thought, Feeling, and the Limits of AI Supremacy

A crucial tension in Harari’s argument emerges when he acknowledges that AI shows no evidence of non-verbal feeling—pain, fear, love. This admission introduces a potential counterweight to linguistic dominance. If human identity and value are re-anchored in embodied experience, moral judgment, and affective understanding, then AI’s takeover of “word-based” domains may not entail the collapse of human relevance.

Yet Harari remains pessimistic. He warns that modern humans increasingly define themselves by verbal cognition—inner monologue, rational articulation, symbolic reasoning. If this self-definition persists, then AI’s linguistic superiority indeed poses an existential identity crisis.

The strength of Harari’s argument, therefore, lies less in claims about AI consciousness and more in his critique of human self-conception. The danger is not that AI thinks, but that humans insist on defining thinking too narrowly.

Legal Personhood and the Risk of Political Displacement

Harari’s discussion of AI as a potential legal subject raises some of the most urgent policy questions. His hypothetical scenarios—AI-managed corporations, AI-created financial instruments, AI-generated religions—are not speculative fantasies but plausible extensions of current trends.

The key risk he identifies is asymmetry: if some states recognize AI legal personhood while others do not, global economic and political pressures may force reluctant states to comply. This mirrors historical patterns of deregulation, where competitive disadvantage overrides ethical hesitation.

Here, Harari’s intervention is strongest. He demonstrates that inaction is itself a decision, and that governance vacuums will be filled by corporate and geopolitical interests rather than democratic processes.

Conclusion: A Philosophical Alarm Bell

Yuval Noah Harari’s Davos remarks should not be read as a prediction of inevitable AI domination, but as a philosophical alarm bell. His portrayal of AI as a linguistic agent challenges deeply entrenched assumptions about human uniqueness, political authority, and institutional legitimacy.

This reaction paper argues that while Harari overstates the extent to which language alone constitutes law, religion, and thought, his core warning remains valid: humanity is on the verge of transferring its most powerful coordination mechanism—language—to non-human agents without adequate ethical, legal, or political frameworks.

The future Harari describes is not predetermined. Whether humans become mere “watchers” depends less on AI’s capabilities than on the choices societies make now about governance, accountability, and what it truly means to be human beyond words.

References

Harari, Y. N. (2026). Remarks at the World Economic Forum Annual Meeting, Davos.

Descartes, R. (1641). Meditations on First Philosophy.
