by Alexandros Schismenos
This year will prove significant for Artificial Intelligence for many reasons. A political reason is the midterm congressional elections in the USA, which will determine, among other things, the future of federal and state regulations on AI applications and the conditions for financial speculation and investment in AI companies. The midterm results may be the trigger that bursts the AI financial bubble, but that depends on the outcome and its potential shockwaves for the politics of the US, the country that currently dominates the AI race.
A secondary, symbolic reason is the fifth anniversary of UNESCO’s Recommendation on the ethics of AI, adopted by the organization’s 194 member states on November 23rd, 2021. How relevant is this document today, given our five-year experience with AI? And how influential has the official adoption of the Recommendation been in shaping AI’s current trajectory?
Of course, these questions cannot be answered definitively given the accelerating pace of the race for AI dominance. However, a brief reflection on the state of the 2021 UNESCO goals in comparison with the empirical realities of early 2026 may help demonstrate why such questions are worth asking.
To make the comparison clear, I will utilize the conceptual framework of my book “Artificial Intelligence and Barbarism” [Athens School, 2025] as a resource for understanding the “mythinformation” and political control mechanisms that underlie seemingly neutral technological advancements.
I adopt the concept of “mythinformation” from Langdon Winner, who coined the term in 1984 to describe the almost religious belief that the widespread adoption of computers and increased access to information will automatically lead to a better, more democratic world. I think the scope of the concept should be broadened to include the forty-year experience we have gained since.
In this historical context, mythinformation is the ideology that equates the expansion of digital information with the expansion of truth, freedom, and social progress. It is the belief that more data produces more knowledge, more connectivity produces more democracy, and more information access produces more autonomy. Moreover, mythinformation transforms technological infrastructures into cultural myths, concealing the power relations, biases, and economic interests embedded in digital systems, and preventing critical reflection on the limits of information-centric thinking.
On that note, we should remember that the UNESCO General Conference’s adoption of the Recommendation was intended to establish global consensus on the ethical governance of AI, grounded in international law and focused on human dignity. It is an anthropocentric document that underscores the potential impact of AI:
“Guided by the purposes and principles of the Charter of the United Nations”, it recognizes the multilevel risks of AI technology, “on societies, environment, ecosystems, and human lives, including the human mind, in part because of the new ways in which its use influences human thinking, interaction, and decision-making.”
The Recommendation was designed as a framework for regulating policies to prevent AI’s catastrophic impact on the social and natural environments. Conceived as a proactive political and legal tool to address a multifaceted problem, the Recommendation defines AI systems by their capacity to mimic intelligent human behavior, including reasoning, learning, and planning.
However, in the five years since, a lot has changed. The most direct challenge to UNESCO’s Recommendation comes from its member states themselves, whose governments have started the AI dominance race. The Atlantic described it in 2026 as a high-stakes, $500B+ competition primarily between the US and China, with Big Tech (Amazon, Alphabet, Meta, Microsoft) projected to spend over $650B on infrastructure. The Recommendation’s voluntary character guaranteed as much. The AI dominance race is a direct effect of systemic technophilia, a dominant socio-political force toward digital barbarism and the delegation of human autonomy to algorithmic governance, exemplified by the Presidency of Donald Trump and the ascension of Big Tech figures like Elon Musk to governmental power.
Let’s look briefly at the policies of the Recommendation. The document designates eleven areas of policy action, but I will comment on only some of them. [1]
The first and second policy areas, “ethical impact assessment” and “ethical governance and stewardship,” will be discussed later.
The third policy area, “data policy,” has already been compromised by Big Data, by breaches of data privacy in large-scale LLM training, and by the commodification of human intellectual property as raw material for Generative AI. The case of Miyazaki’s Studio Ghibli is exemplary: its data was sold by the Japanese government to OpenAI, igniting a global trend of Ghibli-style memes in 2025 despite the creator’s explicit objections.
Policy area number four, “development and international cooperation,” seems like a joke in our era of the AI dominance race between China, the USA, Russia, and, lately, the EU. UNESCO’s Principle of Fairness and Non-discrimination, alongside Policy Area 4, emphasizes that the benefits of AI must be shared equitably, with particular attention to low- and middle-income countries (LMICs). However, the UNDP 2025 report, “The Next Great Divergence,” warns that AI is sparking a new era of inequality. [2]
This informational inequality further deepens the power gap between the “Global North” and the “Global South”, reintroducing the exploitation structures of colonialism on another level, where private data becomes raw material for machine training, while human labor becomes devalued, and the local environment is devastated by mining and drilling.
Microsoft’s AI diffusion report, “Global AI Adoption in 2025—A Widening Digital Divide,” concludes that AI adoption in the Global North is growing nearly twice as fast as in the Global South, widening the usage gap from 9.8% to 10.6% between late 2024 and 2025. The IMF further warns that growth impacts in advanced economies could be more than double those in low-income countries, effectively eroding the labor advantages that once underpinned convergence. [3]
The most significant divergence between the 2021 UNESCO goals and the 2026 realities lies in Policy Area 5: Environment and Ecosystems, as expected.
The Recommendation mandates that AI actors reduce carbon footprints and prevent the unsustainable exploitation of natural resources. However, empirical data from 2025 and 2026 show an environmental cost that is rapidly escalating beyond sustainable limits.
The Cornell University study on the US data center boom provides a state-by-state look at the toll, projecting that by 2030, AI growth will add 24 to 44 million metric tons of CO2 to the atmosphere annually. The water use associated with cooling AI-focused data centers now exceeds global demand for bottled water, reaching an estimated 765 billion liters in 2025. [4]
This contradicts UNESCO’s goal of “Environmental and Ecosystem Flourishing” as an existential necessity for humanity and poses a significant threat to both society and nature. This means that the current trajectory of AI power dynamics poses a dual threat, both environmental and cultural.
The seventh policy area of UNESCO’s Recommendation focuses on culture and the values of diversity and inclusiveness, which are now being profoundly tested by the homogenization effects of large language models, as UNESCO itself warned recently.
In 2025, UNESCO’s CULTAI expert group report for MONDIACULT 2025 identifies “algorithmic homogenization” and the “outpacing of governance” as core threats to cultural pluralism. Linguistic diversity is a primary point of friction. Currently, fewer than 5% of the world’s languages feed the datasets of frontier AI systems, meaning the vast majority of linguistic worldviews are excluded from the platforms that increasingly structure global knowledge. [3] It seems that AI apps act as aggressive Anglicization machines that threaten the very cultural diversity of humanity. We should add to that the exploitation of cultural work without consent or compensation, which also affects individual privacy and collective memory.
The cultural impact of AI is not only felt across societies from the centers of power to the periphery, but also within society, from above to below.
The UNESCO goals for education in 2021, as explained in the eighth policy area of the Recommendation, focus on enhancing pedagogical integrity and ensuring that AI empowers rather than replaces teachers. In 2025, a UNESCO report found that classrooms have become spaces for “AI experimentation,” frequently without independent evidence of educational effectiveness. [5]
This poses a potential danger of individuals internalizing algorithmic norms. People begin to think, act, and perceive themselves through the logic of digital systems. Examples include optimizing one’s life like a dataset, measuring self-worth by metrics, and adopting algorithmic categories as personal identities. It is the psychological dimension of digital barbarism—the point where external systems become internal habits.
Digital barbarism names the condition in which technologically advanced societies regress in their capacity for judgment, autonomy, and critical thought. It is not a return to chaos, but a new form of domination produced by algorithmic rationality itself.
Of course, this is the opposite direction of the Recommendation’s proclaimed principles which are firmly grounded in digital humanism:
“13. The inviolable and inherent dignity of every human constitutes the foundation for the universal, indivisible, inalienable, interdependent and interrelated system of fundamental rights and freedoms.”
There seems to be an antithesis of values between UNESCO’s Recommendation and the actual objectives of the large companies that run AI applications, specifically LLMs. The necessary infrastructure to support LLMs relies on vast data centers and a capitalist business model that is extractive, centralized, resource-intensive, and oligarchic. On the social level, it introduces a new, aggressive form of financial exploitation of both communal and natural environments. On the political level, it promotes autocratic and oligarchic forms of governance that facilitate and reproduce capital flows toward the technocratic elites who provide AI. We see that these trends are dominating the politics of the Western world as we witness the devaluation not only of UNESCO’s adopted Recommendation but of the UN as such.
Moreover, the Recommendation failed to curb, or even slow, the spread of mythinformation about the “messianic” properties of AI, promoted by those who stand to benefit most: the CEOs of multibillion-dollar AI enterprises seeking financial investment. In January 2025, Sam Altman of OpenAI claimed that “we are now confident we know how to build AGI.” This is a perfect example of mythinformation. Later that year, on August 7th, the release of GPT-5 was widely deemed a failure, proving that we are still nowhere near true AGI. On January 21st, 2026, Sir Demis Hassabis, CEO of Google DeepMind, admitted on a CNBC podcast that current LLMs are excellent at pattern recognition but fail to grasp causality.
Nevertheless, the Recommendation is also challenged by technological developments in AI themselves. Late 2025 saw the transition from reactive Generative AI to agentic AI: autonomous systems built on LLMs that are capable of setting independent goals, executing multi-step plans, self-correcting, and revising their plans accordingly, with no human supervision. These proactive AI models are goal-oriented and interact with their environment in a perceptive and active manner. They are designed to maximize autonomous functioning and minimize human oversight. As such, they stand by design in opposition to UNESCO’s Recommendation on the necessity of human oversight:
“35. Member states should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities. Human oversight refers thus not only to individual human oversight, but to inclusive public oversight, as appropriate.”
While the Recommendation covers stages from research to disassembly, the non-linear nature of 2026 AI development, in which open-weight models like DeepSeek-R1 are fine-tuned across borders and deployed as decentralized agents, complicates the attribution of ethical and legal responsibility. Agentic AI marks a significant step in the process Luciano Floridi has called the decoupling of Agency and Intelligence, a trend reinforced by the decoupling of Agency and Responsibility.
As AI agents increasingly operate in “blended teams” alongside humans, the Recommendation’s insistence on “final human determination” for life-and-death decisions faces technical friction. By 2026, there are projections that 40% of enterprise applications will embed AI agents, up from less than 5% in 2025, suggesting that human oversight is being architecturally refactored into “supervised autonomy” rather than direct intervention. [6]
This raises the prospect of replacing democratic deliberation with algorithmic governance. Algorithmic governance is the delegation of social, economic, and political decisions to automated systems. It includes predictive policing, algorithmic credit scoring, automated hiring, content moderation, and behavioral nudging. This form of governance is characterized by opacity, power asymmetry, and the displacement of public deliberation by technical procedures.
But before we become alarmists or succumb to the popular trend of adversarial technophobia, we should maintain our technoskeptic stance, focusing on the realities of our time rather than dystopian projections.
Technoskepticism is a critical stance toward technology that rejects both naïve technophilia and reactionary technophobia. It insists that technology is never neutral, that digital systems embody political and economic interests and that philosophical critique is necessary for democratic control of innovation.
We should always keep in mind that behind AI technologies lie old-fashioned power dynamics, that is, the social-historical field of interaction, where collective activity can turn the tide. There is an opposition of values and principles between UNESCO’s Recommendation and global politics, but UNESCO is far from being an anti-systemic organization. It is part of the same global governance institutions that, five years ago, officially adopted this Recommendation along with its values and principles and which have now turned to technocratic autocracy.
Does this sign of state hypocrisy point to a balance of power that can still be reversed?
Our critique advocates for a democratic digital humanism that entails a political critique of techno-capitalist networks, democratization of control over digital information flows, social regulation of AI technology, deepening of the radical political project of social autonomy, recreation of free public time and space, and a reevaluation of the individual as a citizen rather than a user.
Notes:
[1] UNESCO, Recommendation on the ethics of AI, adopted on November 23rd, 2021.
[2] UNDP, The Next Great Divergence, 2025.
[3] https://www.csis.org/analysis/divide-delivery-how-ai-can-serve-global-south
[4] https://news.cornell.edu/stories/2025/11/roadmap-shows-environmental-impact-ai-data-center-boom
[5] https://www.unesco.org/en/articles/ai-and-futures-education
[6] https://machinelearningmastery.com/7-agentic-ai-trends-to-watch-in-2026/