by Alexandros Schismenos
On January 14, 2026, the World Economic Forum issued its Global Risks Report 2026, a comprehensive official analysis of the potential risks facing the world in 2026 and beyond. This influential document was published ahead of the WEF Annual Meeting 2026, convened under the theme “Spirit of Dialogue” and held from January 19 to 23 in Davos, Switzerland. According to its editors:
“The Global Risks Report 2026, the 21st edition of this annual report, marks the second half of a turbulent decade. The report analyses global risks through three timeframes to support decision-makers in balancing current crises and longer-term priorities.”
Among the numerous potential dangers lurking in the future, the document places special emphasis on the expansion of Artificial Intelligence, and for good reason. The findings of the GRR 2026 indicate the rise of digital barbarism, which I have defined, in the context of my analysis of Digital Reason, as the systemic erosion of social meaning and temporal autonomy caused by the dominance of algorithmic rationality. It manifests when computational systems reorganize the conditions under which individuals and collectives interpret, act, and imagine.
To summarize the argument of my recent book “Artificial Intelligence and Barbarism: A Critique of Digital Reason”, AI must be understood not as an autonomous agent but as an expression of the dominant imaginary significations of contemporary society. The digital revolution is not merely technical; it is an ontological transformation that reshapes meaning, subjectivity, and social time.
But this transformation has reached new heights after the rapid digitalization of the pandemic years, the public release of ChatGPT in November 2022, and the fast and vast proliferation of AI applications across all domains of social life.
The World Economic Forum’s GRR 2026 acknowledges that:
“AI has shifted from a frontier technology to a systemic force shaping economies, societies, and security. The global market size for AI is projected to rise from an estimated $280 billion in 2024 to $3.5 trillion by 2033.” – GRR2026: 60.
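Taking the report’s figures at face value, the implied compound annual growth rate over the nine years from 2024 to 2033 can be worked out directly (a back-of-the-envelope calculation of mine, not a figure from the report itself):

$$\left(\frac{\$3.5\ \text{trillion}}{\$280\ \text{billion}}\right)^{1/9} = 12.5^{1/9} \approx 1.32$$

In other words, the projection presumes roughly 32% growth every year for nine consecutive years – a useful benchmark to keep in mind for the bubble concerns discussed below.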
It seems that we have entered a new phase of the digital ontological revolution: Web 4.0, the “intelligent” or “symbiotic” web, in which AI comes onto the scene as an actor imitating and regenerating human communication. AI systems have permeated communication, labor, education, and governance, while public discourse has polarized into technophilic celebrations of progress and technophobic fears of collapse.
Both trends rely on hyperbolic narratives that obscure the deeper social‑historical dynamics at work. They emerge from an imaginary dominated by instrumental rationality, efficiency, optimization, and quantification. It is the capitalist social imaginary stemming from the Cartesian imperative to “render ourselves the lords and possessors of nature.” [Descartes 1637]
Uncritical technophilia is a core imaginary signification of capitalist modernity and, as such, was shared by some of the most critical adversaries of industrial capitalism, like Saint-Simon, Fourier, and, most importantly, Karl Marx. Castoriadis has pointed out that one of the conservative elements of Marxian thought is the acceptance of technology as a force of progress, and the tendency to “reduce production, human activity mediated by instruments and objects, labor, to ‘productive forces’, that is to say, ultimately, to technique.” [The Imaginary Institution of Society, MIT Press, 1987: 19]
The digital revolution is thus a transformation of the symbolic field through which societies interpret themselves. AI becomes the privileged expression of this imaginary: a mechanism for prediction, control, and the automation of judgment, as the spearhead of capitalism’s drive toward the goal of total mastery of nature, both inanimate and human, by means of digitization. Digital barbarism cannot be understood without reference to these imaginary significations.
The horizon of digital barbarism is not chaos but hyper‑order: the submission of social life to algorithmic governance based on automated parameters detached from human meaning.
The rise of mythinformation
The GRR 2026 is an official statement that underscores the dangers of digital barbarism and justifies our caution, beginning from the Introduction:
“Misinformation and disinformation and Cyber insecurity ranked #2 and #6, respectively, on the two-year outlook. Adverse outcomes of AI are the risk with the largest rise in ranking over time, moving from #30 on the two-year outlook to #5 on the 10-year outlook.” – GRR 2026
Misinformation and disinformation in cyberspace are the results of what Langdon Winner called “mythinformation,” namely, “the almost religious conviction that a widespread adoption of computers and communications systems and broad access to electronic information will automatically produce a better world for humanity.” (Langdon Winner, Bulletin of Science, Technology & Society, Vol. 4, pp. 582–596, 1984)
The most direct and obvious effect of mythinformation is economic, and it is directly linked to the surge of financial investment in AI companies.
Modern digital markets represent what Shoshana Zuboff calls “Surveillance Capitalism”—a new economic order that claims human experience as “free raw material” for commercial practices of extraction and behavioral modification. This represents a “coup from above” and a “digital dispossession” of sovereignty.
Drawing on Karl Polanyi’s “commodity fictions” (life as labor, nature as real estate, exchange as money), we can identify a fourth fiction: personal experience and individual behavior as market values. Through “individualization algorithms,” private companies reify personal experience, reducing individuals to measurable behavioral patterns for the purposes of attention-baiting and time consumption.
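To make the notion of an “individualization algorithm” concrete, here is a deliberately crude sketch of how lived activity is reified into a frequency table and a single engagement prediction. All event names and weights are hypothetical, invented for illustration; no real platform’s pipeline is implied.

```python
from collections import Counter

# A person's afternoon, already reduced to countable signals
events = ["scroll", "like", "scroll", "share", "scroll",
          "pause_video", "scroll", "like", "scroll"]

# Step 1: reify lived experience as a frequency table
profile = Counter(events)

# Step 2: hypothetical engagement weights, tuned to maximize time-on-platform
weights = {"scroll": 0.2, "like": 1.0, "share": 2.0, "pause_video": 1.5}

# Step 3: collapse the person into one number used to rank what they see next
predicted_minutes = sum(weights[e] * n for e, n in profile.items())

print(profile)            # Counter({'scroll': 5, 'like': 2, ...})
print(predicted_minutes)  # a single market value standing in for a person
```

The point of the sketch is the direction of the reduction: from experience to counts, from counts to a score, from a score to a sales argument.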
AI contributes to this dispossession through what Luciano Floridi calls “enveloping”. Rather than machines being made to inhabit the human world, social relations are transformed to accommodate AI applications. The danger is not “thinking machines” dominating humans, but the domination of society by political and economic mechanisms.
In 2025, stock markets were driven by political decisions regarding the financing of AI companies worldwide.
In the first week of his presidency, following his inauguration on January 20, 2025, Donald Trump announced a $500 billion investment package [the “Stargate” project] for the development of AI infrastructure.
A week later, two Chinese companies [DeepSeek and ByteDance, the owner of TikTok] presented AI models [LLMs] trained at costs up to 50 times lower.
On February 18, 2025, Elon Musk presented the latest generative AI model from his company xAI, Grok 3, which includes a chatbot, two reasoning models, and a digital research assistant, falsely claiming that it mimics “human reasoning.” It is powered by the xAI Colossus supercomputer in Memphis, Tennessee, with 200,000 graphics processing units [GPUs], and was pre-trained for 1,000,000 GPU hours. The system consumes 18,927,058 liters [roughly five million US gallons] of water per day for cooling. Colossus was built in less than eight months, together with large energy facilities for the power-hungry digital system.
During the summer, President Trump decreed that AI companies could fund their own energy facilities using federal resources.
With a presidential decree on December 12, 2025, Trump suspended the States’ right to enact regulatory laws and legislate restrictions on artificial intelligence companies.
These political – and not scientific – events, among others, helped investment in AI companies skyrocket:
“Total spending on AI worldwide is estimated at $1.5 trillion in 2025 and is projected to rise to $2 trillion in 2026, with the main segments being generative AI (genAI) smartphones, AI-optimized servers, AI services, AI application software, AI processing semiconductors and AI infrastructure software. The data centre capex of the top eight US hyperscalers (very large cloud services providers) alone amounted to $258 billion in 2024 and is projected to more than double to $525 billion in 2032.” – GRR2026: 44
However, the promises of AGI failed to materialize, and fears of a potential economic bubble began to manifest.
As the GRR2026 points out, this surge in capital investment poses a significant danger:
“There is currently widespread concern around elevated equity prices for the largest technology companies, and 2025 saw periods of frenzied investor interest not only in artificial intelligence (AI)-related stocks, but also in sectors such as nuclear, quantum or rare earths. A sharp run-up in the prices of precious metals has raised concerns of bubble-like activity there, too. Some of these prices have since stabilized or corrected, but concerns about overvalued markets remain. Should the predictions of an asset bubble burst turn out to be true, the potential impacts can be significant. Global institutional and retail investors are heavily invested in US stock markets by historical standards, so the resulting potential impacts of a crash could be severe for the global economy; 85% of global chief economists in September 2025 believe a financial shock would have wide-ranging systemic effects.” – GRR2026
The WEF verifies the picture of a vicious financial circle, as Dr. Alex Pazaitis described it in an online discussion: the revenues of companies that develop AI models for the market are meager compared to their ever-increasing commitments to spending and investing in more computing power. Their funding therefore continues with money they receive as “investments” from companies such as NVIDIA, from which they then purchase processors. NVIDIA simultaneously “invests” capital in companies that provide cloud computing infrastructure, such as Oracle, which then purchase processors from NVIDIA in order to offer services to companies such as OpenAI, hosting their model data. And the cycle of self-referential investments goes on and on.
We may call this the economic vicious cycle of self-referential investment.
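The loop is easy to make concrete. The following toy sketch uses illustrative placeholder figures and generic role names (chipmaker, cloud provider, AI lab) rather than real financial data; it shows only the structural point, namely that turnover booked inside the circle vastly exceeds the money actually entering it.

```python
# Toy model of the self-referential investment cycle: money circulating
# among three mutually invested parties. All numbers are invented.
flows = {
    ("chipmaker", "ai_lab"): 10.0,         # "investment" in the model developer
    ("ai_lab", "chipmaker"): 10.0,         # which flows back as GPU purchases
    ("chipmaker", "cloud_provider"): 5.0,  # capital for data-centre buildout
    ("cloud_provider", "chipmaker"): 5.0,  # spent, again, on GPUs
    ("ai_lab", "cloud_provider"): 5.0,     # hosting fees for model data
}

external_revenue = {"ai_lab": 1.0}  # meager income from actual customers

booked_turnover = sum(flows.values())
fresh_money = sum(external_revenue.values())

print(f"Turnover booked inside the loop: {booked_turnover}")  # 35.0
print(f"Money actually entering the loop: {fresh_money}")     # 1.0
# Every pass around the circle inflates the headline numbers,
# while almost nothing new enters from outside: a bubble signature.
```

However crude, the asymmetry between the two printed numbers is the whole argument: external revenues stay flat while cross-bookings compound.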
This vicious cycle of self-referential investment can be sustained for a while before the bubble bursts, propped up by the accumulation of political capital by AI companies owned by influential political actors like Elon Musk. This only expands the vicious circle to include other areas of communication, space travel and transportation, and widens the sphere of influence of digital mythinformation.
The concentration of political and financial capital, computational power, user data, and digital technology in a handful of companies creates a steep hierarchical pyramid of informational power that widens inequality on both the social and the international scale. The GRR2026 notes the uneven distribution of power regarding AI:
“Access to AI infrastructure as well as to electricity, internet access, and data storage will amplify economic power shifts between countries over the next decade as AI’s productivity benefits bypass some populations entirely – albeit protecting them from some of the risks. For example, AI adoption in North America (27% of the working-age population) is triple that in Sub-Saharan Africa (9%).
Only a handful of AI data centres are in developing regions, with the United States, Europe and Eastern Asia dominating capacity. Within countries, the gap between AI-integrated geographies and excluded peripheries may also drive localized power shifts, create internal migration pressures and destabilize national cohesion.” – GRR2026
This marks a significant widening of the post-colonial power gap between Western/Eastern techno-capitalist countries and the impoverished countries of the global South. But it also marks a widening of the internal inequality gap between governing elites and working classes within each country.
The GRR2026 devotes a whole section [2.7] to discussing three sets of risks connected to AI applications:
“First, the widely cited concerns around the impact on labour markets could lead to deepening societal polarization if unemployment rises and workers struggle to adapt to new tasks and roles. In such a scenario, both higher productivity and higher unemployment could unfold simultaneously.”
Mark this sentence: higher productivity with higher unemployment. This is a direct consequence of the expansion of automation across all social functions that can be reduced to algorithmic operations. It also aligns with the core imaginary signification of the neoliberal dogma of capitalism, which emphasizes profit over production, while realizing the imaginary signification of technocracy: the diminishing of the human factor in production.
Of course, the outcomes are societal disruption, mass unemployment, mass poverty, the sudden devaluation of the significance of labor, and the rupture of social bonds.
We may call this the political vicious circle of systemic inequality on a domestic and international level.
Even so, we should widen the definition of systemic inequality to include ecological devastation, which may affect everyone, though not everyone has the means to mitigate its catastrophic effects on public health and living conditions. The GRR2026 notes the environmental risks of AI:
“There are second-degree physiological health impacts as well, deriving from the environmental impacts of generative AI models. These can consume up to 4,600 times more energy than traditional software. AI-related infrastructure can result in degraded air quality and pollution from manufacturing, electricity generation and e-waste disposal. In the United States alone, this could impose a public-health burden of over $20 billion annually by 2028” – GRR2026: 54
Such are the ramifications of instituted mythinformation, built around the commercialized future projections of AI, in the external social domains of economy, ecology, and politics. But these effects are reflected and redoubled in the internal private domains of intersubjective communication and the perception of reality.
The risks of mythinformation loom even greater in cyberspace than in stock markets. Cyberspace, as a new sphere of being, represents a form of alterity (otherness) where the subjective and objective merge in a virtual subjective objectivity. This suggests that the Internet doesn’t just mediate our existing communications; it creates an intermediate digital layer with unique epistemological attributes and possibilities, where telepresence presupposes physical absence and communication is reduced to syntax. These attributes underline the epistemic risks of AI, to which the GRR2026 returns:
“Second, as more tasks become undertaken by AI and previously applied human skills begin to atrophy, it is unclear if the path forward will be a golden age for creativity, leisure and learning – or, conversely, a drift into purposelessness, apathy and societal decay.” – GRR2026
Large Language Models introduce a new epistemic instability. Their outputs are plausible but unreliable, often exaggerating or distorting conclusions. As they infiltrate education, research, and policy, they threaten to automate epistemic error at scale.
If AI is a form of immature intelligence, lacking understanding and accountability, entrusting it with critical public functions becomes dangerous. The risk behind mythinformation is the delegation of judgment to algorithmic systems.
In the age of AI, mythinformation spreads rapidly through attention-optimized platforms, displacing reasoned discourse and eroding the public field of communication on a global scale via digital social media. The increasing reliance on AI apps for the production of common knowledge has begun to threaten the public’s common sense of reality.
“Increasing reliance on both social media and AI tools enhances the impact of algorithmic bias, which shapes what information users see online and reinforces exposure of individuals to information aligned with their views. This can create widely divergent perspectives on real-world events and developments. The impacts are starting to run even deeper. How real-world events are interpreted online combined with the growing circulation of violent content on social media may be leading citizens to become more emotionally and cognitively detached and numbed to human tragedies.” – GRR2026
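The reinforcement loop the report describes can be captured in a minimal toy simulation. The ranking rule (show the item that best agrees with the user’s current view) and the drift rate are invented for illustration and are not drawn from any real recommender system.

```python
import random

random.seed(0)

def drift(view: float, steps: int = 200) -> float:
    """Toy filter bubble: the feed favors agreeable items, and each
    consumed item pulls the user's view further in that direction."""
    for _ in range(steps):
        candidates = [random.uniform(-1, 1) for _ in range(20)]
        # "algorithmic bias": rank by predicted engagement, modeled
        # here as simple agreement with the current view
        shown = max(candidates, key=lambda item: item * view)
        view += 0.05 * (shown - view)  # the view drifts toward the feed
    return view

# Two users who start almost together end up at opposite poles.
print(f"{drift(+0.05):+.2f}")  # roughly +0.9
print(f"{drift(-0.05):+.2f}")  # roughly -0.9
```

Nothing in the loop is malicious; polarization falls out of engagement optimization alone, which is precisely the report’s point about “widely divergent perspectives.”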
We must stress that, given the global public’s reliance on mythinformation, the common sense of reality was easily manipulated by means of propaganda even before AI. Nevertheless, we must observe that the penetration of the digital field into the social imaginary runs so deep that AI apps are also transforming the sense of reality in scientific terms.
The above concern is justified given the sheer volume of scientific papers and informative material currently produced by generative AI models. This is already happening in the fields of academic knowledge production, as in the case of digital fossils. A digital fossil is an old error preserved in files used as AI training data and unexpectedly reproduced automatically in new results; a notorious example is the nonsense phrase “vegetative electron microscopy,” a digitization artifact from 1950s papers that has resurfaced in recent AI-assisted publications.
Digital fossils may be an unwanted byproduct of LLMs, but we should consider them as indicators of a broader epistemic erosion of public knowledge. The mechanism is simple enough to sketch, as shown below.
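As a minimal illustration of the mechanism, the sketch below puts a toy bigram model in place of a real LLM. The corpus is invented, but the “fossil” phrase is the real one mentioned above: an old layout error preserved in the training text and available for regeneration.

```python
import random
from collections import defaultdict

random.seed(1)

# Invented training corpus; the third sentence carries the fossil,
# a digitization error frozen into the files.
corpus = (
    "the sample was examined by electron microscopy . "
    "vegetative cells were examined by electron microscopy . "
    "the structure was studied by vegetative electron microscopy . "
) * 10

# Train a bigram model: for each word, record what may follow it.
model = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    model[a].append(b)

# Generate short continuations of "by" and count fossil resurrections.
fossils = 0
for _ in range(100):
    seq = ["by"]
    for _ in range(3):
        seq.append(random.choice(model[seq[-1]]))
    if "vegetative electron microscopy" in " ".join(seq):
        fossils += 1
print(f"fossil resurfaced in {fossils} of 100 generated samples")
```

The model has no notion of error; frequency is its only criterion, so a fossil that is common in the files is, to the model, simply true.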
This epistemic erosion goes deeper than public discourse, since digital media and AI applications aim at personification and private interaction by design. Cyberspace gives the false impression of a digital public space, while it is more of a network of interconnected virtual private spaces. Every user communicates in the mode of telepresence via digital personas, from their own private time and space, even when they are in public. If you add AI chatbots to the other side of telecommunication, every user can be caught in a web of self-referential pseudo-dialogue. At that point, traditional intersubjective communication networks collapse into dead-end repetitive monologues between subjectivities and machines.
“A society where large segments, especially young people, subsist on UBI could experience a crisis of meaning.” – GRR2026: 63
As a result, the fragmentation of the public’s common sense of reality may lead to the collapse of the coherence of social imaginary significations and common values. A constant flow of unchecked information blurs the distinction between fact and fiction.
This forms an ideological vicious circle of misunderstanding. Its effects would be felt socially as the shrinkage of public space and time and the proliferation of conspiracy theories and other fringe narratives. This is not a fictional scenario but a real-life danger, illuminated by the GRR2026 with a focus on electoral consequences:
“Recent elections in the United States, Ireland, the Netherlands, Pakistan, Japan, India, and Argentina have all had to contend with such fabricated content on social media, depicting fictional events or discrediting political candidates, blurring the line between fact and fiction. As AI is used to make such content more personalized and persuasive, there is a risk of greater impact on elections. For example, research has found that 87% of people in the United Kingdom are concerned about deepfakes affecting election results. But while awareness is high, many lack confidence in their abilities to identify when content is manipulated.”
But broader dangers loom alongside epistemic erosion, as the GRR2026 admits:
“In an extreme scenario, control over many aspects of society could be ceded to AI.” – GRR2026
This is the nightmare of algorithmic governance as technocratic absolutism, and also the fantasy project of billionaires like Elon Musk and Peter Thiel, here spelled out in a most official policy forecast. But it is not the worst scenario described by the report. The third set of risks involves the envelopment of the military-industrial complex within the AI technosphere:
“Third, with militaries’ reliance on AI systems continuing to increase, the potential for misuse or mistakes will rise, too, placing human lives directly at risk.” – GRR2026.
There is no need to comment further or elaborate on that dismal prospect without risking a descent into technophobic dystopia.
Nevertheless, technophobia once more seems reasonable when we consider the WEF’s conclusion on the potential risks of AI for 2026:
“What distinguishes AI-driven disruption from previous technological transitions is the potential for cascading failures across interconnected domains. Labor displacement ripples widely, into households, communities and political systems. Lack of economic opportunity or unemployment (ranked #14 in the GRPS 10-year ranking) can drive extremism; institutional distrust is interlinked with misinformation and disinformation; and surveillance empowers authoritarian responses to the instability that AI creates. Once established, these loops could become self-reinforcing.” – GRR2026
What the WEF describes is what I have called Digital Barbarism. Is this the only potential future?
The GRR2026 concludes by encouraging state governments to coordinate on strict measures and regulations, warning that AI should be considered as high a threat as nuclear and biological weapons:
“Coordination on minimum safety, transparency, and ethical deployment standards, particularly for military, biometric, and large-scale decision-making systems, is needed – yet requires cooperation similar to that for nuclear or bioweapons safeguards.” – GRR2026: 66
This vague concluding suggestion makes the reader wonder whether the authors read the preceding pages of their own document, where the lack of such coordination is identified as the cause of the problem in the first place.
The case for democratic technoskepticism
While its infrastructure is physical, cyberspace is a field of intersubjective communication where human subjects are the real nodes of meaning-making. The boundary between the digital and the real is a “porous membrane,” meaning digital actions have direct social-historical consequences.
By acknowledging the “ontological duality” of the digital sphere—where models are communicative practices—we can bridge the gap between code and practice to build autonomous institutions.
The dominant discourse on AI oscillates between two exaggerated imaginary trends: instituted technophilia – AI as salvation, optimization, transcendence – and popular technophobia – AI as domination, displacement, apocalypse – both of which share the common ground of technological fatalism.
Both trends obscure the fact that AI is a statistical mechanism, lacking interiority, intentionality, or autonomy. AI can be understood as a non-subjective statistical pattern manipulation machine, a computational tool, a being-by-and-for-another incapable of being‑for‑itself.
Democratic Technoskepticism offers a third path. It rejects both utopian and dystopian fantasies and situates AI within the power structures, ecological constraints, and symbolic significations of contemporary society. Grounded in digital humanism, technoskepticism affirms the primacy of human autonomy and democratic oversight.
Digital commons serve as a “practical paradigm” for “digital repossession.” They materialize values opposed to capitalist norms, such as reciprocity, equity, solidarity, and self-management. In the digital commons, exchange is not mediated by money, labor is not exploited, and experience is not reduced to data points.
By ‘practical paradigm’ I mean, as opposed to theoretical paradigms, a model that can be implemented practically, thus creating a network of human activities directed toward common goals. This bridge between code and practice is possible thanks to the ontological duality of the digital sphere, where models are communicative practices and not just abstractions. As such a practical paradigm, the digital commons can be developed further into the communicative modality of social institutions of liberty, justice, social autonomy, revocability, equality, inclusivity, self-government, and cosmo-localism, both in the sphere of common design and production and in the sphere of cultural discourse and co-creation.

The rooting of digital commoning in physical social-historical reality opens possibilities for a wider radical social transformation through the combination of digital commoning with practices of grassroots democratic politics and social-ecological communities. This combination implies the conjoint recreation of an autonomous free public space and time, both in the digital and the physical realms of human co-existence.
In hierarchical systems, information is extracted from below, and commands are issued from above. Digital commons allow for the “reversal of the flow,” where political decisions are issued by the social basis, informed by transparent, second-order institutions of data-processing. This creates an autonomous digital public space and time that supports direct democratic horizontal networks.
Contemporary democratic social movements, characterized by their sustained collective claims and unique “repertoires” of dissent, provide the physical counterpart to digital commoning. Unlike authoritarian movements (e.g., “Trumpism”) that use digital networks for top-down propaganda, democratic movements emphasize communal assemblies, direct democracy, and the refusal of hierarchical authority.
We could imagine technoskepticism as a theoretical telescope – it helps us detect potential breaches of democratic rules and violations of human rights on the horizon of future expectations. Digital humanism is like a political compass – it helps us navigate toward a democratic common future. Digital humanism builds on the concerns raised by technoskepticism, offering solutions and design principles. It embeds principles of direct democracy, digital commoning, and social ecology into technological development.
Temporality and community are the key conditions for these acts of dispossession or repossession. In that framework, individuals who allow their personal time to be colonized, co-opted, and absorbed within the dominant rhythms of social networking, profile influencing, and digital marketing actively contribute to the expansion of the networks of capitalist dispossession, by becoming proponents of its marketing model. In the digital world, models are practices, and the capitalist terms of digital representation are terms of individual commodification. These digital netizens are the innovators and influencers of capitalist post-modernism; they are the new entrepreneurs who exploit and are exploited, actors of the dispossession machine. They thereby create consumerist communities, which function as operators and accelerators of the circular, repetitive temporality of dispossession: incursion, habituation, data reification, and public diversion.
This cycle feeds the multiplication of social crises through the spread of individualistic consumerism, political apathy or fanaticism, proliferation of advertising strategies, and the reification of personal experience. Time and community in the form of personal engagement and collective experiences are the new commodities, fragmented under the fictitious principle of digital individuality.
By contrast, the praxis of digital repossession is constituted by the creation of a free common temporality through the tools provided by social ecology and the digital commons, and by the emancipation of human subjectivity within open, horizontal, and democratic communities.
The current crisis of AI and surveillance capitalism presents a choice between two paths: one of automated “envelopment” and another of collective “repossession.” By rooting digital commoning in social-historical reality, we can reclaim the digital sphere as a field for “poetics/acting”—a space where human subjects define their own purposes. This project of social autonomy aims for a post-capitalist, ecological, and humanist future where technology serves to enhance, rather than replace, human agency and democratic self-governance.