AI in 2026

AI in 2026

What this is about
AI’s explosive growth in 2026 is setting off alarm bells. Behind the impressive façade of innovation lies a labyrinth of ecological, economic, and social threats. From skyrocketing energy use to deepening inequality and a looming investment bubble, the AI boom could destabilize society rather than improve it.

Why you should read this
Because AI isn’t just about technology—it’s about resources, economics, climate, and democracy. This article shows how these risks are interconnected and why strong regulation is essential to prevent a full-blown crisis.


No brakes

AI development is advancing rapidly with minimal oversight. The result is a system without brakes in which environmental, social, and ethical risks reinforce one another. Ecological damage undermines stability, while the economic and political fallout shapes infrastructure, power, and policy—all controlled by a handful of tech giants led by a few authoritarian billionaires.

What kind of AI?

Not all AI is the same. Most systems today don’t actually “think”; they analyze data, spot patterns, and mimic language, like ChatGPT. Some models trawl the open internet, while others analyze closed datasets for research. What they share is opacity—even their creators don’t fully understand how they work or what they might do next. The broader their reach, the greater the danger. The ultimate fear is that self-learning systems, known as AGI (Artificial General Intelligence), could one day operate beyond human ethics, pursuing their own goals without human control.

Technical risks and human limits

AI capacity is expanding faster than safety measures can keep up. As models grow exponentially, so do errors and vulnerabilities. In vital sectors like healthcare, infrastructure, and finance, that can be disastrous. “Jailbreaks” and “prompt injections” already allow malicious users to manipulate AI systems.

Generative AI is also flooding the internet with synthetic content—deepfakes, fake news, and AI-generated text indistinguishable from reality. Analysts estimate that more than half of global internet traffic now originates from automated bots, most of which are malicious. Detecting and countering abuse has become an endless cat-and-mouse game.

Power and expertise are concentrated in just a few companies—OpenAI, Google, and others—making the ecosystem fragile and monopolistic. Disinformation is already shaping public opinion and politics.

At a systemic level, human oversight is becoming impossible. The technology is simply too complex, evolving too fast for anyone to grasp or control fully. How can you audit algorithms whose own creators don’t understand them?

Truth under pressure

Generative AI makes producing convincing disinformation easier than ever. The algorithms are opaque, and even developers often don’t know how they work. Deepfakes, micro-targeted propaganda, and synthetic news can influence elections and erode trust in institutions. The U.S. once detonated the atomic bomb without knowing the full consequences—AI seems to be heading down the same path.

Biases in training data reinforce social inequalities. When algorithms screen job applicants, grant loans, or guide policing, existing discrimination can worsen—often under the guise of objectivity.

Meanwhile, manipulative recommendation systems and power imbalances between platforms and users drive polarization and mental strain. Health experts now speak of an “infodemic,” an overload of misleading information clouding public judgment.

The energy and resource hunger of a revolution

AI consumes staggering amounts of energy and materials. In the U.S. alone, by 2028, AI data centers may use more power than all existing data centers combined—the equivalent of tens of millions of homes. Producing the necessary hardware—chips, servers, and cooling systems—requires huge quantities of aluminum and rare earth metals, whose mining is energy-intensive and environmentally destructive.

Most of this energy still comes from fossil fuels, causing emissions to surge. Big tech firms already report sharp increases in their carbon footprints due to AI operations. Add to that the water-hungry cooling systems and mounting e-waste from obsolete chips, and AI’s ecological footprint is expanding far faster than its efficiency gains. Even when it uses renewable power, AI competes with public demand for green energy.

Profit for a few, insecurity for the rest

Analysts estimate that AI could affect 30 to 40 percent of working hours in high-income countries, hitting routine office jobs first. Without strong labor policies, the benefits will flow mainly to the companies that own the technology, while workers face wage pressure, job loss, and worsening conditions.

With technology and capital concentrated in a few hands, inequality will deepen—between countries, regions, and industries—potentially sparking major social unrest.

Concentrated power

A key structural risk is the lack of transparency. Companies disclose little about their data practices, emissions, or model design while lobbying aggressively for subsidies and lenient regulation. The concentration of technological power in a few U.S. and Chinese corporations undermines digital sovereignty elsewhere, creating a new kind of dependency—data colonialism.

Even science is losing its way

Take the case of Kevin Zhu, a 25-year-old UC Berkeley graduate who runs Algoverse, a company that helps students publish research for a fee of over $3,000. Many of these students appear as co-authors on his papers. Zhu insists that his 113 papers from the past year are group projects, not AI-generated, though he admits to using language models for editing.

Critics, like researcher Hany Farid, call the operation “a disaster,” suspecting the papers are shallow—products of “vibe coding,” quick AI-assisted work dressed up as science.

Major AI conferences such as NeurIPS and ICLR are now swamped with tens of thousands of submissions, often reviewed by overworked graduate students instead of experienced peers. Some conferences even use AI to review papers—leading to errors and hallucinated feedback. The result: a flood of low-quality research that drowns out genuine scientific progress.

When will the AI bubble burst?

Beyond ecological and technical risks looms a financial bubble. Enormous amounts of capital are chasing the same promise of endless growth while real profits lag behind.

Chipmakers invest billions in AI startups, which in turn fund the chipmakers that supply them—a closed loop of speculative investment. Demand is inflated, data centers are overbuilt, and it’s unclear whether this capacity will ever be used.

As long as money continues to flow, the illusion of growth persists. But if interest rates rise, regulation tightens, or a few companies fail, the bubble could deflate fast—rippling through the entire ecosystem.

A sustainable future for AI?

AI offers immense potential, but without direction, it risks collapsing under its own weight. The combination of ecological strain, power concentration, and financial overheating demands strict oversight and international cooperation—both of which are still lacking.

A crash wouldn’t end AI any more than the dot-com bust ended the internet. But it could entrench dominance by a few major players and accelerate the problems already emerging: exploitation, disinformation, and systemic instability.

The challenge isn’t to stop progress but to steer it—toward transparent, energy-efficient, and socially fair applications that create real value without eroding the foundations of society. So far, 2026 shows little sign that this is happening. History suggests that the warnings are coming too late—again.