It’s always futile to try to disengage, even if it’s from other people’s stupidity and cruelty. You can’t say, ‘I don’t know.’ You collaborate, or you combat.

— Albert Camus, The Human Crisis, 1946

Whoever fights monsters should see to it that in the process he does not become a monster.

— Friedrich Nietzsche

Among the tales of sorrow and of ruin that came down to us from the darkness of those days, there are yet some in which, amid weeping, there is joy and under the shadow of death light that endures.

— J.R.R. Tolkien

Camus delivered his lecture on the human crisis in 1946. WWII had just ended. Europe was in ruins. Millions were dead. He stood before his audience in New York not to celebrate survival but to issue a warning: that the venom of the crisis had not departed with Hitler, that each of us carries it, that civilization is answerable for its perversions as surely as for its achievements.

Nearly 80 years later, the same warning needs to be sounded. The context is different, the technology unrecognizable, the geography of power shifted, but the essential failure is the same: humanity subordinated to abstraction, people replaced by calculations, suffering accepted as an administrative variable. We have not resolved the human crisis. We have industrialized it.

I. What We Have Unleashed

Everything has a cost, and every great advancement arrives with a bill. The printing press democratized knowledge and ignited centuries of religious war. The Industrial Revolution raised living standards and poisoned the air, water, and social fabric of the communities it displaced. Nuclear technology promised limitless energy and delivered Hiroshima. We are not naive about any of this. Yet we somehow choose, generation after generation, to ignore these costs and then be surprised by them.

Artificial intelligence is the defining technology of this era. It has the potential for enormous benefit, and I am one of its strongest advocates. We are already seeing previously unimaginable advancements. In medicine, AI is identifying cancers that radiologists miss, predicting protein structures that eluded biochemists for decades, and accelerating drug discovery from years to months. In mathematics and engineering, it is solving problems that have resisted human solution for generations. In logistics, energy, climate modeling, and materials science, the applications are compounding faster than our capacity to catalog them.

But a tool is a tool. As Ani DiFranco put it, “Every tool is a weapon if you hold it right.”

A hammer builds a house and fractures a skull. The same large language model that summarizes a medical trial can generate targeted disinformation at an industrial scale. The same autonomous system that coordinates disaster relief can coordinate a drone strike. The same surveillance infrastructure that finds missing children can track dissidents.

I am not debating whether artificial intelligence is good or bad. It is a genuine advance on an extraordinary scale. But I am observing that it is powerful, that power without accountability has a consistent and troubling historical record, and that we are deploying it faster than we are governing it.

I do not have confidence that we will “get it right this time” since we have failed previously, and AI is an unstoppable global phenomenon. Solutions are not obvious, but the genie is out of the bottle; we can’t be naïve about the fact that extraordinary benefits also create unintended and unpredictable risks.

Space commercialization, robotics, synthetic biology, and quantum computing — the same logic applies across the frontier. Each technology compresses time, amplifies capability, and multiplies both the best and worst of human intention. We are not at the dawn of a new era of human progress. We are at an inflection point where the consequences of our choices — good and bad alike — will arrive faster, hit harder, and spread more widely than at any prior moment in history.

This is what we have unleashed. We are barely awakening to it.

II. The Price of Destruction, Distributed

We are at war. In Ukraine, Russian missiles strike apartment buildings and power grids with methodical precision. In Gaza, Lebanon, and Iran, dense urban populations endure a siege whose humanitarian consequences are visible in real time. In the Democratic Republic of Congo, mineral wealth funds a conflict that has claimed millions of lives while the world looks elsewhere. In Sudan. In Myanmar. In the spaces between headlines, the killing continues.

War is not new, even on a global scale. What is new is the economics of destruction. Technology has made killing cost-effective. Precision munitions, autonomous systems, commercial satellite imagery, encrypted communications, and consumer-grade drones have democratized the tools of violence. A non-state actor can now deploy capabilities that required nation-state resources a generation ago. The barrier to entry for large-scale harm has dropped dramatically. The barrier to accountability has not.

Camus identified terror as the first symptom of the human crisis — the consequence of judging human beings not by their dignity but by their utility to a doctrine. We are living that judgment again. Civilian casualties are a variable in a military calculus. The refugee is a political liability. The child in rubble is an image, then a statistic, then a footnote. We have not become crueler than our grandparents. We have become more efficient at cruelty and more comfortable with distance from its consequences.

This is the price that advanced technology extracts when deployed without moral architecture. It is not the technology’s fault.

Technology has no conscience. We do.

III. The Fragmentation Doctrine

A subtler and, in some ways, more dangerous crisis is unfolding: the deliberate fragmentation of the world into competing blocs, each pursuing self-sufficiency and redefining former partners as adversaries.

The United States and China are the center of this story. It is worth remembering what that relationship was, not long ago. Two economies deeply integrated, trading at extraordinary volume, bound by supply chains that lowered costs for consumers on both sides of the Pacific, by academic exchanges that advanced science in both countries, by a competitive but essentially functional relationship that produced enormous mutual wealth. We competed. We also cooperated. We both benefited.

Something changed. Not reality, but how we feel about it.

China has not changed its fundamental character. It remains a civilization with five thousand years of continuous history, a centrally planned government managing the largest population on earth, an extraordinary capacity for long-term strategic thinking, and a deep conviction that its trajectory is upward and its sovereignty non-negotiable.

China was always that.

What changed is the American posture toward it. We moved from engagement to containment, from competition to confrontation, from a framework in which both parties could win to one premised on the assumption that Chinese success is American failure.

This is a strategic error, producing the conditions for the crisis it claims to prevent.

When you tell a civilization with China’s history and self-regard that it is an enemy, it behaves like one. When you attempt to cut it off from technology, it accelerates domestic development. When you organize your allies into a coalition against it, it deepens ties with Russia, Iran, and others who share its interest in a multipolar world. The confrontational posture does not contain China. It consolidates a counter-coalition that will be far more difficult to engage constructively. This is a self-defeating policy.

Meanwhile, the fragmentation spreads. Europe re-arms and fragments internally. The Global South, including a wide diversity of nations, increasingly refuses to align with the Western framing of conflicts it did not start and does not benefit from. Supply chains that once connected the world are being deliberately decoupled in the name of security, at enormous economic cost, with effects that fall most heavily on the populations least equipped to absorb them.

This is not a strategy. It is fear organized into policy. Everyone is worse off.

IV. The Zero-Sum Illusion

At the heart of the fragmentation doctrine is a fundamental misunderstanding of how wealth and progress are created. The zero-sum premise — that Chinese prosperity diminishes American prosperity, that a connected world is a vulnerable world, that self-sufficiency is strength — is wrong. It is the inversion of everything that has driven human advancement. It is populist thinking that may seem appealing, but it destroys wealth, creates misery, and makes everyone worse off.

Prosperity is not a fixed sum.

It expands. The technologies that will define the next century, including AI, clean energy, biotechnology, and quantum systems, are not narrow nationalistic achievements. They are products of accumulated human knowledge, built on research that crosses borders and is conducted in multiple languages, by scientists trained in each other’s institutions.

An obvious recent example is the COVID-19 vaccine: one version was built on a technology platform pioneered by a Turkish-German couple at a German company, brought to market in partnership with an American corporation, tested on multiple continents, and distributed globally. That is what progress looks like. It does not respect the barriers we are now building.

Artificial intelligence makes the zero-sum argument indefensible.

Software is immediately available globally. It crosses borders at the speed of light. An AI model trained in California runs simultaneously in Beijing, New York, Paris, Nairobi, and São Paulo. The infrastructure required to run it, including chips, data centers, and energy, has physical constraints, but the intelligence itself is infinitely and immediately replicable. We are attempting to contain the uncontainable while creating genuine scarcity of the physical goods required for the technology to function, driving up costs for everyone.

A bad idea shouted loudly and repeated is still a bad idea.

We are more interconnected than at any moment in history. Global communications, trade networks, financial systems, scientific collaboration, and now AI infrastructure have created an integrated world that is, for all its tensions, deeply interdependent. We are governing this integrated world with a nineteenth-century mental model of competing nation-states fighting over fixed territory. This is foolish.

It is a civilizational failure of imagination.

V. The Warning

Camus ended his lecture with a call for universalism — a shared consciousness, a common ground of dignity, a refusal to make human suffering a political variable. He was speaking to an audience that had just watched Europe destroy itself. He was not optimistic that this would never happen again. But he was profoundly optimistic about humanity.

I share that optimism, but it requires skeptical awareness.

We have come through the Second World War, the nuclear age, Vietnam, the Cold War, multiple Middle Eastern conflicts, the 2008 financial crisis, and a global pandemic, and we are still here, still capable of extraordinary things. That resilience is real. But it has led us to a dangerous complacency. We assume that we will muddle through, that the worst outcomes will be avoided, that someone, somewhere, will correct course before the consequences become irreversible.

The technological landscape we are now entering does not permit that complacency.

The speed of AI development, the proliferation of autonomous weapons, the brittleness of global supply chains under geopolitical stress, and the information environment in which disinformation is industrialized and truth is contested are not self-correcting problems. They require a deliberate, coordinated, internationally cooperative response. The very response that great-power competition makes increasingly unavailable.

So this is the warning: we are not in a terminal crisis yet.

We are in the period before the crisis, when the choices that determine the outcome are still available, when the institutions required to make those choices still exist, when the relationships required to sustain those institutions are frayed but not severed.

The next human crisis will not announce itself, although it may be on TikTok.

It will emerge from the compounding of choices that each seemed reasonable in isolation: the decision to treat a competitor as an enemy, the decision to deploy a technology without governance, the decision to accept civilian casualties as a strategic necessity, the decision to prioritize national self-sufficiency over global interdependence, the decision to allow AI to mediate human connection without asking what is lost in the translation.

Each of these decisions is being made right now. Individually, each has its merits. Collectively, they are a petri dish for crisis.

VI. What We Owe Each Other

Camus argued that the refusal to accept human suffering as a given contains within it an affirmation: that there is something in human beings worth defending, something that belongs not to any individual but to the species as a whole. The humane person says no to degradation precisely because he says yes to dignity.

That same refusal is required now: the refusal to accept that confrontation between the United States and China is inevitable, that the suffering in Ukraine and elsewhere is irreducible, that the deployment of AI without governance is the only path to progress, and that fragmentation is strength.

It requires stating plainly that a tool can be a weapon. That the same technology enabling a physician in Lagos to diagnose a disease he has never seen can enable a state actor to conduct an influence operation against a democratic election.

New capability without accountability is not advancement. It is a risk deferred.

It requires insisting that competition and partnership are not mutually exclusive. The United States and China can compete vigorously on technology, trade, and geopolitical influence while maintaining the channels of communication and cooperation that make catastrophic miscalculation less likely. The choice between engagement and containment is a false one.

It requires acknowledging that the wars being fought today are not natural disasters. They are consequences of decisions made by human beings who could have made different ones. Technological amplification of those decisions magnifies the cost of future errors and can make them catastrophic.

It requires, above all, a rejection of the premise that the world is a zero-sum contest between defined blocs.

The wealth that can be created through integrated global development, through AI-accelerated medicine, energy, and agriculture, through the compounding of human knowledge across national boundaries, is essentially limitless.

We are choosing a worse outcome.

Camus closed by saying he wanted to infuse the world with a spirit of dialogue. Not agreement, because the world is too complex and human interests too genuinely divergent for agreement. Dialogue. The willingness to speak, listen, and to treat the person across the table with humanity, and not the inhumanity of a calculation. The recognition that silence puts us on a trajectory of isolation, fragmentation, and escalation past the point of no return.

We have the capability and, as centuries of catastrophe have demonstrated, the capacity to find our way back to something worth calling civilization.

The question is whether we have the will to use them before the crisis that makes them necessary also makes them impossible.