Algorithms of power: How AI is redefining national security



From data centers to decision rooms, artificial intelligence is no longer just a tool—it is a new frontier of sovereignty, shaping global power, accountability, and the future of warfare.
According to Axios, the U.S. military used Anthropic’s Claude model during Maduro’s arrest.

Artificial intelligence is no longer just a tool for improving productivity or accelerating scientific research. The recent uproar over the use of an advanced Anthropic model, embedded within a security analysis system, to support a politically charged intervention reveals a truth far deeper than the event itself: we are entering a phase in which algorithms are becoming active agents in the engineering of power.

The issue is not that an intelligent model “arrested” someone; that framing is media exaggeration. The more important point is that artificial intelligence has become part of the security decision-making structure and an integral element in the equation of national sovereignty.

From traditional weapons to algorithmic weapons

Military history has gone through three major leaps: gunpowder, then the industrial revolution, and then digital technology. Today, we are witnessing the fourth leap: the militarization of artificial intelligence.

 

Battles are no longer decided solely by missiles or tanks, but by a state's ability to analyze billions of data points in real time, uncover hidden patterns, predict behavior, and identify risks before they occur. Artificial intelligence does not pull the trigger, but it may select the target. And this is where the problem begins.
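
To make this capability concrete, the sketch below shows the statistical core of “identifying risks before they occur”: a streaming detector that flags readings deviating sharply from recent history. It is a minimal illustration; the names, window size, and threshold are invented for this example and describe no real military system.

```python
# Minimal sketch of streaming anomaly detection: flag a value that
# deviates sharply from recent history. All parameters are illustrative
# assumptions, not a description of any actual security system.
from collections import deque
from statistics import mean, stdev

class StreamingAnomalyDetector:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of readings
        self.threshold = threshold           # flag values > N std devs out

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 10:  # need a minimal baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = StreamingAnomalyDetector()
for reading in [1.0, 1.1, 0.9, 1.0] * 5 + [9.5]:  # synthetic data
    if detector.observe(reading):
        print(f"flagged: {reading}")  # only the 9.5 outlier is flagged
```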

 



Private companies as geopolitical players

When advanced artificial intelligence models are integrated with security data analysis platforms such as Palantir Technologies, we face a new scenario: private companies are becoming part of the strategic infrastructure of national security.

 

This shift raises a fundamental question: Does the state still hold a monopoly on the tools of power?

 

In the twentieth century, power was measured by the number of airplanes and aircraft carriers. In the twenty-first century, it is also measured by the number of servers, the quality of algorithms, and the depth of databases. Here emerges a new player that is not subject to popular election or direct parliamentary oversight: the tech company.

 

Between analysis and decision

It is essential to distinguish between two levels: algorithmic analysis and sovereign decision-making. Artificial intelligence provides probabilities, patterns, and risk assessments, but the final decision remains with humans. The problem lies in the “trust effect,” known in human-factors research as automation bias: the more accurate the model proves to be, the more authority its recommendations acquire. Over time, human decisions may become merely formal approvals of what the algorithm proposes.

 

This creates a legal and ethical gray area, where responsibilities overlap and the boundaries between technical advice and sovereign decision blur.
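
One way to keep that boundary visible is to build it into the software itself. The sketch below is a minimal, purely illustrative human-in-the-loop gate (all names, scores, and thresholds are invented): the model supplies a risk score, but nothing proceeds without an explicit, recorded human decision.

```python
# Illustrative sketch of a human-in-the-loop decision gate. All names
# and scores are invented for this example; no real system or API is
# being described.
from dataclasses import dataclass

@dataclass
class Assessment:
    subject: str
    risk_score: float   # model output in [0, 1]; a probability, not a verdict
    rationale: str      # patterns the model claims support the score

def decide(assessment: Assessment, approver: str) -> bool:
    """The algorithm recommends; a named human decides and is recorded."""
    print(f"Model assessment for {assessment.subject}: "
          f"risk={assessment.risk_score:.2f} ({assessment.rationale})")
    # The danger described above: if approval defaults to "yes" whenever
    # the score is high, human review collapses into a rubber stamp.
    answer = input(f"{approver}, approve action? [y/N] ").strip().lower()
    return answer == "y"  # the human decision, not the score, is decisive

if __name__ == "__main__":
    a = Assessment("case-042", 0.91, "matches 3 of 5 known risk patterns")
    print("approved" if decide(a, "duty officer") else "rejected")
```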

 

Responsibility: Who holds the algorithm accountable?

If algorithmic analysis leads to a serious security failure, who bears the responsibility?
Is it the state that relied on the system?
The company that developed it?
Or the engineer who wrote the code?

International humanitarian law was not designed for an era where software code intersects with military decision-making. Current treaties regulate conventional, nuclear, and chemical weapons, but they do not cover “algorithmic weapons.” We are, therefore, facing a widening global legislative gap.


Digital sovereignty: The battle of the new century

States that lack sovereign artificial intelligence infrastructure may find themselves dependent on transnational companies for their most sensitive matters. Here, artificial intelligence shifts from being a mere technical tool to a full-fledged geopolitical instrument.

Whoever controls the algorithm controls the flow of information. Whoever controls information influences decision-making. And whoever influences decision-making touches the very core of sovereignty.

 

For this reason, major countries have begun talking about “digital sovereignty,” investing in the development of national models and independent infrastructure, alongside legislation regulating the use of artificial intelligence in security and defense.

 

Towards a new global charter

Just as agreements have been established to regulate nuclear weapons, there is now an urgent need for an international framework governing the use of artificial intelligence in military and security domains. Such a charter would define usage limits, transparency standards, accountability rules, and safeguards against bias or manipulation.
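
What might “transparency standards” and “accountability rules” look like in code? One illustrative possibility, assuming nothing about any existing system, is a hash-chained audit trail that records every model recommendation together with the human decision taken, so that silent alteration of the record becomes detectable.

```python
# Sketch of a tamper-evident audit trail for model recommendations.
# The record fields are hypothetical; hash-chaining each entry to the
# previous one makes silent alteration of history detectable.
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, model: str, recommendation: str, human_decision: str):
        entry = {
            "timestamp": time.time(),
            "model": model,
            "recommendation": recommendation,
            "human_decision": human_decision,
            "prev_hash": self._last_hash,
        }
        # Each entry's hash covers the previous hash, chaining them together.
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; any edited entry breaks every later hash."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if prev != e["hash"]:
                return False
        return True

log = AuditLog()
log.record("model-x", "flag shipment 7 for inspection", "approved")
log.record("model-x", "flag vessel 12 for inspection", "rejected")
print(log.verify())  # True; tampering with any entry would print False
```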

 

The militarization of artificial intelligence is not merely a technical issue; it is a matter of global power balances being quietly yet profoundly reshaped.

 

The algorithm is not innocent

Artificial intelligence is neither inherently good nor inherently evil; it reflects those who design, feed, and use it. Yet incorporating it into national security calculations changes the very nature of power.

 

Power is no longer reserved for those who control land or weapons—it also belongs to those who control data and algorithms.

 

The question we must ask today is not whether artificial intelligence can be used in security operations, but how we can ensure it does not become an unaccountable force.

The future of global security will be shaped not only in military operations rooms but also in software development centers.