Machines That Kill Without Permission
An autonomous weapon identifies, selects, and engages targets without human intervention. This isn't theoretical — autonomous systems are already fielded in the Iran-Israel theater and the Russia-Ukraine conflict, and more than 30 nations are running AI weapons programs.
What Exists Today
- Israel's Iron Dome — Automated detection and interception of incoming rockets. Humans supervise the system but don't approve each intercept; engagement windows are too short for human decision-making.
- Turkey's Kargu-2 — Loitering munition that, per a 2021 UN report, attacked soldiers in Libya without a human command (the claim is disputed).
- US Navy's Aegis — Can autonomously engage incoming missiles; its automatic doctrine mode removes the human from the loop.
- Russia's Poseidon — Autonomous nuclear torpedo. AI-guided, no human in the loop after launch.
- Iran's Shahed swarms — Drone attacks launched in coordinated waves with minimal human oversight once airborne; how much of the coordination is genuine onboard AI versus pre-programmed routing is debated.
The Core Ethical Questions
1. Should machines decide who lives and dies?
Proponents: AI weapons react faster, don't panic, and don't commit war crimes out of anger. Greater precision could even reduce civilian casualties.
Opponents: Machines can't understand context. They can't exercise mercy. And they reduce the political cost of war — making conflict more likely.
2. Who is responsible for an autonomous kill?
The programmer? The commander who deployed the system? The AI itself? International humanitarian law requires accountability. Autonomous weapons create an accountability gap.
3. Does lowering the cost of war make war more likely?
If one side can fight without risking human soldiers (autonomous drones vs manned aircraft), the barrier to starting a conflict drops. Cheap autonomous weapons could destabilize deterrence.
The International Response
- UN: The Convention on Certain Conventional Weapons (CCW) has debated LAWS (Lethal Autonomous Weapons Systems) since 2014. No binding treaty yet.
- US: DoD Directive 3000.09 requires "appropriate levels of human judgment" but doesn't ban autonomy.
- EU: The European Parliament passed a resolution calling for a ban on fully autonomous weapons. It is non-binding, with no enforcement mechanism.
- China/Russia: Developing autonomous weapons while publicly calling for regulation — a classic security dilemma, where neither side can afford unilateral restraint.
The Companies Building AI Weapons
- Palantir (PLTR) — Intelligence and targeting software for the US military; the decision-support layer behind autonomous targeting rather than the weapons themselves.
- Anduril (private) — Autonomous drones, surveillance towers, and submarine drones. Palmer Luckey's vision of AI-first defense.
- Lockheed Martin (LMT) — Autonomous missile defense and AI-assisted systems for the F-35.
- RTX (RTX) — AI-guided missiles and counter-drone systems.
- Boeing (BA) — MQ-28 Ghost Bat, an autonomous "loyal wingman" uncrewed teaming aircraft.
Investment Thesis
Regardless of how the ethics debate resolves, spending on autonomous weapons is accelerating, and the Iran crisis keeps upward pressure on defense budgets. PLTR, LMT, RTX, and Anduril (pre-IPO) are the primary beneficiaries. This is a decade-long trend, not a trade.
