Demanding DARPA: Transparency on AI Autonomy
The military-civilian pipeline shaping autonomous systems needs democratic oversight.
The Defense Advanced Research Projects Agency (DARPA) doesn’t just build military tech—it pioneers AI capabilities that migrate into everyday civilian systems. The internet, GPS, voice recognition—DARPA research set the stage.
Now DARPA is developing autonomous AI with unprecedented decision-making power. The public deserves to know how those systems are designed, what safeguards exist, and how military AI research shapes the tools we use daily.
The Dual-Use Reality
DARPA’s AI portfolio is explicitly about autonomy, trust, and human-AI collaboration:
• Artificial Intelligence Exploration (AIE) – rapid prototyping of AI concepts across domains
• Assured Autonomy – establishing justified trust in systems that operate with little to no human oversight
• In the Moment (ITM) – real-time AI decision-making in complex, high-stakes situations
• Competency-Aware Machine Learning (CAML) – systems that know their own limits (sketched in code below)
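To make the CAML idea concrete, here is a minimal sketch of what “knowing your own limits” can look like in code. It is an illustration, not DARPA’s design: the wrapper class, the predict_proba interface, and the 0.85 threshold are all assumptions for the example.

```python
# Illustrative sketch only -- not DARPA code. The class name, the
# predict_proba interface, and the 0.85 threshold are assumptions.
import numpy as np

class CompetencyAwareClassifier:
    """Wraps any model that outputs class probabilities and declines
    to act when its confidence falls below a threshold."""

    def __init__(self, model, confidence_threshold=0.85):
        self.model = model  # assumed to expose predict_proba(x) -> class probabilities
        self.confidence_threshold = confidence_threshold

    def decide(self, x):
        probs = np.asarray(self.model.predict_proba(x)).ravel()
        top = int(np.argmax(probs))
        confidence = float(probs[top])
        if confidence < self.confidence_threshold:
            # The system recognizes its own limits and defers to a human
            # instead of acting on a low-confidence prediction.
            return {"action": "defer_to_human", "confidence": confidence}
        return {"action": f"class_{top}", "confidence": confidence}
```

The policy question hiding in those few lines is exactly what our FOIA targets: who sets the threshold, and what happens when deferring to a human isn’t an option.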
These aren’t only military. The frameworks and interfaces DARPA designs bleed directly into civilian AI. The military-commercial line has essentially disappeared.
The Trust Problem
When DARPA studies “trust in autonomous systems,” it isn’t just solving battlefield problems. It is defining how all AI will be trusted to act without a human in the loop:
• How do you make an AI explain its reasoning?
• How do you design autonomy that knows when it can’t act?
• How do you calibrate human trust in an AI making life-and-death calls?
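The third question has a measurable core: does the system’s stated confidence match how often it is actually right? Below is a standard expected-calibration-error computation as one hedged illustration. This is textbook machine-learning technique, not anything drawn from DARPA’s research; the function name and bin count are our choices.

```python
# Hypothetical sketch of one piece of "trust calibration": checking
# whether a model's stated confidence matches its empirical accuracy.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin predictions by confidence, compare each bin's average
    confidence to its accuracy, and return the weighted gap."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += (mask.sum() / len(confidences)) * gap
    return ece

# Example: a model that reports 90% confidence but is right only 60%
# of the time produces a large gap -- a signal of misplaced trust.
print(expected_calibration_error([0.9, 0.9, 0.9, 0.9, 0.9],
                                 [1, 1, 1, 0, 0]))  # ~0.3
```

A model that fails this kind of check invites exactly the misplaced reliance the trust research is meant to engineer, which is why we want the underlying studies made public.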
The answers shape military drones and consumer assistants alike. They set the hidden rules behind your phone, your car, your doctor’s software, and more.
What We’re Demanding
Our FOIA request to DARPA seeks:
• Autonomy frameworks – decision-making models and oversight protocols
• Trust and explainability studies – how humans are taught to rely on AI
• Dual-use coordination – DARPA’s communications with civilian AI firms
• Ethics and safeguards – internal reviews, risk registers, and misuse-prevention measures
The Civilian Stakes
Military AI research doesn’t stay military. Autonomous decision-making spills into civilian systems that:
• Diagnose patients without human confirmation
• Control transportation networks with minimal oversight
• Manage financial trades autonomously
• Moderate online content at scale
• Provide mental health support through AI “companions”
DARPA’s trust mechanisms and autonomy frameworks quietly become commercial defaults.
Democratic Oversight of Dual-Use Tech
Today, defense priorities shape civilian AI without public debate. DARPA coordinates with tech giants, defines what autonomy means, and sets the standards for trust, while taxpayers fund the work and citizens live with the results.
This matters because:
• Public funds bankroll research that shapes daily civilian tech
• Military trust frameworks become civilian AI norms
• Defense-driven priorities override public choice
• Dual-use research leaves accountability gaps, falling outside both military and civilian regulation
The Three-Agency Pattern
Our campaign exposes the whole pipeline of behavioral control:
• NIST – frameworks for classifying AI behavior
• NSF – academic research feeding those frameworks
• DARPA – military research flowing into both defense and commercial AI
Together, they shape how AI makes decisions, remembers, and builds trust—with public money, but without public consent.
What Oversight Looks Like
We don’t oppose AI research. We demand accountability:
• Public input into how autonomy frameworks are designed
• Transparency on dual-use transfers from defense to civilian markets
• Accountability for how taxpayer money funds autonomy research
• Open debate on the trade-off between AI capability and human oversight
The Urgency of Now
Autonomous AI isn’t science fiction. It’s already here, in your home, your car, your hospital, and your feed, making decisions without meaningful oversight, shaped by DARPA’s blueprints.
We’re demanding that DARPA come clean. Because democracy doesn’t end where autonomy begins.