The Pentagon Wants AI to Police Campus. Fine. Show Us the Rulebook.
United States – April 21, 2026 – The Pentagon wants AI to flag risky research ties; without due process, the algorithm becomes the new loyalty board.
I was parked in a public library, the kind with dust in the vents and civic faith in the stapler. On my screen: another government attempt to solve an oversight shortage with software. When power is in a hurry, guardrails always seem to be “phase two.”
Pentagon says AI will screen Pentagon-funded academics for China ties
Defense News reports the Pentagon is moving toward computer screening, including AI tools, to vet military-funded academics for problematic foreign ties, with China as the headline concern. The impetus is painfully familiar: a watchdog found oversight staffing was badly outmatched by the volume of awards and disclosures that need review.
This is the “easy button” genre. Except this button can freeze grants and scorch reputations.
Why the Pentagon is reaching for automation
The Department of Defense funds a vast amount of fundamental research. It wants innovation fast, and it wants adversaries not to siphon it off faster. Congress has warned about research security for years, and a 2025 House Select Committee report said it identified roughly 1,400 papers that acknowledged DoD support while involving collaboration with PRC entities, arguing DoD policies were fragmented and inconsistently enforced.
Then the math problem arrives: per Defense News, an inspector general evaluation highlighted thin staffing compared with the number of awards requiring scrutiny. So the Pentagon says computers will help do the sorting.
A January 7, 2026 memorandum from the office overseeing defense research and engineering points components toward tighter risk-based security reviews. It explicitly calls for developing automated vetting and continuous-monitoring capabilities, building a common research grant database, and conducting spot checks and reporting.
The Paine test
Does this expand liberty or concentrate power? Automation that surfaces real deception while preserving due process is a guardrail. Automation that quietly widens surveillance and denial decisions behind a dashboard is power with a user interface.
The tradeoff: speed versus fairness
Security is not imaginary. Spies exist, and technology transfer is real. But the moment an algorithm triages “trustworthiness,” false positives become policy, and those false positives land on actual people: grad students, tenure files, labs on deadlines, immigration paperwork.
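The base-rate arithmetic makes the point concrete. Here is a minimal sketch with entirely hypothetical numbers, not anything the Pentagon has published: a screener that catches 95% of true cases and wrongly flags 5% of innocent people, run against a pool of 50,000 funded academics of whom 0.1% actually conceal a problematic tie.

```python
# Hypothetical base-rate sketch: even an accurate screener, applied to a
# mostly-innocent population, produces flags that are overwhelmingly false.
researchers = 50_000        # assumed pool of funded academics
base_rate = 0.001           # assumed share with genuinely hidden ties (0.1%)
sensitivity = 0.95          # assumed: 95% of true cases get flagged
false_positive_rate = 0.05  # assumed: 5% of innocent people get flagged

true_cases = researchers * base_rate                             # 50 people
true_flags = true_cases * sensitivity                            # ~48 caught
false_flags = (researchers - true_cases) * false_positive_rate   # ~2,498 flagged wrongly

precision = true_flags / (true_flags + false_flags)
print(f"Total flags raised: {true_flags + false_flags:,.0f}")
print(f"Share of flags that are real: {precision:.1%}")          # ~1.9%
```

Under those assumptions, roughly 98 of every 100 flags land on someone who did nothing wrong. Change the numbers and the ratio moves, but the shape of the problem does not.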
This is also how the United States repeats itself. We build a blunt tool for a real threat, get impatient with case-by-case judgment, and then act surprised when proxies get punished: surnames, nationality, co-authorship networks, old affiliations, a conference trip from years ago. The China Initiative era left scars for a reason.
The Orwell check: “continuous monitoring” as a euphemism
Automated vetting. Continuous monitoring. Risk-based review. Common repository. Clean language, big consequences. What data feeds the model? Who sees the outputs? How long is it kept? Can a person see, correct, and appeal before the penalty hits?
Per the Defense News reporting, the Pentagon declined to provide specifics about criteria and weighting for threat assessments. That might be normal inside the building. It is not good enough when civilians and universities are on the receiving end.
Guardrails before the software gets a badge
If any screening is automated, rules should be bright-line and public: human judgment as final decision-maker with documented reasoning; notice and an appeal process with real timelines; a narrow data diet; independent audits for bias and error rates reported to Congress and made public to the maximum extent possible; and hard limits on retention and sharing, because a risk flag can become a career-long stain.
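An “independent audit for bias and error rates” is not an exotic ask. Here is a minimal sketch of what an auditor could compute, assuming they receive the screener’s decisions alongside human-adjudicated ground truth; every field name below is hypothetical, not a real DoD schema.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute per-group false-positive and false-negative rates.

    `records` is an iterable of dicts with hypothetical fields:
      group   -- audit cohort (e.g., nationality or co-authorship proxy)
      flagged -- bool, did the automated screener flag this person?
      guilty  -- bool, did human adjudication confirm a real violation?
    """
    counts = defaultdict(lambda: {"fp": 0, "innocent": 0, "fn": 0, "guilty": 0})
    for r in records:
        c = counts[r["group"]]
        if r["guilty"]:
            c["guilty"] += 1
            if not r["flagged"]:
                c["fn"] += 1  # real case the screener missed
        else:
            c["innocent"] += 1
            if r["flagged"]:
                c["fp"] += 1  # innocent person the screener flagged
    return {
        g: {
            "false_positive_rate": c["fp"] / c["innocent"] if c["innocent"] else None,
            "false_negative_rate": c["fn"] / c["guilty"] if c["guilty"] else None,
        }
        for g, c in counts.items()
    }
```

If the false-positive rate for one cohort runs several times another’s, the screener is punishing proxies rather than conduct, exactly the China Initiative failure mode. That disparity is the number that should go to Congress, on a schedule, in public.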
If you were the researcher getting flagged, what due process would you insist on before you called it fair?