Treasury Wants “Secure AI” in Banking. Fine. Show the Guardrails.
United States – February 19, 2026 – Treasury just blessed “secure AI” for finance; without hard guardrails, risk management becomes a polite name for mass monitoring.
I read government tech announcements the way I read old court opinions: quietly, with a nose for consequences. The headline promises progress. The footnotes promise a new kind of power that swears it is temporary, then starts forwarding its mail to your address.
On February 18, 2026, the U.S. Department of the Treasury announced that it had wrapped up a major public-private initiative focused on strengthening cybersecurity and risk management for artificial intelligence in the financial services sector. Treasury also says it will release six resources throughout February to help financial institutions adopt AI securely and resiliently. That sounds responsible. In American civics, “soothing” often doubles as a warning label.
What Treasury says it built
In Treasury’s telling, the work ran through an Artificial Intelligence Executive Oversight Group, described as a partnership between the Financial and Banking Information Infrastructure Committee and the Financial Services Sector Coordinating Council. Treasury says the effort brought together senior executives from financial institutions, federal and state regulators, and other stakeholders.
The output, according to Treasury, is a set of practical tools covering:
- governance
- data practices
- transparency
- fraud
- digital identity
Treasury also emphasizes support for small and mid-sized institutions, frames the initiative as part of the President’s AI Action Plan, and says the focus is on implementation rather than prescriptive requirements.
The Orwell check: when “risk management” means “more data, fewer questions”
The Orwell check is simple: what new language is being used to make control sound like care? “Secure and resilient AI.” “Practical tools.” “Integrated” approaches to fraud and digital identity. Nobody hears that and thinks “surveillance.” That is the point.
In finance, AI risk management can slide into a familiar pattern: collect more data, share more data, and automate more decisions. Some of that can reduce fraud. Some of it can also build a financial panopticon where the safest way to bank is to look average forever.
The Paine test and the liberty ledger: guardrails or mission creep?
The Paine test asks a rude question: does this expand liberty or concentrate power? Better cybersecurity can expand liberty in the boring, real way: fewer hacks, fewer drained accounts, fewer people spending months proving to a call center that they are themselves.
But the liberty ledger turns red if “security” quietly normalizes cross-institution identity graphs and automated gatekeeping without meaningful appeal. Partnership and “guidance, not rules” can be useful. They can also dilute accountability: when everyone owns the process, nobody owns the failure.
What real guardrails would look like
- Non-performative privacy impact assessments, especially where digital identity is involved.
- Auditability with teeth, including independent assessments and clear accountability for false flags and lockouts.
- Encryption and compartmentalization as a baseline, not a brochure slogan.
- No backdoor mandates via examiner pressure without open public debate.
Publish the resources. Improve defenses. Then invite oversight as if it were part of the design, not a nuisance: Congress can hold hearings, inspectors general can look for mission creep, regulators can publish aggregated outcomes, and civil society can FOIA drafts and read the footnotes like adults in folding chairs.
Because if “secure AI” is going to live inside the pipes of American finance, the public deserves receipts. What, exactly, are we securing, and what, exactly, are we being asked to surrender to get it?