White House weighing draft to lift Anthropic security designation, Axios reports
White House draft could allow federal agencies to stop treating Anthropic as a security risk, potentially ending a months‑long dispute over the AI company’s federal blacklist.
The White House is considering a draft policy that would give federal agencies a mechanism to stop classifying Anthropic as a supply‑chain or national security risk, according to a report by Axios published April 29, 2026. The draft is being discussed inside the West Wing as one possible way to resolve an escalating fight between the administration and the AI developer over access to Anthropic’s models. (axios.com)
Origins of the dispute
The conflict dates back to February 27, 2026, when senior administration officials moved to remove Anthropic’s models from government systems and the Department of Defense labeled the company a “supply chain risk.” This designation followed disagreements over whether Anthropic would alter its safeguards to permit broader military or classified uses. (axios.com)
Anthropic pushed back, calling the designation unprecedented for an American firm and signaling that it would contest the decision through litigation and public scrutiny. Tech companies and civil‑liberties groups have also weighed in, warning that such a designation could set a broad precedent for government control over commercial AI. (axios.com)
Details of the draft under consideration
According to people familiar with the discussions, the draft under review would create tailored administrative steps enabling agencies to permit specific Anthropic models or services despite the broader designation. Supporters of the change argue it would let agencies access capabilities needed for national security while keeping legal guardrails in place. (axios.com)
Officials crafting the proposal are reportedly weighing options that range from narrowly scoped waivers for particular programs to broader guidance that would ease procurement restrictions across multiple agencies. The exact text remains in flux and could be revised before any decision is announced. (axios.com)
Federal agencies’ split positions
Government entities are reportedly divided over access to Anthropic’s models, with some national security and intelligence offices pressing to retain access to the company’s most capable systems for defensive and analytical missions. Other agencies have supported the blacklist and sought stricter controls, citing concerns about guardrails and operational transparency. (axios.com)
The split has produced practical complications for procurement and contracts, as agencies and defense contractors weigh the legal risks of continuing to use Anthropic tools against operational needs. That tension has been a key driver behind the White House effort to find a policy compromise. (mayerbrown.com)
Anthropic’s stance and industry reaction
Anthropic has publicly contested the designation and signaled it is prepared to litigate aspects of the decision, arguing that the label exceeds the government’s statutory authority and harms innovation. Company statements emphasize adherence to safety standards and caution against demands to remove safeguards that prevent misuse. (washingtonpost.com)
Industry groups and competitors have watched the dispute closely, with some firms supporting Anthropic on legal grounds and others urging clearer, consistent rules for when the government can restrict access to commercial AI. Legal and trade groups warn that erratic policy-making could chill investment and slow adoption of advanced models across the private sector. (axios.com)
Legal and national‑security considerations
Legal analysts point to several statutory authorities the Pentagon has cited in designating a supplier as a risk, and they say any attempt to reverse or work around the designation may face judicial scrutiny. The government’s approach will need to reconcile executive branch discretion with procurement law and the constitutional protections cited by Anthropic. (mayerbrown.com)
National‑security officials, meanwhile, describe a balancing act: safeguarding sensitive operations while not handicapping agencies that require leading‑edge AI tools for defense, cybersecurity, and intelligence missions. Internal reviews have involved agencies such as the Office of the National Cyber Director and the NSA, which are helping to assess operational risks tied to high‑capability models. (washingtonpost.com)
Path forward and possible timelines
A final decision on the draft could come in the form of executive guidance, a targeted executive order, or interagency procurement directives; policymakers are said to be weighing which instrument would best withstand legal challenge while addressing operational needs. Any formal change will likely undergo White House counsel review and a period of interagency coordination before being published. (axios.com)
Observers say the outcome will be closely watched by technology companies, defense contractors and Congress, which has already held briefings and hearings on advanced AI systems in recent weeks. The stakes include not only access to Anthropic’s models but also broader questions about how the federal government manages risk across commercial AI providers. (axios.com)
The White House deliberations on whether to ease restrictions on Anthropic underscore the complicated intersection of innovation, procurement law and national security policy, and they set the stage for further legal and political contests over how the United States will govern powerful AI systems.