OpenAI:

Yesterday we reached an agreement with the Pentagon for deploying advanced AI systems in classified environments, which we requested they also make available to all AI companies.

We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic’s. Here’s why.

We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs:

  • No use of OpenAI technology for mass domestic surveillance.
  • No use of OpenAI technology to direct autonomous weapons systems. 
  • No use of OpenAI technology for high-stakes automated decisions (e.g. systems such as “social credit”).

Here is the “relevant” part of the contract (emphasis mine):

The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.

For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

The beginning of OpenAI’s press release (the part I first quoted) is marketing, and I think that’s obvious. What isn’t marketing is the contract, which OpenAI disclosed to the public. Astute readers will remember that Anthropic’s objection to the Defense Department’s proposal centered on the phrase “all lawful purposes,” because that phrase inherently gives the government the final say over what is and isn’t a “lawful purpose.” Anthropic does not want its technology used for mass domestic surveillance or autonomous weapons systems, and it does not want to trust the government to regulate itself. But because Anthropic does not monitor the conversations government officials have with Claude (they might contain classified information), the company wanted the government to agree to its red lines on paper, in the contract itself.

OpenAI is taking a different, far more dubious approach. By the very terms of its contract, OpenAI trusts the government to regulate itself: the government gets discretion to determine what a lawful use is (“…consistent with applicable law, operational requirements, and well-established safety and oversight protocols”). The contract then “clarifies” that OpenAI models will not be used for surveillance or to direct autonomous weapons, but it spells out no enforcement mechanism and no penalty if the government breaks that rule. It mentions only the Constitution, and OpenAI’s sole recourse if the government violates the Constitution is to sue it, which is unlikely. We know from public reporting that Anthropic’s contract does not include the phrase “all lawful purposes,” and instead lays out hard restrictions on acceptable uses. I assume Anthropic’s contract also says the company will stop providing its services to the Defense Department if the department is found in violation of those rules.

The fundamental difference between the two contracts is that Anthropic’s reserves discretion for the company over how its models can and cannot be used, whereas OpenAI’s relies on government self-regulation. OpenAI misleadingly says its contract includes better guardrails, but it does not; it includes only legalese that mentions those guardrails without a concrete enforcement mechanism or penalty. This is not an enforceable set of restrictions. It is merely a statement that OpenAI trusts the government to use its models in a certain approved way, with examples of what does not count as “approved.” Those are not guardrails, and they are certainly not stronger than Anthropic’s. It is no surprise that the Defense Department agreed to OpenAI’s terms, and I would go as far as to say it is disgraceful that OpenAI believes Anthropic should agree to them too.

Sam Altman, OpenAI’s chief executive, initially positioned his company’s contract with the Defense Department as including the same guardrails (“red lines”) as Anthropic’s, a claim many journalists and artificial intelligence researchers met with skepticism. It turns out we were right to be skeptical: these terms deviate significantly from Anthropic’s. Altman’s justification is that a private, unelected corporation should not take precedence over a lawfully elected government, but that is misleading. Anthropic is not dictating how the Defense Department conducts operations, only how Anthropic’s tools are used in combat. That precedent was set long ago in law: private corporations can govern how anyone, including the government, uses their tools. There is nothing illegal about that.

As a final note, I think it is traitorous that OpenAI, instead of standing in solidarity with its friends down the street at Anthropic, entered a contract with the Defense Department. This is the final nail in the coffin for any negotiations between Anthropic and the government, and the beginning of a major surge in Claude users. (Claude is already the most popular app on the App Store as of Saturday evening, surpassing ChatGPT.) Anthropic is a principled company with employees who care about its mission. If the roles were reversed and it were OpenAI in a standoff with the government, I have no doubt Anthropic would be on OpenAI’s side. But, alas, Altman is not a man of principle.