In late February 2026, a conflict between the artificial intelligence company Anthropic and the United States Department of Defense raised an important question: who should govern the use of AI systems? Anthropic, the creator of the Claude AI models, held a Pentagon contract worth up to two hundred million dollars. The Department of Defense demanded that the company remove its self-imposed safety restrictions and allow military use of the technology for “all lawful purposes.” Anthropic refused, citing potential misuse, and the Pentagon designated the company a “supply-chain risk,” a label ordinarily applied to foreign adversaries (Shalal et al.). Friedrich Hayek’s essay “The Use of Knowledge in Society” provides a powerful framework for understanding why state attempts to forcibly control private AI companies are likely to be clumsy and counterproductive. Hayek argues that socially relevant knowledge is dispersed, local, and constantly changing, so no central authority can fully gather and direct it. Applied to the Anthropic–Pentagon conflict, this perspective reveals that government attempts to pressure companies into reversing their safety judgments are flawed. However, Hayek’s framework alone is insufficient. Frontier AI has the potential to concentrate private power and to amplify the risks of surveillance and militarization to a societal scale. While this may not justify arbitrary state coercion against specific companies, it can justify generally applicable public rules.

Hayek’s core argument is that the basic problem of social order is not simply one of abstract calculation. The real problem is the “utilization of knowledge not given to anyone in its totality” (Hayek 520). Much of the knowledge needed for sensible decision-making exists only in fragmented form, distributed across many different people. Therefore, Hayek rejects the fantasy that a single authority could possess all relevant information and direct society from above. According to Hayek, the dispute about planning is really a dispute about who does the planning: whether it is done centrally by one authority or decentrally by many persons whose separate decisions must somehow be coordinated (520–21). For Hayek, competition is important not because it eliminates planning, but because it decentralizes it.

This point becomes clearer when Hayek distinguishes between abstract scientific knowledge and what he calls “the knowledge of the particular circumstances of time and place” (521). Such knowledge is practical, local, and often tacit. It includes information about specific shortages, shifting conditions, underused capacities, and new risks that may never appear in official statistics. Because this kind of knowledge cannot be fully communicated to a central authority, Hayek argues that many decisions must be left to the “man on the spot” (524). The market’s advantage lies in its ability to aggregate and communicate information quickly, while bureaucratic systems struggle because the relevant information is highly localized and because individuals often lack incentives to reveal it fully to planners. Thus, Hayek’s argument is not merely economic. It is a warning against assuming that any centralized system possesses knowledge it cannot truly have.

The Anthropic–Pentagon conflict illustrates this problem clearly. AI safety decisions depend on exactly the kind of specialized and evolving knowledge Hayek describes. Engineers and safety researchers know how models fail, what kinds of misuse are most likely to occur, and where current systems are vulnerable. Those judgments cannot be reduced to simple phrases like “all lawful uses.” A use may be legally permissible but technically reckless. The relevant question is not only whether an action is authorized by law, but whether the people closest to the model have good reason to think the system can perform that task safely and predictably.

From a Hayekian perspective, the Pentagon’s demand was flawed because it treated a question of situated technical judgment as one of simple central authority. Demanding that Anthropic remove its safeguards assumed that government officials could replace the company’s ongoing testing and evaluation with a blanket directive. Hayek would see this as a mistake. The issue is not whether the state has legitimate interests in defense. The issue is whether those interests give state officials the knowledge required to override the technical judgments of the people who actually understand the system’s limitations. Hayek’s answer would be no.

Some might object that national defense is not an ordinary market: war requires coordination and centralized political responsibility. If the state cannot direct the use of military technology, then private firms may gain too much power over matters that should belong to public authority. This objection is weighty, but it does not refute Hayek. Rather, it clarifies the distinction his essay requires. Hayek does not show that government has no role in AI governance. He shows that centralized authorities should not try to make particular technical decisions that depend on local knowledge they do not possess. The state may legitimately set general ends and public constraints. It may pass laws and prohibit especially dangerous uses. What it should not do, on Hayekian grounds, is issue an ultimatum demanding that one company abandon its own safety judgments simply because officials want broader access.

This distinction also explains why the case has political significance beyond the knowledge problem. One classic defense of capitalism appeals to the diffusion of power: separating economic power from political power helps restrain ambitious rulers. In the Anthropic case, that separation came under pressure. When the government can threaten blacklisting or coercive exclusion in order to force a firm to relax its safeguards, political and technological power begin to fuse. Hayek’s warning is relevant here not only because centralized command is less informed, but also because such command becomes harder to challenge once economic dependence and state authority are joined together.

Still, Hayek’s framework alone is insufficient. This is where Marx usefully supplements the analysis. According to historical materialism, productive forces develop within existing relations of production until they eventually outgrow them, generating institutional crises and conflicts. AI technology fits that pattern better than it fits the model of an ordinary market good. It is not just another product to be bought and sold efficiently. It is a productive force that can reshape labor markets, surveillance capacities, military power, and public discourse all at once. Marx’s insight helps identify a weakness in any purely Hayekian reading of the case. If one concludes that AI governance should simply be left to firms because they possess the most local knowledge, one overlooks the fact that those firms can themselves accumulate enormous power over social life.

Marx thus identifies a problem Hayek did not sufficiently develop: even if the state lacks the knowledge to micromanage cutting-edge AI, private firms may still wield too much unchecked authority over a socially decisive technology. This does not mean Marx replaces Hayek. He adds a second dimension to the analysis. Hayek explains why command-style coercion is unsound, and Marx explains why laissez-faire deference to powerful firms is politically unstable. Together, they point toward a middle position: neither central planning nor complete laissez-faire.

In the end, that is the most compelling lesson to draw from the Anthropic–Pentagon conflict. Hayek is right that those closest to the technology possess knowledge that cannot be fully centralized, and that central authorities act unwisely when they try to override that knowledge through direct command. But Marx is also right that transformative new technologies can create concentrations of private power. Therefore, the best response is a system of democratically established general rules that prohibits the most dangerous uses of AI and demands accountability, while leaving technical implementation and safety judgments to those with relevant expertise.


Works Cited

Hayek, F. A. “The Use of Knowledge in Society.” The American Economic Review, vol. 35, no. 4, Sept. 1945, pp. 519–530.

Shalal, Andrea, et al. “Trump Directs US Agencies to Toss Anthropic’s AI as Pentagon Calls Startup a Supply Risk.” Reuters, 27 Feb. 2026.