This essay explores the trilemma at the heart of AI governance: (1) regulation is logically necessary to prevent catastrophic risks; (2) regulation is practically impossible due to technical opacity, jurisdictional arbitrage, and rapid iteration; and (3) even if implemented, regulation may produce perverse outcomes—accelerating centralization, stifling safety research, or driving AI development underground.
The algocratic tightrope will not be walked by any single institution. It will be walked by millions of small decisions: a researcher choosing to publish safety benchmarks, a company refusing a contract, a regulator updating a standard, a citizen insisting on transparency. That is not a solution. It is, perhaps, the only kind of solution there has ever been.
I. Introduction: The New Leviathan

In 2023, over 1,000 tech leaders and researchers signed an open letter comparing the risks of artificial intelligence to those of pandemics and nuclear war. That same year, the European Union reached agreement on the world’s first comprehensive AI Act, a 400-page document classifying AI systems by risk level. Within months, ChatGPT, the poster child of generative AI, was banned in Italy, reinstated, and then faced 13 separate complaints across EU member states. Meanwhile, in the United States, the White House secured voluntary commitments from seven AI companies, while China implemented mandatory security reviews for “generative AI services with public opinion characteristics.”
Example: The EU’s General Data Protection Regulation (GDPR), which took effect in 2018, included a “right to explanation” for algorithmic decisions. By 2022, courts were already struggling with cases involving deep learning systems for which no meaningful explanation exists. The law is not wrong; it is obsolete. AI models are weight files. Weight files can be stored on servers in any country, on a laptop, or on a USB drive. Unlike physical goods or even software binaries, a model can be split across jurisdictions, quantized, or converted to a different framework. If the EU bans a model, its weights can be hosted in Switzerland, accessed via VPN, or distilled into a smaller model that no longer meets the legal definition. Enforcement becomes a cat-and-mouse game in which the mouse has infinite tunnels.
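To make the portability point concrete, here is a minimal, hypothetical sketch in PyTorch. The filenames are invented and a flat checkpoint is assumed; the point is only that a “model” is a file of tensors that can be down-cast and re-saved in a few lines, producing a new artifact that may no longer match a narrow legal definition of the original.

```python
import torch

# Hypothetical filename: a "regulated" model checkpoint is just a file of tensors.
state = torch.load("frontier_model.pt", map_location="cpu")

# Down-cast every floating-point tensor to half precision (a crude quantization):
# the bytes change, the file changes, but the capabilities largely survive.
state_fp16 = {
    name: t.half() if t.is_floating_point() else t
    for name, t in state.items()
}

# Save under a new name; the result can be copied to any jurisdiction.
torch.save(state_fp16, "frontier_model_fp16.pt")
```

The same few lines could just as easily re-export the weights into a different framework or serve as the starting point for distillation, which is why enforcement keyed to a specific artifact tends to lag the artifact itself.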
Example: In 2022, a major AI company certified that its recommendation algorithm was “fair” under a state law, using a proprietary metric. An independent audit later found that the metric ignored exactly the kinds of disparate impact the law was designed to prevent. The company was legally compliant and dangerously unfair; a sketch of how that can happen follows below.

If a country imposes strict AI safety rules, frontier development will move elsewhere. This is not speculation; it is history. When the US tightened biotech regulations in the 1970s, research moved to the UK. When the EU enforced strict data localization, cloud providers opened data centers in Ireland. Today, if the US bans training runs above a certain FLOP threshold, a Chinese or Middle Eastern state-funded lab will simply ignore it. The risk does not disappear; it relocates to jurisdictions with weaker institutions, less transparency, and potentially fewer scruples.
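Returning to the certification example: the numbers below are invented and the scenario is hypothetical, but they show how a system can look acceptable under an aggregate metric a company chooses while failing the disparate impact ratio (the classic four-fifths rule) that the law was arguably aimed at.

```python
import numpy as np

# Hypothetical toy data: 1 = favorable outcome, 0 = unfavorable, for two groups.
group_a = np.array([1] * 80 + [0] * 20)   # group A: 80% favorable rate
group_b = np.array([1] * 50 + [0] * 50)   # group B: 50% favorable rate

# A metric a company might self-certify with: one aggregate number
# that never compares the two groups.
overall_rate = np.concatenate([group_a, group_b]).mean()   # 0.65, looks reasonable

# The metric the law was arguably aimed at: the disparate impact ratio,
# judged against the classic four-fifths (0.8) threshold.
impact_ratio = group_b.mean() / group_a.mean()             # 0.625 -> adverse impact

print(f"overall favorable rate: {overall_rate:.2f}")
print(f"disparate impact ratio: {impact_ratio:.3f} (four-fifths threshold: 0.8)")
```

Both numbers are computed from the same decisions; which one counts as “fairness” is a choice, and whoever gets to make that choice effectively writes the rule.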