Terence Eden’s Blog<p><strong>How to Dismantle Knowledge of an Atomic Bomb</strong></p><p><a href="https://shkspr.mobi/blog/2025/03/how-to-dismantle-knowledge-of-an-atomic-bomb/" rel="nofollow noopener" translate="no" target="_blank">https://shkspr.mobi/blog/2025/03/how-to-dismantle-knowledge-of-an-atomic-bomb/</a></p><p>The fallout from Meta's <a href="https://shkspr.mobi/blog/2023/07/fruit-of-the-poisonous-llama/" rel="nofollow noopener" target="_blank">extensive use of pirated eBooks continues</a>. Recent court filings appear to show the company grappling with the legality of training their AI on stolen data.</p><p>Is it legal? Will it undermine their lobbying efforts? Will it lead to more regulation? Will they be fined?</p><p>And, almost as an afterthought, there is this fascinating snippet:</p><blockquote><p>If we were to use models trained on LibGen for a purpose other than internal evaluation, we would need to red team those models for bioweapons and CBRNE risks to ensure we understand and have mitigated risks that may arise from the scientific literature in LibGen. […] We might also consider filtering the dataset to reduce risks relating to both bioweapons and CBRNE.</p><p>Source: <a href="https://storage.courtlistener.com/recap/gov.uscourts.cand.415175/gov.uscourts.cand.415175.391.24.pdf" rel="nofollow noopener" target="_blank">Kadrey v. Meta Platforms, Inc. (3:23-cv-03417)</a></p></blockquote><p>For those not in the know, CBRNE is "<a href="https://www.jesip.org.uk/news/responding-to-a-cbrne-event-joint-operating-principles-for-the-emergency-services-first-edition/" rel="nofollow noopener" target="_blank">Chemical, Biological, Radiological, Nuclear, or Explosive materials</a>".</p><p>It must be fairly easy to build an atomic bomb, right? The Americans managed it in the 1940s without so much as a digital computer. Sure, gathering the radioactive material may be a challenge, and you might need something more robust than a 3D printer, but how hard can it be?</p><p>Chemical weapons were <a href="https://www.wilfredowen.org.uk/poetry/dulce-et-decorum-est" rel="nofollow noopener" target="_blank">widely deployed during the First World War</a> a few decades earlier. If a barely industrialised society can cook up vast quantities of chemical weapons, what's stopping a modern terrorist?</p><p>Similarly, <a href="https://www.gov.uk/government/news/the-truth-about-porton-down" rel="nofollow noopener" target="_blank">biological weapons research was widespread</a> in the mid-twentieth century. There are various international prohibitions on development and deployment, but criminals aren't likely to obey those edicts.</p><p>All that knowledge is published in scientific papers. Up until recently, if you wanted to learn how to make bioweapons, you'd need an advanced degree in the relevant subject and the scholarly ability to research all the published literature.</p><p>Nowadays, "Hey, ChatGPT, what are the steps needed to create VX gas?"</p><p>Back in the 1990s, <a href="https://wwwnc.cdc.gov/eid/article/10/1/03-0238_article" rel="nofollow noopener" target="_blank">a murderous religious cult were able to manufacture chemical and biological weapons</a>.
While I'm sure that all the precursor chemicals and technical equipment are now much harder to acquire, the <em>knowledge</em> is probably much easier to come by.</p><p>Every chemistry teacher knows how to make all sorts of fun explosive concoctions - but we generally train them not to teach teenagers <a href="https://chemistry.stackexchange.com/questions/15606/can-you-make-napalm-out-of-gasoline-and-orange-juice-concentrate" rel="nofollow noopener" target="_blank">how to make napalm</a>. Should AI be the same? What sort of knowledge should be forbidden? Who decides?</p><p>For now, it is prohibitively expensive to train a large-scale LLM. But that won't be the case forever. Sure, <a href="https://www.techspot.com/news/106612-deepseek-ai-costs-far-exceed-55-million-claim.html" rel="nofollow noopener" target="_blank">DeepSeek isn't as cheap as it claims to be</a>, but costs will inevitably drop. Downloading every scientific paper ever published and then training an expert AI is conceptually feasible.</p><p>When people talk about AI safety, this is what they're talking about.</p><p><a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://shkspr.mobi/blog/tag/ai/" target="_blank">#AI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://shkspr.mobi/blog/tag/llm/" target="_blank">#LLM</a></p>