🧩 Philosophy 5h ago · Robi Rahman

Catching illicit distributed training operations during an AI pause

LessWrong
Last year, my colleagues on MIRI’s Technical Governance Team proposed an international agreement to halt risky development of superhuman artificial intelligence until it can be done safely. The agreement would require all clusters of AI chips with more computing power than 16 H100 GPUs to be registered with a coalition of states, led by the US and China, that would monitor their operations to ensure they aren’t being used for unsafe AI development. In my opinion, the proposal is impressively well…
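The registration threshold above is stated in hardware terms: aggregate computing power exceeding that of 16 H100 GPUs. A minimal sketch of such a threshold check might look like the following. Note the H100 throughput figure is an assumption (roughly NVIDIA's published dense BF16 peak), and the function name and interface are illustrative, not taken from the MIRI proposal itself.

```python
# Assumed dense BF16 peak throughput per H100, in TFLOP/s
# (approximate; actual achievable throughput varies by workload).
H100_BF16_TFLOPS = 989.0

def requires_registration(gpu_count: int, tflops_per_gpu: float) -> bool:
    """Return True if a cluster's aggregate peak throughput exceeds
    that of 16 H100s -- the proposal's stated registration threshold."""
    threshold = 16 * H100_BF16_TFLOPS
    return gpu_count * tflops_per_gpu > threshold

# A cluster of exactly 16 H100s sits at, not above, the threshold;
# 17 H100s (or fewer, faster chips) would exceed it.
print(requires_registration(16, H100_BF16_TFLOPS))  # False
print(requires_registration(17, H100_BF16_TFLOPS))  # True
```

In practice a verification regime would measure compute in FLOP/s rather than chip counts, since heterogeneous clusters of weaker chips can match the same aggregate throughput; the chip-count framing here is just the proposal's shorthand.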

