Catching illicit distributed training operations during an AI pause
Last year, my colleagues on MIRI’s Technical Governance Team proposed an international agreement to halt risky development of superhuman artificial intelligence until it can be done safely. The agreement would require all clusters of AI chips with more computing power than 16 H100 GPUs to be registered with a coalition of states, led by the US and China, that would monitor their operations to ensure they aren’t being used for unsafe AI development. In my opinion, the proposal is impressively well