🧩 Philosophy 2h ago · James Newport

Bridging the Gap on AI Safety Policy

LessWrong
In February, the Swift Centre for Applied Forecasting launched a competition designed to bridge the gap between abstract AI safety research and the realities of government decision-making. See the original post here.

Most AI policy work today functions as a literature review of technical risks. While valuable, this rarely moves the dial for a policy official who has 15 minutes to read a brief and 48 hours to make a recommendation. We wanted to test a different model: forecasting-led, decision-rea…

