🧩 Philosophy 2d ago · David Scott Krueger

On the political feasibility of stopping AI

Less Wrong
A common thought pattern people fall into when thinking about AI x-risk is approaching the problem as if the risk isn't real, substantial, and imminent, even when they believe it is. Thinking this way makes it impossible to imagine people's natural responses to the horror of what is happening with AI. This sort of thinking might lead one to view a policy like getting rid of advanced AI chips as "too extreme," even though it's clearly worth it to avoid (e.g.) a 10% chance of human

