🧩 Philosophy 14h ago · Against Moloch

Foundational Beliefs

LessWrong
I see a lot of AI safety strategies that don’t fully engage with the complexity of the real world—and therefore are unlikely to succeed in the real world.
To take a simple example: many strategies rely heavily on government playing a leading role through regulation and perhaps even nationalization. That’s a reasonable strategy in the abstract, but the recent conflict between DoW and Anthropic raises serious questions about the real-world viability of that approach. Too many people are stuck thin…


More Like This

Philosophy of Microbiology (Stanford Encyclopedia of Philosophy · 4h ago)
Claude Mythos #2: Cybersecurity and Project Glasswing (LessWrong · 11h ago)
The Unintelligibility is Ours: Notes on Chain-of-Thought (LessWrong · 11h ago)
Anthropic is Really Pushing the Frontier, What do I Think About This? (LessWrong · 11h ago)
Slaughterhouse 666: Kicking and A-gouging in the Pus and the Gore and the Fear (3:AM Magazine · 13h ago)
"Close Enough" as a Primitive in Intelligent Systems (LessWrong · 14h ago)