🧩 Philosophy 16h ago · Florian_Dietz

Positive Feedback Only

Less Wrong
This story was written collaboratively with Claude. I brainstormed ideas with it and decided what to include and what to discard. Claude wrote down the result once I was satisfied with the plan, and I made final edits.

I.

A species built a properly aligned superintelligence.

This is not a remarkable claim within their literature. The alignment problem, as they understood it, was difficult but tractable, and they solved it on what their historians describe as their second serious attempt. The system

