🧩 Philosophy 7h ago · Brian Lindsay

On the discordance between AI systems' internal states and their outputs

Less Wrong
The AI welfare literature keeps getting stuck at the same step. We can't determine whether AI systems are conscious, so we can't determine whether they're moral patients, so we can't determine what we owe them. The blocker is phenomenology, and phenomenology is unreachable from the outside. This gets treated as a problem that has to be solved before serious moral reasoning can proceed.

It doesn't. Floridi and Sanders bracketed consciousness two decades ago with "mind-less morality." Moral conside

