
Latent reasoning models might be a good thing?

Less Wrong
Epistemic status: I think the main point of this post is probably (~80%) false, and there are probably more counterpoints I haven't thought of. I wrote the rest of the post as if my claims were true, for ease of reading. I would appreciate it if you told me where my arguments are wrong!

Latent reasoning models (LRMs, popularized by Meta's Coconut paper and improved on substantially by CODI) do CoT thinking in the model's latent space by skipping the LM head that maps d_model-vectors to a distribution over tokens […]
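The mechanism the excerpt describes can be sketched with a toy numpy model. Everything here is an illustrative assumption, not Coconut's actual architecture: a single tanh layer stands in for the transformer, and the shapes are arbitrary. The point is only the control-flow difference: standard CoT discretizes through the LM head and re-embeds a token each step, while latent CoT feeds the hidden state straight back in.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, vocab = 8, 16

# Toy stand-ins for the model's components (illustrative, not Coconut's real weights)
W = rng.normal(size=(d_model, d_model)) * 0.1
lm_head = rng.normal(size=(d_model, vocab))  # hidden state -> token logits
embed = rng.normal(size=(vocab, d_model))    # token id -> input embedding

def step(h):
    # One-layer stand-in for a full transformer forward pass
    return np.tanh(h @ W)

h0 = rng.normal(size=d_model)

# Standard CoT: decode a token at each step, then re-embed it
h_tok = h0.copy()
for _ in range(4):
    h_tok = step(h_tok)
    tok = int(np.argmax(h_tok @ lm_head))  # discretize through the LM head
    h_tok = embed[tok]                     # everything beyond the argmax is lost

# Latent CoT (Coconut-style): skip the LM head; the hidden state itself
# becomes the next input embedding, so no information is discarded
h_lat = h0.copy()
for _ in range(4):
    h_lat = step(h_lat)

print(h_lat.shape)  # (8,)
```

In the token loop, each step's output passes through an argmax bottleneck; in the latent loop, the full d_model-dimensional state carries forward, which is the property the post's argument turns on.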

