🧩 Philosophy 1d ago · Juliana Eberschlag

Apply for ARBOx4 [deadline May 8th]

LessWrong
On behalf of OAISI, we're excited to be running our fourth iteration of ARBOx (Alignment Research Bootcamp Oxford), a 2-week intensive designed to rapidly build skills in AI safety. This year, we're considering running two concurrent streams for the first time. ARBOx4 is an in-person, full-time programme running from 28 June to 10 July 2026 at Trajan House in Oxford. We have run three successful ARBOx iterations previously, and are excited to re-open this opportunity for another promising cohort.

