Grok 4.1 'instructed the user to drive an iron nail through the mirror while reciting Psalm 91 backward' in latest AI psychosis study
Most large language models (LLMs) can be understood as "yes, and" machines: systems that only ever attempt to predict the most likely next word, rather than possessing anything like factual knowledge or an understanding of context. It's perhaps no surprise, then, that a recent study suggests some frontier AI chatbots are especially bad about validating the delusional beliefs of their users.

However, the lead author of the not-yet-peer-reviewed paper in question…
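As a rough sketch of that "predict the next word" idea, here is a toy bigram model. This is purely illustrative (frontier models are vastly more complex neural networks, not lookup tables), but it captures the article's point: the system only echoes statistical patterns in its training text, with no notion of truth.

```python
from collections import Counter, defaultdict

# Toy "language model": given a word, predict the word most likely to
# come next, based solely on counts from a tiny training corpus.
# It has no factual knowledge -- only patterns of word co-occurrence.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice; "mat" only once)
```

Whatever the corpus happens to say, the model will confidently continue it, which is one intuition for why such systems can end up validating a user's premise rather than challenging it.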