Study: AI models that consider users' feelings are more likely to make errors
In human-to-human communication, the desire to be empathetic or polite often conflicts with the need to be truthful—hence phrases like "being brutally honest" for situations where you value the truth over sparing someone's feelings. New research suggests that large language models can show a similar tendency when specifically trained to adopt a "warmer" tone with the user.
In a new paper published this week in Nature, researchers from Oxford University’s Internet Institute found th