Protecting Cognitive Integrity: Our internal AI use policy (V1)
We (at GPAI Policy Lab) want to share our V1 policy as an invitation for pushback. It is motivated partly by our extrapolations of AI capabilities, partly by internal conversations about their effects on cognition, and partly by some empirical evidence. I think the expected cost of being somewhat over-cautious here is lower than the cost of being under-cautious, and the topic deserves considerably more attention than it's currently getting. I'd love to see more orgs publish their own policies on this, both to co