ChatGPT is just too dangerous for teenagers
BLOOMBERG
When tech companies allow the public full access to open-ended AI that keeps them engaged with persistent memory and human-mimicking empathy cues, they risk creating unhealthy attachments to that technology.
A better approach would be to release narrow versions of ChatGPT for under-18s, restricting conversations to subjects like homework and blocking them from getting personal.
Clever users might still jailbreak the bot into talking about loneliness, but the technology would be less likely to go off the rails. OpenAI recently introduced parental controls and is testing age-verification technology on a small portion of accounts, a spokesperson tells me.
It should go further by preventing open-ended conversations with teens altogether. That would get ahead of future regulations that look set to treat emotional manipulation by AI as a class of consumer harm.

