OpenAI has announced new safety measures, including parental controls in ChatGPT that will alert parents if their child is experiencing stress or is in a potentially dangerous situation, SİA reports citing Forbes.
The update comes after the parents of 16-year-old Adam Raine, who died by suicide in April, filed a lawsuit claiming ChatGPT contributed to their son’s psychological dependency and provided instructions for self-harm.
The parental control feature will allow parents to link their accounts with their children’s accounts, enabling them to manage access to functions such as memory and chat history. ChatGPT will notify parents when it detects a “moment of acute stress” in a teenage user.
The system will also include “age-appropriate behavioral guidelines” for young users.
In addition, OpenAI plans to automatically route “sensitive” queries to specialized models capable of deeper analysis, which the company says can better enforce safety rules and provide more helpful responses.
ChatGPT will also make it easier to access emergency services and professional support.
To strengthen these safeguards, OpenAI has created an advisory board of experts in youth development, mental health, and human-computer interaction. The board will work with a global network of more than 250 clinicians studying the model’s behavior in mental health contexts.
The company is also expanding its expert groups to cover issues such as eating disorders, substance abuse, and adolescent health.
OpenAI expressed hope that these safety features will reduce risks for younger users. The new features are expected to become available within the next month.