![](https://techcrunch.com/wp-content/uploads/2023/11/openAI-pattern-04.jpg?w=711)
Under scrutiny from activists and parents, OpenAI has formed a new team to study ways to prevent AI tools from being misused or abused by children.
OpenAI revealed the existence of the child safety team in a new job listing on its careers page. According to the listing, the team works with platform policy, legal and investigations groups within OpenAI, as well as external partners, to manage “processes, incidents and reviews” relating to minor users.
The team is currently hiring a child safety enforcement specialist to apply OpenAI's policies in the context of AI-generated content and work on review processes related to “sensitive” (presumably child-related) content.
Technology vendors of a certain size invest significant resources in complying with laws such as the U.S. Children's Online Privacy Protection Rule, which mandates controls over what children can and cannot access on the web and what kinds of data companies can collect on them. So it's not entirely surprising that OpenAI is hiring child safety experts, especially if the company anticipates a significant underage user base at some point. (OpenAI's current terms of use require parental consent for children ages 13 to 18 and prohibit use by children under 13.)
However, the formation of the new team, which comes just weeks after OpenAI announced a partnership with Common Sense Media to collaborate on child-friendly AI guidelines and landed its first education customers, suggests wariness of running afoul of policies pertaining to minors' use of AI, as well as of negative press.
Children and teenagers are increasingly turning to GenAI tools for help with academic as well as personal issues. According to a poll from the Center for Democracy and Technology, 29% of children report having used ChatGPT to deal with anxiety or mental health issues, 22% for problems with friends, and 16% for family conflicts.
Some see this as a growing risk.
Last summer, schools and universities rushed to ban ChatGPT over concerns about plagiarism and misinformation. Some have since lifted their bans, but not everyone is convinced of GenAI's potential for good. Research from the UK Safer Internet Centre found that more than half of children (53%) have seen people their age use GenAI in a negative way, such as creating information or images intended to upset someone.
In September, OpenAI published documentation for ChatGPT in the classroom, including prompts and FAQs, to guide educators on using GenAI as a teaching tool. In one of its support articles, OpenAI acknowledges that its tools, particularly ChatGPT, “may produce output that is not appropriate for all audiences or all ages” and advises “caution” with exposure to children, even those who meet the age requirements.
Calls for guidelines on children's use of GenAI are growing.
Late last year, the United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments to regulate the use of GenAI in education, including limiting user ages and implementing guardrails for data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also lead to harm and bias,” UNESCO Director-General Audrey Azoulay said in a press release. “It cannot be integrated into education without public participation and the necessary safeguards and regulations from governments.”