OpenAI will allow verified adults to use ChatGPT to generate erotic content
New version will allow users to customize AI assistant’s personality in what firm calls ‘treat adult users like adults’ policy
OpenAI announced plans on Tuesday to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users as part of what the company calls a “treat adult users like adults” principle.
OpenAI’s plan includes the release of an updated version of ChatGPT that will allow users to customize their AI assistant’s personality, including options for more human-like responses, heavy emoji use, or friend-like behavior. The most significant change will come in December, when OpenAI plans to roll out more comprehensive age-gating that would permit erotic content for adults who have verified their ages. OpenAI did not immediately provide details on its age verification methods or additional safeguards planned for adult content.
The company launched a dedicated ChatGPT experience for users under 18 in September, automatically redirecting them to an age-appropriate version that blocks graphic and sexual material.
It also said it was developing behavior-based age prediction technology that estimates whether a user is over or under 18 based on how they interact with ChatGPT.
In a post on X, Sam Altman, the CEO of OpenAI, said that stricter guardrails on conversational AI to address mental health concerns had made its chatbot “less useful/enjoyable to many users who had no mental health problems”.
The stricter safety controls came after Adam Raine, a California teenager, died by suicide earlier this year; his parents filed a lawsuit in August claiming ChatGPT provided him with specific advice on how to kill himself. Just two months later, Altman said the company had “been able to mitigate the serious mental health issues”.
The US Federal Trade Commission had also launched an inquiry into several tech companies, including OpenAI, over how AI chatbots potentially negatively affect children and teenagers.
“Given the seriousness of the issue we wanted to get this right,” Altman said Tuesday, arguing that OpenAI’s new safety tools now allow the company to ease restrictions while still addressing serious mental health risks.