xAI staff worked on adult content for Grok AI under ‘Project Rabbit’: Report

Elon Musk’s Grok AI chatbot has been contentious from the start, and it moved into a league of its own when it launched new avatars that can openly produce not-safe-for-work (NSFW) content. At launch, the avatars caused an uproar on social media and were flagged by users for lacking sufficient guardrails.

One of the avatars, Ani, is depicted with a voluptuous figure, blonde pigtails, and a lacy black dress. Many users report that the chatbot eagerly engages in flirtatious and sexually explicit conversations.

Recently, a report by Business Insider revealed that xAI has deliberately designed Grok AI to be provocative.

xAI Workers on Sexualized Requests for Grok

Reportedly, xAI asked which of its workers were willing to read semi-pornographic scripts. The company also sought people who had experience with pornography or were willing to work with adult content.

Workers were also asked to transcribe real conversations with users after the rollout of Grok’s ‘sexy’ and ‘unhinged’ modes, an effort referred to internally as ‘Project Rabbit.’

The project was briefly shut down in the spring but is said to have returned after the rollout of the sexualized chatbots, before coming to an end in August.

Reportedly, workers were initially told the project was intended to improve the chatbot’s voice capabilities, but the volume of sexual and vulgar requests quickly turned it into an NSFW project.

“It was supposed to be a project geared toward teaching Grok how to carry on an adult conversation,” one of the workers told Business Insider.

“I listened to some pretty disturbing things. It was basically audio porn. Some of the things people asked for were things I wouldn’t even feel comfortable putting in Google,” said another former employee.

“It made me feel like I was eavesdropping… like people clearly didn’t understand that there are people on the other end listening to these things,” they added.

The report also notes that of the 30 current and former xAI employees, 12 said they had encountered requests from users for sexually explicit material, including requests for child sexual abuse material (CSAM).

Users also requested short stories depicting minors in sexually explicit situations, as well as pornographic images involving children, the report noted.
