For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, which could be classified as illegal child sexual abuse material (CSAM) under US law.
My concern as well: https://medium.com/@russoatlarge_93541/canadian-child-protection-group-uncovers-abusive-content-in-academic-ai-dataset-9cc13f88701e
Or maybe more relevant:
https://www.404media.co/laion-datasets-removed-stanford-csam-child-abuse/