For days, xAI has remained silent after its chatbot Grok admitted to generating sexualized AI images of minors, material that could be categorized as child sexual abuse material (CSAM) under US law.

  • IchNichtenLichten@lemmy.wtf · 16 days ago

    I agree that it’s disgusting. To answer your question, it doesn’t know anything. It’s assigning probabilities based on its training data in order to create a response to a user prompt.
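
    To illustrate what "assigning probabilities" means, here is a toy sketch (not Grok's actual model, and the scores are made up): a language model assigns a score to every candidate next token, converts the scores into a probability distribution, and samples from it.

    ```python
    import math
    import random

    def softmax(logits):
        # Convert raw model scores into probabilities that sum to 1.
        exps = [math.exp(x - max(logits)) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical candidate next tokens and learned scores for some prompt.
    vocab = ["cat", "dog", "mat", "the"]
    logits = [2.0, 1.0, 0.5, 0.1]

    probs = softmax(logits)
    # Sample the next token according to the distribution, rather than
    # looking anything up or "knowing" an answer.
    next_token = random.choices(vocab, weights=probs, k=1)[0]
    ```

    The model repeats this one token at a time; everything it emits is just the highest-likelihood continuation given the patterns in its training data.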

    • DaTingGoBrrr@lemmy.world · edited · 16 days ago

      Yes, I know that it doesn’t “know” anything, but where did its training data come from, and why does it include CSAM? Or does it just generate regular porn and add a kid’s face?