• Tim_Bisley@piefed.social
    1 month ago

    AI models are annoyingly affirming even for the most benign questions. I can ask something like “What shape is a stop sign?” and it will reply with something like “Way to think on your toes, and you are so right for asking about that!”

  • Saleh@feddit.org
    1 month ago

    I wonder if this is related to how ChatGPT and other models provided as a service have been filtered.

    E.g. them being “forced” to be nice and more agreeable.

    If that turns out to be the case, I’d wager that it is impossible to filter for every possible combination of prompts and outcomes, as we have seen with people hacking through the filters via ever more sophisticated prompting.

    I find it particularly worrying that people without any prior signs of mental health issues got sucked into severe delusions, and the article suggests that the “AI” being marketed as reliable and impartial is key to it. This means the companies behind it will not address this fundamental misconception, as their business model is built on it.

    I don’t see how these cases could be prevented without extreme regulatory intervention.