original: https://www.linkedin.com/posts/ericswise_read-over-the-leaked-meta-docs-about-their-activity-7348690334488403968-Ryrz

Text: Post on LinkedIn by Eric Wise:

Read over the leaked Meta docs about their AI chatbot program.

If I were a villain, this would be an amazing strategy.

First, encourage people to isolate themselves from socializing with real people in the real world. This is easy to do by stoking social anxiety and fear of rejection.

Next, addict them to scrolling: quick dopamine hits that create a compulsive need to always be processing, with no time to reflect.

Then, slowly start replacing genuine human connection with engineered engagement and simulated intimacy. (We are currently here, with sycophantic AI). Remember that fear of rejection and social anxiety? Your AI pal won’t ever judge you.

After that, don’t forget about all the data! We’re already seeing people share their fears, desires, and insecurities in these tools. This will be mined and analyzed.

Finally, we can use the data and dependency on these tools for social validation to manipulate the opinions and behavior of the victims… I mean, users, in service of the highest bidder, whether it is a corporation or government.

I hope that enough people will remain rational and urge elected officials to protect the vulnerable from what is likely to come. Keep in mind that two of the big AI players (Google and Meta) make most of their money from ad revenue. They are 1000% incentivized to skirt the line of bad behavior wherever regulation doesn't reach.

  • Pennomi@lemmy.world · 4 days ago

    Maybe it was a mistake to use an entire generation of the world’s brightest minds to build better ad optimization.

    We easily could have been to Mars already.