The fediverse used to feel pretty anti-AI, but over the past month or two I’ve noticed a LOT of generated memes and images, and they tend to get upvotes.
Has there been a sudden culture shift here? Or is there a substantial percentage of people just unable to tell the difference anymore?
I don’t know about specific instances, but AI has both good and bad sides, so IMO it’d be stupid to just take a black-and-white stance.
Most loudmouths don’t know what they’re talking about either (on both sides).
That doesn’t really help me or add anything to the conversation. I already have views on AI; I was really just asking about the dynamics and cultures of different instances, because I find learning about those cultural differences interesting.
I’m personally not a fan. It’s a commercial product built on the theft of creatives’ intellectual labor, and the primary selling point of generative AI is that it can replace the people who do that creative labor. I’ve tried using it at various points, and it straight up made stuff up and ended up not helping me find what I was looking for at all. I tried to use it to generate practice text I could translate into Japanese for language learning, and it constantly used words other than the ones provided, words I didn’t know in Japanese.
It has hypothetically useful use cases that I pretty much never see anyone actually implement, and it feels very clear that the only reason anyone is investing in it is that it can reduce the need to pay actual humans, generating more money for people who already have tons of it, while wasting huge amounts of electricity and resources.
Telling me, apropos of nothing, that having a stance other than neutral is “stupid” doesn’t add anything, give me anything to consider, substantiate any stance, or provide any details. I don’t really need to know that you think I’m stupid for not liking AI.
One useful use case that’s actually getting a lot of use is roleplay: using AI to generate a bot to roleplay with, and maybe images to add flavour. It’s something people like to do, and it’s totally harmless.
Like, yes, the LLM was trained on GRRM’s books without his explicit consent, and now someone is roleplaying a fantasy scenario with Jon Snow, but who cares?
It’s not like GRRM is available to be hired as a play partner, and no one is making a profit off it, especially if people just self-host the models. People are just having fun, and the AI isn’t substituting for anyone, since people didn’t hire “actors” to play their roleplay sessions anyway.
And it’s not like people who use it like this even post the results on social media and call themselves “AI artists” or anything like that. They just play for themselves or their group of friends, and at most you can share the “bot card” online so others can use it.
There’s really nothing good about AI if you really look into it. MAYBE medical advances, and that’s it.
Translation tools (like DeepL and Google Translate), proof assistance for mathematicians, camera settings optimisation, data analysis assistance in pretty much any field of research, anomaly detection, compression algorithms, ADAS features like lane following or self-parking. I can’t remember the specifics, but I know Nokia uses ML/AI methods for signal transmission/reception optimisation. Also noise removal, image recognition for various purposes, a system I recall for automatic tree pruning, etc, etc.
And before I get the usual “only GenAI is AI”: the underlying methods for creating a generative model and something like a model that detects street signs or abnormalities in medical scans are based on the same principles; they are the same field of computer science.
There are a massive number of positive uses, in scientific research and other pattern-matching tasks, that all involve using AI to help narrow down what to focus on. All of those use AI as a way to filter and group information, not as the end result, unlike the current trend of AI being shoved into everything.
Heck, there are some positive uses that could be built with the right guardrails, like a supplemental tool when learning a language (with an educator for oversight!) or a natural-language front end for something created through an algorithm that returns accurate results.
Mainly, the exact opposite of what is being forced on everyone right now: inaccurate slop that is full of errors but presented as reliable and helpful.
Is there any resource I can read or start from that’s useful for the average person? Just pointers, so I don’t go down slop rabbit holes, which I’m sure there are a lot of.
Not sure how detailed you want to get, but the two that I know of off the top of my head are looking for exoplanets and for signs of where humans used to live. Here are a couple of easy reads on those applications.
https://blog.tensorflow.org/2019/11/identifying-exoplanets-with-neural.html?m=1
https://www.themirror.com/news/world-news/groundbreaking-ai-uncovers-lost-ancient-945182
Useful medical applications are similar, where the pattern matching can be used to narrow down what to look for, but there is a human step afterwards to verify.
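To illustrate that “flag, then verify” pattern in the simplest possible terms, here’s a minimal sketch. It uses made-up synthetic data and plain z-scores rather than any real medical or astronomical model; the point is just that the automated step narrows a large set of measurements down to a few candidates, and a human makes the final call on each one:

```python
import numpy as np

def flag_candidates(values, threshold=3.0):
    """Return indices of measurements whose |z-score| exceeds the threshold.

    The statistics only narrow the search space; a human reviews
    every flagged candidate and makes the actual decision.
    """
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.flatnonzero(np.abs(z) > threshold)

# Hypothetical sensor readings: mostly normal, with two injected outliers.
rng = np.random.default_rng(0)
readings = rng.normal(loc=100.0, scale=5.0, size=1000)
readings[[42, 731]] += 40.0  # simulate two genuine anomalies

candidates = flag_candidates(readings)
print(candidates)  # a handful of indices out of 1000, ready for human review
```

Real systems replace the z-score with a trained model, but the shape is the same: the machine reduces 1000 items to a short list, and the expert time goes into verifying that list instead of scanning everything.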
Thanks. Starting easy is fine. At least I now have a place to start from.
They have been doing machine learning for novel proteins for over 15 years now. “AI” is just a buzzword grant writers have to add these days to have any chance at funding, is all.
I have to correct you here: machine learning (AI) is extremely important in research. There is just no doubt about it.
Are AI image generators beneficial for society? Probably; I have artist friends who use AI images to help them paint, for example. But does it outweigh the cost? Dunno.
Is AI slop beneficial? Probably not :-)