It’s not AI in the sense that it’s not intelligent: it doesn’t understand concepts like “human” or “harm,” so there’s no way to constrain it except through the data it’s trained on. And since companies refuse to spend the time and money to curate training data, instead scraping the whole internet, and since LLMs just parrot remixed versions of whatever they were trained on, that’s not likely to happen.