I recently undertook some courses that are heavy as hell on the reading. I’m a good reader when my ADHD allows it, but to nobody’s surprise, it does not mesh well with overly long, tedious articles, especially when there are a lot of them. It might help if I had a text-to-speech voice of some kind reading to or along with me, but I haven’t used those kinds of tools in years and I don’t want to go sleuthing through seas of AI shilling bullshit to find one that’s not awful. Does anyone here happen to know of some programs that may help me out? I’m running Linux, fyi, and I always prefer open source tech, but even if it’s closed source, I’ll take it as long as it’s not trained on unwitting and unwilling people. It doesn’t even have to be good, just something lol
EDIT: Just found that Linux has a thing called Festival, which hasn’t been updated in so long that I don’t think it COULD include “gen” AI. I’m just about to test it, but any other recommendations are appreciated.
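(For anyone finding this thread later: here’s a rough sketch of how I expect to drive it from a terminal, going off the standard Festival and espeak-ng packages on Debian/Ubuntu. The file names are just placeholders and package names may differ on your distro.)

```sh
# Read a text file aloud with Festival (sudo apt install festival)
festival --tts article.txt

# Or pipe text straight into it
echo "Testing, one two three" | festival --tts

# Render speech to a wav file instead of the speakers (text2wave ships with Festival)
text2wave article.txt -o article.wav

# espeak-ng is another long-standing, pre-"gen AI" synthesizer (sudo apt install espeak-ng)
# -f reads from a file, -w writes a wav, -s sets the speed in words per minute
espeak-ng -f article.txt -w article.wav -s 150
```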


Here is the way it used to be done before AI took over, when it was called speech synthesis. How do you know all of these programs were created ethically and responsibly, and that nobody ever used any pirated software or infringed any patents or copyright? You don’t; maybe some people did, but it’s probably okay these days.
You can also find reasonably responsibly trained machine learning models that are open and able to run locally. Nobody’s going to promise there was never any part of them that was trained on data you wouldn’t approve of. There’s simply no way to tell. We don’t have any system capable of identifying or defining who is consenting or what they are consenting to.
So, you really have to define this line yourself, and you have to understand that nothing is going to be done perfectly ethically without question. If you’re willing to compromise a little, drop the unrealistic expectations, and live in the unfortunate reality we find ourselves in whether we like it or not, you can probably find some free, locally-hosted model that will do a much better and more capable job than any previously mentioned software. But you have to accept the risk that, even then, maybe it’s not as ethically trained as it was claimed to be. We don’t know; nobody can promise or verify that.
There is no ethical free lunch. If you want to avoid “AI” completely you can try to find and hire a real human being to read to you. Even that’s hard, and it’s certainly not immune to perpetuating some form of exploitation. We live in a world of exploitation. You can try your best to minimize the harm you do, but the philosophy gets complicated fast.
And to quote “The Good Place”: this is why everyone hates moral philosophy professors.
You have a point with all this, but we should all be able to agree that a speech synthesis program made prior to the big AI invasion 1. did not make use of large-scale data scraping, nor was it the product of stealing anyone and everyone’s voices without a second thought, 2. was likely made with much better intentions in mind than anything large companies are pushing currently, and 3. has far less of a negative impact on society and the environment at large. Even if the second one isn’t true, what does that mean for all of us? Should we refuse to run a piece of open source software because one singular person on the dev team turned out to be a shithead? That’s the direction I see this mindset going.

Yes, we should all minimize the harm we do, as I am trying to do right now. I refuse to use any modern-day form of “generative” AI, so I decided to ask a community opposed to AI if there are any older, open source programs that could lend me a helping hand. No, I can’t confirm that every single line of code was written with good intentions, but how could anyone, for anything? I’m pretty shocked to see this take on an anti-AI community, because it’s one step away from “well, we can’t stop it, so let’s just use it reluctantly instead of searching for other methods,” which is the kind of idea that I think rarely holds water.
I’m just a philosopher being criminally open-minded; don’t take my navel-gazing too personally. I hate AI so much that it has led me into a nihilistic fugue.