- cross-posted to:
- technology@lemmy.zip
For one month beginning on October 5, I ran an experiment: Every day, I asked ChatGPT 5 (more precisely, its “Extended Thinking” version) to find an error in “Today’s featured article”. In 28 of these 31 featured articles (90%), ChatGPT identified what I considered a valid error, often several. I have so far corrected 35 such errors.


Disagree; Wikipedia is a pretty reliable bastion of facts thanks to its editorial demands for citations, its rigorous style guides, etc.
Can you point out any of these personal fiefdoms, so we can see what you're referring to?