The more I see these issues, the more I think the problem is with gradient descent.
It’s like…
Imagine you have a machine draped in a sheet. Machine learning, for all the bells and whistles about attention blocks and convolutional layers, is doing gradient descent: still playing “better or worse.” But fundamentally it’s not building its understanding of the world from “below”. It’s not taking blocks or fundamentals and combining them. It goes the other way about it: it takes a large field and tries to build an approximation that captures the folds that whatever is under the sheet is creating, but it has not one clue what lies under the sheet or why some particular configuration should result in such folds.
There was a really interesting critique on this matter a few weeks ago, I forget where. Also, the half-glass-of-wine issue further highlights the matter. You can papier-mâché over the problem, but you’ll not overcome it down this alley we’ve taken.
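The sheet metaphor can be made concrete with a toy sketch (an illustration of the point, not how any real system is built): gradient descent fits a generic curve to samples from a hidden generator it never gets to see.

```python
import math

# "Under the sheet": a hidden generator the learner never sees directly.
def hidden_process(x):
    return math.sin(3 * x)

# The learner only gets samples of the draped surface...
data = [(i / 50, hidden_process(i / 50)) for i in range(-50, 51)]

# ...and fits a generic degree-5 polynomial by gradient descent on
# mean squared error, with no concept of "sine" or why the folds exist.
w = [0.0] * 6  # polynomial coefficients, all starting at zero

def predict(w, x):
    return sum(c * x**k for k, c in enumerate(w))

lr = 0.1
for _ in range(3000):
    # gradient of mean squared error w.r.t. each coefficient
    grad = [0.0] * len(w)
    for x, y in data:
        err = predict(w, x) - y
        for k in range(len(w)):
            grad[k] += 2 * err * x**k / len(data)
    w = [c - lr * g for c, g in zip(w, grad)]

mse = sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)
# The fit gets numerically close ("better or worse"), yet nothing in w
# represents the sine wave actually generating the data.
```

The polynomial ends up tracking the folds closely while having no representation of the process producing them, which is the asymmetry being pointed at.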
Depends. Pure LLM, sure, you are right. LLMs are a terrible way to “store” information.
Coupling LLMs with a decent data source, on the other hand, isn’t such a terrible idea. E.g., answering the question with a Google search summarized by an LLM can work.
The bigger issue here is (a) when it doesn’t search but does everything locally and (b) that the site owners now lose traffic without compensation.
or (c) if scammers can manipulate which phone numbers get displayed in the summary
https://www.zdnet.com/article/scammers-have-infiltrated-googles-ai-responses-how-to-spot-them/
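The “search summarized by LLM” pattern above can be sketched in a few lines. Everything here is a hypothetical stand-in: `web_search` and `llm_summarize` are placeholder functions, not any real search or model API.

```python
# Toy sketch of the "search + LLM summary" pattern.
# web_search and llm_summarize are hypothetical stand-ins:
# a real system would call a search API and a language model.

def web_search(query):
    # Stand-in: a real implementation would hit a search engine
    # and return (title, snippet, url) results for the query.
    return [
        ("Gradient descent - overview",
         "Gradient descent iteratively updates parameters "
         "in the direction that reduces the loss.",
         "https://example.org/gd"),
    ]

def llm_summarize(question, results):
    # Stand-in for an LLM call: here we just stitch snippets together
    # and cite the sources, so the answer is grounded in retrieved
    # pages rather than in model weights alone.
    snippets = " ".join(snippet for _, snippet, _ in results)
    sources = ", ".join(url for _, _, url in results)
    return f"{snippets} (sources: {sources})"

def answer(question):
    results = web_search(question)
    return llm_summarize(question, results)

print(answer("what is gradient descent?"))
```

Keeping the source URLs attached to the summary is also the hook for problems (b) and (c): it’s where attribution, traffic, and spoofed contact details all live.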