People recognized that random answers on the internet tended to come from questionable sources. AI answers carry a sense of authority, and then back up that authority with speech patterns and confidence we are trained to trust.
Web searches were naturally met with scrutiny, but LLMs lean into habits and patterns that convince us they are right, often subtly.
Did people recognize random answers on the internet as lies when it was new? Of course not. We collectively grew that skill organically as we all came online.
The next version is completely different - and exactly the same
This sarcasm is completely unwarranted.
This sarcasm is completely and wholly warranted.
With web searches we learned which sources were likely to be reliable and were able to dismiss the obviously shitty sites.
How do you learn to identify which answers from the same LLM are likely to be wrong?