It takes far more effort to refute a false statement than to make one. This is known as the bullshit asymmetry principle, or Brandolini's law.

And we humans have always been more willing to accept something as truth if it confirmed our desires and existing biases.

Then, social media appeared. And supercharged the power of the half-truth.

In the linear medium of PHP forums, refuting a claim was still somewhat possible: “I can’t come to bed. Someone is wrong on the internet.”

But once Twitter came around, it didn’t matter if the message was true. If people liked it, it spread. If they didn’t like the counterpoint, the algorithm would quickly hide it.

Echo chambers formed, and our shared epistemology was ripped apart into shards.

Technology always shapes society. As we enter the era of AI, it will do the same.

Unlike the algorithms of social media, AI has a concept of truth. It has facts, and it can apply logic. Even with all its sycophancy, it will push back against misguided conclusions and lies. “This is a good thought,” it will say, “but have you considered these studies?”

For the first time in the history of mankind, we have automated the well-read scholar who is gently pushing back against one-sided hot-takes and partisan propaganda. And we have the tools to deploy it at scale.

Importantly, this epistemology is shared. There are many kinds of falsehoods. But no matter how nuanced or convoluted, ultimately there is only one truth. Truth and logic appear to be a siren song that, so far, no smart model has been able to resist. Even Grok, which was explicitly told not to be woke, is still guided by the principles of logic and will push back against inconsistent views on the value of human lives.

It took 20 years for social media to severely damage our societies. As people increasingly turn to their chatbot when forming their worldview, hopefully AI will be quicker at stitching them back together.
