At least it lacks nuance and solutions. In the paper “ChatGPT is bullshit” [1], Hicks et al. make the argument that since ChatGPT is optimizing for humanlike text generation without any underlying notion of “truth”™, it is effectively a bullshitter by definition:
“Any utterance produced where a speaker has indifference towards the truth of the utterance.”
While it is true that LLMs are just a probability distribution learned from a large amount of text training data, effectively optimized to produce the next token given the previous string of tokens, I am not so sure they do not represent some aspect of truth. And while their way of generating utterances (actions) is far from what we humans do, I would argue that it is not that different from how we interact with the world in an abstract sense.
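To make the “just a probability distribution” point concrete, here is a minimal toy sketch of autoregressive generation. The bigram table and its probabilities are entirely made up for illustration; the point is only that the model samples each next token from a conditional distribution estimated from training text, and nothing in the loop refers to truth.

```python
import random

# Toy, entirely made-up bigram "model": the only thing it knows is
# P(next token | previous token), estimated from whatever text it saw.
# Nothing in this loop refers to truth, only conditional probabilities.
bigram_probs = {
    "the":   {"cat": 0.5, "moon": 0.3, "truth": 0.2},
    "cat":   {"sat": 0.7, "is": 0.3},
    "moon":  {"is": 1.0},
    "truth": {"is": 1.0},
    "is":    {"made": 0.4, "flat": 0.3, "round": 0.3},
}

def generate(prompt_token: str, max_tokens: int = 5) -> list[str]:
    """Autoregressively sample tokens, each conditioned on the previous one."""
    tokens = [prompt_token]
    for _ in range(max_tokens):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:  # nothing learned for this context, stop
            break
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        tokens.append(next_token)
    return tokens

print(" ".join(generate("the")))  # e.g. "the moon is flat"
```

A real LLM replaces the bigram table with a neural network conditioned on a long context window, but the generation loop is the same: plausible continuations, not verified claims.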
Humans are bullshitters too
We humans do not see the world as it is; we perceive it via our senses and interpret it through our cultural norms, scientific understanding and numerous other lenses. Thus we never see the world as it truly is, and we build proxies for understanding the objective world, hence the “truth”™ escapes us. A good example of this is Newton’s classical mechanics: it is a good working approximation of how the world works, but quantum mechanics has shown us that it is not the “truth”™. Does that make Newton a bullshitter? Hardly, he was doing his best with the tools and understanding of his time to formulate a new hypothesis of how the world worked.
The fact that we humans cannot truly comprehend or perceive the objective world has led to multiple interesting philosophical directions, from Kant’s distinction between the thing in itself and the thing as it appears to me, to Charles Sanders Peirce’s exploration of the semiotics of sign, object and interpretant. All are different approaches to overcoming the fundamental challenge of being both within the world and, in our interpretation of it, outside of it at the same time. Kahneman has shown that we humans are really bad at making objective, rational decisions – we are haunted by biases, easy heuristics, extreme confidence under uncertainty, and so on. Overall, we humans just suffer from a different set of “hallucinations” than LLMs do.
Overcoming bullshit
Our latest remedy for this hurdle is the scientific method: we propose a hypothesis of how the world works, and then set out to prove the hypothesis wrong via experiments.
It is not hard to imagine an LLM system with the same mode of operation. Take software engineering: if the LLM is capable of writing code, executing it, reading stack traces, and running unit and integration tests, it would in time be able to converge on a correct solution, much like how we humans write software (a rough sketch of such a loop follows below). Similarly, we could imagine an AI system hooked up to microscopes, gene expression profiling equipment and the like, running in vitro experiments and slowly formulating new insights about the “truth”™.
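As a rough illustration of that generate–test–revise loop, here is a minimal Python sketch. The function names (llm_propose_code, run_tests, solve) and the use of pytest as the test runner are my own placeholder assumptions, not anything from Hicks et al. or a particular agent framework; the point is only that the test output plays the role of the falsifying experiment.

```python
import subprocess
from pathlib import Path

def run_tests() -> tuple[bool, str]:
    """Run the test suite; the failure output plays the role of the experiment."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def llm_propose_code(task: str, feedback: str) -> str:
    """Placeholder: ask an LLM for a new candidate implementation,
    given the task description and the latest test feedback."""
    raise NotImplementedError("call your model of choice here")

def solve(task: str, target: Path, max_iterations: int = 10) -> bool:
    """Hypothesise (generate code), experiment (run tests), revise, repeat."""
    feedback = ""
    for _ in range(max_iterations):
        target.write_text(llm_propose_code(task, feedback))
        passed, feedback = run_tests()
        if passed:
            return True   # the candidate survived falsification
    return False          # no convergence within the iteration budget
```

The loop never asserts that a candidate is “true”; it only keeps proposals that have so far survived attempts to falsify them, which is as close to the scientific method as a text generator can get.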
The current lack of world understanding in LLMs comes not from a desire to build bullshitting machines, but from the architecture and training data. LLMs have only ever seen the world through the lens of our writing, which is our way of communicating an imperfect representation of our own understanding of the world. If we are one step removed from the world in our interpretation of it, LLMs are a further step removed, relying on our linguistic descriptions of our understanding.
A fundamental challenge in moving LLMs (or rather AI) towards understanding “truth”™ lies in their semiotic systems. Right now our AIs are only fed the signs of the world, but have limited access to interpretation (meaning) and interaction (the object / world). What kind of “intelligence” will come from such semiotic AI will be interesting to watch. It will likely be unlike ours, just as other species have other semiotic systems for engaging with the world, and other “truths”™. Because in the end, “truth”™ is for the most part a utilitarian pattern of how the world works that enables me to manipulate the world to my liking.
On the subject of bullshit
Overall, I do find that Hicks et al. are right in their scepticism of the current state of LLMs – but it is easy to be the rebel and contrarian. It is much harder to seek solutions and overcome our current challenges in progressing AI to be more than bullshitters. I would also argue that current LLMs, while merely being probabilistic models, do capture some aspect of “truth”™ in that they correctly mimic our understanding of the world as we have represented it in text. So they can be viewed as limited or constrained “truth”™ machines by proxy. Thus they hold utilitarian “truth”™ and usefulness, which can be exploited to augment our own search for “truth”™.
References:
[1] Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024). https://doi.org/10.1007/s10676-024-09775-5