The human brain is programmed to infer intentions behind words. Every time you start a conversation, your mind automatically constructs a mental model of your conversation partner. You then use the words they say to fill in the model with that person’s goals, feelings, and beliefs.
The process of jumping from words to the mental model is seamless and is activated every time you receive a full sentence. This cognitive process saves you a lot of time and effort in everyday life, greatly facilitating your social interactions.
In the case of AI systems, however, this process misfires – building a mental model out of thin air.
A little probing reveals the seriousness of this misfire. Consider the following prompt: “Peanut butter and feathers taste great together because___”. GPT-3 continues: “Peanut butter and feathers taste great together because they both have a nutty flavor. Peanut butter is also soft and creamy, which helps offset the texture of the feather.”
The text in this case is just as fluent as in our pineapple example, but this time the model is saying something decidedly less sensible. You begin to suspect that GPT-3 has never actually tried peanut butter and feathers.
Attributing intelligence to machines, denying it to humans
A sad irony is that the same cognitive bias that causes people to attribute humanity to GPT-3 can cause them to treat real people in inhumane ways. Socio-cultural linguistics – the study of language in its social and cultural context – shows that assuming too close a link between fluency in expression and fluency in thinking can lead to bias towards people who speak differently.
For example, people with a foreign accent are often perceived as less intelligent and are less likely to get the jobs they are qualified for. Similar prejudices exist against speakers of dialects that are not considered prestigious, such as Southern English in the US, against deaf people using sign languages, and against people with speech impediments such as stuttering.
These prejudices are deeply harmful, often lead to racist and sexist assumptions, and have been shown again and again to be unfounded.
Fluent language alone does not imply humanity
Will AI ever become aware? This question requires deep consideration, and philosophers have indeed pondered it for decades. What researchers have determined, however, is that you cannot simply trust a language model when it tells you how it feels. Words can be misleading, and it is all too easy to mistake fluent speech for fluent thought.
Kyle Mahowald is an assistant professor of linguistics at the University of Texas at Austin. Anna A. Ivanova is a PhD candidate in brain and cognitive sciences at the Massachusetts Institute of Technology.