"Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency" is a recent peer-reviewed paper that aims to take a look at how LLMs work, and examine how they compare with a scientific understanding of human language.
Amid "hyperbolic claims" that LLMs are capable of "understanding language" and are approaching artificial general intelligence (AGI), the GenAI industry –
forecast to be worth $1.3 trillion over the next ten years – is often prone to misusing terms that are naturally applied to human beings, according to the paper by Abeba Birhane, an assistant professor at University College Dublin's School of Computer Science, and Marek McGann, a lecturer in psychology at Mary Immaculate College, Limerick, Ireland. The danger is that these terms become recalibrated and the use of words like "language" and "understanding" shift towards interactions with and between machines.
"Mistaking the impressive engineering achievements of LLMs for the mastering of human language, language understanding, and linguistic acts has dire implications for various forms of social participation, human agency, justice and policies surrounding them," argues
the paper, published in the journal Language Sciences.