
Tech: The Age of Stochastic Parrots

A colleague noted a few weeks ago that using more plain language would help me communicate better. And while they were right, there is also a subtle power in how we tell the story and what phrases we use. Sometimes it makes sense to roam all over the board; sometimes it’s better to be direct1. We use language not only to convey meaning but also to influence the listener.

This realization led me to a crucial understanding about the impact of large language models. Something many big tech companies seem utterly ignorant of.

Understanding the difference between a plain data transfer and a message intended to affect the recipient is hard. It is an essential skill in the age of large language models, the LLMs. It can help us utilize the so-called “generative AI” to its fullest potential. And missing it can lead us to disaster.

Emily M. Bender and her co-authors coined a fantastic term for LLMs: “stochastic parrots2.” Machines that are, in Muhammad Saad Uddin’s words, “impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing3.”

I’ve often called myself a dabbling statistician4, and from that point of view5, I understand the dangers the researchers want to highlight. But I also recognize the incredible power these one-armed bandits offer. If we can automatically handle 80% of incoming customer feedback as standard issues, or summarise the central findings of a large body of knowledge in seconds, I’m all for it.

Many think we can now outsource all thinking to something that is a lot like rolling dice6. Yes, the dice are weighted and tend to give the correct answer more often than not. But it’s still just a dice roll. It cannot replace research, analysis, or study.
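The weighted-dice analogy can be sketched in a few lines of Python. The token probabilities below are invented for illustration; the point is only that an LLM samples from a distribution, so even a heavily weighted die sometimes lands on the wrong face:

```python
import random

# Hypothetical next-token probabilities a model might assign
# after the prompt "The capital of France is" (values invented).
weights = {"Paris": 0.90, "Lyon": 0.05, "a": 0.03, "Nice": 0.02}

def roll_weighted_die(weights):
    """Sample one token, like an LLM picking its next word."""
    tokens = list(weights)
    return random.choices(tokens, weights=list(weights.values()))[0]

# The die is heavily weighted toward the likely answer...
rolls = [roll_weighted_die(weights) for _ in range(1000)]
print(rolls.count("Paris"))  # most rolls, but rarely all 1000
```

Run it a few times: “Paris” dominates, yet the other faces still come up. That is the gap between “correct more often than not” and “correct.”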

The parroting problem is evident when you look at the tech forecasts and predictions that big tech and consultancies publish today. Gartner and McKinsey look like they have done their homework, but almost everyone else in the industry publishes repetitious, over-explaining text that fails to provide any new insight or reflection. Material that looks like LLM-generated noise to a bystander.

Is this a marketing experiment? Or do the giants of consulting and tech believe they can replace expert work with random noise? Or am I just seeing a sixth finger7 where there is none? The reason for the appalling quality of this year’s tech predictions doesn’t matter for the argument: even if humans can be lazy, there are fields where an LLM will never be able to do the required work.

And this brings me back to where we started: simple and direct language. We must create language that describes the opportunities LLMs have created without hiding the caveats. Even if that language comes in the form of calling these machines “stochastic” or “parrots.”


  1. In the feedback context, it’s always better to be direct and use plain language. I was being too clever for my own good, and I appreciate the critique.

  2. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Emily M. Bender et al. https://dl.acm.org/doi/10.1145/3442188.3445922

  3. Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations. Muhammad Saad Uddin. https://towardsai.net/p/machine-learning/stochastic-parrots-a-novel-look-at-large-language-models-and-their-limitations

  4. In my native language, this makes a lot more sense. The word for Anatinae – like a dabbling duck – is “puolisukeltaja,” which translates roughly to a half-diver-half-walker and can be used to refer to a job or profession you do on the side—something where you are proficient but not an expert.

  5. Most of the time, researchers seem oblivious to the power of the things they study. They talk of the limitations, problems, and dangers – which makes sense – as they aim to understand the phenomenon instead of thinking about how someone could create a business scenario based on the findings.

  6. Daniel Kahneman makes an excellent point about the opposite in his famous quote: “There are domains in which expertise is not possible. Stock picking is a good example. And in long-term political strategic forecasting, it’s been shown that experts are just not better than a dice-throwing monkey.” In the same vein, today there are domains where LLMs exceed human experts. And domains where they never will.

  7. Why does AI art screw up hands and fingers? https://www.britannica.com/topic/Why-does-AI-art-screw-up-hands-and-fingers-2230501