The elderly woman exhaled loudly as she pushed up from the kitchen table. She’d heard a knocking from the front porch and wondered if her son had forgotten something earlier. She walked to the kitchen door and looked out across the porch, only to see giant orange flames licking up the siding of the house. Her breath caught in her throat. She fumbled pulling her phone from her pocket, and her fingers shook as she punched in 9-1-1. Her voice trembled as she almost screamed at the operator: “There’s a fire on the front porch!” Then, in her hurry to leave, she set the phone down, grabbed her purse, and rushed toward the side door. Just as she made it out into the yard, she saw that the flames had already wrapped around the porch; soon the entire side of the house was on fire. Now safely outside, she suddenly wondered why the operator had said they were sending the police. Why weren’t they sending fire trucks? She needed firefighters!

Minutes later, a neighbor drove by, saw the flames, and stopped to help. The woman had the presence of mind to borrow his phone, call 911 back, and clarify that her house was on fire and that she needed firefighters. But in those critical moments the fire had intensified, and the house was all but engulfed. Somehow, the first 911 operator had heard “there’s a fight” instead of “there’s a fire.”

The old adage “learn from your mistakes” applies not just to improving yourself; the different kinds of mistakes we make can also teach us how the world works. For example, understanding communication mistakes like the one above can help us better understand human cognition and the mechanisms by which our minds comprehend language, and these lessons can be applied broadly, from improving education to making your Google, Amazon, or Apple AI assistant work better. So, what might have led to the mistake in our story?

You might be thinking, well, “fight” and “fire” sound somewhat alike. The distinction between the two may be even less obvious depending on the speaker’s accent, rate of speech, and degree of emphasis and articulation. Additionally, the audio may have been degraded over the cellular signal or through the phone’s speaker, and all of this may have been exacerbated by the stress and intensity of the emergency. Perhaps what the 911 operator heard simply sounded more like “fight” than “fire.”

A perhaps less obvious possibility is that the 911 operator’s mind made a kind of calculated guess, a prediction, about the word or words it might hear, given the context of an emergency call and the phrase “There’s a…”, and that this prediction influenced what they thought they heard. It might seem strange that our minds would predict what we’re about to hear or read, because if we simply waited a few moments there would be no need to predict at all. However, we know that human minds make lots of other generally beneficial predictions. You may try to predict how your opponent will move when playing basketball, where the ball will land in a game of catch, or how the drivers around you will behave so you can better plan your own movements. You probably aren’t even fully aware that you’re doing it. If you think of language use like these other joint activities, predicting what a speaker might say next could allow better coordination of turn-taking, faster comprehension, and better planning of your own responses. And when you add in the ambiguity of spoken language, from all the words that sound alike, to all the different ways the same word can be articulated, to just how unintelligible speech can sometimes be, making calculated guesses, when you’re right, could be very beneficial for efficient comprehension.
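One common way researchers formalize this kind of calculated guess is Bayes’ rule: what a listener “hears” reflects both how well each candidate word matches the acoustic signal and how expected that word is in context. Below is a minimal toy sketch of such a “noisy channel” listener; every number in it is an invented assumption for illustration, not a measurement.

```python
# Toy Bayesian "noisy channel" listener: combine how well each word matches
# the acoustics (likelihood) with how expected it is in context (prior).
# All probabilities here are invented for illustration.

likelihood = {"fire": 0.50, "fight": 0.45}  # ambiguous audio: a near tie

# Hypothetical prior for completing "There's a..." in an emergency call,
# if the operator's experience makes "fight" the more common continuation
prior = {"fire": 0.30, "fight": 0.70}

posterior = {w: likelihood[w] * prior[w] for w in likelihood}
total = sum(posterior.values())
posterior = {w: round(p / total, 2) for w, p in posterior.items()}

print(posterior)  # {'fire': 0.32, 'fight': 0.68}
```

On these made-up numbers, the word that matches the audio slightly worse still wins, because expectation tips the balance, which is exactly the kind of useful but occasionally wrong shortcut the story illustrates.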

If our minds are really making predictions during language comprehension, what specifically is being predicted, and what information is used to make those predictions? These questions are still being actively investigated and debated across the levels of language. There is evidence that one source of information people can use is knowledge about the world, and specifically about what is likely to happen in a given context. For example, how would you complete the following sentences?

Getting himself and his car to work on the neighboring island was time consuming.

Every morning he drove for a few minutes, and then boarded the…

If you said ferry, you agreed with most people in the classic study by Federmeier & Kutas (1999), in which people used their knowledge of what can be boarded (not a bridge) and how you can travel to an island with a car (not on just any boat) to predict the next word in sentence pairs like these.
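Studies like this typically quantify predictability as “cloze probability”: the proportion of people who complete a sentence frame with a given word. Here is a minimal sketch of that calculation, using invented completions rather than Federmeier & Kutas’s actual data.

```python
from collections import Counter

# Hypothetical completions collected for "...and then boarded the ___"
completions = ["ferry", "ferry", "boat", "ferry", "ship",
               "ferry", "ferry", "ferry", "boat", "ferry"]

counts = Counter(completions)
total = len(completions)

# Cloze probability: the proportion of respondents producing each word
for word, n in counts.most_common():
    print(f"{word}: {n / total:.2f}")
# ferry: 0.70, boat: 0.20, ship: 0.10
```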

But asking people to complete sentences isn’t necessarily the same as predicting language in real time as it’s being produced. How do we know that people make predictions early and throughout language comprehension? One way is to follow their eyes. People attend to what they are looking at, so following their gaze (eye-tracking) as they comprehend sentences lets researchers determine how people are processing information in real time, and specifically what they are thinking about. This is often done using the visual world paradigm, developed by Michael Tanenhaus and colleagues: people look at objects (or pictures of objects) while their eye movements are measured with an eye-tracking device. In a seminal study using this paradigm, Altmann & Kamide (1999) found that, while hearing the sentence “The boy ate the cake,” people looked more at a picture of a cake than at a train or a ball after the verb ate but, importantly, before they even heard the word cake. Thus, people were using their knowledge of what can be eaten to restrict and predict what could be talked about before it was even mentioned.
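In practice, visual world data are often summarized by computing, at each moment in time, the proportion of trials on which people are fixating each picture. Below is a toy sketch with simulated data; the numbers and time windows are assumptions for illustration, not Altmann & Kamide’s results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 40, 120  # 40 trials, 1200 ms sampled every 10 ms

# Simulate fixations to the cake: looks ramp up over the trial, as if
# listeners increasingly favor the edible object after hearing "ate"
p_fix = np.linspace(0.25, 0.70, n_samples)
on_target = rng.random((n_trials, n_samples)) < p_fix

# Proportion of trials fixating the target picture at each time sample
prop_target = on_target.mean(axis=0)

verb_offset = 50  # hypothetical: the verb "ate" ends at 500 ms
print(f"before verb offset: {prop_target[:verb_offset].mean():.2f}")
print(f"after verb offset:  {prop_target[verb_offset:].mean():.2f}")
```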

Another way language prediction can be seen is by measuring how the brain responds to specific linguistic stimuli, like words, using non-invasive EEG (those fun head caps with all the wires sticking out everywhere!). A neural response to a specific stimulus, or event, is called an event-related potential (ERP).

A centro-parietal, negative-going event-related brain potential that occurs about 300-500 ms after a word is encountered is commonly referred to as the N400 (because it is negative and occurs around 400 ms). A large N400 amplitude seems to be the brain’s “default” response to words, with reductions for words that are easier to access because of the prior context, because they are semantically related to it, or because they form a predictable continuation. So you might have a smaller N400 to “fire” after hearing “Harry Potter and the Goblet of…” and a larger N400 to “fire” after hearing “Harry went to the circus and ate…” In both cases, it seems your mind uses your real-world knowledge (about what is typically eaten) and experiences (enjoying the Harry Potter series) to make predictions about what the next word might be. (See Kutas & Hillyard, 1980 for foundational N400 work, or Troyer & Kutas, 2018 for a more recent example of work in this area.)
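Because single-trial EEG is noisy, ERPs like the N400 are obtained by averaging many trials time-locked to word onset, and the N400 is often quantified as the mean voltage from roughly 300 to 500 ms. Here is a minimal sketch with simulated data; the signal shape and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sfreq = 500  # samples per second
times = np.arange(-0.1, 0.8, 1 / sfreq)  # -100 to 800 ms around word onset

# Simulate a centro-parietal negativity peaking near 400 ms, plus trial noise
n400_shape = -4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
epochs = n400_shape + rng.normal(0, 5, size=(60, times.size))  # 60 trials

# The ERP is the average across trials; single-trial noise cancels out
erp = epochs.mean(axis=0)

# Quantify the N400 as the mean amplitude in the 300-500 ms window
window = (times >= 0.3) & (times <= 0.5)
print(f"mean 300-500 ms amplitude: {erp[window].mean():.2f} microvolts")
```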

The same might be true of our 911 operator. Perhaps they typically get more calls about fights than fires, or perhaps they had just taken another, similar call that really was about a fight. Perhaps, over the course of the operator’s experience with the language people use in emergency calls, callers tended to say “My _____ is on fire,” whereas they tended to end phrases like “There’s a…” with words like fight or car accident. (In fact, a quick check of the Google Ngram corpus of literature and periodicals finds that “there’s a fire” is used less frequently than “is on fire.”) It would take more research to understand exactly why our operator heard “fight” rather than “fire,” but this example illustrates the importance of understanding the cognitive mechanisms behind language prediction and comprehension in general. In the majority of cases, predictions like this might not lead to mistakes at all, and in fact could lead to better, more efficient responses in a variety of communicative situations. However, understanding more about how the mind makes predictions during language comprehension, in both its mistakes and its successes, can deepen our understanding of how humans comprehend language and could be vital to improving any human endeavor that depends on successful communication.
