Neural Processing of Spontaneous Speech

[Image: speech emerging from a maze-like brain]

How does our brain navigate the "messiness" of real-life speech?

Real-life speech is far from the tidy structure of textbook sentences. It’s filled with disfluencies like fillers ("uh," "um," "you know"), repeated or corrected words, and abrupt shifts to new ideas mid-sentence. At first glance, it can seem chaotic. Try reading a verbatim transcript of real-life speech, and you'll quickly realize that our brains must handle much more than just linguistic analysis to make sense of the input. This "speech pre-processing" involves cleaning up and segmenting speech into manageable units that serve as the foundation for higher-level analysis.
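To make the idea of "cleaning up" concrete, here is a minimal sketch in Python. It is purely illustrative rather than our analysis pipeline: it strips filled pauses and immediate word repetitions from a verbatim transcript, and the filler inventory and repetition rule are simplifying assumptions chosen for the example.

```python
import re

# Minimal sketch (not the lab's actual pipeline): a naive "clean-up" pass over
# a verbatim transcript, loosely analogous to the pre-processing stage described
# above. The filler list and the single-word repetition rule are assumptions
# made for illustration, not a serious model of disfluency.

FILLER_PATTERN = re.compile(r"\b(?:uh|um|er|you know|I mean)\b", re.IGNORECASE)
REPETITION_PATTERN = re.compile(r"\b(\w+)(?:\s+\1\b)+", re.IGNORECASE)

def clean_transcript(utterance: str) -> str:
    """Strip filled pauses and immediate word repetitions from a transcript."""
    text = FILLER_PATTERN.sub("", utterance)      # drop "uh", "um", "you know", ...
    text = REPETITION_PATTERN.sub(r"\1", text)    # "I I went" -> "I went"
    return re.sub(r"\s{2,}", " ", text).strip()   # tidy leftover whitespace

if __name__ == "__main__":
    verbatim = "so I I went um to the the store and uh bought some milk"
    print(clean_transcript(verbatim))
    # -> "so I went to the store and bought some milk"
```

Of course, listeners do far more than pattern matching; the sketch only marks out the kind of material that a pre-processing stage must somehow set aside or exploit.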

Using fMRI, EEG, and linguistic analysis, we investigate speech pre-processing to uncover the neural mechanisms that let us extract syntax and semantics from the seeming chaos of real-world communication. More precisely, we ask:

How does the brain process disfluencies?

Are disfluencies processed like words? Do they make speech comprehension harder or, on the contrary, easier?

Does the brain segment the continuous speech stream into grammatical sentences?

What is a sentence, really? At first glance, the answer seems obvious, but try segmenting unpunctuated text into sentences and it quickly becomes clear that the task is far from straightforward. Prosody often diverges from the grammatical definition of a sentence, and the messy nature of spontaneous speech, laden with disfluencies and incomplete grammar, only amplifies the challenge. Yet our brains handle this complexity with remarkable ease, transforming a continuous stream of sound into coherent, structured meaning. How does this feat happen so seamlessly?
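To illustrate the point, here is a small, hypothetical Python sketch, not drawn from our studies: a naive rule splits an invented, unpunctuated transcript before coordinating conjunctions, and its output is compared with one intuitive grammar-based grouping of the same words.

```python
import re

# Purely illustrative: a crude segmentation rule applied to an invented
# transcript, to show that simple surface cues underdetermine sentence
# boundaries. Neither the rule nor the example comes from our studies.

def naive_segment(transcript: str) -> list[str]:
    """Split an unpunctuated transcript before 'and', 'but', or 'so'."""
    parts = re.split(r"\s+(?=(?:and|but|so)\b)", transcript)
    return [p.strip() for p in parts if p.strip()]

transcript = ("so we went to the market and it started raining "
              "so we just turned back and that was it")

print(naive_segment(transcript))
# -> ['so we went to the market', 'and it started raining',
#     'so we just turned back', 'and that was it']
# A listener might instead group the same words into two sentences, e.g.
#   "so we went to the market and it started raining"
#   "so we just turned back and that was it"
# and a prosody-based split could differ from both.
```

The words are identical in every case; what differs is the unit inventory, which is precisely the segmentation problem the brain appears to solve effortlessly.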