Lingua East

People should hear your ideas, not your accent.


The Mechanics of Speech

Remember learning about Rube Goldberg machines? These contraptions employ a long chain of mechanical processes that eventually accomplish a single, simple action. Speech production is similar to a Rube Goldberg machine. After the brain figures out what we want to say (which it accomplishes through electrical and chemical means), the rest of speech production is surprisingly mechanical.

Voice Production

The Bellows

Maybe you’ve used a bellows to blow some life into a fire. You might have noticed the sound of the air rushing out of the bellows and how it changed depending on how quickly you moved the handles together. Or maybe you have created a squealing sound by stretching the opening of an inflated balloon and allowing the air to escape through the tight but unblocked opening. With a little bit of force and in close quarters, moving air can produce sound.

 

Image: Lisa Ann Yount

In speech production, the moving air comes from the lungs. We use muscles, including the diaphragm, to get the air moving. It travels up out of the lungs and into the throat, where it meets the larynx.

 

The Whoopee Cushion

Whoopee cushions are immensely entertaining. That is not just because of the ease with which you can trick someone into thinking they have farted, but also because the whoopee cushion creates its comical sound via air flowing through a flappy opening. The vocal cords (a.k.a. vocal folds) are like the flappy opening of a whoopee cushion. Just as you can shift the opening of a balloon or whoopee cushion to alter the pitch of the sound created by the escaping air, we adjust our larynx to change the pitch and quality of our voice. These adjustments are made using muscles that move cartilaginous structures attached to our vocal folds.

Image: Jason Meredith

Voice is produced as air from the lungs travels through the opening between the vocal folds. The vocal folds are made of muscle with a flexible, stretchy, flappy cover, like the rubber of a whoopee cushion. The sound that comes out of the opening in your vocal folds is your voice.

The Bottleneck

Have you ever blown air across the top of a bottle to make a sound? You can change the sound a little by shifting your lips, but the nature of the sound you can make with a bottle depends most on the volume of air inside it. The more liquid there is (in other words, the less air), the higher-pitched the sound will be. An empty bottle makes a low-pitched sound.

Image: Dean Hochman

Air resonates in a bottle like the voice resonates in the air-filled chambers of the head and neck (a.k.a. the vocal tract). The sound waves of a speaker’s voice explode out of the vocal folds and bounce around at the back of the throat, in the mouth, and up into the nose. The speaker can greatly affect the quality of the sound of their voice using the muscles of the throat and mouth to position their larynx and the surfaces of the vocal tract.
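For readers who like numbers: a bottle behaves roughly like what acousticians call a Helmholtz resonator. The formula below is a standard physics approximation, not something from the original post, but it captures why less air means a higher pitch:

\[ f = \frac{c}{2\pi}\sqrt{\frac{A}{V\,L}} \]

Here f is the resonant frequency, c is the speed of sound, A is the area of the bottle’s opening, L is the length of its neck, and V is the volume of air inside. Because V sits in the denominator under the square root, shrinking the air space (adding liquid) raises the pitch, just as described above.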

Articulation

Over the years, people have created duck calls, a type of whistle designed to cut through the noise of the water to attract ducks, just as a duck’s own vocal tract does. Key features of duck call designs are the vibrating reeds that create the sound when someone blows into the whistle, and the obstructions through which the sound waves travel before leaving it. The structure of these calls varies greatly, with each design producing a different ducky sound.

Image: www.patentswallart.com

Spoken language is a series of sounds that we create using the air we move from our lungs out of our mouths (and noses). What makes it so complex is our ability to produce – and understand – a lengthy series of these sounds at high speed. We produce the sounds of speech with our lips, tongue, and the flap of tissue (the soft palate, or velum) that separates the air in the mouth from the air in the nose.

Different sounds are produced by changing the airflow in the vocal tract in different ways. Speakers can do this by forcing the air through a smaller space (like when a pirate bunches up their tongue to say, “arrrr”), and by blocking the air (like when a diner forces the air out through the nose to say, “mmm” or a thirsty baby moves their lips together and apart to say, “baba”).

Putting it Together: Whoopee Cushion to Duck Call

Around the globe, there are over 100 different speech sounds, and all are created out of thin air[1]. When we speak, we move air from our lungs through our vocal folds, and we manipulate it for each sound. Speakers combine speech sounds in infinite ways to communicate. A listener’s ability to process a series of speech sounds quickly depends on their knowledge of the language spoken and their experience using that language, as well as the speaker’s precision of production. Just as one small change in the design of a duck call can change the sound it produces, a small change in the position of the larynx or tongue can significantly change the sound a speaker produces.

When you know the words you want to say but you feel that your speech production could be more precise, that’s where speech trainers come in. At Lingua East we can help you turn the air in your lungs into speech that people understand. Drop us a line and we’ll help you out. Go on, let them hear your ideas!

[1] While most (such as the sounds in Standard American English) are created using air pushed out from the lungs, some speech sounds, such as the click consonants of some African languages, are created by drawing air in.

Train Your Ears for Clear Pronunciation

An important part of many accent modification programs is auditory training. This entails listening to pairs of sounds in your second language that are so similar that you might not even hear a difference when you start the training. But with repeated listening and practice, you can learn to hear the differences that native speakers hear. Being able to hear the difference can help you produce the difference in your speech.

The Mouth-Ear Connection

Although no one knows for sure exactly how speech production happens, people have come up with different theories connecting what we hear to how we speak. As babies, we played with pushing air through our mouths, and eventually we figured out how to produce the sounds we heard around us. These are the sounds of our native language.

When we’re older and we want to learn to speak another language, we try our best to produce the sounds we hear, but there are two things working against us: one is related to the movement patterns our brains have programmed our mouths to follow to speak, and the second is the acoustic input our ears have been trained to pick up on.

During childhood, our mouths learn the motor patterns that are required to produce the speech sounds of our first language with a native accent. These are the motor patterns that, when applied to a second language, contribute to an accent. It is possible to work on the motor patterns for speech sound production to improve pronunciation and increase clarity. As I have discussed in previous posts, this is no simple task; it takes a lot of focused practice.

As we begin life, we are able to distinguish between all the speech sounds of different languages. Babies hear speech sounds with more sensitivity than adults! They can hear the differences between similar sounds in languages spoken not just at home, but around the world. As we get older, the different sounds of speech that we can distinguish are reduced to something closer to the sounds of our own language.

Why You Need to Train Your Ears

As a result of the development of language-specific listening and speaking skills, adults speaking English as a second language can experience difficulties with producing and hearing certain sounds. This difficulty stems from two things. One is not having the appropriate motor pattern to produce the sound; the other is not hearing the contrast between the sound they mean to produce and a similar sound, with which they may be more familiar.

When you think about the different features of a speech sound, it is not surprising that some very similar sounds are heard differently. Take, for example, the sounds /b/ and /p/. They are both produced by stopping the airflow in the mouth – in this instance, by putting the lips together – then releasing the built-up air. These two sounds are produced in the same part of the mouth. The only difference is that the /b/ is voiced and the /p/ is not. In some languages, this difference between /b/ and /p/ isn’t as important as it is in English, so native speakers of Arabic, which has /b/ but not /p/, might not distinguish between these two sounds.

However, sounds that are difficult for an adult speaking English as a second language can be learned; proficiency can be gained. As mentioned in previous posts, with consistent practice and assistance from a speech trainer or native speaker, it is possible to improve your pronunciation of standard American English. Part of improving your pronunciation involves training your ears.

Training your ears requires some careful listening.

How to Train Your Ears for Clear Pronunciation

  1. Select the sounds you need to work on.

There are many sound pairs that you could work on, but you will probably only need to work on a few that really affect other people’s ability to understand you. These should be sounds that you do not consistently produce when you’re speaking English as a second language. It may be helpful to find the sound pairs that other native speakers of your first language have difficulty with in English. Once you know which sound pairs to train, get a list of word pairs. If you search for “minimal pairs” then you can find several helpful websites with lists of different sound pairs.

  2. Get a recording.

It will be easier to work with just one or two word pair lists at a time. Each word in the pair should be a real word in English, and it should differ from the other word in the pair by only one sound (the sound distinction you’re training). Have a native English speaker check the list to make sure that each pair has the correct sounds, and then have that same native English speaker create a recording of themselves saying each word pair at a reasonable pace. They can do this on the voice recording app on their phone, and send it to you as a text message or email.

  3. Listen to the recording.

Listen to the recording while you’re doing an automatic activity, such as driving. Listen for several minutes at a time, several times a day. Each time you listen, listen closely for the difference between the words in each pair.

  4. Check in with a speech trainer or a native speaker.

If you have access to a speech trainer, ask them to help you to learn the muscle patterns for clear pronunciation of the sound distinction you’re training. If you don’t have access to a speech trainer, click here to send one a message.

Show your word list to a native speaker and tell them you want them to quiz you. Ask them to say each word pair, but every couple of pairs or so, instead of reading both words of the pair, have them say one of the words twice. For every pair, tell your speaker if the words were different or if they were the same. Were you able to identify when they were different and when they were the same? Once you are able to identify whether the words are the same or different with 100% accuracy, move on to another list.
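If your quiz partner is willing, the same/different drill is also easy to script. Below is a minimal sketch in Python – my own illustration, not something from the original post – in which the /b/-/p/ word list is a hypothetical example. The idea is that a helper reads each prompt aloud while the learner answers without looking at the screen.

```python
"""Same/different quiz for minimal-pair ear training.

A helper reads each prompt aloud; the learner answers "same" or
"different" without looking at the screen. The word list below is a
hypothetical /b/-/p/ example; substitute the pairs you are training.
"""
import random

WORD_PAIRS = [("bat", "pat"), ("bay", "pay"), ("bill", "pill"), ("cab", "cap")]

def run_quiz(pairs, n_trials=10):
    correct = 0
    for i in range(1, n_trials + 1):
        w1, w2 = random.choice(pairs)
        # Roughly half the trials repeat one word, as the post suggests.
        if random.random() < 0.5:
            prompt, answer = (w1, w1), "same"
        else:
            prompt, answer = (w1, w2), "different"
        print(f"[helper] Trial {i}: read aloud -> {prompt[0]} ... {prompt[1]}")
        response = input("[learner] same or different? ").strip().lower()
        correct += response == answer
    print(f"Score: {correct}/{n_trials}. Move on to a new list at 100%.")

if __name__ == "__main__":
    run_quiz(WORD_PAIRS)
```

Keeping score this way makes the 100% accuracy benchmark above concrete: when you can pass a full run without errors, pick a new list.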

Train your ears while you drive to and from work.

Lingua East provides accent modification, professional communication, and cultural communication services to individuals and companies in the United States and abroad. If you or someone you know is interested in communicating with greater clarity, confidence, and success, do not hesitate to contact us at contact@LinguaEast.com.

Seeing is Hearing: The McGurk Effect

For decades, speech pathologists and linguists have been entertaining people at parties with an interesting phenomenon known as the McGurk effect. The McGurk effect occurs when people are played audio of one sound while watching video of a different sound being produced: they hear something other than the actual audio. I first learned of the effect via the following video, in which Patricia Kuhl of the University of Washington elicits the effect with the sounds /ba-ba/ and /da-da/ or /tha-tha/:

Searching for that video, I found a fantastic example using the “Bill! Bill! Bill!” chant from the ’90s kids’ science show, Bill Nye the Science Guy. Take a moment (24 seconds, to be exact) to watch and listen:

The audio is paired with images that affect how the word “Bill” is heard: first, images depicting different bills are shown. Then, as images of pails are shown, the sound heard changes to “pail”. Next, images of mayonnaise are shown, and the sound shifts again to “mayo”. Did you hear the three different words?

A McGurk effect shows up in babies exposed to English by the time they are five months old[1]. This effect seems to strengthen with age. However, the likelihood of a listener falling for the McGurk effect depends on several factors. These factors demonstrate the fascinating interplay between hearing and vision in our ability to understand spoken language.

In a noisy environment, people are more likely to mishear what was said. That makes sense; if there are a lot of noises around, it is harder to pick out one sound from the rest of the noise and correctly identify it. If English is your native language, you’re likely to fall for the McGurk effect. Researchers have found that native Japanese speakers are better able to correctly identify the sound presented, even when shown video of someone producing a different sound[2], with similar results for native speakers of Chinese.

This may be related to differences in cultural communication, specifically eye contact. In English-speaking cultures, for the most part, eye contact is fairly constant, with the listener only occasionally shifting their gaze away from the speaker. In many Asian cultures, eye contact with a speaker is less common, with the listener far more often directing their gaze at something other than the speaker. How we hear language is affected by how much the visual system is engaged while we listen.

Further evidence that how we listen to language affects our tendency to fall for the McGurk effect was found in a 2008 study published in Brain Research[3]. In this study, deaf people who used cochlear implants to hear were compared with normally hearing people in their susceptibility to the McGurk effect. The normally hearing people did not fall quite as hard for the McGurk effect as the individuals using cochlear implants, suggesting that the cochlear implant group relied more on what they saw the speaker doing with their mouth than on the audio. This is further evidence that our understanding of spoken language depends on the sensory information we take in, which, in turn, seems to be related to our varied cultural communication styles.

We all come from different backgrounds of language, hearing, and abilities. It can be fun to share videos of the McGurk effect with people from diverse backgrounds, to see what they hear. Share what you heard in a comment below!

If you are interested in learning more about the McGurk effect, or if you would like to work on your speech hearing abilities, let us know. Until next time, let them hear your ideas, not your accent.

[1] Rosenblum, L., Schmuckler, M., & Johnson, J. (1997). The McGurk effect in infants. Perception & Psychophysics, 59, 347-357.

[2] Sekiyama, K. & Tohkura, Y. (1991). McGurk effect in non-English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility. Journal of the Acoustical Society of America, 90, 1797-1805.

[3] Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research, 1188, 87-99.

Uptalk: The Reviled Speech Behavior’s Origins and Purpose

There are many curious features of spoken language. Nobody knows where they come from or how they start, but everyone has an opinion on them. One such mannerism is uptalk.

In the world of linguistics, uptalk is known as high rising terminal: finishing a phrase or sentence by raising the pitch of your voice. Uptalk occurs all over the English-speaking world, with notable occurrences in the United States, Australia, New Zealand, South Africa, England, and especially Northern Ireland. Like other curious features of spoken language, the origins of uptalk are guessed at, but unknown.

In the United States, people blame “Valley Girls”; in England, they blame the Australians for exporting the juicy soap opera Neighbours; and in Cape Town… I’m not sure about that one. However, uptalk appears to be one of those linguistic features that has always been around. Or at least, it’s been around for much longer than people have been talking about it.

Those particularly critical of uptalk describe the high rising terminal as sounding like a question, and report hearing an individual who uses uptalk as asking a series of questions. They further state that this series of question-like utterances makes the speaker sound indecisive or insecure. However, such is not the case.[1]

Uptalk is quite versatile. It is a way for the speaker to ensure the listener is paying attention and following what they are saying. It is also frequently used when listing items, especially when the speaker has to think about the next item on the list; in that case it holds the floor so the listener will not interrupt before the speaker has finished. Uptalk has a purpose, and while it can be overused like any other speech behavior, that does not take away its linguistic legitimacy.

Because language isn’t politicized enough (that was sarcasm), many people commenting on uptalk have labeled it as a gender issue plaguing the speech of young women. Others have come to women’s defense, declaring that the “uptalk epidemic” is just part of society’s unfair policing of women’s behavior. The truth is probably somewhere in the middle of those two extremes.[2]

The uptalk discussion has received contributions from individuals varying in their qualifications to discuss language, from the BBC reporter who consulted with Mark Liberman (a celebrity, if you follow linguist A-listers) and traced uptalk back to the 9th century, to others who refer to uptalk as a pathology growing disturbingly popular in women’s speech. The truth is, especially in the English-speaking world, communication is not gender-specific. (You have probably heard children of both genders using uptalk as they told a story, especially in the phrase “and then…”.)

The sorts of linguistic features that women are accused of using more often, such as uptalk or vocal fry, are things that both genders use. Linguists studying the incidence (how often something occurs) of uptalk have not found any definitive data pointing to it being a more feminine or masculine way of communicating. Instead, these negative critiques of uptalk appear to be related to some of the work done by the linguist William Labov in the 20th century.

Labov came up with something called the “gender paradox.” According to the gender paradox, when there is an evolution in a specific language form, then women – particularly the younger women who speak that language – are much more likely than men to use that form. If the gender paradox applies to uptalk, then you can expect to hear a lot more uptalk in the future.

All attempts to communicate thoughts and ideas are good. When others are uncomfortable with the way you speak, the problem is not yours, it’s theirs. Some speech coaches subscribe to the view that uptalk is a pathology that should be eradicated from their clients’ speech, the sooner the better, or that it is generally undesirable because it makes the speaker sound indecisive, as if they lack confidence. However, uptalk has its uses. At Lingua East, we can help you get rid of your uptalk, if that is what you want. But we will never force you to change who you are. It could be that you are just ahead of the curve.

Whether you choose to use uptalk or not, let them hear your ideas.

[1] That is certainly not how the uptalk I heard yesterday from Shinzo Abe’s interpreter sounded. But maybe you disagree.

[2] I am suggesting here that women may use uptalk as a tool to hold the floor, given the well-documented tendency of men to interrupt women more often than they interrupt other men.
