Lingua East

People should hear your ideas, not your accent.


Lost in a Crowd

It’s a strange feeling to be completely lost, surrounded by people and conversation, struggling to keep up and follow along. Participating in the conversation is even more difficult, and it can stir up an array of unpleasant emotions. If you find yourself in a place where your second language is the primary means of communication, it takes guts to learn the language to a level where you can use it every day. You probably know what it is like to think hard about a great response to something someone said in conversation, only to come out with it too late.

The moment has passed, and your insightful, witty comment isn’t insightful or witty anymore. Sometimes a thin smile spreads across your conversation partners’ faces as they nod slowly at you, pausing a respectful moment before continuing with a conversation that has progressed further than your ears were able to follow. Other times, after adding your comment, the other speakers keep the conversation going, as if you hadn’t spoken at all.

It’s a feeling of powerlessness, to be left standing there, wanting to be a part of the conversation, but grasping to keep up with what others have said and to come up with a response fast enough for it to add meaning to the exchange. Being able to understand and communicate with others evens the playing field. Even if two people don’t see eye to eye on some things, they can get their ideas across and begin to understand the point of view of others whose knowledge and experiences differ from theirs. But it’s not easy.

It takes patience.

It takes practice.


It takes guts to speak up, to chime in, to share your two cents, to let them hear your ideas. And if you really want them to understand your message, it takes some attention to the way you say it.


So take the time to work on understanding the ways the language differs from the language you grew up speaking. Maybe in your first language pronouns were optional, and now you have difficulty with he and she. Many people will brush it off when you refer to your sister as he, but others might get confused.

When you are giving a big presentation at work, trying to convince your superiors of something you know will be great for the company, the difference between in and on may not be relevant to your ideas, but knowing it will help you be more persuasive.

And in those nerve-racking circumstances when it’s late at night, your phone is dead, and you need to ask a stranger for help, being able to explain your situation with clear pronunciation can make a world of difference.

The more you interact with native speakers and work on your ability to produce the language, the easier it will be to understand others in that language. Life is not as much fun when you are lost in a crowd of people you can’t communicate with. At Lingua East, our certified instructor can give you a road map to better communication in English. Join the conversation. Let them hear your ideas.

Train Your Ears for Clear Pronunciation

An important part of many accent modification programs is auditory training. This entails listening to sounds in your second language that are so similar that you might not even hear a difference when you start the training. But with repeated listening and practice, you can learn to hear the differences between sounds that native speakers hear. Being able to hear the difference can help you produce the difference in your speech.

The Mouth-Ear Connection

Although no one can know for sure exactly how speech production happens, people have come up with different theories that connect what we hear to how we speak. As babies, we played with pushing air through our mouths, and eventually we figured out how to produce the sounds that we heard around us. These are the sounds of our native language.

When we’re older and we want to learn to speak another language, we try our best to produce the sounds we hear, but there are two things working against us: one is related to the movement patterns our brains have programmed our mouths to follow to speak, and the second is the acoustic input our ears have been trained to pick up on.

During childhood, our mouths learn the motor patterns that are required to produce the speech sounds of our first language with a native accent. These are the motor patterns that, when applied to a second language, contribute to an accent. It is possible to work on the motor patterns for speech sound production to improve pronunciation and increase clarity. As I have discussed in previous posts, this is no simple task; it takes a lot of focused practice.

As we begin life, we are able to distinguish between all the speech sounds of different languages. Babies hear speech sounds with more sensitivity than adults! They can hear the differences between similar sounds in languages spoken not just at home, but around the world. As we get older, the range of speech sounds we can distinguish narrows to something closer to the sounds of our own language.

Why You Need to Train Your Ears

As a result of the development of language-specific listening and speaking skills, adults speaking English as a second language can experience difficulties with producing and hearing certain sounds. This difficulty stems from two things. One is not having the appropriate motor pattern to produce the sound; the other is not hearing the contrast between the sound they mean to produce and a similar sound, with which they may be more familiar.

When you think about the different features of a speech sound, it is not surprising that some very similar sounds are heard differently. Take, for example, the sounds /b/ and /p/. Both are produced by stopping the airflow in the mouth – in this instance, by putting the lips together – then releasing the built-up air. These two sounds are produced in the same part of the mouth. The only difference is that /b/ is voiced and /p/ is not. In some languages, the difference between /b/ and /p/ isn’t as important as it is in English, so native speakers of Arabic, which has /b/ but not /p/, might not distinguish between these two sounds.

However, sounds that are difficult for an adult speaking English as a second language can be learned; proficiency can be gained. As mentioned here, here, and here, with consistent practice and assistance from a speech trainer or native speaker, it is possible to improve your pronunciation of standard American English. Part of improving your pronunciation involves training your ears.

Training your ears requires some careful listening.

How to Train Your Ears for Clear Pronunciation

  1. Select the sounds you need to work on.

There are many sound pairs that you could work on, but you will probably only need to work on a few that really affect other people’s ability to understand you. These should be sounds that you do not consistently produce when you’re speaking English as a second language. It may be helpful to find the sound pairs that other speakers of your first language have difficulty with in English. Once you know which sound pairs to train, get a list of word pairs. If you search for “minimal pairs” you can find several helpful websites with lists for different sound pairs.
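If you would rather build your own word-pair list than copy one, a short script can pull candidate pairs out of any word list. This is only a rough sketch: it uses spelling as a stand-in for sound, which English often violates (“ship” and “sheep” differ by one vowel sound but two letters), so the output is just a starting point to verify with a native speaker.

```python
from itertools import combinations

def minimal_pairs(words):
    """Return pairs of equal-length words that differ in exactly one letter.

    Spelling is only a rough proxy for sound in English, so treat the
    output as candidates to check with a native speaker.
    """
    pairs = []
    for a, b in combinations(words, 2):
        if len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1:
            pairs.append((a, b))
    return pairs

print(minimal_pairs(["bat", "pat", "bit", "pit", "ship"]))
# pairs such as ('bat', 'pat') and ('bit', 'pit') appear in the output
```

Note that this catches vowel pairs (bat/bit) as well as consonant pairs (bat/pat); keep only the ones that match the sound distinction you chose in this step.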

  2. Get a recording.

It will be easier to work with just one or two word pair lists at a time. Each word in the pair should be a real word in English, and it should differ from the other word in the pair by only one sound (the sound distinction you’re training). Have a native English speaker check the list to make sure that each pair has the correct sounds, and then have that same native English speaker create a recording of themselves saying each word pair at a reasonable pace. They can do this on the voice recording app on their phone, and send it to you as a text message or email.

  3. Listen to the recording.

Listen to the recording while you’re doing an automatic activity, such as driving. Listen for several minutes at a time, several times a day. Each time, listen closely for the difference between the words in each pair.

  4. Check in with a speech trainer or a native speaker.

If you have access to a speech trainer, ask them to help you to learn the muscle patterns for clear pronunciation of the sound distinction you’re training. If you don’t have access to a speech trainer, click here to send one a message.

Show your word list to a native speaker and tell them you want them to quiz you. Ask them to say each word pair, but every couple of pairs or so, instead of reading both words of the pair, have them say one of the words twice. For every pair, tell your speaker if the words were different or if they were the same. Were you able to identify when they were different and when they were the same? Once you are able to identify whether the words are the same or different with 100% accuracy, move on to another list.
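If you don’t always have a native speaker on hand, the same/different quiz described above can also be scripted so you can drill between check-ins (you would still pair it with recordings of a real speaker). Here is a minimal Python sketch with a hypothetical word-pair list; swap in the pairs you are actually training:

```python
import random

# Hypothetical word-pair list; replace with the minimal pairs you are training.
WORD_PAIRS = [("bat", "pat"), ("bill", "pill"), ("cab", "cap")]

def make_quiz(pairs, n_items=10, same_rate=0.3, rng=random):
    """Build a same/different quiz.

    Most items present both words of a pair ("different"), but every so
    often one word is repeated twice ("same"), just as the quiz above
    has your speaker do.
    """
    quiz = []
    for _ in range(n_items):
        a, b = rng.choice(pairs)
        if rng.random() < same_rate:   # occasionally say one word twice
            word = rng.choice([a, b])
            quiz.append((word, word, "same"))
        else:
            quiz.append((a, b, "different"))
    return quiz

for first, second, answer in make_quiz(WORD_PAIRS, n_items=5):
    print(f"{first} / {second} -> {answer}")
```

As with the live version, once you can label every item correctly, move on to another list.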

Train your ears while you drive to and from work.


Lingua East provides accent modification, professional communication, and cultural communication services to individuals and companies in the United States and abroad. If you or someone you know is interested in communicating with greater clarity, confidence, and success, do not hesitate to contact us at contact@LinguaEast.com.

Seeing is Hearing: The McGurk Effect

For decades, speech pathologists and linguists have been entertaining people at parties with an interesting phenomenon known as the McGurk effect. The McGurk effect occurs when people are exposed to audio of one sound, with a visual of another sound being produced. People hear something different from the actual sound. I first learned of the effect via the following video, in which Patricia Kuhl of the University of Washington elicits the effect with the sounds /ba-ba/ and /da-da/ or /tha-tha/:

Searching for that video, I found a fantastic example using the “Bill! Bill! Bill!” chant from the ’90s kids’ science show Bill Nye the Science Guy. Take a moment (24 seconds, to be exact) to watch and listen:

The audio is paired with images that affect how the word “Bill” is heard: first, images depicting different bills are shown. Then, as images of pails are shown, the sound heard changes to “pail.” Next, images of mayonnaise are shown, and the sound shifts again to “mayo.” Did you hear the three different words?

The McGurk effect shows up in babies exposed to English by the time they are five months old[1], and it seems to strengthen with age. However, the likelihood of a listener falling for the McGurk effect depends on several factors. These factors demonstrate the fascinating interplay between hearing and vision in our ability to understand spoken language.

In a noisy environment, people are more likely to mishear what was said. That makes sense; if there are a lot of noises around, it is harder to pick out one sound from the rest of the noise and correctly identify it. If English is your native language, you’re likely to fall for the McGurk effect. Researchers have found that native Japanese speakers are better able to correctly identify the sound presented, even when shown video of someone producing a different sound[2], with similar results for native speakers of Chinese.

This may be related to differences in cultural communication, specifically, eye contact. In English-speaking cultures, for the most part, eye contact is fairly constant, with the listener only occasionally shifting their gaze away from the speaker. In many Asian cultures, eye contact with a speaker is less common, and the listener much more often directs their gaze at something other than the speaker. How we hear language is impacted by how much the visual system is engaged while listening.

Further evidence that how we listen to language affects our tendency to fall for the McGurk effect was found in a 2008 study published in Brain Research[3]. In this study, deaf people who used cochlear implants to hear were compared with normally hearing people in their susceptibility to the McGurk effect. The normally hearing people did not fall quite as hard for the McGurk effect as the individuals using cochlear implants to hear, suggesting that the cochlear implant group relied more on what they saw the speaker doing with their mouth than the audio. This is further evidence that our understanding of spoken language is dependent on the sensory information we take in. This, in turn, seems to be related to our varied cultural communication styles.

We all come from different backgrounds of language, hearing, and abilities. It can be fun to share videos of the McGurk effect with people from diverse backgrounds, to see what they hear. Share what you heard in a comment below!

If you are interested in learning more about the McGurk effect, or if you would like to work on your speech hearing abilities, let us know. Until next time, let them hear your ideas, not your accent.

[1] Rosenblum, L., Schmuckler, M., & Johnson, J. (1995). The McGurk effect in infants. Perception & Psychophysics, 59, 347-357.

[2] Sekiyama, K., & Tohkura, Y. (1991). McGurk effect in non-English listeners: Few visual effects for Japanese subjects hearing Japanese syllables of high auditory intelligibility. Journal of the Acoustical Society of America, 90, 1797-1805.

[3] Rouger, J., Fraysse, B., Deguine, O., & Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research, 1188, 87-99.

© 2017 Lingua East
