More Meditations on Machine Translation

At this year’s CEGLOC virtual conference, I watched a couple of presentations about the role of Machine Translation (MT) in language teaching and learning.

They got me thinking again about a subject I’ve written about a few times before, and one I’ve also edited an article on.

Here are a few key assumptions about the intersection of MT and language teaching/learning:

#1 The accuracy/naturalness of MT is continuing to improve. The output produced by MT is approaching the point where it is virtually indistinguishable from the output produced by human translators.

#2 It is unacceptable for a student to rely solely on MT when submitting work for an assessed course of language learning. An example of this would be if a Japanese student wrote a report entirely in Japanese, pasted the report into an MT tool, copied the resulting English, and handed the work in as their own (without even looking at the resulting English text).

#3 Notwithstanding #2, MT could have powerful pedagogical applications if used in the right way. 

#4 It is difficult, if not impossible, to completely prevent the use of MT without reverting to hand-written essays in exam conditions. Unlike plagiarism, MT cannot be easily detected by software. Although there are still some tell-tale idiosyncrasies in translations produced by MT (such as inappropriate grammatical subjects when translating from Japanese to English), such traits/mistakes are becoming less obvious as MT continues to improve.

#5 The kind of behavior exemplified in #2 is clearly not a “CALL” technique. Computer Assisted Language Learning entails the use of computers to assist the learning of a language. Using MT to generate a piece of assessed work and handing it in sight-unseen is basically indistinguishable from plagiarism. But instead of passing off the work produced by another human intelligence as their own, the perpetrator is passing off the work generated by artificial intelligence as their own.

The above assumptions (if correct) raise some interesting questions, and force us to re-evaluate the reasons or motivations for learning a language.

The motivation for learning a language is often categorized into three main strands: integrative, intrinsic, and instrumental.

Integrative motivation compels language learners who wish to live in and integrate with a target language community. This kind of motivation might drive an American who wants to emigrate to and settle down in Japan, for example. 

Would such an individual be able to rely solely on MT tools and devices to achieve this goal? Could they whisper sweet nothings into their iPhone, and then place the iPhone on their pillow and allow it to translate and convey those sentiments to their significant other? Perhaps not.

Intrinsic motivation comes from inside the individual and often arises from a deep interest in the target language itself. Intrinsically motivated students are interested not only in the syntactic structure of the target language, but also in how speaking the language will change the way they perceive and interact with the world around them.

Could an intrinsically motivated individual leverage the power of MT to further increase their knowledge of the target language? I think so. Would they be happy to completely delegate to MT the task of translating their thoughts from L1 to L2? Would they want to miss out on the philosophical or cultural insights that learning another language can bring about? I think not.

Instrumentally motivated individuals simply treat the target language as a means to an end. They want to get a promotion or avoid being demoted. They want (or have) to do business with speakers of the target language. They want to quickly translate an email or subtitles for a video (e.g., rev.com). They want to pass an exam or entrance test for a particular business organization or academic institution. 

Could such an individual rely extensively on MT to achieve their aims? I think so. Would it be fair to allow them to do so, especially with regard to assumptions #2 and #5 above? Perhaps not. 

That question would need to be decided by the organizations and institutions involved, who are best placed to judge the skills and competencies they require from candidates.

Given all of the above, I tend to believe that the use of MT tools and devices will continue to increase, especially in situations where instrumental motivation is paramount, or time and money costs are significant. 

But in my role as a language teacher who has to assess the written and spoken output of language learners, there are difficult questions to answer with regard to the role that MT can or should play in the language learning process.

MT surely has many powerful pedagogical applications, but the temptation for time-pressed and sleep-starved students to rely solely on MT to produce the required output is high.

And then we’re into the familiar territory of plagiarism – passing off another’s work as your own. Something most academic institutions seriously frown upon.

So, those are my current thoughts on MT.

Would love to hear yours.

20 Tech Tips from Joe Dale

For anyone unfamiliar with Joe Dale, I highly recommend you check out his YouTube channel and follow him on Twitter. The man is an absolute wealth of tech tips for language teachers. Here are a few gems I picked up from him in a single Zoom session:

  1. Make any video your lesson with EdPuzzle
  2. Visualize your ideas in a new and collaborative way using JamBoard
  3. Easily add transcribed voice comments and feedback to shared documents using the Mote Google Chrome extension
  4. Allow students to create digital learning portfolios with Seesaw
  5. Quickly and easily record your voice with Vocaroo or OnlineVoiceRecorder
  6. Immersive Reader, included in OneNote Learning Tools, is a full-screen reading experience that increases the readability of content in OneNote documents
  7. Ferrite Recording Studio makes it fast and easy to record and edit audio, and includes powerful features such as effects and automation
  8. Voice Record Pro 7 is a professional voice recorder for iOS
  9. Textivate generates a wide range of interactive activities based on your own text and/or matching items. It works with texts of up to 500 words and/or up to 200 matching items
  10. Teach any language with YouTube + TeachVid
  11. LearningApps.org is a Web 2.0 application, to support learning and teaching processes with small interactive modules
  12. You can easily allow anyone to create a copy of a Google doc you have created by changing the end of the URL from /edit?usp=sharing to /copy: https://docs.google.com/document/d/1lQdVTkuiT6oi-CZ9A9y6rrCXOyoX8VeSgBw-sH94WHA/edit?usp=sharing -> https://docs.google.com/document/d/1lQdVTkuiT6oi-CZ9A9y6rrCXOyoX8VeSgBw-sH94WHA/copy
  13. Easily create any kind of Google Drive doc with the following URL shortcuts: doc.new, form.new, slides.new
  14. Use Ilini to learn French with the best videos on the web
  15. Create presentations, infographics, and more with Genially
  16. Create your own personal Emoji with Bitmoji
  17. Get popup translations for any website using Lingro
  18. Get easy-to-understand multilingual definitions with WordReference.com
  19. Exam.net is a robust, easy-to-use and secure exam platform
  20. Draftback is a Chrome extension that lets you play back any Google Doc’s revision history

How does Speech Recognition work, and how can it help us teach English? (Part 1)

Automatic Speech Recognition (ASR) seems to be everywhere these days, from your smart fridge to your smartphone and every device in between. But how does it actually work, and how can it be utilized by teachers of English?

In the first part of this blog post, we learn how speech is transformed from vibrations in the air to text on your screen. In the second part (coming soon!), we take a look at some of the ways speech recognition can be used as a teaching and testing tool in English language pedagogy.

Step 1. Analog to digital

Humans live in an analog world. When we speak to each other, we don’t transmit streams of numbers to each other; we vibrate our vocal cords, which create sound waves that vibrate other people’s eardrums, which send electrical signals into the brain, which the brain interprets as words. Unfortunately, computers can’t process sound waves without first converting them into a digital form, i.e. a stream of numbers. 

This is where a microphone and an analog-to-digital converter (ADC) come in. A microphone changes vibrations in the air into an analog electrical signal, and the ADC samples that signal thousands of times per second, turning it into a stream of numbers. However, this is all that hardware can do. It can convert an analog audio wave into a digital stream of numbers, but it has no idea what words (or other sounds) those numbers represent.

In order to recognize words, we need a computer program that can break the recorded sound down into its individual phonemes, and then connect those phonemes into the most likely combinations to form words.
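As a concrete illustration, here’s a minimal sketch, using only Python’s standard library, of what that digital stream of numbers looks like (the filename speech.wav is hypothetical):

```python
import struct
import wave

# Open a (hypothetical) 16-bit mono WAV file -- a typical format for
# speech captured via a microphone and an analog-to-digital converter.
with wave.open("speech.wav", "rb") as wav:
    sample_rate = wav.getframerate()   # samples per second, e.g. 16000
    raw_bytes = wav.readframes(wav.getnframes())

# Interpret the raw bytes as signed 16-bit integers. Each number is the
# amplitude of the sound wave at a single instant in time.
samples = struct.unpack("<" + "h" * (len(raw_bytes) // 2), raw_bytes)

print(sample_rate, "samples per second")
print("First ten samples:", samples[:10])
# The output is just numbers -- nothing here says "words" yet.
```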

Step 2. Identifying phonemes

A phoneme is the smallest significant part of a spoken word. The word “cat”, for example, consists of three phonemes, transcribed in ARPABET as: 

K AE T

What rule can we specify to allow our computer to determine whether a certain segment of a sound recording is the phoneme “AE” in “cat”? It is not an exact science. Different speakers pronounce the “AE” phoneme differently depending on their accent, tone of voice, vocal timbre, age, gender, and even emotional state.

Instead of trying to come up with a rule for what the “AE” phoneme sounds like, we can feed a Machine Learning (ML) algorithm thousands of hours of English speech, and allow it to figure out for itself what the “AE” phoneme is supposed to sound like. Then we can ask the algorithm:

Given that these sounds are all “AE”, is this sound also “AE”?

An important point to note here is that the algorithm is not trying to figure out which phonemes individual words are made up of. This process has already been completed by language experts, who have released dictionaries of word-phoneme mappings that can be used to train speech recognition engines.

What the ML algorithm is trying to do is map sounds to phonemes, and then connect those phonemes into the most likely combinations to form words.

It does this by chopping up phonetically annotated sound clips into very short (25ms) frames. Each frame is converted to a set of numbers which represent the different sound frequencies in the frame. The ML algorithm then learns to associate certain frames or combinations of frames with the corresponding parts of the phonetic transcription.
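Here is a rough sketch of that framing step using numpy. Real systems typically convert each frame into mel-frequency cepstral coefficients (MFCCs) or filter-bank energies rather than the raw spectrum used here, so treat this as illustrative only:

```python
import numpy as np

def frames_to_features(samples, sample_rate=16000, frame_ms=25, hop_ms=10):
    """Chop a 1-D audio signal into overlapping 25 ms frames and
    convert each frame into a vector of frequency magnitudes."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 400 samples
    hop_len = int(sample_rate * hop_ms / 1000)       # 160 samples
    window = np.hamming(frame_len)  # soften the frame edges

    features = []
    for start in range(0, len(samples) - frame_len + 1, hop_len):
        frame = samples[start:start + frame_len] * window
        # Magnitude of each frequency component in this frame.
        features.append(np.abs(np.fft.rfft(frame)))
    return np.array(features)

# One second of (fake) audio -> ~98 frames, each a vector of numbers
# the ML algorithm can learn to associate with phonemes.
fake_audio = np.random.randn(16000)
print(frames_to_features(fake_audio).shape)  # (98, 201)
```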

Every time the training program encounters the “AE” phoneme, it accommodates the new example in its Acoustic Model (AM) of the sound, thereby building up a comprehensive representation of what the “AE” phoneme should sound like.
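One drastically simplified way to picture an acoustic model is as a running summary of every “AE” example seen so far. Classic systems used Gaussian mixture models (modern ones use neural networks); the single diagonal Gaussian and the invented data below are just to make the idea concrete:

```python
import numpy as np

# Stand-ins for feature vectors taken from frames that annotators
# have labelled as the "AE" phoneme (invented data for illustration).
ae_examples = np.random.randn(1000, 13) + 2.0

# The "model" of AE is simply the mean and variance of its examples;
# each new training example nudges these statistics.
ae_mean = ae_examples.mean(axis=0)
ae_var = ae_examples.var(axis=0) + 1e-6  # guard against zero variance

def log_likelihood(frame, mean, var):
    """Score how well a feature vector matches the phoneme model
    (log density of a diagonal Gaussian)."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (frame - mean) ** 2 / var)

frame_like_ae = np.random.randn(13) + 2.0
frame_unlike_ae = np.random.randn(13) - 4.0
print(log_likelihood(frame_like_ae, ae_mean, ae_var))    # higher score
print(log_likelihood(frame_unlike_ae, ae_mean, ae_var))  # much lower score
```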

Step 3. Connecting phonemes

Once the algorithm has processed all of the training data, we can then ask it to identify an audio recording of the word “cat”. It will break the recording down and analyze it, as described above, in an attempt to identify its constituent phonemes.

However, because some phonemes (and consequently some words) have incredibly similar pronunciations, sometimes the computer’s best guess at the recording’s constituent phonemes isn’t accurate enough for reliable speech recognition. Fortunately, there is a way to improve the computer’s accuracy.

We can narrow down the possible phoneme choices by employing a statistical model called a Hidden Markov Model (HMM). An HMM uses statistical probability to determine the likelihood of a future state (the next phoneme in the sound) given the current state (the current phoneme). 

When it comes to phonemes in the English language, certain combinations are much more likely than others. For example, the phoneme “Z” in “zebra” almost never follows the phoneme “K” in “cat”, but “AE” in “cat” very often follows “K”.
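Those tendencies can be pictured as a table of transition probabilities, P(next phoneme | current phoneme). The numbers below are invented purely for illustration; real models estimate them from large annotated corpora:

```python
# Hypothetical transition probabilities between ARPABET phonemes:
# P(next phoneme | current phoneme). Invented numbers for illustration.
transitions = {
    "K": {"AE": 0.31, "IH": 0.22, "Z": 0.001},
    "AE": {"T": 0.28, "N": 0.19},
}

def sequence_probability(phonemes):
    """Multiply the transition probabilities along a phoneme sequence,
    with a tiny floor for pairs the table has never seen."""
    prob = 1.0
    for current, nxt in zip(phonemes, phonemes[1:]):
        prob *= transitions.get(current, {}).get(nxt, 1e-6)
    return prob

print(sequence_probability(["K", "AE", "T"]))  # plausible: "cat"
print(sequence_probability(["K", "Z", "T"]))   # vanishingly unlikely
```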

When a speech recognizer is attempting to map a sound to its constituent words and phonemes, it will give precedence to likely combinations over unlikely or impossible ones. It knows what the likely combinations are by consulting a statistical model of which word sequences actually occur in the language, built from large text corpora and known as the Language Model (LM).

For example, the sentence “Dolphins swim” is much more likely to occur in English than “Doll fins swim”, even though “dolphins” and “doll fins” consist of exactly the same sequence of phonemes.
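A toy bigram model shows how a recognizer can prefer one transcription over another even when the underlying phonemes are identical. The counts below are invented; a real LM is estimated from enormous amounts of text:

```python
# Invented bigram counts standing in for statistics gathered from a
# huge text corpus. "<s>" marks the start of a sentence.
bigram_counts = {
    ("<s>", "dolphins"): 400,
    ("dolphins", "swim"): 900,
    ("<s>", "doll"): 30,
    ("doll", "fins"): 2,
    ("fins", "swim"): 5,
}
TOTAL = 1_000_000  # pretend corpus size

def score(sentence):
    """Product of bigram probabilities, with a small floor for
    unseen pairs so the score never hits exactly zero."""
    words = ["<s>"] + sentence.split()
    prob = 1.0
    for pair in zip(words, words[1:]):
        prob *= bigram_counts.get(pair, 0.01) / TOTAL
    return prob

print(score("dolphins swim") > score("doll fins swim"))  # True
```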

Step 4. Hello computer!

We now have a computer program that can analyze recorded sound and convert it into the most likely sequence of words.

But how does all of this help English learners to improve their speaking skills? Read Part 2 to find out! (Coming soon!)