I was honored to receive a Best of JALT award for my presentation on Apps 4 EFL at the Nakasendo English Conference 2015. I’d like to thank the organizers of Saitama JALT for inviting me to give the plenary presentation, and for nominating me for this award. In particular I’d like to thank Matt Shannon, Tyson Rode, and Rob Rowland – you guys are awesome! Thanks!
I’ve just finished reading the excellent Language Learning with Technology by Graham Stanley. It’s packed full of useful ideas for how best to integrate technology into the ESL classroom, and I highly recommend it.
However, having eagerly loaded up almost all the links in the book, I encountered several problems, which are well-known to anyone who’s ever surfed the web. I should acknowledge here that these problems apply equally to all publications relating to web-based technology, including my own.
The trouble with books is that their text is unapologetically static. Once a book is published, once the ink is dry on the dead cellulose wood fibers, it can’t be changed. At least, not until a new edition is released.
Conversely, the web is unpredictably dynamic. URLs which exist on Monday may completely disappear by Friday, or take us somewhere we never intended or expected to go. Link rot is defined by Wikipedia as:
the process by which hyperlinks on individual websites or the Internet in general point to web pages, servers or other resources that have become permanently unavailable
Link rot is a major issue when writing anything about web-based technology. This problem exists not just in relation to dead tree publications, but also with web-based ones, although the latter can be more easily updated.
Link rot is the main reason why it’s inadvisable, to say the very least, to publish anything that relies mainly on the availability and predictability of web resources. It’s one of the reasons why search engines like Google eventually won out against web directories like Yahoo! The web is in constant flux, and trying to write down, describe, or analyze any website, excluding perhaps the web’s most permanent destinations, is an exercise in futility.
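Checking a publication’s reading list for rot is also easy to automate. As a minimal sketch, assuming nothing beyond Python’s standard library (the URLs below are placeholders, not recommendations of any real resource), a script can issue a HEAD request to each link and report the ones that no longer resolve:

```python
# Minimal link-rot checker: reports URLs that no longer resolve cleanly.
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def check_link(url, timeout=5):
    """Return a short status string for a single URL."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-checker"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except HTTPError as e:
        return f"BROKEN (HTTP {e.code})"   # e.g. 404 Not Found, 410 Gone
    except URLError as e:
        return f"BROKEN ({e.reason})"      # DNS failure, refused connection, etc.

reading_list = [
    "https://example.com/",
    "https://example.com/this-page-never-existed",
]

for url in reading_list:
    print(url, "->", check_link(url))
```

Of course, even a clean 200 response doesn’t guarantee the original content is still there: a parked or hacked domain will happily answer with a 200 too, so a spot check by eye is still needed.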
Deprecation

No, I’m not talking about British people’s infuriating refusal to acknowledge compliments. I’m talking about what TechTarget defines as follows:
In IT, deprecation means that although something is available or allowed, it is not recommended or that, in the case where something must be used, to say it is deprecated means that its failings are recognized
Perhaps the most infamous example of a deprecated web-based technology in recent years has been Adobe Flash. Once the only way for webmasters to easily deploy games, videos, audio, and a whole host of other snazzy features on their sites, Flash is now regarded with disdain by surfers, web browsers, and tech giants alike.
Almost anyone who has ever used a smartphone or a tablet can tell you: Flash just doesn’t work on mobile. Unfortunately, the appeal of Flash-based apps hasn’t faded as quickly as Apple would have liked. This means that teachers who have recently furnished their classrooms with a set of iDevices have to be extra careful about which web-based resources they prescribe or recommend, and must be prepared for disappointment when they see the old familiar message: This page requires Macromedia Flash Player to run correctly.
Squatters and Hackers
If squatters are the opportunistic freeloaders who jump into your house and claim it for themselves the minute you vacate it, hackers are the guys who sneak in through the hidden back entrance, change the locks, and stick their name on your front door just to prove a point.
Both hackers and squatters are a major issue in relation to web-based resources. The domain bearing my own name, paulraine.com, is a good example of domain squatting. Back in 2006, I owned it, but later let the registration slip. Now it is “parked”, and if I ever want to use it again, I will probably have to pay the current owner an exorbitant sum to do so. This happens a lot with domains that have at one point been registered: if the owner fails to renew, the domain gets scooped up by internet squatters and used to advertise tenuously related sites and services. This can happen at any time, and we must be careful that sites we recommend to colleagues or students haven’t been surreptitiously converted into money-making portals.
If your domain hasn’t been taken over by squatters, it may still have been invaded by hackers, which seems to be the unfortunate case with the official companion site to Language Learning with Technology (languagelearningtechnology.com), which, as of November 2016, seems to have been “pwned” by a certain “gunz_berry”:
Bear in mind this is a book that was published only three years ago, in 2013. Who knows how long the companion site has been displaying the “Hacked by gunz_berry” message. Unfortunately, many of the ideas and suggestions in the book refer to the companion site, so it’s a real shame that it’s been compromised. I hope that Cambridge University Press can get it back up and running again soon (I’ve already tweeted the author to let him know the site is down).
Ultimately, there’s not a lot ed-tech authors can do about many of these problems. To avoid link rot, it’s best to go with sites that have been around for at least a couple of years, but even then, they can disappear suddenly and without warning.
We should also be careful not to endorse deprecated technologies, but the pace of technological progress is so fast, that even newer innovations are becoming deprecated very quickly.
As for squatters and hackers, we can only try to ensure that our security arrangements are up-to-date, and we remember to pay our domain renewal fees.
Perhaps a non-technical solution is the best answer to these technical challenges, and Graham Stanley manages it quite well: focus on types of technology rather than specific instances; focus on ESL activities rather than ESL sites; and give alternatives and variations for every suggestion, to ensure that ideas can still be applied even if the technology itself fails in certain instances.
I was somewhat surprised by several comments on social media in response to my last post, The War Against Machine Translation. Many of the comments spoke out in defense of machine translation (MT). In retrospect, some of the claims I made in my first post were a little far-reaching. I’d like to address some of the points made in response to that post, and also clarify and moderate some of the initial claims I made.
I also want to preface this follow-up by stating that I am an avid proponent of Computer Assisted Language Learning (CALL). I have spent the last few years developing a website full of activities and tools for teachers and learners of English as a Foreign Language. However, I believe that anything that can be accurately classed as CALL must by definition assist the learning of a language.
Learning technology should never completely replace the learner. Unfortunately, many students view and use the output of MT as a complete replacement for their own work. In some cases, entire reports are written in L1, pasted into an MT tool, and the output is submitted as the student’s “own work”. It would be very difficult to say that the students in these cases have learned anything about English. In many cases, students fail to even read the result of MT before submitting it, despite the many basic grammatical errors it contains (especially with Japanese to English translation).
Having hopefully clarified my position somewhat, I’ll move on to respond to some of the comments made in relation to my initial post.
Machine Translation is more accurate for language pairs other than English/Japanese
One of my main arguments against the use of MT is that it is simply inaccurate, and is more likely to produce word salad than grammatically correct sentences. Some commenters pointed out that Google Translate, and other MT tools, do much better for other language pairs, particularly the more syntactically and lexically related European languages.
One of the sentences I used in my initial post was “How many close friends do you have?”. After feeding the natural Japanese translation for this sentence (親友は何人いる？) into Google Translate, it output “Best friend How many people”, which is a somewhat unsatisfactory result. However, repeating the same experiment with German (Wie viele enge Freunde haben Sie?) and Italian (Quante amiche intime hai?), Google managed to get the translation bang on, both times outputting “How many close friends do you have?”. Yet it failed again for Arabic (How you have close friends?) and Russian (How do you close friends?).
“How many close friends do you have?” is a relatively short and simple sentence. How does MT perform with longer, more complex sentences or paragraphs?
I fed natural Japanese, French, and Italian translations of the following English paragraph into Google Translate:
One of the reasons Twitter is popular in Japan is a characteristic of Japanese itself: Japanese uses ideograms which enable it to convey more information in just 140 characters than other languages, not counting Chinese. Incidentally, the Japanese version of this sentence is written with exactly 140 characters. How many characters does it take in other languages?
For the Japanese translation, the tool output:
One of Twitter popularity of reason in Japan is on the characteristics of the language of Japanese. Japanese use ideographic characters, although not I the enemy is in Chinese, it is possible to incorporate a lot of content to 140 characters compared to many other languages. By the way, I wonder, but he has been written in Japanese version is just 140 characters of this sentence, become what characters in other languages?
For the French translation, the tool output:

One of the reasons that make the success of Twitter in Japan due to a peculiarity of the Japanese language. Compared to many other languages, Japanese, with its ideograms, can convey a lot of information in 140 characters, without competing with the Chinese. Incidentally, the Japanese version of the sentence is written in exactly 140 characters. But how many characters does it do so in other languages?
For the Italian translation, the tool output:

One of the reasons why Twitter is so popular in Japan is a very characteristic of the Japanese: Japanese uses ideograms that allow you to convey more information in 140 characters than other languages, without considering the Chinese. Inter-alia, the Japanese version of this sentence is written in exactly 140 characters. How many characters it wants in other languages?
Further research would be required to determine exactly how accurate MT is for any given language pair, but from these preliminary tests, it would seem that the less related the languages, the less accurate the translations. MT seems to do much better with more closely related language pairs, regardless of length or syntactical complexity.
The best approach to MT is not to ban it, but to highlight its (potential) inaccuracies. This is the correct approach regardless of the motivation or level of the students
In my initial post, I argued that it would be difficult to ban MT altogether (although we could reduce the opportunity to use MT by eliminating coursework, for example). If we ban smart phones, on which students can covertly use MT, we completely discard the other, more positive technological affordances they provide. Instead, I suggested that we could highlight its inaccuracies to more highly motivated students.
The reason why I restricted this approach to more “highly motivated” students is because they have a desire to improve their English accuracy and idiomaticity, whereas students with low motivation often simply want to meet the course requirements and receive a passing grade in the easiest possible way. Some unmotivated students see MT as a quick and easy way to produce the required written assignments by writing them entirely in L1 and letting MT do the rest.
If you allow or even endorse the use of MT, when it comes to grading submissions, what are you actually grading? When MT produces good results, the student may unjustly receive a good grade. When MT produces bad results, the teacher may waste their time giving corrections on English mistakes the student hasn’t even made! Although I’m sure the Google engineers would be grateful for the feedback.
Fully featured MT is not the same as pop-up translation
Some commenters highlighted the usefulness of websites such as Rikai.com, which provides automatic pop-up translations of words when a user hovers their mouse over them. There are many other tools offering similar functionality, including PopJisho, ReadLang, Rikai-chan for Firefox, Rikai-kun for Chrome, and my own Pop Translation tool. However, there is a substantive difference between these tools and fully featured MT such as Google Translate.
Pop-up translation tools provide definitions on a word-by-word basis, rather than attempting to translate whole sentences. Allowing students to use pop-up translation to read and understand a passage in English is different to allowing them to translate the whole passage into their L1, and perhaps not even read the English version. Pop-up translation cannot be used to unilaterally produce a complete English passage from the student’s L1, or produce an equivalent passage in the student’s L1 from English. When using pop-up translation to read an English passage, students still have to read the English passage to decipher its meaning. Pop-up translation simply provides a more convenient and powerful alternative to a traditional dictionary.
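The distinction is easy to see in code. Here is a minimal sketch of the word-by-word approach, using a made-up three-entry glossary rather than any real tool’s dictionary:

```python
# Toy word-by-word glosser, in the spirit of pop-up translation tools:
# each English word gets an L1 gloss, but the learner still has to
# parse the English sentence for themselves. (The glossary is a
# made-up three-entry sample, not any real tool's data.)
GLOSSARY = {
    "close": "親しい",
    "friends": "友達",
    "have": "持つ",
}

def gloss(sentence):
    """Return (word, gloss-or-None) pairs; no sentence-level translation."""
    return [(w, GLOSSARY.get(w.strip("?,.").lower())) for w in sentence.split()]

for word, g in gloss("How many close friends do you have?"):
    print(word, "->", g or "(no gloss)")
```

Note that nothing here ever produces a whole L1 sentence: the output is a list of per-word hints, which is precisely why this kind of tool assists reading rather than replacing it.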
In the preliminary tests I conducted, MT performed much better when translating closely related languages, such as English and French, or English and Italian. It did much less well with the English/Russian and English/Arabic pairs, and quite poorly for English/Japanese.
Fully featured MT, such as that provided by Google Translate, may not be helpful for language learning where students view the output as a replacement for their own work. In the case where a student writes an assignment in L1, pastes it into Google Translate, and submits the output without even reading it, it would be difficult to imagine that any language learning has taken place. The tool is not being used to assist learning, but rather to avoid learning.
Teachers who permit or endorse the use of MT for English written assignments run the risk of unfairly rewarding students where the MT produces good results, and wasting time giving feedback to students where MT produces bad results.
Finally, fully featured MT, such as Google Translate, must be distinguished from pop-up translation tools such as those provided by Rikai.com. Pop-up translation tools do not attempt to translate sentences or paragraphs, but merely provide a more powerful and convenient alternative to traditional dictionaries. It is hoped that they assist the learning of vocabulary in the sense that students will read the English passage, encounter a word or phrase they do not understand, see the pop-up translation, and apply the meaning to the English word in that particular context.
As language teachers, it seems that every day we have to battle the pernicious force of machine translation (MT). In 1997, Alta Vista launched Babelfish, one of the first web-based interfaces for MT. Twenty years later, it seems like every web portal, social network, and search engine offers some kind of automatic translation tool. Even LINE, the kawaii messaging service ubiquitous in Japan, offers an instant translation function, which behaves just like regular chat.
But despite its apparent popularity, and arguable usefulness as an assistive tool to human translators, MT is not a helpful technology for language teachers or learners. It is at best a nuisance, and at worst strongly detrimental to students’ second language acquisition.
The main problems with MT with regard to language pedagogy are that:
It is inaccurate, especially for idiomatic expressions; and
It negates students’ opportunities for language learning
The first of these problems can be easily observed when typing any reasonably idiomatic expression into Google Translate, perhaps the best free web-based MT available right now. Unfortunately, as we shall see, that’s not saying very much.
In this example we see the translator mess up the word order, and also render the verb “drink” as the noun “drink”. “I went to drink a beer with friends” is the more natural human-produced translation for this sentence.
In this example, again, the word order is completely jumbled, and the singular “best friend” doesn’t make sense when the question requires a plural response. Once again, the human generated translation is far superior: “How many close friends do you have?”.
I won’t labor the point here, but you can do your own experiments with any of the currently available MT tools, and you will inevitably come to the same conclusion: MT is still quite bad. Although it can usually convey the gist of the input sentence, it clearly lacks eloquence, idiomaticity, and accuracy.
What to do about it
Having concluded that MT is not a good pedagogical tool, the question arises as to how we can eliminate its use both inside and outside the language classroom.
Ban smart phones in the classroom?
Within the classroom, you could prevent the influence of MT by banning smart phones entirely. But if you do this, you are indiscriminately blocking off more fruitful avenues to autonomous learning, along with many other positive affordances offered by mobile devices.
Automatic MT detection
Outside the classroom, your power over students is limited, especially over those more inclined to take the “easy” option of MT in the first place. In addition, although we may strongly suspect a student of using MT outside class, it is often difficult to prove. Although progress is being made in developing MT detection tools, it is still a nascent technology. Most of the solutions available at the moment require both the source text and the translated text in order to attempt to detect MT.
Manual MT detection
It can be possible, however, to manually detect and prove machine translation if you have a working knowledge of your students’ L1.
In a recent low-level speaking class, I asked students to record and transcribe their answers to a 1-minute speaking task. One student’s answer seemed suspiciously like “translationese”. One sentence in particular stood out: “Mother of rice is very delicious”. I guessed that the student had tried to translate the Japanese sentence “お母さんのご飯はとても美味しい” which would be more naturally rendered as “My mother’s rice is very tasty” or more idiomatically as “My mother makes very good rice”.
After inputting my hypothesis into Google Translate, I was presented with the exact same broken English as the student had used in his report. He was well and truly “busted”!
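The comparison step of this kind of manual check can be sketched in a few lines. Assuming the MT output for your hypothesized L1 source has been pasted in by hand (the sketch calls no translation API, and the 0.9 threshold is an arbitrary value for illustration), a high similarity ratio flags likely MT use:

```python
# Sketch of the comparison step in manual MT detection: if the MT output
# for a hypothesized L1 source is near-identical to the student's text,
# that is strong evidence of MT use. Both strings are supplied by hand.
from difflib import SequenceMatcher

def likely_mt(student_text, mt_output, threshold=0.9):
    """Return (flagged, similarity) for a student text vs an MT output."""
    ratio = SequenceMatcher(None, student_text.lower(), mt_output.lower()).ratio()
    return ratio >= threshold, ratio

student = "Mother of rice is very delicious"
mt      = "Mother of rice is very delicious"  # what the MT tool returned

flagged, score = likely_mt(student, mt)
print(flagged, round(score, 2))  # identical strings are always flagged
```

This only automates the final comparison; the hard part, guessing the original L1 sentence, still requires a working knowledge of the students’ L1.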
Of course, detecting and subsequently proving the use of MT for a pile of 20 or 30 written reports is a huge waste of time. However, because the temptation to use MT is so high, especially for low-level, low-motivation students, simply instructing students not to do so can be ineffective.
The use of MT became so prevalent in one of my lower-level writing classes that I decided to eliminate coursework altogether, and administer every written assessment under exam conditions. This was the only way I found I could guarantee that students were not using MT in their written assignments.
Highlight the inadequacy of MT
An alternative solution for more highly motivated classes (those that actually care about developing their English accuracy and idiomaticity) is to highlight how bad MT can be, and in the process hopefully dissuade them from using it altogether. One way to do this is to input some English phrases into an MT tool and translate them into your students’ L1. Students will then understand in a more direct way how bad some of the translations can be.
One day, machine translation may be accurate enough to make language teachers redundant, along with translators, interpreters, subtitlers, and a host of other language-related professions. It may cause an industry shake-up as far-reaching as self-driving cars. But that day is unlikely to be any time in the near future, despite how far we’ve come in recent years. The current generation of MT tools often produce inaccurate and unidiomatic translations. MT is unhelpful for English language pedagogy, and steps should be taken to detect and prevent students’ use of MT.
Today I mark 10 years living and working in Japan. To commemorate the occasion, here is one of my first blog posts from October 2006:
Some things about Japan that I’ve noticed:
The plugs don’t have switches, so if you want to turn something off, you have to physically unplug it
Semi-automatic doors: they lack motion sensors and only open when you press the button
Pelican crossings have no buttons to press
When it rains, everyone uses an umbrella
There are little racks in which to put your wet umbrella when entering shops
The Japanese are incredibly polite: one night some of us got lost, and when we asked for directions, we were escorted by a stranger for a good half-mile to the train station, which was the opposite direction to which he had been walking
The local gaijin pub, Mattari, serves fish and chips
The Japanese like queuing even more than the British. You might even expect to find them queuing on the platform for trains
There are lots of bikes
Pachinko parlors: buy yourself a tub full of ball bearings and pour them into an inverted pinball machine. Adopt an expression of post-lobotomy desolation. These places are completely insane.