The video for my recent presentation at JALT International conference is now available! Error Spotter is a new web-app for improving students’ recognition of English grammatical errors.
BYOD (Bring Your Own Device) is the only solution for educators who wish to use technology in the classroom when access to a CALL lab or institutional set of devices is not available.
Almost all university freshmen in Japan now possess a smartphone of some description. These are generally either iPhones running iOS or OEM handsets running Android. iOS seems to be somewhat more popular in Japan, but there are still a fair number of students with Android handsets, and a few with rarer hardware/software combinations.
If you are relying on BYOD for your tech-powered teaching, the fact that not all your students will have the exact same device is where your problems begin, but not, unfortunately, where they end.
OS fragmentation is “a barrier to a consistent user experience, a security risk, and a challenge for app developers.” It is caused by mobile device owners’ unwillingness or inability to update to the latest version of their device operating system whenever an update is released. This problem is particularly pronounced for Android handsets, but also exists in relation to iOS.
This might not be a problem for individual users, but it becomes a major issue when leading a group of students in lock-step through a structured learning process. The fact that the “user experience” is inconsistent means that there is no single set of instructions that all students will be able to follow. The fact that developing for every possible OS/handset combination is a challenge means that many apps only run on the latest OS versions of the most popular handsets.
So, although every student may possess a smartphone, not every smartphone will be able to run the cool CALL app you have in mind. Even if they can, you will either have to give individual support to every student in helping them set up the activity, or create multiple iterations of the instructions to cover every OS/device eventuality.
Unlike institutionally owned devices, which can be easily wiped after the user logs out or finishes the class, student owned devices contain a trove of personal data: photos, messages, appointments, contact information, and more.
Most students would probably feel uncomfortable sharing at least some of this information with their teachers. So when we walk around the room monitoring students to make sure they are on-task, or helping them set up the mobile-based CALL activities, we have to be careful not to inadvertently peek into the personal lives behind the tiny glowing screens in their hands.
Ever since Apple overhauled the iOS notification system, it seems that every app and its dog wants to send me updates, offers, news and status reports. While I endeavor to disable notifications for any app that doesn’t absolutely need them, my students tend to be less discerning. There’s nothing worse than setting up a class activity on mobile devices, only to have students navigate away from the app or site the moment a giant emoji-laden message drops down from the top of the screen. Even the students who diligently dismiss annoying messages from friends must find them a distraction from the learning process.
And I haven’t even begun to mention the students who will double click the home button and go back to Candy Crush the minute you’re not hovering over their shoulders and spying on their screens.
The modified version of Maslow’s hierarchy of needs now puts battery life right at the bottom of the pyramid, directly below “Wi-Fi”. Yes, this is a sarcastic dig at millennials’ seeming inability to pull themselves away from their devices and do something healthy like… climb a tree. However, in the CALL-based EFL classroom, it is a very pertinent observation.
Battery life hasn’t really improved as much as we’d like in recent years, and certainly not as much as storage capacity or processor speeds. Battery life simply isn’t subject to Moore’s law: it depends on electrochemistry, not on how many transistors can be squeezed onto a chip.
This means that students, who are already heavy mobile users, may simply not have enough juice to utilize their devices during study time as well as break time. Where this is the case, you’d better hope that you have enough power outlets and charging cables to get them hooked back up to the mainline.
Capped data plans on mobile are generally the norm these days. There may be actual technological reasons behind this, but the cynical side of me suspects it’s just the carriers trying to milk heavy users for more money.
In any event, if you don’t have an easily accessible Wi-Fi network in your classroom (which isn’t restricted to just teachers) and you’re asking students to use their own data connections to engage with your chosen app or website, you have to be careful not to inadvertently incur additional charges for your students. Usually they will be quick to let you know when this is the case, but it can be yet another barrier to the successful exploitation of BYOD.
If you can overcome the difficulties presented by various models of various handsets running various versions of various operating systems, and all students have a fully juiced up device with plenty of bandwidth, and they are able to pull themselves away from Candy Crush, and ignore messages from their friends in other classes, then BYOD can be a good way to gain access to mobile technology in the classroom.
However, we must be careful not to appropriate students’ personal (and often private) devices as our own teaching tools, no matter how cool that new ELT app may be.
I was honored to receive a Best of JALT award for my presentation on Apps 4 EFL at the Nakasendo English Conference 2015. I’d like to thank the organizers of Saitama JALT for inviting me to give the plenary presentation, and for nominating me for this award. In particular I’d like to thank Matt Shannon, Tyson Rode, and Rob Rowland – you guys are awesome! Thanks!
- The Free Music Archive is a great place to find music for apps, games, and other projects
- LiveInk allows you to easily render any text in a “brain friendly” way
- FreeHostia allows you to host your own website, blog, or bulletin board for free
- Roll20 is a suite of easy-to-use digital tools that expand pen-and-paper game play
- Pics4Learning provides free clip art for educational resources..
- ..while the Library of Congress can be used to find public domain images of historical significance..
- ..and ELT Pics is a Flickr photo stream containing over 25,000 pictures for teachers of ESL
- LibSyn provides podcast hosting for only $5 a month
- Hopscotch helps you learn to code through creative play..
- ..and the Learn How YouTube channel contains video tutorials for beginner programmers
- Imiwa is a Japanese dictionary for iOS
- Smart Smart produces many different apps for English study
- TEDict is an iOS app which allows you to use TED videos for listening dictation practice
- News in English provides English news in three different levels of difficulty
- Herstory is an innovative and interactive detective game
- EFL Technologies provide a number of free apps for learning the NGSL, GSL, NAWL and AWL
- English Test Prep Review provides unofficial guides and review materials for TOEFL, TOEIC, and other standardized tests
- Leander’s Lexicon Extractor is a free online tool that allows students and teachers of English to quickly extract and list important vocabulary from an inputted text
- CleverBot allows students to practice English conversation with an artificially intelligent chat bot
- Bloomin’ Apps lists apps and websites for every level of Bloom’s Revised Taxonomy
- The Rule of 6 (eBook) propounds a simple framework for how to teach with an iPad
- The WikiTude SDK allows you to build your own augmented reality app..
- ..while Aurasma enables anyone to easily create, manage, and track augmented reality experiences
- Documents 5 allows you to read, listen, view and annotate many kinds of documents on your iPad or iPhone
I’ve just finished reading the excellent Language Learning with Technology by Graham Stanley. It’s packed full of useful ideas for how best to integrate technology with the ESL classroom, and I highly recommend it.
However, having eagerly loaded up almost all the links in the book, I encountered several problems, which are well-known to anyone who’s ever surfed the web. I should acknowledge here that these problems apply equally to all publications relating to web-based technology, including my own.
The trouble with books is that their text is unapologetically static. Once a book is published, once the ink is dry on the dead cellulose wood fibers, it can’t be changed. At least, not until a new edition is released.
Conversely, the web is unpredictably dynamic. URLs which exist on Monday may completely disappear by Friday, or take us somewhere we never intended or expected to go. Link rot is defined by Wikipedia as:
the process by which hyperlinks on individual websites or the Internet in general point to web pages, servers or other resources that have become permanently unavailable
Link rot is a major issue when writing anything about web-based technology. This problem exists not just in relation to dead tree publications, but also with web-based ones, although the latter can be more easily updated.
Link rot is the main reason why it’s inadvisable, to say the very least, to publish anything that relies mainly on the availability and predictability of web resources. It’s one of the reasons why search engines like Google eventually won out against web directories like Yahoo! The web is in constant flux, and trying to write down, describe, or analyze any website, excluding perhaps the web’s most permanent destinations, is an exercise in futility.
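One way to keep link rot at bay in your own writing is to spot-check cited URLs periodically. Below is a minimal sketch using only Python’s standard library; the URLs are placeholders, and a real checker would also want to follow redirects and retry transient failures before declaring a link dead:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def fetch_status(url, timeout=5):
    """Return the HTTP status code for a URL, or None if unreachable."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-rot-check"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code          # e.g. 404 Not Found, 410 Gone
    except URLError:
        return None            # DNS failure, refused connection, etc.

def classify(status):
    """Map a status code to a rough link-rot verdict."""
    if status is None:
        return "unreachable"
    if 200 <= status < 300:
        return "ok"
    if status in (404, 410):
        return "rotten"
    return "suspect"           # auth walls, server errors, and so on

if __name__ == "__main__":
    for url in ["https://example.com/", "https://example.com/no-such-page"]:
        print(url, "->", classify(fetch_status(url)))
```

Run over the link list of a book like this one, a script of this sort would flag dead companion sites long before a reader stumbles into them.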
No, I’m not talking about British people’s infuriating refusal to acknowledge compliments. I’m talking about what TechTarget defines as the following:
In IT, deprecation means that although something is available or allowed, it is not recommended or that, in the case where something must be used, to say it is deprecated means that its failings are recognized
Perhaps the most infamous example of a deprecated web-based technology in recent years has been Adobe Flash. Once the only way for webmasters to easily deploy games, videos, audio, and a whole host of other snazzy features on their sites, Flash is now regarded with disdain by surfers, web browsers, and tech giants alike.
Almost anyone who has ever used a smartphone or a tablet can tell you: Flash just doesn’t work on mobile. Unfortunately, the appeal of Flash-based apps hasn’t faded as quickly as Apple would have liked. This means that teachers who have recently furnished their classes with a set of iDevices have to be extra careful about which web-based resources they prescribe or recommend, and must be prepared for disappointment when they see the old familiar message: This page requires Macromedia Flash Player to run correctly.
Squatters and Hackers
If squatters are the opportunistic freeloaders who jump into your house and claim it for themselves the minute you vacate it, hackers are the guys who sneak in through the hidden back entrance, change the locks, and stick their name on your front door just to prove a point.
Both hackers and squatters are a major issue in relation to web-based resources. The domain bearing my name, paulraine.com, is a good example of domain squatting. Back in 2006, I owned it, but later let the registration slip. Now it is “parked”, and if I ever want to use it again, I will probably have to pay the current owner an exorbitant sum to do so. This happens a lot with domains that have at one point been registered. If the owner fails to renew, the domain gets scooped up by internet squatters and used to advertise tenuously related sites and services. This can happen at any time, and we must be careful that sites we recommend to colleagues or students haven’t been surreptitiously converted into money-making portals.
If your domain hasn’t been taken over by squatters, it may still have been invaded by hackers, which seems to be the unfortunate case with the official companion site to Language Learning with Technology (languagelearningtechnology.com), which, as of November 2016, seems to have been “pwned” by a certain “gunz_berry”:
Bear in mind this is a book that was published only three years ago, in 2013. Who knows how long the companion site has been displaying the “Hacked by gunz_berry” message. Unfortunately, many of the ideas and suggestions in the book refer to the companion site, so it’s a real shame that it’s been compromised. I hope that Cambridge University Press can get it back up and running again soon (I’ve already tweeted the author to let him know the site is down).
Ultimately, there’s not a lot ed-tech authors can do about many of these problems. To avoid link rot, it’s best to go with sites that have been around for at least a couple of years, but even then, they can disappear suddenly and without warning.
We should also be careful not to endorse deprecated technologies, but the pace of technological progress is so fast, that even newer innovations are becoming deprecated very quickly.
As for squatters and hackers, we can only try to ensure that our security arrangements are up-to-date, and we remember to pay our domain renewal fees.
Perhaps a non-technical solution is the best answer to these technical challenges, and Graham Stanley manages it quite well: focus on types of technology rather than specific instances; focus on ESL activities rather than ESL sites; and give alternatives and variations for every suggestion, to ensure that ideas can still be applied even if the technology itself fails in certain instances.
I was somewhat surprised by several comments on social media in response to my last post, The War Against Machine Translation. Many of the comments spoke out in defense of machine translation (MT). In retrospect, some of the claims I made in my first post were a little far-reaching. I’d like to address some of the points made in response to that post, and also clarify and moderate some of the initial claims I made.
I also want to preface this follow-up by stating that I am an avid proponent of Computer Assisted Language Learning (CALL). I have spent the last few years developing a website full of activities and tools for teachers and learners of English as a Foreign Language. However, I believe that anything that can be accurately classed as CALL must by definition assist the learning of a language.
Learning technology should never completely replace the learner. Unfortunately, many students view and use the output of MT as a complete replacement for their own work. In some cases, entire reports are written in L1, pasted into an MT tool, and the output is submitted as the student’s “own work”. It would be very difficult to say that the students in these cases have learned anything about English. In many cases, students don’t even read the MT output before submitting it, despite the many basic grammatical errors it contains (especially with Japanese-to-English translation).
Having hopefully clarified my position somewhat, I’ll move on to respond to some of the comments made in relation to my initial post.
Machine Translation is more accurate for language pairs other than English/Japanese
One of my main arguments against the use of MT is that it is simply inaccurate, and is more likely to produce word salad than grammatically correct sentences. Some commenters pointed out that Google Translate, and other MT tools, do much better for other language pairs, particularly the more syntactically and lexically related European languages.
One of the sentences I used in my initial post was “How many close friends do you have?”. After feeding the natural Japanese translation for this sentence (親友は何人いる？) into Google Translate, it output “Best friend How many people”, which is a somewhat unsatisfactory result. However, repeating the same experiment with German (Wie viele enge Freunde haben Sie?) and Italian (Quante amiche intime hai?), Google managed to get the translation bang on, both times outputting “How many close friends do you have?”. However, it failed again for Arabic (How you have close friends?) and Russian (How do you close friends?).
“How many close friends do you have?” is a relatively short and simple sentence. How does MT perform with longer, more complex sentences or paragraphs?
I fed natural Japanese, French, and Italian translations of the following English paragraph into Google Translate:
One of the reasons Twitter is popular in Japan is a characteristic of Japanese itself: Japanese uses ideograms which enable it to convey more information in just 140 characters than other languages, not counting Chinese. Incidentally, the Japanese version of this sentence is written with exactly 140 characters. How many characters does it take in other languages?
For the Japanese translation, the tool output:
One of Twitter popularity of reason in Japan is on the characteristics of the language of Japanese. Japanese use ideographic characters, although not I the enemy is in Chinese, it is possible to incorporate a lot of content to 140 characters compared to many other languages. By the way, I wonder, but he has been written in Japanese version is just 140 characters of this sentence, become what characters in other languages?
For the French translation, it output:
One of the reasons that make the success of Twitter in Japan due to a peculiarity of the Japanese language. Compared to many other languages, Japanese, with its ideograms, can convey a lot of information in 140 characters, without competing with the Chinese. Incidentally, the Japanese version of the sentence is written in exactly 140 characters. But how many characters does it do so in other languages?
And for the Italian translation, it output:
One of the reasons why Twitter is so popular in Japan is a very characteristic of the Japanese: Japanese uses ideograms that allow you to convey more information in 140 characters than other languages, without considering the Chinese. Inter-alia, the Japanese version of this sentence is written in exactly 140 characters. How many characters it wants in other languages?
Further research would be required to determine exactly how accurate MT is for any given language pair, but from these preliminary tests, it would seem that the less related the languages, the less accurate the translations. MT seems to do much better with more closely related language pairs, regardless of length or syntactical complexity.
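The comparisons above were eyeballed, but the intuition can be made roughly quantitative. Here is a minimal sketch using Python’s standard library that scores an MT output against a human reference by word-sequence overlap — a crude stand-in for proper MT evaluation metrics such as BLEU, using the outputs observed in the earlier tests:

```python
from difflib import SequenceMatcher

def surface_similarity(candidate, reference):
    """Word-sequence overlap between an MT output and a human reference
    translation: 0.0 = nothing shared, 1.0 = identical."""
    a = candidate.lower().split()
    b = reference.lower().split()
    return SequenceMatcher(None, a, b).ratio()

reference = "How many close friends do you have?"

# MT outputs from the "close friends" tests above:
for source, output in [
    ("German",   "How many close friends do you have?"),
    ("Russian",  "How do you close friends?"),
    ("Japanese", "Best friend How many people"),
]:
    print(f"{source:9s} {surface_similarity(output, reference):.2f}")
```

Even this crude measure ranks the language pairs in the order the preliminary tests suggest: the German output scores a perfect match, while the Japanese “word salad” scores lowest.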
The best approach to MT is not to ban it, but to highlight its (potential) inaccuracies. This is the correct approach regardless of the motivation or level of the students
In my initial post, I argued that it would be difficult to ban MT altogether (although we could reduce the opportunity to use MT by eliminating coursework, for example). If we ban smart phones, on which students can covertly use MT, we completely discard the other more positive technological affordances they provide. Instead, I suggested that we could highlight its inaccuracies to more highly motivated students.
The reason why I restricted this approach to more “highly motivated” students is because they have a desire to improve their English accuracy and idiomaticity, whereas students with low motivation often simply want to meet the course requirements and receive a passing grade in the easiest possible way. Some unmotivated students see MT as a quick and easy way to produce the required written assignments by writing them entirely in L1 and letting MT do the rest.
If you allow or even endorse the use of MT, when it comes to grading submissions, what are you actually grading? When MT produces good results, the student may unjustly receive a good grade. When MT produces bad results, the teacher may waste their time giving corrections on English mistakes the student hasn’t even made! Although I’m sure the Google engineers would be grateful for the feedback.
Fully featured MT is not the same as pop-up translation
Some commenters highlighted the usefulness of websites such as Rikai.com, which provides automatic pop-up translations of words when a user hovers their mouse over them. There are many other tools offering similar functionality, including PopJisho, ReadLang, Rikai-chan for Firefox, Rikai-kun for Chrome, and my own Pop Translation tool. However, there is a substantive difference between these tools and fully featured MT such as Google Translate.
Pop-up translation tools provide definitions on a word-by-word basis, rather than attempting to translate whole sentences. Allowing students to use pop-up translation to read and understand a passage in English is different to allowing them to translate the whole passage into their L1, and perhaps not even read the English version. Pop-up translation cannot be used to unilaterally produce a complete English passage from the student’s L1, or produce an equivalent passage in the student’s L1 from English. When using pop-up translation to read an English passage, students still have to read the English passage to decipher its meaning. Pop-up translation simply provides a more convenient and powerful alternative to a traditional dictionary.
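The distinction can be made concrete with a toy sketch of the pop-up approach. The two-entry glossary below is a stand-in for the full dictionaries that tools like Rikai.com ship with; the point is that each word is glossed individually, while the sentence itself is never translated:

```python
# Toy two-entry glossary; real pop-up tools ship full dictionaries.
GLOSSARY = {
    "close": "親しい",
    "friends": "友達",
}

def annotate(passage):
    """Pair each word with a gloss where one exists. The passage itself
    is left intact: no sentence-level translation is attempted."""
    return [(token, GLOSSARY.get(token.strip(".,!?").lower()))
            for token in passage.split()]

for word, gloss in annotate("How many close friends do you have?"):
    print(word, "->", gloss or "(no gloss)")
```

Because the output preserves the original English word order and leaves most words unglossed, the student still has to read and parse the English passage — which is precisely what distinguishes these tools from fully featured MT.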
In the preliminary tests I conducted, MT performed much better when translating closely related languages, such as English and French, or English and Italian. It did much less well with English and Russian and English and Arabic. It did quite poorly for English and Japanese.
Fully featured MT, such as that provided by Google Translate, may not be helpful for language learning where students view the output as a replacement for their own work. In the case where a student writes an assignment in L1, pastes it into Google Translate, and submits the output without even reading it, it would be difficult to imagine that any language learning has taken place. The tool is not being used to assist learning, but rather to avoid learning.
Teachers who permit or endorse the use of MT for English written assignments run the risk of unfairly rewarding students where the MT produces good results, and wasting time giving feedback to students where MT produces bad results.
Finally, fully featured MT, such as Google Translate, must be distinguished from pop-up translation tools such as those provided by Rikai.com. Pop-up translation tools do not attempt to translate sentences or paragraphs, but merely provide a more powerful and convenient alternative to traditional dictionaries. It is hoped that they assist the learning of vocabulary in the sense that students will read the English passage, encounter a word or phrase they do not understand, see the pop-up translation, and apply the meaning to the English word in that particular context.
The problem of machine translation
As language teachers, it seems that every day we have to battle the pernicious force of machine translation (MT). In 1997, AltaVista launched Babelfish, one of the first web-based interfaces for MT. Twenty years later, it seems like every web portal, social network, and search engine offers some kind of automatic translation tool. Even LINE, the kawaii messaging service ubiquitous in Japan, offers an instant translation function, which behaves just like regular chat.
But despite its apparent popularity, and arguable usefulness as an assistive tool to human translators, MT is not a helpful technology for language teachers or learners. It is at best a nuisance, and at worst strongly detrimental to students’ second language acquisition.
The main problems with MT with regard to language pedagogy are that:
- It is inaccurate, especially for idiomatic expressions; and
- It negates students’ opportunities for language learning
The first of these problems can be easily observed when typing any reasonably idiomatic expression into Google Translate, perhaps the best free web-based MT available right now. Unfortunately, as we shall see, that’s not saying very much…
In this example we see the translator mess up the word order, and also render the verb “drink” as the noun “drink”. “I went to drink a beer with friends” is the more natural human-produced translation for this sentence.
In this example, again, the word order is completely jumbled, and the singular “best friend” doesn’t make sense when the question requires a plural response. Once again, the human generated translation is far superior: “How many close friends do you have?”.
I won’t labor the point here, but you can do your own experiments with any of the currently available MT tools, and you will inevitably come to the same conclusion: MT is still quite bad. Although it can usually convey the gist of the input sentence, it clearly lacks eloquence, idiomaticity and accuracy.
What to do about it
Having concluded that MT is not a good pedagogical tool, the question arises as to how we can eliminate its use both inside and outside the language classroom.
Ban smart phones in the classroom?
Within the classroom, you could prevent the influence of MT by banning smart phones entirely. But if you do this, you are indiscriminately blocking off more fruitful avenues to autonomous learning, along with many other positive affordances offered by mobile devices.
Automatic MT detection
Outside the classroom, your power over students is limited, especially over those more inclined to take the “easy” option of MT in the first place. In addition, although we may strongly suspect a student of using MT outside class, it is often difficult to prove. Although progress is being made in developing MT detection tools, it is still a nascent technology. Most of the solutions available at the moment require both the source text and the translation in order to attempt to detect MT.
Manual MT detection
It can be possible, however, to manually detect and prove machine translation if you have a working knowledge of your students’ L1.
In a recent low-level speaking class, I asked students to record and transcribe their answers to a 1-minute speaking task. One student’s answer seemed suspiciously like “translationese”. One sentence in particular stood out: “Mother of rice is very delicious”. I guessed that the student had tried to translate the Japanese sentence “お母さんのご飯はとても美味しい” which would be more naturally rendered as “My mother’s rice is very tasty” or more idiomatically as “My mother makes very good rice”.
After inputting my hypothesis into Google Translate, I was presented with the exact same broken English as the student had used in his report. He was well and truly “busted”!
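This “busting” procedure amounts to a string comparison: translate the hypothesized L1 source, then measure how closely the student’s submission matches the MT output. A minimal sketch using Python’s standard library — the MT output here is supplied by hand, since querying a real MT service is outside the scope of the sketch, and the 0.9 threshold is an illustrative guess, not a validated cut-off:

```python
from difflib import SequenceMatcher

def mt_similarity(submission, mt_output):
    """Character-level similarity between a student submission and the MT
    output of the hypothesized L1 source: 1.0 = verbatim match."""
    return SequenceMatcher(None, submission.lower(), mt_output.lower()).ratio()

# Output obtained by manually feeding お母さんのご飯はとても美味しい into an MT tool:
mt_output = "Mother of rice is very delicious"

for submission in ["Mother of rice is very delicious",
                   "My mother makes very good rice"]:
    score = mt_similarity(submission, mt_output)
    verdict = "likely MT" if score > 0.9 else "probably original"
    print(f"{score:.2f} {verdict}: {submission}")
```

A verbatim match like the one in this case is about as close to proof as manual detection gets; lower scores remain a judgment call for the teacher.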
Of course, detecting and subsequently proving the use of MT for a pile of 20 or 30 written reports is a huge waste of time. However, because the temptation to use MT is so high, especially for low-level, low-motivation students, simply instructing students not to use it can be ineffective.
The use of MT became so prevalent in one of my lower-level writing classes that I decided to eliminate coursework altogether, and administer every written assessment in exam conditions. This was the only way I found that I could guarantee that students were not using MT in their written assignments.
Highlight the inadequacy of MT
An alternative solution for more highly motivated classes (those that actually care about developing their English accuracy and idiomaticity) is to highlight how bad MT can be, and in the process hopefully dissuade them from using it altogether. One way to do this is to input some English phrases into an MT tool, and translate them into your students’ L1. Students will then understand in a more direct way how bad some of the translations can be.
One day, machine translation may be accurate enough to make language teachers redundant, along with translators, interpreters, subtitlers, and a host of other language-related professions. It may cause an industry shake-up as far-reaching as self-driving cars. But that day is unlikely to be any time in the near future, despite how far we’ve come in recent years. The current generation of MT tools often produce inaccurate and unidiomatic translations. MT is unhelpful for English language pedagogy, and steps should be taken to detect and prevent students’ use of MT.