Link rot, deprecation, squatters, and hackers: the trouble with books on CALL

Link rot can be a major issue when writing about web-based resources

I’ve just finished reading the excellent Language Learning with Technology by Graham Stanley. It’s packed full of useful ideas for how best to integrate technology into the ESL classroom, and I highly recommend it.

However, having eagerly loaded up almost all the links in the book, I encountered several problems, which are well-known to anyone who’s ever surfed the web. I should acknowledge here that these problems apply equally to all publications relating to web-based technology, including my own.

Link rot

The trouble with books is that their text is unapologetically static. Once a book is published, once the ink is dry on the dead cellulose wood fibers, it can’t be changed. At least, not until a new edition is released.

Conversely, the web is unpredictably dynamic. URLs which exist on Monday may completely disappear by Friday, or take us somewhere we never intended or expected to go. Link rot is defined by Wikipedia as:

the process by which hyperlinks on individual websites or the Internet in general point to web pages, servers or other resources that have become permanently unavailable

Link rot is a major issue when writing anything about web-based technology. This problem exists not just in relation to dead tree publications, but also with web-based ones, although the latter can be more easily updated.

Link rot is the main reason why it’s inadvisable, to say the very least, to publish anything that relies mainly on the availability and predictability of web resources. It’s one of the reasons why search engines like Google eventually won out against web directories like Yahoo! The web is in constant flux, and trying to write down, describe, or analyze any website, excluding perhaps the web’s most permanent destinations, is an exercise in futility.
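Authors (and readers) can at least catch rot early by periodically re-checking every published URL. Below is a minimal sketch of such a checker using only the Python standard library; the function names and the status-code buckets are my own illustrative choices, not part of any established tool:

```python
# Sketch: a minimal link-rot checker (illustrative helper names).
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_status(code):
    """Map an HTTP status code to a rough link-health label."""
    if 200 <= code < 300:
        return "ok"
    if 300 <= code < 400:
        return "redirected"   # the page has moved
    return "rotten"           # 4xx/5xx: likely link rot

def check_link(url, timeout=10):
    """Return a health label for a single URL (requires network access)."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-checker"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except HTTPError as e:
        return classify_status(e.code)
    except URLError:
        return "unreachable"  # DNS failure, refused connection, etc.
```

A HEAD request avoids downloading page bodies; note that `urlopen` follows redirects automatically, so a moved page will usually report the status of its final destination.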


Deprecation

No, I’m not talking about British people’s infuriating refusal to acknowledge compliments. I’m talking about what TechTarget defines as the following:

In IT, deprecation means that although something is available or allowed, it is not recommended or that, in the case where something must be used, to say it is deprecated means that its failings are recognized

Perhaps the most infamous example of a deprecated web-based technology in recent years has been Adobe Flash. Once the only way for webmasters to easily deploy games, videos, audio, and a whole host of other snazzy features on their sites, Flash is now regarded with disdain by surfers, web browsers, and tech giants alike.

Almost anyone who has ever used a smartphone or a tablet can tell you: Flash just doesn’t work on mobile. Unfortunately, the appeal of Flash-based apps hasn’t faded as quickly as Apple would have liked. This means that teachers who have recently furnished their classrooms with a set of iDevices have to be extra careful about which web-based resources they prescribe or recommend, and must be prepared for disappointment when they see the old familiar message: This page requires Macromedia Flash Player to run correctly.

Squatters and Hackers

If squatters are the opportunistic freeloaders who jump into your house and claim it for themselves the minute you vacate it, hackers are the guys who sneak in through the hidden back entrance, change the locks, and stick their name on your front door just to prove a point.

Both hackers and squatters are a major issue in relation to web-based resources. The domain bearing my own name is a good example of domain squatting. Back in 2006, I owned it, but later let the registration slip. Now it is “parked”, and if I ever want to use it again, I will probably have to pay the current owner an exorbitant sum to do so. This happens a lot with domains that have at one point been registered: if the owner fails to renew, they get scooped up by internet squatters and used to advertise tenuously related sites and services. This can happen at any time, and we must be careful that sites we recommend to colleagues or students haven’t been surreptitiously converted into money-making portals.

If your domain hasn’t been taken over by squatters, it may still have been invaded by hackers, which seems to be the unfortunate case with the official companion site to Language Learning with Technology, which, as of November 2016, seems to have been “pwned” by a certain “gunz_berry”:


Bear in mind this is a book that was published only three years ago, in 2013. Who knows how long the companion site has been displaying the “Hacked by gunz_berry” message. Unfortunately, many of the ideas and suggestions in the book refer to the companion site, so it’s a real shame that it’s been compromised. I hope that Cambridge University Press can get it back up and running again soon (I’ve already tweeted the author to let him know the site is down).


Ultimately, there’s not a lot ed-tech authors can do about many of these problems. To avoid link rot, it’s best to go with sites that have been around for at least a couple of years, but even then, they can disappear suddenly and without warning.

We should also be careful not to endorse deprecated technologies, but the pace of technological progress is so fast that even newer innovations are becoming deprecated very quickly.

As for squatters and hackers, we can only try to ensure that our security arrangements are up-to-date, and that we remember to pay our domain renewal fees.


Perhaps a non-technical solution is the best answer to these technical challenges, and Graham Stanley manages it quite well: focus on types of technology rather than specific instances; focus on ESL activities rather than ESL sites; and give alternatives and variations for every suggestion, to ensure that ideas can still be applied even if the technology itself fails in certain instances.

The Rise of Machine Translation

I was somewhat surprised by several comments on social media in response to my last post, The War Against Machine Translation. Many of the comments spoke out in defense of machine translation (MT). In retrospect, some of the claims I made in my first post were a little far-reaching. I’d like to address some of the points made in response to that post, and also clarify and moderate some of the initial claims I made.

I also want to preface this follow-up by stating that I am an avid proponent of Computer Assisted Language Learning (CALL). I have spent the last few years developing a website full of activities and tools for teachers and learners of English as a Foreign Language. However, I believe that anything that can be accurately classed as CALL must by definition assist the learning of a language.

Some students attempt to pass off the results of MT as their own work, which can cause issues for the teacher trying to fairly grade written English assignments

Learning technology should never completely replace the learner. Unfortunately, many students view and use the output of MT as a complete replacement for their own work. In some cases, entire reports are written in L1, pasted into an MT tool, and then the output is submitted as the student’s “own work”. It would be very difficult to say that the students in these cases have learned anything about English. In many cases students fail even to read the result of MT before submitting it, despite the many basic grammatical errors it contains (especially with Japanese-to-English translation).

Having hopefully clarified my position somewhat, I’ll move on to respond to some of the comments made in relation to my initial post.

Machine Translation is more accurate for language pairs other than English/Japanese

One of my main arguments against the use of MT is that it is simply inaccurate, and is more likely to produce word salad than grammatically correct sentences. Some commenters pointed out that Google Translate, and other MT tools, do much better for other language pairs, particularly the more syntactically and lexically related European languages.

One of the sentences I used in my initial post was “How many close friends do you have?”. After feeding the natural Japanese translation for this sentence (親友は何人いる?) into Google Translate, it output “Best friend How many people”, which is a somewhat unsatisfactory result. However, repeating the same experiment with German (Wie viele enge Freunde haben Sie?) and Italian (Quante amiche intime hai?), Google managed to get the translation bang on, both times outputting “How many close friends do you have?”. However, it failed again for Arabic (How you have close friends?) and Russian (How do you close friends?).

“How many close friends do you have?” is a relatively short and simple sentence. How does MT perform with longer, more complex sentences or paragraphs?

I fed natural Japanese, French, and Italian translations of the following English paragraph into Google Translate:

One of the reasons Twitter is popular in Japan is a characteristic of Japanese itself: Japanese uses ideograms which enable it to convey more information in just 140 characters than other languages, not counting Chinese. Incidentally, the Japanese version of this sentence is written with exactly 140 characters. How many characters does it take in other languages?

For the Japanese translation, the tool output:

One of Twitter popularity of reason in Japan is on the characteristics of the language of Japanese. Japanese use ideographic characters, although not I the enemy is in Chinese, it is possible to incorporate a lot of content to 140 characters compared to many other languages. By the way, I wonder, but he has been written in Japanese version is just 140 characters of this sentence, become what characters in other languages?

For French:

One of the reasons that make the success of Twitter in Japan due to a peculiarity of the Japanese language. Compared to many other languages, Japanese, with its ideograms, can convey a lot of information in 140 characters, without competing with the Chinese. Incidentally, the Japanese version of the sentence is written in exactly 140 characters. But how many characters does it do so in other languages?

For Italian:

One of the reasons why Twitter is so popular in Japan is a very characteristic of the Japanese: Japanese uses ideograms that allow you to convey more information in 140 characters than other languages, without considering the Chinese. Inter-alia, the Japanese version of this sentence is written in exactly 140 characters. How many characters it wants in other languages?

Further research would be required to determine exactly how accurate MT is for any given language pair, but from these preliminary tests, it would seem that the less related the languages, the less accurate the translations. MT seems to do much better with more closely related language pairs, regardless of length or syntactical complexity.

The best approach to MT is not to ban it, but to highlight its (potential) inaccuracies. This is the correct approach regardless of the motivation or level of the students

The only people who benefit from corrective feedback on MT generated English are Google Engineers

In my initial post, I argued that it would be difficult to ban MT altogether (although we could reduce the opportunity to use MT by eliminating coursework, for example). If we ban smartphones, on which students can covertly use MT, we completely discard the other more positive technological affordances they provide. Instead, I suggested that we could highlight its inaccuracies to more highly motivated students.

The reason why I restricted this approach to more “highly motivated” students is because they have a desire to improve their English accuracy and idiomaticity, whereas students with low motivation often simply want to meet the course requirements and receive a passing grade in the easiest possible way. Some unmotivated students see MT as a quick and easy way to produce the required written assignments by writing them entirely in L1 and letting MT do the rest.

If you allow or even endorse the use of MT, when it comes to grading submissions, what are you actually grading? When MT produces good results, the student may unjustly receive a good grade. When MT produces bad results, the teacher may waste their time giving corrections on English mistakes the student hasn’t even made! Although I’m sure the Google engineers would be grateful for the feedback.

Pop-up translation, such as that provided by, is substantively different to MT provided by the likes of Google Translate

Fully featured MT is not the same as pop-up translation

Some commenters highlighted the usefulness of websites such as, which provides automatic pop-up translations of words when a user hovers their mouse over them. There are many other tools offering similar functionality, including PopJisho, ReadLang, Rikai-chan for Firefox, Rikai-kun for Chrome, and my own Pop Translation tool. However, there is a substantive difference between these tools and fully featured MT such as Google Translate.

Pop-up translation tools provide definitions on a word-by-word basis, rather than attempting to translate whole sentences. Allowing students to use pop-up translation to read and understand a passage in English is different to allowing them to translate the whole passage into their L1, and perhaps not even read the English version. Pop-up translation cannot be used to unilaterally produce a complete English passage from the student’s L1, or produce an equivalent passage in the student’s L1 from English. When using pop-up translation to read an English passage, students still have to read the English passage to decipher its meaning. Pop-up translation simply provides a more convenient and powerful alternative to a traditional dictionary.
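The word-by-word behavior described above can be sketched in a few lines. The glossary and function names here are purely illustrative (a toy English-to-Japanese lookup), not how any of the named tools are actually implemented:

```python
# Sketch of word-by-word pop-up lookup. The glossary entries are a toy
# illustration, not data from any real tool.
GLOSSARY = {
    "friend": "友達",
    "close": "親しい",
    "have": "持つ",
}

def popup_gloss(word):
    """Return a gloss for a single hovered word, or None if unknown."""
    return GLOSSARY.get(word.lower().strip(".,!?"))

def gloss_passage(text):
    """Gloss each token independently -- no sentence-level translation."""
    return [(tok, popup_gloss(tok)) for tok in text.split()]
```

The key point is visible in `gloss_passage`: each token is looked up in isolation, so the student still has to parse the English sentence themselves; no whole-sentence translation ever exists to copy.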

Concluding remarks

In the preliminary tests I conducted, MT performed much better when translating closely related languages, such as English and French, or English and Italian. It did much less well with English and Russian and English and Arabic. It did quite poorly for English and Japanese.

Fully featured MT, such as that provided by Google Translate, may not be helpful for language learning where students view the output as a replacement for their own work. In the case where a student writes an assignment in L1, pastes it into Google Translate, and submits the output without even reading it, it would be difficult to imagine that any language learning has taken place. The tool is not being used to assist learning, but rather to avoid learning.

Teachers who permit or endorse the use of MT for English written assignments run the risk of unfairly rewarding students where the MT produces good results, and wasting time giving feedback to students where MT produces bad results.

Finally, fully featured MT, such as Google Translate, must be distinguished from pop-up translation tools. Pop-up translation tools do not attempt to translate sentences or paragraphs, but merely provide a more powerful and convenient alternative to traditional dictionaries. It is hoped that they assist the learning of vocabulary in the sense that students will read the English passage, encounter a word or phrase they do not understand, see the pop-up translation, and apply the meaning to the English word in that particular context.

The War Against Machine Translation

LINE’s machine translation function can easily be confused for idle chat, but in fact it is potentially much more harmful

The problem of machine translation

As language teachers, it seems that every day we have to battle the pernicious force of machine translation (MT). In 1997, AltaVista launched Babel Fish, one of the first web-based interfaces for MT. Almost twenty years later, it seems like every web portal, social network, and search engine offers some kind of automatic translation tool. Even LINE, the kawaii messaging service ubiquitous in Japan, offers an instant translation function, which behaves just like regular chat.

But despite its apparent popularity, and arguable usefulness as an assistive tool to human translators, MT is not a helpful technology for language teachers or learners. It is at best a nuisance, and at worst strongly detrimental to students’ second language acquisition.

The main problems with MT with regard to language pedagogy are that:

  1. It is inaccurate, especially for idiomatic expressions; and
  2. It negates students’ opportunities for language learning

The first of these problems can be easily observed when typing any reasonably idiomatic expression into Google Translate, perhaps the best free web-based MT available right now. Unfortunately, as we shall see, that’s not saying very much.

Exhibit 1


In this example we see the translator mess up the word order, and also render the verb “drink” as the noun “drink”. “I went to drink a beer with friends” is the more natural human-produced translation for this sentence.

Exhibit 2


In this example, again, the word order is completely jumbled, and the singular “best friend” doesn’t make sense when the question requires a plural response. Once again, the human generated translation is far superior: “How many close friends do you have?”.

I won’t labor the point here, but you can do your own experiments with any of the currently available MT tools, and you will inevitably come to the same conclusion: MT is still quite bad. Although it can usually convey the gist of the input sentence, it clearly lacks eloquence, idiomaticity, and accuracy.

What to do about it

Having concluded that MT is not a good pedagogical tool, the question arises as to how we can eliminate its use both inside and outside the language classroom.

Banning smartphones/laptops seems like overkill, especially considering the more positive technological affordances they offer

Ban smartphones in the classroom?

Within the classroom, you could prevent the influence of MT by banning smartphones entirely. But if you do this, you are indiscriminately blocking off more fruitful avenues to autonomous learning, along with many other positive affordances offered by mobile devices.

Automatic MT detection

Outside the classroom, your power over students is limited, especially over those more inclined to take the “easy” option of MT in the first place. In addition, although we may strongly suspect a student of using MT outside class, it is often difficult to prove. Although progress is being made in developing MT detection tools, it is still a nascent technology. Most of the solutions available at the moment require both the source and translation text in order to attempt to detect MT.
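Given both texts, even a crude similarity measure goes a long way: if a student’s submission is a near-verbatim match for what an MT tool produces from a suspected L1 source, that is strong evidence. A minimal sketch using the standard library (the function names and the 0.95 threshold are my own illustrative assumptions, not any established detector):

```python
# Sketch: compare a student's submission against the output an MT tool
# produces for a suspected L1 source. A near-perfect match is suspicious.
from difflib import SequenceMatcher

def mt_similarity(submission, mt_output):
    """Similarity ratio in [0, 1]; 1.0 means the texts are identical."""
    return SequenceMatcher(None, submission.lower(), mt_output.lower()).ratio()

def looks_machine_translated(submission, mt_output, threshold=0.95):
    """Flag submissions that are near-verbatim copies of the MT output."""
    return mt_similarity(submission, mt_output) >= threshold
```

Obtaining `mt_output` is the manual step: you still have to guess the L1 source and run it through the same tool the student likely used, as described in the next section.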

Manual MT detection

It can be possible, however, to manually detect and prove machine translation if you have a working knowledge of your students’ L1.

In a recent low-level speaking class, I asked students to record and transcribe their answers to a 1-minute speaking task. One student’s answer seemed suspiciously like “translationese”. One sentence in particular stood out: “Mother of rice is very delicious”. I guessed that the student had tried to translate the Japanese sentence “お母さんのご飯はとても美味しい” which would be more naturally rendered as “My mother’s rice is very tasty” or more idiomatically as “My mother makes very good rice”.

After inputting my hypothesis into Google Translate, I was presented with the exact same broken English as the student had used in his report. He was well and truly “busted”!

Sometimes it is possible to recreate the exact same bad machine translation through guesswork and a knowledge of your students’ L1

Eliminate coursework

Of course, detecting and subsequently proving the use of MT for a pile of 20 or 30 written reports is a huge waste of time. However, because the temptation to use MT is so high, especially for low-level, low-motivation students, simply instructing students not to do so can be ineffective.

The use of MT became so prevalent with one of my lower level writing classes, that I decided to eliminate coursework altogether, and administer every written assessment in exam conditions. This was the only way I found that I could guarantee that students were not using MT in their written assignments.

Highlight the inadequacy of MT

An alternative solution for more highly motivated classes (those that actually care about developing their English accuracy and idiomaticity) is to highlight how bad MT can be, and in the process hopefully dissuade them from using it altogether. One way to do this is to input some English phrases into an MT tool, and translate them into your students’ L1. Students will then understand in a more direct way how bad some of the translations can be.

Translating from English to your students’ L1 with MT can be a useful consciousness raising activity. The Japanese translation on the right is very unnatural.


One day, machine translation may be accurate enough to make language teachers redundant, along with translators, interpreters, subtitlers, and a host of other language-related professions. It may cause an industry shake-up as far-reaching as self-driving cars. But that day is unlikely to be any time in the near future, despite how far we’ve come in recent years. The current generation of MT tools often produce inaccurate and unidiomatic translations. MT is unhelpful for English language pedagogy, and steps should be taken to detect and prevent students’ use of MT.

30 Links for English Language Data Geeks

A typical corpus linguist. Although I personally prefer blue braces.
  1. The Moby Lexicon Project
  2. BNC Baby
  3. Full BNC
  4. Project Gutenberg (Download full database)
  5. CMU Pronouncing Dictionary
  6. GNU Collaborative International Dictionary of English
  7. The Internet Dictionary Project
  8. English Wiktionary Dump
  9. Simple English Wiktionary Dump
  10. JACET 8000
  11. Minimal pairs in English RP
  12. List of homographs
  13. Homophones in English RP
  14. Google’s Official List of Bad Words
  15. Yasumasa Someya’s Lemmas List
  16. MRC Psycholinguistic Database
  17. Million Song Dataset
  18. Penn Treebank P.O.S. Tags
  19. Princeton University’s WordNet
  20. The Sentence Corpus of Remedial English
  21. Summer Institute of Linguistics (SIL) Word List
  22. The Tanaka Corpus
  23. The General Service List
  24. The New General Service List
  25. The Academic Word List
  26. The New Academic Word List
  27. The TOEIC Word List
  28. The Business Service List
  29. Apache Open Office MyThes
  30. Global WordNet

10 years in Japan

Today I mark 10 years living and working in Japan. To commemorate the occasion, here is one of my first blog posts from October 2006:

Some things about Japan that I’ve noticed:

  • The plugs don’t have switches, so if you want to turn something off, you have to physically unplug it
  • Semi-automatic doors: they lack motion sensors and only open when you press the button
  • Pelican crossings have no buttons to press
  • When it rains, everyone uses an umbrella
  • There are little racks in which to put your wet umbrella when entering shops
  • The Japanese are incredibly polite: one night some of us got lost, and when we asked for directions, we were escorted by a stranger for a good half-mile to the train station, which was the opposite direction to which he had been walking
  • The local gaijin pub, Mattari, serves fish and chips
  • The Japanese like queuing even more than the British. You might even expect to find them queuing on the platform for trains
  • There are lots of bikes
  • Pachinko parlors: buy yourself a tub full of ball bearings and pour them into an inverted pinball machine. Adopt an expression of post-lobotomy desolation. These places are completely insane.

For a more comprehensive run down of the past decade, check out my post on TEFL Journey.

20 Tech Tips from Vocab@Tokyo 2016

  1. Tom Cobb’s venerable Lex Tutor now has a mobile interface
  2. Collins and Merriam-Webster both provide free online dictionaries
  3. The University of Texas at Austin provides a wide selection of free handouts (PDF) for teachers of English language writing
  4. Calibre is a comprehensive e-book manager and converter
  5. OmniPage and ABBYY FineReader are powerful OCR (Optical Character Recognition) applications
  6. The Lexical Research Foundation is “a not-for-profit organisation to promote excellence in lexical and vocabulary acquisition, description and pedagogy.”
  7. AntWordProfiler, Web VocabProfile, Range, and P_Lex (PDF) are tools for profiling lexical sophistication of a text, i.e. the proportion of advanced (rare) vocabulary…
  8. …while TextInspector can be used to measure lexical variation, i.e. the proportion of word types to tokens
  9. Michael Covington has developed a number of algorithms and tools for analyzing texts, including Moving Average Type-Token Ratio (MATTR)
  10. Paul Nation’s book, What You Need to Know to Learn a Foreign Language, is available as a free PDF download…
  11. …as are all his Vocabulary Size Tests (VST)…
  12. …which can also be taken online via Tom Cobb’s site
  13. Laurence Anthony’s WebSCoRE is “a free, parallel concordancer with a specially developed bilingual pedagogical corpus”
  14. Paul Meara’s Lognostics website “is designed to provide access to up to date research tools for people working in the field of Second Language Vocabulary Acquisition”
  15. Vocabulary Learning and Instruction (VLI) is an open access international journal for research relating to vocabulary acquisition, instruction, and assessment.
  16. Showbie is a great tool for keeping digital portfolios of students’ work
  17. Coh-Metrix is a system for computing cohesion and coherence metrics for written and spoken texts
  18. Lexile Analyzer can be used to compute the complexity of a text, including sentence length and word frequency
  19. Cambridge University Press’s English Vocabulary Profile (EVP) “offers reliable information about which words and phrases are known and used by learners at each level of the Common European Framework (CEF)”
  20. The CEFR-J website provides a series of “can-do” descriptors specifically for English language teaching contexts in Japan.
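As an aside, the lexical variation measures behind items 8 and 9 above are easy to sketch. Moving Average Type-Token Ratio (MATTR) smooths the plain type-token ratio by averaging it over fixed-length sliding windows, which reduces its well-known sensitivity to text length. A minimal sketch (the window size and the short-text fallback are my own simplifying choices):

```python
# Sketch of Moving Average Type-Token Ratio (MATTR): average the
# type-token ratio over every fixed-length window of the token stream.
def mattr(tokens, window=50):
    """Return MATTR for a list of tokens; plain TTR for short texts."""
    if len(tokens) < window:
        # Fall back to ordinary TTR when the text is shorter than one window.
        return len(set(tokens)) / len(tokens)
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ratios) / len(ratios)
```

Tools like TextInspector report related type/token measures on real texts; this sketch just shows the core idea in a form you can experiment with.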

30 Tech Tips from JALT CALL 2016


  1. James Rogers gives pronunciation advice for Japanese learners of English
  2. Linode is a powerful and good value web host
  3. The Multiplayer Classroom (Lee Sheldon) was one of the first publications arguing for gamification of education
  4. Class Craft helps you to make learning an adventure
  5. Socrative allows you to administer assessments and surveys via mobile phones
  6. Kahoot provides gamified classroom activities
  7. QuizUp offers a competitive multi-player gaming experience
  8. Sendtodropbox is a great way of getting files from your students into your Dropbox account…
  9. …while QuickVoice (iOS) allows you to record and send audio files as email attachments up to 5MB in size…
  10. …and MailVU specializes in sending video via email
  11. Moxtra is a mobile-first embeddable collaboration platform…
  12. …and VoiceThread allows students to submit audio as attachments to images
  13. Schoology is a modern Learner Management System
  14. Ginger offers a variety of apps for online translation and grammar checking…
  15. …while Grammarly claims to make you a better writer by finding and correcting 10 times more mistakes than your word processor
  16. WikiTude is the world’s leading augmented reality SDK
  17. Diigo allows you to annotate and save web pages as you browse them
  18. Tiki Toki is web based software for creating beautiful timelines
  19. iBuildApp allows you to easily make apps for iOS or Android
  20. Mobyx (iOS) provides high quality VOIP (Voice over IP) services
  21. KanjiTomo is a comprehensive OCR (Optical Character Recognition) application for Japanese characters…
  22. …while Yomiwa (iOS) provides a real-time offline camera translator for Japanese…
  23. …and Perfect Master Kanji (iOS) is a fully fledged kanji practice app for people learning Japanese as a foreign language…
  24. …and Nihongo Shark provides free daily lessons for learners of Japanese
  25. Discord provides all-in-one text and voice chat for gamers
  26. Continuous Partial Attention (Linda Stone) is “motivated by a desire to be a LIVE node on the network”
  27. Wiggle allows you to easily import sporting goods and accessories (a weird one, but a good one for those of us who struggle to find bicycles big enough in Japan!)
  28. Phonologics offers automated pronunciation testing
  29. Words and Monsters taps into the addictive game play of apps like Puzzle and Dragons and Candy Crush by offering uncertain and unexpected rewards
  30. Paul Howard Jones is the preeminent expert on the effect of games on the brain