Re-evaluating assumptions about CALL and TELL in the era of AI

It’s been a while since I put some thoughts down about the current state of CALL and TELL in the era of AI, so here are a few of them, by no means exhaustive. As ever, I’m interested to hear others’ opinions on these considerations.


What is the role of CALL (computer-assisted language learning) or TELL (technology-enhanced language learning)?

It’s to “assist” or “enhance” the learning of languages.

This has normally involved judiciously providing students with, or depriving them of, technological affordances so that they are better able to learn the target language.

My working definition of “learn a language” has been “to use and understand the language in a variety of situations without assistance”.

So technological assistance is a kind of scaffolding that will eventually be removed so that the learner no longer requires it and can use and understand the language without it.

When assessing whether a student has indeed “learned” the language, we often use technology while tightly controlling students’ access to it (consider standard assessments such as the TOEFL or IELTS).

The Internet, smartphones, and GenAI have challenged the assumption that the technological scaffolding will ultimately be removed, because most students now carry a device everywhere that can understand, translate, and generate almost any language. In addition, Big Tech companies such as Google and Microsoft are baking AI into most of their existing communication services, both synchronous and asynchronous. There is a plethora of browser plugins to translate, transcribe, or correct written and spoken webpage content. AI affordances affect not only “computer as a tutor” models of CALL/TELL but also “computer as a tool” models (e.g. automatic captioning and translation in Zoom meetings).

While we have traditionally separated the four skills of language learning into reading, speaking, listening, and writing, AI is now collapsing those distinctions, since any written text can instantly become a spoken one, and any spoken one a written one. Controlling students’ access to these technologies is difficult, if not futile. Accessibility presents another curveball. You can’t “judiciously deprive” a hearing-impaired student of captions on a video, for example, even if that might be a legitimate exercise for non-hearing-impaired students.
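
To appreciate just how low the barrier to this modality-switching has become, consider the minimal sketch below. It is purely illustrative, assuming two freely available Python libraries (pyttsx3 for offline text-to-speech and SpeechRecognition as a front end to common speech-to-text services); the sample sentence and file name are my own placeholders, not tools any particular student is assumed to use.

```python
# Illustrative sketch: converting between written and spoken language.
# Assumes: pip install pyttsx3 SpeechRecognition
import pyttsx3                    # offline text-to-speech
import speech_recognition as sr   # wrapper around common speech-to-text engines

# Written text -> spoken audio: any reading text becomes a listening text.
engine = pyttsx3.init()
engine.save_to_file("Any written text can instantly become a spoken one.", "spoken.wav")
engine.runAndWait()

# Spoken audio -> written text: any listening text becomes a reading text.
recognizer = sr.Recognizer()
with sr.AudioFile("spoken.wav") as source:
    audio = recognizer.record(source)      # capture the whole recording
print(recognizer.recognize_google(audio))  # transcribe via Google's free web API
```

A dozen or so lines of widely available code turn a reading exercise into a listening one and back again, and any student with a browser plugin can do the same without writing any code at all.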

It’s not hard to imagine a future where “smart glasses” become as ubiquitous as smartphones, heralding a new era of augmented reality, where information we cannot see or control is displayed before students’ eyes. Again, Big Tech companies like Meta and Google are already working on and pushing these products.

The struggle to understand the implications of all of the above for language teaching and learning approaches, methodologies, practices, and policies is a daily one, made harder by the fact that the ground seems to be shifting under our feet. Updating materials, syllabuses, and curriculums takes time, and technology moves much faster than educational bureaucracy.

Contending with criticisms of AI as a conscientious educator

Staking out a clear personal position on AI these days is a risky but necessary undertaking. AI has become one of the most controversial topics of our time, and it will continue to be divisive as its impact on the economy, the environment, and the law becomes apparent. The controversy surrounding AI only intensifies when we consider its impact on education, which was already a very polarising topic.

While it would be nice to “wait and see” what happens before making clear our individual stances on AI, that’s not really possible since the technology is already out there. Pandora’s box is open. The horse has bolted. The cat is out of the bag. Pick your metaphor. Even if we educators would rather ignore it, we can’t, because our students are already using it. 

It’s impossible to cover all the common criticisms of AI comprehensively in one sitting, but a few arguments come up again and again that tend to make those of us who use AI in a moderate, judicious, and conscientious way feel unnecessarily guilty, and it’s worth taking the time to contend with them.

The “soulless” nature of AI texts

One oft-repeated criticism of AI is that the texts it generates are “soulless”. If we exclude religious understandings of “soul” and assume this remark relates to the fact that AI is not conscious or sentient, then yes, it is clearly true. AI is a “calculator for words”, and calculators do not have souls. However, not all forms of writing need to have “soul”. I would argue that purely factual, informative, or instructional genres constitute a niche where AI can excel without a soul. I am actually more concerned with attempts to make AI seem to have soul when it does not, which can be deceitful and disturbing. If we are using AI-generated texts in our classes, we should be open and honest about that, but there is no reason why they cannot be a useful supplement to other, more “soulful”, human-authored materials.

Lack of respect for authors’ legal rights

Another argument concerns the idea that the legal rights of authors around the world have been ignored during the training of some Large Language Models. LLMs like those behind ChatGPT were trained on vast swathes of the publicly accessible Internet. These data sources were already readily available to any organisation with enough compute to make sense of them. Big Tech has always sought forgiveness rather than permission in its attempts to make Big Data more useful. It happened when Google scanned millions of books from library collections (authors sued Google; Google won), and it’s happening again in the wake of OpenAI’s decision to train its models on much of the open Internet.

While it would have been unworkable to consult every blog author and forum poster to assemble a whitelist of only those who consented to having their writing included in the model, OpenAI should have proceeded with more caution and public consultation. They are, after all, being sued by numerous authors’ organisations. But this is a matter beyond the influence of the average educator, and it will be hashed out in America’s courts between OpenAI’s attorneys and the attorneys bringing the class-action lawsuits against them. If OpenAI are found to have breached copyright in the way they trained their models, then they will be dealt with in accordance with the relevant laws. But when it comes to texts generated by their models, I don’t see how it will be technically possible to show that any specific text violates any specific individual author’s copyright (beyond instructing a model to generate a text in a well-known author’s voice or style, and even this could be protected by parody or fair-use exemptions).

The environmental impact of AI data centres

Another argument that rightly causes much concern is the impact of AI on the environment. AI is undoubtedly a power-hungry technology, and if that power comes from non-renewable sources, it will have a negative effect on the environment. Cooling the infrastructure required to run AI inference at scale also requires a lot of water, but many server farms are able to recycle the water they use. Even in cases where water is not recycled, it returns to the atmospheric water cycle and isn’t lost forever. In any event, environmental concerns are rightly high on the list of our reservations about AI. But there are plenty of other environmentally polluting industries that deserve just as much scrutiny, and none of them, other than AI, has the potential to come up with ways to reduce its own environmental impact.

The threat to our livelihoods

Finally, there are concerns about the impact of AI on our jobs and livelihoods. It’s natural to be worried about such things, and to be sceptical of claims by AI CEOs that AI is only good at tasks, not jobs, or that AI will create more jobs than it takes away. Even if that’s true, it won’t be easy to retrain for these new jobs, which are likely to require a very high degree of education and expertise.

But education is and will remain a human-centric social process. When we try to come up with a working definition of what it means to “learn”, it invariably involves being able to use, understand, and apply knowledge we have gained in an unassisted way. As educators, we need to utilise AI in a way that enhances the pedagogical process by supplementing human-centric teaching and learning, ensuring it does not negate or replace this process, or leave students in a position where they are totally reliant on technology and unable to write, speak, or think coherently without it.

In conclusion

I’ll end this post by reminding readers that I am not making light of AI’s potential negative impacts on the environment, economy, or authors’ legal rights. These are all very serious issues that need to be resolved by experts in their respective fields. Additionally, I am not an AI ideologue or fanatic by any means. I am open to changing my position on the issues I have highlighted above in accordance with emerging evidence. It is paramount to stay abreast of not only the technical but also the environmental, ethical, and legal implications of AI.

For now at least, in my career as an English language educator, I will continue to use AI to supplement and augment my human-centric lessons by having it do pedagogically beneficial things that I couldn’t hope to do by myself, with a view to making my teaching more engaging, effective, and efficient.

The image accompanying this article was generated by AI. The text was entirely composed by the (human) author.