Contending with criticisms of AI as a conscientious educator

Staking out a clear personal position on AI these days is a risky but necessary undertaking. AI has become one of the most controversial topics of our time, and it will continue to be divisive as its impact on the economy, the environment, and the law becomes apparent. The controversy only intensifies when we consider AI’s impact on education, which was already a deeply polarising topic.

While it would be nice to “wait and see” what happens before making clear our individual stances on AI, that’s not really possible since the technology is already out there. Pandora’s box is open. The horse has bolted. The cat is out of the bag. Pick your metaphor. Even if we educators would rather ignore it, we can’t, because our students are already using it. 

It’s impossible to cover every common criticism of AI in one sitting, but a few arguments come up again and again that tend to make those of us who use AI in a moderate, judicious, and conscientious way feel unnecessarily guilty. They are worth taking the time to contend with.

The “soulless” nature of AI texts

One oft-repeated criticism of AI is that the texts it generates are “soulless”. If we exclude religious understandings of “soul” and assume this remark relates to the fact that AI is not conscious or sentient, then yes, it is clearly true. AI is a “calculator for words”, and calculators do not have souls. However, not all forms of writing need to have “soul”. I would argue that genres such as purely factual, informative, or instructional writing constitute a niche where AI can excel without one. I am actually more concerned with attempts to make AI seem to have soul when it does not, which can be deceitful and disturbing. If we use AI-generated texts in our classes, we should be open and honest about that, but there is no reason why they cannot be a useful supplement to other, more “soulful” human-authored materials.

Lack of respect for authors’ legal rights

Another argument is that the legal rights of authors around the world were ignored during the training of some Large Language Models. LLMs like ChatGPT were trained on vast swathes of the publicly accessible Internet. These data sources were already readily available to any organisation with enough compute to make sense of them, and Big Tech has always sought forgiveness rather than permission in its attempts to make Big Data more useful. It happened when Google scanned millions of books from library collections for Google Books (authors sued; Google ultimately won on fair use grounds), and it is happening again in the wake of OpenAI’s decision to train its models on the open Internet.

While it would have been unworkable to consult every blog author and forum poster to assemble a whitelist of only those who consented to having their writing included in the model, OpenAI should have proceeded with more caution and public consultation. They are, after all, being sued by numerous authors’ organisations. But this is a matter beyond the influence of the average educator, and it will be hashed out in America’s courts between OpenAI’s attorneys and those bringing the class action lawsuits against them. If OpenAI are found to have breached copyright in the way they trained their models, they will be dealt with in accordance with the relevant laws. When it comes to texts generated by their models, however, I don’t see how it will be technically possible to show that any specific output violates any specific author’s copyright, short of instructing the model to write in a well-known author’s voice or style, and even that might be protected by parody or fair use exemptions.

The environmental impact of AI data centers

Another argument that rightly causes much concern is the impact of AI on the environment. AI is certainly a power-hungry technology, and if that power comes from non-renewable sources, it will have a negative effect on the environment. Cooling the infrastructure required to run AI inference at scale also requires a lot of water, though many data centers are able to recycle the water they use. Even where water is not recycled, it returns to the atmospheric water cycle and is not lost forever. In any event, environmental concerns are rightly high on the list of our reservations about AI. But there are plenty of other polluting industries that deserve just as much scrutiny, and none of them, other than AI, has the potential to come up with ways to reduce its own environmental impact.

The threat against our livelihoods

Finally, there are concerns about the impact of AI on our jobs and livelihoods. It’s natural to be worried about such things, and sceptical of claims by AI CEOs that AI is only good at tasks, not jobs, or that AI will create more jobs than it takes away. Even if that’s true, it won’t be easy to retrain for these new jobs, which are likely to require a very high degree of education and expertise.

But education is and will remain a human-centric social process. When we try to come up with a working definition of what it means to “learn”, it invariably involves being able to use, understand, and apply the knowledge we have gained without assistance. As educators, we need to utilise AI in ways that enhance the pedagogical process by supplementing human-centric teaching and learning, ensuring it does not negate or replace that process, or leave students totally reliant on technology, unable to write, speak, or think coherently without it.

In conclusion

I’ll end this post by reminding readers that I am not making light of AI’s potential negative impacts on the environment, economy, or authors’ legal rights. These are all very serious issues that need to be resolved by experts in their respective fields. Additionally, I am not an AI ideologue or fanatic by any means. I am open to changing my position on the issues I have highlighted above in accordance with emerging evidence. It is paramount to stay abreast of not only the technical but also the environmental, ethical, and legal implications of AI.

For now at least, in my career as an English language educator, I will continue to use AI to supplement and augment my human-centric lessons by having it do pedagogically beneficial things that I couldn’t hope to do by myself, with a view to making my teaching more engaging, effective, and efficient.

The image accompanying this article was generated by AI. The text was entirely composed by the (human) author.
