Many people are embracing GenAI. Andy Balaam encourages us to pause and think about why this might not be a good idea.
I choose to avoid using ‘AI’ (by which I mean Large Language Models [Wikipedia]).
Here’s why:
- they have a devastating environmental impact,
- they are trained by exploiting and traumatising millions of low-paid workers,
- they produce biased and dangerously incorrect results, and
- they unfairly use people’s creative work.
Environmental impact
Between 2010 and 2020, the energy used by data centres around the world rose only slightly [Knight20], but since then it has risen sharply [Kearney24], driven by the expansion of AI [O’Donnell25]. Compounding the problem, because these data centres use more energy than was predicted or provided for by existing generation, the carbon intensity of the electricity they consume is much higher than average (48% higher in the US, according to one study [O’Donnell25]).
Driven by AI, data centres are predicted to double their energy use by 2030 [Chen25]. In Ireland, data centres already use almost a fifth of the electricity supply [Campbell23], even as electric vehicles have added to demand over recent years.
Unless we change course, this is not going to slow down or become sustainable. Quoting Sam Altman [Roytburg25]:
You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future.
This rapacious appetite for more data and more computation is hard-wired into the AI movement. It is driven by a belief that Sam Altman expresses like this on his blog: “it appears that you can spend arbitrary amounts of money and get continuous and predictable gains” [Altman25]. The movement is built on the idea that if we just consume more and more resources, we will achieve greater and greater success. As long as the AI companies are driven by this belief, we can never expect them even to attempt to curb their energy use.
Data centres often harm the local area, and are frequently sited in places already suffering social deprivation. They consume both energy and water that could otherwise be used by local people, and cause pollution and energy shortages [Fleury25].
For more detail on the environmental impacts of AI, I recommend (perhaps surprisingly) the Teen Vogue article ‘ChatGPT Is Everywhere – Why Aren’t We Talking About Its Environmental Costs?’ [McMenamin25].
Exploitation of workers
The AI companies don’t like to talk about it, but their models only work when provided with vast amounts of human-created data. This data is not simply passively scraped from the Internet: the models are built on the work of millions of people actively classifying images and rating answers, shaping them to produce results that look and sound safe and reasonable [Williams22].
Most of the people involved are very poorly paid [Rowe23]. Many of them are traumatised by horrific images and speech that they are asked to classify [Stahl25].
Workers paid between $1.32 and $2 per hour in Kenya (a wage described as ‘an insult’) talk about their work like this [Bartholomew23]:
You’re reading this content, day in, day out, over the course of days and weeks and months, it seeps into your brain and you can’t get rid of it.
Biased and dangerous results
Despite wide acknowledgement among experts that AI produces unreliable results, many people are being encouraged to trust its output as both accurate and safe.
Researchers have found that recent AI models confidently express judgements that are plainly wrong, making mistakes about basic economic ideas such as interest rates [Smith24], or inventing passages in transcriptions of patients’ medical conversations [Burke24].
Even more concerningly, people are treating AI models as trustworthy conversation partners, with the full encouragement of the AI companies, despite the real risks involved. In 2023, Character AI founder Noam Shazeer said of AI, “It’s going to be super, super helpful to a lot of people who are lonely or depressed.” In fact, one of Character AI’s chat bots played an alarming role in the suicide of a teenager [Roose24]. The parents of another teenager say ChatGPT explicitly encouraged him to commit suicide before he died [Hill25a]. There is a growing number of reports of chat bots guiding people down “delusional spirals” that can have devastating mental health consequences [Hill25b].
It is clear from these examples, and from the many court cases currently in progress, that it is impossible to control the words spewing from these models. Given the racial slurs included in the most widely-used training dataset [McQuaid21], it is not surprising that they occasionally lose the plot as Grok once did, producing racist rants and naming itself ‘MechaHitler’ [Hagen25].
Despite all this unreliability, the popular AI models very reliably convey total confidence in their latest answer, even where it contradicts the previous one.
It is pure wishful thinking to say that AI models can replace human judgement in any area. If people treat AI as a trustworthy oracle or a trustworthy companion, this wishful thinking is actively harmful [Schneier23].
Unfair use of creative work
The leading models are trained on all the data the AI companies can get their hands on, regardless of licence. This includes proprietary information from news and information sites, art galleries and personal web sites, all protected by traditional copyright arrangements. These sites (and indeed printed books and other offline materials) are published under a legal framework that allows their content to be searched and indexed without being reproduced. Many commercial web sites depend on visitors being directed to them so they can receive advertising income.
Meanwhile, a huge amount of material is available online in free and open source form, especially but not exclusively the enormous corpus of source code used to train AI coding models. The bargain for this material is different: authors require attribution for re-use, and place additional requirements such as ‘share-alike’ clauses that enforce the release of derived works under the same terms. AI models are breaking this legally-enforced bargain by reproducing derived or straightforwardly copied works without attribution or a correct licence.
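To make that bargain concrete for programmers, here is a minimal, hypothetical sketch of the kind of notice a ‘share-alike’ licence requires to travel with the code. The file name and author are invented for illustration; the central comment is the standard GNU GPL notice wording.

// widget.cpp – a hypothetical example of a ‘share-alike’ licence notice.
// Copyright (C) 2025 A. N. Author
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// A derived work must keep a notice like this and be released under
// the same terms; reproducing the code without it strips away the
// conditions under which it was published.

#include <iostream>

int main() {
    std::cout << "This code's licence travels with it.\n";
}

When a model emits code like this, or a close derivative of it, with the notice stripped, the very terms that made the work free to use have been removed.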
Directing visitors to web sites is not a benevolent or coincidental side effect of search engines; it is a self-sustaining bargain: you allow me to index your content and, in exchange, I direct users to your site. If this bargain breaks down, web site creators lose their source of income and many web sites will disappear [Stokes25]. Complying with licence terms is not optional: it is required in order to use any material, including free and open source content.
These bargains are enforced by copyright law. The invention of AI changed nothing about them, except that it obfuscated the copying of copyrighted material [Carson25] [Gerken23], and convinced governments that enforcing the law would block the promised economic miracle of AI [Milmo25a] [Milmo25b].
Other reasons
This article is primarily concerned with ethical reasons for avoiding AI usage, but there are plenty of other reasons too:
- They don’t do what the billionaires claim they do: researchers found that experienced open source developers were 19% less productive when using AI tools [Becker25], and a UK government trial found no clear productivity gain [Kunert25].
- They make you worse at your job: doctors’ ability to spot cancer eroded within months of their starting to use AI tools [Black25].
- They will be used as an excuse to cut jobs: Amazon recently told its workers to expect job cuts due to AI [Roth25] even though it has been engaging in regular job cuts since 2022.
Despite the hype, it is clear that AI can perform some tasks effectively, for example making very convincing fake videos. Even so, I choose to avoid these tools where I can, for the reasons above.
Conclusion
I believe that AI is a force that is doing real harm in our world, and is concentrating wealth and power in the hands of those who are already wealthy and powerful enough. If you agree, let’s work together as professionals to help our companies, organisations and friends to be skeptical of its benefits, and mindful of its problems, when we make decisions about how and where to use it.
If you’d like to hear more AI-skeptical viewpoints, thecon.ai [Bender25] is a good place to start. The article ‘I Am An AI Hater’ [Moser25] by Anthony Moser was the research starting point for this article and is recommended if you’d like a less emotionally constrained view along similar lines.
References
(With thanks to the Overload reviewers for suggesting several extra references.)
[Altman25] Sam Altman, ‘Three Observations’, Sam Altman’s blog, posted 9 February 2025 at https://blog.samaltman.com/three-observations
[Bartholomew23] Jem Bartholomew, ‘Q&A: Uncovering the labor exploitation that powers AI’, Columbia Journalism Review, published 9 August 2023 at https://www.cjr.org/tow_center/qa-uncovering-the-labor-exploitation-that-powers-ai.php
[Becker25] Joel Becker, Nate Rush, Elizabeth Barnes and David Rein, ‘Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity’, METR, published 10 July 2025 at https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
[Bender25] Emily M. Bender and Alex Hanna (2025), The AI Con: How to fight big tech’s hype and create the future we want, published by Harper, ISBN: 978-0063418561 (see also thecon.ai)
[Black25] ‘AI Eroded Doctors’ Ability to Spot Cancer Within Months in Study’, Bloomberg, published 12 August 2025 at https://www.bloomberg.com/news/articles/2025-08-12/ai-eroded-doctors-ability-to-spot-cancer-within-months-in-study
[Burke24] Garance Burke and Hilke Schellmann, ‘Researchers say an AI-powered transcription tool used in hospitals invents things no one ever said’, AP News, published 26 October 2024 at https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
[Campbell23] John Campbell, ‘Data centres use almost a fifth of Irish electricity’, BBC, published 12 June 2023 at https://www.bbc.co.uk/news/articles/cpe9l5ke5jvo
[Carson25] David Carson, ‘Theft is not fair use’, published 21 April 2025 at https://jskfellows.stanford.edu/theft-is-not-fair-use-474e11f0d063
[Chen25] Sophia Chen, ‘Data centres will use twice as much energy by 2030 – driven by AI’, Nature, published 10 April 2025 at https://www.nature.com/articles/d41586-025-01113-z
[Fleury25] Michelle Fleury and Nathalie Jimenez, ‘ “I can’t drink the water” – life next to a US data centre’, BBC, published 10 July 2025 at https://www.bbc.co.uk/news/articles/cy8gy7lv448o
[Gerken23] Tom Gerken, ‘New York Times sues Microsoft and OpenAI for ‘billions’ ’, BBC, published 27 December 2023 at https://www.bbc.co.uk/news/technology-67826601
[Hagen25] Lisa Hagen and Huo Jingnan, ‘Elon Musk’s AI chatbot, Grok, started calling itself ‘MechaHitler’ ’, NPR, published 9 July 2025 at https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
[Hill25a] Kashmir Hill, ‘A Teen Was Suicidal. ChatGPT Was the Friend He Confided In’, New York Times, updated 27 August 2025 at https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
[Hill25b] Kashmir Hill and Dylan Freedman, ‘Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens’, New York Times, published 8 August 2025 at https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
[Kearney24] Laila Kearney, ‘US data-center power use could nearly triple by 2028, DOE-backed report says’, Reuters, published 20 December 2024 at https://www.reuters.com/business/energy/us-data-center-power-use-could-nearly-triple-by-2028-doe-backed-report-says-2024-12-20/
[Knight20] Will Knight, ‘Data Centers Aren’t Devouring the Planet’s Electricity – Yet’, Wired, published 27 February 2020 at https://www.wired.com/story/data-centers-not-devouring-planet-electricity-yet/
[Kunert25] Paul Kunert, ‘UK government trial of M365 Copilot finds no clear productivity boost’, The Register, published 4 September 2025 at https://www.theregister.com/2025/09/04/m365_copilot_uk_government/
[McMenamin25] Lex McMenamin, ‘ChatGPT Is Everywhere – Why Aren’t We Talking About Its Environmental Costs?’, Teen Vogue, published 7 May 2025 at https://www.teenvogue.com/story/chatgpt-is-everywhere-environmental-costs-oped
[McQuaid21] John McQuaid, ‘Limits to Growth: Can AI’s Voracious Appetite for Data Be Tamed?’, Undark, published 18 October 2021 at https://undark.org/2021/10/18/computer-scientists-try-to-sidestep-ai-data-dilemma/
[Milmo25a] Dan Milmo, ‘UK copyright law consultation ‘fixed’ in favour of AI firms, peer says’, The Guardian, published 11 February 2025 at https://www.theguardian.com/technology/2025/feb/11/uk-copyright-law-consultation-fixed-favour-ai-firms-peer-says
[Milmo25b] Dan Milmo and Robert Booth, ‘UK proposes letting tech firms use copyrighted work to train AI’, The Guardian, published 17 December 2024 at https://www.theguardian.com/technology/2024/dec/17/uk-proposes-letting-tech-firms-use-copyrighted-work-to-train-ai
[Moser25] Anthony Moser, ‘I Am An AI Hater’, published 26 August 2025 at https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html
[O’Donnell25] James O’Donnell and Casey Crownhart, ‘We did the math on AI’s energy footprint. Here’s the story you haven’t heard’, MIT Technology Review, published 20 May 2025 at https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/
[Roose24] Kevin Roose, ‘Can A.I. Be Blamed for a Teen’s Suicide?’, New York Times, published 23 October 2024 at https://www.nytimes.com/2024/10/23/technology/characterai-lawsuit-teen-suicide.html (subscription required)
[Roth25] Emma Roth, ‘Amazon CEO says it will cut jobs due to AI’s ‘efficiency’ ’, The Verge, published 17 June 2025 at https://www.theverge.com/news/688679/amazon-ceo-andy-jassy-ai-efficiency
[Rowe23] Niamh Rowe, ‘Millions of Workers Are Training AI Models for Pennies’, Wired, published 16 October 2023 at https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/
[Roytburg25] Eva Roytburg, ‘Sam Altman admits OpenAI ‘totally screwed up’ its GPT-5 launch and says the company will spend trillions of dollars on data centers’, Fortune, published 18 August 2025 at https://fortune.com/2025/08/18/sam-altman-openai-chatgpt5-launch-data-centers-investments/
[Schneier23] Bruce Schneier, ‘AI and Trust’, published 4 December 2023 at https://www.schneier.com/blog/archives/2023/12/ai-and-trust.html
[Smith24] Gary Smith, ‘Large Language Models Are Often Wrong, Never in Doubt’, MindMatters, published 29 April 2024 at https://mindmatters.ai/2024/04/large-language-models-are-often-wrong-never-in-doubt/
[Stahl25] Lesley Stahl, ‘Labelers training AI say they’re overworked, underpaid and exploited by big American tech companies’, CBS News, updated 29 June 2025 at https://www.cbsnews.com/news/labelers-training-ai-say-theyre-overworked-underpaid-and-exploited-60-minutes-transcript/
[Stokes25] Mark Stokes, ‘The AI Starvation Loop’, Medium, published 29 July 2025 at https://medium.com/@mark_stokes/the-ai-starvation-loop-how-ai-is-starving-the-web-and-what-we-can-do-about-it-e0e567f13ad4
[Wikipedia] ‘Large language model’, Wikipedia, https://en.wikipedia.org/wiki/Large_language_model
[Williams22] Adrienne Williams, Milagros Miceli and Timnit Gebru, ‘The Exploited Labor Behind Artificial Intelligence’, Noema, published 13 October 2022 at https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
Andy is happy as long as he has a programming language and a problem. He finds over time he has more and more of each. You can find his open source projects at artificialworlds.net