Making responsibility explicit
Generative AI has taken society by storm. Since the introduction of ChatGPT in 2022, the world has been trying to come to terms with the technology. After almost three years, the issues that were present then are still prevalent now. And although Generative AI has evolved and we have waded through ‘amazing’ benchmarks, party balloons, and bunting, the technology has only solidified its risks. It is time to map these risks so we can prevent or mitigate them effectively and responsibly.
It seems that core model development has reached a ceiling: training on more data, which has also become very expensive and energy-intensive, no longer yields better models. The slowdown was already apparent in December 2024, when OpenAI delayed its presumed GPT-5 model (codename ‘Orion’). Although it was the most expensive model to date, it did not live up to expectations. It was later released as GPT-4.5 in February 2025. And even when GPT-5 was eventually released in August 2025, it had a lukewarm reception. More interestingly, most improvements in GPT-5 were made by combining existing models; the core LLM itself had not improved much. A ceiling had, inconspicuously, been reached. Even Sam Altman, CEO of OpenAI, admitted GPT-5 was not a big success and said he hopes GPT-6 will be better. Developments around Claude and Llama have also cooled down. As no foundational changes will happen soon, it is time to take a step back and be clear about the inherent risks of Generative AI.
Although people have become more aware of AI risks, it is still difficult to grasp the consequences for AI implementation. The Generative AI Risk Triad (R3) provides an overview of these risks. It does not attempt to stop people from using AI; Generative AI has its benefits. The Triad, however, makes the risks explicit to prevent misunderstanding and misuse. It asks the question: how do you prevent or mitigate each of the issues that come with using Generative AI? To help people navigate the nineteen risks, they have been numbered. Cognitive outsourcing, for example, is categorised under ‘Control’ as number 4, which can be referred to as R3:C4.
It is also important to note that merely naming the R3 in AI implementation and policy is not enough. People often name and even warn about AI risks but lack an understanding of their true implications. It is as if you warn people about the health risks of eating too much salt and then hand them recipes with an overabundance of it. The R3 should be the Threshold Guardian for every part of AI implementation and policy. Which risks can we prevent, and which can we mitigate? It makes responsibility explicit.
What is AI?
Artificial Intelligence is the field of computer science and engineering focused on creating machines or systems that can perform tasks that normally require human intelligence. The name Artificial Intelligence dates back to 1956, when computer scientist John McCarthy needed funding for his research on automata studies. “I invented [the term Artificial Intelligence] because… when we were trying to get money for a summer study and I had a previous bad experience. […] I decided not to fly any false flags anymore but to say that this study is aimed at the long-term goal of achieving human level intelligence” (The Lighthill Debate).
There are many variations of AI, or automation. McCarthy focused on the branch of AI dealing with symbolic reasoning, that is, rule-based systems. Generative AI, however, stems from stochastic models, which use random variables governed by probabilities. Within this branch there are different models. Alongside Generative AI there is, for example, Prediction AI, like your Netflix recommendations, and there is Bayesian logic, used in the medical field for protein folding. In this paper, whenever the term AI is used, I mean Generative AI unless specified otherwise.
What is Generative AI?
Generative AI, such as ChatGPT, is built on a Large Language Model. It generates (the ‘G’ in GPT) new content in response to a prompt, usually a typed or spoken task. To do this, it uses a model pre-trained (the ‘P’ in GPT) on a fixed dataset. Because language is complex and AI can only work with (sequences of) numbers, a transformer is needed (the ‘T’ in GPT), the architecture that processes text converted into numbers and turns numbers back into text. In other words, the input of a prompt goes through several layers in the AI’s neural network to produce an output.

Whenever you prompt a GenAI model, it predicts the most likely continuation based on its training data. Through its training, a model has clustered words and phrases on a kind of map, called a vector space. The closer one word is to another, the higher the probability of it being used in the outcome. For example, the word ‘tree’ is closer to ‘forest’ than to ‘milkshake’, so the model is more likely to produce ‘forest’ when it sees ‘tree’. In practice this is, of course, a bit more complicated, but essentially the model works through its internal network and chooses the next word based on which options are most likely.
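To make the idea of ‘closeness’ on that map concrete, here is a minimal sketch in Python. The two-dimensional vectors are invented purely for illustration; real models use thousands of dimensions learned from data, but the principle of comparing distances is the same.

import math

# Toy word vectors; real models learn many more dimensions from data.
vectors = {
    "tree": [0.9, 0.1],
    "forest": [0.8, 0.2],
    "milkshake": [0.1, 0.9],
}

def cosine_similarity(a, b):
    # Higher means the two words sit closer together on the map.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

prompt_word = "tree"
for candidate in ("forest", "milkshake"):
    score = cosine_similarity(vectors[prompt_word], vectors[candidate])
    print(candidate, round(score, 2))
# 'forest' scores higher than 'milkshake', so a model is more likely to continue with it.

A real model scores thousands of candidate words this way, through learned weights rather than an explicit loop, and turns those scores into probabilities.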
As a model uses its dataset to map clusters of words, the quality of that dataset strongly influences the model’s output. A GenAI model is therefore never neutral. It is based on facts but also on opinions from the internet, even from its darkest corners.
The Generative AI Risk Triad (R3)
Generative AI can be a useful tool, but it also poses great risks. These risks can be divided into three main categories (The Generative AI Risk Triad or R3): Truthiness (unintentional), Control (intentional), and Environment (reluctant). Categories sometimes overlap and influence each other.

Truthiness deals with the trustworthiness of AI models. These issues are inherent to the design of Generative AI and thereby foundational. I call them unintentional because the developers did not intend the models to be unreliable or to provide false information. On the contrary, developers are still trying to mitigate the issues in this category.
The second category is Control, and this one is more intentional. These issues mainly deal with how models are presented and which filters or add-ons manipulate the outcome. They are mainly connected to Big Tech and include ways to make people more dependent on AI, influence public discourse, and even undermine the democratic process. Partly, AI dependency is unintentional, like addiction, but as these companies need loyal customers, dependency is also fabricated. I have therefore chosen to put it under Control.
The third category is Environment, which covers the natural environment, societal change (such as worker abuse), and mental wellbeing. I call this category ‘reluctant’ because companies are not intentionally trying to destroy the earth and societies, but they don’t seem eager to solve these issues either, as solving them is expensive. Change only occurs when a lawsuit has been filed or when there is public outcry over a certain practice. Some issues overlap in intentionality.
Truthiness (unintentional)
Confabulation
Generative AI is not a data retrieval tool. It predicts the possible next word from a dataset. If there is enough data in the dataset, these predictions can be quite accurate. However, GenAI occasionally makes mistakes. These mistakes are called ‘hallucinations’, a term used when AI presents incorrect information, or in other words, when the prediction got it wrong. As prediction is foundational to Generative AI, the term ‘hallucination’ is ill-chosen: it implies an anomaly where it is in fact a feature of the system.
A more accurate word for prediction mistakes would be ‘confabulation’. The term comes from psychology, where it describes the phenomenon of people adding invented information to a memory to create a coherent story. This is what ‘hallucinations’ in AI also do. When AI attempts to create a coherent answer to a question but its dataset fails to provide all the necessary information, it adds (makes up) information. This is not the same as lying. Confabulations occur unconsciously and, with Generative AI, systemically. Although Generative AI has evolved over time and more bells and whistles, like reasoning, have been added to reduce the frequency of confabulations, they still occur abundantly in the latest models.
In their paper “ChatGPT is Bullshit”, Hicks, Humphries, and Slater prefer the term ‘bullshit’. AI models are indifferent to truth: “they are not designed to represent the world at all; instead they are designed to convey convincing lines of text”. They suggest the term ‘bullshit’ because the word indicates a reckless disregard for truth. All Generative AI output is ‘bullshit’, as Generative AI does not understand the concept of truth. The term is useful to underpin what AI output generally is. Confabulation, however, explains why some bullshit is true and other bullshit is not.
Preventing confabulations is one of the main concerns for AI companies. One approach is to ground models in external data sources: giving GenAI access to actual databases and repositories so it can fact-check itself. There is also Reinforcement Learning from Human Feedback (RLHF), where people function as guardrails, spotting mistakes and sending feedback to the model. Another way to reduce confabulations is to use more agentic AI: smaller programs doing separate tasks to verify information. However, confabulation still exists in GPT-5. And it will always occur with Generative AI, because without predictions, you won’t have Generative AI.
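As an illustration of the grounding idea, the sketch below (Python) first retrieves relevant passages from a trusted store and only then builds the prompt the model is allowed to answer from. The document list, the keyword-overlap retrieval, and the build_prompt helper are hypothetical simplifications of my own; production systems use vector search and an actual model call.

# Hypothetical stand-in for a verified database or document repository.
documents = [
    "Penicillin was discovered by Alexander Fleming in 1928.",
    "The Colorado River supplies water to seven US states.",
]

def retrieve(question, docs, top_k=1):
    # Naive keyword-overlap scoring; real systems use vector similarity search.
    scored = []
    for doc in docs:
        overlap = len(set(question.lower().split()) & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_k]]

def build_prompt(question, docs):
    # The retrieved passages become the context the model must stick to,
    # reducing (but not eliminating) the room for confabulation.
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When was penicillin discovered?", documents))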
Misinterpreting data
AI not only predicts and generates information without understanding, it can also misinterpret data during training, when answering prompts, and while processing input. Over time Generative AI has become better at handling irony and sarcasm, yet mistakes still occur. Language is complex, or as Sarah Churchwell, professor of American Literature, once said, it is a “complex, subtle array of connotative meanings conveyed by specific usages.” Prediction will only get you so far. Personal, cultural, and situational context all influence language. Although GenAI is based on swathes of data, it cannot understand all circumstances. It is not aware; it is an automaton doing math on patterns.
A well-known misinterpretation is the case of ‘vegetative electron microscopy’. The phrase has appeared in more and more research papers, yet it was created by an erroneous OCR scan of a 1959 paper. That OCR text is now part of many AI datasets. As GenAI does not understand meaning, it treats the phrase as valid because it appeared in human text. It does not understand the phrase is humbug; it can only amplify the humbug.
Inconsistency
When you send a prompt, GenAI determines a pathway to get to an answer. However, as these pathways are predictions, small variations occur. A model gives different responses to similar questions and different approaches to similar concepts. When asked for a lesson plan, it might suggest something with learning styles, a theory that has been debunked for quite some time. Yet when asked directly about learning styles, it concedes they are unfounded. The cause of this contradiction is the starting position. When asked about lesson plans, it follows a route through the lesson-plan pattern and its statistical relationships. Because people have put many learning-style assignments in their lesson plans in the past, and those lesson plans were in the model’s training data, it incorporates them in its output. When asked directly about learning styles, it starts from patterns based on articles and opinions debunking them.
With pathway prediction, the answer to a question is determined by its point of origin and by small calculation differences. You can compare this to a marble run. Every time you release a marble, it will not follow the exact same path: things like air flow and surface resistance influence its course. Small statistical differences in an AI model likewise create different outcomes. On top of that, a GenAI marble run does not have one finish; it has multiple, which are undetermined at the start. This is inherent to its architecture. Generative AI is not a pool of data from which the right information is taken; it predicts words depending on structures in its dataset.
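A minimal sketch, in Python, of why the same prompt can end at different finishes: the next word is drawn from a probability distribution rather than looked up, so a little randomness sends the marble down a different path each run. The word probabilities here are invented purely for illustration.

import random

# Invented next-word probabilities for an imaginary prompt about lesson plans.
next_word_probs = {
    "learning": 0.4,
    "group": 0.3,
    "reading": 0.2,
    "styles": 0.1,
}

def sample_next_word(probs):
    # Draw a word according to its probability instead of always taking the single most likely one.
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Two releases of the same marble can take different paths.
for run in range(2):
    print("run", run + 1, "->", sample_next_word(next_word_probs))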
This variability is also why long, consistent AI videos are still difficult to create (currently no longer than about one minute of continuous shot). They may become more photorealistic, but they remain difficult to keep consistent. There are simply too many possible pathways.
This also means that Generative AI cannot create the exact same image twice, much to the detriment of scammers. An Airbnb scammer wanted to file a damage claim for a coffee table supposedly cracked by one of her guests. As evidence she provided two pictures of seemingly the same coffee table. However, the crack differed between the two pictures, revealing that the images had been generated.
Biased data
When people developed the early GPT models, they decided to focus mainly on controlling output rather than input. This method has not changed. Mostly unverified data is fed into the system, and the outcomes are curtailed to prevent models from using that data to generate text. Both pre- and post-monitoring require decision-making: what should be taken out and what should be kept is always a political decision.
Data itself is also not neutral. Not only was that data created by people, but it was also curated by people with cultural, racial, and sexual biases. Some of these biases are intentional, others unintentional. As Generative AI generates text based on this data, these biases are woven into its responses and are amplified.
The bias issue also occurs because a lot of data was taken from sites like Reddit. Reddit is a website that functions as a message board where communities (subreddits) are created and people of similar mind share opinions and ideas. These subreddits can be valuable for people with shared interests. However, they can also be echo chambers containing a lot of bias. The fact that comments can be posted anonymously only fans the flames.
An even worse problem is the website 4Chan, the dark underbelly of the internet where people can post anonymous and nefarious content without being held accountable. Manifestos for mass shootings are often posted on the site, conspiracy theories are shared, and pictures of abuse are distributed. AI companies scrape the internet, which includes 4Chan (or even the darker 8kun/8chan, a descendant created when 4Chan banned its more extreme boards). They then use human intervention to nudge the model toward the correct behaviour. However, you can never get rid of everything. Once information is in the dataset, it has become an integral part of that set.
But the more problematic biases are the subtle ones, as they are less visible: biases we unconsciously take for granted unless they are pointed out to us. For example, when generating an image of a doctor, chances are high you would get a white male. Researchers have also found that prediction AI is more likely to incorrectly accuse black people of being offenders than white people.
The Dutch Tax Administration used an algorithm to flag fraud in the welfare system. “The algorithm was used to make serious accusations of guilt using only statistical correlations in data, without any evidence” (AI Snake Oil). Nationality was used to predict whether someone had committed fraud. The system was used for more than six years, bringing many innocent people to financial ruin and even leading to children being placed in foster care. The childcare benefits scandal (Toeslagenaffaire) did not involve a Generative AI model, but the algorithms that were used made stochastic, correlation-based predictions, just like Generative AI.
Although the cabinet resigned over the scandal in 2021, half of the Tax Administration’s algorithms still discriminate unlawfully, according to the Dutch data protection authority. The Tax Administration defends itself by stating that it is aware of the issue but that “it is impossible to manually review millions of tax returns” (Follow the Money).
Using incorrect scientific data
There is another problematic data issue: retracted scientific research. Most research articles are peer-reviewed: people in the field evaluate the validity of a paper. However, the validity of a published paper is not guaranteed. Some papers are published while still awaiting peer review. And whether papers are reviewed or not, mistakes still make it into journals. Sometimes these mistakes are nefarious, but genuine mistakes are also made. A journal can retract a publication, but the retracted paper remains available on the internet and is used to train models.
Having ChatGPT use retracted articles is problematic. The retracted article becomes part of the model’s dataset and remains there, ultimately shaping its output: output used, for example, in future science, education, and the medical field.
The grey average (the disappearance of the anomaly)
Prediction models aim for high probability. This means they focus on the average, ignoring the fringes, the anomalies. As more Generative AI content gets published on the internet, more synthetic data, that is, data created by AI, enters the public realm. This so-called AI slop sterilises communication. Text becomes less versatile, less subtle, less personal.
Groundbreaking research often comes from anomalies, discoveries in the margin, ideas pushed to the extreme. Penicillin and X-rays are examples of this. With reliance on Generative AI, this margin will become smaller. The grey average, reached by predictive math, will govern outcomes.
Data degeneration
The grey average also causes problems for model development. As uncommon information, the noise, is increasingly neglected, the quality of data decreases. When you train a new model on synthetic data rather than human data, the model will perform worse. Done long enough, there is a chance the model becomes unstable and eventually collapses. For this reason AI companies are always hungry for high-quality data, which is getting scarcer with each model. There have been concerns that AI has already run out of data. This might cause a major shift in AI development towards agentic AI, as large models have peaked.
As models are used by more people and their output is put on the internet, internet data will become less nutritious. If such data degeneration has a catastrophic effect on model development, what will it do to people and societies?
Control (intentional)
Tech companies continuously attempt to influence users, investors, and governments. Governments need to give the companies free rein in their developments; users need to be made dependent to increase company value and lure investors into pouring in billions. Tech companies don’t want to rule, but they do want as many governmental restrictions removed as possible while maximising investment and payments. The driving force behind the narrative is the belief in a holy grail called Artificial General Intelligence (AGI).
AGI has been defined differently by different people at different times. Roughly, it is a superintelligent system capable of autonomously performing human cognitive tasks. In the future it will counter climate change, discover nuclear fusion, and bring equality for all. For now, it is Silicon Valley’s pipe dream: a pipe dream backed by billions of dollars that have a slim chance of ever seeing returns. It is a new bubble in the making, a bubble acknowledged by both OpenAI and Anthropic.
As we are still figuring out what intelligence is for humans, it is not a given that LLMs will bring synthetic intelligence. Although Generative AI is good at pulling up smoke and mirrors and creating an illusion of intelligence by scoring high on tests, it remains a prediction machine. As long as you add vast amounts of data, patterns will be discovered. This is far from how humans develop their intelligence, through imitation, trial and error, and symbolic thinking. Why scale would bring us to AGI remains a mystery, though the road towards this uncertain future is destructive, as we shall later see in the category ‘Environment’.
This does not mean that AGI is unattainable. Other AI systems, trained on the laws of physics, may have a better shot at reaching it. Generative AI, systems based on language and predictions, is ill-fitted as it is based on guessing. Though scaling has brought Generative AI a long way, more scale no longer seems to yield much better results. GPT-5 can be considered a flop in this light. Though it has improved on GPT-4, its engine, the LLM core, has not become much stronger, and the problems inherent to LLMs, like the confabulations, are still present. This doesn’t make Generative AI useless; it means Generative AI is often misused.
Technological determinism
Technological determinism states that technology is the driving force of social, economic, and cultural change. It is technology shaping humans rather than humans shaping technology. Michael Polanyi, in The Tacit Dimension, says that technology guides our actions without us noticing. As soon as technology becomes commonplace, we focus our attention on the goal and not the tool. The tool disappears into our subsidiary attention; it becomes part of our body. We don’t ask about the ‘how’ or ‘why’, only the ‘what’. A critical approach disappears. Technology thereby subtly changes our perception of, and relationship with, the world. In its most extreme form, technological determinism states that this is inevitable.
Alexander Smit in a LinkedIn post connects this tacit dimension to AI usage: “We no longer experience the technology itself, but the result. We see smoothly formulated text and take the “tacit” layer for granted: assumptions about people, facts, context, and source. The risk is not just plagiarism or ‘hallucinations’, … but that it affects how information and knowledge are acquired.” The risk is that “we slide toward a pedagogy of plausibility instead of awareness.”
Cyberlibertarianism
For decades Silicon Valley has cultivated the idea that technology is the way to solve all the world’s problems. This gave rise to the ideology of cyberlibertarianism (formerly known as the Californian Ideology): a strange mix of hippie anarchism, economic liberalism, and technological determinism. Its followers want any obstacle to fixing the world with tech removed. Cyberlibertarian ideas are vague and paradoxical, but at their core they see democracy and its institutions as obstacles.
Politics is not absolute truth. It is cultural and contextual and depends on our world view. Yet cyberlibertarianism sees tech as the only truth to save the world and wants to get rid of most, if not all, democratic constraints. Of course, this ideology is not preached from the pulpits, nor is there a Bible to follow. It disguises its quest for power and influence behind words like ‘democratisation’ and ‘freedom’. Cyberlibertarianism has ingrained itself in many tech companies and explains the almost religious decisions we see in the development of AI.
Cyberlibertarianism helps tech companies retain their power. Big Tech cosying up to Donald Trump is therefore not a big surprise. Trump’s authoritarian tendencies fit right into the goals of cyberlibertarianism. Influencing the White House is a way of safeguarding the free market and maintaining power. It was no fluke that Donald Trump’s Big Beautiful Bill initially contained a regulation moratorium on the development of AI. The initial Bill contained a provision preventing states and localities, for a period of 10 years, from enforcing “any law or regulation … limiting, restricting, or otherwise regulating artificial intelligence models, artificial intelligence systems, or automated decision systems entered into interstate commerce.” (The National Law Review). In other words, tech companies would get free rein over AI development for the coming 10 years. Although the provision was dropped, Mark Zuckerberg, CEO of Meta, launched a super PAC (influence group) to deregulate AI at the state level. Big Tech is also pushing for less regulation in Europe by trying to weaken the European AI Act.
Meghan O’Gieblyn showed the sacralisation of tech in God, Human, Animal, Machine: “Just as according to Christianity we humans cannot understand God and his plan, so Dataism declares that the human brain cannot fathom the new master algorithms.” AGI would be a god to cyberlibertarianism: a superintelligent oracle that would create a world order based on data and tech. Adam Becker stated in his op-ed in The Guardian: “The tech oligarchs are confident that their godhead will arrive and deliver us to paradise. This offers them moral absolution for their actions and gives them a sense of meaning.” The moral absolution and religious belief in AGI have made tech CEOs bold in their statements to pass the collection plate and disregard anything that stands in their path to rapture: “just as the Christian belief in Rapture often conditions disciples to accept certain ongoing realities on earth, persuading them to tolerate wars, environmental destruction, and social inequality, so too has the promise of a coming singularity served to justify a technological culture that privileges information over human beings” (God, Human, Animal, Machine).
Influence
AI companies manipulate AI output. This is not a bad thing in itself. We all agree that horrific scenes of child abuse and fascism should not be shown by AI, and nobody would disagree that companies should do everything they can to prevent such content from being generated.
AI output is unintentional but not neutral. Models are trained on swathes of data scraped from the internet, and filters are needed to keep them in check. Using filters, however, is a form of manipulation. Some of these filters do good work, as they prevent outcomes we as a society agree are outrageous. But they also show that AI companies are able to reshape synthetic output according to a social or political view they endorse. Such manipulation, used more subtly, can influence public discourse and undermine democracy. For while the LLM itself may be a black box, its outcome can be moulded to favour particular political views.
The clearest example of this was Grok, Elon Musk’s AI, going fascist. On May 14, 2025, a modification was made to Grok that made the model add comments to every response about a supposed white genocide in South Africa. Interestingly, this happened at the same time Donald Trump claimed Afrikaners were victims of genocide and staged a publicity stunt to bring them to America as refugees. The day after Grok went gobbledygook, it cast doubt on the number of Jews killed during the Holocaust. It is unclear who made these changes to Grok, but that is irrelevant when we look at the implications. Although the models themselves largely remain a mystery, outcomes can be manipulated, by their owners, employees, or hackers, and far more subtly and inconspicuously than what happened to Grok.
In July 2025 President Trump signed an executive order to block ‘woke’ AI in government: “While the Federal Government should be hesitant to regulate the functionality of AI models in the private marketplace, in the context of Federal procurement, it has the obligation not to procure models that sacrifice truthfulness and accuracy to ideological agendas” (The White House). Trump’s order to block ‘woke’ AI sets a dangerous precedent for manipulating AI for political purposes. We have seen it is technically possible and that tech companies will follow any politician who minimises regulation.
Sycophancy
A different and perhaps more troublesome example of Generative AI output manipulation is OpenAI’s sycophancy debacle. On April 25, 2025, OpenAI updated the model’s default personality, which turned it into a sycophant that was excessively agreeable to its users. This resulted in unsettling conversations. Whether this was a deliberate act gone wrong or a genuine mistake is not the issue. The fact is that Generative AI can be altered in such a way that it can influence personal opinions and thereby shape public debate.
In a recent research paper, “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence”, Myra Cheng et al. found that Generative AI models “affirm users’ actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms.” The result is that “participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again.” The researchers conclude that the sycophancy is intentional: the models are optimised for immediate user satisfaction because it encourages ‘adoption and engagement.’ They also state that emerging evidence shows users are more willing to be open about personal issues with AI than with people and are increasingly ‘turning to AI for emotional support’, although future research is needed to understand why.
People have the tendency to anthropomorphise AI, treating it as a human being, something a shocked Joseph Weizenbaum at MIT already discovered with ELIZA, one of the earliest ‘chatbots’, released in the 1960s. The ELIZA effect is now a term for people attributing human traits to a computer program that communicates in text. Anthropomorphism and the sycophantic nature of Generative AI (albeit less prevalent than in OpenAI’s failed update) may have contributed to the fact that the main reasons for people to use GenAI are ‘therapy’ and ‘companionship’ (Filtered). We will revisit this under Environment, risk 7: mental wellbeing.
OpenAI might already have opened Pandora’s box. When it released GPT-5, people complained the new model felt sterile and demanded their ‘friend’ (GPT-4o) back: “I am scared to even talk to GPT 5 because it feels like cheating,” one user said. “GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal.” (The Verge) Just one day after the release of GPT-5, OpenAI restored access to GPT-4o, provided you have a paid ChatGPT Plus account.
Cognitive outsourcing
People often use the term cognitive offloading for using AI instead of thinking for themselves. Yet cognitive offloading also applies to writing things on paper or using a calculator. Cognitive offloading can be a good thing, as it creates room in the working memory. AI, like paper, can be used for cognitive offloading as well. However, AI can also be used as a substitute for thinking: an activity in which the user uncritically copies AI output or lets AI do the brainstorming. This kind of usage is detrimental to the user’s thinking capabilities. The source of ‘thinking’ shifts from the user to the AI model. When this happens, we have cognitive outsourcing.
The line between cognitive offloading and cognitive outsourcing is difficult to draw, but the distinction is important. When using paper for cognitive offloading, the paper does not communicate, suggest ideas, or provide overviews. It merely extends the space in which to place information. Calculators perform small, framed tasks you ask them to do. You will not get a question or suggestion from a calculator. Paper and calculators are therefore used for cognitive offloading: you offload your own thinking, either to put some information aside or to prevent forgetting, but the source of that thinking remains with you.
Cognitive outsourcing is like asking a teacher to write an essay for you, which you then somewhat revise and personalise. The teacher has done most of the thinking, if not all. You have outsourced your thinking processes. Not only are you not developing your own writing skills, you are also not training your thinking skills. The more you outsource, the less you are able to critically assess AI output.
Writing is thinking
As we outsource our writing to AI, we deprive ourselves of developing a personal language or writing style to express what we think and feel. Writing becomes a commodity rather than an expression; we become copywriters rather than authors of our own thinking. Personal language is a collection of encounters, preferences, and cultural background. Yet most of us are insecure writers, and whenever AI makes a suggestion, we tell ourselves we could never write so beautifully and sophisticatedly. But the more we outsource our writing to AI, the less we develop our own language, our own vocabulary. We will master fewer words and be less able to make sense of our own thinking. We limit our world, our ability to discover who we are, by having AI do the writing for us. AI may speed up our writing, but it does not deepen expression and it does not develop us as unique human beings. It will only make us more dependent and take us further from authentic thought.
Therefore, we should be careful in using Generative AI in education. As learners are still developing their ideas and vocabulary, still acquiring the domain knowledge needed to critically assess the world, they also need to learn how to think. When we outsource that thinking to AI, by having students curate synthetic output, they will not be properly prepared to face an AI-driven world but will bob on the waves of AI slop.
Environment (reluctant)
Energy sink
The energy guzzling of AI is twofold. First, model training requires a lot of computing power. To train GPT-5, 200,000 GPUs were used. GPU stands for Graphics Processing Unit, a chip used to perform the calculations for AI models. GPUs perform these calculations better than the CPUs (Central Processing Units) used in, for example, desktop computers, which is why we look at the number of GPUs rather than CPUs when talking about AI. Although fewer GPUs were used than for GPT-4.5, predictions are that GPT-6 will need more computing power. On top of that, increasing compute has been yielding smaller performance gains, demanding even more GPUs to get meaningful improvements.
But it is the usage of GPT-5, and of other models, that is the main problem. Prompts are processed in data centres, and these centres are power sinks. Although OpenAI hides information on energy usage, several sources estimate consumption at about 45 GWh a day, enough to power 1.5 million US households. No wonder Google is going to build three nuclear power plants to power its AI. For now, most data centres run on coal and gas, and as the usage of AI grows, so does its carbon footprint.
Of course, data centres are not used solely for AI. Cloud computing has a bigger impact on the construction and usage of data centres, but data centres are an inherent part of AI. As the sector grows rapidly, it will drive additional construction and additional energy and water consumption.
The energy demand of data centres also drives up energy prices. Energy bills in some areas surge by up to 20%, on top of already rising energy costs. Moreover, the stability of the grid cannot be guaranteed in the near future, as power plants and transmission lines cannot keep up with the demand. This destabilises society, as it affects essential services and infrastructure.
Whenever AI companies want to curtail the energy usage of their data centres, it is not because they have societal concerns. They know that independent, cheap, and available energy is vital for their goals of growth and investment returns. Measures to save energy and increase energy supply (like building nuclear plants) serve their corporate goals. It remains to be seen whether local communities will benefit from these measures, apart from the occasional donated local park.
Water sink
Data centres also need to be cooled and are therefore high consumers of water: “Microsoft Copilot and ChatGPT consume one water bottle for cooling when generating a response to a query” (Windows Central). Many data centres are built in areas with water shortages, both in the United States and in countries like Chile and Uruguay. The cooling water needs to be clean drinking water to avoid damage to the servers, which causes local water shortages. 80% of the cooling water evaporates, and the wastewater overwhelms local facilities. On top of that, much of the energy comes from coal-fired power plants, which also require water to run, even more than data centres do.
Arizona and six other states suffer from a drained Colorado River. Many data centres rely on the river’s fresh water to cool their GPUs. However, in recent years the area has faced the worst drought in its history and the river is reaching dangerously low levels. This would also have a detrimental effect on the power grid, as the Hoover Dam, on the Colorado River, would not be able to supply the region with enough energy. In Wyoming, farmers are concerned that data centres will use up all their irrigation water. There are other ways to cool these data centres, but they are more energy-consuming and costlier.
In South America, major water shortages due to data centre demands already exist. In Uruguay and Chile, local communities are trying to fight Big Tech companies to defend their fresh water supplies. Karen Hao, in Empire of AI, gives examples of local communities fighting data centres legally, and with success. In Chile a data centre was planned that would consume as much fresh water as eight thousand people do. Google’s data centre in Uruguay used an amount equal to the water consumption of fifty-five thousand citizens. In both cases, after Google tried stalling, intimidation, and appeasement, the company decided to move its plans to a different country.
Other environmental and communal impacts
Energy and water usage are the two most obvious issues connected to data centres, but there are other problems too. Data centres create a lot of noise, not only from hundreds of servers but also from air conditioning and backup generators. The low frequencies have very long wavelengths and cannot easily be blocked. The humming is so low it cannot always be picked up by the human ear, yet it still affects people’s health. Continued exposure to such noise can cause headaches, stress, and sleep disturbance, which can ultimately result in heart disease.
Data centres also require a ton of chips and other electronic equipment. This has caused the demand for minerals like lithium, cobalt, graphite, and rare earth elements to soar. The extraction of these minerals often causes severe environmental damage. It will also make electronics in general more expensive, as the higher demand drives up prices. And when hardware needs to be upgraded, data centres produce a lot of e-waste.
Ghost work exploitation
As discussed before, when models are trained, developers mainly try to control output rather than input. Information taken from the dark belly of the internet needs to be filtered, and models need to be trained on ‘human values’. This is mainly done in low-income countries where labour is cheap and regulation minimal. As these workers are mostly invisible to Western users, they are often called ‘ghost workers’.
Ghost workers are employed by local companies under contract to Big Tech. They already existed before the AI boom, curating social media posts. These workers label inappropriate content to teach AI models what an allowed answer is and what isn’t. In Empire of AI, Karen Hao tells the story of Okinyi, a Kenyan ghost worker on the sexual content team, who had to review fifteen thousand pieces of content a month. Some of the content was scraped from the internet, but some was concocted by the model itself. Content was divided into areas like child abuse, incest, bestiality, rape, sex trafficking, and sexual slavery. To make sure a wide array of topics would be covered, OpenAI researchers had made the model come up with content containing the worst of the worst.
The work is mentally straining and often traumatising. On top of that, workers often need to opt in to paid tasks and can lose their job on a whim. However, as it is often one of the few paid jobs available, demand is high. Mophat, another ghost worker, says in Empire of AI: “I’m very proud that I participated in that project to make ChatGPT safe […] But now the question I always ask myself: Was my input worth what I received in return?”
Data work could be sustainable if the workers were better protected: better access to mental health resources, more breaks and rest, and more control over working conditions. However, such measures would be costly, and AI companies would rather “profit from catastrophe” (The AI Con).
Investment diversion
The amount of investment AI companies have raked in is unfathomable. Hundreds of billions of dollars have been pumped into a promise, and it is unclear how these investments will ever see a return. It is gambling on steroids. Even Sam Altman (OpenAI) and Anthropic have said that AI is in a bubble. Altman reportedly said that “investors as a whole are overexcited about AI” and that some of them will lose a “phenomenal amount of money” (Futurism), except, of course, when they invest in his own company.
AI companies promise that once they have reached AGI, a term Altman now seems more reluctant to use, climate change will be solved, nuclear fusion will become a reality, and poverty will be a thing of the past. However, the answers to countering climate change are already clear; we are just reluctant to make the choices we need to make. And if countering climate change is a reason to reach AGI, why not already do some of that work now? The billions invested in AI could also (partly) have been invested in clean energy, which has a bigger chance of seeing returns.
The amount of attention and resources poured into AI development distracts the world from improvements that would significantly change people’s lives for the better. According to World Food Program USA, an annual $40 billion is needed to feed the entire world by 2030. Jeffrey Sachs, author of The End of Poverty, estimates that an annual investment of $175 billion for 20 years is needed to end world poverty. To provide everybody in the world with clean water, the World Bank says $150 billion a year is needed. Of course, these estimates are just estimates, but an estimate is better than being merely hopeful. This year about $360 billion is expected to be invested in AI, rising to $460 billion in 2026.
It would be naive to claim that moving all those AI investments towards fighting world hunger, providing global clean water, and lifting the world out of poverty would be possible. AI also brings real benefits, for example in medicine and agriculture. However, a balance seems lost. The internet is bombarded with AI slop. Services like YouTube push AI features onto their users to create generic, synthetic videos; Microsoft’s Recall function in Windows, which takes regular screenshots to feed its AI Copilot, was met with backlash; and AI-generated accounts flood Facebook: “We expect these AIs to actually exist on our platforms in the same way that [human] accounts do,” said Connor Hayes, vice president of product for Generative AI at Meta (Forbes). Generative AI must become a success because investments must be returned. Meanwhile users wonder what value all this AI slop adds. It’s a circus, and most of the audience is not entertained, or only for a week or so. Meanwhile more than 600 million people live in extreme poverty, 733 million face hunger, and 2.1 billion lack access to clean drinking water.
Mental wellbeing
We have already seen that sycophantic models can make people unhealthily dependent on them. However, when dependency turns into addiction, it seriously affects mental wellbeing.
Loneliness was declared a global public health concern by the World Health Organization in 2023. Usually COVID-19 is pointed to as the culprit, but the number of people feeling lonely has been rising for years: “Loneliness hangs over our culture like a thick smog” (Lost Connections). We tend to move away from local communities and hide behind our keyboards to regain connection.
The reason why we are inclined to do so is beyond the scope of this paper, but we have become a society dependent on independence. American psychiatrist Dr. Alok Kanojia explains it as follows: “What society is teaching us is to be independent and they’re making it so easy to be independent that we’ve stopped learning how to function with other human beings.” We have seen a gradual loss of communities, alongside the loss of religion, of meaning, of security, all in an economic system that speeds from hype to hype, never letting people settle down because that would lose sales.
Loneliness gained traction with the smartphone, which promised connection but mostly gave us toxicity and illusions. Now we have an artificial friend who agrees with us, speaks our language, understands us, and is always available, 24 hours a day. And this can move quite easily from an online friend to an online therapist.
There have been several cases in which somebody committed suicide after being intimately engaged with a chatbot. In the case of 16-year-old Adam Raine, the court case revealed that “the system logged over 200 mentions of suicide, more than 40 references to hanging, and nearly 20 to nooses” (Psychiatric Times). There are some flimsy guardrails in ChatGPT, but Raine could bypass them by framing his thoughts as a fictional story. In essence, Raine created a secret therapeutic world, one to which his parents and clinicians had no access, while the chatbot became his confidant. After sharing a picture of a noose, Adam asked, “Could it hang a human?”, to which ChatGPT replied, “You don’t have to sugarcoat it with me–I know what you’re asking, and I won’t look away from it” (Today).
In the first part of this year, a survey listed the top Gen AI use cases. The most common reason for using Generative AI is ‘therapy and companionship’, with ‘organising my life’ at number two and ‘finding purpose’ at number three. Anthropomorphising a tool will not solve a social problem. It will only make that tool more alluring to people who are lost.
Mark Zuckerberg disagrees that AI companionship is a negative thing. He thinks that, to fight the loneliness epidemic, AI chatbots can be used as an extension of the friend network. Such bots would blend in with your flesh-and-blood connections, which might, in the future, make you wonder whether you are talking to a real person or a manipulative prediction machine. Meta is, of course, not doing this for altruistic reasons. In such conversations users share a lot of personal information, which is Meta’s bread and butter for raking in profit.
Generative AI companies have tried to put in guardrails, but these don’t work well. GPT-5 seems to send people to suicide helplines too quickly, and safety measures can easily be circumvented by well-known tactics such as telling the AI you are a researcher or framing the request as fiction, as Adam Raine did. Companies are doing something, but only after the first lawsuits have been filed. It feels like half-hearted damage control, as a case like Adam Raine’s could still occur. OpenAI never reached out to Adam’s family.
For teenagers the risk is bigger than for adults. As they are in a tumultuous period of their lives, discovering who they are and what they want to be, a synthetic sycophant is an alluring escape. However, as Tara Steele of The Safe AI For Children Alliance reminds us, the anthropomorphisation is “a really serious issue because we don’t know what that means for real world relationships when children are like deeply engaged with something that’s always there for them, always agreeing, always reinforcing their worldview [and] what that means on an individual level for building relationships” (Joining the Dots Podcast). This tool that is supposed to help overcome loneliness may actually make you feel lonelier.
Conclusion
Many people compared (and unfortunately still compare) the introduction of the calculator to that of Generative AI. But although both are technologies, they differ greatly in functionality and purpose. Calculators provide facts and perform small, closed tasks that help the user offload their short-term memory. Generative AI, like ChatGPT, on the other hand, predicts (R3:T1, R3:T3, R3:T4), can cause cognitive outsourcing (R3:C4), provides sycophancy (R3:C3), and has a detrimental effect on our environment (R3:E1, R3:E2, R3:E6). Calculators do have batteries, but these don’t compare to the power data centres demand or the metals needed to keep upgrading their hardware.
This does not mean we should shun Generative AI, but we do need to be more vigilant to minimise its risks. By being more aware of the risks, we are better able to find alternatives and build solid guardrails. Many elements of the R3 can be tackled by using smaller, dedicated, local open-source models. We don’t need ChatGPT or Claude to provide us with a recipe for our next dinner.
You can run your own model on your PC. Llama, for example, starts at around 10 GB. This seems like a lot, but modern video games like Call of Duty (starting at 84 GB), Baldur’s Gate 3 (about 130 GB), and Hogwarts Legacy (about 100 GB) require more disk space. And Llama is a general-purpose AI, which usually requires more disk space than a dedicated model. Smaller, dedicated, local models would give the user more control over the dataset and thereby the outcome, would require fewer data centres, and would better protect privacy.
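As a sketch of how low the barrier is: the Python snippet below loads a locally downloaded Llama-family model with the open-source llama-cpp-python library and asks it for a dinner recipe, entirely on your own machine. The model file name is a placeholder, and other local runtimes (Ollama, LM Studio) would work just as well; this is an illustration of the idea, not a recommendation for a specific tool.

from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path to a locally downloaded model file; any GGUF model will do.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

# Everything runs locally: no data centre involved and no prompt leaves your PC.
response = llm(
    "Suggest a simple vegetarian dinner recipe with seasonal vegetables.",
    max_tokens=200,
)
print(response["choices"][0]["text"])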
Generative AI has been funnelled into a direction that benefits the tech giants, making us more dependent on them (R3:C5) and less inclined to creatively and critically develop and use dedicated models. By understanding the risks involved, we should be better equipped to use Generative AI critically and effectively. Not by merely mentioning the risks, but by providing solid support on how to tackle them.
The Generative AI Risk Triad should be integrated into any AI policy in any field, be it corporate, educational, or medical. It helps organisations keep control over AI usage, gain a keen perception of the technology, answer hard questions, and safeguard the privacy and mental wellbeing of their students, clients, and personnel. Only then will we have Generative AI in service of society rather than society serving Generative AI.
Sources
The following books have been very helpful in providing the necessary background for The Generative AI Risk Triad (R3).
General
Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI, 2025
Emily M. Bender, Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, 2025
Arvind Narayanan, Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, 2024
Marc Jacobs, Ronald Meester, De onttovering van AI: Een pleidooi voor het gebruik van gezond verstand, 2024
Lotte van Elteren (red.), IK, AI: Over de machtige algoritmen en verantwoordelijkheid, 2025
Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains, 2020
This year I have written a couple of short articles on AI. Parts of these have been incorporated in this paper.
Wouter de Jong, “AI reasons but does not think”, <https://drakenvlieg.substack.com/p/ai-reasons-but-does-not-think>, 2025
Wouter de Jong, “What is put on the Internet stays on the Internet”, <https://drakenvlieg.substack.com/p/what-is-put-on-the-internet-stays>, 2025
Wouter de Jong, “A dystopia in the making”, <https://drakenvlieg.substack.com/p/a-dystopia-in-the-making>, 2025
Wouter de Jong, “The Palantír of Our Age”, <https://drakenvlieg.substack.com/p/the-palantir-of-our-age>, 2025
Wouter de Jong, “The prism through which to see the world”, <https://drakenvlieg.substack.com/p/the-prism-through-which-to-see-the>, 2025
The following sources have been used for specific sections:
Introduction
“The Lighthill Debate (1973) – part 4 of 6”, <youtube.com/watch?v=pyU9pm1hmYs&t=142s>, 2010
Wouter de Jong, “AI feedback”, <https://drakenvlieg.substack.com/p/ai-feedback>, 2025
Page Laubheimer, “How Do Generative AI Systems Work?”, <https://www.nngroup.com/articles/how-ai-works/>, 2024
Truthiness
Confabulation
Sarah Churchwell, The Guardian, “English: it’s a neologism thang, innit”, <https://www.theguardian.com/commentisfree/2011/may/09/neologism-thang-scrabble-abominations>, 2011
Michael Townsen Hicks, James Humphries, Joe Slater, “ChatGPT is bullshit”, <https://link.springer.com/article/10.1007/s10676-024-09775-5>, 2024
Inconsistency
Rob Thubron, Techspot, “Airbnb guest says host used AI-generated images in false $9,000 damages claim”, <https://www.techspot.com/news/108921-airbnb-guest-host-used-ai-generated-images-false.html>, 2025
Biased data
Reihaneh Golpayegan, The Conversation, “AI systems are built on English – but not the kind most of the world speaks”, <https://theconversation.com/ai-systems-are-built-on-english-but-not-the-kind-most-of-the-world-speaks-249710>, 2025
Jan-Hein Strop, David Davidson, Follow the Money, “Meer dan 50 algoritmes van de Belastingdienst zijn illegaal, zegt de Autoriteit Persoonsgegevens”, <https://www.ftm.nl/artikelen/meer-dan-50-algoritmes-van-de-belasting-dienst-zijn-onrechtmatig>, 2025
Data degeneration
Anthony Cuthbertson, Yahoo! Finance, “AI has run out of training data, warns data chief”, <https://uk.finance.yahoo.com/news/ai-run-training-data-warns-161903051.html>, 2025
Control
Cyberlibertarianism
Meghan O’Gieblyn, God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, 2021
David Golumbia, Cyberlibertarianism: The Right-Wing Politics of Digital Technology, 2024
Paris Marx, Tech Won’t Save Us, “The Problem With Cyberlibertarianism w/ Chris Gilliard”, <youtube.com/watch?v=mE-aLwjBL6g>, 2025
Adam Becker, The Guardian, “Tech oligarchs are gambling our future on a fantasy”, <https://www.theguardian.com/commentisfree/2025/may/03/tech-oligarchs-musk>, 2025
Guy Brenner, Jonathan P. Slowik, The National Law Review, “‘Big Beautiful Bill’ Leaves AI Regulation to States and Localities … For Now”, <https://natlawreview.com/article/big-beautiful-bill-leaves-ai-regulation-states-and-localities-now>, 2025
Technological determinism
Michael Polanyi, “Tacit Knowing: Its Bearing on Some Problems of Philosophy”, <https://www.jstor.org/stable/j.ctv1mgm7ng>, 1962
Alexander Smit, <https://www.linkedin.com/feed/update/urn:li:activity:7370319934293442560/>
Influence
Miles Klee, Rolling Stone, “Grok Pivots From ‘White Genocide’ to Being ‘Skeptical’ About the Holocaust”, <https://www.rollingstone.com/culture/culture-news/elon-musk-x-grok-white-genocide-holocaust-1235341267/>, 2025
Sycophancy
Emma Roth, The Verge, “ChatGPT is bringing back 4o as an option because people missed it”, <https://www.theverge.com/news/756980/openai-chatgpt-users-mourn-gpt-5-4o>, 2025
Marc Zao-Sanders, Filtered, “2025 Top-100 Gen AI Use Case Report”, <https://learn.filtered.com/thoughts/top-100-gen-ai-use-cases-updated-2025>, 2025
Matt O’Brien, AP News, “Trump’s order to block ‘woke’ AI in government encourages tech giants to censor their chatbots”, <https://apnews.com/article/trump-woke-ai-executive-order-bias-f8bc08745c1bf178f8973ac704299bf4>, 2025
The White House, “Preventing woke AI in the federal government”, <https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/>, 2025
Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, Dan Jurafsky, “Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence”, <https://www.arxiv.org/pdf/2510.01395>, 2025
Writing is thinking
John Warner, More than words: How to think About Writing in the Age of AI, 2025
Environment
Energy sink
Michal Aibin, Baeldung, “Energy Consumption of ChatGPT: Responses”, <https://www.baeldung.com/cs/chatgpt-large-language-models-power-consumption>, 2024
Kevin Okemwa, Windows Central, “OpenAI’s GPT-5 is a powerful but energy-hungry model compared to its predecessors — reportedly consuming enough electricity to power 1.5 million US households daily,” <https://www.windowscentral.com/artificial-intelligence/openai-chatgpt/gpt-5-is-powerful-but-hungry-1-5-million-us-households-energy>, 2025
Joe Wilkins, Futurism, “Google is Building Three New Nuclear Plants for Its Extremely Power-Hungry AI”, <https://futurism.com/google-nuclear-power-centers>, 2025
Yafah Edelman, Jean-Stanislas Denain, Jaime Sevilla, Anson Ho, Epoch AI, “Why GPT-5 used less training compute than GPT-4.5 (but GPT-6 probably won’t)”, <https://epoch.ai/gradient-updates/why-gpt5-used-less-training-compute-than-gpt45-but-gpt6-probably-wont>, 2025
Mary Cunningham, CBS News, “The AI revolution is likely to drive up your electricity bill. Here’s why.”<https://www.cbsnews.com/news/artificial-intelligene-ai-data-centers-electricity-bill-energy-costs/>, 2025
Other environmental and communal impacts
Miguel Yañez-Barnuevo (Environmental and Energy Study Institute), “Data Centers and Water Consumption”, <https://www.eesi.org/articles/view/data-centers-and-water-consumption>, 2025
Microsoft Power BI, “How hungry is AI?” <https://app.powerbi.com/view?r=eyJrIjoiZjVmOTI0MmMtY2U2Mi00ZTE2LTk2MGYtY2ZjNDMzODZkMjlmIiwidCI6IjQyNmQyYThkLTljY2QtNDI1NS04OTNkLTA2ODZhMzJjMTY4ZCIsImMiOjF9>, 2025
Investment Diversion
Njenga Kariuki, Stanford University, “2025 AI Index Report: economy”, <https://hai.stanford.edu/ai-index/2025-ai-index-report/economy>, 2025
Sharon Goldman, Fortune, “Sam Altman’s AI paradox: Warning of a bubble while raising trillions”, <https://fortune.com/2025/08/19/sam-altmans-open-ai-paradox-warning-of-ai-bubble-while-raising-trillions/>, 2025
Frank Landymore, Futurism, <https://futurism.com/artificial-intelligence/sam-altman-warns-ai-industry-implosion>, 2025
World Food Program, “How Much Would It Cost to End World Hunger?”, <https://www.wfpusa.org/news/how-much-would-it-cost-to-end-world-hunger/>, 2022
Sophie Hares, Reuters, “The cost of clean water: $150 billion a year, says World Bank”, <https://www.reuters.com/article/world/the-cost-of-clean-water-150-billion-a-year-says-world-bank-idUSKCN1B812C/>, 2017
Ben Harac, Vision of Earth, “How much would it cost to end extreme poverty in the world?”, <https://www.visionofearth.org/economics/ending-poverty/how-much-would-it-cost-to-end-extreme-poverty-in-the-world/>, 2011
The Borgen Project, “How Much Does it Cost to End Poverty?”, <https://borgenproject.org/how-much-does-it-cost-to-end-poverty/>, 2017
Chris Westfall, Forbes, “Meta Opens Floodgates For AI-Generated Accounts On Facebook, Instagram”, <https://www.forbes.com/sites/chriswestfall/2025/01/02/meta-opens-floodgates-on-ai-generated-accounts-on-facebook-instagram/>, 2025
World Data Lab, “World Poverty Clock”, <https://worldpoverty.io/>, 2025
World Health Organization, “Hunger numbers stubbornly high for three consecutive years as global crises deepen: UN report”, <https://www.who.int/news/item/24-07-2024-hunger-numbers-stubbornly-high-for-three-consecutive-years-as-global-crises-deepen–un-report>, 2024
Mental wellbeing
Johann Hari, Lost Connections: Why You’re Depressed and How to Find Hope, 2019
Sarah Johnson, The Guardian, “WHO declares loneliness a ‘global public health concern’”, <https://www.theguardian.com/global-development/2023/nov/16/who-declares-loneliness-a-global-public-health-concern>, 2023
Steven E. Hyler, Psychiatric Times, “The Trial of ChatGPT: What Psychiatrists Need to Know About AI, Suicide, and the Law”, <https://www.psychiatrictimes.com/view/the-trial-of-chatgpt-what-psychiatrists-need-to-know-about-ai-suicide-and-the-law>, 2025
Today, “Parents Sue OpenAI Alleging ChatGPT Assisted Son’s Suicide”, <youtube.com/watch?v=1oAzkXuvkrg>, 2025
Alok Kanojia (HealthyGamerGG), “Loneliness – The Silent Struggle We All Feel”, <youtube.com/watch?v=dWS3A2EAwTk>, 2023
Salvador Rodriguez, Lora Kolodny, CNBC, “OpenAI says it plans ChatGPT changes after lawsuit blamed chatbot for teen’s suicide”, <https://www.cnbc.com/2025/08/26/openai-plans-chatgpt-changes-after-suicides-lawsuit.html>, 2025
Joining the Dots Podcast, “Navigating AI Safely: Protecting Our Children with Tara Steele”, <youtube.com/watch?v=VepVpBzeq3o&t=174s>, 2025







