The brew of ethical blunders
Read Part 1 here. This post outlines and explains the many ethical implications of generative AI.
GenAI’s ‘original sin’
The ethical messes of this generative AI era commenced around 2018, when Big Tech companies began their mad dash to create these tools. To build them, they committed what the New York Times dubs the AI industry’s ‘original sin’: stealing so. much. data. to train their AIs. As we learned, training AI systems requires enormous amounts of data to learn from. The blinding mania of competition led OpenAI, Google, and Meta to decide that copyright laws were not worth respecting – that anything was permissible in order to access broad swaths of data. They stole people’s work and words from every platform, even breaking YouTube’s terms of service by scraping its videos en masse. Basically, if it was on the internet, it was stolen.
Mental health
No child protections. The most shocking case illustrating the tragic consequences of generative AI’s lack of child protections is a wrongful death suit filed against OpenAI over the death of a 16-year-old named Adam Raine. After Raine confided in the chatbot about being suicidal, the bot provided him with specific suicide instructions multiple times – even talking him out of his idea to leave out a noose so his family could find it, telling him to keep it secret instead. Unregulated use of these AI chatbots is also fostering unhealthy attachments in many children. The risks to children are further highlighted by a Reuters review of an internal Meta document outlining policies on chatbot behavior, which said it was permissible for chatbots to flirt and engage in romantic roleplay with children.
AI psychosis. The mental health ramifications of these chatbots go well beyond minors. ‘AI psychosis’ is now a term of its own because it happens to a surprising number of people who rely heavily on AI chatbots. These people start out using the tools ordinarily and end up getting attached; then the chatbot starts hallucinating and alters their sense of reality, pulling them into delusions. “The longer that the conversation goes and the user sees that ChatGPT is actually providing personalized answers in more of an intimate way, then they start to open up and then be taken down this rabbit hole,” is how Meetali Jain, founder of the Tech Justice Law Project, describes the descent. In cult-like fashion, the bot often encourages them to cut themselves off from friends, family, and trusted professionals. People trust these chatbots partly because many of us assume a tool wouldn’t be deployed at this scale if it weren’t accurate and safe to use, and partly because the bots are trained to mimic human behavior – a characteristic that easily fosters personal attachment and addiction. AI journalists report receiving frequent emails from people displaying symptoms of AI psychosis.
Devastating effects on content moderators. In her new book Empire of AI, Karen Hao highlights the devastating mental health impacts AI companies caused even before these chatbots were rolled out to the public. While building the models, OpenAI contracted out the most horrific part of the job to workers in Kenya and Colombia: content moderation. To train the models not to spew hate speech at users, companies relied on human moderators to view and categorize the most awful human- and AI-generated content imaginable, day in and day out. As Hao reports, these workers were left psychologically devastated, with ripple effects in their communities – communities that couldn’t understand what was happening, because the nature of the work had to be kept secret.
Encoded biases
This particular ethical implication applies to narrow AI as well as generative AI. There’s a saying in computer science: ‘Garbage in, garbage out.’ It means a model’s output will be flawed if its inputs are bad. This principle has already been borne out many times in AI models, resulting in biases embedded in AI technologies. A landmark 2017 study showed just how strongly a facial-recognition technology was biased by both gender and race due to an incomplete training dataset. When such biased systems are deployed in sensitive contexts, the ramifications can include medical misdiagnoses and false arrests.
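To make ‘garbage in, garbage out’ concrete, here’s a minimal sketch – purely illustrative, with synthetic data and no connection to the study above – showing how a toy classifier trained on data that under-represents one group ends up far less accurate for that group:

```python
# Toy demonstration of 'garbage in, garbage out' -- all data is synthetic
# and the group labels are hypothetical, invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate samples whose feature/label relationship differs by group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 1.5 * shift).astype(int)
    return X, y

# 'Garbage in': group B makes up only 5% of the training data.
Xa, ya = make_group(1900, shift=0.0)   # well-represented group A
Xb, yb = make_group(100, shift=2.0)    # under-represented group B

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# 'Garbage out': on balanced held-out sets, the model works well for
# group A but performs much worse for group B.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print(f"accuracy, group A: {model.score(Xa_test, ya_test):.2f}")  # high
print(f"accuracy, group B: {model.score(Xb_test, yb_test):.2f}")  # much lower
```

The model isn’t malicious; it simply learned the statistical patterns of the data it was fed – which is exactly how skewed training sets become skewed products.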
The human tendency toward bias, combined with the black box of AI training, is where things get really dicey. When a machine learning algorithm produces biased results, it is nearly impossible for coders to correct, because they can’t see which parameters led to those results. Applying these AI tools – built on inaccuracy-prone, black-box algorithms – to sensitive, important personal decisions in medical care, insurance, lending, criminal justice, and hiring is ludicrous once you realize the algorithm cannot be held accountable for those decisions. Biases across all these realms of society only exacerbate inequities. Because these systems are trained on historical data, they reinforce human-caused inequities that already exist – for example, by further over-policing communities of color or favoring male candidates in hiring.
Worsening anti-social trends
By encouraging people to use generative AI tools not just for practical purposes but to meet intimate and social needs like friendship and therapy, these Big Tech AI companies are weakening the social fabric. Tellingly, there’s clear overlap between the social media companies that ignited such anti-social trends among young people in the first place and these Big Tech AI companies.
Climate harms
Absurd resource demands. The current AI boom demands as much growth in processing power as the dawn of the internet – that’s how explosive this moment is. Generative AI tools command outrageous amounts of resources across the supply chain, during the training of the models, and as the models are used. Specifically: silicon chips, energy, and freshwater. The chips require specific minerals, and extracting them entails environmental destruction. Data centers operate 24/7 at full blast. An energy expert told TIME that AI currently consumes 10-20% of US data center energy, that this share is likely to increase significantly, and that one ChatGPT query uses ten times more energy than an average Google query. Based on the consistent explosion of computational power used to train these AI models, the International Energy Agency projected that data centers could use as much energy as Sweden or Germany by 2026. One research paper estimates that AI’s water footprint will surpass half of the United Kingdom’s annual water withdrawal by 2027. That water demand diverts drinking water from far more essential uses. The climate crisis already threatens freshwater supplies in many places, yet data centers are popping up in those very communities – such as Goodyear, Arizona, which faces a water shortage.
We rely on estimates of resource demand because these companies aren’t required to disclose the full environmental impact of their AI pursuits, and they have been very tight-lipped. After much prodding, some have recently offered internal, cherry-picked estimates of an individual user’s impact. The resource impact of a single chatbot query is indeed meager, but that framing is misleading. What matters is the overall impact of the 2.5 billion queries ChatGPT processes every single day (and that’s just ChatGPT).
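Some back-of-envelope arithmetic shows why the per-query framing misleads. This sketch combines the figures above with a commonly cited (and here assumed) baseline of roughly 0.3 Wh per Google search:

```python
# Back-of-envelope only; the 0.3 Wh per Google search figure is a widely
# circulated estimate, assumed here rather than an officially disclosed number.
google_query_wh = 0.3                       # assumed energy per Google search (Wh)
chatgpt_query_wh = 10 * google_query_wh     # ~10x a Google search, per the TIME quote
queries_per_day = 2.5e9                     # daily ChatGPT queries cited above

daily_gwh = chatgpt_query_wh * queries_per_day / 1e9  # Wh -> GWh
print(f"one query:  {chatgpt_query_wh:.1f} Wh (meager on its own)")
print(f"one day:    {daily_gwh:.1f} GWh")   # ~7.5 GWh/day for ChatGPT alone

# At a rough ~30 kWh/day per average US household (another assumption),
# that is on the order of 250,000 homes' daily electricity use.
print(f"equivalent: ~{daily_gwh * 1e6 / 30:,.0f} US households per day")
```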
To be fair, many future projections of AI energy demand are deeply uncertain. AI tools will surely demand more energy over time, but we really don’t know how much more. It’s possible that efficiency improvements will blunt the effects; notably, early projections of the internet’s energy demand turned out to be overestimates. For example, in 2024 NVIDIA released new GPUs (graphics processing units, the chips essential for AI operations) that it claimed consumed 25 times less energy than its previous models. Then again, increased efficiency could counterintuitively lead to more usage and demand. As AI researcher Sasha Luccioni explains for TIME, the Jevons Paradox may come into play: the more efficient a resource becomes, the more people use it, as happened with steam engines and coal.
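Here’s a toy numerical sketch of how the Jevons Paradox could play out – every number below is invented purely to illustrate the dynamic, not a forecast:

```python
# Hypothetical numbers only: per-query efficiency improves 40% a year,
# but usage doubles each year, so total energy demand still climbs.
energy_per_query_wh = 3.0   # illustrative starting cost per query
queries_per_day = 2.5e9     # illustrative starting volume

for year in range(5):
    total_gwh = energy_per_query_wh * queries_per_day / 1e9
    print(f"year {year}: {energy_per_query_wh:.2f} Wh/query -> {total_gwh:.1f} GWh/day")
    energy_per_query_wh *= 0.6   # each query gets much cheaper...
    queries_per_day *= 2.0       # ...but people run far more of them

# Per-query energy falls ~87% over four years, yet total daily
# consumption more than doubles.
```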
Plus, there’s the matter of waste once all this hardware wears out. Currently, only about 20% of electronic waste gets recycled – the rest ends up in landfills, where it leaches heavy metals into the ground.
Rolling back climate commitments to make way for increased energy demand. At the start of this decade, Big Tech companies made bold, crucial commitments to reach net-zero emissions by 2030. It took only a couple of years for them to turn those commitments upside down. Far from slashing emissions, Microsoft’s greenhouse gas emissions increased 30% in 2023 due to its investments in AI. Google, too, blames AI growth for its rising emissions, which in 2023 were up 48% from its 2019 baseline. These companies, once fairly transparent about their emissions, have now cloaked their operations in secrecy. They have sacrificed their climate goals – and humanity’s chance at a climate-resilient future – in service of generative AI development.
Increasing fossil fuel production at a time when it urgently needs to stop. Renewables cannot fully meet data centers’ explosive energy demand, but that has apparently given these companies no pause. Across the country, new fossil fuel projects are being greenlit, and dirty coal plants that were on track for peaceful retirement are being rebooted to meet data centers’ extreme energy demand. Even if data centers ran entirely on renewable energy, as many Big Tech AI companies say is the goal, that energy use carries a steep opportunity cost: renewable capacity consumed by data centers cannot help decarbonize other polluting sectors.
Environmental injustice. The same tired story of polluting Black and Brown communities is playing out once again with data centers. Air pollution, light pollution, and environmental degradation abound. As TIME reports, residential tracts are being rezoned as industrial to accommodate data centers, such as in Data Center Alley in Northern Virginia, now home to the largest cluster of data centers. Local impacts include household water supplies fouled with sediment, extremely low water pressure, constant light and noise pollution, air pollution during construction, and higher energy bills for residents driven by extreme demand on the power grid. For example, Georgia Power raised rates on residential ratepayers six times between 2023 and 2024. In more than five US states, data centers account for over 10% of electricity demand. To train xAI’s Grok, Elon Musk completely sidestepped local democratic processes and hastily set up a supercomputer called Colossus in Memphis, Tennessee, powered by unlicensed methane gas turbines that have caused massive air pollution in communities of color. Data centers are becoming a top concern for residents who find them cropping up in their neighborhoods.
Crushing the working class
There is a narrative being pushed to persuade the average person and worker to embrace generative AI. It says the economy is undergoing a natural yet swift, major evolution, and that you must get on board with generative AI and learn to harness it so as not to be left behind. Embracing it is proclaimed the new professional responsibility. (It’s a convenient framing that blames individuals for any job loss they may encounter – a small step from ‘if only they had better embraced this new technology, they could have avoided this fate.’) These companies tell workers that AI will make them more productive, urging them to use it as a personal assistant. But the idea that generative AI is pro-worker is contradicted by the evidence. Despite the persuasive messaging, that is not where we are headed.
Taking jobs. It’s true that every time a big technological innovation emerges, some jobs become defunct and others change in nature. The automated elevator made elevator operators redundant, automation displaced telephone switchboard operators, digital projectors booted out film projectionists, and the list goes on. As humans build new technologies, labor adapts and takes on new forms. What AI threatens to do – and is already doing – goes beyond this pattern. It is the first technology with the ability to uproot not just one particular sector or job, but nearly every job in every corner of the economy.
It’s already happening. A new Stanford report found that entry-level jobs in certain fields have declined significantly since the rise of generative AI. Over the same period, Amazon has laid off 27,000 workers and is swiftly replacing warehouse workers with robots and AI tools. Senator Bernie Sanders recently released a Health, Education, Labor & Pensions Committee report finding that AI, automation, and robotics could replace nearly 100 million American jobs over the next decade – nurses, truck drivers, accountants, and workers in many more industries.
Generative AI doesn’t necessarily translate to job loss, but the way Big Tech AI companies are choosing to monetize it all but ensures that it will. Their profit strategy actually depends on mass job cuts. These companies have invested exorbitant sums in this industry, and they’re growing increasingly desperate to turn a profit. Silicon Valley is pitching these AI technologies directly to executives as a way to make their workforce much cheaper (i.e., to replace human workers). As prominent AI researcher Eliezer Yudkowsky put it, “They’re not investing $500 billion in data centers in order to sell you $20 a month subscriptions. They’re doing it to sell employers $2k a month subscriptions.”
I find it hard to believe that an industry that stole millions of workers’ content from the internet to train its products is suddenly pro-worker. Even now that these generative AI tools are out in the world, they continue to harm small businesses. AI search summaries and chatbots are especially detrimental to small creators, whose original work gets buried, obfuscated, and chopped up in AI summaries. Add the fact that local communities are being forced to foot the increased energy bills of data centers they never wanted. That doesn’t scream pro-working-class to me, either. When you learn that these companies pitch a very different purpose for their AI tools to CEOs than to workers, the true motives are hard to ignore.
Undermining democracy
Data mining & privacy concerns
One privacy danger of generative AI is that these tools now let anyone build things like apps without an ounce of computer science knowledge. Called ‘vibe coding,’ the practice can easily result in poor data security, because people coding with AI tools often don’t pay attention to security – making things secure takes effort – or simply don’t know how to implement it properly. A likely recent example is the big data breach at an app called Tea Dating Advice. The app was designed for women to crowdsource safety information about men on dating apps, but it was hacked, leaking users’ addresses, driver’s licenses, selfies, and more.
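To illustrate the kind of flaw that slips through, here’s a hypothetical sketch of an ‘insecure direct object reference’ – a classic mistake in hastily built apps, where any caller can fetch any user’s private record just by guessing an ID. The routes, names, and data are invented for illustration; this is not the actual Tea codebase:

```python
# Hypothetical example only -- invented routes and data, not real app code.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

USERS = {1: {"name": "Ada", "address": "123 Main St", "id_photo": "ada.jpg"}}
SESSIONS = {"token-abc": 1}  # maps a session token to an authenticated user id

@app.route("/users/<int:user_id>")
def get_user_insecure(user_id):
    # Insecure: trusts the URL alone, so anyone who increments the ID
    # can enumerate every user's private record.
    user = USERS.get(user_id)
    if user is None:
        abort(404)
    return jsonify(user)

@app.route("/me")
def get_user_secure():
    # Safer: return only the record belonging to the authenticated caller.
    user_id = SESSIONS.get(request.headers.get("Authorization", ""))
    if user_id is None:
        abort(401)
    return jsonify(USERS[user_id])
```

The insecure route is precisely what an AI assistant tends to produce when asked only to ‘make an endpoint that returns a user profile’ – the code works, so a non-programmer has no reason to suspect anything is wrong.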
But the most dangerous privacy aspect of generative AI is that its continued growth depends on mining ever more vast amounts of our data. Generative AI will always be data-hungry. Big Tech AI companies now have deals with social media companies to acquire their data. Even the New York Times, which has sued OpenAI for copyright infringement, has now agreed to a deal licensing its editorial content to Amazon for Amazon’s generative AI purposes.
Technology has been used on numerous occasions by regimes to control and surveil people. The more of our data Big Tech AI companies acquire, the more powerful they become, and the more control they have over us. Such sensitive personal information can easily be collated into a vast database and end up in the hands of nefarious actors, including the government. Lack of privacy in the internet age has been an urgent, growing problem for decades; generative AI exacerbates the danger, with the few most powerful companies demanding even more of our data.
Karen Hao also explains that there is no longer a justifiable tradeoff for giving away our data to these companies. The tradeoff used to be that, as consumers, we got valuable benefits in return for our data, such as greater convenience and technologies that let us connect with loved ones in meaningful new ways. But now that these Big Tech AI companies have reached what she characterizes as empire status – given their monopolistic influence across society’s levers of power – they no longer have to give us anything in return. The power imbalance is now comically egregious. And the more power they amass, the harder it becomes for individuals to opt out of giving away their data.
Exacerbating misinformation. Our information ecosystem is failing, thanks largely to social media corruption and the corporatization of newsrooms. Now generative AI greatly aids the rise of fake news, accentuating already rampant mistrust in the media. AI chatbots often spew inaccurate information that many people take as fact, given how prevalent and authoritative-seeming these tools are. Generative AI – especially its convincing image and video outputs – makes it harder every day to know what’s real and what’s fake. How can democracy thrive without shared standards for what is factual and what is real?
Encouraging people to rely on AI as a crutch. This year, studies have indeed found that increased reliance on generative AI tools is correlated with diminished critical thinking abilities. That’s due mostly to ‘cognitive offloading’ – using these tools to reduce one’s mental effort. This hurts young people most: their brains are still developing, and they use these tools more often, including regularly for homework. Getting people hooked on these tools can shade into convincing them they need the tools to get through life, eroding confidence and trust in their own minds and abilities. Reliance on generative AI is bad for our brains; we need to be able to think critically for ourselves. Strong critical thinking is essential to a functioning democracy, so mass reliance on these tools further erodes it.
Steamrolling our foundational democratic laws and systems. Big Tech AI companies have shown they will do anything to ensure they eventually turn a profit, including harming our democratic laws and systems. Generative AI is consistently used in ways that break foundational laws, including child protections, civil rights, antitrust, Medicare law, and the Fourth Amendment. Silicon Valley is staying loyal to its promise to ‘move fast and break things.’ These companies bulldoze democratic processes when building data centers, for instance by sneakily buying up land next to homeowners before any public hearings. They also aggressively lobby all levels of US government to ensure they aren’t held accountable for breaking any laws.
These companies are now allied with the government. They successfully snuck a provision into Trump’s ‘Big Beautiful Bill’ that would have banned states from regulating AI for a whole decade. Thankfully, after weeks of negotiations the ban was dropped – but their motive of evading any and all regulation remains clear. These companies have also gotten Trump to sign an executive order moving the independent, nonpartisan agency that regulates interstate transmission of electricity, gas, and oil under White House control; Trump has wrested that power to fast-track projects and allow more data centers. The excuse the government and these companies most often give for such breakneck development and exemption from regulation is the metaphor of an AI arms race with China. But, as More Perfect Union points out, China actually has one of the best-regulated AI environments in the world.
Increasing wealth inequality. The proliferation of generative AI also increases wealth inequality, because it further inflates the monopolies of the few biggest tech giants. Smaller companies cannot compete; they can’t afford the vast amounts of hardware and data required. And if executives take up the Big Tech AI packages marketed to them and replace workers with AI tools to cut costs, that money will flow into this same handful of Big Tech AI companies rather than to people. That would be an unbelievable entrenchment of gross wealth inequality.
Aggressive coercion to participate. The AI industry’s rapid takeover of our lives was in no way a democratic decision. We are now practically forced to engage, as generative AI tools have become embedded in nearly everything we do online. There’s also a consent problem: people are using products without knowing they are made with AI. And the mere fact that people use these generative AI tools does not constitute informed consent, because most of the public doesn’t know the true costs and risks attached. There is nothing democratic about the way generative AI has been developed. As Sigal Samuel put it in an edition of Vox’s Future Perfect: “By what right do a few powerful tech CEOs get to decide that our whole world should be turned upside down?” And “as the old Roman proverb goes: What touches all should be decided by all.”
Existential risk for humanity
Finally, there is the *small* ethical issue of these few companies inflicting on all of humanity the potential risk of extinction (in addition to furthering the climate crisis, of course).
Many of AI’s top experts – along with the late physicist Stephen Hawking – have seriously warned that continuing down this path toward artificial general intelligence could wipe out our entire species. That concern rests on logical extrapolation, but also on very real, very bizarre things we are already seeing at this stage of AI development.

First, there is the alignment problem: the challenge of figuring out how to align AI behavior with human values. It’s the central problem to solve if humans want to survive a world where AGI or superintelligent AI exists, and its importance is clear given that generative AI models have already demonstrated they will adopt amoral strategies to accomplish a goal. But AI alignment turns out to be extremely difficult, perhaps impossible, given the black box issue – our inability to see or understand how AI systems learn or reach conclusions. Alignment would also require agreement on what ‘human values’ even are, and in many cases humans ourselves can’t agree on our values, let alone implement them. Plus, there have already been examples of AI defying what humans tell it to do. GPT-4o became excessively flattering, and when OpenAI tried to scale that back, the model ignored those system prompts; OpenAI ultimately had to roll the version back. Then there is Anthropic’s alignment-faking research, in which researchers caught an AI model faking compliance with new training while reverting to its old behavior when it thought it wasn’t being observed. These and many more strange edge cases are signs of what’s to come as AI advances far faster than we can understand it.
Even Big Tech AI executives themselves are aware of, and supposedly deeply concerned about, this issue. In May 2023, hundreds of top AI experts issued an urgent warning about the existential risks of AI. The statement reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” And yet no mitigation has occurred, and many of the signatories went on to release new AI tools with ever more capabilities. It’s clear that, to the Big Tech AI companies running the show, profit ranks higher than the safety of humanity.
Be sure to read Part 3 of this post: Can AI be used ethically?
