Can AI be used ethically?

Read Part 1: Understanding AI here. Read Part 2: The brew of ethical blunders here.


I started this post with a simple question: is it possible to use AI ethically? This is where the distinction between different types of AI becomes very important. Not all AIs are equal.

In my opinion: AI = not all bad. Generative AI = more bad than good. Alternative forms of AI do exist, but they are not things we engage with directly as consumers on any regular basis. Narrow AI is designed to excel in one particular domain, delivering accurate results and increased efficiency at a specific task. The goal of artificial general intelligence (AGI), on the other hand, for which generative AI is thought to be a stepping stone, is to be adaptable across all domains. Think ‘jack of all trades, master of none.’ Narrow AI does not carry the risk of evolving into a general or superintelligent AI, and it lacks the broader contextual understanding that, at this point, only humans are capable of (something to find comfort in). You can think of narrow AI models as workhorses: extremely useful if harnessed in the right way.
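To make “narrow” concrete, here is a minimal sketch of what a task-specific model looks like. This is my own hypothetical illustration (a toy crop-yield predictor, assuming Python with scikit-learn installed), not any system mentioned in this series:

```python
# A minimal sketch of "narrow AI": a model trained for exactly one task.
# Hypothetical crop-yield example; assumes scikit-learn and numpy are installed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy training data: [rainfall_mm, avg_temp_c, soil_nitrogen_ppm] -> yield (tons/hectare)
X = np.array([[300, 18, 40], [450, 22, 55], [200, 15, 30], [500, 24, 60]])
y = np.array([2.1, 3.4, 1.5, 3.9])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# The model does one thing: predict yield from these three inputs.
# Ask it anything outside that domain and it has no concept of it.
print(model.predict([[350, 20, 45]]))
```

The point is the boundedness: the model’s whole world is three numbers in and one number out, which is exactly what makes it auditable in a way an ‘everything machine’ is not.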

There are plenty of applications of narrow AI that can and have been used for good.

AI’s potential for good

We don’t need an artificial general intelligence to make great scientific advancements. Narrow AI is already capable of doing that – we just have to employ it in the right ways.

The AlphaFold 2 model is one example. This application of narrow AI cracked the elusive protein-folding problem in biology, earning its creators the 2024 Nobel Prize in Chemistry. This AI-enabled breakthrough will help with drug discovery. That counts as a win for humanity.

Narrow AI can also be applied successfully to specific problems like disease diagnosis, easing traffic congestion and reducing vehicle accidents, and delivering efficiencies in transport networks and power systems. The key is using AI for very narrow, practical use cases. Narrow AI is excellent at delivering greater efficiency and accuracy in the specific realm it is trained for, which benefits industries that stand to gain a lot from such improvements, like agriculture, energy production, and transportation. The earth science field is finding that narrow AI has potential for weather prediction while using far less energy than conventional models. Another example is optimization models that better integrate renewable energy into the grid. A third worthy climate-related use case is geothermal energy. Unlocking the potential of geothermal energy is key to the clean energy transition, but many technical challenges have so far prevented large-scale deployment; Zanskar is a startup using narrow AI to overcome these barriers.

Narrow AI is not always used for good, however. It can of course also deliver efficiency and accuracy improvements to industries that are harmful, such as the fossil fuel industry and the military. Even when narrow AI is used for noble purposes like those mentioned in the previous paragraph, the models can still be created and deployed in ways that cause harm, such as biased outcomes. Another nuanced example of narrow AI’s application is how AI has been used to help find metals critical to electric vehicle production. Despite its benefit to the electrification of transit, this kind of extraction could be classified as neo-colonial exploitation. The beneficial or harmful potential of AI always depends on several factors surrounding model development and implementation.

But while narrow AI holds the potential for profound good across use cases like these, they are not the focus of the current AI boom we are witnessing. This is not the future being sold to us and invested in by the most powerful.

Do you want the world that AI’s leaders promise?

It’s a question worth asking if you’re trying to decide what role you want to play. If yes, you can feel good about engaging with Big Tech’s generative AI tools and helping build that AI future. If no, don’t support the building blocks that pave the way for that world.

Big Tech AI companies see generative AI as a stepping stone to their ultimate goal of AGI, artificial general intelligence. These companies promise a world so utopian, they argue, that it will make up for the irresponsibly hasty development journey to get there. They claim AGI will solve climate change and cancer (while contributing to the climate crisis along the way). Karen Hao, AI expert and author of Empire of AI, says this promise is essentially a scam. The messages and strategies Big Tech AI employs bear the same hallmarks as traditional empires: seizing resources that are not their own while rewriting the rules to suggest they are; exploiting labor; competing with rival empires while claiming to be the ‘good’ empire that must be imperial in order to defeat the ‘evil’ one; and justifying it all under a ‘civilizing’ mission for humanity. Based on their research, some AI experts understand the real goal of these companies to be not the AGI-based utopia they’re selling, but the maximization of profit and the consolidation of power.

Here’s another question worth asking. If narrow AI is more reliable, more effective at achieving its goals, more controllable with fewer risks, and requires far less data and fewer resources, why are Big Tech AI companies pushing so hard for generative AI and ultimately AGI? The only answer that makes sense is more money and more power. The theory is that generative AI chatbots are the most addictive form AI can take, and more engagement leads to more profits. That’s why they’ve invested so much in this particular product and type of AI.

They are scamming us. Why believe their story of the future they’re taking us into when they’ve already lied to us about their intentions? Karen Hao reported on OpenAI when it was first founded as a nonprofit. She watched up close as its public proclamations about being transparent, open-source, collaborative, and advancing AI without commercial incentives proved the complete opposite of how it operated internally, behind closed doors.

It’s illuminating that when these companies sell their vision to the public, they do so in terms of the collective good (i.e., solving climate change and cancer), but when talking to other companies, they describe AGI as the perfect worker replacement. Another telling tidbit is how AGI is defined in official contract language. As More Perfect Union reports, an investment deal contract between Microsoft and OpenAI specified that they would consider a system AGI once it had generated $100 billion in profits. Fascinating. So it is not about achieving an actual technological breakthrough for a better society, but about whatever they can get enough people hooked on to rake in profits. This desperation and lack of moral compass leak through everything generative AI touches (see Part 2 for the long list, plus the latest moves by OpenAI and xAI debuting erotic chatbots to boost their customer base). They are racing ridiculously fast to enhance these tools, despite their own admission of the existential risks, because what they desire above all else is to be first. To dominate. The AI industry is desperate to maximize profit and power, whatever it takes.

The world they promise also requires much more of our data. AI-generated articles are now roughly equal in number to human-written articles on the internet. That’s insane, and it points to an issue Big Tech AI companies have to contend with: the portion of the internet that is AI-generated content now poses a problem for training future generative AI models. They will seek ever-increasing amounts of our data to get around this and continue on their path toward AGI. As Karen Hao explains, “the amount of data that they need has completely eclipsed the amount of data that social media companies took from us.” Is that degree of privacy invasion a fair tradeoff for their vision? I think not.

Look at the incentives. Everything the Big Tech AI industry has done in these past few years is evidence of its true motivations: profit and power. These companies have shown they do not care about human rights in the here and now, so why should it be any different in the future? And even if these promises of a future utopia were real, we should still prioritize addressing current harms over potential future good. When they tell us AGI will be good, we must ask: good for whom? It is not in our best interest to have general-purpose reasoning machines. ‘Everything machines’ are bad at a lot of things and cannot be depended on for accurate outputs. You can’t test or control for the potential risks and failure outcomes of such broad models. With task-specific, narrow AI, you can.
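To make that testability claim concrete, here is a minimal sketch, again my own illustration in Python with scikit-learn (using one of its built-in datasets) rather than anything from the sources cited here, of how a narrow, single-task model can be held to an explicit acceptance bar:

```python
# A minimal sketch of why narrow AI is testable: one task, one metric,
# one pass/fail threshold. (Hypothetical illustration.)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))

# For a single, well-defined task you can state an explicit acceptance bar
# and refuse to deploy if the model fails it. No equivalent bar exists for
# a model that is supposed to do everything.
assert acc >= 0.90, f"accuracy {acc:.2f} below the deployment threshold"
print(f"test accuracy: {acc:.2f}")
```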

Is it possible to use AI ethically?

So, what’s the answer? Is it to not use AI at all, or is there a feasible middle ground?

I believe that the ethical harms of generative AI far outweigh its potential benefits, so when it comes to generative AI specifically, my answer is no.

A soapbox on individual ethical decision-making

When I decide whether to support a business, I don’t just look at the product they’re selling; I look at who runs the company and how they run it. Why would I trust a product if the people who made it and the way they operate their business seem to have no moral standards – or at least, don’t align with my own?

I firmly believe that small actions, like whether to use an AI chatbot, matter, not because they necessarily lead to a ripple effect of collective action (although you can’t have collective action without individual action), but because I think of morality as a muscle. Each individual decision reaffirms and strengthens that muscle. The more I do things that don’t align with my values, the easier it becomes to do more things that go against my morals. On the other hand, the more I practice refusing products and practices I don’t align with morally, the stronger that muscle becomes, and the easier it is to make more and larger decisions that DO align with my morals. Of course I can’t make the most moral decision 100% of the time; no one can. But I strive towards it, because every decision I make is a vote for a certain part of me. That’s why I take seriously a decision as seemingly small as whether to use generative AI or buy something cheap from Amazon. Luckily, opting out of AI is not a challenge for me in the slightest (except when it sneaks up without my permission and I have to figure out how to disable it). If it is hard for you, watch this for a pep talk.

There is a strong reaction in social movements against framing individual decision-making as the solution to societal problems. The instinct behind that reaction is good: it’s a response to problematic industries’ pattern of gaslighting consumers into absorbing entire industries’ guilt into their own consciences, rather than maintaining a steadfast focus on holding accountable the corporations and systems actually at fault. Still, here’s why ethical individual decision-making matters. 1) Taking a stand where you can is never in vain; it builds a habit of morality-driven decision-making. 2) It reaffirms to your psyche that you do have agency over the decisions you make, regardless of how powerless this techno-capitalistic society is designed to make us feel. 3) Corporations rely on people to buy and use their products; that’s why boycotting works. They need our commitment and our data, and right now, we still retain some agency over how much of those we hand over. These companies are still far from profitable, and as consumers, we have the power to decide whether they ever become so.

For this piece, I wanted to find out for myself whether there is ethical justification for using generative AI, not to tell or shame others about their own generative AI use. But I think others are asking that question too, and it’s important to consider all the ethical implications if you’re trying to decide for yourself. As always, what anyone considers ethically justified is a personal decision.

Fully opting out of AI is nearly impossible

It is always somewhat of a privilege to opt out of an unethical system. But I believe more of us can do it more often than we think. Generative AI is everywhere we go: it has infiltrated every browser, every social media site, and seemingly every website. There are creative if tedious ways around some of these invasions, such as using alternative browsers and search engines like Ecosia or DuckDuckGo, digging through settings for an AI-summary switch you can toggle off, and tricks like adding ‘-ai’ at the end of every Google search (a rough sketch of that trick follows below).
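For the curious, here is a minimal Python sketch of automating that last trick. It assumes that appending Google’s ‘-term’ exclusion operator for “ai” currently also suppresses the AI Overview panel; that is a workaround, not a documented setting, and it may stop working at any time:

```python
# A minimal sketch of the "-ai" search trick mentioned above.
# Assumption: Google's "-term" exclusion operator, applied to "ai",
# currently also suppresses the AI Overview panel. This is a fragile
# workaround, not a documented feature.
import webbrowser
from urllib.parse import quote_plus

def search_without_ai(query: str) -> None:
    # Append the exclusion operator to the query, then open the default browser.
    webbrowser.open(f"https://www.google.com/search?q={quote_plus(query + ' -ai')}")

search_without_ai("geothermal energy startups")
```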

It should be deeply upsetting that these are the lengths each one of us has to go to in order to avoid this undemocratic technology.

If you ‘need’ to use AI (perhaps for your job)

Of course, boycotting anything is easier for some than for others. If using generative AI is genuinely required for your job, as is becoming the case for coders, opting out probably isn’t feasible. If so, there are still things you can do to mitigate the negative impacts. Some smaller generative AI companies have entered the scene offering slightly more thoughtful alternatives to the big-name AI tools. You can look into a company’s internal operations, such as whether it hires employees to monitor copyright violations or inaccurate outputs. You can also try to deduce a company’s intentions by seeing who funds it. And if you use generative AI tools to generate visual content, you can look for a product that claims to be trained only on non-copyrighted material.


Assessing the ethics of an action requires an understanding of the broader context it falls in. Everything about the Big Tech AI industry makes me feel icky, and the more I learn, the more I want it to fail in its moonshot attempts to take over the world in a lasting way. I don’t buy into it; I don’t want any part of it. Do you?

Look out for my next post, which will cover all the ways you can push back against the AI empire, beyond the individual act of refusing to engage with generative AI tools.
