Part 1: Understanding AI
There’s no shortage of discussion around the ethical qualms AI has forced us to confront, both as a society and as individuals. Many of us find ourselves asking whether it’s okay to engage with any of these AI tools that have infiltrated our daily lives. On one hand, they’re effectively being shoved down our throats, and we’re simultaneously told we’ll have to get on board if we want to stay competitive in this emerging AI-driven economy. On the other hand, we’re all aware that AI comes with many negatives: the stolen content, the environmental destruction, the job loss, etc. (we’ll get into it all) – and a lot of us fear the world AI developers are seeking to realize. Between this rock and hard place, is there a middle ground? I wanted to explore the full picture and decide for myself: does ethical AI use exist? If so, what does it look like?
First, we have to understand the confusing and oft-misunderstood – but essential – foundational terms.

Understanding AI
Relevant Terms
Algorithm – encoded rules that tell a computer how to complete a task
Neural networks – models loosely inspired by the human brain: layers of interconnected nodes that pass signals to one another in order to learn patterns from inputs; often called the ‘black box’ of AI training because their inner workings are hard to interpret
Machine learning (ML) – method of using algorithms to allow models to learn and improve from experience with examples or historical data, without being explicitly programmed; method of training AI to learn and make predictions on new data
Deep learning – machine learning that works by feeding large amounts of data into multi-layered neural networks, which learn the way children do, by recognizing patterns; no one knows exactly how the learning happens or what patterns the system is picking up on
Natural language processing (NLP) – the field of AI concerned with teaching computers to understand, interpret, and generate human language
Large language model (LLM) – a specific application of generative AI designed for natural language processing tasks; AI that learns from extensive datasets of text so that it can generate coherent text-based responses
Training vs serving – training typically happens once: it is when the AI learns from a historical data set; serving is when users interact with the AI tool after it has been developed and trained
Data centers – the facilities where training and serving of AI models occur; buildings full of computers onto which the models are loaded. When you send a message to ChatGPT, your request travels to a data center, where the trained model generates a response.
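To make the training-versus-serving distinction concrete, here is a toy sketch (purely illustrative, nothing like a real LLM): training learns a pattern once from historical examples, and serving reuses that learned pattern to answer new requests.

```python
# Toy illustration of "training vs serving" (not a real AI system).

def train(examples):
    """Training: learn a single multiplier from (input, output) example pairs."""
    # Least-squares estimate of the weight w in: output ≈ w * input.
    numerator = sum(x * y for x, y in examples)
    denominator = sum(x * x for x, _ in examples)
    return numerator / denominator

def serve(weight, new_input):
    """Serving: use the already-learned weight to answer a new request."""
    return weight * new_input

# Training happens once, on historical data...
w = train([(1, 2), (2, 4), (3, 6)])  # the pattern here is "output = 2 * input"
# ...serving happens every time a user sends a request.
print(serve(w, 10))  # prints 20.0
```

Real generative AI works at a vastly larger scale, of course – billions of learned weights instead of one – but the split is the same: an expensive one-time learning phase, then ongoing request-handling in data centers.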
Types of AI
AI – intelligent computer systems that can mimic human behavior
Narrow / specialized / task-specific AI – AI that is limited to a specific purpose or skill; models that classify data rather than generate new samples; trained to identify patterns and features and make predictions based on them (such as the personalized recommendations on Netflix, YouTube, and Spotify)
Generative AI – AI built on deep learning and neural networks, a kind of machine learning (ML); models that can generate new data based on the training data they have seen; these tools are capable of generating text, images, music, and video
AGI, or artificial general intelligence – has not been achieved; AI considered to have broadly applicable ‘human-level’ intelligence, the ability to generate new knowledge beyond what humans have produced, and the ability to operate independently of human intervention; the nebulous but purported goal of Big Tech AI companies
Super AI, or superintelligence – the hypothesized peak of AI development; the point at which AI surpasses human intelligence and capabilities to the point that it may threaten human existence
So, generative AI is one of the two main types of AI in existence today, narrow AI being the other. Narrow AI runs behind the scenes of many of our daily tasks when we interact with machines that have it encoded, such as digital voice assistants like Siri, search engines, speech recognition, and autonomous vehicles. The advancement of narrow AI technology has been relatively gradual, kicking off in the mid-20th century. The gigantic AI gold rush we’re experiencing now, though, is different. That particular beast kicked off with the public launch of ChatGPT in late 2022 (well, technically a few years earlier, with the training and building process). As consumers, the AI products being aggressively marketed to us for direct use are generative AI tools. We know them as the AI chatbots: OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, xAI’s Grok, and Meta AI. (There are now generative AI tools that focus on image and video, too.) These generative AI tools are the focus of this AI ethics analysis.
