The adverse impacts of technology can be managed and mitigated, but only through open discussion of these issues and a refusal to be distracted by misleading narratives. In “The Elements of AI Ethics” chart, I outline harms that have already been reported, many of which were predicted beforehand and have proved persistent. The chart serves as a guide, offering insights and discussion points to help prioritize the use of “smart” tools. It emphasizes that every team deploying AI needs a mitigation strategy for each of these potential harms.
The chart is derived from “The Elements of Digital Ethics” (2021), and the harms identified in that original diagram remain pertinent. The new chart, however, provides a concentrated overview of the evolving types of harm emerging as AI advances across industries, particularly in general-purpose and generative tools. For an understanding of how such a chart can be used in education and project work, refer to the original.
The Elements of AI Ethics – Concise Overviews
The chart comprises six main sections (Organization, Machine, Society, Human, Supervision, and Environment), encompassing a total of 18 elements.
Organization
These factors pertain to the attributes and considerations associated with organizations engaged in the development of AI-driven services and products, spanning both the private and public sectors.
Accountability Projection
I appreciate the term “moral outsourcing,” coined by Dr. Rumman Chowdhury. It denotes how the creators of AI often delegate moral decision-making to machines, as though they themselves had no influence over the outcomes.
By introducing the concept of accountability projection, I aim to underscore that organizations not only tend to avoid moral responsibility but also project the genuine accountability that should accompany the creation of products and services. Drawing from psychology, the term “projection” describes how manufacturers and AI implementers seek to absolve themselves of guilt by attributing their own shortcomings to something else—in this case, the tool itself.
The way artificial intelligence is framed seems to grant manufacturers and product owners a “get-out-of-jail-free card” by shifting blame onto the entity they have created, as if they had no control over the product. It is akin to purchasing a washing machine that consistently shrinks your clothes, only for the manufacturer to evade accountability by asserting that the washing machine has a “mind of its own.” Machines themselves are not inherently unethical, but the creators of machines can behave unethically.
Monoculture
There are nearly 7,000 languages and dialects in the world today, yet only 7% of them are represented in published online content. A striking 98% of the internet’s web pages are published in just 12 languages, more than half of them in English. And although 76% of internet users live in Africa, Asia, the Middle East, Latin America, and the Caribbean, the majority of online content originates elsewhere. On Wikipedia, for instance, more than 80% of articles come from Europe and North America.
Consider what content most AI tools are trained on.
In the original description of this element, I highlighted that viewing the vast array of perspectives and potential impacts of a new creation is challenging through the narrow lens of a small subset of human experience and circumstances. The homogeneity among those granted the ability to create in the digital space means that primarily their mirror-images benefit—with little consideration for the well-being of those who remain unseen in the reflection.
Designing Deception
Numerous AI tools are intentionally crafted to create the illusion of conversing with entities that possess thought, consideration, or even remorse. These deliberate design choices contribute to the perilous perception of these tools as sentient beings. As previously noted, this not only enables manufacturers to shirk responsibility but also adds complexity to trust and relationship dynamics, akin to those traditionally experienced with genuinely sentient beings. The long-term impact on individuals’ emotional lives and well-being, resulting from regular interactions with entities designed to simulate human emotions that they do not actually possess, remains largely unknown.
There are compelling reasons to believe that individuals can form strong emotional bonds with tools that generate unpredictable output. A growing number of people are turning to AI chatbots for solace instead of professional mental health care, and in some instances this behavior is even encouraged.
Concentration of Power
When power resides in the hands of a select few, their own needs and concerns naturally take precedence. With only about three million AI engineers in the world, roughly 0.04% of the global population, the priorities of this small group gain increasing influence.
Sensational Sentience Claims
An escalating number of reports envision a future where AI dominates and humans become obsolete. Often propagated by industry giants, these exaggerated doomsday scenarios capture attention and control the narrative. Their effects include diverting attention from real, present-day harms; fostering the belief that system output is infallible; endorsing the outsourcing of accountability to machines; casting the doomsayers as supposed saviors; and positioning them to guide legislative regulation.
Imagined Harm
The chart makes explicit that the claim “It will kill us all!” lacks evidence, and that perpetuating this unfounded narrative constitutes a harm in itself.
Machine
As decision-making becomes increasingly automated, humans find themselves subject to algorithms, machine learning, and opacity. These elements outline inherent machine qualities that amplify potential harm.
Bias and Prejudice Acceleration
AI, trained on large datasets often containing biases and prejudiced content, has the potential to exacerbate these issues in its output. The reproduction of biases, whether subtle or overt, can occur unnoticed, especially without active monitoring, leading to the dissemination of toxic, misogynistic, and racist content.
Invisible Decision-Making
As algorithms grow in complexity, understanding them becomes more challenging. Proprietary code is often concealed, hindering scrutiny and diminishing comprehension of decision-making processes. This opacity affects autonomy, making it difficult for individuals to make informed choices in their best interest.
Society
The arrival of the digital era has far-reaching implications for society, affecting values, opportunities, fears, and safety. New possibilities, often accompanied by savior-like promises of digital transformation and AI, shift focus toward quantifiable metrics, potentially overshadowing spirituality, compassion, and nuanced sensitivity.
Detachment from Values
The efficiency gained by delegating tasks to machines necessarily involves detaching from, or surrendering, something. While this is framed as relinquishing mundane tasks in exchange for a more enriched life, the trade-off is not always so straightforward. By ceding decision-making to machines, individuals may forfeit precisely the work that involves critical thinking and reflection. The shift in values is evident: decisions once grounded in personal values are handed over to external entities.
Acceleration of Misinformation
An almost unanimous concern centers on the proliferation of misinformation these tools enable at minimal cost to bad actors. The tools generate misinformation with impeccable grammar and language proficiency, whether deployed deliberately by troll farms or unwittingly by educated professionals within government and public services, perpetuating a widespread and often unknowing dissemination of false information. A lingering concern that remains unanswered is what happens when the tools are trained on texts they have generated themselves.
Human
While the majority of AI ethics considerations revolve around the adverse effects on human well-being, certain elements hit closer to home, directly and tangibly impacting human welfare.
Rise of Fraud and Deepfakes
Tools capable of generating convincing content with authentic wording, or of replicating your voice or likeness, introduce new avenues for fraudulent activity that threaten mental health, the economy, reputations, and interpersonal relationships. Individuals may be led to believe false information about others, or fall victim to the illusion that a complete stranger on the phone is a family member.
Expedited Injustice
Due to systemic problems and the bias inherently embedded in these tools, the repercussions for individuals who are already marginalized can be severe. Automated decision-making tools frequently employ scoring systems that, in turn, can affect employment prospects, eligibility for welfare or housing, and outcomes within the judicial system.
Trauma Experienced by Content Moderators
To prevent users from encountering traumatizing content, such as instances of physical violence, self-harm, child abuse, killings, and torture, many of these tools employ content filtering. Implementing this filtering, however, requires people to view the content. The workers responsible for this task often face exploitation and suffer from post-traumatic stress disorder (PTSD) without receiving sufficient care for their well-being. Many are unaware of the challenges they will face when they initially take on these jobs.
Supervision
Data and Privacy Violations
Personal data finds its way into AI tools through several channels. First, because these tools are frequently trained on data scraped from the internet in an unsupervised manner, personal data becomes embedded in the functionality of the tools themselves. This data may have been disclosed online unintentionally, or published for an entirely different purpose, one that rarely aligns with feeding an AI system and its diverse outputs.
Second, everyday users contribute personal data to the tools through careless or thoughtless usage, and in some cases this data is stored and reused by the tools. Third, when numerous data points from different sources are combined within a single tool, they can reveal details about an individual’s life that no single piece of information could reveal on its own.
Avoidance of Regulation
The deployment and use of AI can be seen as an exercise in sidestepping regulation, given the numerous oversteps involved. One potential consequence is the normalization of misusing other people’s content, as oversight becomes challenging unless more stringent constraints are promptly imposed on “data theft,” deployment, and usage. Many companies release tools built upon the works of others without disclosing those foundations, which raises significant moral dilemmas. Furthermore, what personal data lies concealed within the tools, liable to surface in future outputs, is unknown not only to the operators but, to a large extent, to the tool makers themselves.
The existence of different national laws poses additional challenges for cross-border implementation. For instance, OpenAI CEO Sam Altman initially expressed willingness to withdraw from the EU if subjected to proposed regulations, although he later retracted those statements.
Environment
This section addresses harms that arise during the creation of digital tools and are often overlooked because they are considered secondary, unrelated to the primary intent of the creation. Though sometimes deemed out of reach or unavoidable, these elements still require attention and management to mitigate negative impacts.
Neglect of Supply Chain
To bring digital services and solutions to fruition, both software and hardware are essential, and behind their production lies a web of potentially oppressive relationships and worker exploitation. The mining of cobalt, for instance, crucial for lithium batteries, often occurs under slavery-like conditions. To ignore one’s role in the supply chain required to deploy digital services is to neglect accountability for potential harm to others.
Carbon Footprint
The energy expended in sourcing data, training models, powering them, and computing interactions with AI tools is substantial. Although precise figures are rarely disclosed, numerous studies have documented the considerable environmental impact of AI development and the mitigation measures it demands. This prompts a pertinent question: given the environmental costs, should every challenge be pursued with an AI solution? Could consumers eventually come to view the phrase “AI-powered system” the way they view “diesel-powered SUV”?
The Positive Outlook
The reason I contribute to the understanding of the harms that accompany the expansion of AI is rooted in the belief that all of these harms are within human control. By acknowledging and discussing them, we can actively work to avoid or minimize the negative consequences. It is crucial to demand transparency from manufacturers on each of these issues, rather than assuming that a new technology is inherently benevolent simply because it aims to “improve lives” (albeit some lives more than others). Engaging in these conversations is a positive step.
If you share this perspective, I would greatly appreciate your support in spreading my posts and tools within your networks. Additionally, I am always open to hearing your thoughts, perspectives, and experiences.