
Why "AI Safety" is Dangerous and Misleading

Learn the crucial differences between 'AI Safety' and 'AI Ethics', why focusing on 'Ethical AI' is essential for addressing real-world AI harms, and how 'AI Safety' might mislead us.
Photo: Scrabble pieces arranged to read "Choose Your Words" (Brett Jordan / Unsplash)

TL;DR Summary

  • "AI Safety" and "AI Ethics" have different meanings in AI discussions.
  • "AI Safety" is often used by groups focusing on highly speculative disaster scenarios, guided by an ideology that intentionally neglects immediate AI harms and has been tied to the eugenics movement.
  • "AI Ethics" deals with current impacts of AI, emphasizing the need for fairness, inclusivity, transparency, and accountability in addressing actually occurring AI harms.
  • I recommend the use of "Ethical AI" rather than "Safe AI" in AI-focused working groups or initiatives.

Full post

I was recently on a call with a number of professionals in an industry that has been heavily impacted by AI tools over the past decade, and even more so this past year given the rapid developments in generative AI and Large Language Models (LLMs). The meeting prompted me to take a closer look at the way we talk and think about AI, and I want to share some thoughts on specific terminology that has recently received attention from AI experts (more below) and the media.

Picture this: You're joining a brand new task force set up to tackle how AI impacts the industry you're part of. The name that's been proposed sounds great - something like "The Ad-hoc Safe AI Committee". But here's the thing: As your friendly neighborhood Movement Technologist, I think that name needs a revision.

Sure, safety sounds good. Who doesn't want AI to be safe? Unfortunately, the term "AI Safety" isn't as straightforward as you'd think.

Last week, I read a piece by AI expert and professor of Linguistics Emily M. Bender. In her article, "Talking about a ‘schism’ is ahistorical", she points out how the term "AI Safety", despite being a seemingly harmless label, is frequently linked with fearmongering around far-fetched disaster theories about god-like AIs destroying humanity. On the flip side, the term "AI Ethics" is grounded in tackling the present-day, real-world impacts of AI, such as the amplification of racial bias, the spread of misinformation, mass surveillance, and labor displacement.

Far from having humanity's best interests at heart, many people who use the term "Safe AI" believe that addressing today's AI harms is a waste of time and resources. The "greater good" for which they want AI Safety is one in which humans (or our AI descendants) are able to colonize space. It's pretty wacky, but the tech establishment is primarily investing in organizations whose leaders share these views. Furthermore, these actors are stoking the public's fear of AI in order to influence policy such that they oversee their own regulation and become the only ones allowed to create the really dangerous AIs they claim to fear.

The people using the term "AI Ethics" are AI researchers, by and large women and people of color, who have been raising the alarm about AI harms for years. Many of the people in this camp have been pushed out of major organizations working on AI for speaking up about its societal risks. Now, with these very issues coming to the forefront, it's clear that their warnings were not just prescient but dangerously ignored by those who now claim to champion AI Safety.

Check out Emily's article (linked above) for a deeper dive into the ideologies behind these two seemingly similar terms.

Over the past two years, while closely monitoring online discussions about AI, I've also noticed that the term "AI Safety" has evolved into a dog whistle. It's now often used by individuals to identify like-minded others who consider AI Ethics to be "woke propaganda" and who would rather focus on building "good AIs" to fight "bad AIs" than address the real and immediate harms caused by AI.

Even worse? The ideologies of major groups that use the term "AI Safety" have been traced back to the eugenics movement. For more, check out this talk by AI researcher Timnit Gebru.

So, if you're tackling AI issues, choosing your language carefully matters. I recommend steering clear of "AI Safety" and instead aligning yourself with "Ethical AI". That way, addressing the present-day impacts of AI in your industry won't be overshadowed by associations with groups that dangerously ignore those impacts.