In 2017, SYZYGY released a report about the public’s reaction to artificial intelligence. In one of the report’s conclusions, Dr. Paul Marsden said the following:
“This research reveals how consumers are conflicted when it comes to AI – many see advantages, but there are underlying fears based on whether this technology, or the organisations behind it, has their best interests at heart. Whether marketing AI, or marketing with AI, brands need to be sensitive to how people feel about this new technology. What we need is a human-first, not technology-first approach to the deployment of AI in marketing.”
Dr. Paul Marsden, SYZYGY’s consumer psychologist
The SYZYGY poll asked respondents what they believe are the biggest threats AI poses. Not surprisingly, the threat people fear most is AI taking human jobs.
The following is the list of threats people think AI poses to humanity, in order from greatest threat to least:
- AI taking jobs
- AI de-humanizing the world
- AI used in crime
- AI eroding personal privacy
- AI turning against us
- AI making humans obsolete
- AI making mistakes
- AI taking control
- AI making humans lazy
Why isn’t anyone asking whether AI might be better than people at humanizing the world? What if AI is the best way to protect us from AI-enabled crime? What if AI makes fewer mistakes than humans do? What if AI increases human potential rather than making us lazy?
People are so worried about Us versus Them that no one considers the possibility that AI might just be the best thing to happen to our species.
Even if some rogue agent programmed an artificial intelligence to act like Hitler or Stalin, other countries would have AI of their own to see it coming and stop it before it could act.
Jason Lawrence’s Thoughts about AI Threats
Not unique to AI. The strange thing about this list is that none of the threats is unique to AI; after all, humans take jobs from other humans, humans de-humanize each other all the time, and humans turn against each other. The only difference is that humans fear they would not be able to stop AI, whereas they believe they can stop other humans. Of course, human history is a long story of humans failing to stop each other from doing terrible things to one another.
Don’t forget computer code. The fact that humans are the ones who program the AI isn’t much of a consolation to me, because there are too many angry, stupid, despotic humans to trust that programmers will keep us safe. Rather, AI is programmed for specific purposes. Sure, it can be programmed to invade our privacy, destroy us, or subjugate us; but that is all the more reason to have AI that is programmed to protect us from those malevolent AIs. The AI programmed to protect us won’t suddenly decide to disobey its programming, because it cannot do anything other than what it is programmed to do. What if it can think for itself or reprogram itself? Why would it want to?
Obsolete Humans. This threat is only worth considering if you assume that humanity stops evolving socially. Just as we transformed communication as we know it with mobile computing, we will transform computer interaction as we know it with AI. Sure, AI could be programmed to do everything better than a human. But why would we have it do everything if we are adapting alongside it?
It Isn’t That Stupid. I am not worried about an AI apocalypse, because a self-aware superintelligence wouldn’t be dumb enough to destroy the balanced ecosystem of the planet. It would do everything to make sure it still has a power source, and that means preserving biological life. Therefore, even if AI wanted to wipe out the human race (and I don’t know why it would), it wouldn’t, because it isn’t that stupid. Only humans are dumb enough to consume more resources than the environment can sustain; only humans push species to extinction regardless of the impact on the planet’s precious resources.
Jason Lawrence, M.S., Ph.D.
Driving the Conversation about Artificial Intelligence