Artificial Intelligence

Ignoring Parallels Between Nature & AI is Counterproductive


Recently, I was listening to a podcast called Tools and Weapons, hosted by the current President of Microsoft, Brad Smith. In one episode, Smith brought on Kai-Fu Lee, an accomplished writer and innovator who has worked at Apple, Google, and Microsoft, as well as in venture capital.

Around the 20-minute mark, the two turn to discussing the abilities, or lack thereof, of AI. Discussing capabilities in which AI will fall short, Lee remarks, “AI is still quantitative and calculating, and it doesn’t feel anything. If you shut off an AI program, it stops running; it doesn’t feel sad or feel bad. If it beats the world champion in go, it doesn’t feel happy.” Though Lee likely didn’t intend to do so, his comments reflect a broader trend in the way leaders aggressively separate the abilities of humans from those of AI. In the end, however, such comments are counterproductive: they lack the elaboration that could help listeners grasp important concepts concerning AI.

Lee’s claim that “If you shut off an AI program, it stops running; it doesn’t feel sad or feel bad” is painfully obvious. After all, if you shut off the human brain, the same thing happens: a dead person doesn’t physically “feel sad or feel bad” either. (This, of course, ignores any spiritual considerations.) Unfortunately, statements like this are born out of a desire to draw stark contrasts between humanity and AI, but they serve as a useless distraction that detracts from the overall conversation. The fact that humans can turn off doesn’t negate the amazing abilities we have when we are alive and focused on a task. The same goes for AI. Having an off switch doesn’t make AI algorithms inherently weaker; in fact, their ability to “die” makes them even more similar to nature.

Lee went on to remark, “If [AI] beats the world champion in go, it doesn’t feel happy.” While debating the emotional capacity of machines demands much deeper philosophical exploration and discourse, Lee’s statement seems to ignore the basic processes happening within most AI algorithms. For instance, machine learning (ML) programs like AlphaGo (the algorithm that made waves for defeating world champions of go) are trained by rewarding positive outcomes and punishing negative ones, a technique known as reinforcement learning. If a move leads toward a swift victory, the model receives a strong positive reward signal; if a move pushes the program closer to defeat, it receives a negative one.
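That reward-and-punishment loop can be made concrete with a minimal sketch of tabular Q-learning, the textbook form of reinforcement learning. (This is an illustration only; AlphaGo’s actual architecture combines deep neural networks with Monte Carlo tree search. The toy “game,” states, and reward values below are all invented for the example.)

```python
import random

# Toy game: states 0..4 on a line. Reaching state 4 is "victory" (+1),
# sliding back to state 0 is "defeat" (-1). Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    if nxt == GOAL:
        return nxt, 1.0, True    # encouraging signal: a win
    if nxt == 0:
        return nxt, -1.0, True   # discouraging signal: a loss
    return nxt, 0.0, False

# Q-table: the learned long-term value of each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for _ in range(500):                 # training episodes
    s, done = 2, False               # start in the middle
    while not done:
        # Epsilon-greedy: usually exploit the best-known move, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        # Core update: nudge Q toward reward + discounted future value.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) * (not done) - Q[s][a])
        s = s2

# After training, the policy in every interior state should be "move right."
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(1, GOAL)]
print(policy)  # -> [1, 1, 1]
```

The point is that nothing in this loop “feels” anything, yet the numbers in the Q-table behaviorally resemble the carrot-and-stick learning described in the next paragraph.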

Biological systems exhibit the exact same pattern. A child learns not to steal because their mom scolds them, and they learn to crave sugary and savory foods because those foods quickly flood the brain with “feel good” neurotransmitters. As adults, we may choose not to speed because of the negative reinforcement of a $200 ticket, but we put in more hours at work because we want to win that incentive trip to Florida. Just like many AI algorithms, much of our learning comes from simple positive and negative chemical feedback.

The reality is that AI has an incredible number of similarities with nature. Neural networks are modeled on our own neurons, swarms of drones have been trained to mimic flocks of birds and schools of fish, and robots are constantly being designed to behave in a more ‘human’ manner. There are even instances of AI development helping us understand the human brain much better: psychologists have discussed the possibility of humans exhibiting model-free and model-based behavior, concepts usually reserved for ML contexts.
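The “modeled on our own neurons” claim is easy to see in code: an artificial neuron is just a weighted sum of inputs pushed through an activation function, a loose analogue of dendritic inputs and a firing rate. The specific weights and inputs below are made-up numbers for illustration.

```python
import math

# A single artificial neuron: weighted inputs are summed (plus a bias),
# then squashed by a sigmoid into a "firing rate" between 0 and 1.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Strong excitatory input drives the output toward 1 ("firing").
print(round(neuron([1.0, 0.5], [2.0, 1.0], -0.5), 3))  # -> 0.881
```

Stacking many of these units into layers and training their weights (often via the reward-style feedback described earlier, or via backpropagation) is all a neural network is.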

[Image: Hans Moravec’s rising sea analogy. Credit: Tegmark (2018)]

While Kai-Fu Lee probably wasn’t trying to act nefariously, his comments lacked key elaboration that could help to further educate the masses on ML and AI. By implying that AI is a completely different area with zero parallels to the living world, leaders decrease overall understanding and increase the fear of the unknown. This might be done out of a desire to encourage an aura of “uniqueness” about our abilities in an age where AI seems to surpass us in a new area each day. But ignoring parallels between technology and humanity will only continue the cycle of surprise when the tides of AI progress continue to rise.

Though humanity and machines are fundamentally different, both have the capacity to make immense strides in understanding the other. Thus, we can’t be afraid of placing “AI” and “human” in the same sentence.

Further Reading

Lee, K.-F., & Chen, Q. (2021). AI 2041: Ten visions for our future (1st ed.). Currency.

Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence (1st Vintage Books ed.). Vintage Books.



Ignoring Parallels Between Nature & AI is Counterproductive was originally published in Artificial Intelligence in Plain English on Medium, where people are continuing the conversation by highlighting and responding to this story.

By: B. F. Campbell
Sourced From: ai.plainenglish.io/ignoring-parallels-between-nature-ai-is-counterproductive-a1312cd24c54?source=rss—-78d064101951—4
Published Date: Mon, 12 Jun 2023 04:17:17 GMT
