
The age of A.I. might not be as scary as we think



A recent article in The New York Times ($) by political commentator David Brooks encouraged readers to combat the age of A.I. by “majoring in being human”.

Among the ‘subjects’ to take were developing “a distinct personal voice”, “presentation skills”, and “empathy”.

These were all crucial to having the “distinctly human skills” required to stand out in an age ruled by ChatGPT, Siri, Viso Suit, and whatever comes next in the world of A.I.

Brooks concluded: “My hope for the age of A.I. (is) that it forces us to more clearly distinguish the knowledge that is useful information from the humanistic knowledge that leaves people wiser and transformed”.

Aside from riffing on the humorous idea of checking in to a “human university”, Brooks’ article lightly touches on a very real existential fear: the A.I. revolution.

This has, of course, been the subject of dozens of books and feature films, from the nightmarish (I, Robot, Do Androids Dream of Electric Sheep?) to the light-hearted (Wall-E, Big Hero 6).

The age of robots and A.I. is starting to feel less sci-fi and more realistic.

Many academics, scientists, journalists, and writers have for years now sounded alarm bells on what this means for industry, ethics and – as touched upon by Brooks – what it means to be “human”.


The A.I. industry takeover has worried observers around the globe

“If AI is ever genuinely going to interact with us as humans, it needs to process more than surface-level information. It needs to detect and interpret emotional signals. This isn’t just aesthetically pleasing or psychologically satisfying for us. It has serious practical implications,” said John Dickson, speaking on the Undeceptions podcast. 

“This is a major concern with forging links between artificial intelligence and human emotions. If robots can perceive emotions, and respond to them, they can also affect and, maybe, manipulate them.”

The one thing A.I. can’t do … yet

However, Rosalind Picard, a Professor of Media Arts & Sciences at Massachusetts Institute of Technology (MIT) who spends most of her time working on the direct applications of A.I. in our world, says we needn’t be too worried.

A.I. is, according to Professor Picard, still a long way off from developing qualia – that is, the subjective, felt quality of an experience as perceived by a person.

In layman’s terms, A.I. doesn’t have anything that would constitute “feelings”.

“Human emotion is much more than a set of small signals that our technology can detect,” she said, speaking with John Dickson on Undeceptions.

“We do not have, through technology, insight into your innermost feelings, or into your experience. Technology does not give that, even through brain scanning. We can see changes in blood flow, and changes in electrical activity (but) it does not mean that we know what you’re really feeling inside.”

Professor Picard’s qualified opinion is a welcome encouragement to those of us concerned about the ability of A.I. to eventually access our innermost thoughts.

You can put away those tin-foil hats (for now).

When it comes to the direct application of A.I. in society though, there’s no doubt that new tools like the aforementioned ChatGPT pose questions for industry.

New frontiers, new possibilities

Despite this, Professor Picard is quick to note the pros of advanced A.I. will likely outweigh the cons.

She points to medical care as just one example.

“Medicine and the quote/unquote ‘healthcare system’ wait until you’re in bad shape before doing anything to help you,” she said.

“Technology can help us help ourselves earlier. It could help prevent cancer cells from growing into something that you have to treat with massive surgery, chemo, and radiation. It could help prevent most cases of depression – I think it could also help with a lot of other kinds of mental illnesses.”

Professor Picard said workers in her field didn’t want to replace people, but rather build “A.I. that helps people do their jobs better, helps people be healthier, live better, and helps us all understand and better solve problems that we don’t fully understand right now”.

“It’s psychology and technology. It’s neuroscience and technology. It’s social science and technology.

“It’s very much trying to understand what helps people flourish, and then trying to reshape the future of technology to respect humans, and not to replace humans.”


Healthcare is one of the fields that could most benefit from better A.I., according to Professor Picard

Getting connected – old school

While humans are grappling with the implications of technology never before seen in history, the value of sharing real emotions with real people has won out before. 

In the third century B.C., on the philosophical frontier of ancient Greece, emerged the idea of Stoicism: a school of thought that valued knowledge and wisdom above all, but also championed the endurance of pain and suffering without the display of feelings or complaint.

It was wildly popular and spread throughout the ancient world. Many readers will be familiar with the Roman Emperor Marcus Aurelius, the most famous Stoic of antiquity, whose Meditations sells millions of copies each year.

However, Christian teaching runs against the ideas of Stoicism.

“One of the very interesting things about the ancient portrayal of Jesus of Nazareth in the gospels is His varied emotional states,” said John Dickson.

“He’s presented as very much a man in touch with his feelings – that sounds sweet and relevant in our modern context where emotions are valued, sometimes overvalued, (but) the fact is, it was this biblical affirmation of the emotions … that won the ancient argument and reshaped Western culture.

“It’s true we can, now, be too emotional, of course, driven by our passions. And that is a problem. It’s the problem Stoicism was trying to fix with its sledgehammer.

“But the emotions, themselves, are good. They are human. They are also divine. I’m glad there is a Lord in the universe who is furious at injustice, joyful at our fortune, and Whose love for us could even move Him to tears.” 

Emotional connection is something machines can’t yet – and likely never will – truly give us.

As the world embarks on a new adventure of discovery, we can take comfort in the knowledge that researchers like Professor Picard, at the forefront of techno-revolution, see A.I. as a tool for good, to help us better experience the wonder of being a human, made in God’s image.

“The more I learn, the more I realize we have to learn,” she said.

“The more I learn about the brain, the more I see how mysterious and amazing it is in how it works.

“It leaves me speechless. And it makes me feel like our work will never be done.”

Written by Alasdair Belling, adapted from the Undeceptions podcast episode ‘Emotional Intelligence’





