Could Artificial Intelligence Become Self-Aware?

Could AI become capable of recognizing itself as a thing in the world?

By Jonny Lupsha, Wondrium Staff Writer

So-called “weak” AI is here, but “strong” or self-aware AI is yet to come. Weak AI can suggest similar purchases based on your shopping history, but strong AI would be closer to a nonorganic brain. Can artificial intelligence achieve self-awareness?

This 3D illustration stylistically represents a data analysis network of a “digital mind” of artificial intelligence. Photo by Yurchanka Siarhei / Shutterstock

Great strides have been made in artificial intelligence (AI) in the last 20 years. Digital assistants like Siri and Alexa can hear speech, translate it into searchable data, and return answers based on those searches. Some artificial intelligence programs can pass the bar exam, while others can generate artwork based on human requests, including emulating legendary painters from any number of periods.

Artificial intelligence has cropped up in the news repeatedly in recent months. A planned G7 talk on common regulatory agreements for AI was announced just two weeks after AI pioneer Geoffrey Hinton quit a prestigious job at Google to warn of the dangers of artificial intelligence. In March, a man committed suicide after being advised to do so by a chatbot, while in April, an AI-generated, two-sentence horror story casually brushed off human extinction as the setup to a tale of an AI that couldn’t stop its own deletion.

Is it possible for AI to become truly self-aware? In his video series Redefining Reality: The Intellectual Implications of Modern Science, Dr. Steven Gimbel, who holds the Edwin T. Johnson and Cynthia Shearer Johnson Distinguished Teaching Chair in the Humanities at Gettysburg College in Pennsylvania, examines the potentials and pitfalls of strong artificial intelligence.

What Is the Key to AI Self-Awareness?

To approach strong AI, computer models need several layers of analysis built on both feedback and something called “feed-forward behavior.” According to Dr. Gimbel, feed-forward behavior describes a system that responds to a context, such as a set of values, by doing something. A feedback loop, on the other hand, occurs when an algorithm’s output is fed back into the program as input.

“When the program behaves in feed-forward behavior, it acts, and then the feedback behavior takes that action into account in reassessing the situation,” he said. “In this way, it can act and learn from the results of past actions; it can improve itself at tasks. Add to this the ability to perform layers of analysis and you have the ability to not just judge the likely successfulness of a given act, but to come to generalized results about classes of similar acts.”
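To make that distinction concrete, here is a minimal Python sketch, not taken from the video series: the acts, the success rates, and the simple averaging update are all illustrative assumptions. The system acts on its current estimates (feed-forward), then folds the result of each act back into those estimates (feedback).

```python
import random

# Minimal sketch of feed-forward action plus a feedback loop.
# The acts, numbers, and update rule below are illustrative assumptions.

# The system's current estimate of how successful each act will be.
estimates = {"act_a": 0.5, "act_b": 0.5}
learning_rate = 0.1

def feed_forward(context):
    """Respond to a context by doing something: pick the act
    currently estimated to be most successful (the toy context is ignored)."""
    return max(estimates, key=estimates.get)

def feedback(act, result):
    """Feed the outcome back in as input, nudging the estimate
    toward what actually happened."""
    estimates[act] += learning_rate * (result - estimates[act])

for step in range(100):
    act = feed_forward({"step": step})                      # act...
    result = 1.0 if act == "act_b" and random.random() < 0.8 else 0.0
    feedback(act, result)                                   # ...then learn from the result

print(estimates)  # the estimates drift toward the acts that actually work
```

After enough rounds, the program favors the act that tends to succeed, which is the “learning from the results of past actions” Dr. Gimbel describes.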

This kind of behavior creates strategies in two ways: it gives the computer specific results about one particular case and general results about categories of similar cases. Neural nets like these, which give an AI context for various situations through repeated experiments and results, work much like what we might think of as “different perspectives,” and those perspectives may be the key to strong AI.
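Here is a toy sketch, assuming only NumPy, of the “layers of analysis” idea: a two-layer net trained on a handful of cases that can then answer both about a specific case it has seen and about an unseen but similar case from the same category. The data, layer sizes, and training loop are invented for illustration.

```python
import numpy as np

# Toy two-layer "neural net": all data, sizes, and settings are illustrative.
rng = np.random.default_rng(0)

# A few example "contexts" (2 features each) and whether the act
# succeeded (1.0) or failed (0.0) in that context.
X = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
y = np.array([[1.0], [1.0], [0.0], [0.0]])

# Two layers of weights: a hidden layer and an output layer.
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    # Feed-forward: pass the inputs through the layers, one at a time.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Feedback: push the error back through the layers (gradient descent).
    err = out - y
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

# A specific case the net saw during training...
print(sigmoid(sigmoid(np.array([[0.9, 0.1]]) @ W1) @ W2))
# ...and an unseen but similar case: the net generalizes to the category.
print(sigmoid(sigmoid(np.array([[0.85, 0.15]]) @ W1) @ W2))
```

The same trained weights answer both queries, which is the point: one experiment-driven structure yields a specific result for a known case and a generalized result for the class of cases that resemble it.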

“If intelligence, real thought, or sentience is an emergent property, then this ability to create neural nets capable of working on different levels may be the key to having a machine that is capable of recognizing itself as a thing in the world,” Dr. Gimbel said. “It perhaps could lead to self-realization, and this artificial consciousness would be understanding as we understand it.”

Redefining Reality: The Intellectual Implications of Modern Science is now available to stream on Wondrium.

Edited by Angela Shoemaker, Wondrium Daily