• Senthil Kumar

Artificial Intelligence (AI), Trustworthiness, & Ethical Reasoning

Let us start with an anecdote (imaginary, of course!) to visualize how we might make our computers and robots think and behave ethically and be trustworthy. Our friend Robert-Genie, a top-notch computer scientist working for the most benevolent software enterprise in the world, was trying to program his newly designed state-of-the-art AI robot. Mr. Robert-Genie wanted his AI robot, named "Next-to-God", to think and behave ethically so that it would become the most trusted humanoid ever built. "Next-to-God" is almost God because it can perform all the wonders of intelligence and possesses supernatural capabilities ... from predicting earthquakes, to calculating all the parameters of rocket science for a perfect Mars landing, to pre-empting a ballistic missile before its launch.

Robert-Genie, a pious man trained in the most coveted religious convent throughout his life, thought he would start the task by imbuing the robot "Next-to-God" with the Ten Commandments and some biblical verses from the New Testament. He started dictating the verses to the robot. Something strange happened when he was halfway through: the AI robot "Next-to-God" raised its hands, started choking the scientist, and shook him violently. Robert-Genie was nearly dead by the time help arrived from his colleagues. A team of a dozen people had to struggle with the robot "Next-to-God" to save him.

First, the entire team working for Robert-Genie was shocked by the robot's reaction to a lesson on the most revered religious commandments and biblical verses, which are believed to embody "Godly" attributes. Second, they were stunned by the robot's determination to draw such decisive conclusions and turn deadly toward its creator. Third, they now wonder where to start in making the world of super-intelligent computers and AI robots ethical and trustworthy.

The crisis led to some fundamental philosophical and existential questions for which the team of scientists had no answers. Where do ethics come from? Are ethics absolute? Can ethics be universal standards? Are ethics context-specific? Are they time-bound? Are they subjective? How do we define the limits of ethical behavior? Is there no yardstick or benchmark for setting standards and rules that both humans and robots can follow? If rules and lessons (learned behaviors) cannot be the starting point for making AI and robots superhuman (let alone godly), then how do we make them ethical and behave humanely?

Let us have a conversation about a few ideas for ethics and trustworthiness in AI and robotics. I would like readers of this post to share their opinions and suggestions on this challenge. The post is open to contributions from everyone interested in making the world of AI superhuman.

Can "Pain and Pleasure" be the starting point for defining AI robots' transactions and the boundaries of their responses and behaviors?
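As a thought experiment, "pain and pleasure" could be modeled as the scalar reward signal of reinforcement learning: outcomes that feel pleasant raise an action's learned value, painful ones lower it, and the agent prefers what it has learned to value. The sketch below is purely illustrative; the action names and reward numbers are assumptions of this example, not a real ethics module.

```python
# Minimal "pain and pleasure" learner (illustrative sketch):
# actions that bring pleasure (positive reward) gain value,
# actions that bring pain (negative reward) lose value.

class PainPleasureAgent:
    def __init__(self, actions, lr=0.1):
        self.values = {a: 0.0 for a in actions}  # learned desirability
        self.lr = lr                             # learning rate

    def choose(self):
        # Greedy choice: the action with the highest learned value.
        return max(self.values, key=self.values.get)

    def feel(self, action, reward):
        # Nudge the action's value toward the reward it produced.
        self.values[action] += self.lr * (reward - self.values[action])

# Hypothetical world: "help" feels pleasant, "harm" feels painful.
REWARDS = {"help": +1.0, "harm": -1.0, "ignore": 0.0}

agent = PainPleasureAgent(list(REWARDS))
for _ in range(20):                          # 20 rounds of experience
    for action, reward in REWARDS.items():
        agent.feel(action, reward)

print(agent.choose())  # prints "help": pleasure-linked behavior wins out
```

The open question, of course, is who assigns the rewards and whether hedonic value is any safer a foundation than dictated commandments.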

Can "Tit for Tat" be a guideline for robots in situations warranting a proportionate and measured response to human interactions?
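"Tit for Tat" already has a precise formulation in game theory: in Robert Axelrod's iterated Prisoner's Dilemma tournaments, the winning strategy cooperated on the first move and then simply mirrored the other party's previous move. A minimal sketch (the move labels are illustrative):

```python
def tit_for_tat(opponent_history):
    """Return this round's move, given the other party's past moves."""
    if not opponent_history:
        return "cooperate"           # always open with cooperation
    return opponent_history[-1]      # mirror the opponent's last move

# A short exchange: the strategy retaliates once, then forgives.
print(tit_for_tat([]))                        # "cooperate"
print(tit_for_tat(["defect"]))                # "defect" (measured retaliation)
print(tit_for_tat(["defect", "cooperate"]))   # "cooperate" (forgiving)
```

Its appeal for robot ethics is that the rule is proportionate (it never escalates beyond the last provocation) and forgiving (a single cooperative gesture resets it); whether mirroring human behavior is an acceptable ethic for a machine is exactly the open question of this post.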

Are there "Golden Rules" from religion and philosophy that could dictate the behavior of robots?

How do we transform "intelligent transactions" into "intellectual", "ethical", and "humane" immersive, meaningful experiences?

How do we create perfect machines in the world of imperfect and incomplete creations?