Concern about artificial intelligence and its potential ramifications for society shapes much of the current cultural debate. Can a science fiction writer aid Pope Francis’ reflection on artificial intelligence? Seventy years before the current debate, science fiction writer Isaac Asimov was already reflecting on how to deal humanely with futuristic technology such as robots. His thinking about robots translates well to today’s pressing questions about how to handle the rise of artificial intelligence (AI).
In a world that professes little concern for the moral aspects of life, it seems incongruous to see such a broad spectrum of people raise concerns about artificial intelligence and its potential ramifications for society. The interest is so great that the G7 Summit has invited Pope Francis to speak this June in Italy during its session dedicated to AI. While it is encouraging that the summit is turning to a moral authority like Pope Francis, I hope it does not overlook the powerful reflections Isaac Asimov made over 70 years ago.
One of the first modern thinkers to consider the morality of robots was the science fiction author Isaac Asimov. Delving into his Three Laws of Robotics can give us moral wisdom for facing the ethical issues of artificial intelligence.
Isaac Asimov’s Three Laws of Robotics
Isaac Asimov, a true visionary of his time, outlined his Three Laws of Robotics in the stories collected in I, Robot:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
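Asimov’s laws form a strict priority ordering: each law yields to the ones above it. For readers who think in code, here is a minimal, purely illustrative Python sketch of that hierarchy. The `Action` class and its attributes are hypothetical, invented only to show how the priority order might be encoded; nothing here comes from Asimov or from any real robotics system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action a robot might take."""
    harms_human: bool          # would this action injure a human?
    prevents_human_harm: bool  # would this action rescue a human from harm?
    ordered_by_human: bool     # was this action commanded by a human?
    endangers_robot: bool      # would this action destroy the robot?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # First Law, inaction clause: if standing by would let a human come
    # to harm, the robot must act to prevent it, even at cost to itself.
    if inaction_harms_human:
        return action.prevents_human_harm
    # Second Law: obey human orders (harmful orders were rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, subordinate to the laws above.
    return not action.endangers_robot
```

Even this toy version hints at why Asimov got so many stories out of three short rules: everything depends on how “harm” and “inaction” are defined, and that is exactly where his plots, and our AI debates, find their drama.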
Asimov seems to recognize the primacy of human life as the basis for his laws of robotics. He explores this concept further throughout I, Robot, which speculates on possible mishaps and departures from the ethical principles laid down by these three laws. We face a challenge similar to his: Asimov could never predict everything robots would do in the future, just as we find it difficult to predict everything AI will do.
In 2020, the RenAIssance Foundation penned the Rome Call for AI Ethics. Its principles represent the Vatican’s foremost ethical thinking on AI to date: transparency, inclusion, accountability, impartiality, reliability, and security and privacy. While we can agree with these six criteria in principle, they are harder to define and apply than may appear at first glance.
Transparency in Artificial Intelligence
This foundational principle suggests that everyone should understand AI, but how realistic a goal is that? Even in our society, permeated as it is by electronics, how many people understand how their Ring doorbell works? We use new technology before understanding it. Perhaps this is the only practical way to proceed, since we cannot wait for everyone to understand something beneficial before we start using it. When Thomas Jefferson developed a new plow in 1784, farmers were able to understand it. When the iPhone debuted in 2007, most of us simply trusted that it would do what Steve Jobs promised.
Inclusion
At first glance, inclusion seems like a good principle for AI ethics. However, the word has become charged with meaning, and it would help to know exactly what we mean by “inclusion.”
Inclusion is a nice enough word, connoting magnanimity and beneficence as well as welcoming and tolerance.
But that is the problem. The generosity of spirit that inclusion carries also informs us of who is leading the way: the same forces that maintained exclusionary policies until the realization dawned that fashions have changed (Peter Slatin, “The Trouble with Inclusion”).
Who will be included? I believe what this principle is really trying to say is that all humans should have access to AI technology. Instead of inclusion, it might be better to speak of accessibility.
Accountability
Accountability is a criterion I can get behind wholeheartedly. Every piece of AI software should have somebody who answers for its actions. The responsible company or individual can explain when something goes awry and adjust the software so that it serves people better in the future. Recently, Catholic Answers got into trouble when it presented an AI priest as an interactive tool for learning apologetics. Internet users quickly held the organization accountable, and it later issued an apology, announcing that it was “laicizing” the virtual tool. This criterion is clearly viable, since it has already been applied effectively in the real world.
Impartiality
As AI develops, we need to ask whether the influence of programmers will always impart bias into the programs they create. When the Rome Call proposes impartiality as a principle, its authors seem to base their hope for impartiality on the algorithms involved in the programming.
One of the greatest hopes regarding algorithmically-driven decisions in organizational contexts lies in AI’s ability to suppress or even eliminate common human biases that threaten the enactment of fair procedures. AI has the potential for standardizing decision-making processes, thereby eliminating many of the idiosyncrasies that can lead human decision-makers to depart from impartiality (Claudy, Aquino, and Graso, “Artificial Intelligence Can’t Be Charmed: The Effects of Impartiality on Laypeople’s Algorithmic Preferences”).
Impartiality is a helpful moral criterion, but it will be difficult to confirm for most of us who are not involved in the relevant fields. Can we really trust AI designers to follow this principle authentically and sincerely? And even if we do, how can we verify that they have done so?
Reliability
AI must be reliable. Yet the list of six principles reads more like a series of catchwords than a clear statement of what its authors intend to communicate, so we must turn to other sources for the meaning of these words in the context of AI ethics. Christian Mayer goes deep into the topic of reliability, exploring the possible dangers of AI and encouraging regulation to make it as ethically acceptable as possible.
What does “reliable” mean in the AI context? We speak of a “reliable” AI application if it is built in compliance with data protection, makes unbiased and comprehensible decisions, and can be controlled by humans (Christian Mayer, “Reliability of AI systems: ‘We make a complex task tangible’”).
This definition gives the impression that reliability overlaps with the previous criterion of impartiality. Mayer also assumes that governmental regulation of AI will affect trust positively and protect democratic society.
Security and Privacy
Security and privacy are priority criteria for the ethical use of AI. The difficulty is that, since most users are laypeople regarding the technology, it is hard for them to verify that security and privacy are real. We seem to become aware of a technology’s insecurity mostly after a breach has already occurred.
Conclusion
Much thought has gone into elaborating these six principles for the ethical use of AI. Still, Isaac Asimov seems to have anticipated many of the same difficulties over 70 years ago, while keeping his list much simpler than the RenAIssance Foundation’s. Perhaps there are other things to consider, but the basic morality of robot technology seems to favor Asimov’s “humans first” approach.
What do you think? Comment below.