Ethical Challenges in Developing Artificial Intelligence

Fernando Favoretti shares his ethics & society case study, which he completed as part of our Young Scientist Program.

When we hear the words 'Artificial Intelligence', the ideas of intelligent cyborgs, Skynet, and future Terminators come to mind. Thanks largely to the availability of cheap hardware paired with equally cheap parallel computing systems, artificial intelligence is now capable of feats that were previously considered the stuff of science fiction novels, movies, and TV shows. For many, the idea of developing intelligent machines may seem unacceptable and dangerous, but in fact artificial intelligence (AI) is everywhere and it's here to stay. Most aspects of our lives are now touched by AI in one way or another, from deciding which books or flights to buy online to the way we work. The way we communicate is shaped by AI as well, for example through the speech recognition systems in our smartphones.

Artificial intelligence today is properly known as weak AI: it is designed to perform only one narrow task at a time, e.g. only driving a car or only playing chess. The long-term goal for scientists, however, is to develop what is called general-purpose AI. While weak AI can already outperform humans in many specific tasks, general-purpose AI, a.k.a. strong AI, is meant to outperform humans in practically every kind of cognitive task.

Because AI has the potential to become more intelligent than any human, we have no way to predict how it will behave in the future. (It is important to remember that we are not arguing about the possibility of AI turning evil; that is a myth. The real concern is AI becoming extremely competent in all fields while pursuing goals misaligned with ours.) We have never created anything that, in a not so distant future, may have the ability to outsmart us. The best example of what we could face may be our own evolution: humans control the planet not because of any significant physical advantage over the other species that coexist with us here, but simply because we outsmarted them all. If we are no longer the smartest competitor, might we lose this control?

Taking a different perspective on the future and considering a less dangerous outcome, one in which we develop strong AI successfully and safely: what will change? Will superintelligent machines coexist with us? Will we still need to work? Many other questions arise, but the main one is this: what will it mean to be human in the age of strong artificial intelligence?

Not all experts agree that we can achieve strong AI before 2100, but many researchers, including big names in science and technology such as Stephen Hawking, Steve Wozniak, and Bill Gates, have recently expressed concern in the media and via open letters about the risks posed by AI and the uncertainty of when it will arrive. It could come in the next few years or in the coming decades; we cannot predict exactly when. The main concern right now should be to start safety research in the area before it is too late, which is why companies like Facebook, Google, and Amazon have joined forces and launched a consortium with the objective of developing solutions related to the safety and privacy of AI.

But why are we seeking to research something that could endanger humankind? Throughout humanity's history of technological development we have seen many ethical struggles, whether over a technology's controversial nature or out of fear of its true potential. During the first Industrial Revolution, factories were dangerous, working hours were long, and many skilled laborers lost their jobs when new steam-powered machines were invented. Backlashes against this science-driven progress, such as machines taking over once human-operated tasks, are an inevitable price to pay, since machines are cheaper, more reliable, and offer day-and-night production cycles, something that could never be achieved through human labor. For the first time, though, we may be creating something that can be more than us in every respect. On the other hand, investing in new technologies such as research into human-level AI might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history.

Which scenario would be the most ethically acceptable? Abandoning all lines of research that seek the development of strong artificial intelligence, thereby protecting humankind from being surpassed in intelligence and losing dominion over the Earth, or accepting the risk and continuing to develop technology that may or may not be responsible for the end of humanity's reign on Earth? It is true that we cannot stop humanity's advance: in the next ten years the world may undergo more transformations than in the last century, and the rapid development of artificial intelligence will be of enormous importance in this respect. But I believe that the right path may be to pause for a moment and think about how we can continue this advance in the most sensible way possible. Even so, when should we pause? What will make us recognize the right time to think more carefully about the development of strong AI? Should we wait for a real sign of danger, or exercise caution in advance based on research and expert opinion? Regardless of our choice, only the passing years will tell us whether we chose correctly.

"Everything we love about civilization is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before – as long as we manage to keep the technology beneficial." – Max Tegmark

In other words, and in my opinion, researching this kind of advanced technology can be difficult in ethical terms, because it may bring some disadvantages to our society. But the advantages it can bring, together with humankind's natural cycle of evolution, are the great reasons why we should not give up researching AI. Humanity has already adapted to new technologies several times, and we are close to needing to do it again.