“Angry? Bad? Good!”: what the ethics of artificial intelligence is, and how it helps make algorithms useful and safe for people

How can AI have ethics at all? It’s inanimate! Of course, by itself it cannot distinguish good from evil. Humanity’s task is not to make AI look like a human, but to teach it to act on principles of justice and morality — and not necessarily the way most people usually do. Naked Science answers popular questions about who formulates ethical principles for AI and why, and where and how they are applied.

Ethics is something from philosophy, isn’t it? What is it all about?

Indeed, ethics began as a philosophical discipline concerned with norms of behavior toward other people, society, God, and so on, and later also with inner moral categories. There is, for example, the well-known ethical maxim of the philosopher Immanuel Kant — the categorical imperative: “Act so that you treat humanity, whether in your own person or in the person of any other, always as an end and never merely as a means.”

But ethics can also be understood much more simply — as a system of moral principles that lets each person distinguish good from evil, to understand what is right and what is wrong. Ethics regulates the coexistence of people, communities, and states. The golden rule of ethics is: “Don’t do to others what you don’t want done to yourself.”

And what does ethics have to do with technology?

For a long time, people divided the world into animals, which act purely “by nature,” driven by instinct, and rational humans, whose actions are deliberate and grounded in ethical principles. Anyone who violated those principles was therefore considered either a bad person, unworthy of living in society, or a madman. Over time, however, people began to delegate some of their actions to machines. Now machines were doing certain things — but without possessing either thought or moral directives.

Nowadays, a special class of technologies has appeared that solves particular intellectual tasks previously reserved for humans. This is so-called weak, narrow, or applied artificial intelligence. It can play chess or drive a car. And while in the first case AI is unlikely to have a strong impact on people and society, in the second it has to resolve many ethical dilemmas.

Some of these dilemmas are captured by the “trolley problem” thought experiment — a family of choice situations in which harm must be minimized. For example, the algorithm of a self-driving car must decide, in critical conditions, what to do: swerve into a ditch and put the passenger at risk, or continue ahead, endangering pedestrians who are violating traffic rules.
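As a toy illustration of how such a dilemma might be framed in code — all action names, probabilities, and severity scores here are invented for this sketch, and real autonomous-driving systems are far more complex and do not reduce ethics to a single number — one naive approach is to pick the action with the lowest expected harm:

```python
# Toy "trolley problem"-style decision: choose the action with the
# lowest expected harm. All numbers below are invented for
# illustration, not drawn from any real driving policy.

def expected_harm(action):
    """Expected harm = sum over outcomes of probability * severity."""
    return sum(p * severity for p, severity in action["outcomes"])

def choose_action(actions):
    """Return the action whose expected harm is smallest."""
    return min(actions, key=expected_harm)

actions = [
    {"name": "swerve into ditch",
     "outcomes": [(0.3, 5.0)]},   # 30% chance of injuring the passenger
    {"name": "continue ahead",
     "outcomes": [(0.6, 8.0)]},   # 60% chance of hitting pedestrians
]

best = choose_action(actions)
print(best["name"])  # prints "swerve into ditch" (1.5 < 4.8 expected harm)
```

The sketch makes the ethical difficulty concrete: the answer depends entirely on the severity weights a human designer assigns, which is precisely where questions of justice and morality enter.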