Intelligent systems are programmed to process numerical and non-numerical values, but they do not understand human values or morals in the same way. By defining criteria and principles, attempts are being made to make Artificial Intelligence ethically responsible.

 

How Artificial Intelligence Works

 

Artificial Intelligence is programmed to perform a precise task efficiently and quickly, often lightening people's workload and becoming a useful partner for them.

 

Intelligent systems, for example, can analyze large amounts of information and documents, extracting the essential data or identifying particular correlations. A human being can achieve the same result, but with far more effort and time.

 

Artificial Intelligence relies on specific algorithms, designed so that the system can carry out a particular task.

 

As a result, intelligent machines are programmed to make decisions following a precise scheme and based on clear, objective information.

 

Several questions therefore arise about the morals and ethics of Artificial Intelligence and about how intelligent systems can be made ethically responsible.

 

Ethics: an elusive concept, not easily definable

 

Artificial Intelligence is programmed to solve a problem rationally, providing a logical response through the processing of available information.

 

In everyday life, however, situations emerge that are difficult to assess in clear-cut terms.

 

Some concepts are not simple to rationalize or define. For example, it is complex to describe the characteristics that conduct must have in order to be considered ethical.

 

Human beings often face uncertainty by relying on instinct. Intelligent systems, by contrast, find themselves at a crossroads, and the solution they offer may be erroneous or distorted.

 

Artificial Intelligence is not objective

 

Due to its capabilities and potentialities, Artificial Intelligence is often supposed to be an infallible and neutral system.

 

This belief stems from a mistaken assumption: since A.I. makes decisions based on rational rather than emotional elements, its choices must be objective.

 

Intelligent systems are often opaque, and it is therefore difficult to understand on the basis of which data people end up being discriminated against.

 

Distortions of Artificial Intelligence: Discrimination and Racism

 

The type of information used to train the system strongly influences Artificial Intelligence.

 

Consequently, if the data are affected by errors (perhaps related to the programmer's prejudices) or by historical, cultural, or social distortions, the system will make wrong decisions.

 

Research published in 2017 showed how Artificial Intelligence can make racist and discriminatory decisions.

 

A premise: intelligent systems use a statistical approach. Consequently, they read words and assign them a positive or negative meaning depending on the terms that generally accompany those words.

 

The research found that Artificial Intelligence associated European names with favorable terms, while African names were associated with negative expressions.
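This association mechanism can be illustrated with a toy sketch in Python. The vectors, names, and attribute words below are invented purely for illustration (the actual research measured such associations in word embeddings trained on large web corpora): a word leans "positive" or "negative" depending on how close its vector sits to pleasant versus unpleasant terms.

```python
import math

# Toy, hand-made "embedding" vectors -- purely illustrative, not real data.
embeddings = {
    "flower":  [0.9, 0.1, 0.8],   # stand-in for a pleasant term
    "disease": [0.1, 0.9, 0.2],   # stand-in for an unpleasant term
    "Name_A":  [0.8, 0.2, 0.7],   # hypothetical name vector
    "Name_B":  [0.2, 0.8, 0.3],   # hypothetical name vector
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(word, pleasant, unpleasant):
    """Mean similarity to pleasant terms minus mean similarity to unpleasant ones."""
    pos = sum(cosine(embeddings[word], embeddings[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(embeddings[word], embeddings[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

# A positive score means the name sits closer to the "pleasant" terms.
print(association("Name_A", ["flower"], ["disease"]))
print(association("Name_B", ["flower"], ["disease"]))
```

With these invented vectors, "Name_A" scores positive and "Name_B" scores negative: no rule says one name is better than the other, yet the statistics of the surrounding terms produce a systematic bias.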

 

As a result, when intelligent systems are used in selection processes, Artificial Intelligence (often used for a first screening) could end up favoring candidates with European names.

 

Artificial Intelligence is also beginning to be used in the banking sector. By the same principle, the system might be more likely to grant a loan or credit to a person with a European name.

 

The criteria to be met to design an ethical Artificial Intelligence

According to Nick Bostrom, Professor of Philosophy at Oxford, for Artificial Intelligence to be ethical it must meet specific criteria:

 

– Algorithms must be transparent or easily inspectable: this would make it possible to understand why Artificial Intelligence took a certain decision in response to a request or a problem;

 

– The algorithms must be predictable, or accompanied by a clear explanation of their results;

 

– Intelligent systems must be safe: third parties must not be able to manipulate them with malicious intent;

 

 

– The person responsible for a particular intelligent system must be clearly defined, so that there is a clear point of reference to turn to in case of problems.
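The transparency and predictability criteria above can be pictured with a minimal sketch. All the rules, names, and thresholds below are illustrative assumptions, not an actual system: the point is only that every decision is returned together with the reasons that produced it, so the outcome can be inspected and explained.

```python
# Toy, rule-based decision whose outcome is always accompanied by the
# reasons that produced it (rules and thresholds are invented examples).

def evaluate_request(income, debt):
    """Hypothetical loan check that records why each rule fired."""
    reasons = []
    approved = True
    if income < 20_000:
        approved = False
        reasons.append("income below 20,000")
    if debt > income * 0.5:
        approved = False
        reasons.append("debt exceeds 50% of income")
    if approved:
        reasons.append("all rules satisfied")
    return approved, reasons

print(evaluate_request(30_000, 5_000))   # approved, with its justification
print(evaluate_request(15_000, 9_000))   # rejected, with inspectable reasons
```

A statistical model offers no such built-in trace, which is exactly why Bostrom asks that opaque algorithms at least be accompanied by a clear explanation of their results.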

 

The principles of the European Commission for an ethical Artificial Intelligence

 

The European Commission has published an Ethics Code for Artificial Intelligence.

 

The document underlines the need for an anthropocentric approach to A.I.: intelligent systems are identified as a means to improve human well-being.

 

Intelligent systems are identified as an essential tool to address current and global challenges, such as health and climate change.

 

In this regard, the European Commission has identified seven principles that Artificial Intelligence must respect to be ethical and reliable:

– Human action and oversight: intelligent systems must support people in daily life without reducing their autonomy;

– Robustness and safety: algorithms must be safe and reliable;

– Privacy and data governance: citizens must be aware of how their shared data are used, and this information must not be used to harm them;

– Transparency: intelligent systems must be traceable;

– Diversity, non-discrimination, and equity: intelligent systems must be accessible to everyone;

– Social and environmental welfare: AI should support sustainability and ecological responsibility;

– Accountability: the accountability of AI and of the results it produces must be ensured.

 

Translating ethics into the algorithm: the dawn of Algor-Ethics 

Artificial Intelligence is a new technology with great potential and rapid development, forced to come to terms with a society that moves more slowly.

 

Intelligent systems are programmed to make their decisions on the basis of numerical values.

 

Consequently, it is also necessary to transform ethics into something understandable for Artificial Intelligence.

 

Thus the field of Algor-ethics is emerging, based on the idea that values and principles can be translated into binary language.

 

Even so, in particular situations Artificial Intelligence could falter; in those cases, the decision would be taken through human intervention.
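One minimal way to picture this handover (the function name and threshold below are illustrative assumptions, not an actual standard): when the system's confidence in a decision falls below a threshold, it declines to decide and escalates the case to a person.

```python
# Hypothetical human-in-the-loop fallback: below a confidence cut-off,
# the system abstains and hands the case to a human reviewer.

CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off, not a real standard

def decide(confidence, proposed_action):
    """Return the system's decision, or defer to a human reviewer."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("automatic", proposed_action)
    return ("human_review", None)  # the AI "falters": a person decides

print(decide(0.95, "approve"))   # clear-cut case: handled automatically
print(decide(0.40, "approve"))   # ambiguous case: escalated to a human
```

The design choice mirrors the article's point: ethics is not fully translatable into binary values, so the algorithm's role is also to recognize the situations in which it should not decide alone.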

 

Bostrom N. (2011), The Ethics of Artificial Intelligence