Artificial Intelligence can be used to implement the company's Knowledge Management strategy. Pigro uses AI with a statistical approach to speed up the search for information within the company database, both for customers and employees.
In general, Artificial Intelligence (AI) can be defined as a branch of Computer Science focused on creating intelligent machines that think and react like humans.
But the concept of Artificial Intelligence is very broad and, in the literature, many definitions try to explain what AI is.
Stuart Russell and Peter Norvig set out to systematize these many descriptions, identifying and arranging them into four categories:
– systems that think like humans;
– systems that think rationally;
– systems that act like human beings;
– systems that act rationally.
According to the authors, these categories coincide with different phases of the historical evolution of AI, from the 1950s up to today.
Jerry Kaplan, in his book “Artificial Intelligence. A Guide to the Near Future,” acknowledges the multiplicity of definitions that have revolved around AI since its inception, but notes one element that unites them all: “creating computer programs or machines capable of behaviors that we would consider intelligent if enacted by human beings.”
According to the European classification, there are two types of artificial intelligence: software and embedded intelligence.
By software we mean:
– virtual assistants: software that, by interpreting natural language, can converse with humans. Their purposes can be many, from providing information to performing certain functions;
– image analysis software: mainly used in the security, medical, biomedical and aerospace sectors;
– search engines: programs, accessed through dedicated sites, that locate information the user may be interested in;
– voice and facial recognition systems: software that uses biometric data for recognition.
Embedded intelligence, on the other hand, includes:
– robots: programmable mechanical and electronic devices that can replace humans in performing repetitive or dangerous tasks;
– autonomous vehicles: vehicles capable of automatically performing the main transportation functions of a traditional car;
– drones: remotely controlled aircraft capable of collecting information;
– Internet of Things (IoT): a network of objects capable of communicating and equipped with identification technologies.
The history of artificial intelligence does not begin with the invention of the term, but several years earlier, thanks to the experiments of mathematician Alan Turing.
In 1950 Turing wrote an article entitled “Computing Machinery and Intelligence” to address the issue of AI, at the time so little known that it did not even have a name. The term “Artificial Intelligence” would, in fact, be coined only six years later.
To compare artificial and human intelligence, Turing devised the “Turing Test,” or “imitation game”: the test involves three participants, one of whom, at some point, is replaced by a machine without the knowledge of the other two. The goal is to see whether the “human” participants realize they are dealing with a machine.
Although the foundations of AI had already been laid by Alan Turing, it is only with John McCarthy that this field of research finally has a name: “artificial intelligence”.
He uses it for the first time during a 1956 conference on the subject held at Dartmouth, where the need emerges for a name that differentiates AI from the already established field of cybernetics.
A paper entitled “Dartmouth proposal” is produced in which, for the first time, the term “artificial intelligence” is used.
The Dartmouth conference sparks interest and enthusiasm for this new area of research and many people invest in the field and study the subject.
Among them is Arthur Samuel, an American computer scientist who in 1959 created a checkers-playing program designed to learn on its own, to the point of surpassing human ability.
By analyzing the possible moves at every moment of the game, it is able to base its decisions on a large number of variables and pieces of information, which makes it better than other players.
But this is not the only contribution Arthur Samuel made to Artificial Intelligence: to give a name to his invention, he also coined the term “machine learning”.
Machine learning was historically born in 1943, with Warren McCulloch and Walter Pitts, who observed that the brain sends digital, indeed binary, signals (Kaplan, 2017).
Frank Rosenblatt, a psychologist, took up the findings of the two scholars, implementing them and creating the Perceptron, an electronic device capable of showing learning capabilities.
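The core idea can be sketched in a few lines of modern Python (a toy software illustration of a single perceptron learning the logical AND function; Rosenblatt’s Perceptron was a hardware device, and the data and parameters below are invented for the example):

```python
# Toy perceptron: learns a linear decision boundary from labelled examples.
# Here it learns the logical AND function (invented illustrative data).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            # Fire (output 1) if the weighted sum exceeds zero.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge weights toward the correct answer on each mistake.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]  # logical AND
w, b = train_perceptron(samples, labels)
```

After a few passes over the examples, the learned weights classify all four inputs correctly, which is exactly the kind of learning-from-errors behavior Rosenblatt’s device demonstrated.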
The first wave of enthusiasm, however, is followed by one of stalemate, in which research stalls and investment plummets. For the field to become interesting again, it is necessary to wait until the 1980s and non-linear neural networks.
The 1970s bring expert systems, born with the aim of “artificially” replacing a human expert in a particular field.
AI, in fact, can find specific solutions to a problem without the need to consult an expert in the field.
But how do expert systems work?
They are composed of three sections:
1. knowledge base: the set of facts and rules about the specific domain;
2. inference engine: the component that applies the rules to known facts to derive new conclusions;
3. user interface: it is thanks to it that the interaction between program and human can take place.
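As a rough sketch, the rule-plus-inference mechanism at the heart of an expert system can be illustrated in a few lines of Python (the rules and facts below are invented toy examples, not taken from any real system of the period):

```python
# Toy forward-chaining inference engine (illustrative only; rules are invented).
# Each rule says: if all the condition facts are known, add the conclusion fact.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Apply the rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = infer({"has_fever", "has_cough", "short_of_breath"}, RULES)
```

Real expert systems worked on far larger rule bases and added explanation facilities, but the derive-until-stable loop over an explicit knowledge base is the same basic idea.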
With the 1970s, the production of minicomputers increases, and they begin to appear in many companies.
This period saw the emergence of second-generation expert systems, which differed from programming systems in that “The common approach to programming required that the programmer himself be an expert in the program’s area of expertise, and that he always be readily available to make changes […]. In contrast, the concept behind expert systems was to explicitly represent scope knowledge, making it available for analysis and modification.”
In 1984 a new term is born: “AI winter”. As the name suggests, this is a period of cooling, in which investment and research in the field decline.
Some examples are:
– mid-1960s: investment in AI is halted by the United States following a loss of confidence in the research field;
– 1987: DARPA, the U.S. Department of Defense research agency, freezes investment, removing AI from the areas it had recognized as promising.
However, like every season, winters end and, with the 1990s, new innovations and new investments arrive, laying the foundations for the future of Artificial Intelligence.
It is 1996, and a chess match is being held in Philadelphia. One of the two players is world champion Garri Kimovič Kasparov, known for being the youngest ever to win the title, at 22 years and 210 days.
Up to here, nothing special, except that the other player, “Deep Blue,” is a computer designed by IBM to play chess.
Kasparov wins the challenge, but the rematch is not long in coming: the following year, after an update, Deep Blue manages to beat the world champion.
The original project dates back to the previous decade, to 1985, when graduate student Feng-hsiung Hsu designed a chess-playing machine called ChipTest for his thesis.
In 1989, this project was joined by Murray Campbell, his classmate, and other computer scientists, including Joe Hoane, Jerry Brody, and CJ Tan.
The chess computer opens the way to a wide range of possible fields of use: the research allowed developers to understand how to design a computer that solves complex problems using in-depth knowledge to analyze an increasing number of possible solutions.
Such a revolutionary victory inevitably also generates a lot of criticism about what human supremacy over machines means and what it entails.
There is also an attempt to downplay the event, focusing primarily on “the role of the supercomputer designed for the task, rather than the sophisticated techniques used by the team of programmers” (Kaplan, 2017).
Already known to scholars, the debate between weak AI and strong AI flares up further in the 1990s.
The human mind begins to be seen as something programmable and therefore replaceable by a machine.
Let’s see together the characteristics of weak and strong AI and the main differences.
Weak AI simulates the functioning of some human cognitive functions and is tied to the fulfillment of a specific task (Russell and Norvig, 2003).
The goal is not to equal or exceed human intelligence, but rather to act as an intelligent subject, regardless of whether it really is one.
The machine, in fact, is not able to think independently, and remains dependent on human presence.
John Searle, philosopher of language and mind, describes this position as the claim that “the computer would not be merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind”.
Strong AI, in fact, emulates the functioning of the human mind more completely, being autonomous and able to act like a human being (Russell and Norvig, 2003).
The technology used is that of the expert systems discussed earlier.
What boundaries Artificial Intelligence must respect has always been a matter of public debate.
The fear that it will replace man, that technology will rebel, and other apocalyptic scenarios form the plot of many films on the subject, leading Artificial Intelligence to be seen as something to fear.
To set boundaries on the ethical dimension of AI, the European Union has stepped in, issuing a Code of Ethics in 2019 containing guidelines on the use and development of Artificial Intelligence systems.
The document places humans at the center and defines the purpose of AI use as increasing well-being and ensuring freedom.
The main points of the Code are:
– human control and oversight: Artificial Intelligence must be used to benefit human life. Therefore, only systems that protect fundamental rights and allow full human management and oversight may be developed;
– security: security must never be endangered, at any stage of the system’s life cycle;
– privacy: if personal data are used, the individuals involved must be informed, in full compliance with EU privacy law;
– traceability: all data used must be tracked and documented;
– non-discrimination: AI systems must ensure accessibility for all and respect for diversity;
– environmental change: AI must support positive environmental change;
– accountability: accountability mechanisms for the algorithms used must be adopted in the use of data, with the aim of minimizing any negative impacts.
As already emerged in the chapter on the history of artificial intelligence, machine learning was born in 1943, although only in the 1980s did it become an authoritative field of research, through the development of the first non-linear neural networks.
But what is machine learning? Since the dawn of AI, scholars have understood the importance of the ability to learn, and how this should be “taught” to new technologies.
But learning comes not only from study and reasoning, but also from experience, practice, and training. “Saying that something has been learned doesn’t just mean that that something has been grasped and stored, as happens to data in a database – it has to be represented in some way that can be put to use” (Kaplan, 2017).
In fact, the concept goes beyond the mere collection and analysis of data that can be traced back to statistics, using computational techniques that mimic the human brain and its processes.
Machine learning relies on big data and, rather than making assumptions, allows the system to learn from it.
One of the approaches to machine learning uses neural networks: computational models composed of “neurons” that, as the name implies, are inspired by biological neurons.
Following this approach, the fundamental elements of machine learning systems are:
– neural networks: the structures that connect the various neurons;
– neurons: models that try to capture the most important aspects of neuronal functioning;
– learning model: the organization of the networks, aimed at the execution of a task.
Learning can also be of various types:
– Supervised learning: its purpose is to instruct a system to make it “capable” of processing predictions automatically;
– Unsupervised learning: provides the system with inputs that it organizes and classifies, with the goal of reasoning about them and making predictions about subsequent inputs;
– Reinforcement learning: it aims at realizing autonomous agents and making them able to choose which actions to perform to achieve certain goals, through interaction with the environment.
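To make the supervised case concrete, here is a minimal sketch with invented toy data: a 1-nearest-neighbour classifier that “learns” labelled examples and predicts the label of a new input by copying the label of the closest stored example.

```python
# Minimal supervised learning: 1-nearest-neighbour classification.
# "Training" simply stores labelled examples; prediction returns the label
# of the closest stored example. The data points below are invented.

def nearest_neighbour(train, query):
    """train: list of ((x, y), label) pairs; query: an (x, y) point."""
    def dist2(a, b):
        # Squared Euclidean distance (enough for comparing distances).
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    closest = min(train, key=lambda item: dist2(item[0], query))
    return closest[1]

train = [((1, 1), "small"), ((1, 2), "small"), ((8, 9), "large"), ((9, 8), "large")]
prediction = nearest_neighbour(train, (2, 1))  # closest example is (1, 1)
```

The system is never told a rule for “small” versus “large”; it generalizes purely from the labelled examples, which is the essence of the supervised setting.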
To better understand the wide field of machine learning, it is important to define what data are and how they differ from information and knowledge.
What are data? They are an objective (and therefore uninterpreted) representation of the analyzed reality.
But the process does not stop there: once collected, data must be interpreted. A meaning is attributed to the data available, thus turning data into information.
For this information to become knowledge, another step is needed: choosing how to use it and, on the basis of what emerges, making decisions.
As we have seen in the previous paragraph, correct data collection is fundamental to arrive at information and then at knowledge.
There are three types:
– structured data: data organized in charts and tables and stored in databases;
– unstructured data: data with no schema and therefore no predefined model. Images, audio and video are some examples;
– semi-structured data: a mix of the two types introduced previously.
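The three types can be illustrated with a small Python sketch (the sample values below are invented for the example):

```python
import json

# Structured data: a fixed schema, naturally stored as rows and columns.
structured = [
    {"id": 1, "name": "Alice", "age": 30},
    {"id": 2, "name": "Bob", "age": 25},
]

# Semi-structured data: self-describing but without a rigid schema (e.g. JSON).
semi_structured = json.loads('{"id": 3, "name": "Carol", "tags": ["ai", "ml"]}')

# Unstructured data: no predefined model (free text, images, audio, video).
unstructured = "Meeting notes: discussed the roadmap and the new AI projects."
```

The structured records fit directly into a database table; the JSON document carries its own field names but not a fixed schema; the free text has no model at all and needs interpretation before it can be queried.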
Traditional statistical models, the predecessors of machine learning models, were used until the early 2000s. They are based on a small sample of data, from which they make assumptions derived from past periods of demand. To find a solution, such models prefer smaller amounts of repeatable, linear data, in contexts where the relationships between variables prove relatively stable.
Until the early 21st century, such models were the most widely used methodology for medium- to long-term forecasting; at that point, the tide begins to turn.
Managing a small amount of data is no longer sufficient, and the need for new forecasting techniques arises.
After outlining what artificial intelligence is, we have analyzed all the steps that, starting from the 1950s, have made AI great.
This historical excursus has allowed us to understand how artificial intelligence has become an integral part of our daily lives, and how technological progress has made AI ever more present in our everyday activities.
The European Parliament has carried out an analysis of the main fields in which artificial intelligence and everyday life intersect, showing how technology increasingly accompanies us:
– Web shopping and advertising: AI, as we have seen in the previous paragraphs, is also used to make future predictions, based on previously collected data. And this is also applied to product suggestions, made based on purchases, search intent, online behaviors, and more.
– Online searches: as with shopping, search engines use the data collected to “learn” what the user is interested in and propose results that are similar to it;
– Virtual assistants: with the purpose of providing answers to users, they answer questions in a personalized way;
– Machine translation: AI software automatically generates translations of text, video and audio. The most common example is YouTube’s auto-generated subtitles;
– Smart infrastructure: from tech tools inside smart homes that learn the behaviors of those living in the home, to using AI to improve the viability of cities;
– Cyber security: using artificial intelligence to recognize and block cyber threats, learning from previous attacks and how to recognize them;
– Artificial intelligence in the fight against COVID-19: AI has been used against the pandemic in a variety of ways, from monitoring restricted entrances and detecting body temperature to more specific applications in the healthcare system, such as recognizing infections from CT scans of the lungs;
– Fighting misinformation: a valuable aid to recognize fake news, monitoring and analyzing social content, identifying suspicious or alarming expressions, aimed at recognizing authoritative sources.
Artificial intelligence is not only present in our personal routine, but also in our working routine. In fact, more and more companies are using AI to offer better services to customers and increase employee productivity.
One example is knowledge management, the management of corporate knowledge, which can be implemented with artificial intelligence systems based on a statistical approach, allowing users to find the information they are looking for within the corporate database more quickly.
One thing is now certain: AI is present in so many aspects of our daily lives, increasing our security and allowing us to have support in numerous activities.
What the future holds we do not know, but what is certain is that Artificial Intelligence will not stop expanding and, as we have seen in tracing the history of its evolution, it is moving faster than it seems.
Sources: Jerry Kaplan, Intelligenza artificiale. Guida al futuro prossimo, 2017