Artificial Intelligence can be used to implement the company's Knowledge Management strategy. Pigro uses AI with a statistical approach to speed up the search for information within the company database, both for customers and employees.
In general, Artificial Intelligence (AI) can be defined as the branch of Computer Science focused on creating intelligent machines that think and react like humans.
AI is a very broad concept, however, and the literature offers many formulations that try to pin down exactly what the technology is.
Stuart Russell and Peter Norvig have systematized the many descriptions of artificial intelligence, identifying and arranging them into four categories:
– systems that think like humans;
– systems that think rationally;
– systems that act like human beings;
– systems that act rationally.
According to the authors, these categories coincide with different phases of the historical evolution of A.I., starting from the 1950s up to today.
Jerry Kaplan, in his book “Artificial Intelligence: What Everyone Needs to Know,” acknowledges the multiplicity of definitions that have revolved around AI since its inception, but notes one element that unites them all: “creating computer programs or machines capable of behaviours that we would consider intelligent if enacted by human beings.”
According to the European classification, there are two types of artificial intelligence: software and embedded intelligence.
By software we mean:
– virtual assistants: software that, by interpreting natural language, can converse with humans, for purposes ranging from providing information to performing specific tasks;
– image analysis software: mainly used in the security, medical, biomedical and aerospace sectors;
– search engines: programs, accessed through dedicated sites, that locate information the user may be interested in;
– voice and facial recognition systems: software that uses biometric data for recognition.
Embedded intelligence, on the other hand, includes:
– robots: programmable mechanical and electronic devices that can be used to replace humans in performing repetitive or dangerous tasks;
– autonomous vehicles: capable of automatically matching the main transportation capabilities of a traditional car;
– drones: remotely controlled aircraft capable of detecting information;
– Internet of Things (IoT): a network of objects capable of communicating and equipped with identification technologies.
The history of artificial intelligence does not begin with the invention of the term, but several years earlier, thanks to the experiments of mathematician Alan Turing.
In 1950, Turing wrote an article entitled “Computing Machinery and Intelligence” to address a question that at the time was so little explored it did not even have a name. The term “artificial intelligence” would in fact be coined only six years later.
There he proposes the “Turing Test”, or “imitation game”, to compare artificial and human intelligence: the test involves three participants, one of whom, at some point, is replaced by a machine without the knowledge of the other two. The goal is to see whether the human participants realize they are dealing with a machine.
Although the foundations of Artificial Intelligence technology had already been laid by Alan Turing, it is only with John McCarthy that this field of research finally has a name: “artificial intelligence”.
He uses it for the first time at a conference on the subject held at Dartmouth in 1956, where the need emerges for a name that distinguishes AI from the already established field of cybernetics.
A paper known as the “Dartmouth proposal” is produced, in which the term “artificial intelligence” is used for the first time.
The Dartmouth conference sparks interest and enthusiasm for this new area of research and many people invest in the field and study the subject.
Among them is Arthur Samuel, an American computer scientist who in 1959 presented a checkers-playing program designed to learn on its own, to the point of surpassing the abilities of human players.
By analyzing the possible moves at every moment of the game, the program can base its decisions on a large number of variables and pieces of information, making it stronger than other players.
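The game-tree idea behind such programs can be sketched with minimax-style search: evaluate every move by asking what the opponent could do in reply. The example below applies it to a simple Nim variant rather than checkers, purely for illustration; the game and function names are invented here.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(pile):
    """Return True if the player to move can force a win in simple Nim
    (take 1-3 sticks per turn; whoever takes the last stick wins)."""
    if pile == 0:
        return False  # no sticks left: the previous player just won
    # A position is winning if at least one move leaves the opponent
    # in a losing position -- the essence of game-tree search.
    return any(not best_outcome(pile - take) for take in (1, 2, 3) if take <= pile)

# Positions that are multiples of 4 are losing for the player to move
print([pile for pile in range(1, 13) if not best_outcome(pile)])  # → [4, 8, 12]
```

Real checkers programs cannot enumerate the full tree, so they cut the search off at a fixed depth and score positions heuristically, but the recursive principle is the same.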
But this is not Arthur Samuel’s only contribution to Artificial Intelligence: to describe what his program did, he also coined the term “machine learning”.
The roots of machine learning actually date back to 1943, when Warren McCulloch and Walter Pitts observed that the brain transmits digital, indeed binary, signals (Kaplan, 2017).
Frank Rosenblatt, a psychologist, built on the two scholars’ findings and created the Perceptron, an electronic device capable of showing learning abilities.
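The learning rule behind the Perceptron is simple enough to sketch in a few lines: nudge the weights whenever a prediction is wrong. This is a minimal illustrative version (the training data and parameters are chosen here for the example), teaching the logical AND function:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: adjust weights whenever a prediction is wrong."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            activation = w[0] * x[0] + w[1] * x[1] + b
            predicted = 1 if activation > 0 else 0
            error = target - predicted          # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Labeled examples for the logical AND function
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions, which is precisely the limitation that later motivated multi-layer, non-linear networks.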
The first wave of enthusiasm, however, is followed by a period of stalemate, in which research on AI stalls and investment plummets. The field would only become attractive again in the ’80s, with the arrival of non-linear neural networks.
In the ‘70s, expert systems arrived, intended to “artificially” replace a human expert in a particular field: the system can propose specific solutions to a problem without a specialist having to be consulted.
But how do expert systems work? They are composed of three sections:
– knowledge base: the collection of facts and rules that encode the expert’s domain knowledge;
– inference engine: the component that applies the rules to the known facts in order to derive conclusions;
– user interface: the part through which the interaction between the program and humans takes place.
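A toy sketch can make the architecture concrete. Everything here, rules, facts, and function names alike, is invented for illustration: the knowledge base is a list of if-then rules, the inference engine applies them by forward chaining, and a small query function stands in for the user interface.

```python
# Knowledge base: each rule is (set of required facts, conclusion).
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Inference engine: fire rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def ask(symptoms):
    """Minimal 'user interface': report only the derived conclusions."""
    derived = infer(symptoms) - set(symptoms)
    return sorted(derived)

print(ask(["has_fever", "has_cough", "short_of_breath"]))
# → ['possible_flu', 'see_doctor']
```

The point of the design, as the quote below on second-generation systems notes, is that the domain knowledge lives in the rules, where an expert can inspect and modify it, not inside the program logic.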
In the 1970s, the production of minicomputers increased and many companies began to manufacture them.
This period saw the second-generation expert systems emerging, which differed from programming systems because “The common programming approach required that the programmer himself be an expert in the program’s area of expertise and that he always be readily available to make changes […]. In contrast, the concept behind expert systems was to explicitly represent scope knowledge, making it available for analysis and modification.”
In 1984 a new term is born: “AI winter”. As the name suggests, this is a period of cooling, marked by a decline in investment and research in the field.
Some examples are:
– the mid-1960s: investment in AI is halted by the United States following a loss of confidence in the research field;
– 1987: DARPA, the U.S. Department of Defense’s research agency, freezes investment, excluding AI from the areas it recognizes as promising.
However, like every season, winters end and, in the ’90s, innovations and new investments arrive, laying the foundations for the future of Artificial Intelligence.
It is 1996 and a chess match is being held in Philadelphia. One of the two players is world champion Garry Kasparov, known for being the youngest ever to win the title, at 22 years and 210 days.
Up to here, nothing special, except that the other player “Deep Blue” is a computer, designed by IBM to play chess.
Kasparov wins the challenge, but the rematch is not long in coming: the following year, after an update, Deep Blue manages to beat the world champion.
The original project dates back to the previous decade: in 1985, the student Feng-Hsiung Hsu designed a chess-playing machine called ChipTest.
In 1989, this project was joined by Murray Campbell, his classmate, and other computer scientists, including Joe Hoane, Jerry Brody, and CJ Tan.
The chess machine opened the way to a wide range of possible applications: the research taught developers how to design computers that solve complex problems by drawing on deep domain knowledge to analyze an ever-larger number of possible solutions.
Such a revolutionary victory inevitably also generates a lot of criticism about what human supremacy over machines means and what it entails.
There is also an attempt to downplay the event, focusing primarily on “the role of the supercomputer designed for the task, rather than the sophisticated techniques used by the team of programmers” (Kaplan, 2017).
Already known to scholars, the debate between weak AI and strong AI further ignites in the 1990s. The human mind begins to be seen as something programmable and therefore replaceable by a machine.
Let’s see together the characteristics of weak and strong AI and the main differences.
Weak AI simulates the functioning of some human cognitive functions and is tied to the fulfilment of a specific task (Russell and Norvig, 2003).
Its goal is not to equal or exceed human intelligence, but rather to act as an intelligent agent, regardless of whether it really is one.
The machine, in fact, is not able to think independently, remaining bound to human guidance.
Strong AI, by contrast, emulates the functioning of the human mind more completely, being autonomous and capable of acting like a human being (Russell and Norvig, 2003).
It is the position that the philosopher of language and mind John Searle famously formulated: “the computer would not be only, in the study of mind, a tool; rather, a properly programmed computer is really a mind”.
The technology used is that of the expert systems discussed earlier.
It has always been a matter of public debate to define what boundary Artificial Intelligence must respect.
The fear that it will replace humans, that technology will rebel, and other apocalyptic scenarios are the plots of many films on the subject, which have led Artificial Intelligence to be seen as something to fear.
To set boundaries for the ethical dimension of AI, the European Union stepped in, issuing its Code of Ethics in 2019, containing guidelines on the use and development of Artificial Intelligence systems.
The document places humans at the centre and defines the purpose of AI use as increasing well-being and ensuring freedom.
The main points of the Code are:
– human control and oversight: Artificial Intelligence must be used to benefit human life. Therefore, only systems that protect fundamental rights and allow full human management and oversight may be developed;
– security: safety must never be endangered, at any stage of the system life cycle;
– privacy: where personal data are used, the people involved must be informed, in full respect of EU privacy law;
– traceability: all data used must be tracked and documented;
– non-discrimination: AI systems must ensure accessibility for all and respect for diversity;
– environmental well-being: AI must support positive environmental and social change;
– accountability: accountability mechanisms for the algorithms used must be adopted in the use of data, to minimise any negative impacts.
As already emerged in the chapter on the history of artificial intelligence, machine learning was born in 1943, although it became an authoritative field of research only in the ’80s, with the development of the first non-linear neural networks.
But what is machine learning? Since the dawn of AI, scholars have understood the importance of the ability to learn, and how this should be “taught” to new technologies.
Learning comes not only from study and reasoning, but also from experience, practice, and training. “Saying that something has been learned doesn’t just mean that something has been grasped and stored, as happens to data in a database – it has to be represented in some way that can be put to use” (Kaplan, 2017).
The concept goes beyond the mere collection and analysis of data that can be traced back to statistics, using computational techniques that mimic the human brain and its processes.
Machine learning relies on big data and allows the system to learn from it, rather than following assumptions programmed in advance.
One of the approaches to machine learning is using neural networks, which are computational models composed of “neurons” that, as the name implies, are inspired by the biological ones in the human brain.
Following this approach, the fundamental elements of machine learning systems are:
– neurons: models that try to capture the most important aspects of how biological neurons work;
– neural networks: the structures that connect the various neurons;
– learning model: the organization of the networks, aimed at the execution of a task.
Learning can also be of various types:
– supervised learning: the system is trained on examples paired with known correct answers, so that it becomes “capable” of making predictions automatically;
– unsupervised learning: the system receives inputs that it organizes and classifies on its own, in order to make predictions about subsequent inputs;
– reinforcement learning: it aims to build autonomous agents able to choose which actions to perform to achieve certain goals, through interaction with the environment.
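Supervised learning, the first of these, can be illustrated with the smallest possible model: fitting a straight line to labeled examples. The data and the underlying rule here are invented for the example; the point is that the program derives the rule from the examples rather than having it written in.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b from labeled (input, output) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data: inputs paired with known correct outputs (rule: y = 2x + 1)
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # → 2.0 1.0
```

Modern supervised systems replace the line with models of millions of parameters, but the scheme is the same: examples in, a predictive rule out.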
To better understand the wide field of machine learning, it is important to better define what data are, and how they differ from information and knowledge.
What are data? They are an objective (and therefore uninterpreted) representation of the analyzed reality.
But the process does not stop there: once collected, the data must be interpreted. By attributing meaning to the data available, we pass from data to information.
For the information to become knowledge, another step is needed: choosing how to use it and making decisions based on the context.
As seen in the previous paragraph, correct data collection is fundamental to get to information and then knowledge.
There are three types of data:
– structured data: these usually consist of numbers and text, presented in a readable format, organized in charts and tables and stored in databases;
– unstructured data: these do not have any schema and therefore they do not have a predefined model. Images, audio and video are some examples of unstructured data;
– semi-structured data: it is a mix of the two types introduced previously.
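The difference between the first two types shows up directly in how a program reads them. This is a small illustrative sketch with invented sample records: the CSV has a fixed, table-like schema, while the JSON records are self-describing and may each carry different fields.

```python
import csv
import io
import json

# Structured data: fixed columns, directly table-ready
csv_text = "name,age\nAda,36\nAlan,41\n"
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(rows[0]["name"])             # → Ada

# Semi-structured data: self-describing, but fields can vary per record
json_text = '[{"name": "Ada", "tags": ["math"]}, {"name": "Alan"}]'
records = json.loads(json_text)
print(records[1].get("tags", []))  # → []
```

Fully unstructured data, such as images or audio, has no such schema at all, which is why it typically requires machine learning rather than simple parsing to extract information.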
NLP, or natural language processing, is a branch of Artificial Intelligence that deals with analysing and understanding human language, known as natural language. To do this, it brings together linguistics, computer science, and AI technology.
Until the 1980s, NLP studies attempted to encode the rules of language into computers, with poor results: the lexical, syntactic, and semantic complexity of human language proved impossible to manage.
In the ‘90s, the first statistical approach appears, based on machine learning, which uses huge datasets to train systems to understand the various meanings of natural language.
Later, in the 2000s, come the first applications of neural networks to NLP models, which bring artificial intelligence development to the next level. Thanks to deep learning algorithms (an approach to machine learning built on neural networks), several techniques have been developed to represent and process natural language, allowing machines to combine the information at their disposal in the best way to solve specific tasks.
So today, thanks to the data gradually fed into the systems, it is possible to enrich the underlying model, improving its accuracy and using NLP for different purposes. Some of the most popular are real-time information search, chatbots and voice bots, text generation, document classification, sentiment analysis, and more.
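The statistical idea behind information search, one of those applications, can be sketched very simply: represent each document as a vector of word counts and rank documents by their similarity to the query. The documents, names, and scoring below are invented for illustration and are in no way Pigro's actual system, which a toy like this only gestures at.

```python
import math
from collections import Counter

DOCS = {
    "vacation": "employees may request vacation days through the HR portal",
    "expenses": "travel expenses must be submitted with receipts within thirty days",
    "security": "report any security incident to the IT help desk immediately",
}

def vectorize(text):
    """Bag-of-words representation: a word -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query):
    """Return the name of the document most similar to the query."""
    qv = vectorize(query)
    return max(DOCS, key=lambda name: cosine(qv, vectorize(DOCS[name])))

print(search("how do I request vacation days"))  # → vacation
```

Production systems add weighting schemes such as TF-IDF, or learned embeddings from neural models, but the principle of ranking documents by statistical similarity to the query is the same.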
After outlining what artificial intelligence is, we have analyzed all the steps that, starting from the 1950s, have made AI what it is today.
This historical excursus has allowed us to understand the purpose of artificial intelligence development and how the advancement of technological progress has allowed AI to be more and more present in our everyday lives.
The European Parliament has carried out an analysis of the main fields in which artificial intelligence and everyday life intersect, allowing technology to increasingly come alongside us:
– Web shopping and advertising: AI, as we have seen in the previous paragraphs, is also used to make future predictions, based on previously collected data. And this is also applied to product suggestions, made based on purchases, search intent, online behaviours, and more.
– Online searches: as with shopping, search engines use the data collected to “learn” what the user is interested in and propose results that are similar to it;
– virtual assistants: they provide answers to users and customers, responding to questions in a personalized way;
– machine translation: AI software automatically generates translations of text, video and audio. The most common example is YouTube’s auto-generated subtitles;
– Smart infrastructure: from tech tools inside smart homes that learn the behaviours of those living in the home, to using AI to improve the viability of cities;
– Cyber security: using artificial intelligence to recognize and block cyber threats, learning from previous attacks and how to recognize them;
– COVID-19 emergency: AI in the fight against the pandemic has been used in a variety of ways, from monitoring restricted entrances to temperature detection to more specific applications in the healthcare system, such as recognizing infections starting with CT scans of the lungs;
– fighting misinformation: a valuable aid in recognising fake news, monitoring and analyzing social content, and identifying suspicious or alarming expressions, with the aim of recognizing authoritative sources.
Artificial intelligence is not only present in our personal and work routines. More and more companies are using AI to offer better services to customers and increase employee productivity.
One example is knowledge management, or the management of corporate knowledge, which can be implemented with artificial intelligence systems with a statistical approach, to allow users to find the information they are looking for within the corporate database more quickly.
One thing is now certain: AI is present in so many aspects of our daily lives, increasing our security and allowing us to have support in numerous activities.
We don’t know what the future holds for us, but what is certain is that Artificial Intelligence will not stop its expansion and, as we have seen in tracing the history of its evolution, this is faster than it seems.
Jerry Kaplan, Artificial Intelligence: What Everyone Needs to Know, 2017
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2003