The acronym A.I. stands for Artificial Intelligence, a term for which several definitions of varying complexity have been proposed.
Overall, we can define Artificial Intelligence as the branch of computer science that studies and develops hardware and software systems aimed at endowing machines with one or more capabilities typically considered human, such as learning, visual perception, and temporal perception.
The history of Artificial Intelligence is as fascinating as it is constantly evolving: from Alan Turing's early studies, through the 1980s debate between strong AI and weak AI, to the futuristic scenarios of the turn of the millennium, artificial intelligence is today regarded as one of the most interesting and relevant areas by the entire scientific and computer science community.
The reason is undoubtedly its many applications: algorithms, models, and solutions capable of reproducing human behaviour now span numerous fields and closely touch our daily lives.
To understand how different artificial intelligence applications work (chatbots, for example), we need to consider the features and capabilities that characterise them. Here are some examples:
- natural language processing;
- image processing;
- learning;
- social interaction;
- interaction with the environment;
- reasoning and classification.
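At their simplest, capabilities such as natural language processing and classification can be hand-coded rather than learned. The sketch below is a purely illustrative toy: a keyword-based intent classifier of the kind an early rule-based chatbot might use. The intent names and keyword sets are assumptions invented for the example, not part of any real chatbot framework.

```python
def classify_intent(message: str) -> str:
    """Map a user message to a coarse intent using hand-written keyword rules.

    This is a toy sketch: the intents and keywords are illustrative
    assumptions, not a real chatbot API.
    """
    # Normalise: lowercase, strip simple punctuation, split into tokens.
    tokens = set(message.lower().replace("?", "").replace(",", "").split())
    if tokens & {"hello", "hi", "hey"}:
        return "greeting"
    if tokens & {"price", "cost"}:
        return "pricing"
    if tokens & {"bye", "goodbye"}:
        return "farewell"
    return "unknown"

print(classify_intent("How much does it cost?"))  # prints "pricing"
```

Rule-based systems like this are transparent but brittle: every new phrasing requires a new rule, which is precisely the limitation that learning-based approaches address.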
The centrality of artificial intelligence in the digital transformation of companies and public administrations is now well established; in the coming years it will become increasingly central to every activity.
In the field of Artificial Intelligence, Machine Learning is also frequently mentioned. If artificial intelligence can be considered the field of study of systems with "human" capabilities, machine learning concerns those systems that are capable of learning on their own from data.
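The distinction can be made concrete with a minimal sketch, assuming nothing beyond the Python standard library: instead of programming the logical AND function as an explicit rule, a perceptron learns it from labelled examples by repeatedly adjusting its weights to reduce prediction error. The hyperparameters (20 epochs, learning rate 0.1) are illustrative choices.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and a bias from (inputs, label) pairs.

    Classic perceptron update rule: nudge each weight in the direction
    that reduces the error on the current example.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - prediction
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Labelled examples of the logical AND function: no rule is programmed,
# only input/output pairs are provided.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print(predict(1, 1), predict(1, 0))  # prints "1 0"
```

The point of the example is the contrast with the rule-based approach: here the behaviour emerges from data, which is the defining trait of machine learning.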