In the last century, the scientific community speculated about the possibility of building machines capable of performing intelligent tasks so far reserved for humans. Although we are not yet close to anything resembling so-called artificial general intelligence, the use of what is known as weak or narrow artificial intelligence (NAI) is very common in social networks, facial recognition systems and natural language analysis, for example. This year's Princess of Asturias Award for Scientific and Technical Research recognizes some of the architects who created and developed the algorithms that make this kind of intelligence possible.
NAIs are algorithms that specialize in certain tasks, in which they can achieve significantly higher performance than humans. However, they are unable to perform tasks other than those for which they were designed and trained. Today, NAIs perform tasks that eluded machines even just twenty years ago, including both image recognition and natural language processing.
This achievement was made possible by the advent of neural networks. Although the concept was originally formulated in the 1940s, it did not begin to show its great potential until the 1980s. Today, the term refers to a family of highly adaptable and high-performing algorithms, thanks both to their mathematical formulation and to the incredible increase in computing power since they were first proposed. These algorithms were born with biological inspiration: their goal was to mimic the human brain. Their basic structure consists of artificial neurons that play a role similar to that of neurons in the real nervous system. These simple units are arranged in parallel, forming layers through which the data is processed. Learning in neural networks consists of building a system with several stacked layers and adjusting the parameters of every neuron during training.
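To make this structure concrete, here is a minimal sketch in Python with NumPy of an artificial neuron and of a layer of such neurons working in parallel. All names and numbers here are illustrative, not taken from any of the systems mentioned in the article.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of its inputs
    followed by a non-linear activation (here, the sigmoid)."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

def layer(inputs, weight_matrix, biases):
    """A layer applies many neurons in parallel to the same inputs."""
    return np.array([neuron(inputs, w, b)
                     for w, b in zip(weight_matrix, biases)])

x = np.array([0.5, -1.0, 2.0])      # three input values
W = np.array([[0.1, 0.2, 0.3],      # two neurons, each with
              [-0.4, 0.5, 0.6]])    # its own three weights
b = np.array([0.0, 0.1])
print(layer(x, W, b))               # two activations, each between 0 and 1
```

Stacking several such layers, so that the outputs of one become the inputs of the next, is what gives a network its depth.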
The number of layers in a system determines the depth of the network, and only when there are two or more layers do we speak of deep learning. These deep networks can handle a wider variety of tasks and data sources, and more accurately, than single-layer networks: from analyzing and understanding text, to finding objects in an image and describing the scene in natural language, to suggesting new music based on our preferences.
The work of Geoffrey Hinton, Yann LeCun and Yoshua Bengio, generally considered the fathers of deep learning, as well as that of Demis Hassabis, CEO and co-founder of DeepMind (the company behind some of the most important milestones in artificial intelligence), has been critical to the development of deep neural networks and to our current understanding of them.
Hinton, LeCun and Bengio used the idea of backpropagation to extend the original mathematical formulation of neural networks and allow networks with more than one layer to be trained. Hinton introduced this concept in 1986, allowing deep neural networks for the first time to correctly learn a task from a set of data. These techniques let networks tune themselves, causing groups of neurons to learn the relevant characteristics of the input data by combining a design tailored to each task with a series of common training principles. This allows them to exploit the intrinsic properties of the data and perform specific tasks with high accuracy. In addition, networks can be built that attend to the spatial arrangement of the data, such as the pixels in an image, as well as models that take into account its temporal succession, such as the meaning conveyed by the order of the words in a sentence.
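The idea of training with backpropagation can be sketched in a few lines of Python with NumPy. This toy example, with illustrative sizes and values chosen here for simplicity, trains a two-layer network on the XOR function, a task that a single-layer network famously cannot learn:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function on two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer: 4 neurons
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer: 1 neuron

losses = []
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the error from the output layer
    # back through the hidden layer (the chain rule in action).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update of every weight and bias.
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(f"loss before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

The error signal flows backwards, layer by layer, which is why networks of any depth can be trained with the same recipe.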
For his part, Demis Hassabis is considered one of the most influential modern figures in artificial intelligence research. Among other accomplishments, his company DeepMind has created AlphaGo, an artificial intelligence capable of beating human champions at Go; AlphaFold, an algorithm that, in its latest version, can predict the 3D structure of a protein from its amino acid sequence; and Gato, a generalist agent which, thanks to the power of its design, can perform more than 600 different tasks, including chatting with users, playing games, describing images and manipulating robotic arms. The latter, while still far from being a truly intelligent agent, is a promising step toward the holy grail of artificial general intelligence.
Although it is difficult to predict whether DeepMind, or any other company or research center, will eventually succeed in creating artificial general intelligence, the impact of all these technologies has been enormous over the past 20 years and will undoubtedly grow even greater in the near future. In particular, the contributions of these four researchers to the field of artificial intelligence have been, and will remain, essential in shaping modern industrial societies and in confronting some of the great problems of the twenty-first century, such as climate change.
Simón Rodríguez is a postdoctoral researcher at ICMAT.
Ágata A. Timón G. Longoria is the coordinator of the Mathematical Culture Unit at ICMAT.
Coffee and Theorems is a section dedicated to mathematics and the environment in which it is created, coordinated by the Institute of Mathematical Sciences (ICMAT), in which researchers and members of the center describe the latest advances in the discipline, share meeting points between mathematics and other social and cultural expressions, and remember those who marked its development and knew how to transform coffee into theorems. The name evokes the definition given by the Hungarian mathematician Alfréd Rényi: "A mathematician is a machine that turns coffee into theorems."
Editing and formatting: Ágata A. Timón G. Longoria (ICMAT).