Europhysics News
Volume 56, Number 1, 2025
AI for Physics
Page(s) 13 - 14
Section Features
DOI https://doi.org/10.1051/epn/2025105
Published online 24 March 2025

© European Physical Society, EDP Sciences, 2025

There are ideas that transcend the world of physics. Think of magnetic resonance imaging (MRI), lasers, electronic components or superconducting coils, to cite just a few examples. These systems are based on ideas born and developed in physics, but their impact is so deep that they have transformed human lifestyle over the last 50 years. They are nowadays standard tools used in medicine, engineering and many advanced technological processes, yet we sometimes forget their origin. The ideas behind the 2024 Nobel Prize in Physics belong to this category and will probably change our lives in the near future.

In 1982 John Hopfield introduced a model [1] designed to mimic some computational capabilities of biological systems. He considered neurons as binary entities (on/off) representing the basic units of computation, and their interconnections as synapses. Inspired by the work of the psychologist D. O. Hebb, he assumed that, in the learning process of a pattern (a memory), some synapses are reinforced and others weakened following Hebb's rule, creating in this way a synaptic network. The model displayed outstanding emergent properties. The dynamics evolve towards stable attractor states, which correspond to the stored patterns. It works as an associative memory, where information is retrieved by a content-driven process, in contrast to conventional computers, where data are stored in specific locations or addresses and retrieved by referencing those addresses. Hopfield noticed that the network could store an extensive number p of memories, p = αN with α ≈ 0.14, where N is the number of neurons. Beyond that point it saturates and is not able to retrieve any useful information.
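To make the mechanism concrete, the following minimal Python sketch (not taken from Hopfield's paper; the network size, number of patterns and noise level are chosen only for illustration) stores a few random patterns with Hebb's rule and retrieves one of them from a corrupted cue by asynchronous updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: p/N is kept well below the capacity limit of ~0.14
N, p = 200, 10
patterns = rng.choice([-1, 1], size=(p, N))

# Hebbian coupling matrix J_ij = (1/N) * sum_mu xi_i^mu xi_j^mu, no self-coupling
J = patterns.T @ patterns / N
np.fill_diagonal(J, 0.0)

# Start from a noisy version of pattern 0 (30% of the neurons flipped)
state = patterns[0].copy()
state[rng.random(N) < 0.3] *= -1

# Asynchronous dynamics: each neuron aligns with its local field
for _ in range(10):                       # a few sweeps suffice at this size
    for i in rng.permutation(N):
        state[i] = 1 if J[i] @ state >= 0 else -1

overlap = np.abs(patterns @ state) / N
print("overlap with each stored pattern:", np.round(overlap, 2))
# The overlap with pattern 0 should be close to 1: the noisy cue has fallen
# into the attractor corresponding to the stored memory (content-driven retrieval).
```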

The model shares some analogies with a certain type of strongly disordered materials called spin glasses. They were studied in depth in the 1970s and a full theoretical framework was available at the time, so there was an increasing interest in solving the model analytically with these techniques. Finally Amit, Gutfreund and Sompolinsky [2] succeeded, opening the way to new discoveries. Researchers had a tool to modify and extend the original model and, as a consequence, the field experienced an explosion. For a decade the Hopfield model was present in almost all conferences and workshops on equilibrium statistical mechanics. In this context it is worth mentioning the innovative ideas of E. Gardner [3], who analyzed the space of interactions of neural networks, and a model based on a stochastic variation of Hebb's rule, called the Boltzmann machine, first studied by G. Hinton and collaborators [4]. The success of the Hopfield model goes beyond the model itself: it is a source of inspiration that has boosted other fields. In this regard I want to mention three areas of knowledge: computational neuroscience, synchronization dynamics and computational learning, the latter being the cornerstone of the successful language models of artificial intelligence.

The Hopfield model provides insights into how the brain retrieves memories despite noise, how neural networks stabilize information or how error correction occurs in cognitive processes [5], but, after all, it is a simplified version of a much more complex reality. Understanding how our brain works [7] is a massive challenge that requires the combined effort of scientists with very different profiles, from clinical to computational neuroscience: an interdisciplinary task. Several institutes were created across the world to build the synergies needed to advance the field. A few decades later we can say that it has been a total success, and proof of this is the prestigious Brain Prize 2024 awarded to three physicists: L. Abbott, T. Sejnowski and H. Sompolinsky.

Coordinated firing of neurons across different regions plays a crucial role in processing information and supporting cognitive functions. This synchronization can occur at various scales, from small local circuits to large-scale networks, and is often seen in the form of rhythmic brainwaves, such as alpha, beta and gamma waves. These oscillations are believed to facilitate communication between distant brain areas, enabling processes like attention, memory and perception. Many models and theories have been introduced over the years, among them dynamic versions of Hebb's rule. However, synchronization dynamics in realistic models is not always easy to understand. Several coupled non-linear equations are required to describe the state of a single neuron, and only by solving them numerically can one get information about the collective behavior of a population. There was an effort to find more simplified models, able to capture the essence of the phenomenon but complex enough to display interesting behavior. In 1975 Y. Kuramoto [6] introduced a new model of phase oscillators. Each oscillator has a natural frequency and interacts with the rest through a non-linear function. Without interaction each oscillator runs at its own frequency, but the coupling is designed to ensure that the synchronized state is a stable fixed point of the dynamics. There is a competition between both terms and, for a critical value of the coupling strength, a transition from an incoherent state to a phase-locked state emerges spontaneously. The model is fully solvable. This little gem lay hidden for a decade, but at the end of the 1980s it was rediscovered, and nowadays it represents a paradigm of synchronization dynamics; dozens of papers about this model are published every year.
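The transition can be seen in a few lines of code. The sketch below (again only an illustration; the number of oscillators, the Gaussian frequency spread and the coupling K = 2 are arbitrary choices, not values from the article) integrates the mean-field form of the Kuramoto dynamics and prints the order parameter r, which is close to 0 for an incoherent population and approaches 1 when the oscillators lock:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: N oscillators, global coupling K, Euler time step dt
N, K, dt, steps = 500, 2.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)           # natural frequencies (Gaussian, unit width)
theta = rng.uniform(0, 2 * np.pi, N)      # random initial phases (incoherent start)

for _ in range(steps):
    # Mean-field Kuramoto dynamics: dtheta_i/dt = omega_i + K * r * sin(psi - theta_i)
    z = np.mean(np.exp(1j * theta))       # complex order parameter r * exp(i*psi)
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))

print(f"order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.2f}")
# For a unit-width Gaussian frequency distribution the critical coupling is
# K_c = 2*sqrt(2/pi) ≈ 1.6, so K = 2 lies above threshold and a partially
# synchronized state (r clearly above 0) emerges spontaneously.
```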

Computational learning is probably the field that has evolved most significantly. Learning can be split into two categories: unsupervised and supervised. Clustering algorithms such as K-means or dimensionality-reduction techniques such as principal component analysis belong to the realm of unsupervised learning. On the other hand, energy-based learning algorithms usually characterize supervised learning. The story of energy-based learning algorithms begins with the perceptron, introduced by Rosenblatt in 1958 [8]. It is basically a linear classifier able to solve linearly separable problems. To overcome this limitation, Werbos [9] proposed a new architecture consisting of multiple layers of neurons: an input layer, one or more hidden layers, and an output layer. Unlike the single-layer perceptron, the multilayer perceptron could handle non-linearly separable problems. His work was somewhat forgotten for a decade, but in the 1980s, thanks to the increasing interest in neural networks, it was rediscovered [10] by G. Hinton and collaborators. Since then the field (machine learning) has experienced a non-stop evolution: support vector machines, deep learning and finally artificial intelligence [11, 12, 13, 14]. In this process G. Hinton has played a crucial role and has been an absolute reference figure. What has changed from the initial models of the 80s to the current explosion of AI models? Basically two things: an extremely efficient way to process information and, thanks to the development of GPUs, a huge improvement in computational speed which makes it possible to handle systems of mind-blowing size.
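For readers who have never seen it written out, here is a minimal sketch of Rosenblatt's perceptron rule on a toy, linearly separable data set (the data, the learning rate and the stopping criterion are invented for this example, not taken from the article): weights are nudged only when a point is misclassified, which is exactly why a single layer cannot go beyond linearly separable problems.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 100 points in the plane, separable by the line x1 + x2 = 0
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(50):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:        # misclassified (or on the boundary)
            w += lr * yi * xi             # Rosenblatt update: move the boundary
            b += lr * yi
            errors += 1
    if errors == 0:                       # converged: every point classified correctly
        break

print("epochs used:", epoch + 1, "weights:", np.round(w, 2))
# A single-layer perceptron like this cannot solve a non-linearly separable
# problem such as XOR; a multilayer perceptron with hidden units, trained by
# backpropagation, can.
```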

In 2024 the Royal Swedish Academy of Sciences decided to award the Nobel Prize in Physics to J. Hopfield and G. Hinton "for foundational discoveries and inventions that enable machine learning with artificial neural networks". They used tools from physics to develop methods that are the foundation of today's powerful machine learning and contributed significantly to the growth of interdisciplinary fields such as computational neuroscience. We have a tendency to judge a scientific work or set of ideas quickly, from a short-term modern perspective, forgetting the origin and the context in which they were launched. Keeping that context in mind is crucial to understand why these ideas are so inspiring.

About the Author

Conrad J. Pérez-Vicente is Associate Professor at the Departament de Física de la Matèria Condensada, Universitat de Barcelona. He received a PhD in physics in 1989 on the statistical mechanics of the Hopfield model with low levels of activity. Currently he works on complex systems and synchronization phenomena.

References
