Two separate research efforts are advancing artificial intelligence (AI) by drawing inspiration from biological brain structures. One study from Johns Hopkins University indicates that specific AI architectures can exhibit brain-like activity without extensive training, suggesting the crucial role of architectural design. Concurrently, a team led by Cold Spring Harbor Laboratory has developed a highly efficient AI model, trained on primate visual data, which significantly reduces computational complexity while maintaining performance. Both studies point toward developing more efficient, human-like AI systems with reduced reliance on vast datasets and computational resources.
Johns Hopkins Research on Untrained Architectures
Research conducted by Johns Hopkins University suggests that artificial intelligence systems designed with biologically inspired structures can exhibit activity patterns similar to those found in human brains, even before formal data training. The study, published in Nature Machine Intelligence, proposes that the structural design of AI systems may be as significant as the volume of data processed.
This perspective offers an alternative to current AI development methods, which typically involve substantial training periods, large datasets, and considerable computational resources. Mick Bonner, assistant professor of cognitive science at Johns Hopkins University and lead author, noted that human learning requires significantly less data compared to current AI development trends. The study posits that architectural designs more closely mirroring brain structures could provide AI systems with an advantageous initial state.
To investigate this, Bonner's team examined three neural network designs: transformers, fully connected networks, and convolutional neural networks (CNNs). They generated numerous untrained artificial neural networks by adjusting these designs. These models were then exposed to images of objects, people, and animals, and their internal activity was compared to brain responses observed in humans and non-human primates viewing the same images.
Adjusting the number of artificial neurons in transformers and fully connected networks produced minimal changes in activity, but the same adjustments in convolutional neural networks yielded activity patterns that more closely resembled those of the human brain. The researchers reported that these untrained convolutional models achieved performance levels comparable to traditional AI systems that typically require exposure to millions or billions of images. These results imply that architectural design contributes more significantly to the development of brain-like behavior than previously understood. Bonner suggested that integrating biological insights into initial designs could potentially accelerate learning in AI systems and reduce their dependency on extensive datasets.
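The kind of comparison described above can be sketched in highly simplified form. The study's exact analysis pipeline is not detailed here, so everything below is an illustrative assumption: a single random-weight (untrained) convolutional layer stands in for the models, random images stand in for the stimuli, random vectors stand in for recorded brain responses, and representational similarity analysis (a common technique for comparing models to neural data) stands in for whatever metric the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_conv_features(images, n_filters=16, ksize=5):
    """One untrained (random-weight) convolutional layer with ReLU,
    followed by global average pooling: one feature vector per image."""
    filters = rng.standard_normal((n_filters, ksize, ksize))
    n, h, w = images.shape
    feats = np.zeros((n, n_filters))
    for i, img in enumerate(images):
        for f in range(n_filters):
            out = np.zeros((h - ksize + 1, w - ksize + 1))
            for y in range(out.shape[0]):          # valid convolution,
                for x in range(out.shape[1]):      # written out explicitly
                    out[y, x] = np.sum(img[y:y + ksize, x:x + ksize] * filters[f])
            feats[i, f] = np.maximum(out, 0).mean()  # ReLU + global pool
    return feats

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of stimuli."""
    return 1 - np.corrcoef(features)

def rsa_score(rdm_a, rdm_b):
    """Correlation between the upper triangles of two RDMs: how similarly
    two systems organize the same set of stimuli."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

# Hypothetical stimuli and 'brain' responses stand in for real recordings.
images = rng.standard_normal((20, 28, 28))
brain_responses = rng.standard_normal((20, 50))   # e.g. 50 recording sites

model_rdm = rdm(random_conv_features(images))
brain_rdm = rdm(brain_responses)
print(f"RSA similarity: {rsa_score(model_rdm, brain_rdm):.3f}")
```

With real stimuli and real recordings, a higher score would indicate that the untrained architecture already organizes visual inputs the way the brain does; varying `n_filters` would loosely correspond to the width adjustments the study explored.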
Cold Spring Harbor Research on Compressed Models
A separate team of scientists has developed an efficient artificial intelligence model inspired by the brain's visual system, addressing the high power consumption of current AI systems relative to the human brain. The findings were published in the journal Nature.
The AI model, which simulates a portion of the brain's visual processing, was initially developed with 60 million parameters. Researchers compressed this into a version using only 10,000 parameters while largely maintaining its performance, yielding a model small enough to be readily shared and transferred.
Ben Cowley, a study author and assistant professor at Cold Spring Harbor Laboratory, indicated that the compact model operates in a manner more aligned with a living brain. He suggested this characteristic might offer new avenues for studying neurological conditions. The study utilized data from macaque monkeys to train the AI model, which simulates V4 neurons responsible for encoding visual features such as colors, textures, curves, and complex proto-objects. Compression was achieved by identifying redundant or unnecessary parts of the initial model and employing statistical techniques similar to those used in digital photo compression.
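The "statistical techniques similar to those used in digital photo compression" mentioned above can be illustrated with one such technique: truncated singular value decomposition, which underlies some lossy image-compression schemes. The study's actual compression method is not specified in detail here, so the sketch below is an assumption, applied to a synthetic weight matrix built to be mostly redundant, the way a heavily over-parameterized model's weights can be.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a large trained weight matrix: low-rank
# structure plus small noise, i.e. most of its entries are redundant.
true_rank = 8
W = (rng.standard_normal((512, true_rank)) @ rng.standard_normal((true_rank, 256))
     + 0.01 * rng.standard_normal((512, 256)))

# Truncated SVD keeps only the strongest components -- the same idea
# behind discarding weak coefficients in lossy photo compression.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 8
W_small = (U[:, :k] * s[:k]) @ Vt[:k]

full_params = W.size                                  # 512 * 256 = 131,072
kept_params = U[:, :k].size + k + Vt[:k].size         # two thin factors
rel_error = np.linalg.norm(W - W_small) / np.linalg.norm(W)

print(f"parameters: {full_params} -> {kept_params}")
print(f"relative reconstruction error: {rel_error:.4f}")
```

Because the synthetic matrix really is nearly rank-8, roughly 95% of the parameters can be discarded with almost no reconstruction error; identifying which parts of a real model are redundant in this sense is the harder scientific step the Cold Spring Harbor team carried out.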
"Such biology-inspired models could enhance understanding of human brain functions and potentially lead to more potent and human-like artificial intelligence systems." — Mitya Chklovskii, Simons Foundation's Flatiron Institute
Implications and Future Directions
The reduced size and simplicity of the compressed model allowed the Cold Spring Harbor team to observe the activity of its artificial neurons. For example, some V4 neurons responded to shapes with strong edges and curves, while others responded specifically to small dots, a feature potentially linked to primate attraction to eyes. These specialized responses may contribute to explaining how primate brains process visual information efficiently with limited computing power.
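Probing an individual artificial neuron's selectivity, as described above, amounts to measuring its response to a battery of simple stimuli. The sketch below is purely illustrative: a hand-built center-surround unit (a hypothetical receptive field of the kind one might find in such a model, not one taken from the study) responds to a small dot but not to an edge.

```python
import numpy as np

def make_dot(size=15):
    """Small bright dot on a dark background."""
    img = np.zeros((size, size))
    img[size // 2, size // 2] = 1.0
    return img

def make_edge(size=15):
    """Vertical luminance edge covering the right half of the image."""
    img = np.zeros((size, size))
    img[:, size // 2:] = 1.0
    return img

def unit_response(stimulus, weights):
    """Response of one artificial 'neuron': dot product with its
    receptive field, rectified (ReLU)."""
    return max(np.sum(stimulus * weights), 0.0)

# Hypothetical dot-selective unit: an excitatory center minus a broader
# inhibitory surround, so diffuse stimuli cancel out.
size = 15
yy, xx = np.mgrid[:size, :size] - size // 2
r2 = xx**2 + yy**2
dot_unit = np.exp(-r2 / 2) - 0.5 * np.exp(-r2 / 8)

dot_resp = unit_response(make_dot(), dot_unit)
edge_resp = unit_response(make_edge(), dot_unit)
print(f"response to dot:  {dot_resp:.3f}")
print(f"response to edge: {edge_resp:.3f}")
```

Running the same probe across every unit of a 10,000-parameter model is feasible in a way it is not for a 60-million-parameter one, which is the interpretability advantage the passage describes.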
The findings suggest that current AI systems could be made smaller and simpler, potentially benefiting applications such as self-driving cars by helping them distinguish objects more accurately. Chklovskii also noted that further advancements in AI may require updating current models, which are often based on earlier understandings of the brain, to incorporate more recent neurological discoveries.
The Johns Hopkins research team is currently investigating biologically inspired learning methodologies. This ongoing work aims to inform the development of deep learning frameworks that could enhance AI system speed and efficiency, while reducing their dependency on extensive datasets. Both research efforts underscore the potential for bio-inspired design to lead to more robust, efficient, and brain-like artificial intelligence.