Why tiny bee brains could hold the key to smarter AI
The discovery of how bees use their flight movements to achieve remarkably accurate learning and recognition of complex visual patterns could mark a major change in how next-generation AI is developed, according to a University of Sheffield study.
The model not only deepens our understanding of how bees learn and recognize complex patterns through their movements, but also paves the way for next-generation AI. It demonstrates that future robots can be smarter and more efficient by using movement to gather information, rather than relying on massive computing power.
Professor James Marshall, Director of the Centre of Machine Intelligence at the University of Sheffield and senior author on the study, said: "In this study we've successfully demonstrated that even the tiniest of brains can leverage movement to perceive and understand the world around them. This shows us that a small, efficient system — albeit the result of millions of years of evolution — can perform computations vastly more complex than we previously thought possible.
"Harnessing nature's best designs for intelligence opens the door for the next generation of AI, driving advancements in robotics, self-driving vehicles and real-world learning."
The study, a collaboration with Queen Mary University of London, was recently published in the journal eLife. It builds on the team's previous research into how bees use active vision — the process where their movements help them collect and process visual information. While their earlier work observed how bees fly around and inspect specific patterns, this new study provides a deeper understanding of the underlying brain mechanisms driving that behavior.
The sophisticated visual pattern learning abilities of bees, such as differentiating between human faces, have long been known; however, the study's findings shed new light on how pollinators navigate the world with such seemingly simple efficiency.
Dr. HaDi MaBouDi, lead author and researcher at the University of Sheffield, said: "In our previous work, we were fascinated to discover that bees employ a clever scanning shortcut to solve visual puzzles. But that just told us what they do; for this study, we wanted to understand how.
"Our model of a bee's brain demonstrates that its neural circuits are optimized to process visual information not in isolation, but through active interaction with its flight movements in the natural environment, supporting the theory that intelligence comes from how the brain, body and environment work together.
"We've learnt that bees, despite having brains no larger than a sesame seed, don't just see the world — they actively shape what they see through their movements. It's a beautiful example of how action and perception are deeply intertwined to solve complex problems with minimal resources. This is something that has major implications for both biology and AI."
The model shows that bee neurons become finely tuned to specific directions and movements as their brain networks gradually adapt through repeated exposure to various stimuli, refining their responses without relying on associations or reinforcement. This lets the bee's brain adapt to its environment simply by observing while flying, without requiring instant rewards. This means the brain is incredibly efficient, using only a few active neurons to recognize things, conserving both energy and processing power.
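The mechanism described above — neurons becoming tuned to specific motion directions through repeated, reward-free exposure, with only a few neurons active per stimulus — can be illustrated with a toy simulation. This is a minimal sketch of the general principle (competitive, Hebbian-style unsupervised learning), not the authors' actual neuromorphic model; all names, sizes, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 8          # a deliberately small population (illustrative)
LEARNING_RATE = 0.1    # assumed step size, not from the study

def motion_stimulus(angle):
    """A unit vector for the direction of visual motion produced by flight."""
    return np.array([np.cos(angle), np.sin(angle)])

# Start with random, weakly tuned weight vectors (one per neuron).
weights = rng.normal(size=(N_NEURONS, 2))
weights /= np.linalg.norm(weights, axis=1, keepdims=True)

def present(stimulus, weights):
    """One exposure: only the best-matching neuron adapts (winner-take-all),
    so recognition ends up carried by a few active neurons, with no reward
    signal anywhere in the loop."""
    responses = weights @ stimulus
    winner = int(np.argmax(responses))
    # Hebbian-style update: nudge the winner's tuning toward the stimulus.
    weights[winner] += LEARNING_RATE * (stimulus - weights[winner])
    weights[winner] /= np.linalg.norm(weights[winner])
    return winner

# Repeated exposure to a handful of flight-motion directions -- no rewards,
# no labels, just observation while "flying".
directions = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
for _ in range(200):
    present(motion_stimulus(rng.choice(directions)), weights)

# After training, each direction evokes a sharp response from its winner.
for angle in directions:
    responses = weights @ motion_stimulus(angle)
    print(f"{np.degrees(angle):5.0f} deg -> neuron {np.argmax(responses)}, "
          f"selectivity {responses.max():.2f}")
```

After a few hundred exposures, each direction's winning neuron has a weight vector nearly aligned with that direction (selectivity close to 1.0), echoing the article's point that tuning can sharpen through exposure alone, without reinforcement.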
To validate their computational model, the researchers subjected it to the same visual challenges encountered by real bees. In a pivotal experiment, the model was tasked with differentiating between a 'plus' sign and a 'multiplication' sign. The model exhibited significantly improved performance when it mimicked the real bees' strategy of scanning only the lower half of the patterns, a behaviour observed by the research team in a previous study.
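The intuition behind the lower-half scanning strategy can be shown with a small synthetic example. This is an illustrative sketch only, not the study's model or stimuli: it draws a 'plus' and a 'multiplication' sign on a small grid and shows that the lower half of each pattern already carries a clean discrimination signal, using half the inputs.

```python
import numpy as np

def plus_sign(n=9):
    """A 'plus' pattern: one horizontal and one vertical bar."""
    img = np.zeros((n, n))
    img[n // 2, :] = 1.0   # horizontal bar
    img[:, n // 2] = 1.0   # vertical bar
    return img

def mult_sign(n=9):
    """A 'multiplication' pattern: the two diagonals."""
    img = np.zeros((n, n))
    for i in range(n):
        img[i, i] = 1.0            # main diagonal
        img[i, n - 1 - i] = 1.0    # anti-diagonal
    return img

def lower_half(img):
    """Keep only the rows below the midline, mimicking a lower-half scan."""
    return img[img.shape[0] // 2 + 1:, :]

plus, mult = plus_sign(), mult_sign()

# Count pixels that differ between the two patterns, with and without
# restricting the 'scan' to the lower half of the image.
full_gap = np.abs(plus - mult).sum()
half_gap = np.abs(lower_half(plus) - lower_half(mult)).sum()
print(f"pixels differing, full pattern: {full_gap:.0f}")
print(f"pixels differing, lower half only: {half_gap:.0f}")
```

Even restricted to the lower half, the two patterns remain well separated, so a simple template matcher could still tell them apart while processing far fewer inputs — consistent with the efficiency argument the article makes, though the real model's advantage came from its scanning dynamics, not pixel counting.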
Even with just a small network of artificial neurons, the model successfully showed how bees can recognise human faces, underscoring the strength and flexibility of their visual processing.
Professor Lars Chittka, Professor of Sensory and Behavioural Ecology at Queen Mary University of London, added: "Scientists have been fascinated by the question of whether brain size predicts intelligence in animals. But such speculations make no sense unless one knows the neural computations that underpin a given task.
"Here we determine the minimum number of neurons required for difficult visual discrimination tasks and find that the numbers are staggeringly small, even for complex tasks such as human face recognition. Thus insect microbrains are capable of advanced computations."
Professor Mikko Juusola, Professor in System Neuroscience from the University of Sheffield's School of Biosciences and Neuroscience Institute, said: "This work strengthens a growing body of evidence that animals don't passively receive information — they actively shape it.
"Our new model extends this principle to higher-order visual processing in bees, revealing how behaviorally driven scanning creates compressed, learnable neural codes. Together, these findings support a unified framework where perception, action and brain dynamics co-evolve to solve complex visual tasks with minimal resources — offering powerful insights for both biology and AI."
Materials provided by University of Sheffield. Note: Content may be edited for style and length.
- HaDi MaBouDi, Mark Roper, Marie-Geneviève Guiraud, Mikko Juusola, Lars Chittka, James AR Marshall. A neuromorphic model of active vision shows how spatiotemporal encoding in lobula neurons can aid pattern recognition in bees. eLife, 2025; 14 DOI: 10.7554/eLife.89929
Source: ScienceDaily



