Today’s artificial intelligence (AI) can read, talk, and analyze data, but it still faces significant limitations. NeuroAI researchers have now developed a new AI model inspired by the human brain’s efficiency, allowing AI neurons to receive feedback and adjust in real time, enhancing learning and memory processes. This innovation has the potential to usher in a new generation of more efficient and accessible AI, bridging the gap between AI and neuroscience.
Despite their impressive capabilities, current AI technologies like ChatGPT remain limited in their interaction with the physical world, and even tasks they handle well, such as solving math problems and writing essays, require billions of training examples to learn. Kyle Daruwalla, a NeuroAI Scholar at Cold Spring Harbor Laboratory (CSHL), has been seeking unconventional ways to design AI to overcome these computational challenges.
The key challenge lies in data movement. Modern computing consumes vast amounts of energy due to the need to transfer data over long distances within artificial neural networks, which consist of billions of connections. To address this issue, Daruwalla turned to one of the most computationally powerful and energy-efficient systems known: the human brain.

Inspired by how human brains process and adjust data, Daruwalla designed a new method for AI algorithms to move and process data more efficiently. His design allows individual AI neurons to receive feedback and adjust on the fly, rather than waiting for an entire circuit to update simultaneously. This approach reduces the distance data must travel and enables real-time processing.
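To make the contrast concrete, here is a minimal sketch of the idea of local, on-the-fly updates. It is an illustrative assumption, not Daruwalla’s actual learning rule: the hidden layer receives its error over fixed random feedback weights (a feedback-alignment-style signal, after Lillicrap et al.) instead of waiting for an exact end-to-end backward pass, so each layer can adjust as soon as the error arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 4 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(4, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
B = rng.normal(scale=0.5, size=(1, 8))   # fixed random feedback weights

def step(x, y, lr=0.1):
    """One training step in which each layer adjusts on the fly."""
    global W1, W2
    h = np.tanh(x @ W1)                   # hidden activity
    y_hat = h @ W2                        # network output
    err = y - y_hat                       # error at the output

    # Output layer: updates immediately from its own inputs and the error.
    W2 += lr * np.outer(h, err)

    # Hidden layer: the error arrives over the fixed feedback weights B,
    # so no exact, chained gradient through W2 is needed.
    delta_h = (err @ B) * (1 - h**2)      # local tanh derivative
    W1 += lr * np.outer(x, delta_h)
    return float(err**2)

# Fit a tiny regression target to show the error shrinking.
x = np.array([1.0, -0.5, 0.2, 0.8])
y = np.array([0.7])
losses = [step(x, y) for _ in range(200)]
```

The point of the sketch is structural: no layer waits for the whole circuit before updating, which shortens how far the error signal must travel.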
“In our brains, our connections are changing and adjusting all the time. It’s not like you pause everything, adjust, and then resume being you.”
— Kyle Daruwalla, CSHL
This new machine-learning model supports an unproven theory that links working memory with learning and academic performance. Working memory is the cognitive system that allows us to stay on task while recalling stored knowledge and experiences. Daruwalla’s model provides evidence for how working memory circuits might facilitate learning by adjusting each synapse individually.
“There have been theories in neuroscience about how working memory circuits could help facilitate learning, but there hasn’t been something as concrete as our rule that ties these two together,” Daruwalla says. “The theory led to a rule where adjusting each synapse individually necessitated this working memory sitting alongside it.”
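The idea of a working memory “sitting alongside” each synapse can be sketched with a per-synapse eligibility trace, a standard stand-in used here purely for illustration (the rule in Daruwalla’s model may differ). Each synapse keeps its own short-term record of recent activity; when a feedback signal arrives, every synapse updates individually from its own trace, with no global pause:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(3, 2))   # synaptic weights, 3 pre x 2 post
trace = np.zeros_like(W)                 # per-synapse short-term memory

def observe(pre, post, decay=0.9):
    """Each synapse stores a decaying record of its own pre/post activity."""
    global trace
    trace = decay * trace + np.outer(pre, post)

def feedback(signal, lr=0.01):
    """On feedback, each synapse adjusts individually from its own trace."""
    global W
    W += lr * signal * trace

pre = np.array([1.0, 0.0, 0.5])          # note: second input neuron is silent
post = np.array([0.2, 1.0])
for _ in range(5):
    observe(pre, post)

W_before = W.copy()
feedback(signal=1.0)
```

Because the silent input neuron accumulates no trace, its synapses stay untouched when feedback arrives: only synapses whose local memory records recent activity adjust, which is the per-synapse coupling of memory and learning the quote describes.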
Daruwalla’s design may help pioneer a new generation of AI that learns in a manner similar to humans. This advancement would not only make AI more efficient and accessible but also represent a full-circle moment for NeuroAI. Neuroscience has long provided valuable data to AI development, and soon AI may reciprocate by offering insights back to neuroscience.
This breakthrough underscores the potential for AI to evolve in ways that mirror human cognitive processes, enhancing both the fields of AI and neuroscience. By integrating principles from the human brain, AI can achieve greater efficiency and capability, paving the way for more sophisticated and human-like artificial intelligence.