February 17, 2025

AI Meets Neuroscience: Looking Back at the Simons-IVADO Workshop

By Prof Danilo Bzdok – IVADO Scientific Co-Director, Research Programs and Academic Relations

Walking through the halls of Berkeley’s Simons Institute last week, I was struck by the unique convergence of minds – neuroscientists, linguists, and AI researchers all gathered to tackle fundamental questions about language and intelligence. I was delighted to represent IVADO at the LLMs, Cognitive Science, Linguistics, and Neuroscience workshop at the Simons Institute for the Theory of Computing in Berkeley.

As IVADO’s Scientific Co-Director for Research Programs and Academic Relations, and as a researcher working at the intersection of AI and neuroscience, I had the opportunity to engage in deep discussions with experts from across disciplines. This event marked the first of three workshops in the ongoing Thematic Semester on Large Language Models (LLMs), co-organized and co-funded by Simons and IVADO. Our IVADO delegation—composed of professors and students from across Canada—contributed to these important conversations, exploring how LLMs are reshaping our understanding of language, cognition, and intelligence.

Key Themes and Insights

The workshop provided an interdisciplinary platform to explore how LLMs challenge and inform our understanding of language, the brain, and human intelligence. Discussions spanned fundamental questions such as:

  • The Human-LLM Learning Gap – Unlike human children, today’s LLMs require vast amounts of data to learn effectively, highlighting an unresolved discrepancy between biological and artificial intelligence systems.
  • Do LLMs Truly Model Human Language? – Speakers debated whether LLMs meaningfully capture the patterns of human language or merely approximate statistical regularities in text.
  • Chomsky’s Influence – The theories of Noam Chomsky were revisited, with LLMs both reinforcing and challenging his seminal ideas on linguistics.
  • Mapping LLM Representations to the Brain – Some neural mechanisms in LLMs, such as positional encoding (see the brief sketch after this list), show intriguing parallels with how neuron populations may process information in the human brain. However, pinpointing how distinct semantic concepts are represented remains a challenge, even with direct neural recordings.
  • Emerging AI Models – Researchers assessed the capabilities of DeepSeek R1, an emerging LLM from China, comparing its strengths and weaknesses against Western models like GPT-4, Gemini, Llama, and Grok. However, it remains unclear how the development and training costs behind these systems ultimately compare.
  • Security and Safety Risks – The risks posed by LLMs continue to evolve, with some experts suggesting that auxiliary AI systems could play a role in auditing or refining model outputs. The potential for unexpected emergent behaviors when LLMs interact with minimal constraints was also a topic of concern.
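
For context on one mechanism mentioned above: many Transformer-based LLMs inject word-order information through positional encodings. Below is a minimal sketch, in Python/NumPy, of the classic sinusoidal variant from the original Transformer paper; the specific models discussed at the workshop may well use other schemes, such as learned or rotary position embeddings.

    import numpy as np

    def sinusoidal_positional_encoding(seq_len, d_model):
        # Each position is mapped to sines and cosines at geometrically
        # spaced frequencies (Vaswani et al., 2017), letting the model
        # recover token order from otherwise order-free attention.
        positions = np.arange(seq_len)[:, None]       # shape (seq_len, 1)
        dims = np.arange(d_model)[None, :]            # shape (1, d_model)
        angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
        angles = positions * angle_rates              # shape (seq_len, d_model)
        encoding = np.zeros((seq_len, d_model))
        encoding[:, 0::2] = np.sin(angles[:, 0::2])   # even dimensions: sine
        encoding[:, 1::2] = np.cos(angles[:, 1::2])   # odd dimensions: cosine
        return encoding

    # Example: 8 token positions encoded as 16-dimensional vectors.
    print(sinusoidal_positional_encoding(8, 16).shape)  # (8, 16)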

A Unique Collaborative Environment

Held at the Simons Institute at the University of California, Berkeley, a premier hub for theoretical computer science research, the workshop benefited from the Institute’s strong tradition of fostering interdisciplinary collaboration. Long breaks and open discussions provided ample opportunity for knowledge exchange across disciplines, reinforcing the importance of bridging insights from neuroscience, cognitive science, and AI.

Looking Ahead: Join Us for the Next Workshops!

This workshop was just the beginning! IVADO is still accepting applications from professors and students from across Canada interested in participating in the next two workshops in the Thematic Semester. If you are a researcher at a Canadian university with relevant expertise and an interest in LLMs, we encourage you to register via our website in the coming weeks: IVADO Thematic Semester on Large Language Models and Transformers

Stay tuned for more updates, and thank you to all participants who contributed to this stimulating event!