LLMs, Cognitive Science, Linguistics, and Neuroscience


At a conceptual level, LLMs profoundly change the landscape for theories of human language, of the brain and computation, and of the nature of human intelligence. In linguistics, they provide a new way to think about grammar, semantics, and conceptual representation. In neuroscience, vector models provide a new approach to computational models of the brain. In cognitive science, they challenge our notions of what the essential elements of human intelligence are. What is more, the remarkable capabilities of LLMs now outstrip our ability to scientifically understand them. The time is ripe to assemble a group of interdisciplinary researchers who study linguistics, cognitive science, neuroscience, the theory of LLMs, and applications of LLMs to science, to explore what we can all learn from one another's perspectives. For example: How do the behaviors and neural representations of LLMs compare to those of humans when processing language? How do we understand the mechanisms of language processing in LLMs? How can we apply LLMs more robustly and rigorously to complex scientific domains?

This workshop is part of the programming for the thematic semester on Large Language Models and Transformers, organized in collaboration with the Simons Institute for the Theory of Computing.

Travel grants are available to attend the event in California.

Workshops will also be streamed live online (registration required).

Organizers

Yejin Choi (Nvidia)
Umesh Vazirani (Simons Institute, UC Berkeley)
Surya Ganguli (Stanford University)
Jitendra Malik (UC Berkeley)
Steven Piantadosi (UC Berkeley)

Invited Participants

Gasper Begus (UC Berkeley), Katherine Collins (University of Cambridge), Laura Gwilliams (Stanford University), Thomas Icard (Stanford University), Anya Ivanova (Georgia Institute of Technology), Hope Kean (MIT), Brenden Lake (New York University), Kyle Mahowald (UT Austin), Raphaël Millière (Macquarie University), Tom Mitchell (Carnegie Mellon University), Alane Suhr (UC Berkeley), Leslie Valiant (Harvard University), Alex Warstadt (UC San Diego), Ethan Wilcox (Georgetown University)

AGENDA

MONDAY, FEB. 3rd, 2025

8:45 – 9:15 a.m. : Coffee and Check-In
9:15 – 9:30 a.m. : Opening Remarks
9:30 – 10:30 a.m. : Rules vs. Neurons and what may be next
Steven Piantadosi (UC Berkeley)
10:30 – 11 a.m. : Break
11 a.m. – 12 p.m. : Neuroscience and AI: a symbiosis
Surya Ganguli (Stanford University)
12 – 1:45 p.m. : Lunch (on your own)
1:45 – 2:45 p.m. : Learning a language like infants do: Results and challenges for developmentally inspired AI
Emmanuel Dupoux (Laboratoire de Sciences Cognitives et Psycholinguistique)
2:45 – 3 p.m. : Break
3 – 4 p.m. : How DeepSeek changes the LLM story
Sasha Rush (Cornell University)
4 – 5 p.m. : Reception

TUESDAY, FEB. 4th, 2025

9 – 9:30 a.m. : Coffee and Check-In
9:30 – 10:30 a.m. : How Linguistics Learned to Stop Worrying and Love the Language Models
Kyle Mahowald (UT Austin)
10:30 – 11 a.m. : Break
11 a.m. – 12 p.m. : Why it Matters That Babies and Language Models are the Only Known Language Learners
Alex Warstadt (UC San Diego)
12 – 1:45 p.m. : Lunch (on your own)
1:45 – 2:45 p.m. : Neural algorithms of human language
Laura Gwilliams (Stanford University)
2:45 – 3:45 p.m. : Do LLMs Use Language?
Alane Suhr (UC Berkeley)
3:45 – 4 p.m. : Break
4 – 5 p.m. : Panel Discussion

WEDNESDAY, FEB. 5th, 2025

9 – 9:30 a.m. : Coffee and Check-In
9:30 – 10:30 a.m. : More accurate behavioral predictions with hybrid Bayesian-Transformer models
Brenden Lake (New York University)
10:30 – 11 a.m. : Break
11 a.m. – 12 p.m. : Talk by
Mike Frank (Stanford University)
12 – 1:45 p.m. : Lunch (on your own)
1:45 – 2:45 p.m. : Knowledge is structured and domain-specific: lessons from developmental cognitive science
Fei Xu (UC Berkeley)
2:45 – 3:45 p.m. : How do Transformers learn to encode variable bindings?
Raphaël Millière (Macquarie University)
3:45 – 4 p.m. : Break
4 – 5 p.m. : Talk by
James Zou (Stanford University)

THURSDAY, FEB. 6th, 2025

9 – 9:30 a.m. : Coffee and Check-In
9:30 – 10:30 a.m. : Interpreting LLMs to Interpret the Brain
Shailee Jain (UT Austin)
10:30 – 11 a.m. : Break
11 a.m. – 12 p.m. : Language and thought in brains: Implications for AI
Evelina Fedorenko (Massachusetts Institute of Technology)
12 – 2 p.m. : Lunch (on your own)
2 – 3 p.m. : Dissociating language and thought in large language models
Anya Ivanova (Georgia Institute of Technology)
3 – 3:30 p.m. : Break
3:30 – 4:30 p.m. : AI safety via Inference-time compute
Boaz Barak (Harvard University)

FRIDAY, FEB. 7th, 2025

9 – 9:30 a.m. : Coffee and Check-In
9:30 – 10:30 a.m. : Building scalable systems for automatically understanding LLMs
Jacob Steinhardt (UC Berkeley)
10:30 – 11 a.m. : Break
11 a.m. – 12 p.m. : Talk by
Naomi Saphra (Kempner Institute at Harvard University)
12 – 2 p.m. : Lunch (on your own)
2 – 3 p.m. : The Cognitive Boundaries of Language Models: Hallucination and Understanding
Santosh Vempala (Georgia Institute of Technology)
3 – 3:30 p.m. : Break
3:30 – 4:30 p.m. : Talk by
Andrew White (FutureHouse)