Conference by Ryan Badman

Event date: December 17, 2024

ForageWorld: RL agents in complex foraging arenas develop internal maps for navigation and planning

Ryan Badman, a postdoc in Kanaka Rajan's lab at Harvard Medical School, will give a talk at the ISC (room 'Salle C', 2nd floor) on December 17 at 2 pm; see the abstract below. His Google Scholar profile: https://scholar.google.com/citations?user=Q2esw9MAAAAJ&hl=en

Summary: Foraging, the set of behaviors associated with seeking valuable resources (e.g., food, water) while avoiding danger (e.g., predators), is ubiquitous among organisms, but its neural circuit basis is presently unclear. Most existing work on navigation, a key component of foraging, involves animals in small, fully observable arenas with few or no obstacles, partly because recording neural activity in naturalistic settings is challenging. How do animals successfully forage in complex naturalistic environments, especially given realistic neural circuit constraints on capacity and connectivity? To study this question in tractable settings, we designed ForageWorld, a procedurally generated and partially observable arena-based environment, in which artificial agents must satisfy hunger, thirst, and sleep requirements while navigating complex terrain and avoiding predators. We found that agents trained via reinforcement learning (RL) explored the arenas to locate resource-rich patches, and strategically traveled directly between known patches outside the current field of view, including ones unobserved for hundreds of timesteps. Moreover, this sophisticated navigational planning was achieved by agents with fewer neurons than insect brains and with sparse connectivity constraints. To analyze these foraging behaviors, we used generalized linear models (GLMs) to quantify how patch features (e.g., distance from agent, historical predator rates, depletion tracking) influence patch revisitation decisions, Bayesian path segmentation to characterize agent behaviors on different timescales, and neural decoding analyses to probe the circuit basis of navigational mapping. Since path integration is thought to be foundational for navigation, we compared agents explicitly trained on path integration to agents that were not. Path-integrating agents explored more of the environment, and in a manner modulated by spatial uncertainty. They also had clearer neural representations of past and future locations, pointing to the emergence of internal maps. Our results pinpoint biologically plausible foraging strategies implementable by neural circuits in navigating organisms with small brains, such as bees and ants.
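To give a flavor of the GLM analysis mentioned in the abstract, here is a minimal, hypothetical sketch in Python: a logistic regression predicting whether an agent revisits a patch from patch features (distance, historical predator rate, depletion). The data are synthetic and the feature names are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a patch-revisitation GLM: synthetic data,
# illustrative feature names; not the speaker's actual analysis code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic patch features at each revisitation decision point.
distance = rng.uniform(0, 50, n)        # distance from agent to patch
predator_rate = rng.uniform(0, 1, n)    # historical predator encounter rate
depletion = rng.uniform(0, 1, n)        # fraction of resources already consumed

# Assumed ground-truth rule: nearer, safer, fuller patches are revisited more.
logits = 1.5 - 0.08 * distance - 2.0 * predator_rate - 1.0 * depletion
revisit = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([distance, predator_rate, depletion])
glm = LogisticRegression().fit(X, revisit)

# The fitted coefficients estimate how each feature shifts the log-odds
# of revisiting a patch; all three should come out negative here.
coefs = dict(zip(["distance", "predator_rate", "depletion"], glm.coef_[0]))
print(coefs)
```

In the actual study one would fit such a model to the trained RL agents' behavioral logs and inspect the sign and magnitude of each coefficient to quantify which patch features drive revisitation.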

Invited by Nils Kolling