BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20241103T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20240126T133000
DTEND;TZID=America/Los_Angeles:20240126T143000
DTSTAMP:20260423T080653Z
CREATED:20240318T212134Z
LAST-MODIFIED:20240318T212230Z
UID:2879-1706275800-1706279400@ifds.info
SUMMARY:How do neural networks learn features from data?
DESCRIPTION:Speaker Bio: Adit is currently the George F. Carrier Postdoctoral Fellow in the School of Engineering and Applied Sciences at Harvard. He completed his Ph.D. in electrical engineering and computer science (EECS) at MIT advised by Caroline Uhler and was a Ph.D. fellow at the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard. His research focuses on advancing theoretical foundations of machine learning and developing new methods for tackling biomedical problems. \n\n\n\nAbstract: Understanding how neural networks learn features\, or relevant patterns in data\, for prediction is necessary for their reliable use in technological and scientific applications. We propose a unifying mechanism that characterizes feature learning in neural network architectures. Namely\, we show that features learned by neural networks are captured by a statistical operator known as the average gradient outer product (AGOP). Empirically\, we show that the AGOP captures features across a broad class of network architectures including convolutional networks and large language models. Moreover\, we use AGOP to enable feature learning in general machine learning models through an algorithm we call Recursive Feature Machine (RFM). We show that RFM automatically identifies sparse subsets of features relevant for prediction and explicitly connects feature learning in neural networks with classical sparse recovery and low rank matrix factorization algorithms. Overall\, this line of work advances our fundamental understanding of how neural networks extract features from data\, leading to the development of novel\, interpretable\, and effective models for use in scientific applications.
URL:https://ifds.info/event/how-do-neural-networks-learn-features-from-data/
LOCATION:CSE (Allen) 403
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR