BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:IFDS
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20211029T133000
DTEND;TZID=America/Los_Angeles:20211029T143000
DTSTAMP:20260425T132741Z
CREATED:20211029T163929Z
LAST-MODIFIED:20211029T163929Z
UID:1711-1635514200-1635517800@ifds.info
SUMMARY:ML-Opt@UW: Stephen Mussmann
DESCRIPTION:Understanding and analyzing the effectiveness of uncertainty sampling\n\nActive learning techniques attempt to reduce the amount of data required to learn a classifier by leveraging adaptivity. In particular\, an algorithm iteratively selects and labels points from an unlabeled pool of data points. Over the history of active learning\, many algorithms have been developed\, though one heuristic algorithm\, uncertainty sampling\, stands out for its popularity\, effectiveness\, simplicity\, and intuitiveness. Despite this\, uncertainty sampling has known failure modes and lacks the theoretical underpinnings of some other algorithms\, such as those based on disagreement. Here\, we present a few analyses of uncertainty sampling. First\, we find that uncertainty sampling iterations implicitly optimize the (generally non-convex) zero-one loss\, explaining how uncertainty sampling can achieve lower error than labeling the entire unlabeled pool and highlighting the importance of a good initialization. Second\, for logistic regression\, we show\, both theoretically and empirically\, that the extent to which uncertainty sampling outperforms random sampling is inversely proportional to the asymptotic error. Finally\, we use the previous insights to show that uncertainty sampling works very well on a particular NLP task due to extreme label imbalance. Taken together\, these results provide a sturdier foundation for understanding and using uncertainty sampling.\n\nhttps://washington.zoom.us/j/99919016373?pwd=UHpFYmlOL3dXcHEvMWNHcC9Wak1Edz09
URL:https://ifds.info/event/ml-optuw-stephen-mussmann/
CATEGORIES:MLOpt@UWash
END:VEVENT
END:VCALENDAR