BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//IFDS - ECPv6.0.1.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://ifds.info
X-WR-CALDESC:Events for IFDS
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/Chicago
BEGIN:DAYLIGHT
TZOFFSETFROM:-0600
TZOFFSETTO:-0500
TZNAME:CDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0500
TZOFFSETTO:-0600
TZNAME:CST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VTIMEZONE
TZID:America/Los_Angeles
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
TZNAME:PDT
DTSTART:20210314T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
TZNAME:PST
DTSTART:20211107T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210503T133000
DTEND;TZID=America/Chicago:20210503T140000
DTSTAMP:20260516T064910Z
CREATED:20210114T204824Z
LAST-MODIFIED:20210115T200804Z
UID:831-1620048600-1620050400@ifds.info
SUMMARY:IFDS Ideas Forum: Changhun Jo
DESCRIPTION:Title: \nAbstract: \nChanghun Jo (Mathematics)\, advised by Kangwook Lee (Electrical and Computer Engineering) and Sebastien Roch (Mathematics)\, is working on the theoretical understanding of machine learning. His recent work focuses on finding an optimal data poisoning algorithm against a fairness-aware learner. He also works on finding the fundamental limit on sample complexity of matrix completion in the presence of graph side information.
URL:https://ifds.info/event/ifds-ideas-forum-changhun-jo/
LOCATION:Webex
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210503T140000
DTEND;TZID=America/Chicago:20210503T143000
DTSTAMP:20260516T064910Z
CREATED:20210114T205134Z
LAST-MODIFIED:20210115T195541Z
UID:833-1620050400-1620052200@ifds.info
SUMMARY:IFDS Ideas Forum: Shuqi Yu
DESCRIPTION:Title: TBD \nAbstract: \nShuqi Yu (Mathematics)\, advised by Sebastien Roch (Mathematics)\, works with Karl Rohe (Statistics) on large-scale network models. She aims to establish theoretical guarantees for a new estimator of the number of communities in a stochastic blockmodel. She is also interested in phylogenetics questions\, in particular the identifiability of the species phylogeny under a horizontal gene transfer model.
URL:https://ifds.info/event/ifds-ideas-forum-shuqi-yu/
LOCATION:WI
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210505T123000
DTEND;TZID=America/Chicago:20210505T133000
DTSTAMP:20260516T064910Z
CREATED:20210202T193340Z
LAST-MODIFIED:20210219T193043Z
UID:1002-1620217800-1620221400@ifds.info
SUMMARY:SILO: Yuanzhi Li
DESCRIPTION:
URL:https://ifds.info/event/silo-05052021/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210507T133000
DTEND;TZID=America/Los_Angeles:20210507T143000
DTSTAMP:20260516T064910Z
CREATED:20210504T203604Z
LAST-MODIFIED:20210504T203845Z
UID:1230-1620394200-1620397800@ifds.info
SUMMARY:IFDS All-Hands: Rina Foygel Barber
DESCRIPTION:Convergence for nonconvex ADMM\, with applications to CT imaging\nThe alternating direction method of multipliers (ADMM) algorithm is a powerful and flexible tool for complex optimization problems of the form min{f(x)+g(y):Ax+By=c}. ADMM exhibits robust empirical performance across a range of challenging settings\, including nonsmoothness and nonconvexity of the objective functions f and g\, and provides a simple and natural approach to the inverse problem of image reconstruction for computed tomography (CT) imaging. From the theoretical point of view\, existing results for convergence in the nonconvex setting generally assume smoothness in at least one of the component functions in the objective. In this work\, our new theoretical results provide convergence guarantees under a restricted strong convexity assumption without requiring smoothness or differentiability\, while still allowing differentiable terms to be treated approximately if needed. We validate these theoretical results empirically\, with a simulated example where both f and g are nondifferentiable (and thus outside the scope of existing theory)\, as well as a simulated CT image reconstruction problem. \n\n\nBio: Rina Foygel Barber is a Louis Block Professor in the Department of Statistics at the University of Chicago. She was an NSF postdoctoral fellow during 2012-13 in the Department of Statistics at Stanford University\, supervised by Emmanuel Candès. She received her PhD in Statistics at the University of Chicago in 2012\, advised by Mathias Drton and Nati Srebro\, and an MS in Mathematics at the University of Chicago in 2009. Prior to graduate school\, she was a mathematics teacher at the Park School of Baltimore from 2005 to 2007.
URL:https://ifds.info/event/ifds-all-hands-rina-foygel-barber/
LOCATION:WI
CATEGORIES:Monthly All-Hands
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Los_Angeles:20210510T133000
DTEND;TZID=America/Los_Angeles:20210510T143000
DTSTAMP:20260516T064910Z
CREATED:20210521T152822Z
LAST-MODIFIED:20210521T153150Z
UID:1265-1620653400-1620657000@ifds.info
SUMMARY:IFDS Ideas Forum: Subhojyoti Mukherjee
DESCRIPTION:
URL:https://ifds.info/event/idfs-ideas-forum-subjoyoti-mukherjee/
LOCATION:WI
CATEGORIES:IFDS Ideas Forum
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210512T123000
DTEND;TZID=America/Chicago:20210512T133000
DTSTAMP:20260516T064910Z
CREATED:20210202T202020Z
LAST-MODIFIED:20210512T132548Z
UID:1027-1620822600-1620826200@ifds.info
SUMMARY:SILO: Rashmi Vinayak
DESCRIPTION:Title: Convertible Codes: Efficient Conversion of Coded Data in Large-scale Storage Systems \nAbstract:\nIn large-scale data storage systems\, failures are the norm in day-to-day operations. To protect data in the face of such failures\, erasure codes (a tool from coding theory) are employed to store data in a redundant fashion. In this setting\, a set of k data blocks to be stored is encoded using an [n\, k] code to generate n blocks that are then stored on distinct storage devices. In a recent work\, we showed that the failure rates of storage devices vary considerably over time\, and that dynamically tuning the parameters n and k of the code provides a significant reduction in storage cost. However\, traditional codes suffer from prohibitively high resource overheads in changing the code parameters on already encoded data. \nMotivated by this application\, in this talk\, we:\n1. Present a new theoretical framework to formalize the notion of “code conversion”\, the process of converting data encoded using an [n\, k] code into data encoded using a code with different parameters [n’\, k’]\, while maintaining desired decodability properties\,\n2. Introduce “convertible codes”\, a new class of codes that enable resource-efficient conversion\,\n3. Prove tight bounds on two important metrics for code conversion: (a) the number of nodes accessed\, and (b) bandwidth consumed\,\n4. Present practical constructions of convertible codes for a broad range of parameters. \nBio:\nRashmi Vinayak is an assistant professor in the Computer Science department at Carnegie Mellon University. Her research interests broadly lie in computer/networked systems and information/coding theory\, and the wide spectrum of intersection between the two areas. Her current focus is on fault tolerance and resource efficiency in data systems.
 Rashmi is a recipient of the NSF CAREER Award\, the Tata Institute of Fundamental Research Memorial Lecture Award 2020\, the Facebook Distributed Systems Research Award 2019\, the Google Faculty Research Award 2018\, the Facebook Communications and Networking Research Award 2017\, and the UC Berkeley Eli Jury Award 2016 for “outstanding achievement in the area of systems\, communications\, control\, or signal processing”. Her work has received the USENIX NSDI 2021 Community (Best Paper) Award\, and the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012. Rashmi received her Ph.D. from UC Berkeley in 2016\, and was a postdoctoral scholar at UC Berkeley’s AMPLab/RISELab from 2016 to 2017. During her Ph.D. studies\, Rashmi was a recipient of the Facebook Fellowship 2012-13\, the Microsoft Research PhD Fellowship 2013-15\, and the Google Anita Borg Memorial Scholarship 2015-16.\nWebpage: http://www.cs.cmu.edu/~rvinayak/ \n \nUNTIL FURTHER NOTICE: Seminars are virtual. Sign up for the SILO email list to receive the links to each talk at https://groups.google.com/ and browse for silo
URL:https://ifds.info/event/silo-05122021/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210514T084500
DTEND;TZID=America/Chicago:20210514T150000
DTSTAMP:20260516T064910Z
CREATED:20210512T165210Z
LAST-MODIFIED:20210512T165210Z
UID:1250-1620981900-1621004400@ifds.info
SUMMARY:Data Science Day
DESCRIPTION:Uses and Abuses of Data in Higher Education \nVirtual Event via Zoom with Opportunities for Engagement\nDetails and registration: https://citl.ucsc.edu/data-science-day-2021/
URL:https://ifds.info/event/data-science-day/
LOCATION:WI
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/Chicago:20210519T123000
DTEND;TZID=America/Chicago:20210519T133000
DTSTAMP:20260516T064910Z
CREATED:20210202T202131Z
LAST-MODIFIED:20210517T133821Z
UID:1029-1621427400-1621431000@ifds.info
SUMMARY:SILO: Dimitris Tsipras
DESCRIPTION:Title: Robust Machine Learning: The Worst-Case and Beyond \nAbstract:\nOne of the key challenges in the real-world deployment of machine learning models is their brittleness: their performance significantly degrades when exposed to even small variations of their training environments. \nHow can we build ML models that are more robust? \nIn this talk\, I will present a methodology for training models that are invariant to a broad family of worst-case input perturbations. I will then describe how such robust learning leads to models that learn fundamentally different data representations\, and how this can be useful even outside the adversarial context. Finally\, I will discuss model robustness beyond the worst-case: ways in which our models fail to generalize and how we can guide further progress on this front. \nBio:\nDimitris Tsipras is a PhD student in the MIT EECS Department\, advised by Aleksander Mądry. His work revolves around the reliability and robustness of machine learning systems\, as well as the science of modern machine learning. He is currently supported by a Facebook PhD Fellowship. \nUNTIL FURTHER NOTICE: Seminars are virtual. Sign up for the SILO email list to receive the links to each talk at https://groups.google.com/ and browse for silo
URL:https://ifds.info/event/silo-05192021/
LOCATION:WI
CATEGORIES:SILO
ORGANIZER;CN="Rob Nowak":MAILTO:rdnowak@wisc.edu
END:VEVENT
END:VCALENDAR