Title: Consequential Machine Learning
Abstract: Recent findings have cautioned about the potential unfairness issues that can arise when deploying a machine learning model. Correspondingly, treatments have been proposed to add fairness guarantees when training such statistical models. An often overlooked question is “what happens after?” One can discover ways to improve the model after deployment, once feedback arrives from the human subjects who experienced these algorithmic treatments. But the unfortunate fact is that once a model is deployed, it will impact later decisions and society for a long period of time. In this talk, I will introduce some recent works that initiate our attempt to understand the consequences of deploying a machine learning algorithm. We will first show that an apparently fair algorithm does not necessarily reduce disparities among the societal groups we aim to protect. The long-term impacts of a sequence of deployed machine learning models depend on the dynamics they induce in the underlying populations’ qualifications. I will then discuss how the design and deployment of machine learning can induce proper dynamics by offering human agents actionable recourse to improve their qualifications.
All Hands titles and abstracts are tentative, as of the posting date.