diff --git a/_data/prg_presentations.yml b/_data/prg_presentations.yml
index 3c27c1a..d1348cb 100644
--- a/_data/prg_presentations.yml
+++ b/_data/prg_presentations.yml
@@ -13,7 +13,8 @@ workshops_am:
     collapsable: true
     title: "Coming Together: Addressing Ethical AI with Diverse Teams and Perspectives"
     abstract: >
-      It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept relating to autonomous systems as is outlined in AI Ethics guidelines, frameworks, and regulation. While there is universal agreement on the importance of trust and there are common key principles, there is no agreement on what defines trust and how to develop, design and deploy trustworthy systems. Furthermore, different disciplines approach trust in various ways. This workshop aims to facilitate interactive discussions on how to address issues of trustworthiness in autonomous AI systems through an interdisciplinary lens. Workshop participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following a presentation, participants will be encouraged to approach AI from a set of diverse lenses with the goal of reaching consensus on key ethical issues through a case study challenge. This event is designed to broadly appeal to researchers from various backgrounds (both technical and non-technical) working on or interested in issues at the intersection of AI and ethics.
+      It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept for autonomous systems, as outlined in AI ethics guidelines, frameworks, and regulation. While there is universal agreement on the importance of trust, and there are common key principles, there is no agreement on what defines trust or on how to develop, design, and deploy trustworthy systems. Furthermore, different disciplines approach trust in different ways. This workshop aims to facilitate interactive discussion of how to address trustworthiness in autonomous AI systems through an interdisciplinary lens. Participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following the presentation, participants will be encouraged to approach AI through a set of diverse lenses, with the goal of reaching consensus on key ethical issues through a case-study challenge. This event is designed to appeal broadly to researchers from various backgrounds (both technical and non-technical) working on, or interested in, issues at the intersection of AI and ethics.
+    website: https://sites.google.com/view/coming-together/home
   - id: workshop_3
     type: workshop
@@ -80,4 +81,4 @@ keynote:
 plenary_2:
   - id: plenary_2
     type: plenary
-    abstract: Maria De-Arteaga is an Assistant Professor at the Information, Risk and Operation Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and a M.Sc. in Machine Learning, both from Carnegie Mellon University, and a. B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of her work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves in the Executive Committee of the ACM FAccT Conference.
\ No newline at end of file
+    abstract: Maria De-Arteaga is an Assistant Professor in the Information, Risk, and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of this work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves on the Executive Committee of the ACM FAccT Conference.