Update prg_presentations.yml
mporcheron authored Aug 26, 2024
1 parent a3d385d commit 218f315
Showing 1 changed file with 3 additions and 2 deletions.
5 changes: 3 additions & 2 deletions _data/prg_presentations.yml
@@ -13,7 +13,8 @@ workshops_am:
collapsable: true
title: "Coming Together: Addressing Ethical AI with Diverse Teams and Perspectives"
abstract: >
    It is important to understand what we are willing to entrust to AI as society integrates these systems into more facets of public, private, and commercial life. Trust is a key concept relating to autonomous systems, as outlined in AI ethics guidelines, frameworks, and regulation. While there is universal agreement on the importance of trust, and there are common key principles, there is no agreement on what defines trust or how to develop, design, and deploy trustworthy systems. Furthermore, different disciplines approach trust in various ways. This workshop aims to facilitate interactive discussions on how to address issues of trustworthiness in autonomous AI systems through an interdisciplinary lens. Workshop participants will hear insights from guest speaker Dr. Steve Kramer, Chief Scientist at KUNGFU.AI, an Austin-based AI consulting firm providing interdisciplinary AI expertise. Following a presentation, participants will be encouraged to approach AI from a set of diverse lenses with the goal of reaching consensus on key ethical issues through a case study challenge. This event is designed to broadly appeal to researchers from various backgrounds (both technical and non-technical) working on or interested in issues at the intersection of AI and ethics.
website: https://sites.google.com/view/coming-together/home

- id: workshop_3
type: workshop
@@ -80,4 +81,4 @@ keynote:
plenary_2:
- id: plenary_2
type: plenary
abstract: <b>Maria De-Arteaga</b> is an Assistant Professor in the Information, Risk, and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of her work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves on the Executive Committee of the ACM FAccT Conference.
