Update Shyam's keynote
mporcheron authored Sep 3, 2024
1 parent 13c155a commit 364792d
Showing 2 changed files with 6 additions and 2 deletions.
6 changes: 5 additions & 1 deletion _data/prg_presentations.yml
@@ -200,7 +200,10 @@ session_7:
session_8:
- id: keynote_1
type: keynote
abstract: <b>S. Shyam Sundar</b> (PhD, Stanford University) is Evan Pugh University Professor and James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory, and Director of the Center for Socially Responsible Artificial Intelligence at Penn State University. Prof. Sundar is a theorist as well as an experimentalist. His theoretical contributions include several original models on the social and psychological consequences of communication technology, such as the Modality-Agency-Interactivity-Navigability (MAIN) Model and the Theory of Interactive Media Effects (TIME), along with its extension to Human-AI Interaction (HAII-TIME). His research examines social and psychological effects of interactive media, specifically the role played by technological affordances in shaping user experience of mediated communications. His current research pertains to the psychological effects of Human-AI interaction in media contexts, ranging from personalization and recommendation to fake news and content moderation. His research portfolio includes extensive examination of user responses to online sources, including machine sources such as chatbots and smart speakers. His research is supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH), among others. He is editor of the first-ever Handbook of the Psychology of Communication Technology (Blackwell Wiley, 2015). He served as editor-in-chief of the Journal of Computer-Mediated Communication from 2013 to 2017.
abstract: >
This keynote talk will discuss psychological aspects of autonomous systems by drawing out the tension between machine agency and human agency, especially as they play out in the context of online media platforms. While automated features offer many conveniences, they also threaten our privacy, promote addictive use, and lead us down rabbit holes of extreme content, making us vulnerable to misinformation. If users are to avoid such negative consequences, they will need to be more deliberate and mindful in their interactions, which might detract from the very purpose of automation. This poses a design challenge, which could be addressed by making automation a technological affordance and drawing upon concepts and mechanisms from the speaker’s model of Human-AI Interaction based on his Theory of Interactive Media Effects (HAII-TIME), as we attempt to build socially responsible trust in autonomous systems.
<br><br>
<i><b>S. Shyam Sundar</b> (PhD, Stanford University) is Evan Pugh University Professor and James P. Jimirro Professor of Media Effects, co-director of the Media Effects Research Laboratory, and Director of the Center for Socially Responsible Artificial Intelligence at Penn State University. Prof. Sundar is a theorist as well as an experimentalist. His theoretical contributions include several original models on the social and psychological consequences of communication technology, such as the Modality-Agency-Interactivity-Navigability (MAIN) Model and the Theory of Interactive Media Effects (TIME), along with its extension to Human-AI Interaction (HAII-TIME). His research examines social and psychological effects of interactive media, specifically the role played by technological affordances in shaping user experience of mediated communications. His current research pertains to the psychological effects of Human-AI interaction in media contexts, ranging from personalization and recommendation to fake news and content moderation. His research portfolio includes extensive examination of user responses to online sources, including machine sources such as chatbots and smart speakers. His research is supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH), among others. He is editor of the first-ever Handbook of the Psychology of Communication Technology (Blackwell Wiley, 2015). He served as editor-in-chief of the Journal of Computer-Mediated Communication from 2013 to 2017.</i>
session_9:
- id: paper_17
@@ -219,6 +222,7 @@ session_9:
session_10:
- id: plenary_2
type: plenary
collapsable: true
abstract: <b>Maria De-Arteaga</b> is an Assistant Professor in the Information, Risk and Operations Management (IROM) Department at the University of Texas at Austin, where she is also a core faculty member in the Machine Learning Laboratory and an affiliated faculty member of Good Systems. She holds a joint PhD in Machine Learning and Public Policy and an M.Sc. in Machine Learning, both from Carnegie Mellon University, and a B.Sc. in Mathematics from Universidad Nacional de Colombia. Her research focuses on the risks and opportunities of using AI to support experts’ decisions in high-stakes settings, with a particular interest in algorithmic fairness and human-AI collaboration. As part of her work, she characterizes risks of bias and erosion of decision quality when relying on AI, and develops algorithms and sociotechnical systems to enable responsible human-AI complementarity. She currently serves on the Executive Committee of the ACM FAccT Conference.

session_11:
2 changes: 1 addition & 1 deletion _data/prg_sessions.yml
@@ -167,7 +167,7 @@

- id: session_8
type: Keynote
title: "Keynote"
title: "Toward Trustworthy Automation: Ensuring Human Agency with Warranted Cues and Interactive Actions"
location: Etter-Harbin Alumni Center
location_link: https://maps.app.goo.gl/u9t9NXbt6fMkBym96
start_time: 2024-09-18T16:00:00.000Z
