From fa7790a6604ee043320023ecd6c90c2cefdbabc4 Mon Sep 17 00:00:00 2001
From: Julia Stoyanovich
Date: Mon, 30 Oct 2023 08:40:54 -0400
Subject: [PATCH] various changes

---
 _pages/people.md          |  2 +-
 _pages/people_dropdown.md |  4 ++--
 _pages/publications.md    | 35 +++++++++++++++++++++++++++++------
 3 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/_pages/people.md b/_pages/people.md
index cc64d666..f4ce5580 100644
--- a/_pages/people.md
+++ b/_pages/people.md
@@ -131,7 +131,7 @@ nav_order: 1
 
-<h2 id="affiliates">Visitors</h2>
+<h2 id="visitors">Visitors</h2>
 
diff --git a/_pages/people_dropdown.md b/_pages/people_dropdown.md
index a6b5c7ac..4ef519bc 100644
--- a/_pages/people_dropdown.md
+++ b/_pages/people_dropdown.md
@@ -8,6 +8,6 @@ children:
   - title: Team
     permalink: /people/#team
   - title: divider
-  - title: Visitors & Affiliates
-    permalink: /people/#affiliates
+  - title: Visitors
+    permalink: /people/#visitors
 ---
diff --git a/_pages/publications.md b/_pages/publications.md
index cac5a207..62a4e20d 100644
--- a/_pages/publications.md
+++ b/_pages/publications.md
@@ -48,26 +48,49 @@ appropriate?). Our work on data-centric responsible AI and on responsible
 data management is based on the observation that the decisions we make
 during data collection and preparation profoundly impact the robustness,
 fairness, and interpretability of the systems
-we build. 
+we build.
 
 {% for y in page.years %}
 
 {{y}}
 
 {% bibliography -f papers -q @*[year={{y}} && keywords ^= *data]* %}
 {% endfor %}
 
 Education
 
-Insert a blurb about education here.
-
+
+We cannot understand the impact – and especially the risks – of AI
+systems without active and thoughtful participation of everyone in
+society, either directly or through their trusted representatives. To
+think otherwise is to go against our democratic values. To enable
+broad participation, we have been developing responsible AI curricula
+and methodologies for different stakeholders: university students,
+working practitioners, and the public at large. In this section, you
+will find our publications on responsible AI education. Take a look at
+the education area of the site to access our
+courses and other open-source materials we have developed.
+
 {% for y in page.years_edu %}
 
 {{y}}
 
 {% bibliography -f papers -q @*[year={{y}} && keywords ^= *edu ]* %}
 {% endfor %}
 
 Explainability
 
-Insert a blurb about explainability here.
-
-{% for y in page.years %}
+
+There is a variety of terms associated with this topic: transparency,
+interpretability, explainability, intelligibility. But let’s not get
+too tangled up in terminology. The main point is that we need to
+allow people to understand the data, the operation, and the decisions
+or predictions of an AI system, and to also understand why these
+decisions or predictions are made. This understanding is critical
+because it allows people to exercise agency and take control over
+their interactions with AI systems. And so, no matter what
+terminology we use, the overarching idea behind transparency & friends
+is to expose the “knobs of responsibility” to people, as a means to
+support the responsible design, development, use, and oversight of AI
+systems.
+
+
+{% for y in page.years %}
 
 {{y}}
 
 {% bibliography -f papers -q @*[year={{y}} && keywords ^= *explainability]* %}
 {% endfor %}
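
Note for anyone applying this patch locally: the loops in the publications.md hunk assume the page already defines years and years_edu lists in its front matter and that jekyll-scholar is configured with a papers bibliography (the -f papers flag conventionally resolves to _bibliography/papers.bib); neither of those pieces appears in this patch. Below is a minimal sketch of such a page; the list values, layout, permalink, and the heading style are illustrative placeholders, not taken from this change, while the bibliography query mirrors the one used in the diff above.

    ---
    layout: page
    permalink: /publications/
    title: publications
    years: [2023, 2022, 2021]       # placeholder values; the real list lives in publications.md
    years_edu: [2023, 2022, 2021]   # placeholder values
    ---

    {% for y in page.years %}
    ### {{ y }}
    {% bibliography -f papers -q @*[year={{y}} && keywords ^= *data]* %}
    {% endfor %}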