From fa64afe1527d4940eff13d564c275229f574b64d Mon Sep 17 00:00:00 2001
From: Candace Savonen
Date: Thu, 22 Aug 2024 08:11:24 -0400
Subject: [PATCH 1/4] Make word doc

---
 _output.yml | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/_output.yml b/_output.yml
index 530df4b5..329fe05a 100644
--- a/_output.yml
+++ b/_output.yml
@@ -16,4 +16,6 @@ bookdown::gitbook:

This content was published with bookdown by:

The Fred Hutch Data Science Lab

Style adapted from: rstudio4edu-book (CC-BY 2.0)

-

Click here to provide feedback

\ No newline at end of file
+

Click here to provide feedback

+bookdown::word_document2:
+  toc: true

From 0071c5c0e88bd2af4b3ad78d5aefaec87746c33f Mon Sep 17 00:00:00 2001
From: Candace Savonen
Date: Thu, 22 Aug 2024 08:54:06 -0400
Subject: [PATCH 2/4] Allow html option

---
 _output.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/_output.yml b/_output.yml
index 329fe05a..c88f2af7 100644
--- a/_output.yml
+++ b/_output.yml
@@ -19,3 +19,4 @@ bookdown::gitbook:

 bookdown::word_document2:
   toc: true
+  always_allow_html: true

From a8d179998ed71db33308d4cbd73812488ed2e66b Mon Sep 17 00:00:00 2001
From: Candace Savonen
Date: Thu, 22 Aug 2024 09:33:15 -0400
Subject: [PATCH 3/4] Make it doc compatible

---
 01e-AI_Possibilities-possibilities.Rmd |  9 ++++++---
 02b-Avoiding_Harm-concepts.Rmd         | 10 ++++------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/01e-AI_Possibilities-possibilities.Rmd b/01e-AI_Possibilities-possibilities.Rmd
index caaf7974..29e67e1f 100644
--- a/01e-AI_Possibilities-possibilities.Rmd
+++ b/01e-AI_Possibilities-possibilities.Rmd
@@ -1,3 +1,6 @@
+---
+always_allow_html: yes
+---
 ```{r, include = FALSE}
 ottrpal::set_knitr_image_path()
@@ -315,14 +318,14 @@ Here is the response:
 Here is a toy time series dataset tracking individuals, time points, and coffee consumption:
 
 ```{r echo = FALSE, warning = FALSE, message = FALSE}
-install.packages("kableExtra")
-library(kableExtra)
+install.packages("kableExtra", repos = "https://cloud.r-project.org")
+library(magrittr)
 data.frame(
   ID = c(rep(1:3, each = 3)),
   Time_point = c(rep(1:3, times = 3)),
   Coffee_cups = c(2,3,1,4,2,3,1,0,2)
 ) %>%
-  kbl()
+  kableExtra::kbl()
 ```
 
 This tracks 3 individuals over 3 time points (days) and their daily coffee consumption in cups. Individual 1 drank 2 cups on day 1, 3 cups on day 2, and 1 cup on day 3. Individual 2 drank 4 cups on day 1, 2 cups on day 2, and 3 cups on day 3. Individual 3 drank 1 cup on day 1, 0 cups on day 2, and 2 cups on day 3.
diff --git a/02b-Avoiding_Harm-concepts.Rmd b/02b-Avoiding_Harm-concepts.Rmd
index 665c81b5..05b37cd8 100644
--- a/02b-Avoiding_Harm-concepts.Rmd
+++ b/02b-Avoiding_Harm-concepts.Rmd
@@ -1,4 +1,6 @@
-
+---
+always_allow_html: yes
+---
 
 ```{r, include = FALSE}
@@ -258,11 +260,7 @@ In the flip side, AI has the potential if used wisely, to reduce health inequiti
 * Consider the possible outcomes of the use of content created by newly developed AI tools. Consider if the content could possibly be used in a manner that will result in discrimination.
 
-See @belenguer_ai_2022 for more guidance. We also encourage you to check out the following video for a classic example of bias in AI:
-
-```{r, fig.align="center", fig.alt = "video", echo=FALSE, out.width="90%"}
-knitr::include_url("https://www.youtube.com/embed/TWWsW1w-BVo?si=YLGbpVKrUz5b56vM")
-```
+See @belenguer_ai_2022 for more guidance. We also encourage you to check out [this video for a classic example of bias in AI](https://www.youtube.com/embed/TWWsW1w-BVo?si=YLGbpVKrUz5b56vM).
 
 For further details check out this [course](https://www.coursera.org/learn/algorithmic-fairness) on Coursera about building fair algorithms. We will also describe more in the next section.

From 361a737f22645943a8e9107d5de57bbba521a8ca Mon Sep 17 00:00:00 2001
From: Candace Savonen
Date: Thu, 22 Aug 2024 09:37:05 -0400
Subject: [PATCH 4/4] Fix URL

---
 02b-Avoiding_Harm-concepts.Rmd | 42 +++++++++++++++++-----------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/02b-Avoiding_Harm-concepts.Rmd b/02b-Avoiding_Harm-concepts.Rmd
index 05b37cd8..6d21ecc1 100644
--- a/02b-Avoiding_Harm-concepts.Rmd
+++ b/02b-Avoiding_Harm-concepts.Rmd
@@ -11,7 +11,7 @@
 ottrpal::set_knitr_image_path()
 
 # Societal Impact
 
-There is the potential for AI to dramatically influence society. It is our responsibility to proactively think about what uses and impacts we consider to be useful and appropriate and those we consider harmful and inappropriate. 
+There is the potential for AI to dramatically influence society. It is our responsibility to proactively think about what uses and impacts we consider to be useful and appropriate and those we consider harmful and inappropriate.
 `r config::get("disclaimer")`
 
@@ -20,7 +20,7 @@ There is the potential for AI to dramatically influence society. It is our respo
 ## Guidelines for Responsible Development and Use of AI.
 
-There are currently several guidelines for the responsible use and development of AI: 
+There are currently several guidelines for the responsible use and development of AI:
 
 - United States [Blueprint for an AI Bill of Rights](https://www.whitehouse.gov/ostp/ai-bill-of-rights/)
 - United States [Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)
@@ -48,12 +48,12 @@ In this chapter we will discuss the some of the major ethical considerations in
 1) **Inappropriate Use and Lack of Oversight** - There are situations in which using AI might not be appropriate now or in the future. A lack of human monitoring and oversight can result in harm.
 1) **Bias Perpetuation and Disparities** - AI models are built on data and code that were created by biased humans, thus bias can be further perpetuated by using AI tools. In some cases bias can even be exaggerated. This combined with differences in access may exacerbate disparities.
 1) **Security and Privacy Issues** - Data for AI systems should be collected in an ethical manner that is mindful of the rights of the individuals the data comes from. Data around usage of those tools should also be collected in an ethical manner. Commercial tool usage with proprietary or private data, code, text, images or other files may result in leaked data not only to the developers of the commercial tool, but potentially also to other users.
-1) **Climate Impact** - As we continue to use more and more data and computing power, we need to be ever more mindful of how we generate the electricity to store and perform our computations. 
+1) **Climate Impact** - As we continue to use more and more data and computing power, we need to be ever more mindful of how we generate the electricity to store and perform our computations.
 1) **Transparency** - Being transparent about what AI tools you use where possible, helps others to better understand how you made decisions or created any content that was derived by AI, as well as the possible sources that the AI tools might have used when helping you. It may also help with future unknown issues related to the use of these tools.
 
 :::{.ethics}
-Keep in mind that some fields, organizations, and societies have guidelines or requirements for using AI, like for example the policy for the use of large language models for the [International Society for Computational Biology](https://www.iscb.org/iscb-policy-statements/iscb-policy-for-acceptable-use-of-large-language-models). Be aware of the requirements/guidelines for your field. 
+Keep in mind that some fields, organizations, and societies have guidelines or requirements for using AI, like for example the policy for the use of large language models for the [International Society for Computational Biology](https://www.iscb.org/iscb-policy-statements/iscb-policy-for-acceptable-use-of-large-language-models). Be aware of the requirements/guidelines for your field.
 :::
 
 Note that this is an incomplete list; additional ethical concerns will become apparent as we continue to use these new technologies. We highly suggest that users of these tools be **careful to learn more about the specific tools they are interested in** and to be **transparent** about the use of these tools, so that as new ethical issues emerge, we will be better prepared to understand the implications.
 
@@ -77,7 +77,7 @@ ottrpal::include_slide("hhttps://docs.google.com/presentation/d/1L6-8DWn028c1o0p
 **For decision makers about AI use:**
 
-* Consider how the content or decisions generated by an AI tool might be used by others. 
+* Consider how the content or decisions generated by an AI tool might be used by others.
 * Continually audit how AI tools that you are using are performing.
 * Do not implement changes to systems or make important decisions using AI tools without human oversight.
@@ -102,10 +102,10 @@ AI systems should be thought of as better computers as opposed to replacements f
 
 ```{r, fig.align='center', out.width="100%", echo = FALSE}
 ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9gwXmz90BRcy_PjPqb683nbk1gHQ/edit#slide=id.g2aaead717c1_8_69")
 ```
 
-While there are some contexts in which human labor has already been replaced by robotics and AI, studies show that humans tend to prefer human-made goods when those goods are not strictly functional (@bellaiche_humans_2023, @granulo_preference_2021). 
-It has been proposed that there will be radical shifts in the way that humans work in many fields including health care, banking, retail, security, and more (@selenko_artificial_2022). Yet we need to implement changes gradually to allow for time to better understand the consequences and mindfully consider how such changes impact human employment and well-being. 
+While there are some contexts in which human labor has already been replaced by robotics and AI, studies show that humans tend to prefer human-made goods when those goods are not strictly functional (@bellaiche_humans_2023, @granulo_preference_2021).
+It has been proposed that there will be radical shifts in the way that humans work in many fields including health care, banking, retail, security, and more (@selenko_artificial_2022). Yet we need to implement changes gradually to allow for time to better understand the consequences and mindfully consider how such changes impact human employment and well-being.
-@selenko_artificial_2022 have proposed a [framework](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) for considering the impact of AI usage on human workers to promote benefit and avoid harm. It suggests considering usage in a few different ways: AI for complementing work, AI for replacing tasks, and AI for generating new tasks. It suggests considering how such usages might reduce tedious or dangerous work, while also preserving work-related benefits such as self-esteem, belonging, and perceived meaningfulness. See [here](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) for the article. +@selenko_artificial_2022 have proposed a [framework](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) for considering the impact of AI usage on human workers to promote benefit and avoid harm. It suggests considering usage in a few different ways: AI for complementing work, AI for replacing tasks, and AI for generating new tasks. It suggests considering how such usages might reduce tedious or dangerous work, while also preserving work-related benefits such as self-esteem, belonging, and perceived meaningfulness. See [here](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) for the article. @@ -121,7 +121,7 @@ democracy and human rights." (@latar_robot_2015) > "This potential threat to the profession of human journalism is viewed by some optimistic journalists merely as another tool that will free them of the necessity to conduct costly and, at times, dangerous investigations. The robot journalists will -provide them, so the optimists hope, with an automated draft for a story that they will edit and enrich with their in-depth analysis, their perspectives and their narrative talents. +provide them, so the optimists hope, with an automated draft for a story that they will edit and enrich with their in-depth analysis, their perspectives and their narrative talents. 
> The more pessimistic journalists view the new robot journalists as a real threat to their livelihood and style of working and living. @@ -138,7 +138,7 @@ Computer science is a field that has historically lacked diversity. It is also c * Avoid thinking that content by AI tools must be better than that created by humans, as this is not true (@sinz_engineering_2019). * Recall that humans wrote the code to create these AI tools and that the data used to train these AI tools also came from humans. Many of the large commercial AI tools were trained on websites and other content from the internet. * Be transparent where possible about **when you do or do not use AI tools**, give credit to the humans involved as much as possible. -* Make decisions about using AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers. +* Make decisions about using AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers.
@@ -148,7 +148,7 @@ Computer science is a field that has historically lacked diversity. It is also c **For decision makers about AI development:** * Be transparent about the data used to generate tools as much as possible and provide information about what humans may have been involved in the creation of the data. -* Make decisions about creating AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers. +* Make decisions about creating AI tools based on ethical [frameworks](https://journals.sagepub.com/doi/full/10.1177/09637214221091823) in terms of considering the impact on human workers.

@@ -159,7 +159,7 @@ A new term in the medical field called [AI paternalism](https://www.technologyre ## Inappropriate Use and Lack of Oversight -There are situations in which we may, as a society, not want an automated response. There may even be situations in which we do not want to bias our own human judgment by that of an AI system. There may be other situations where the efficiency of AI may also be considered inappropriate. While many of these topics are still under debate and AI technology continues to improve, we challenge the readers to consider such cases given what is currently possible and what may be possible in the future. +There are situations in which we may, as a society, not want an automated response. There may even be situations in which we do not want to bias our own human judgment by that of an AI system. There may be other situations where the efficiency of AI may also be considered inappropriate. While many of these topics are still under debate and AI technology continues to improve, we challenge the readers to consider such cases given what is currently possible and what may be possible in the future. Some reasons why AI may not be appropriate for certain situation include: @@ -239,8 +239,8 @@ In the flip side, AI has the potential if used wisely, to reduce health inequiti **For decision makers about AI use:** * Be aware of the biases in the data that is used to train AI systems. -* Check what data was used to train the AI tools that you use where possible. Tools that are more transparent are likely more ethically developed. -* Check if the developers of the AI tools you are using were/are considerate of bias issues in their development where possible. Tools that are more transparent are likely more ethically developed. +* Check what data was used to train the AI tools that you use where possible. Tools that are more transparent are likely more ethically developed. 
+* Check if the developers of the AI tools you are using were/are considerate of bias issues in their development where possible. Tools that are more transparent are likely more ethically developed. * Consider the possible outcomes of the use of content created by AI tools. Consider if the content could possibly be used in a manner that will result in discrimination. @@ -255,7 +255,7 @@ In the flip side, AI has the potential if used wisely, to reduce health inequiti - Are the data adequately inclusive? Examples could include a lack of data about certain ethnic or gender groups or disabled individuals, which could result in code that does not adequately consider these groups, ignores them all together, or makes false associations. - Are the data of high enough quality? Examples could include data that is false about certain individuals. * Evaluate the code for new AI tools for biases as it is developed. Check if any of the criteria for weighting certain data values over others are rooted in bias. -* Continually audit the code for potentially biased responses. Potentially seek expert help. +* Continually audit the code for potentially biased responses. Potentially seek expert help. * Be transparent with users about potential bias risks. * Consider the possible outcomes of the use of content created by newly developed AI tools. Consider if the content could possibly be used in a manner that will result in discrimination. @@ -338,18 +338,18 @@ Are there any possible data security or privacy issues associated with the plan ## Climate Impact -AI can help humans to innovate ways to improve efficiency and to devise strategies to help mitigate climate issues (@jansen_climate_2023; @cowls_ai_2023). Importantly this needs to be done in a manner with social justice in mind, as often those that have the least resources deal with climate issues are also the most likely to be impacted (@jansen_climate_2023; @bender_dangers_2021). 
+AI can help humans to innovate ways to improve efficiency and to devise strategies to help mitigate climate issues (@jansen_climate_2023; @cowls_ai_2023). Importantly this needs to be done in a manner with social justice in mind, as often those that have the least resources deal with climate issues are also the most likely to be impacted (@jansen_climate_2023; @bender_dangers_2021). A few organizations are working on supporting the use of AI for climate crises mitigation uses such as: -- AI for the Plane: https://www.aifortheplanet.org/en -- Climate Change AI (CCAI): https://www.climatechange.ai/about +- AI for the Plane: https://www.aifortheplanet.org +- Climate Change AI (CCAI): https://www.climatechange.ai/about However, AI also poses a number of climate risks (@bender_dangers_2021; @hulick_training_2021; @jansen_climate_2023; @cowls_ai_2023) . 1) The data storage and computing resources needed for the development of AI tools could exacerbate climate challenges (@bender_dangers_2021) -2) If not designed carefully, AI could also spread false solutions for climate crises or promote inefficient practices (@jansen_climate_2023). +2) If not designed carefully, AI could also spread false solutions for climate crises or promote inefficient practices (@jansen_climate_2023). 3) Differences in access to AI technologies may exacerbate social inequities related to climate (@hulick_training_2021) ```{r, fig.align='center', out.width="100%", echo = FALSE, fig.alt= "A cartoon of robots camping."} @@ -384,7 +384,7 @@ ottrpal::include_slide("https://docs.google.com/presentation/d/1L6-8DWn028c1o0p9 In the United States Blueprint for the AI Bill of Rights, it states: > You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. - + This transparency is important for people to understand how decisions are made using AI, which can be especially vital to allow people to contest decisions. 
 It also better helps us to understand what AI systems may need to be fixed or adapted if there are issues.
@@ -424,7 +424,7 @@ Here is a summary of all the tips we suggested:
 * Be aware that AI systems may behave in unexpected ways. Implement new AI solutions slowly to account for the unexpected. Test those systems and try to better understand how they work in different contexts.
 * Be aware of the security and privacy concerns for AI, be sure to use the right tool for the job and train those at your institute appropriately.
 * Consider the climate impact of your AI usage and proceed in a manner that makes efficient use of resources.
-* Be transparent about your use of AI. 
+* Be transparent about your use of AI.
 
 Overall, we hope that awareness of these concerns and the tips we shared will help us all use AI tools more responsibly. We recognize, however, that this is emerging technology and more ethical issues will emerge as we continue to use these tools in new ways. Staying up-to-date on the current ethical considerations will also help us all continue to use AI responsibly.
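Taken together, patches 1 and 2 leave the tail of `_output.yml` looking roughly like the sketch below. This is assembled only from the hunks in this series; the existing `bookdown::gitbook:` keys appear in the diffs solely as context, so they are abbreviated here with a comment, and the two-space indentation is an assumption based on YAML convention:

```yaml
bookdown::gitbook:
  # ... existing gitbook options (before/after text, feedback link) omitted ...

bookdown::word_document2:
  toc: true
  always_allow_html: true
```

The `always_allow_html: true` key added in patch 2 appears to be what allows HTML-producing chunks, such as the `kableExtra::kbl()` table patched in `01e-AI_Possibilities-possibilities.Rmd`, to knit into the Word output instead of stopping the `bookdown::word_document2` render with an error.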