Ethics Class Discussion Questions

We will have 5 ethics discussions throughout the semester. Students will read various articles and watch videos related to different topics. Below are the reading/watching assignments and associated discussion questions.


Ethics Discussion #1: Bias, Social Media, Current vs Future Harms

We will try to cover a few different topics as a general introduction to AI and ethics. Read the links below and answer the 6 questions on Canvas. I will have more questions for us to discuss as a group in class.

  1. AI Bias

    Read this article about wrongful arrests due to facial recognition, as well as this horrific article about how predictive policing led to an innocent man being shot twice. Finally, read this article about a generative AI SNAFU back in 2020, and watch this one-minute video by Latanya Sweeney (click here to see an opposing point of view by John McWhorter).
    • Q 1.1 How do people in these articles posit that it is possible for algorithms to be "biased"? (Come to class ready to expand on this as well.)
    • Q 1.2 According to these articles, why might victims of AI misclassification not seek redress?
  2. AI in Social Media

    Read this article on how Facebook groups led to the proliferation of white supremacist content. Then, skim this article about layoffs of trust and safety teams at large tech companies (also optionally read this article about Facebook whistleblower Sophie Zhang, and note that at Twitter this challenge was shut down after leadership changed). Finally, skim this article about the "enshittification" of social platforms as they scale.
    • Q 2.1: Where is AI/automation used in social media platforms?
    • Q 2.2: What are the perverse incentives social media platforms face when choosing to automate recommendations, and when choosing to automate content moderation?
  3. Current vs Future Harms of AI/Automation

    Browse through some hypothetical future misuses of AI at this link. Then, read a rebuttal article by Bender/Hanna at this link.
    • Q 3.1: Give three examples of current harms of AI that Bender/Hanna mention. Give three examples of hypothetical future harms of AI that safe.ai mentions.
    • Q 3.2: What would Bender/Hanna say about the claim that we need to be concerned about AI surpassing human intelligence?

Ethics Discussion #2: Corporate Capture And Colonial Practices

  1. Corporate Capture

    Read "The Steep Cost of Capture" by Meredith Whittaker, then answer the following questions:

    • Q 1.1 According to Whittaker, what led to the inception of the current "AI summer," starting in 2012? What role do corporations play in this?
    • Q 1.2 How does the nature of the current AI summer impact academic researchers? What are some ways that academic researchers cope with this?
    • Q 1.3 What ulterior motives does Whittaker posit Eric Schmidt has when shaping the NDAA?
  2. Colonial Practices

    Read this Time magazine article from last January (warning: it is disturbing).

    • Q 2.1 What were the Kenyan workers contributing to the tech behind ChatGPT? How much did they make?
    • Q 2.2 What were the "occupational hazards" of the data labeling work the Sama contractors did for OpenAI? How did Sama allegedly address this?
    • Q 2.3 How much was OpenAI worth at the time the article was written? How much did they pay for the total contract with Sama to curate a dataset of illegal images?

    Other related articles (optional):


Ethics Discussion #3: AI And Art / Music

The topic of AI art is vast and rapidly evolving in the "age of generative AI." We will focus our readings primarily on image generation, as this is what students will implement in HW6, and I want students to be aware of what can happen when large swaths of unexamined data are used to train these models. To that end, read the following Bloomberg article and answer the questions that follow:

  1. Give 3 examples of how bias from this dataset shows up in generated images.
  2. Post an answer to the following on our Discord class channel: A developer for Stable Diffusion says the following:

    "By open-sourcing our models, we aim to support the AI community and collaborate to improve bias evaluation techniques and develop solutions beyond basic prompt modification"

    Given what you know after reading the Bloomberg article and how models are trained, to what extent do you agree with this statement? Why?

  3. Post a response to the following on our Discord class channel: The LAION-5B dataset is an index of images and associated metadata that was used to train Stable Diffusion (among other models). The creators of this dataset say the following:

    "do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress"

    Prove that this disclaimer is not being heeded. Beyond this disclaimer, are there other ethical responsibilities you think those who curate such datasets have?

  4. Read the mission statement from the company Alamy about protecting the rights of people who created art used in training. What happened with Getty Images and generative AI? What is Alamy doing to address a similar issue?

Though the readings focus on images, in class we will focus on the music side of generative AI. To that end, we will have a live discussion with Ben Cantil, a musician and visiting assistant professor of music production, technology, and innovation at Berklee College of Music, Valencia, who is currently product lead at a startup called DataMindAudio that is making an ethical approach to sourcing music data central to its mission.

Other Links

  • The Guardian: "Design me a chair made from petals!": The artists pushing the boundaries of AI (sadly the link to the original conference they were covering seems to be down, but there are a lot of good examples in the article still)
  • Carlini, Nicolas, et al. "Extracting Training Data from Diffusion Models." 32nd USENIX Security Symposium (USENIX Security 23). 2023.

    This paper shows how diffusion models are more prone to memorizing their training data than other generative image models like generative adversarial nets.

  • For a related issue in the text world, see the following WSJ: "Chatbots Are Digesting the Internet. The Internet Wants to Get Paid."
  • The Guardian: "We got bored waiting for Oasis to re-form: AIsis, the band fronted by an AI Liam Gallagher"

Ethics Discussion #4: AI And The Climate Crisis

AI is often touted as a potential solution to the climate crisis we're facing, but the environmental costs of AI are discussed far less often. This part of the ethics unit will focus on two papers:

  • Strubell, E., Ganesh, A., & McCallum, A. (2020). Energy and Policy Considerations for Modern Deep Learning Research. Proceedings of the AAAI Conference on Artificial Intelligence, 34(09), 13693-13696. https://doi.org/10.1609/aaai.v34i09.7123
  • Luccioni, Alexandra Sasha, Sylvain Viguier, and Anne-Laure Ligozat. "Estimating the Carbon Footprint of BLOOM, a 176B Parameter Language Model." Journal of Machine Learning Research 24 (2023). https://www.jmlr.org/papers/volume24/23-0069/23-0069.pdf

We will not be discussing this as a class; rather, students will examine some of these topics on their own and explore them further in HW7 on diffusion models, where they will use CodeCarbon to estimate the carbon footprint of their code.
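
For orientation, here is a minimal sketch of how such a measurement might look in HW7 (illustrative only, not the actual starter code; the train_model function is a hypothetical stand-in for your real training loop):

    from codecarbon import EmissionsTracker

    def train_model():
        # Hypothetical stand-in for the HW7 diffusion-model training loop.
        for _ in range(10_000):
            pass  # replace with real training steps

    # Wrap the compute-heavy work in a tracker so codecarbon can estimate the
    # energy used and the associated CO2-equivalent emissions.
    tracker = EmissionsTracker(project_name="hw7-diffusion")  # writes emissions.csv by default
    tracker.start()
    try:
        train_model()
    finally:
        emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

    print(f"Estimated footprint: {emissions_kg:.6f} kg CO2eq")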

Strubell 2020

Read section 1 and the beginning of section 2, skim sections 2.1 and 3, and read sections 4 and 5 of the Strubell paper. As the authors explain, the carbon cost of training even a single model is estimated to be quite high (e.g., training BERT-base for 3 days is roughly equivalent to one round-trip NYC-to-SF flight for one passenger). Even so, one might argue that this is a one-time cost and so it shouldn't concern us. However, as the authors hint at in Table 4, one often has to explore and train many models during research and development, or during some meta-search phase, to find the best one that is ultimately described, released, or used in production. When we consider that tens of thousands of groups are doing this continuously across industry, academia, and government agencies, these costs add up. And this is only part of the story; models require continuous energy to evaluate queries in production, and we also have to consider the lifecycle of the hardware that's involved. In fact, Google admitted back in 2022 that machine learning accounts for roughly 15% of Google's total energy use, and Google is a behemoth company! So hopefully it is clear why this is an important problem to consider.
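
To make the flight comparison concrete, here is a back-of-the-envelope sketch of the conversion. The per-flight figure below is an assumption taken approximately from Table 1 of the Strubell paper (on the order of 1984 lbs of CO2-equivalent for one passenger flying round trip between New York and San Francisco); check the paper for the exact numbers before relying on them.

    # Rough conversion from a training footprint (in lbs of CO2-equivalent)
    # to "how many round-trip NYC<->SF flights is that for one passenger?"
    LBS_CO2EQ_PER_ROUNDTRIP_NYC_SF = 1984.0  # approximate value from Strubell et al., Table 1

    def flights_equivalent(training_lbs_co2eq: float) -> float:
        """Return the number of equivalent single-passenger round-trip flights."""
        return training_lbs_co2eq / LBS_CO2EQ_PER_ROUNDTRIP_NYC_SF

    # Example: a run emitting ~1400 lbs CO2eq works out to roughly 0.7 flights,
    # which is where "about one round trip" comparisons for BERT-base come from.
    print(round(flights_equivalent(1400.0), 2))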

Given the above context, and after reading the paper, answer the following questions:

    • Q 1.1 What are the 3 components of energy that the authors measured? How did they measure them?
    • Q 1.2 What are examples of algorithms that the authors point to that can help reduce environmental impact during training?
    • Q 1.3 Neural architecture search (NAS) for the "Evolved Transformer" model is an example of searching through many models to find the best one. Google claims that the Strubell paper overestimated the training cost of NAS. Click here to read the press release from a Google research paper. If we are to take Google at their word, how many round-trip flights (average, for a single passenger) is training NAS actually equivalent to?

Luccioni 2022

BigScience Large Open-science Open-access Multilingual Language Model (BLOOM), which has 176B parameters, is a model comparable in performance to GPT-3 (the direct precursor to GPT-3.5, otherwise known as ChatGPT) and comparable in size to GPT-3 and GPT-3.5. The authors of this Journal of Machine Learning Research paper, Luccioni, Viguier, and Ligozat, estimate the cost of training this model, and their work serves as a slightly more recent pulse on the carbon costs of large language models than the Strubell paper.

    • Q 2.1 Look at section 4.2 of their paper. Noting that a ton is 2000 pounds, roughly how many round-trip NYC->SF flights (for a single passenger) is the authors' estimate of the dynamic power consumption from training BLOOM equivalent to?

NeurIPS Workshop on Tackling Climate Change with Machine Learning

NeurIPS, the flagship conference on machine learning, has been holding an annual workshop on tackling climate change with machine learning since 2019. Click here to view the accepted papers and posters from last December.

    • Q 3.1: Skim the proceedings and find a paper or poster that interests you. Have a look through the paper, and summarize briefly to the class in a thread on Discord what that paper is doing and why it interests you.

Keep your eyes on the 2023 workshop, which is right around the corner in a few weeks!

Other resources


Ethics Discussion #5: Stochastic Parrots

Chatbots powered by large language models are all the rage these days, and they need no introduction. To ground us a bit, I'd like to return to a paper that is a few years old but was quite prescient (and which is thought to have contributed to the firing of co-authors Timnit Gebru and, shortly thereafter, Margaret Mitchell):

Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 610-623. 2021.

Read sections 4, 5, and 6, and answer the following questions (I'm having you skip section 3 because we covered some closely related topics in the last ethics discussion):

    • Q 1.1 What are some issues with the training data used for large language models?
    • Q 1.2 What is "documentation debt," and how does it relate to data curation?
    • Q 1.3 Between form and meaning, which one do the authors say large language models capture? Which one do they not capture?
    • Q 1.4 What is a stochastic parrot? How does our own "linguistic competence" trick us when we engage with stochastic parrots?

Then, watch from 29:20 to 46:10 of the following retrospective video from "Stochastic Parrots Day" last spring at this link, and answer the following question:

    • Q 2.1 What did the authors not anticipate in 2020/2021 that came to pass over the next two years?

Class Discussions

We will discuss the following questions in class, among other things, depending on where the conversation goes:

    • What is going on with Google saying we can melt eggs and Stack Overflow laying off 28% of its employees last month? What implications does this have for coding and large language models moving forward?
    • What are the pitfalls of using benchmarks and "ground truth" to measure progress in natural language processing and machine learning in general?
    • According to the stochastic parrots paper, from a linguistic perspective, what is the opportunity cost of putting so much effort into large language models?
    • In this interview with Lex Fridman, Sam Altman says one of the ethical motivations behind ChatGPT's public release is to incrementally learn how chatbots can fail in public so that they can be developed more safely. Do you agree that this is an ethical approach? Why or why not? (Feel free to agree or disagree, but you need to back it up.)
    • An implicit assumption in Sam Altman's reasoning is that chat bots are a necessary and inevitable technology. Discuss.