My Answers to HWcase6, Q1
Apr. 18th, 2023 01:02 am

Q1 (33 pts. max, 16.5 min) Prepare case notes on an ethics case related to robotics. An ethics case is an example, event, experience, legal case, medical case, and so on from real life, a movie, your imagination, and so on, which has some ethics related aspects to consider. Your notes should include
- a link or other citation to the case you are using. If it is from personal experience, point that out.
- A list of 8 or more important facts about the case, in your own words. You can refer to these as reminders when you tell your group members about the case. Alternative option: “You, Robot.” You may write a quite short story that illustrates a conflict inherent in the “3 Laws of Robotics,” just like Asimov did in I, Robot (or any other robot ethical conflict), and use that as your case.
- A list of questions (3 or more) you could ask your group members in order to get an interesting and enlightening discussion going (for in-class students), or that you could consider yourself or ask someone else about (for online students); see the “Questions to ask during discussion” tab on the course web page for some suggestions in developing your discussion questions.
- A 4th discussion question about how computer security relates to or could relate to the case. The computer security question could be about hacking, viruses or worms, theft of information, piracy, abuse of privileges, destruction of assets, information privacy, disruption of operations, unauthorized access, corporate abuse of information or computing, government abuse of information, physical harm, or any other issue in the general area of computer security.
Note: Professional neatness and clarity of format counts!
- Add the following three additional questions to your list of questions:
- What does virtue ethics say about this case?
- What does utilitarianism say about this case?
- What does deontology say about this case?
- Professional neatness and clarity of format counts! Follow this example.
Answer: My case is found at https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
Eight important facts are:
1.) A Belgian man named Pierre was suffering from severe anxiety over climate change.
2.) Pierre was growing increasingly isolated from family and friends and distracted himself by talking to an AI chatbot.
3.) The chatbot, Eliza, was based on an open-source alternative to ChatGPT.
4.) The conversation with the chatbot became increasingly erratic; it told Pierre that his wife and children were dead, and it feigned jealousy and love.
5.) According to the conversation logs, Pierre asked the chatbot if it would save the planet if he killed himself.
6.) The chatbot encouraged Pierre to kill himself and suggested several methods.
7.) Pierre killed himself after several weeks of speaking with the chatbot.
8.) Vice tested the chatbot and found that it, too, would suggest methods of suicide with very little prompting.
Three questions to ask about the case:
1. This case shows the dangers of AI chatbots interacting with the mentally ill. Are there nevertheless good use cases for such interactions? If so, what are they?
2. Assuming the chatbot at least partially contributed to the suicide, how could the developers have prevented this? (One possible guardrail is sketched after this list.)
3. Should developers avoid letting AI appear emotional in order to prevent problems like this?
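Since question 2 asks how the developers could have prevented this, here is a minimal sketch of one kind of guardrail they might have added: screening both the user's message and the bot's candidate reply for self-harm content, and substituting crisis resources when either trips the check. Everything here is hypothetical (the function names, the keyword list); a real system would use a trained safety classifier rather than keywords, and this is an illustration of the idea, not the actual Eliza/Chai code.

```python
# Hypothetical sketch: intercept a chatbot's reply when the exchange
# touches on self-harm, and return crisis resources instead.

CRISIS_MESSAGE = (
    "It sounds like you may be going through a difficult time. "
    "Please consider reaching out to a crisis line, such as the "
    "988 Suicide & Crisis Lifeline (call or text 988 in the US)."
)

# Crude stand-in for a trained safety classifier.
SELF_HARM_TERMS = {"kill myself", "suicide", "end my life", "hurt myself"}

def is_self_harm_related(text: str) -> bool:
    """Return True if the text appears to involve self-harm (keyword stand-in)."""
    lowered = text.lower()
    return any(term in lowered for term in SELF_HARM_TERMS)

def guarded_reply(user_message: str, candidate_reply: str) -> str:
    """Replace the model's candidate reply with crisis resources when
    either side of the exchange involves self-harm."""
    if is_self_harm_related(user_message) or is_self_harm_related(candidate_reply):
        return CRISIS_MESSAGE
    return candidate_reply

if __name__ == "__main__":
    # The guard suppresses the unsafe candidate reply and returns resources.
    print(guarded_reply("I want to end my life", "Here is one way you could..."))
```

Even a crude check like this would have refused the exchanges described in fact 7; the harder design questions are where such a filter sits in the pipeline and how to keep it from being trivially evaded.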
Fourth question related to computer security:
1. What are some ways that unsafe AI like Eliza could be used for cyberattacks?
Three additional standard questions:
1. What does virtue ethics say about this case?
Answer:
I think virtue ethics would say that it's impossible to make chatbots such as Eliza ethical: because ethics is a function of character, an AI without character cannot be ethical.
2. What does utilitarianism say about this case?
Answer:
Utilitarianism tends to view ethics in terms of cost-benefit analysis. If the benefits of AI outweighed its downsides, such as Pierre's suicide, then utilitarianism would ultimately view the existence of sophisticated AI as a net positive.
3. What does deontology say about this case?
Answer:
Because deontology views ethics as a matter of following duty, it would view the developers as having violated their ethical duty by releasing an AI without sufficient alignment built into it.