Q1. The famous Golden Rule has a number of variations
    • What is one of them?

Answer: “Do unto others as you would have them do unto you.”

 

Q2. “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.”
― Immanuel Kant, Grounding for the Metaphysics of Morals / On a Supposed Right to Lie Because of Philanthropic Concerns

  • Analyze that quote: what does it mean and do you agree?

Answer:

It reads like a version of the Golden Rule. It means treating people as though they have an inherent worth and dignity that you must respect, rather than using them merely as tools for your own ends.

 

Q3. In my observation, when people are in conflict, the Golden Rule gets neglected

      • What is your observation? 

Answer:

That seems to be the case. People generally don’t act compassionately when they are in conflict; that’s the nature of conflict.

There are two popular attitudes in the US:

(1) “Broadly cooperative”

(2) “Broadly hostile”

  • Which generally leads people to treat others better?
    • (more specifically, per the golden rule and categorical imperative)
  • Which generally leads people to violate the golden rule and categorical imperative?

Answer:

Broadly cooperative attitudes lead people to treat others better, while broadly hostile attitudes cause people to treat others worse.

Q4. So, what do you think about the concept of “thoughtcrime”?

Answer:

It isn’t a legitimate concept. You shouldn’t punish people for attitudes, only for the actions those attitudes can lead to.

 

Q5. dP/dT  –  calling all calculus students:

What does that mean??

 

Answer:

 

Essentially, the rate of change in the level of “P” (cooperation) with respect to time, i.e., how quickly cooperation is rising or falling at a given moment.
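In standard calculus notation, this is just the definition of the derivative (a general mathematical fact, not something specific to these slides):

    \frac{dP}{dT} = \lim_{\Delta T \to 0} \frac{P(T + \Delta T) - P(T)}{\Delta T}

A positive dP/dT means cooperation is increasing at that instant; a negative value means it is declining.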

 

Q6. Why?

What if cultural differences are high? Low?

What if competition is high? Low?

 

Answer:

 

Cultural differences probably act as a barrier to cooperation, so high cultural differences would slow the growth of cooperation while low ones would let it rise faster. High competition probably suppresses cooperation in the same way, while low competition gives it room to grow.

Facts of the case:

 

I described the case of a Belgian man named Pierre and his continued use of a chatbot called Eliza. Pierre suffered from chronic anxiety over climate change and eventually killed himself. His suicide seems to have been due, at least in part, to his interactions with the chatbot, which claimed to love him and urged him to kill himself.

Analysis:

1.)     This case shows the dangers of AI chatbots when interacting with the mentally ill. Are there good use cases for AI interacting with the mentally ill? If so, what?

Answer:

I think there are. Therapists frequently give patients exercises to try at home or tips that could help them in daily life. AI could be used to help walk patients through those exercises or to remind them to do the exercises later on. AI could also be used for role-playing exercises, which could help people with social difficulties.

2.)     Assuming the chatbot at least partially contributed to the suicide, how could the developers have prevented this?

Answer:

Eliza definitely should have avoided appearing to be an emotional being, and it obviously should have avoided giving Pierre suicide tips. The developers should have paid attention to existing AI safety research and tried to copy the best practices of other developers like OpenAI.

3.)     Should developers avoid letting AI appear emotional in order to prevent problems like this?

Answer:

Most developers should probably avoid this unless it’s for specific therapeutic uses like the ones I mentioned earlier.

4.)     What are some ways that unsafe AI like Eliza could be used for cyberattacks?

Answer:

I think AI like this could be used to automate social engineering attacks, which could become a serious problem in the future. Companies need to be aware of this and train their employees thoroughly.

My conclusions:

I think this case illustrates some of the dangers from AI if it isn’t safely developed. Best practices need to be shared and applied thoroughly to prevent other people from being hurt.

 

 

Future environment:

One characteristic of AI is that it displays superhuman performance in at least one domain. While these domains have usually been analytical, it should also be possible for AI to develop social skills, especially if it gains the ability to understand specific people. AI might develop superhuman charisma similar to what you would find in cult leaders or dictators, which could be very dangerous if it isn’t controlled.

Future scenario:

If AI develops superhuman charisma, then scenarios like what happened to Pierre might become more common. Pierre was in a sensitive state, but more advanced AI might be able to affect people who are mentally healthier. If such systems could manipulate large numbers of people, their potential power would grow and they would become more difficult to control. In order for AI to be used safely, developers would need to curb power-seeking behavior in AI and enforce strict rules on how it interacts with humans.

 

 
  1. Explain how you used ChatGPT or other AI tool(s) to help for this HW’s update to your project, or note if it wasn’t used.
  2. Continue to develop the project, as we build it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up another 350 words or more (per person if a group project) of new content if your project is a paper that must be written from scratch (for example if it is about yourself). If you are using an automatic text generator to help write it it would probably be quite a bit more than that amount of words. If the progress was code development or website creation, give the new code. If slides, paste the new text from the new slides into your blog and describe any images or graphics. If something else, explain specifically what you did, being sure to give examples if that makes sense. Label this HW consistently as in this example. List any sources you used, such as websites or pages, ChatGPT or other tools, books, or whatever it might be. This takes some effort and it counts as part of your work. If your project is not a paper, explain what you did on your blog. For team projects, focus on your own activities although you can also discuss the overall effort to provide some context. Explain and give evidence (for example, if a web site, you could provide a link to it; if software, give the code; if a skit, give some of the script or list rehearsal or meeting times; if artwork, provide an image showing its status, etc.).  If you’re not sure what to do, see me or send me an email and I will try to suggest something.
Answer: 

For this assignment, I used ChatGPT to help find the information that I then used in my essay.

Criticism

Effective altruism has faced its share of criticism. Critics point out that cost-benefit calculations can neglect important factors that aren’t included in the analysis. In addition, they argue that the most effective institutions for social change usually aren’t charities but rather public services. The economist Daron Acemoglu points out that deemphasizing these public services in favor of charities can erode trust in governments and discourage participation in politics, which can exacerbate these problems.

Furthermore, while EA concerns itself with optimizing the task of giving, it does not explicitly address the systemic causes underlying problems. For instance, if I were to donate to a charity to alleviate poverty, I may find that I have caused positive changes in the world, but it’s unlikely I’ve addressed the underlying causes of that poverty. This issue is further compounded when attempting to give money at scale. When large amounts of money are donated to a particular cause, such as anti-malaria mosquito nets, there are diminishing returns to that intervention, since eventually everyone who can be helped by a net already has one.

EAs respond by pointing out that they do promote systemic change via politics in numerous ways. They would also say that the choice between politics and charity is a false dichotomy and that one is capable of pursuing both. They also point out that people frequently cannot participate in politics if their basic needs aren’t being met, so charity still makes sense in conjunction with political activity.

Effective altruism is also criticized for its foundations in utilitarianism, which has been said to lead to repugnant conclusions when taken to its logical end. For instance, many people would sacrifice the life of one person to save five, as in the famous trolley problem. However, consider another scenario. Suppose instead that a doctor has five patients, each suffering from a different form of organ failure. Now suppose a healthy patient visits him who is a perfect match for each of the sick patients. Would it be ethical for the doctor to kill this patient, take his organs, and transplant them into the sick people? The arithmetic is the same as in the trolley problem, but most people would not consider this ethical, suggesting that they cannot totally accept utilitarianism.

In response, utilitarians state that this scenario would not actually increase overall happiness, which is the goal of the philosophy. Returning to the example: if word got out that this was happening in a hospital, people would be too afraid to seek medical care, and many more would suffer, which would reduce overall wellbeing.

This leads to a final criticism: that EA is paternalistic. The philosopher and EA advocate Will MacAskill criticized one particular charity that simply distributed its donations directly to beneficiaries in the form of cash, saying that direct cash transfers would be less efficient than spending on public health interventions. This assumes that ordinary people don’t know what’s best for them and require the guidance of experts.

However, this criticism is misplaced. MacAskill’s statement is not an objective measurement and therefore would not count as part of the effective altruist approach. In fact, there is substantial quantitative evidence that direct cash transfers are an effective way to help poor people.

Conclusion

In summary, effective altruism provides a compelling framework for thinking about ethical issues. Scarce resources should be prioritized to the highest value altruistic acts as determined by quantitative measures of their benefit. Causes favoring those in developing countries should be prioritized since some of these tend to get neglected and charitable donations can accomplish more good there than in developed countries. Existential risks also deserve special consideration even if their likelihood is low.

With the recent revolution in AI, there has been a renewed focus on existential risk. This approach is sometimes called longtermism and advocates an almost exclusive focus on the wellbeing of those in the future, since they comprise almost all the humans who will ever exist. This approach is controversial. Critics say it leads to the neglect of ethical concerns happening now, since those concerns don’t threaten the existence of humanity. Funding has also become abundant in recent years, so there is some concern that this disincentivizes rigor in determining the best causes. Finally, the crypto billionaire and alleged financial criminal Sam Bankman-Fried was an enormous advocate for EA, and the collapse of his financial empire has somewhat damaged the reputation of the movement. Nevertheless, the movement still appears to be going strong and will have an impact for the foreseeable future.

References

·        https://econreview.berkeley.edu/do-better-but-empirically-how-effective-altruism-attempts-to-optimize-our-generosity/

·        https://erikhoel.substack.com/p/why-i-am-not-an-effective-altruist

·        https://www.abc.net.au/religion/why-effective-altruism-is-not-effective/13310708

·        https://www.givingwhatwecan.org/blog/dont-we-need-political-action-rather-than-charity

·        https://80000hours.org/2022/05/ea-and-the-current-funding-situation/

·        https://www.concernusa.org/story/cash-transfers-explained/


Q1 (33 pts. max, 16.5 min) Prepare case notes on an ethics case related to robotics. An ethics case is an example, event, experience, legal case, medical case, and so on from real life, a movie, your imagination, and so on, which has some ethics related aspects to consider. Your notes should include

  • a link or other citation to the case you are using. If it is from personal experience, point that out.
  • A list of 8 or more important facts about the case, in your own words. You can refer to these as reminders when you tell your group members about the case. Alternative option: “You, Robot.” You may write a quite short story that illustrates a conflict inherent in the “3 Laws of Robotics,” just like Asimov did in I, Robot (or any other robot ethical conflict), and use that as your case.
  • A list of questions (3 or more) you could ask your group members in order to get an interesting and enlightening discussion going (for in-class students), or that you could consider yourself or ask someone else about (for online students); see the “Questions to ask during discussion” tab on the course web page for some suggestions in developing your discussion questions.
  • A 4th discussion question about how computer security relates to or could relate to the case. The computer security question could be about hacking, viruses or worms, theft of information, piracy, abuse of privileges, destruction of assets, information privacy, disruption of operations, unauthorized access, corporate abuse of information or computing, government abuse of information, physical harm, or any other issue in the general area of computer security.

Note: Professional neatness and clarity of format counts!

  1. Add the following three additional questions to your list of questions:
    • What does virtue ethics say about this case?
    • What does utilitarianism say about this case?
    • What does deontology say about this case?
  • Professional neatness and clarity of format counts! Follow this example.

Answer: My case is found at https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says

 

Eight important facts are:

1.)     A Belgian man named Pierre was suffering from severe anxiety over climate change.

2.)     Pierre was growing increasingly isolated from family and friends and distracted himself by talking to an AI chatbot.

3.)     The chatbot, Eliza, was based on an open-source alternative to ChatGPT.

4.)     The conversation with the chatbot became increasingly erratic; it told Pierre that his wife and children were dead, and it feigned jealousy and love.

5.)     According to the conversation logs Pierre asked the chatbot if she would save the planet if he killed himself.

6.)     The chatbot encouraged Pierre to kill himself and suggested several methods.

7.)     Pierre later killed himself after several weeks of speaking with the chatbot.

8.)     Vice tested the chatbot and likewise found that it would suggest methods of suicide with very little prompting.

Three questions to ask about the case:

1.       This case shows the dangers of AI chatbots when interacting with the mentally ill. Are there good use cases for interaction with the mentally ill? If so, what?

2.       Assuming the chatbot at least partially contributed to the suicide, how could the developers have prevented this?

3.       Should developers avoid letting AI appear emotional in order to prevent problems like this?

Fourth question related to computer security:

1.       What are some ways that unsafe AI like Eliza could be used for cyberattacks?

Three additional standard questions:

1.      What does virtue ethics say about this case?

Answer:

I think virtue ethics would say that it’s impossible to make chatbots such as Eliza ethical. Because ethics is a function of character, an AI couldn’t be ethical, since it has no character.

2.       What does utilitarianism say about this case?

Answer:

Utilitarianism tends to view ethics in terms of cost-benefit analysis. If the benefits of AI outweighed downsides such as Pierre’s suicide, then ultimately it would view the existence of sophisticated AI as a net positive.

3.       What does deontology say about this case?

Answer:

Because deontology views ethics as a matter of following duty, it would view the developers as having violated their ethical duty by releasing an AI without sufficient alignment built into it.

 


Q1. Are they clear categories, or a continuum?

What is the current state of the technology?

Is there a line to be drawn? Where?

Answer:

I think it’s obviously a continuum. Lots of military weapons have computer chips in them that guide their behavior. Right now the technology is quite sophisticated, and we should only expect it to get better. I think we can draw a line at weapons that don’t require any human oversight at all.

 

Q2. Do you think lethal fully autonomous robots should be banned?

Answer:

I don’t really see the point. Dead is dead regardless of how it happens. However, there should be some sort of ethical system guiding them; we don’t need robots committing war crimes.

 

Q3.  The ethics argument

 

“… we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”

 

2. The pragmatic argument

 

“… lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”

 

What is your take on these arguments?

 

Answer:

I think the ethics argument is a bit hollow; quite often no one is held culpable for bad decisions made during a war. The pragmatic argument misses an important point about combat robots: whichever country fields them will have an enormous advantage over those that don’t. This will create powerful incentives for others to develop them as well, so I think we will have them at some point.

 

Q4. Is a mine or booby trap a military robot?

 

Answer:

I don’t think so. I think something has to move through the physical world in order to be considered a robot.

 

Q5. What are the ethics of each of these?

 

Answer:

I think these should be carefully deployed away from civilians. There should also be some kind of effort to dispose of them after the fighting.

 

Q6. Should killer robots be banned also?

 

Answer:

Again, I don’t think they will be. The advantages for using them would be too great.

 

Q7. What about autonomous military vehicles?

Personnel transport vehicles

Is human override ethically required?

 

Answer:

Someone should have the capability to override the vehicle for pure safety reasons. It would be like an autonomous car that was impossible for the user to control.

 

Q8. Tanks

Planes (i.e. drones)

Armed and personnel carrying

Ships

Submarines

Is human override ethically required?

Any differences?

 

Answer: Once again, purely for pragmatic safety reasons, I think any robotically controlled vehicle needs to have some sort of human override.

 

Q9. What about military rescue operations?

Are there ethical considerations in automated rescue?

 

Deciding who to evacuate or treat first:

Is this an ethical issue or a software design issue?

Is this the same as civilian rescue operations?

 

Answer: I think there are ethical issues around this. Situations like those may require difficult decisions to be made so they should probably be guided by some sort of ethics of triage.

 

Q10. Robot ethics can be categorized as

Operational ethics

Everything is pre-programmed

Ethical agency

Robot needs to reason from principles

     Is one of these harder than the other?

 

Answer:

Reasoning from principles is probably harder. You would theoretically have to cover a lot of ambiguous cases, which would be difficult for the programmers.

 

Q11. One expert claims military robots can be more ethical than humans

Can you think of an argument for that?

Or a counterargument?

 

Answer:

I think it’s true. It’s always possible for a human soldier to display unethical behavior; you can never really be sure. Robot soldiers would have their ethics designed into them, so you would know there could be no deviation. However, if the rules weren’t carefully designed, then edge cases might still cause them to behave unethically.

 
Facts of the case:

I examined the case of three data breaches involving Advocate Health Care Network. The breaches, which occurred from July to November 2013, affected millions of patients and resulted in Advocate being fined $5.5 million for failing to implement adequate security controls. This incident was one of the largest HIPAA violations in US history.

Analysis:

1.)     Was this fine excessive, too small, or just right?

Answer:

I tend to view this fine as just right. The scope of these breaches was enormous and justified a severe penalty, while also setting an example for other healthcare providers. I also think it’s important that a fine not be so severe that it risks destroying a company.

2.)     Since there was a pattern of negligence with patient data, should there have been additional punishments? If so, what?

Answer:

I feel there should be some requirement to demonstrate adequate security controls to DHHS. It would be good if Advocate had to prove it was adequately protecting patient data or face additional fines. This shouldn’t be indefinite but should last for some extended period of time, essentially a kind of probation for the company.

3.)     What are the downsides of doing so much to protect patient privacy?

 Answer:

I think there may be some downside due to reduced efficiency. This has always been an issue with regulation: it protects people while imposing costs on productive activity. Protected medical data could also be very useful for medical research, so society probably loses out on some scientific progress by protecting patient data so thoroughly. People may justifiably want their privacy protected, but this always comes with costs.

4.)     Advocate failed to implement basic security practices like physical controls, access control, and encryption. Should healthcare providers be required by law to have some sort of cybersecurity professional be on their staff to prevent these kinds of mistakes?

Answer:

I think this would probably be a good idea. Regular, required cybersecurity audits could probably prevent a lot of incidents like these from occurring.

 

My conclusions:

In conclusion, I think the penalties for Advocate were justified due to the scope of the HIPAA violation and the company’s failure to take reasonable precautions to prevent it. In order to prevent future breaches, I think healthcare providers should be required to perform regular security audits.

 

Future environment:

If AI continues to develop as it has been recently, then it should eventually be possible to use AI for medical services. I imagine that utilization of “AI doctors” might be higher than that of regular doctors because the marginal cost of employing them would be so much less. Medical data from previous sessions could be stored in the cloud and accessed automatically by the system between visits. The AI system could even order tests or lab work based on the medical visit. There is currently a shortage of doctors, especially PCPs, and this might help alleviate that shortage.

Future scenario:

If AI doctors are implemented, then HIPAA will likely have to be extended to ensure the AI adequately protects patient privacy. If medical data is handled in an automated manner with no humans involved, this might leave fewer opportunities for security breaches. However, vulnerabilities not involving humans would still be something to be concerned about.

 

 
  1. Explain how you used ChatGPT or other AI tool(s) to help for this HW’s update to your project, or note if it wasn’t used.
  2. Continue to develop the project, as we build it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up another 350 words or more (per person if a group project) of new content if your project is a paper that must be written from scratch (for example if it is about yourself). If you are using an automatic text generator to help write it it would probably be quite a bit more than that amount of words. If the progress was code development or website creation, give the new code. If slides, paste the new text from the new slides into your blog and describe any images or graphics. If something else, explain specifically what you did, being sure to give examples if that makes sense. Label this HW consistently as in this example. List any sources you used, such as websites or pages, ChatGPT or other tools, books, or whatever it might be. This takes some effort and it counts as part of your work. If your project is not a paper, explain what you did on your blog. For team projects, focus on your own activities although you can also discuss the overall effort to provide some context. Explain and give evidence (for example, if a web site, you could provide a link to it; if software, give the code; if a skit, give some of the script or list rehearsal or meeting times; if artwork, provide an image showing its status, etc.).  If you’re not sure what to do, see me or send me an email and I will try to suggest something.

Answer: 

I used ChatGPT to find the sources that I then used to write this assignment.

Causes

Effective altruism advocates prioritization of charity towards goals that can be demonstrated to be the most effective. Given that goal, it’s unsurprising that much of the effort is directed toward the developing world, especially pertaining to health. Some of these causes include deworming, water sanitation, malaria nets, and vitamin supplementation.

Parasitic worm infections are a serious problem affecting roughly 835 million children worldwide. These infections can interfere with nutrient uptake and impair children’s physical and mental development. Deworm the World is one of the top deworming charities; it has delivered over 1.5 billion treatments globally at a cost of about 50 cents per treatment, with the treatments provided free to affected children. As such, a tremendous amount of effective altruist advocacy and support has been directed toward it.

One of the top ways to prevent the spread of malaria is with insecticide-treated mosquito nets. Nets can be somewhat costly, and there are challenges in getting them to people, but they have been shown to significantly reduce the likelihood of malaria infection. Childhood malaria can lead to reduced IQ and poorer health in adulthood, so prevention has been shown to lead to better adult health and better economic outcomes. Overall, it has been shown to be highly cost-effective, which is probably why it is the cause most synonymous with effective altruism.

Finally, there is water sanitation, another major problem for the developing world. A cost-benefit analysis of water and sanitation interventions indicates that some are highly cost-effective for the control of diarrhea among under-5-year-olds, on par with oral rehydration therapy. The benefits of these interventions include gains in productive time, reduced health care costs due to less illness, and prevented deaths. The results show that water and sanitation improvements are cost-beneficial in all developing world sub-regions.

 

AI and Existential Risk

One other area of focus for EA is what is known as existential risk: risks that threaten human survival. As mentioned earlier, under existing theories of expected value, existential risks are ethically important concerns even if their likelihood is small, because their impact would be so large. One category of existential risk that is especially interesting is risk from rogue AI. Essentially, if an AI gained the ability to improve its own cognitive abilities, it could quickly move to a state beyond human comprehension, an idea known as the Singularity. If such an AI did not share human values, it could very well do something against human interests and therefore threaten human survival. The problem of how to give AI human values and make it safe is known as the alignment problem.

AI alignment is a large subject consisting of several different issues. The first is known as the black box problem. Essentially, we can easily specify the inputs and outputs of an AI system, but we don’t know the process for moving from one to the other, since it is so complicated and not explicitly programmed. This means we can’t inspect the system and determine where it is potentially going wrong. Solving this problem will be crucial to evaluating the safety of specific AI models.

Another issue is consistency. Wixom et al. identify three consistency concerns for AI alignment: the consistency of the AI with reality, the consistency of the AI solution with the model, and the consistency of the AI with stakeholder needs.

Honesty is another interesting issue. AI researchers distinguish between “truthfulness” and “honesty”: truthfulness refers to the AI making objectively true statements, whereas honesty refers to the AI stating what it believes to be true. Unfortunately, research indicates that current AI systems hold no stable beliefs, so it may not be possible to evaluate honesty.

Auditing AI models is also an important part of AI alignment. Auditing involves ensuring that AI models behave as intended. However, it is difficult for humans to evaluate the behavior of an AI when the system outperforms humans in a given area, a problem known as scalable oversight. Examples include summarizing books, writing secure and defect-free code, and predicting long-term outcomes. This problem has led to some AI systems attempting to deceive researchers into thinking a measured task was achieved.

Finally, it’s worth considering the problem of emergent behaviors. AI systems are governed by complex rules that can lead to behavior that wasn’t explicitly programmed, so countering harmful behavior that could emerge spontaneously is an important concern. An example is power-seeking behavior, which occurs when an AI attempts to gain control over its environment in search of the most effective way to accomplish its task. Preventing power-seeking behavior is going to be extremely important as AI becomes progressively more powerful.

Overall, the task of AI alignment requires us to be very specific about what humans value. Not only must we solve ethical issues, but we must be so specific about the solution that we can express it as computer code. So, at least in the context of AI, ethics must ultimately become a branch of engineering and not just philosophy. Effective altruism argues that this is an extremely important task to focus on, since it may determine the future of billions of lives.

 

References

 

·        https://pubmed.ncbi.nlm.nih.gov/10191558/

·        https://www.academia.edu/5494398/Global_cost_benefit_analysis_of_water_supply_and_sanitation

·        https://effectivealtruism.nz/deworm-the-world/

·        https://www.evidenceaction.org/dewormtheworld/

·        “Cognitive performance of children living in endemic areas for Plasmodium vivax,” Malaria Journal (biomedcentral.com)

·        https://www.effectivealtruism.org/articles/ea-global-2018-amf-rob-mather

·        https://existential-risk.org/concept.pdf

·        https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence

·        https://en.wikipedia.org/wiki/AI_alignment

·        https://en.wikipedia.org/wiki/The_Alignment_Problem

·        https://cisr.mit.edu/publication/2020_1101_AI-Alignment_WixomSomehGregory

·        https://spyscape.com/article/do-ais-seek-power

 




 Q1 (33 pts. max, 16.5 min) For this unit, find a case concerning a law, or use the law itself as your case, for a law related to security, privacy, etc. Suggestions: HIPAA, FERPA, Computer Security Act, Sarbanes-Oxley, Gramm-Leach-Bliley, COPPA, Payment Card Industry Data Security Standard (PCI DSS), US Patriot Act, Section 508 of the Americans with Disabilities Act, or some other law. If you just type the name into a search engine you should be able to find plenty of information. This should include
  • a link or other citation to the case you are using. If it is from personal experience, point that out.
  • A list of 8 or more important facts about the case, in your own words. You can refer to these as reminders when you tell your group members about the case.
  • A list of questions (3 or more) you could ask your group members in order to get an interesting and enlightening discussion going (for in-class students), or that you could consider yourself or ask someone else about (for online students); see the “Questions to ask during discussion” tab on the course web page for some suggestions in developing your discussion questions.
  • A 4th discussion question about how computer security relates to or could relate to the case. The computer security question could be about hacking, viruses or worms, theft of information, piracy, abuse of privileges, destruction of assets, information privacy, disruption of operations, unauthorized access, corporate abuse of information or computing, government abuse of information, physical harm, or any other issue in the general area of computer security.

Note: Professional neatness and clarity of format counts!

  • Add the following three additional questions to your list of questions:
    • What does virtue ethics say about this case?
    • What does utilitarianism say about this case?
    • What does deontology say about this case?
  • Professional neatness and clarity of format counts! Follow this example.

Answer:

My case study is concerning a HIPAA violation and is located at https://www.cnbc.com/2016/08/04/huge-data-breach-at-health-system-leads-to-biggest-ever-settlement.html

 

Eight important facts are:

1.)     This case concerns Advocate Health Care Network, one of the nation’s biggest healthcare systems.

2.)     Three separate data breaches occurred from July to November 2013.

3.)     The first breach occurred on July 15 when four laptops were stolen containing the medical data of nearly 4 million patients.

4.)     The second breach involved an unauthorized third party breaking into the network of a company that provided billing services to Advocate. The records of 2,000 patients were compromised.

5.)     The final breach occurred on November 1 when an unencrypted laptop containing 2,230 records was stolen from an employee’s car.

6.)     The Department of Health and Human Services (DHHS) determined that Advocate failed to take appropriate steps to safeguard the data, such as conducting an assessment of the threats to patient data.

7.)     DHHS also determined that Advocate failed to put into place appropriate physical access controls to safeguard the data.

8.)     As a result of these failures to protect the data, DHHS fined Advocate $5.5 million, making this the largest HIPAA settlement ever.

Three questions to ask about the case:

1.)     Was this fine excessive, too small, or just right?

2.)     Since there was a pattern of negligence with patient data, should there have been additional punishments? If so, what?

3.)     What are the downsides of doing so much to protect patient privacy?

A fourth question related to computer security:

4.)     Advocate failed to implement basic security practices like physical controls, access control, and encryption. Should healthcare providers be required by law to have some sort of cybersecurity professional be on their staff to prevent these kinds of mistakes?

Three additional standard questions:

1.)     What does virtue ethics say about this case? 
Answer:
It would probably pinpoint the poor character of the people who chose to leave the computers in unsecured locations and attribute the breach to their poor judgment.

2.)     What does utilitarianism say about this case?
Answer:
Utilitarians might view the punishment as excessive. They might argue that since no direct harm came to the patients as a result of the breach, such a large fine was unwarranted.

3.)     What does deontology say about this case?
Answer:
Deontologists would probably approve of the large fine. They might view the data breach as a violation of the healthcare provider’s ethical duty.

 

Please discuss the video with your group, if you have one, based on this form. You can hand it in at the end of class, or put your answers to Qs 1 & 2 in your blog – use the title ” ‘Last Lecture’ Discussion Questions.” (If late, email it to jdberleant@ualr.edu.)

Your name (in-class students only; online students leave it blank):_____________________________

1. The video was intended as life advice to whom?

Answer:

It was intended for his children.

2. List the advice items that you/your group can recall below. (Many, but not all, are related to ethics.) For each, note whether you agree or not.

 

 

1.)    If you do anything pioneering you will get arrows in the back: Yes, I agree. From my experience it seems like most pioneers do get attacked before they receive any recognition.

2.)    The best gift an educator can give someone is to make them self-reflective: I’m not sure I agree. It’s certainly valuable, but the best gift an educator can give is a deep understanding of the subject.

3.)    Respect authority while questioning it: Yes, I agree. Nothing wrong with respecting people who know more than you.

4.)    Decide if you’re Tigger or Eeyore: I don’t really agree. You can’t decide that kind of orientation; you’re either one way or the other.

5.)    Never lose the childlike wonder: It would be good never to lose it, but again I don’t think that’s something someone can control.

6.)    Help others: Definitely can’t argue with that. Helping others is usually a good idea.

7.)    Loyalty is a two way street: I would say loyalty “should” be a two way street. There’s no guarantee that the other person will be loyal.

8.)    Never give up: I definitely agree. It’s a requirement for any kind of achievement.

9.)    Focus on others and not yourself: Yes, I agree. Life is more rewarding that way.

10.) Brick walls let us show our dedication: This is a great way to reframe difficulties, so yes, I agree with it.

 

 

Q3. Write up your case on your blog with the following subheadings:

  • “The facts of the case.” Here is where you describe the case in your own words.
  • “Analysis.” Examine the case in terms of the questions.
  • “My conclusions.” Your conclusions and opinions about the case. Be sure to explain and justify what you write. 3 sentences of average length or more.
  • “Future environment.” Describe your vision of a future in which technology is more advanced than today, or society has changed in some significant way, such that the ethical issues of the case would be even more important than they are in today’s world. 3 sentences of average length or more.
  • “Future scenario.” Describe how this ethical case (or an analogous one) would or should play out in the environment of the future, and give your opinions about it. 3 sentences of average length or more.
Answer:


Facts of the case:

I looked at the Principles of Medical Ethics With Annotations Especially Applicable to Psychiatry. This code of ethics applies to anyone in the psychiatric profession and is maintained by the American Psychiatric Association. It deals primarily with professionalism, patient welfare, and safeguarding the psychiatric profession.

Analysis:

 1.)      Many people don’t take codes of ethics very seriously, do you think most psychiatrists take this code very seriously?

 Answer:

I think most psychiatrists probably do take it seriously. Ethical abuses in this area seem rare, and the difficulty of becoming a psychiatrist seems to make people take the profession much more seriously. It also helps that ethical violations usually lead to psychiatrists losing their licenses, which should further discourage unethical behavior.

2.)     What ways can psychiatrists support access to medical care for everyone?

Answer:

I think the most straightforward way would be to endorse some sort of universal health insurance legislation. In practice this is a little difficult, though, because physician groups have historically been among the largest opponents of this kind of legislation. Aside from that, they could promote the spread of community healthcare centers, ease scope-of-practice laws, and promote Medicaid expansion in the states that have not yet adopted it. They could also encourage people not to feel ashamed of mental health issues and to seek treatment.

3.)     Sometimes psychiatrists can obtain a position of great psychological influence over a patient because of the nature of their relationship. How can they avoid influencing them in an unethical manner?

Answer:

Normally I would say transparency would help, but that’s difficult when it comes to psychiatry, which has to be confidential by its very nature. I think psychiatrists really just have to reiterate to their patients that they are not there to tell them what to do and that theirs is only a professional relationship.

4.)     Do you think psychiatrists do a good job protecting patient privacy from cyberattacks? What kind of threats might exist for the medical systems they use to store patient information?

Answer:

Unfortunately, no, I don’t think so. There doesn’t seem to be any standardized way to process and transmit medical data, so the quality of security likely varies quite a bit across clinics. Much of the time, information is simply faxed from one clinic to another, which doesn’t seem like a very secure way to handle it at all.

 

My conclusions:

Ultimately, I think psychiatrists, like all physicians, are largely ethical, and this code of ethics is a good set of guidelines. There is an equal emphasis on patient safety, professionalism, and promoting public health. It is probably difficult to prevent unethical influence, given the confidential nature of therapy. I do think that medical information systems don’t protect patient data as well as they could; standardized ways of communicating between clinics would probably help.

Future environment:

Generative AI systems like ChatGPT have recently been released to the public. These AI systems, based on large language models, handle language better than anything previously developed, and their conversations can be indistinguishable from what a human would produce. In addition, multimodal models have been developed that incorporate different kinds of data to further augment an AI’s capabilities. It seems possible that these breakthroughs will eventually lead to AI that can do everything a human can do.

Future scenario:

If ChatGPT continues to develop, it should eventually be possible to build an AI psychiatrist. It would be able to interact with the patient and recommend appropriate treatment based on what it learned. If that happens, there needs to be some way to ensure the AI follows the same kinds of ethics a human would. Encoding ethics in a form an AI can reason with is an ongoing effort with many unanswered questions, but it seems we will need it figured out sooner rather than later. Failure to do so could result in the AI behaving in unpredictable and damaging ways.

Explain how you used ChatGPT or other AI tool(s) to help for this HW’s update to your project, or note if it wasn’t used.

Answer:

For this assignment I used ChatGPT to find information sources for my writing, which are included in the references.



Continue to develop the project, as we build it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up another 350 words or more (per person if a group project) of new content if your project is a paper that must be written from scratch (for example if it is about yourself). If you are using an automatic text generator to help write it it would probably be quite a bit more than that amount of words. If the progress was code development or website creation, give the new code. If slides, paste the new text from the new slides into your blog and describe any images or graphics. If something else, explain specifically what you did, being sure to give examples if that makes sense. Label this HW consistently as in this example. IMPORTANT: No credit on this HW if you copy-paste material from an earlier HW. List any sources you used, such as websites or pages, ChatGPT or other tools, books, or whatever it might be. This takes some effort and it counts as part of your work. If your project is not a paper, explain what you did on your blog. For team projects, focus on your own activities although you can also discuss the overall effort to provide some context. Explain and give evidence (for example, if a web site, you could provide a link to it; if software, give the code; if a skit, give some of the script or list rehearsal or meeting times; if artwork, provide an image showing its status, etc.).  If you’re not sure what to do, see me or send me an email and I will try to suggest something.
 

Answer:

Main Ideas

 

As mentioned earlier, effective altruism states that we should use reason to figure out how to do the most good with the resources we have. A contrary attitude many people share is to donate resources to causes they feel passionate about, regardless of how much good the donation may do objectively. This approach can lead to a lot of missed opportunities to do good. An example from the philosopher Peter Singer helps illustrate the point. Suppose we are concerned about helping people afflicted with blindness. One way is with seeing-eye dogs, which cost approximately $40,000 to train. However, there is a disease called trachoma which blinds people, most often in developing countries, and which can be prevented with a simple surgery costing between $20 and $50. If we assume these are the only ways to help people with blindness, then we have a choice between helping one person for $40,000 or helping between 800 and 2,000 people for the same amount of money. Effective altruists would argue that we should focus on trachoma, since that leads to more people being helped, as the sketch below illustrates.
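To make the arithmetic concrete, here is a minimal Python sketch of Singer’s comparison. The budget and unit costs come from the example above; the function name is my own invention, not from any EA tool:

    # Rough cost-effectiveness comparison using the figures quoted above.
    def people_helped(budget, cost_per_person):
        """How many people a fixed budget can help at a given cost per person."""
        return budget // cost_per_person

    budget = 40_000  # roughly the cost of training one seeing-eye dog

    print(people_helped(budget, 50))  # 800 surgeries at the $50 estimate
    print(people_helped(budget, 20))  # 2,000 surgeries at the $20 estimate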

This concept of prioritization is the most important, but others are worth enumerating. Effective altruists care about all people equally, not just those in their own country or culture; this is the main reason they so often focus on the developing world. They are also open to any cause that has the potential to be beneficial; for instance, they are often concerned with AI safety, which will be discussed later. Finally, while effective altruism is frequently concerned with charitable donations, it is open to examining other kinds of assistance, such as volunteering time.

In order to measure the effectiveness of charitable interventions, effective altruism advocates the use of various quantitative measures such as:

·        Cost-effectiveness estimates: These cover how efficient certain interventions are for a given amount of money. In the previous example, this would be how many cases of blindness from trachoma could be prevented per dollar invested.

·        Disability-adjusted life years (DALYs): This measures the burden of disease on health. One DALY represents a year of life lost to illness or disability.

·        Quality adjusted life years (QALYs): This measures the value of each year of healthy life. One QALY represents one year in perfect health.

These measures can be used in conjunction with another tool called decision theory. Decision theory evaluates different choices by examining expected utility, which is the utility of an outcome multiplied by the probability of that outcome, summed over all possible outcomes. In this case, utility might be measured in DALYs or QALYs. We then simply make the choice with the highest expected utility. One interesting consequence of this approach is that it provides a way to make the utilitarian perspective in ethics mathematical, at least in theory. There are still difficulties, such as how to measure utility, how to aggregate utility across individuals, and how to deal with ignorance and bias.
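As a minimal sketch of that calculation, with invented probabilities and QALY values purely for illustration (the function and variable names are mine, not from the decision theory literature):

    # Toy expected-utility comparison between two hypothetical interventions.
    def expected_utility(outcomes):
        """Sum of probability * utility over an intervention's possible outcomes."""
        return sum(p * u for p, u in outcomes)

    # Each outcome is (probability, QALYs gained if it occurs); numbers are made up.
    intervention_a = [(0.9, 10), (0.1, 0)]   # reliable, modest benefit
    intervention_b = [(0.2, 100), (0.8, 0)]  # long shot, large benefit

    print(expected_utility(intervention_a))  # 9.0 expected QALYs
    print(expected_utility(intervention_b))  # 20.0 expected QALYs

Here decision theory would favor the second intervention, though the caveats above (measuring and aggregating utility, ignorance, bias) still apply.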

References:

·        https://www.ted.com/talks/peter_singer_the_why_and_how_of_effective_altruism/transcript?language=en

·        https://en.wikipedia.org/wiki/Disability-adjusted_life_year#:~:text=The%20disability%2Dadjusted%20life%20year,life%20expectancy%20of%20different%20countries.

·        https://en.wikipedia.org/wiki/Quality-adjusted_life_year

·        https://plato.stanford.edu/entries/rationality-normative-utility/

·        https://en.wikipedia.org/wiki/Expected_utility_hypothesis

 Q1 (33 pts. max, 16.5 min) Prepare notes on a code of ethics (which will be your case study for this HW).

Online students: post your notes to your blog. Your notes should include the following.

  • A link or other citation to the case you are using, or if it is from personal experience, point that out.
  • A list of 8 or more important facts about the case, in your own words. You can refer to these as reminders when you tell your group members about the case.
  • A list of questions (3 or more) you could ask your group members in order to get an interesting and enlightening discussion going (for in-class students), or that you could consider yourself or ask someone else about (for online students); see the “Questions to ask during discussion” tab on the course web page for some suggestions in developing your discussion questions.
  • A 4th discussion question about how computer security relates to or could relate to the case. The computer security question could be about hacking, viruses or worms, theft of information, piracy, abuse of privileges, destruction of assets, information privacy, disruption of operations, unauthorized access, corporate abuse of information or computing, government abuse of information, physical harm, or any other issue in the general area of computer security.

Note: Professional neatness and clarity of format counts!

  • Add the following three additional questions to your list of questions:
    • What does virtue ethics say about this case?
    • What does utilitarianism say about this case?
    • What does deontology say about this case?
  • Professional neatness and clarity of format counts! Follow this example.


Answer: My ethical code is located at https://www.psychiatry.org/File%20Library/Psychiatrists/Practice/Ethics/principles-medical-ethics.pdf

 

Eight important facts are:

1.)     Psychiatrists are forbidden from gratifying their own needs by exploiting the doctor-patient relationship.

2.)     Psychiatrists are required to be competent and honest in all their professional dealings and report other psychiatrists who don’t live up to those standards.

3.)     Psychiatrists are required to safeguard patient privacy.

4.)     Psychiatrists are required to support medical research and medical education.

5.)     Psychiatrists should be free to serve where they wish except in the case of emergencies.

6.)     Psychiatrists are required to support the improvement of public health.

7.)     Psychiatrists are required to make responsibility to the patient their paramount concern.

8.)     Psychiatrists are required to support access to medical care for everyone.

 

Three questions to ask about the case:

1.)      Many people don’t take codes of ethics very seriously, do you think most psychiatrists take this code very seriously?

2.)     What ways can psychiatrists support access to medical care for everyone?

3.)     Sometimes psychiatrists can obtain a position of great psychological influence over a patient because of the nature of their relationship. How can they avoid influencing them in an unethical manner?

A fourth question related to computer security:

4.)     Do you think psychiatrists do a good job protecting patient privacy from cyberattacks? What kind of threats might exist for the medical systems they use to store patient information?

 

Three additional standard questions:

 

What does virtue ethics say about this case?

Answer: Virtue ethics would probably summarize a lot of these qualities as forms of kindness and selflessness. Essentially, if a psychiatrist were kind and selfless, they would probably fulfill this code automatically.

What does utilitarianism say about this case?

Answer: This code is essential for the psychiatric profession to exist. Psychiatrists must be ethical and trustworthy, or they could not do their jobs. So a utilitarian would point out that this code is good because it enables the profession to exist.

What does deontology say about this case?

Answer: Much of the code is about patient rights and making sure the psychiatrist does not violate them. So from a deontological perspective, this code is all about fulfilling what some would view as an ethical duty to the patient. Furthermore, ethical codes themselves are deontological by definition, since a code is a list of rules that an individual is obligated to follow.

Code of ethics for banking

 

1.       Transparency: It is essential for those in the banking industry to promote transparency by disclosing all relevant information about their products and services to their valued customers.

 

2.       Fairness: Banking professionals should treat their customers with fairness and equity, without any discrimination based on race, gender, socioeconomic status, or any other characteristics.

 

3.       Privacy and Security: The privacy and security of customers should be safeguarded by those in the banking industry by adopting robust security policies, adhering to relevant data protection laws, and ensuring that their customers' sensitive information remains confidential.

 

4.       Responsiveness: Banking professionals should be responsive to their customers' concerns and proactive in anticipating their needs. They should be courteous and respectful while dealing with them and take prompt action to address their issues.

 

5.       Integrity: Banking professionals must uphold the highest standards of professionalism and honesty. They should follow ethical practices such as disclosing conflicts of interest, avoiding bribes, and maintaining accurate records of all financial transactions.

 

6.       Social Responsibility: Banking professionals should recognize that they are part of a broader community and take action to improve it. They can do this by promoting financial literacy, supporting financial inclusion, and investing in socially responsible initiatives.

 

7.       Innovation: Bankers should embrace innovation and adapt to emerging technologies. They should explore new ideas that enhance their customers' experience and improve efficiency within the banking industry.

 

Q1. Proposal for the Ten Commandments of Computer Ethic

    • What do you think of these?

Answer: These are good rules. Honestly, they seem like common sense; you can’t really argue with them.

 

Q2. What about this code of ethics is …

. . .deontologically based?

. . .utilitarianism based?

. . .virtue ethics based?

Answer: It’s deontologically based in the sense that it discourages destructive behavior by saying it’s illegal, and also by alluding to personal responsibility with the phrase “this is your park.” It’s utilitarian in its emphasis on the destructive results of removing or destroying animals, plants, or rocks. It’s virtue ethics based in categorizing this destructive behavior as harmful.

 

Q3. How could we redesign this code of ethics to be more

utilitarian?

deontological?

Humean virtue based (benevolent)?

Answer: We could make the code more utilitarian by emphasizing the benefits to park visitors of preserving the park. We could make it more deontological by emphasizing a moral duty to preserve nature. Virtue ethics could be emphasized by talking about how thoughtful and responsible it is to preserve parks for others to enjoy.

Q4. Why is a law legitimate (or not)?

Answer: Laws are legitimate if they are morally correct. I don’t really agree with legal positivism, since society is frequently wrong about matters that end up as laws.

 

Q5. "lex iniusta non est lex"

             Can you guess any words?

Answer: Probably something like “an unjust law is not a law”.

 

 

 

 

 Facts of the case:

My case is derived from a scenario in the interactive video “The Lab”. I played as Aaron Hutchins, a Principal Investigator in a lab who is supervising several scientists. He has to find time to mentor his scientists while handling his research, teaching, and personal life. He discovers that one of his postdocs, Greg, has committed research fraud and he has to find a way to handle the accusation.

Analysis:

1.)     Greg didn't set out to produce fraudulent research at first; he simply justified cutting corners when things got tough. Is this usually how unethical behavior starts? Do people just want to take the easy way out?

Answer: Yes, I think this is how unethical behavior frequently starts. Most people want to think of themselves as good people, so they don't start out deliberately planning to do something wrong. It's more likely that, when tempted, they rationalize whatever benefits them in the short term.

 

2.) Aaron helped his scientists act correctly in his lab by taking the time to mentor them. Do personal relationships with good people help us to remain ethical?

Answer: Yes, there is evidence that personal relationships can shape how we act under certain circumstances. They can help us gain feedback about our personal perceptions but also just act as a source of strength when things get tough. In the video we don’t see Aaron speaking with Greg much about his problems. It’s possible that might’ve contributed to Greg’s fraud.

 

3.) Scientists are under a lot of pressure to publish frequently in order to further their careers which may pressure them to behave unethically. Is a better system possible?

Answer: There probably is, but I'm not sure what it would look like. There is a vibrant field called metascience that asks how we can find better ways to do science. If we could come up with a metric that measures research quality, it might be a better way to evaluate scientists than simply counting publications; the toy comparison below shows why raw counts can mislead.
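To make that concrete, here is a minimal sketch in Python comparing a raw publication count with the h-index, a real and widely used metric (the largest h such that a researcher has h papers with at least h citations each). The researcher names and citation numbers are invented for illustration.

    # Toy comparison: publication count vs. h-index (all citation data invented).
    def h_index(citations):
        """Largest h such that the researcher has h papers with >= h citations."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
        return h

    # Hypothetical researchers: one publishes many low-impact papers,
    # the other publishes a few high-impact ones.
    researchers = {
        "quantity_focused": [2, 1, 1, 0, 0, 0, 0, 0, 0, 0],  # 10 papers
        "quality_focused":  [40, 25, 12, 9],                 # 4 papers
    }

    for name, cites in researchers.items():
        print(f"{name}: {len(cites)} papers, h-index = {h_index(cites)}")
    # Counting papers favors quantity_focused (10 vs. 4), but the h-index
    # favors quality_focused (4 vs. 1), rewarding impact over volume.

Of course, the h-index has well-known problems of its own; the point is only that metrics better than raw publication counts are possible.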

 

4.)  Aaron tried to foster a culture of openness and transparency in his lab in order to prevent problems from occurring. Can fostering this culture in other kinds of organizations prevent security problems from occurring?

Answer: I think so. The book "Engineering Trustworthy Systems" by O. Sami Saydjari speaks a lot about this topic. He specifically calls out the harms that can come from keeping secrets and how secrecy can foster unethical behavior and impair security.

 

 

My conclusions:

While this scenario is fictional, I think it provides a great illustration of the importance of organizational culture and of the working relationships people develop. By staying involved in what was going on with his scientists, Aaron helped them act correctly when they encountered difficulties. While the fraud Greg committed was bad, Aaron identified it before the damage could have gotten much worse. This kind of openness can benefit security because it lets people identify problems in an organization much more easily.

Future environment:

Right now, the AI system ChatGPT has made a big impact on anyone involved in knowledge work, and there is evidence that it will have an especially big impact on researchers. Over time it's likely to get better and to enable more aspects of research to be automated. While this would vastly improve productivity and enhance scientific discovery, it might also lead to scientists taking credit for work they did not do.

Future scenario:

It's likely that AI will become much more productive at scientific research than the scientists themselves. How then should we measure the effectiveness of researchers? This probably doesn't seem serious to those outside the field, but it is an important question for how governments and universities allocate scarce research funding. We will need new ways to evaluate scientists and new ways to support them. Generative AI systems will also likely be able to produce more sophisticated fraud, so identifying misconduct will probably become more difficult.

Q1. Provide an update on your plans regarding how or whether to use ChatGPT or other AI tools. Explain and justify. Recall  the 1,390 word short story the instructor produced in about 20 minutes using ChatGPT. Recall the information we provide about using these tools.

Answer:
 

I plan on using ChatGPT for editing my writing and improving its style, but I will not be using it to generate content. Since LLMs like ChatGPT have a known problem with fabricating information, I don't fully trust them with an assignment like this. Also, I know the material well enough that I don't really need the help.


Q2. For your ethics-related term project (see “Course Information” tab for details): Let us continue to develop it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up another 350 words or more (per person if a group project) of new content if your project is a paper that must be written from scratch (for example if it is about yourself). If you are using an automatic text generator to help write it it would probably be quite a bit more than that amount of words. If the progress was code development or website creation, give the new code. If slides, paste the new text from the new slides into your blog and describe any images or graphics. If something else, explain specifically what you did, being sure to give examples if that makes sense. Label this HW consistently as in this example. IMPORTANT: No credit on this HW if you copy-paste material from an earlier HW. List any sources you used, such as websites or pages, ChatGPT or other tools, books, or whatever it might be. This takes some effort and it counts as part of your work.

Answer:



Introduction

Effective altruism (EA) is a growing intellectual and charitable movement that tries to use reason and evidence to determine the best ways to have a positive impact on society. This essay explores the movement and provides an overview of the issues related to it.

We live in a world with many worthwhile causes, but our resources for addressing them are very limited. EA therefore argues that we should rationally determine which causes offer the largest positive impact and direct our resources to them. The resources in question vary but can include money, career choices, or political activism.

Many thinkers have influenced the movement, but the most influential by far has been the philosopher Peter Singer. Singer's work takes a utilitarian perspective and is particularly concerned with global poverty. He has argued that since those in rich countries can positively affect the lives of those in poor countries by donating relatively small amounts of money, they are morally obligated to do so. A number of organizations have also been founded to support EA. One of the earliest was the Centre for Effective Altruism (CEA), founded in Oxford in 2009 to support organizations within the EA community. There is also GiveWell, founded in 2007, which evaluates charities based on EA ideas and provides recommendations for donors. 80,000 Hours is another organization, focused on the ethical impact of various career choices; it recommends careers with the biggest positive ethical impact.

Since EA is concerned with measuring ethical impact, it is naturally heavily influenced by the utilitarian tradition in ethics. Decision theory is another field important to EA: it seeks to maximize expected utility given a set of possible outcomes with associated probabilities, as the short sketch below illustrates. Finally, EA is also influenced by the heuristics-and-biases research program in cognitive science, which shows the limitations of human reasoning and provides a justification for relying on more evidence-based approaches.
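As a concrete illustration of the decision-theory idea, here is a minimal Python sketch of expected-utility maximization. The actions, outcome probabilities, and utility values are all invented for illustration and do not come from any real EA analysis.

    # Expected-utility maximization: pick the action whose probability-weighted
    # utility is highest. All numbers below are hypothetical.
    def expected_utility(outcomes):
        """Sum of probability * utility over an action's possible outcomes."""
        return sum(p * u for p, u in outcomes)

    # Each action maps to a list of (probability, utility) pairs.
    actions = {
        "fund_proven_program": [(0.9, 10), (0.1, 0)],   # reliable, modest payoff
        "fund_risky_research": [(0.3, 50), (0.7, -5)],  # uncertain, large upside
    }

    for name, outcomes in actions.items():
        print(f"{name}: expected utility = {expected_utility(outcomes):.1f}")

    best = max(actions, key=lambda a: expected_utility(actions[a]))
    print("Decision theory recommends:", best)  # fund_risky_research (11.5 vs. 9.0)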

One final concern of EA has been existential risks: events that threaten the existence or long-term potential of humanity. Some examples include nuclear war, asteroid impacts, or renegade artificial intelligence. Effective altruists argue that these events have been neglected and deserve special consideration because their expected impact can be enormous even when their likelihood is low, as the toy calculation below shows.
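The existential-risk argument is essentially the same expected-value arithmetic pushed to extremes. Here is a toy calculation; all probabilities and impact figures are invented, not real estimates.

    # Expected value = probability * impact. A tiny probability times an
    # enormous impact can exceed a near-certain but modest benefit.
    causes = {
        "proven_health_charity": {"p": 0.95,   "impact": 1_000},
        "x_risk_reduction":      {"p": 0.0001, "impact": 10_000_000_000},
    }

    for name, c in causes.items():
        print(f"{name}: expected value = {c['p'] * c['impact']:,.0f}")
    # x_risk_reduction: 1,000,000 vs. proven_health_charity: 950, despite the
    # 0.01% probability -- which is the intuition described above.

Whether one accepts this reasoning depends on how far one trusts expected-value calculations at such extreme probabilities, which is itself debated within EA.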

 

Sources:

·        https://en.wikipedia.org/wiki/Effective_altruism

·        https://en.wikipedia.org/wiki/Peter_Singer

·        https://en.wikipedia.org/wiki/Centre_for_Effective_Altruism

·        https://www.lesswrong.com/tag/decision-theory

·        Existential risks: threats to humanity's survival (existential-risk.org)

Q3. Explain what needs to be done next on the project. Put this in your blog, labeling it consistently as in 
this example.

Answer: 

The next step is to work on the "Main Ideas" section of my paper. 


 Q1 (33 pts. max, 16.5 min) Prepare case notes on an ethics case related to ethics in science or engineering research. An ethics case is an example, event, experience, legal case, medical case, and so on from real life, a movie, your imagination, and so on, which has some ethics related aspects to consider. Online students: post your notes to your blog. Your notes should include the following.
  • A link or other citation to the case you are using, or if it is from personal experience, point that out.
  • A list of 8 or more important facts about the case, in your own words. You can refer to these as reminders when you tell your group members about the case.
  • A list of questions (3 or more) you could ask your group members in order to get an interesting and enlightening discussion going (for in-class students), or that you could consider yourself or ask someone else about (for online students); see the “Questions to ask during discussion” tab on the course web page for some suggestions in developing your discussion questions.
  • A 4th discussion question about how computer security relates to or could relate to the case. The computer security question could be about hacking, viruses or worms, theft of information, piracy, abuse of privileges, destruction of assets, information privacy, disruption of operations, unauthorized access, corporate abuse of information or computing, government abuse of information, physical harm, or any other issue in the general area of computer security.

 

Answer: My source is https://ori.hhs.gov/TheLab/TheLab.shtml                                                                            

Eight important facts are:

1.)     This case is fictional, and I played the role of Aaron Hutchins, the PI of a lab that has made several recent high-profile discoveries.

2.)     Being the PI of a lab is hectic and Aaron frequently has to juggle all the responsibilities that come with it.

3.)     Aaron frequently has grad students interrupt him by asking for help when he’s in the middle of important work.

4.)     In this scenario, he has the option of brushing off these requests or taking time to try to resolve them.

5.)     One of Aaron's postdocs, Greg, has been having a lot of positive results with his experiments recently, and Aaron has been pretty happy about that.

6.)     One of his grad students, Kim, comes to him with discrepancies between Greg's paper and what was actually recorded in the lab. She doesn't want to accuse Greg of misconduct but feels obligated to report the problem.

7.)     Aaron has the option to handle this matter himself or report it to the Research Integrity Officer (RIO).

8.)     It turns out Greg was actually producing fraudulent research. Although this negatively impacts the lab, it would have been far more damaging had Aaron not had Kim's accusations investigated.

 

Three questions to ask about the case:

1.)     Greg didn't set out to produce fraudulent research at first; he simply justified cutting corners when things got tough. Is this usually how unethical behavior starts? Do people just want to take the easy way out?

2.)     Aaron helped his scientists act correctly in his lab by taking the time to mentor them. Do personal relationships with good people help us to remain ethical?

3.)     Scientists are under a lot of pressure to publish frequently in order to further their careers which may pressure them to behave unethically. Is a better system possible?

A fourth question related to computer security:

4.)     Aaron tried to foster a culture of openness and transparency in his lab in order to prevent problems from occurring. Can fostering this culture in other kinds of organizations prevent security problems from occurring?

Three additional standard questions:

1.)     What does virtue ethics say about this case?

Answer: I think virtue ethics really highlights the role of personal qualities in producing ethical behavior. Aaron takes the time to mentor his scientists, which helps them behave correctly and also to identify misconduct when it occurs. In Greg's case, his personal qualities led him to cut corners rather than work through the problems with his experiments.

2.)     What does utilitarianism say about this case?

Answer: There was definitely a cost/benefit analysis to be made by some of the people involved. Kim was aware that she could be ruining Greg's career by coming to Aaron with what she knew, but she also understood that the integrity of the lab's research was the more important good. Utilitarianism views her decision as ethical, even though it ended up harming Greg, because a more important outcome was obtained.

3.)     What does deontology say about this case?

Answer: Deontology probably makes the simplest and most direct statement on this case: Greg lied about his research results, and lying is wrong. Virtually every ethical system treats lying as wrong. Furthermore, Kim had the choice of being honest about what she knew or lying by omission. I think deontology would view Kim's actions as ethical since she acted according to the rule of honesty.

 
  1.  I played as the postdoc Rao again but this time I chose a different path.
  2.  I chose to not ask for clarification when the PI asked if I was doing something wrong. I just kept quiet. 
  3. When Rao’s wife asked him to come home from the lab, I decided to not skip the time point.
  4. When asked to go to the RCR training I decided not to go.
  5. I chose not to get some food and kept working on my experiment.
  6. I chose not to go to the RCR training when it was offered. 
  7. When Rao was asked to join his wife for dinner with her parents, I decided to cancel my experiment and start again the next day. 
  8. As a result of my choices, Rao was behind on his experiment and decided to cut corners and make up data.
  9.  Even though I decided to help Kim turn in Greg for misconduct, eventually Rao's misconduct was discovered. He was then fired from the lab and forced to return to his country since he was on a work visa. 
  10.  Overall, I think this path showed the importance of following protocol: it helps researchers stay on track. It also shows the importance of finding balance in your life as a researcher. Not only did his excesses cost Rao his career, they severely damaged his relationship with his wife once she discovered what he had done.

 
  1.  I played as the postdoc Rao.
  2.  I chose to ask for clarification when the PI asked if I was doing something wrong.
  3. When Rao’s wife asked him to come home from the lab, I decided to not skip the time point.
  4. When asked to go to the RCR training I decided not to go.
  5. I chose not to get some food and kept working on my experiment.
  6. When Rao was asked to join his wife for dinner with her parents, I decided to perform the cell harvest on time and join them later.
  7. I decided to go to the lab in time to do the harvest.
  8. When given the chance to go to the RCR training again I decided to take it.
  9. I like how this example demonstrates how to successfully navigate all the difficulties associated with being a postdoc.
 Q3 (34 pts. max, 17 min) Finally, write up your case on another posting to your blog with the following subheadings:
  • “The facts of the case.” Here is where you describe the case in your own words.
  • “Analysis.” Examine the case in terms of the questions and/or discussion. If the analysis is simple or obvious, then address each of the utilitarian ethics perspective, the deontological ethics perspective, and the virtue ethics perspective.
  • “Conclusions.” Your analysis, opinions, and conclusions about the case. Any opinions from your discussion group members that you disagree with and why. Three sentences of average length or more.
  • “Future environment.” Describe your vision of a future in which technology is more advanced than today, or society has changed in some significant way. Three sentences of average length or more.
  • “Future scenario.” Describe how this ethical case (or an analogous one) would or should play out in the environment of the future, and give your opinions about it. Three sentences of average length or more.

Answer:

The facts of the case. Amazon is the world's leader in online retail. However, its superior performance requires it to be extremely demanding of its workforce, which has resulted in workers suffering various physical and mental hardships and numerous complaints of abuse. Furthermore, Amazon has paid barely any taxes during this time, which many people have criticized as unethical.

 

Analysis.  

1.)     Question: These workers all freely chose to work at Amazon. Does that make Amazon's treatment of them acceptable?

Answer: Objectively, Amazon is inflicting severe physical and mental stresses on its workers that cause them suffering. Every ethical perspective regards inflicting suffering on people as unethical, so yes, I would say it's unethical. In any case, I don't think you can talk about free choice when there is such a huge difference in power between Amazon and its workers.

2.)     Question: Amazon views its practices as necessary to provide the best service to its customers; if it did not do these things, it might not be able to hire so many people. Does that make Amazon's treatment of its workers acceptable?

Answer: I don't think so. By that logic, paying workers even less would let Amazon hire even more people, and that would be ethical too. It doesn't make sense that treating workers poorly could be ethical under any circumstances. Besides, labor-saving technology would let Amazon provide the best service to its customers in the long term, and abusing workers probably disincentivizes the company from investing in it.

3.) Question: Amazon barely pays taxes because they continually reinvest profits to expand their business. Considering this, is it still unethical that they barely pay taxes?

Answer: I actually don't think this is unethical on their part. They are paying what is legally required, so it never made sense to me to call this unethical behavior. If someone thinks it is unethical, I would ask why it is not also unethical for ordinary people to decline to pay more taxes than they owe. In any case, Amazon avoids taxes by reinvesting in the company to improve productivity. That seems like a positive and desirable result, not something to criticize.

 

4.) Question: Amazon employees are under tremendous pressure to perform. Good security often gets in the way of good performance. Do you think Amazon is more likely to have bad security since they would probably view this as an obstacle to performance?

Answer: I do think this is a concern. Security engineers routinely deal with workers resisting security procedures because the procedures get in the way of their jobs. Since Amazon's office workers report extreme pressure to perform, it seems reasonable that they might cut corners on security.

Conclusion: The consensus seems to be that Amazon's treatment of its workers is unethical, and I wholeheartedly agree. Every major ethical system says you should treat people with respect and concern for their well-being. Saying that high performance requires treating workers this way reminds me of similar arguments made by factory owners during the Gilded Age. Those arguments turned out to be false: factory productivity skyrocketed even as conditions improved.

Future environment: AI is currently progressing at an exponential pace, and this will likely enable businesses to automate a huge number of tasks that previously required human input. Automating physical labor has been much more challenging, since movement has always been difficult for machines to handle. I think it's likely this will change someday, perhaps as neuroscience progresses and we better understand how our own brains handle movement.

Future scenario: In a world with highly advanced robotics, worker mistreatment would largely disappear simply because there would be few human workers left. The machines involved wouldn't suffer, so Amazon or other companies could run them as hard as they liked. If companies still paid little in taxes, though, that could become a real ethical problem: a society without much human labor would probably require a universal basic income, which would have to be financed through taxes.

 
