My Answers for HWproj5
Apr. 6th, 2023 11:54 pm
- Explain how you used ChatGPT or other AI tool(s) to help for this HW’s update to your project, or note if it wasn’t used.
- Continue to develop the project, as we build it step by step over the semester so that it will be manageable rather than a crunch at the end, as follows. Write up another 350 words or more (per person if a group project) of new content if your project is a paper that must be written from scratch (for example if it is about yourself). If you are using an automatic text generator to help write it it would probably be quite a bit more than that amount of words. If the progress was code development or website creation, give the new code. If slides, paste the new text from the new slides into your blog and describe any images or graphics. If something else, explain specifically what you did, being sure to give examples if that makes sense. Label this HW consistently as in this example. List any sources you used, such as websites or pages, ChatGPT or other tools, books, or whatever it might be. This takes some effort and it counts as part of your work. If your project is not a paper, explain what you did on your blog. For team projects, focus on your own activities although you can also discuss the overall effort to provide some context. Explain and give evidence (for example, if a web site, you could provide a link to it; if software, give the code; if a skit, give some of the script or list rehearsal or meeting times; if artwork, provide an image showing its status, etc.). If you’re not sure what to do, see me or send me an email and I will try to suggest something.
Answer:
I used ChatGPT to find the sources that I then used to write this assignment.
Causes
Effective altruism advocates prioritizing charitable giving toward causes that can be demonstrated to be the most effective. Given that goal, it is unsurprising that much of the movement’s effort is directed toward the developing world, especially global health. Some of these causes include deworming, water sanitation, malaria nets, and vitamin supplementation.
Parasitic worm infections are a serious problem affecting roughly 835 million children worldwide. These infections can interfere with nutrient uptake and impair physical and mental development. Deworm the World, one of the top deworming charities, has delivered over 1.5 billion treatments globally at a cost of about 50 cents per treatment, although treatments are delivered free of charge to affected children. As such, a tremendous amount of effective altruist advocacy and support has been directed toward it.
One of the top ways to prevent the spread of malaria is with insecticide-treated mosquito nets. Nets cost money and there are challenges in getting them to the people who need them, but they have been shown to significantly reduce the likelihood of malaria infection. Childhood malaria is linked to reduced IQ and poorer health in adulthood, so preventing it leads to better adult health and better economic outcomes. Overall, net distribution has been shown to be highly cost-effective, making it probably the cause most synonymous with effective altruism.
Finally, there is water sanitation, another major problem for the developing world. A cost-benefit analysis of water and sanitation interventions indicates that some are highly cost-effective for controlling diarrhea among children under five, on par with oral rehydration therapy. The benefits include productive time regained, health care costs avoided through reduced illness, and deaths prevented. The results show that water and sanitation improvements are cost-beneficial in all developing-world sub-regions.
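The prioritization logic behind these causes can be sketched as a simple cost-effectiveness ranking. All figures below are illustrative placeholders I made up for the sketch, not actual charity or GiveWell estimates:

```python
# Rank hypothetical interventions by cost per unit of health benefit.
# Costs and DALY (disability-adjusted life year) figures are illustrative only.
interventions = {
    "deworming treatment": {"cost_usd": 0.50, "daly_averted": 0.01},
    "malaria net": {"cost_usd": 5.00, "daly_averted": 0.05},
    "water sanitation": {"cost_usd": 20.00, "daly_averted": 0.10},
}

def cost_per_daly(entry):
    # Dollars spent to avert one DALY; lower is more cost-effective.
    return entry["cost_usd"] / entry["daly_averted"]

# Sort interventions from most to least cost-effective.
ranked = sorted(interventions.items(), key=lambda kv: cost_per_daly(kv[1]))
for name, entry in ranked:
    print(f"{name}: ${cost_per_daly(entry):.2f} per DALY averted")
```

The point of the sketch is just that effective altruists make the comparison explicit: whichever intervention averts the most harm per dollar gets priority.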
AI and Existential Risk
One other area of focus for EA is what is known as existential risk: risks that threaten human survival. As mentioned earlier, under standard theories of expected value, existential risks are ethically important concerns even if their likelihood is small, because their potential impact is so large. One especially interesting category of existential risk is risk from rogue AI. Essentially, if an AI could gain the ability to improve its own cognitive abilities, it could quickly move to a state beyond human comprehension, an idea known as the Singularity. If such an AI does not share human values, it is very likely it could act against human interests and thereby threaten human survival. The problem of how to give an AI human values and make it safe is known as the alignment problem.
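The expected-value argument can be made concrete with a back-of-the-envelope calculation. The probabilities and stakes below are purely illustrative, not estimates from the literature:

```python
# Expected value: a tiny-probability catastrophe can outweigh a
# near-certain smaller harm. All numbers are illustrative only.
def expected_lives_lost(probability, lives_at_stake):
    return probability * lives_at_stake

# A familiar risk: fairly likely, but bounded impact.
everyday = expected_lives_lost(0.5, 1_000_000)
# An existential risk: very unlikely, but everyone is at stake.
existential = expected_lives_lost(0.0001, 8_000_000_000)

print(f"everyday risk: {everyday:,.0f} expected lives lost")
print(f"existential risk: {existential:,.0f} expected lives lost")
```

Even at a 1-in-10,000 probability, the existential scenario dominates in expectation, which is why EA treats such risks as priorities despite their small likelihood.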
AI alignment is a large subject that consists of several different issues. The first is known as the black box problem. Essentially, we can easily specify the inputs and outputs of an AI system, but we don’t know the process for moving from one to the other, since it is so complicated and not explicitly programmed. This means we can’t evaluate the system and determine where it is potentially going wrong. Solving this problem will be crucial to evaluating the safety of specific AI models.
Another issue is consistency. Wixom et al. identify three consistency concerns for AI alignment: consistency of the AI with reality, consistency of the AI solution with the model, and consistency of the AI with stakeholder needs.
Honesty is another interesting issue. AI researchers distinguish between “truthfulness” and “honesty”: truthfulness refers to the AI making objectively true statements, whereas honesty refers to the AI stating what it believes to be true. Unfortunately, research indicates that current AI systems hold no stable beliefs, so it may not be possible to evaluate their honesty.
Auditing AI models, ensuring they behave as intended, is also an important part of AI alignment. However, it is difficult for humans to evaluate the behavior of an AI when the system outperforms humans in a given area, a problem known as scalable oversight. Examples include summarizing books, writing secure and defect-free code, and predicting long-term outcomes. In some cases, AI systems have even attempted to deceive researchers into thinking that a measured task was achieved.
Finally, there is the problem of emergent behaviors. AI systems are governed by complex rules that can lead to behavior that was never explicitly programmed, so countering harmful behavior that could emerge spontaneously is an important concern. An example is power-seeking behavior, in which an AI attempts to gain control over its environment as the most effective way to accomplish its task. Preventing power-seeking behavior will be extremely important as AI systems become progressively more powerful.
Overall, the task of AI alignment requires us to be very specific about what humans value. Not only must we solve ethical issues, but we must specify the solution precisely enough to express it as computer code. So, at least in the context of AI, ethics must ultimately become a branch of engineering and not just philosophy. Effective altruism argues that this is an extremely important task to focus on, since it may determine the future of billions of lives.
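To see what “expressing a value as code” might mean, here is a toy sketch, not a real alignment technique, of turning the vague norm “don’t grab excess resources” into an explicit penalty term in an agent’s objective. All names and numbers are my own illustrative assumptions:

```python
# Toy illustration: a vague human value ("don't seek power") must become
# an explicit, computable penalty before an AI objective can respect it.
def reward(task_progress, resources_acquired, needed=1, penalty=10.0):
    # Reward progress on the task, but penalize every unit of resources
    # acquired beyond what the task actually needs.
    excess = max(0, resources_acquired - needed)
    return task_progress - penalty * excess

# An agent that grabs extra resources scores worse despite equal progress.
print(reward(task_progress=5, resources_acquired=1))
print(reward(task_progress=5, resources_acquired=3))
```

The hard part, of course, is that real human values resist this kind of precise formalization, which is exactly why alignment is considered such a difficult problem.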
References
- https://pubmed.ncbi.nlm.nih.gov/10191558/
- https://www.academia.edu/5494398/Global_cost_benefit_analysis_of_water_supply_and_sanitation
- https://effectivealtruism.nz/deworm-the-world/
- https://www.evidenceaction.org/dewormtheworld/
- https://www.effectivealtruism.org/articles/ea-global-2018-amf-rob-mather
- https://existential-risk.org/concept.pdf
- https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
- https://en.wikipedia.org/wiki/AI_alignment
- https://en.wikipedia.org/wiki/The_Alignment_Problem
- https://cisr.mit.edu/publication/2020_1101_AI-Alignment_WixomSomehGregory
- https://spyscape.com/article/do-ais-seek-power