Q1. Are they clear categories, or a continuum?
What is the current state of the technology?
Is there a line to be drawn? Where?
Answer:
I think it's clearly a continuum. Many military weapons already have computer chips in them that guide their behavior, that guidance is already quite sophisticated, and we should only expect it to improve. If a line is to be drawn, I would draw it where a weapon no longer requires any human oversight at all.
Q2. Do you think lethal fully autonomous robots should be banned?
Answer:
I don't really see the point of a ban. Dead is dead regardless of how it happens. However, there should be some sort of ethical system guiding these weapons; we don't need robots committing war crimes.
Q3. 1. The ethics argument
“… we should not allow machines to make life-taking decisions for which others – or nobody – will be culpable.”
2. The pragmatic argument
“… lethal autonomous weapons, selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.”
What is your take on these arguments?
Answer:
I think the ethics argument is a bit hollow; quite often no one is held culpable for bad decisions made during a war anyway. The pragmatic argument misses an important point about combat robots: whichever country fields them first will have an enormous advantage over those that don't, and that creates powerful incentives for everyone else to develop them as well. So I think we will have them at some point.
Q4. Is a mine or booby trap a military robot?
Answer:
I don’t think so. I think something has to move through the physical world in order to be considered a robot.
Q5. What are the ethics of each of these?
Answer:
I think these should be deployed carefully, away from civilians, and there should also be a serious effort to clear them once the fighting ends.
Q6. Should killer robots be banned also?
Answer:
Again, I don't think they will be. The advantages of using them would be too great.
Q7. What about autonomous military vehicles?
Personnel transport vehicles
Is human override ethically required?
Answer:
Someone should have the capability to override the vehicle, if only for safety reasons. Otherwise it would be like an autonomous car that the passenger had no way to control.
Q8. Tanks
Planes (i.e. drones)
Armed and personnel carrying
Ships
Submarines
Is human override ethically required?
Any differences?
Answer: Once again, as a purely pragmatic matter, any robotically controlled vehicle needs some sort of human override, if only for safety reasons. I don't see a meaningful difference between the vehicle types on this point.
Q9. What about military rescue operations?
Are there ethical considerations in automated rescue?
Deciding who to evacuate or treat first:
Is this an ethical issue or a software design issue?
Is this the same as civilian rescue operations?
Answer: I think there are ethical issues here. Situations like these may require difficult decisions about who is treated or evacuated first, so they should probably be guided by some established ethics of triage rather than left as an incidental software design choice.
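As a minimal sketch (the class, fields, and priority rule are hypothetical, not taken from any real system), encoding the triage ordering as an explicit, reviewable scoring rule makes the ethical choice a visible design decision rather than an accident of the implementation:

```python
from dataclasses import dataclass

# Hypothetical triage sketch: the priority rule below is an explicit,
# auditable design choice, not an implementation side effect.

@dataclass
class Casualty:
    name: str
    severity: int      # 1 (minor) .. 5 (critical)
    survivable: bool   # expected to survive if treated promptly

def triage_priority(c: Casualty) -> tuple:
    # Assumed policy: survivable casualties come before unsurvivable ones,
    # and more severe cases come before less severe ones.
    return (not c.survivable, -c.severity)

def evacuation_order(casualties: list[Casualty]) -> list[Casualty]:
    return sorted(casualties, key=triage_priority)

if __name__ == "__main__":
    wounded = [
        Casualty("A", severity=2, survivable=True),
        Casualty("B", severity=5, survivable=True),
        Casualty("C", severity=5, survivable=False),
    ]
    for c in evacuation_order(wounded):
        print(c.name)   # prints B, A, C under the assumed policy
```

Whether that particular ordering is the right one is exactly the ethical question; the software design question is only how to express it clearly.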
Q10. Robot ethics can be categorized as
Operational ethics
Everything is pre-programmed
Ethical agency
Robot needs to reason from principles
Is one of these harder than the other?
Answer:
Reasoning from principles is probably harder. The robot would have to handle many ambiguous cases that the programmers could not anticipate in advance, whereas pre-programmed operational ethics only has to cover the situations the designers explicitly foresaw.
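As an illustrative sketch (the action names, categories, and thresholds are all hypothetical), the difference might look like a fixed rule lookup versus a function that weighs general principles in situations it was never explicitly given:

```python
# Hypothetical contrast between the two categories in Q10.

# Operational ethics: every permitted action is pre-programmed.
PERMITTED_ACTIONS = {
    ("armed_combatant", "combat_zone"): "engage",
    ("civilian", "combat_zone"): "hold_fire",
    ("unknown", "combat_zone"): "hold_fire",
}

def operational_decision(target_type: str, location: str) -> str:
    # Anything outside the table defaults to the safest action.
    return PERMITTED_ACTIONS.get((target_type, location), "hold_fire")

# Ethical agency: the system scores actions against general principles
# (here, crude stand-ins for distinction and proportionality) in
# situations never enumerated ahead of time -- this open-endedness is
# where the ambiguous edge cases become hard.
def principled_decision(civilian_risk: float, military_value: float) -> str:
    proportional = military_value > 2 * civilian_risk  # assumed threshold
    return "engage" if proportional and civilian_risk < 0.1 else "hold_fire"
```

The first function can only fail by having a missing or wrong table entry; the second can fail in any situation where the chosen principles and thresholds don't capture what a human would judge to be right.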
Q11. One expert claims military robots can be more ethical than humans
Can you think of an argument for that?
Or a counterargument?
Answer:
I think it's true. A human soldier can always choose to behave unethically; you can never really be sure. Robot soldiers would have their ethics designed into them, so there could be no deviation from those rules. The counterargument is that if the rules weren't carefully designed, edge cases might still cause them to behave unethically.