Motivations and Risks of Machine Ethics

Description

Mind map on Motivations and Risks of Machine Ethics, created by Jassiel Barroso on 16/04/2020.

Resource summary

Motivations and Risks of Machine Ethics
  1. Categories of risk:
    1. Failure and Corruptibility: the risk that ethically aligned machines could fail, or be turned into unethical ones.
      1. Charging machines with ethically important decisions carries the risk of reaching morally unacceptable conclusions
        1. that would have been recognized more easily by humans.
      2. The simplest case of this is if the machine relies on misleading information about the situations it acts in,
        1. for example, if it fails to detect that there are humans present which it ought to protect (see the first sketch after this outline).
      3. If the moral principles or training examples that human developers supply to a system contain imperfections or contradictions, the robot may infer morally unacceptable principles.
      4. Many currently existing machines without the capacity for ethical reasoning are also vulnerable to error and corruption.
      5. Even where there are definite facts as to what the morally correct outcome or action would be,
        1. there is a risk that the automated system will not pursue that outcome or action, for one reason or another.
        2. Most humans have a limited sphere of influence, but the same may not be true for machines that could be deployed en masse while governed by a single algorithm.
    2. Value Incommensurability, Pluralism, and Imperialism: the risk that ethically aligned machines might marginalize alternative value systems.
      1. Pluralism maintains that there are many different moral values, where "value" is understood broadly to include duties, goods, virtues, and so on.
    3. Creating Moral Patients: the risk of creating artificial moral patients.
      1. While machine ethicists may be pursuing the moral imperative of building machines that promote ethically aligned decisions and improve human morality,
        1. this may result in us treating these machines as intentional agents, which in turn may lead to our granting them status as moral patients.
      2. Humans are both moral agents and moral patients.
        1. Moral agents: we have the ability to knowingly act in compliance with, or in violation of, moral norms, and we are held responsible for our actions (or failures to act).
        2. Moral patients: we have rights, our interests are usually thought to matter, and ethicists agree we should not be wronged or harmed without reasonable justification.
    4. Undermining Responsibility: the risk that our use of moral machines will diminish our own human moral agency.
      1. That is, it may undermine our own capacity to make moral judgements.
      2. There are three strands to this problem:
        1. Automated systems "accommodate incompetence" by automatically correcting mistakes.
        2. Even when the relevant humans are sufficiently skilled, their skills will erode as they are not exercised.
        3. Automated systems tend to fail in particularly unusual, difficult or complex situations, with the result that the need for a human to intervene is likely to arise in the most testing situations.
      3. These concerns are most relevant where the goal is for a machine to make ethical decisions alone, or where the decision-making process is entirely automated (including cases where the system is intended to function at better than human level).
      4. Ideally, such machines would also be able to recognize their own limitations and alert a human when they encounter a situation that exceeds their training or programming (see the second sketch after this outline).
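
First sketch. The "misleading information" point under Failure and Corruptibility can be made concrete with a minimal Python sketch. All names here are invented for illustration and not taken from the source: an ethically aligned rule ("never proceed when humans are present") still produces a morally unacceptable decision when the perception feeding it fails to detect the humans it ought to protect.

    # Hypothetical sketch: a correct ethical rule fed by faulty perception.
    from dataclasses import dataclass

    @dataclass
    class Perception:
        humans_detected: int          # what the sensors report to the machine
        humans_actually_present: int  # ground truth, unknown to the machine

    def decide_action(perceived_humans: int) -> str:
        """Ethically aligned rule: never proceed if any human is detected."""
        return "halt" if perceived_humans > 0 else "proceed"

    # Case 1: perception is accurate -> the rule protects the humans.
    accurate = Perception(humans_detected=2, humans_actually_present=2)
    print(decide_action(accurate.humans_detected))    # halt

    # Case 2: perception fails (occlusion, sensor fault) -> the same rule,
    # applied faithfully, reaches a morally unacceptable conclusion that a
    # human observer would likely have caught.
    misleading = Perception(humans_detected=0, humans_actually_present=2)
    print(decide_action(misleading.humans_detected))  # proceed  <- failure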
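
Second sketch. The closing point under Undermining Responsibility, that a machine should recognize its own limitations and alert a human, is often implemented as a confidence-gated hand-off. The snippet below is an illustrative assumption rather than a method described in the source: the system acts autonomously only when its confidence exceeds a chosen threshold and otherwise defers the decision to a human operator.

    # Hypothetical sketch: act autonomously only in high-confidence situations,
    # otherwise escalate to a human instead of guessing.
    CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off; in practice this needs tuning

    def handle_situation(situation_id: str, action: str, confidence: float) -> str:
        """Execute the action autonomously or alert a human operator."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return f"{situation_id}: executing '{action}' autonomously"
        # Situation exceeds the system's training/programming: defer to a human.
        return f"{situation_id}: confidence {confidence:.2f} too low, alerting human operator"

    print(handle_situation("routine-042", "continue route", confidence=0.97))
    print(handle_situation("unusual-007", "continue route", confidence=0.41))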