Show full item record

dc.contributor.author: Negri, Gianpiero
dc.date.accessioned: 2023-03-13T12:12:40Z
dc.date.available: 2023-03-13T12:12:40Z
dc.date.issued: 2021-03-20
dc.identifier.uri: http://elea.unisa.it:8080/xmlui/handle/10556/6463
dc.identifier.uri: http://dx.doi.org/10.14273/unisa-4535
dc.description: 2019 - 2020 [it_IT]
dc.description.abstract: One of the major technological and scientific challenges in developing autonomous machines and robots is ensuring their ethical and safe behaviour towards human beings. Since no human operator is present when dealing with autonomous machines, the overall risk complexity must be handled by the machine's artificial intelligence and decision-making systems, which must be conceived and designed to ensure safe and ethical behaviour. In this work a possible approach to the development of decision-making systems for autonomous machines is proposed, based on the definition of general ethical criteria and principles. These principles concern the need to avoid or minimize harm to human beings during the execution of the task the machine has been designed for. Within this scope, four fundamental problems can be introduced:
1. First Problem: Machine Ethics Principles or Laws Identification
2. Second Problem: Incorporating Ethics in the Machine
3. Third Problem: Human-Machine Interaction Degree Definition
4. Fourth Problem: Machine Misdirection Avoidance
This Ph.D. research activity has been mainly focused on the First and Second Problems, with specific reference to safety aspects. Regarding the First Problem, the main scope of this work is to ensure that an autonomous machine acts in a safe way, that is:
• No harm is caused to surrounding human beings (non-maleficence ethical principle);
• If a human being approaches a potential source of harm, the machine must act so as to minimize that harm with the best possible and available action (non-inaction ethical principle);
and, when possible and not in conflict with the above principles:
• The machine must act so as to preserve its own integrity (self-preservation).
Concerning the Second Problem, a simplified version of the ethical principles reported above has been used to build a mathematical model of a safe decision system based on a game-theoretical approach. When dealing only with safety rather than with general ethics, some well-defined criteria can be adopted to ensure that the machine's behaviour causes no harm to human beings, such as:
• Always ensure the machine keeps a proper safety distance at a given operating velocity;
• Always ensure that, within a certain range, the machine can detect the distance between a human being and the location of a potential harm. [edited by Author] [it_IT]
dc.language.iso: en [it_IT]
dc.publisher: Universita degli studi di Salerno [it_IT]
dc.subject: Robotica [it_IT]
dc.subject: Intelligenza artificiale [it_IT]
dc.subject: Teoria dei giochi [it_IT]
dc.title: A game theoretical approach to safe decision making system development for autonomous machines [it_IT]
dc.type: Doctoral Thesis [it_IT]
dc.subject.miur: MAT/07 FISICA MATEMATICA [it_IT]
dc.contributor.coordinatore: Attanasio, Carmine [it_IT]
dc.description.ciclo: XXXIII [it_IT]
dc.contributor.tutor: Tibullo, Vincenzo [it_IT]
dc.identifier.Dipartimento: Matematica [it_IT]
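
The safe-decision criteria summarized in the abstract (keep a safety distance at the current operating velocity, detect the human-hazard distance, and rank actions by non-maleficence, then non-inaction, then self-preservation) can be sketched as a minimal lexicographic action selection. This is an illustrative sketch only, not the thesis's actual game-theoretical model; all names, thresholds, and numbers below are assumptions:

```python
# Illustrative sketch (not the author's model): pick the machine's action by
# lexicographic priorities: (1) non-maleficence: prefer actions that keep the
# safety distance for the current velocity; (2) non-inaction: minimize the
# expected harm to nearby humans; (3) self-preservation: minimize self-damage.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_distance: float   # closest human-machine distance this action yields (m)
    expected_harm: float    # estimated harm to humans (0 = none)
    self_damage: float      # estimated damage to the machine itself

def safety_distance(velocity: float, reaction_time: float = 0.5) -> float:
    """Assumed rule of thumb: the required margin grows with operating velocity."""
    return 1.0 + reaction_time * velocity

def choose_action(actions, velocity):
    d_min = safety_distance(velocity)
    # Non-maleficence: restrict to actions respecting the safety distance,
    # falling back to all actions if none qualifies.
    safe = [a for a in actions if a.human_distance >= d_min] or list(actions)
    # Non-inaction first, self-preservation second, as a lexicographic key.
    return min(safe, key=lambda a: (a.expected_harm, a.self_damage))

candidates = [
    Action("continue task", human_distance=0.8, expected_harm=0.9, self_damage=0.0),
    Action("emergency stop", human_distance=2.5, expected_harm=0.1, self_damage=0.3),
    Action("swerve into wall", human_distance=3.0, expected_harm=0.0, self_damage=0.8),
]
best = choose_action(candidates, velocity=1.0)  # safety distance = 1.5 m
```

Note how the ordering encodes the principles: "swerve into wall" is selected even though it damages the machine, because self-preservation only breaks ties after human harm has been minimized among the actions that respect the safety distance.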


