Description | There is much discussion about how an AI may comply with or adhere to legal and ethical principles as found in applicable laws, regulations, ethics, and value systems. This might be done by an external monitor (e.g., a judge) of an AI, with the aim of ruling out bad behaviour entirely or of penalising the operators of the AI. Alternatively, and the focus of this project, research has been growing on how an AI might have values and norms built into its operating system. Key to the analysis is that while bad behaviour might occur, it can be reasoned about and is sometimes justifiable. The research consists of: (1) reviewing, analysing, critiquing, and summarising recent work on Computational Value Engineering and Computational Machine Ethics, and (2) implementing a specific computational model in (one of) Prolog, Python, or Haskell.
Among the topic areas are:
Value and norm representation
Value and norm learning
Value and norm agreement
Value and norm conflict resolution
Value-driven argumentation and negotiation
Value-driven decision making
Value-driven system design
Value-alignment
Value-driven explainability
Legal questions in value and norm enforcement |
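To give a flavour of what a computational model in these topic areas might look like, the following is a minimal, illustrative sketch in Python (one of the languages named above) of norm representation and priority-based conflict resolution. The `Norm` type, the `evaluate` function, and the example norms are all hypothetical constructions for this sketch, not part of any specific model from the literature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Norm:
    """A norm prescribes or prohibits an action; priority is used for conflict resolution."""
    action: str
    status: str    # "obliged", "forbidden", or "permitted"
    priority: int  # a higher-priority norm overrides a lower one (lex superior)

def evaluate(action: str, norms: list[Norm]) -> str:
    """Return the deontic status of an action, resolving norm conflicts by priority."""
    applicable = [n for n in norms if n.action == action]
    if not applicable:
        return "permitted"  # default assumption: what is not regulated is permitted
    # When norms conflict, the highest-priority applicable norm prevails.
    return max(applicable, key=lambda n: n.priority).status

norms = [
    Norm("disclose_data", "forbidden", priority=1),  # general privacy norm
    Norm("disclose_data", "obliged", priority=2),    # e.g. a court order, which overrides it
]

print(evaluate("disclose_data", norms))  # the court order wins: "obliged"
print(evaluate("greet_user", norms))     # unregulated action: "permitted"
```

This captures, in miniature, the idea that bad behaviour (here, disclosing data against a privacy norm) can be reasoned about and sometimes justified by a higher-ranking norm; a fuller model would add contexts, values, and argumentation over which norm applies.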