In this report we describe ProCLAIM, an argument-based model for monitoring agents' decisions in safety-critical environments. The model is intended, on the one hand, to prevent agents from undertaking decisions that do not comply with the established domain guidelines and, on the other hand, to allow agents to argue over their intended decisions, so that they may exceptionally undertake decisions that violate the existing guidelines when the arguments given in support of those decisions are accepted. Furthermore, ProCLAIM defines a Case-Based Reasoning component for revising the guideline knowledge that controls the agents' decisions. Namely, the arguments given by the agents to support their decisions are stored in a case base and eventually reused to revise the Guideline Knowledge, so that it comes to accept new decisions shown to be successful despite violating the guidelines and, at the same time, to reject decisions that, although compliant with these guidelines, have proven unsuccessful. We believe, and aim to show in this report, that ProCLAIM provides a number of interesting theoretical innovations in artificial intelligence, motivated by valuable practical applications.
Citation: Tolchinsky, F., Cortés, C. "ProCLAIM: An argument-based model for monitoring decisions and revising guidelines knowledge in safety-critical environments". 2006.
All rights reserved. This work is protected by the corresponding intellectual and industrial property rights. Without prejudice to any existing legal exemptions, reproduction, distribution, public communication, or transformation of this work is prohibited without permission of the copyright holder. If you wish to make any use of the work not provided for by law, please contact: email@example.com