dc.rights.license | Restricted to current Rensselaer faculty, staff and students in accordance with the Rensselaer Standard license. Access inquiries may be directed to the Rensselaer Libraries. | |
dc.contributor | Nirenburg, Sergei | |
dc.contributor | Varela, Carlos | |
dc.contributor | Bello, Paul | |
dc.contributor | Sundar Govindarajulu, Naveen | |
dc.contributor.advisor | Bringsjord, Selmer | |
dc.contributor.author | Giancola, Michael | |
dc.date.accessioned | 2023-06-01T19:13:13Z | |
dc.date.available | 2023-06-01T19:13:13Z | |
dc.date.issued | 2023-05 | |
dc.identifier.uri | https://hdl.handle.net/20.500.13015/6632 | |
dc.description | May 2023 | |
dc.description | School of Science | |
dc.description.abstract | Human beings routinely encounter situations containing informal, non-quantitative uncertainty. Consider, for example, the following scenario: driving toward a four-way intersection, you stop at a red light. Eventually the light turns green, but you perceive a driver approaching from your left, their light having turned red moments ago, and subsequently perceive their car accelerate. What can we say about this situation? It certainly seems likely that the driver will drive straight through the light. Of course, it's entirely possible that the driver will change their trajectory at the last second and slam on the brakes. How can we quantify this uncertainty (assuming that is what we desired)? We could compute a probability over all recorded instances of drivers accelerating toward red lights and either going through or stopping. But clearly humans don't engage in anything like this computation when they reason about other drivers on the road. We use likelihoods to express qualities (as opposed to quantities, e.g. probabilities) of the uncertainty of beliefs. In this way, one may reason that "I believe it's highly likely that the driver will drive through the red light" and subsequently conclude that, despite having the legal right-of-way, one should wait to avoid an accident. In order to interact effectively with humans who reason this way, autonomous agents will need to possess and exploit the ability to model reasoning with notions of qualitative uncertainty. The present dissertation introduces Cognitive Likelihood, a framework for reasoning with uncertain beliefs. The framework is implemented within a novel logic, the Inductive Deontic Cognitive Event Calculus (IDCEC), whose formal grammar and semantics dictate how agents can reason within the framework. These formalisms are implemented in an automated reasoner called ShadowAdjudicator, enabling the automatic generation of IDCEC proofs. We present the novel algorithm underlying ShadowAdjudicator that enables this automated proof discovery. Finally, we demonstrate how these contributions can be used to solve autonomous driving problems and to adjudicate arguments regarding a notorious probability puzzle, the Monty Hall Problem. | |
dc.language | ENG | |
dc.language.iso | en_US | |
dc.publisher | Rensselaer Polytechnic Institute, Troy, NY | |
dc.relation.ispartof | Rensselaer Theses and Dissertations Online Collection | |
dc.subject | Computer science | |
dc.title | Reasoning with cognitive likelihood for artificially-intelligent agents: formalization & implementation | |
dc.type | Electronic thesis | |
dc.type | Thesis | |
dc.date.updated | 2023-06-01T19:13:15Z | |
dc.rights.holder | This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute (RPI), Troy, NY. Copyright of original work retained by author. | |
dc.creator.identifier | https://orcid.org/0000-0002-7194-5082 | |
dc.description.degree | PhD | |
dc.relation.department | Dept. of Computer Science | |