
dc.rights.license: Restricted to current Rensselaer faculty, staff and students in accordance with the Rensselaer Standard license. Access inquiries may be directed to the Rensselaer Libraries.
dc.contributor: Nirenburg, Sergei
dc.contributor: Varela, Carlos
dc.contributor: Bello, Paul
dc.contributor: Sundar Govindarajulu, Naveen
dc.contributor.advisor: Bringsjord, Selmer
dc.contributor.author: Giancola, Michael
dc.date.accessioned: 2023-06-01T19:13:13Z
dc.date.available: 2023-06-01T19:13:13Z
dc.date.issued: 2023-05
dc.identifier.uri: https://hdl.handle.net/20.500.13015/6632
dc.description: May2023
dc.description: School of Science
dc.description.abstract: Human beings routinely encounter situations containing informal, non-quantitative uncertainty. Consider, for example, the following scenario: driving toward a four-way intersection, you stop at a red light. Eventually, the light turns green, but you perceive a driver approaching from your left, their light having turned red moments ago, and subsequently perceive their car accelerate. What can we say about this situation? It certainly seems likely that the driver will drive straight through the light. Of course, it's entirely possible that the driver will change their trajectory at the last second and slam on the brakes. How can we quantify this uncertainty (assuming that is what we desire)? We could compute a probability over all recorded instances of drivers accelerating toward red lights and either going through or stopping. But clearly humans don't engage in anything like this computation when they reason about other drivers on the road. We use likelihoods to express qualities (as opposed to quantities, e.g., probabilities) of the uncertainty of beliefs. In this way, one may reason that "I believe it's highly likely that the driver will drive through the red light" and subsequently conclude that, despite having the legal right-of-way, one should wait to avoid an accident. Autonomous agents, in order to interact effectively with humans who reason this way, will need to possess and exploit the ability to model reasoning with notions of qualitative uncertainty. The present dissertation introduces Cognitive Likelihood, a framework for reasoning with uncertain beliefs. The framework is implemented within a novel logic -- the Inductive Deontic Cognitive Event Calculus (IDCEC) -- which includes a formal grammar and semantics that dictate how agents can reason within the framework. These formalisms are implemented in an automated reasoner called ShadowAdjudicator in order to enable the automatic generation of IDCEC proofs. We present the novel algorithm underlying ShadowAdjudicator which enables this automated proof discovery. Finally, we demonstrate how these contributions can be utilized to solve autonomous driving problems and to adjudicate arguments regarding a notorious probability puzzle, the Monty Hall Problem.
dc.language: ENG
dc.language.iso: en_US
dc.publisher: Rensselaer Polytechnic Institute, Troy, NY
dc.relation.ispartof: Rensselaer Theses and Dissertations Online Collection
dc.subject: Computer science
dc.title: Reasoning with cognitive likelihood for artificially-intelligent agents: formalization & implementation
dc.type: Electronic thesis
dc.type: Thesis
dc.date.updated: 2023-06-01T19:13:15Z
dc.rights.holder: This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute (RPI), Troy, NY. Copyright of original work retained by author.
dc.creator.identifier: https://orcid.org/0000-0002-7194-5082
dc.description.degree: PhD
dc.relation.department: Dept. of Computer Science

