A Theoretically Grounded Benchmark for Evaluating Machine Commonsense
Author
Santos, Henrique; Shen, Ke; Mulvehill, Alice M.; Razeghi, Yasaman; McGuinness, Deborah L.; Kejriwal, Mayank
Date Issued
2022-03
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
Full Citation
Santos, Henrique, Ke Shen, Alice M. Mulvehill, Yasaman Razeghi, Deborah L. McGuinness, and Mayank Kejriwal. "A Theoretically Grounded Benchmark for Evaluating Machine Commonsense." arXiv preprint arXiv:2203.12184, March 2022.
URI
https://doi.org/10.48550/arXiv.2203.12184; https://arxiv.org/abs/2203.12184; https://hdl.handle.net/20.500.13015/6437
Abstract
Programming machines with commonsense reasoning (CSR) abilities is a longstanding challenge in the Artificial Intelligence community. Current CSR benchmarks use multiple-choice (and, less often, generative) question-answering instances to evaluate machine commonsense. Recent results from transformer-based language representation models suggest that considerable progress has been made on existing benchmarks. However, although dozens of CSR benchmarks currently exist, and their number continues to grow, it is not evident that the full suite of commonsense capabilities has been systematically evaluated. Furthermore, there are doubts about whether language models are 'fitting' to a benchmark dataset's training partition by picking up on subtle, but normatively irrelevant (at least for CSR), statistical features to achieve good performance on the testing partition. To address these challenges, we propose a benchmark called Theoretically-Grounded Commonsense Reasoning (TG-CSR) that is also based on discriminative question answering, but with questions designed to evaluate diverse aspects of commonsense, such as space, time, and world states. TG-CSR is based on a subset of commonsense categories first proposed as a viable theory of commonsense by Gordon and Hobbs. The benchmark is also designed to be few-shot (and in the future, zero-shot), with only a few training and validation examples provided. This report discusses the structure and construction of the benchmark. Preliminary results suggest that the benchmark is challenging even for advanced language representation models designed for discriminative CSR question-answering tasks. Benchmark access and leaderboard: this https URL. Benchmark website: this https URL.
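To make the discriminative, few-shot setup described in the abstract concrete, the sketch below shows how a multiple-choice commonsense question might be represented and scored. The instance fields, category labels, and scoring function here are illustrative assumptions, not the published TG-CSR schema or evaluation code; the actual format is defined on the benchmark website.

```python
# Minimal sketch of a discriminative, few-shot CSR evaluation loop.
# The instance format and field names below are hypothetical illustrations,
# not the actual TG-CSR schema.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class MCQInstance:
    """A multiple-choice commonsense question (hypothetical format)."""
    context: str            # short scenario, e.g. a camping-trip vignette
    question: str           # the commonsense question being asked
    candidates: List[str]   # answer candidates the model must score
    answer_index: int       # index of the correct candidate
    category: str           # e.g. "space", "time", "world states"


def evaluate(instances: List[MCQInstance],
             score_fn: Callable[[str, str, str], float]) -> float:
    """Accuracy of a model that assigns a plausibility score to each
    (context, question, candidate) triple; the highest-scoring candidate
    is taken as the model's prediction."""
    correct = 0
    for inst in instances:
        scores = [score_fn(inst.context, inst.question, c)
                  for c in inst.candidates]
        predicted = max(range(len(scores)), key=scores.__getitem__)
        correct += int(predicted == inst.answer_index)
    return correct / len(instances) if instances else 0.0


if __name__ == "__main__":
    # Toy instance and a trivial keyword-based scorer, for illustration only;
    # a real evaluation would plug in a language model's plausibility score.
    toy = [MCQInstance(
        context="Planning a camping trip in the mountains in winter.",
        question="Which item is most important to pack?",
        candidates=["a warm sleeping bag", "a beach umbrella", "roller skates"],
        answer_index=0,
        category="world states")]
    print(evaluate(toy, lambda ctx, q, cand: float("warm" in cand)))
```

In a few-shot setting of the kind the abstract describes, only a handful of such labeled instances would be available for training or validation, with the bulk of the benchmark reserved for testing.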
Publisher
arXiv