
    Exploring and Analyzing Machine Commonsense Benchmarks

    Author
    Santos, Henrique; Gordon, Minor; Liang, Jason; Forbush, Gretchen; McGuinness, Deborah
    Date Issued
    2021-02-01
    Subject
    Machine Common Sense (MCS) Multi-modal Open World Grounded Learning and Inference (MOWGLI)
    URI
    https://arxiv.org/abs/2012.11634
    Abstract
Commonsense question-answering (QA) tasks, in the form of benchmarks, are constantly being introduced to challenge and compare commonsense QA systems. The benchmarks provide question sets that systems' developers can use to train and test new models before submitting their implementations to official leaderboards. Although these tasks are created to evaluate systems along identified dimensions (e.g., topic, reasoning type), this metadata is limited, largely presented in an unstructured format, or absent entirely. Because machine common sense is a fast-paced field, the problem of fully assessing current benchmarks and systems with regard to these evaluation dimensions is aggravated. We argue that the lack of a common vocabulary for aligning these approaches' metadata limits researchers both in understanding systems' deficiencies and in making effective choices for future tasks. In this paper, we first discuss the machine common sense (MCS) ecosystem in terms of its elements and their metadata. We then present how we are supporting the assessment of approaches by initially focusing on commonsense benchmarks. We describe our initial MCS Benchmark Ontology, an extensible common vocabulary that formalizes benchmark metadata, and showcase how it supports the development of a Benchmark tool that enables benchmark exploration and analysis.
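The abstract's core idea is that benchmark metadata (topic, reasoning type, and similar evaluation dimensions) should be stated in a shared, machine-readable vocabulary rather than in free text. The paper itself defines the actual MCS Benchmark Ontology; as a loose illustration of the general approach only, the Python sketch below uses rdflib to express benchmark metadata as RDF triples and query over it. The namespace, class, and property names here (the MCSB namespace, Benchmark, reasoningType, topic, AbductiveReasoning) are hypothetical placeholders, not terms from the published ontology.

```python
# Minimal sketch of describing a commonsense benchmark with a shared RDF
# vocabulary. ALL ontology terms below (the MCSB namespace and its Benchmark
# class, reasoningType and topic properties) are hypothetical placeholders;
# the published MCS Benchmark Ontology defines its own terms.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

MCSB = Namespace("http://example.org/mcs-benchmark#")  # placeholder namespace

g = Graph()
g.bind("mcsb", MCSB)

bench = URIRef("http://example.org/benchmarks/sample-qa")
g.add((bench, RDF.type, MCSB.Benchmark))
g.add((bench, RDFS.label, Literal("Sample commonsense QA benchmark")))
g.add((bench, MCSB.reasoningType, MCSB.AbductiveReasoning))
g.add((bench, MCSB.topic, Literal("everyday physical situations")))

# Once metadata is structured, a tool can compare benchmarks along a
# dimension mechanically, e.g. find all benchmarks with a given reasoning type:
qres = g.query(
    """SELECT ?b WHERE {
           ?b a mcsb:Benchmark ;
              mcsb:reasoningType mcsb:AbductiveReasoning .
       }"""
)
for row in qres:
    print(row.b)
```

The point of the sketch is only that once each benchmark's evaluation dimensions are formalized as structured statements, exploration and analysis across benchmarks reduces to querying, which is the capability the abstract attributes to the Benchmark tool.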
    Relationships
    https://tw.rpi.edu/project/machine-common-sense-mcs-multi-modal-open-world-grounded-learning-and-inference-mowgli
    Collections
    • Tetherless World Publications
