Capturing Row and Column Semantics in Transformer Based Question Answering over Tables

Glass, Michael
Canim, Mustafa
Gliozzo, Alfio
Chemmengath, Saneem
Kumar, Vishwajeet
Chakravarti, Rishav
Sil, Avi
Pan, FeiFei
Bharadwaj, Samarth
Fauceglia, Nicolas Rodolfo
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
Full Citation
Glass, Michael, Mustafa Canim, Alfio Gliozzo, Saneem Chemmengath, Vishwajeet Kumar, Rishav Chakravarti, Avi Sil, Feifei Pan, Samarth Bharadwaj, and Nicolas Rodolfo Fauceglia. "Capturing Row and Column Semantics in Transformer Based Question Answering over Tables." arXiv preprint arXiv:2104.08303. April 2021.
Transformer-based architectures have recently been used for the task of answering questions over tables. To improve accuracy on this task, specialized pre-training techniques have been developed and applied to millions of open-domain web tables. In this paper, we propose two novel approaches demonstrating that one can achieve superior performance on the table QA task without using any of these specialized pre-training techniques. The first model, called RCI interaction, leverages a transformer-based architecture that independently classifies rows and columns to identify relevant cells. While this model yields extremely high accuracy at finding cell values on recent benchmarks, a second model we propose, called RCI representation, provides a significant efficiency advantage for online QA systems over tables by materializing embeddings for existing tables. Experiments on recent benchmarks show that the proposed methods can effectively locate cell values in tables (up to ~98% Hit@1 accuracy on WikiSQL lookup questions). Moreover, the interaction model outperforms the state-of-the-art transformer-based approaches pre-trained on very large table corpora (TAPAS and TaBERT), achieving ~3.4% and ~18.86% additional precision improvement, respectively, on the standard WikiSQL benchmark.
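The core idea of the RCI interaction model is that a cell is relevant when both its row and its column are classified as relevant by independent transformer classifiers. As a minimal illustrative sketch (not the paper's implementation), assuming the row and column classifiers have already produced per-row and per-column confidence scores (e.g. log-probabilities), cells can be ranked by summing the two scores:

```python
def rank_cells(row_scores, col_scores):
    """Combine independent row and column confidence scores into per-cell
    scores. Each cell (i, j) is scored as row_scores[i] + col_scores[j],
    an additive aggregation used here purely for illustration; the actual
    scores would come from the transformer-based row/column classifiers.
    Returns a list of ((row, col), score) pairs, best cell first.
    """
    cells = [
        ((i, j), r + c)
        for i, r in enumerate(row_scores)
        for j, c in enumerate(col_scores)
    ]
    cells.sort(key=lambda item: -item[1])
    return cells

# Example: a 3-row x 2-column table; hypothetical classifier scores.
ranking = rank_cells([0.1, 2.3, -0.5], [1.0, -0.2])
top_cell, top_score = ranking[0]  # Hit@1 prediction: (1, 0)
```

Because rows and columns are scored independently, a table with R rows and C columns requires only R + C classifier calls rather than R * C, while still inducing a full ranking over all cells.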