MCQLL Meeting, 10/30 — Dima Bahdanau

This week at MCQLL, Dima Bahdanau presents recent work.

Title: CLOSURE: Assessing Systematic Generalization of CLEVR Models

Abstract: The CLEVR dataset of natural-looking questions about 3D-rendered scenes has recently received much attention from the research community. A number of models have been proposed for this task, many of which achieve very high accuracies of around 97-99%. In this work, we study how systematic the generalization of such models is, that is, to what extent they are capable of handling novel combinations of known linguistic constructs. To this end, we define 7 additional question families that test models’ understanding of similarity-based references (e.g., “the object that has the same size as …”) in novel contexts. Our experiments on the resulting CLOSURE benchmark show that state-of-the-art models often do not exhibit systematicity after being trained on CLEVR. Surprisingly, we find that the explicitly compositional Neural Module Network (NMN) model also generalizes poorly on CLOSURE, even when it has access to the ground-truth programs at test time. We improve the NMN’s systematic generalization by developing a novel Vector-NMN module architecture with vector-shaped inputs and outputs. Lastly, we investigate the extent to which few-shot transfer learning can help models pretrained on CLEVR adapt to CLOSURE. Our few-shot learning experiments contrast the adaptation behavior of models that use intermediate discrete programs with that of end-to-end continuous models.
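
For context, a neural module network answers a question by composing small neural modules along a program derived from the question; the Vector-NMN variant mentioned in the abstract has each module consume and emit fixed-size vectors rather than richer structures. Below is a minimal, hypothetical sketch of one such vector-in/vector-out module in PyTorch. The dimensions, layer choices, and FiLM-style conditioning here are illustrative assumptions for intuition only, not the paper’s actual Vector-NMN architecture.

import torch
import torch.nn as nn

class VectorModule(nn.Module):
    """Toy NMN-style module: fuses the vector outputs of two child
    modules into one vector of the same size, conditioned on a
    per-module parameter vector. Illustrative sketch only."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)   # fuse the two child vectors
        self.film = nn.Linear(dim, 2 * dim)   # (scale, shift) from module params

    def forward(self, left: torch.Tensor, right: torch.Tensor,
                module_params: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.proj(torch.cat([left, right], dim=-1)))
        scale, shift = self.film(module_params).chunk(2, dim=-1)
        return torch.relu(scale * h + shift)  # vector out, same shape as inputs

# Modules of this shape compose along a program tree: each node
# consumes its children's vectors and emits one vector of the same size.
dim = 128
mod = VectorModule(dim)
left, right = torch.randn(1, dim), torch.randn(1, dim)
params = torch.randn(1, dim)
out = mod(left, right, params)  # shape: (1, 128)

Because every module has the same vector-shaped interface, modules can be rearranged into novel program structures without shape mismatches, which is one plausible reason a vector-based design could help with systematic generalization.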

The meeting is Wednesday from 14:30 to 16:00 in Room 117.
