Learning to Generate Textual Data
If you have a question about this talk, please contact Mohammad Taher Pilehvar.
To learn text understanding models with millions of parameters, one needs massive amounts of data. In this work, we argue that generating data can compensate for this need. While defining generic data generators is difficult, we propose to allow generators to be “weakly” specified, in the sense that a set of parameters controls how the data is generated. Consider, for example, generators in which the example templates, grammar, and/or vocabulary are determined by this set of parameters. Instead of manually tuning these parameters, we learn them from the limited training data at our disposal. To achieve this, we derive an efficient algorithm called GeneRe that jointly estimates the parameters of the model and the undetermined generation parameters. We illustrate its benefits by learning to solve math exam questions using a highly parametrised sequence-to-sequence neural network.
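To make the idea of a “weakly” specified generator concrete, the sketch below is a toy rendering of the setup, not the GeneRe algorithm from the talk: a template-based question generator whose mixture weights are the unknown generation parameters, tuned by a crude search so that a model trained on the synthetic data fits a small real set better. All names (TEMPLATES, generate, train_and_score) are hypothetical, and the memorising “model” stands in for the seq2seq network.

```python
# Illustrative sketch only: NOT the GeneRe joint-estimation algorithm,
# just the idea of tuning a parametrised data generator against
# limited real data. All names here are hypothetical.
import random

TEMPLATES = [
    lambda a, b: (f"What is {a} plus {b}?", a + b),
    lambda a, b: (f"What is {a} times {b}?", a * b),
]

def generate(weights, n):
    """Sample n (question, answer) pairs; `weights` is the generation
    parameter controlling the template mixture."""
    data = []
    for _ in range(n):
        template = random.choices(TEMPLATES, weights=weights)[0]
        data.append(template(random.randint(0, 9), random.randint(0, 9)))
    return data

def train_and_score(synthetic, real):
    """Stand-in for training a model on `synthetic` and evaluating it
    on the scarce `real` set; here the 'model' just memorises pairs."""
    table = dict(synthetic)
    return sum(table.get(q) == a for q, a in real) / len(real)

real_data = generate([0.2, 0.8], 50)   # scarce real data with an unknown mix
weights = [0.5, 0.5]                   # generation parameters to be learned
for step in range(20):
    # Perturb the generation parameters and keep the proposal if the
    # model trained on the resulting synthetic data fits the real data
    # at least as well (a crude search standing in for joint estimation).
    p = min(max(weights[0] + random.uniform(-0.1, 0.1), 0.01), 0.99)
    proposal = [p, 1 - p]
    if train_and_score(generate(proposal, 500), real_data) >= \
       train_and_score(generate(weights, 500), real_data):
        weights = proposal
print("learned template mix:", weights)
```

In the actual work, both the model parameters and the generation parameters are estimated jointly rather than by this kind of hill-climbing, and the model is a sequence-to-sequence network rather than a lookup table.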
This talk is part of the Language Technology Lab Seminars series.