Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1043-1052, 2015.
Abstract
A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision. In this work, we introduce a new approach, reified context models, to reconcile this tension. Specifically, we let the choice of factors in a graphical model (the contexts) be random variables inside the model itself. In this sense, the contexts are reified and can be chosen in a data-dependent way. Empirically, we show that our approach obtains expressivity and coverage on three sequence modeling tasks.
@InProceedings{pmlr-v37-steinhardta15,
title = {Reified Context Models},
author = {Jacob Steinhardt and Percy Liang},
booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
pages = {1043--1052},
year = {2015},
editor = {Francis Bach and David Blei},
volume = {37},
series = {Proceedings of Machine Learning Research},
address = {Lille, France},
month = {07--09 Jul},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v37/steinhardta15.pdf},
url = {http://proceedings.mlr.press/v37/steinhardta15.html},
abstract = {A classic tension exists between exact inference in a simple model and approximate inference in a complex model. The latter offers expressivity and thus accuracy, but the former provides coverage of the space, an important property for confidence estimation and learning with indirect supervision. In this work, we introduce a new approach, reified context models, to reconcile this tension. Specifically, we let the choice of factors in a graphical model (the contexts) be random variables inside the model itself. In this sense, the contexts are reified and can be chosen in a data-dependent way. Empirically, we show that our approach obtains expressivity and coverage on three sequence modeling tasks.}
}
Steinhardt, J. & Liang, P. (2015). Reified Context Models. Proceedings of the 32nd International Conference on Machine Learning, in PMLR 37:1043-1052.