## Abstract

For many types of machine learning algorithms, one can compute the statistically “optimal” way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.

## Annotations

The abstract is complete and informative. The authors present the statistically optimal way to do active learning (essentially EVSI, although they don't call it that), show that it is computationally difficult for ANNs, and then show that it can be done more simply for other models (mixtures of Gaussians and locally weighted regression).

One difference between their formulation and EVSI is that they leave the utility function out and replace it with the expected error rate, which is valid only IF minimizing expected error really is your utility (in other words, their utility was implied).
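To make the implied criterion concrete, here is a minimal sketch of variance-minimizing query selection for plain linear regression (a simpler stand-in for their mixture-of-Gaussians and locally weighted regression derivations; all names and the candidate/reference grids are hypothetical). For linear regression the predictive variance at a point depends only on the inputs, so a candidate query can be scored without knowing its label:

```python
import numpy as np

# Reference points over which we average the predictive variance.
X_ref = np.linspace(-1, 1, 50).reshape(-1, 1)

def design(X):
    """Add an intercept column: features [1, x]."""
    return np.hstack([np.ones((len(X), 1)), X])

def avg_predictive_variance(XtX, X_ref):
    """Mean of x^T (X^T X)^{-1} x over the reference set
    (the predictive variance up to the constant noise factor sigma^2)."""
    A = design(X_ref)
    XtX_inv = np.linalg.inv(XtX)
    # diag(A @ XtX_inv @ A.T), computed row-wise.
    return np.mean(np.einsum('ij,jk,ik->i', A, XtX_inv, A))

# Current training inputs and the pool of candidate queries.
X_train = np.array([[-0.9], [0.8]])
candidates = np.linspace(-1, 1, 21).reshape(-1, 1)

XtX = design(X_train).T @ design(X_train)

# Score each candidate by the average variance that would remain
# after adding it to the training set.
scores = []
for x in candidates:
    a = design(x.reshape(1, -1))
    scores.append(avg_predictive_variance(XtX + a.T @ a, X_ref))

best = candidates[int(np.argmin(scores))]
print(best)
```

The "utility was implied" point shows up here directly: the score is just expected predictive variance, so the selection is optimal only under the assumption that squared prediction error is the quantity you actually care about.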

This paper is right on point for EVSI and should be cited, but it has no NLP examples. Their example problem predicts the end location of a robotic arm given a set of joint angles (a rather interesting control problem, perhaps one we should explore in the future, and perhaps one that a CMAC could tackle).

## BibTeX entry

@inproceedings{cohn95active,
  author    = "David A. Cohn and Zoubin Ghahramani and Michael I. Jordan",
  title     = "Active Learning with Statistical Models",
  booktitle = "Advances in Neural Information Processing Systems",
  volume    = "7",
  publisher = "The {MIT} Press",
  editor    = "G. Tesauro and D. Touretzky and T. Leen",
  pages     = "705--712",
  year      = "1995",
  url       = "citeseer.ist.psu.edu/article/cohn96active.html"
}