Empirical Inference

Pruning from Adaptive Regularization

1994

Article


Inspired by the recent upsurge of interest in Bayesian methods, we consider adaptive regularization. A generalization-based scheme for adaptation of regularization parameters is introduced and compared to Bayesian regularization. We show that pruning arises naturally within both adaptive regularization schemes. As a model example we have chosen the simplest possible: estimating the mean of a random variable with known variance. Marked similarities are found between the two methods in that they both involve a "noise limit", below which they regularize with infinite weight decay, i.e., they prune. However, pruning is not always beneficial. We show explicitly that both methods may in some cases increase the generalization error. This corresponds to situations where the underlying assumptions of the regularizer are poorly matched to the environment.
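The model example in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact derivation: for a sample mean estimated from n observations with known noise variance, an adaptive weight-decay rule shrinks the estimate toward zero, and below a "noise limit" of noise_var / n the effective weight decay becomes infinite, i.e., the estimate is pruned to exactly zero. The function name and the specific shrinkage factor are assumptions made for the sketch.

```python
import numpy as np

def shrunken_mean(x, noise_var):
    """Estimate the mean of x with a shrinkage/pruning rule.

    Hedged sketch: if the squared sample mean falls below the
    "noise limit" noise_var / n, the estimate is pruned to zero
    (infinite effective weight decay); otherwise it is shrunk
    toward zero by an adapted factor. The exact factor used in
    the paper may differ; this only illustrates the mechanism.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()
    noise_limit = noise_var / n
    if m**2 <= noise_limit:
        return 0.0  # prune: signal indistinguishable from noise
    # shrinkage: reduce the estimate in proportion to the noise limit
    return m * (1.0 - noise_limit / m**2)
```

With a strong signal the estimate stays close to the sample mean; with a sample mean inside the noise limit the estimate is pruned to zero, mirroring the pruning behavior the abstract describes.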

Author(s): Hansen, LK. and Rasmussen, CE.
Journal: Neural Computation
Volume: 6
Number (issue): 6
Pages: 1222-1231
Year: 1994

Department(s): Empirical Inference
Bibtex Type: Article (article)

Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTex

@article{2313,
  title = {Pruning from Adaptive Regularization},
  author = {Hansen, LK. and Rasmussen, CE.},
  journal = {Neural Computation},
  volume = {6},
  number = {6},
  pages = {1222-1231},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  year = {1994}
}