
Automatic Hyperparameter Search

To derive a search algorithm for the maximizer of the evidence, we use the derivatives of the evidence with respect to the hyperparameters. These derivatives are difficult to obtain, however, because of the Hessian matrix appearing in the evidence. This matrix is the sum of the negative Hessians of the log likelihood and the log prior, and the negative Hessian of the log likelihood in particular is complicated and difficult to differentiate. Here, we substitute the expectation of the complete-data log likelihood used in the EM algorithm for the genuine log likelihood. Furthermore, by taking the expectation of the approximated Hessian over the data distribution, we obtain a simple formula.

The detailed derivation of this formula is given in [1]. Setting the derivatives of the approximated log evidence with respect to the hyperparameters to zero yields recursive update rules for the hyperparameters:
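The paper's own displayed update rules are not reproduced in this text. Purely as an illustration of the generic fixed-point form that evidence-based rules of this kind take under a Gaussian approximation (MacKay's evidence framework; the symbols \(\alpha\), \(\beta\), \(\lambda_i\), and \(\gamma\) below are illustrative, not the paper's notation):

```latex
\alpha^{\mathrm{new}} = \frac{\gamma}{\|\mathbf{w}\|^{2}}, \qquad
\beta^{\mathrm{new}} = \frac{N-\gamma}{\sum_{n}\|\mathbf{x}_{n}-\hat{\mathbf{x}}_{n}\|^{2}}, \qquad
\gamma = \sum_{i}\frac{\lambda_{i}}{\lambda_{i}+\alpha}
```

Here \(\lambda_i\) would be the eigenvalues of the \(\beta\)-scaled data-term Hessian, and \(\gamma\) acts as an effective number of well-determined parameters, analogous to the effective number of centroids used in the paper's rules.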


The quantity appearing in these rules is the effective number of the centroids. By alternately iterating this hyperparameter update and a MAP estimation algorithm for the centroid parameters, we obtain all of the estimates.
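As a concrete sketch of this alternating scheme, the toy program below interleaves a MAP step for centroids with an evidence-style hyperparameter update. The model, responsibilities, and update formulas are stand-ins (a simple isotropic Gaussian mixture with a MacKay-style fixed-point update, using hypothetical hyperparameters alpha for the prior precision and beta for the noise precision), not the paper's own; only the alternating structure, hyperparameter update interleaved with MAP centroid estimation, mirrors the text.

```python
import numpy as np

# NOTE: illustrative stand-in model, not the paper's exact formulation.
# Observations scatter around K centroids with noise precision beta; a
# zero-mean Gaussian prior with precision alpha regularizes the centroids.

rng = np.random.default_rng(0)

def responsibilities(X, W, beta):
    """E-step: soft assignment of points to centroids."""
    d2 = np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2)
    logp = -0.5 * beta * d2
    logp -= logp.max(axis=1, keepdims=True)   # for numerical stability
    p = np.exp(logp)
    return p / p.sum(axis=1, keepdims=True)

def map_centroids(X, R, alpha, beta):
    """MAP estimate of centroids given responsibilities R (ridge-like form)."""
    Nk = R.sum(axis=0)                        # effective counts per centroid
    return (beta * R.T @ X) / (beta * Nk + alpha)[:, None]

def update_hyperparameters(X, R, W, alpha):
    """Evidence-style fixed-point updates (MacKay form; an assumption here)."""
    N, d = X.shape
    lam = alpha + 0 * R.sum(axis=0)           # placeholder shape helper
    lam = (R.sum(axis=0))                     # data-term strength per centroid
    resid = np.sum(R * np.sum((X[:, None, :] - W[None, :, :]) ** 2, axis=2))
    beta_prop = N * d / resid                 # provisional noise precision
    eig = beta_prop * lam                     # eigenvalue surrogate per centroid
    gamma = d * np.sum(eig / (eig + alpha))   # effective number of parameters
    alpha_new = gamma / np.sum(W ** 2)
    beta_new = (N * d - gamma) / resid
    return alpha_new, beta_new

# Toy data: two well-separated clusters.
X = np.vstack([rng.normal(-2, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
W = rng.normal(0, 1, (2, 2))                  # initial centroids
alpha, beta = 1.0, 1.0
for _ in range(50):                           # alternate MAP step and updates
    R = responsibilities(X, W, beta)
    W = map_centroids(X, R, alpha, beta)
    alpha, beta = update_hyperparameters(X, R, W, alpha)

print(np.round(np.sort(W[:, 0]), 1))          # centroids settle near the clusters
```

The design point this illustrates is the one in the text: neither the hyperparameters nor the centroids are optimized jointly in one shot; each update treats the other's current value as fixed, and the iteration converges to a self-consistent pair of estimates.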

Utsugi Akio
Wed Nov 27 14:16:58 JST 1996