Abstract
Recent results concerning the instability of Bayes factor search over Bayesian networks (BNs) lead us to ask whether learning the parameters of a selected BN might also depend heavily on the often rather arbitrary choice of prior density. Robustness of inferences to misspecification of the prior density would at least ensure that a selected candidate model gives similar predictions of future data points under somewhat different priors, given a large training data set. In this paper we derive new explicit total variation bounds on the calculated posterior density as a function of the closeness of the genuine prior to the approximating one used and of certain summary statistics of the calculated posterior density. We show that the approximating posterior density often converges to the genuine one as the number of sample points increases, and our bounds allow us to identify when it might not. To prove our general results we develop a new family of distance measures called local DeRobertis distances. These provide coarse non-parametric neighbourhoods and allow us to derive elegant explicit posterior bounds in total variation. The bounds can be routinely calculated for BNs even when the sample has systematically missing observations and no conjugate analyses are possible.
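For intuition only, the sketch below (not the paper's analytical bounds) numerically compares posteriors for a simple conjugate Beta-Binomial model under two nearby priors and estimates the total variation distance between them as the sample size grows. The choice of priors, the data-generating probability, and the grid-based integration are illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta

# Illustrative sketch: posteriors under a "genuine" and an "approximating"
# Beta prior for binomial data, with a numerical estimate of the total
# variation distance between them. All parameter values are assumptions
# chosen for illustration; this is not the paper's explicit bound.

rng = np.random.default_rng(0)
true_p = 0.3
prior_a = (2.0, 2.0)   # "genuine" prior: Beta(2, 2)      (assumed)
prior_b = (2.5, 1.5)   # "approximating" prior: Beta(2.5, 1.5)  (assumed)

grid = np.linspace(1e-6, 1 - 1e-6, 20001)
step = grid[1] - grid[0]

for n in (10, 100, 1000, 10000):
    successes = rng.binomial(n, true_p)
    # Conjugate posteriors: Beta(a + successes, b + failures)
    post_a = beta.pdf(grid, prior_a[0] + successes, prior_a[1] + n - successes)
    post_b = beta.pdf(grid, prior_b[0] + successes, prior_b[1] + n - successes)
    # Total variation distance = 0.5 * integral of |f - g|, approximated on the grid
    tv = 0.5 * np.sum(np.abs(post_a - post_b)) * step
    print(f"n = {n:6d}: estimated TV distance between posteriors = {tv:.4f}")
```

In this toy setting the two posteriors typically draw together as the sample size grows, which is the kind of behaviour the paper's explicit bounds quantify, and whose failure modes they help identify.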
| Original language | English |
| --- | --- |
| Pages (from-to) | 558-572 |
| Number of pages | 15 |
| Journal | International Journal of Approximate Reasoning |
| Volume | 51 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - 2010 |
Keywords
- Bayesian network
- Bayesian robustness
- Isoseparation property
- Local DeRobertis distance
- Total variation distance