Connectionist learning of belief networks
References (22)
- Ackley, D.H., Hinton, G.E., Sejnowski, T.J. (1985). A learning algorithm for Boltzmann machines. Cogn. Sci.
- Pearl, J. (1987). Evidential reasoning using stochastic simulation of causal models. Artif. Intell.
- Richard, M.D., Lippmann, R.P. (1991). Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. Neural Comput.
- Cheeseman, P., et al. (1988). AutoClass: a Bayesian classification system.
- Dempster, A.P., Laird, N.M., Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. Roy. Stat. Soc. B.
- Variations on the Boltzmann machine learning algorithm.
- Gelfand, A.E., Smith, A.F.M. (1990). Sampling-based approaches to calculating marginal densities. J. Am. Stat. Assoc.
- Henrion, M. (1990). Towards efficient probabilistic diagnosis in multiply connected belief networks.
- Hinton, G.E., Sejnowski, T.J. (1986). Learning and relearning in Boltzmann machines. In: Parallel Distributed Processing, Vol. 1.
- Lauritzen, S.L., Spiegelhalter, D.J. (1988). Local computations with probabilities on graphical structures and their application to expert systems (with discussion). J. Roy. Stat. Soc. B.
- Levinson, S.E., Rabiner, L.R., Sondhi, M.M. (1983). An introduction to the application of the theory of probabilistic functions of a Markov process to automatic speech recognition. Bell Syst. Tech. J.
Cited by (416)
- Using a Bayesian Belief Network to detect healthcare fraud. Expert Systems with Applications, 2024.
- Structural learning of mixed noisy-OR Bayesian networks. International Journal of Approximate Reasoning, 2023.
- Predicting mechanical properties of silk from its amino acid sequences via machine learning. Journal of the Mechanical Behavior of Biomedical Materials, 2023.
- Knowledge Discovery: Methods from data mining and machine learning. Social Science Research, 2023.
- A recurrent neural network model for predicting two-leader car-following behavior. Transportation Letters, 2023.
- Natural Reweighted Wake–Sleep. Neural Networks, 2022. Citation excerpt: "Variational AutoEncoders (VAEs) (Kingma and Welling, 2014; Rezende, Mohamed, and Wierstra, 2014) introduce an approximate posterior distribution over the latent variables, which is then sampled, resulting in stochastic networks. In addition, Helmholtz Machines (HMs) (Dayan, Hinton, Neal, and Zemel, 1995) consist of a recognition network and a generative network, both modeled as Sigmoid Belief Networks (SBNs) (Neal, 1992), which use discrete hidden variables, unlike standard VAEs, which commonly adopt continuous Gaussian variables only in the bottleneck layer. The training of stochastic networks is a challenging task in deep learning (Glorot & Bengio, 2010a)."
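The excerpt above contrasts the discrete hidden variables of a sigmoid belief network with the continuous latent variables of a VAE. A minimal sketch of ancestral sampling in such a network follows, in the style of Neal (1992): each binary unit is sampled with probability given by a logistic sigmoid of a weighted sum of its parents. The layer sizes and random weights here are hypothetical, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sbn_layer(parents, W, b, rng):
    """Sample binary units given their parents: p(s_i = 1) = sigmoid(W @ parents + b)."""
    p = 1.0 / (1.0 + np.exp(-(W @ parents + b)))          # logistic sigmoid activation
    return (rng.random(p.shape) < p).astype(np.int8)       # Bernoulli draw per unit

# Hypothetical network: 3 top-level units -> 5 hidden units -> 8 visible units.
top = (rng.random(3) < 0.5).astype(np.int8)                # top layer: independent fair coins
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)              # illustrative random parameters
W2, b2 = rng.normal(size=(8, 5)), np.zeros(8)

hidden = sample_sbn_layer(top, W1, b1, rng)                # sample layers in topological order
visible = sample_sbn_layer(hidden, W2, b2, rng)
```

Sampling proceeds top-down in topological order, so a full configuration is generated in one pass; the hard part, which the paper addresses, is inference and learning given only the visible units.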
Copyright © 1992 Published by Elsevier B.V.