Uniqueness of the level two Bayesian network representing a probability distribution. (English) Zbl 1206.62040

Summary: Bayesian networks are graphical probabilistic models through which we can acquire, capitalize on, and exploit knowledge. Over the last decade they have become an important tool for research and applications in artificial intelligence and many other fields. This paper presents Bayesian networks and discusses the inference problem in such models. It states the inference problem formally and proposes a method for computing probability distributions. It also uses D-separation to simplify the computation of probabilities in Bayesian networks. Given a Bayesian network over a family \(I\) of random variables, this paper presents a result on the computation of the probability distribution of a subset \(S\) of \(I\), using separately a computation algorithm and D-separation properties, and shows the uniqueness of the obtained result.
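The two computation routes mentioned in the summary can be illustrated on a toy example (not taken from the paper; the network, variables, and probability values below are assumptions made purely for illustration). For a chain \(A \to B \to C\), the marginal of the subset \(S = \{C\}\) is obtained by summing the joint factorization over the remaining variables, while D-separation guarantees \(A \perp C \mid B\), so conditioning on \(B\) alone suffices:

```python
# Illustrative sketch only: a three-node chain A -> B -> C with binary
# variables; all probability values are invented for the example.

p_a = {0: 0.6, 1: 0.4}                      # P(A)
p_b_given_a = {0: {0: 0.7, 1: 0.3},         # P(B | A)
               1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.9, 1: 0.1},         # P(C | B)
               1: {0: 0.5, 1: 0.5}}

def joint(a, b, c):
    """P(A=a, B=b, C=c) from the chain factorization P(A)P(B|A)P(C|B)."""
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Route 1 -- direct computation: marginal of S = {C} by summing out A and B.
p_c = {c: sum(joint(a, b, c) for a in (0, 1) for b in (0, 1))
       for c in (0, 1)}

# Route 2 -- D-separation: B blocks the only path between A and C,
# so P(C | A=a, B=b) must equal P(C | B=b) for every a.
def p_c_given_ab(a, b, c):
    den = sum(joint(a, b, cc) for cc in (0, 1))
    return joint(a, b, c) / den

for a in (0, 1):
    for b in (0, 1):
        assert abs(p_c_given_ab(a, b, 1) - p_c_given_b[b][1]) < 1e-12
```

Both routes agree, which is the kind of uniqueness statement the paper establishes in general: the marginal of \(S\) computed by the elimination algorithm coincides with the one obtained via D-separation properties.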

MSC:

62F15 Bayesian inference
05C90 Applications of graph theory
68T05 Learning and adaptive systems in artificial intelligence
65C60 Computational problems in statistics (MSC2010)

Software:

TETRAD
Full Text: DOI EuDML

References:

[1] F. V. Jensen, An Introduction to Bayesian Networks, UCL Press, 1999.
[2] F. V. Jensen, S. L. Lauritzen, and K. G. Olesen, “Bayesian updating in causal probabilistic networks by local computations,” Computational Statistics Quarterly, vol. 5, no. 4, pp. 269-282, 1990. · Zbl 0715.68076
[3] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search, vol. 81 of Lecture Notes in Statistics, Springer, New York, NY, USA, 1993. · Zbl 0806.62001
[4] D. Geiger and J. Pearl, “Axioms and algorithms for inferences involving conditional independence,” Tech. Rep. CSD 890031, R-119-I, Cognitive Systems Laboratory, University of California, Los Angeles, Calif, USA, 1989.
[5] D. Heckerman, “A tutorial on learning with Bayesian networks,” in Learning in Graphical Models, M. Jordan, Ed., MIT Press, Cambridge, Mass, USA, 1999. · Zbl 0921.62029
[6] N. L. Zhang and D. Poole, “A simple approach to Bayesian network computations,” in Proceedings of the Tenth Canadian Conference on Artificial Intelligence, pp. 171-178, 1994.
[7] R. Dechter, “Bucket elimination: a unifying framework for probabilistic inference,” in Uncertainty in Artificial Intelligence, pp. 211-219, Morgan Kaufmann, San Francisco, Calif, USA, 1996.
[8] L. Smail and J. P. Raoult, “Successive restrictions algorithm in Bayesian networks,” in Proceedings of the International Symposium on Intelligent Data Analysis (IDA ’05), A. F. Famili, et al., Ed., vol. 3646 of Lecture Notes in Computer Science, pp. 409-418, Springer, Berlin, Germany, 2005. · Zbl 1165.68431
[9] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter, Probabilistic Networks and Expert Systems, Statistics for Engineering and Information Science, Springer, New York, NY, USA, 1999. · Zbl 0937.68121
[10] R. E. Neapolitan, Probabilistic Reasoning in Expert Systems, A Wiley-Interscience Publication, John Wiley & Sons, New York, NY, USA, 1990.
[11] J. Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, The Morgan Kaufmann Series in Representation and Reasoning, Morgan Kaufmann, San Francisco, Calif, USA, 1988.
[12] P. Hájek, T. Havránek, and R. Jiroušek, Uncertain Information Processing in Expert Systems, CRC Press, Boca Raton, Fla, USA, 1992.
[13] G. Shafer, Probabilistic Expert Systems, vol. 67 of CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, Pa, USA, 1996. · Zbl 0866.68108
[14] L. Smail, “D-separation and level two Bayesian networks,” Artificial Intelligence Review, vol. 31, no. 1-4, 2009.