Dynamic Bayesian network
A dynamic Bayesian network (DBN) is a Bayesian network that relates variables to each other over adjacent time steps. A DBN is often called a two-timeslice Bayesian network (2TBN) because it says that at any point in time T, the value of a variable can be calculated from the internal regressors and the immediate prior value (time T-1). DBNs were developed by Paul Dagum in the early 1990s when he led research funded by two National Science Foundation grants at Stanford University's Section on Medical Informatics.[1][2] Dagum developed DBNs to unify and extend traditional linear state-space models such as Kalman filters, linear and normal forecasting models such as ARMA, and simple dependency models such as hidden Markov models into a general probabilistic representation and inference mechanism for arbitrary nonlinear and non-normal time-dependent domains.[3][4]
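The two-timeslice structure can be illustrated with a small discrete sketch. The Python/NumPy code below is a minimal illustration, not drawn from the cited sources: the variable names X and Y and all probability tables are assumptions chosen for the example. It performs one forward (filtering) step per time slice, propagating the belief over a hidden variable through the inter-slice conditional distribution and then conditioning on the observation generated within the new slice.

```python
import numpy as np

# A minimal discrete two-timeslice DBN (illustrative values only):
# hidden binary variable X_t depends on X_{t-1} (inter-slice arc),
# observed binary variable Y_t depends on X_t within the same slice.

prior_x = np.array([0.6, 0.4])        # P(X_0)
trans = np.array([[0.8, 0.2],         # P(X_t | X_{t-1}); rows index X_{t-1}
                  [0.3, 0.7]])
emit = np.array([[0.9, 0.1],          # P(Y_t | X_t); rows index X_t
                 [0.2, 0.8]])

def filter_step(belief, y):
    """One forward filtering step of the 2TBN: propagate the belief through
    the inter-slice CPD, then condition on the observation in the new slice."""
    predicted = trans.T @ belief       # sum over X_{t-1} of P(X_t|X_{t-1}) * belief
    updated = predicted * emit[:, y]   # multiply by P(y_t | X_t)
    return updated / updated.sum()     # renormalize

belief = prior_x
for y in [0, 0, 1, 1]:                 # a short observation sequence
    belief = filter_step(belief, y)
    print(belief)
```

With a single hidden chain and one observation per slice, as here, the update is exactly the forward algorithm of a hidden Markov model.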
Today, DBNs are common in robotics and have shown potential for a wide range of data mining applications. For example, they have been used in speech recognition, digital forensics, protein sequencing, and bioinformatics. DBNs generalize hidden Markov models and Kalman filters.[5]
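When the inter-slice dependence is linear with Gaussian noise, the same slice-to-slice filtering recursion specializes to the Kalman filter. The sketch below illustrates this under that assumption; the transition and observation matrices are toy placeholder values, not taken from any of the cited sources.

```python
import numpy as np

# A linear-Gaussian DBN reduces to a Kalman filter (toy values only):
# x_t = A x_{t-1} + process noise,  y_t = H x_t + observation noise.

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position/velocity toy model)
H = np.array([[1.0, 0.0]])               # only the position is observed
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.25]])                   # observation-noise covariance

def kalman_step(mean, cov, y):
    """One slice-to-slice update: predict with the inter-slice linear model,
    then correct with the within-slice observation."""
    mean_pred = A @ mean
    cov_pred = A @ cov @ A.T + Q
    S = H @ cov_pred @ H.T + R                    # innovation covariance
    K = cov_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    mean_new = mean_pred + K @ (y - H @ mean_pred)
    cov_new = (np.eye(2) - K @ H) @ cov_pred
    return mean_new, cov_new

mean, cov = np.zeros(2), np.eye(2)
for y in [np.array([0.9]), np.array([2.1]), np.array([2.9])]:
    mean, cov = kalman_step(mean, cov, y)
print(mean)
```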
References
- ↑ Paul Dagum; Adam Galper; Eric Horvitz (1992). "Dynamic Network Models for Forecasting". Proceedings of the Eighth Conference on Uncertainty in Artificial Intelligence (AUAI Press): 41–48.
- ↑ Paul Dagum; Adam Galper; Eric Horvitz; Adam Seiver (1995). "Uncertain Reasoning and Forecasting". International Journal of Forecasting 11(1): 73–87.
- ↑ Paul Dagum; Adam Galper; Eric Horvitz (June 1991). "Temporal Probabilistic Reasoning: Dynamic Network Models for Forecasting". Knowledge Systems Laboratory. Section on Medical Informatics, Stanford University.
- ↑ Paul Dagum; Adam Galper; Eric Horvitz (1993). "Forecasting Sleep Apnea with Dynamic Network Models". Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (AUAI Press): 64–71.
- ↑ Stuart Russell; Peter Norvig (2010). Artificial Intelligence: A Modern Approach (PDF) (Third ed.). Prentice Hall. p. 566. ISBN 978-0136042594. Retrieved 22 October 2014. "dynamic Bayesian networks (which include hidden Markov models and Kalman filters as special cases)"
- Murphy, Kevin (2002). Dynamic Bayesian Networks: Representation, Inference and Learning. UC Berkeley, Computer Science Division.
- Ghahramani, Zoubin (1997). "Learning Dynamic Bayesian Networks". Lecture Notes in Computer Science 1387: 168–197. CiteSeerX 10.1.1.56.7874.
- Friedman, N.; Murphy, K.; Russell, S. (1998). Learning the structure of dynamic probabilistic networks. UAI'98. Morgan Kaufmann. pp. 139–147. CiteSeerX 10.1.1.75.2969.
Software
- bnt on GitHub: the Bayes Net Toolbox for Matlab, by Kevin Murphy (released under a GPL license)
- Graphical Models Toolkit (GMTK): an open source, publicly available toolkit for rapidly prototyping statistical models using dynamic graphical models (DGMs) and dynamic Bayesian networks (DBNs). GMTK can be used for applications and research in speech and language processing, bioinformatics, activity recognition, and any time series application.
- DBmcmc: Inferring Dynamic Bayesian Networks with MCMC, for Matlab (free software)
- GlobalMIT Matlab toolbox at Google Code: modeling gene regulatory networks via global optimization of dynamic Bayesian networks (released under a GPL license)
- libDAI: C++ library that provides implementations of various (approximate) inference methods for discrete graphical models; supports arbitrary factor graphs with discrete variables, including discrete Markov Random Fields and Bayesian Networks (released under the FreeBSD license)