000 03093nam a22002057a 4500
999 _c2486 _d2486
003 OSt
005 20200224111321.0
008 200118b ||||| |||| 00| 0 eng d
020 _a978-1-107-43995-5
028 _bAllied Informatics, Jaipur _c7084 _d13/01/2020 _q2019-20
040 _aBSDU _beng _cBSDU
082 _a006.31 _bBAR
100 _aBarber, David
245 _aBayesian Reasoning and Machine Learning
260 _aNew Delhi : _bCambridge University Press, _c2019
300 _a697 p.
520 _aMachine learning methods extract value from vast data sets quickly and with modest resources. They are established tools in a wide range of industrial applications, including search engines, DNA sequencing, stock market analysis, and robot locomotion, and their use is spreading rapidly. People who know the methods have their choice of rewarding jobs. This hands-on text opens these opportunities to computer science students with modest mathematical backgrounds. It is designed for final-year undergraduates and master's students with limited background in linear algebra and calculus. Comprehensive and coherent, it develops everything from basic reasoning to advanced techniques within the framework of graphical models. Students learn more than a menu of techniques; they develop analytical and problem-solving skills that equip them for the real world. Numerous examples and exercises, both computer-based and theoretical, are included in every chapter. Resources for students and instructors, including a MATLAB toolbox, are available online. Consistent use of modelling encourages students to see the bigger picture while they develop hands-on experience; a full downloadable MATLAB toolbox, including demos, equips students to build their own models; and the website includes figures from the book, LaTeX code for use in slides, and additional teaching material that enables instructors to easily set exercises and assignments.
505 _aPreface -- Part I. Inference in Probabilistic Models: 1. Probabilistic reasoning -- 2. Basic graph concepts -- 3. Belief networks -- 4. Graphical models -- 5. Efficient inference in trees -- 6. The junction tree algorithm -- 7. Making decisions -- Part II. Learning in Probabilistic Models: 8. Statistics for machine learning -- 9. Learning as inference -- 10. Naive Bayes -- 11. Learning with hidden variables -- 12. Bayesian model selection -- Part III. Machine Learning: 13. Machine learning concepts -- 14. Nearest neighbour classification -- 15. Unsupervised linear dimension reduction -- 16. Supervised linear dimension reduction -- 17. Linear models -- 18. Bayesian linear models -- 19. Gaussian processes -- 20. Mixture models -- 21. Latent linear models -- 22. Latent ability models -- Part IV. Dynamical Models: 23. Discrete-state Markov models -- 24. Continuous-state Markov models -- 25. Switching linear dynamical systems -- 26. Distributed computation -- Part V. Approximate Inference: 27. Sampling -- 28. Deterministic approximate inference -- Appendix. Background mathematics.
650 _aMachine Learning
942 _2ddc _cBK