LDR  03366nam a22003258i 4500
001  CR9781107298019
003  UkCbUP
008  130717s2014||||enk o ||1 0|eng|d
020  __ $a 9781107298019
020  __ $a 9781107057135
035  __ $a (Sirsi) a793713
041  __ $a eng
082  00 $a 006.3/1 $2 23
100  1_ $a Shalev-Shwartz, Shai, $e author.
245  10 $a Understanding machine learning : $b from theory to algorithms $h [E-Book] / $c Shai Shalev-Shwartz, The Hebrew University, Jerusalem, Shai Ben-David, University of Waterloo, Canada.
264  _1 $a Cambridge : $b Cambridge University Press, $c 2014 $e (CUP) $f CUP20200108
300  __ $a 1 online resource (xvi, 397 pages)
336  __ $a text $b txt $2 rdacontent
337  __ $a computer $b c $2 rdamedia
338  __ $a online resource $b cr $2 rdacarrier
500  __ $a English
505  8_ $a Machine generated contents note: 1. Introduction; Part I. Foundations: 2. A gentle start; 3. A formal learning model; 4. Learning via uniform convergence; 5. The bias-complexity tradeoff; 6. The VC-dimension; 7. Non-uniform learnability; 8. The runtime of learning; Part II. From Theory to Algorithms: 9. Linear predictors; 10. Boosting; 11. Model selection and validation; 12. Convex learning problems; 13. Regularization and stability; 14. Stochastic gradient descent; 15. Support vector machines; 16. Kernel methods; 17. Multiclass, ranking, and complex prediction problems; 18. Decision trees; 19. Nearest neighbor; 20. Neural networks; Part III. Additional Learning Models: 21. Online learning; 22. Clustering; 23. Dimensionality reduction; 24. Generative models; 25. Feature selection and generation; Part IV. Advanced Theory: 26. Rademacher complexities; 27. Covering numbers; 28. Proof of the fundamental theorem of learning theory; 29. Multiclass learnability; 30. Compression bounds; 31. PAC-Bayes; Appendix A. Technical lemmas; Appendix B. Measure concentration; Appendix C. Linear algebra.
520  __ $a Machine learning is one of the fastest-growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides a theoretical account of the fundamentals underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics, the book covers a wide array of central topics unaddressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for advanced undergraduates or beginning graduates, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.
650  _0 $a Machine learning.
650  _0 $a Algorithms.
700  1_ $a Ben-David, Shai, $e author.
856  40 $u https://doi.org/10.1017/CBO9781107298019 $z Full text
932  __ $a CambridgeCore (Order 30059)
596  __ $a 1
949  __ $a XX(793713.1) $w AUTO $c 1 $i 793713-1001 $l ELECTRONIC $m ZB $r N $s Y $t E-BOOK $u 8/1/2020 $x UNKNOWN $z UNKNOWN $1 ONLINE