Welcome to another installment of these weekly KDnuggets free eBook overviews.
While we have detoured into specialized topics over the past several weeks, including some which are more advanced in nature, we felt it was time to bring it back to basics, and have a look at a book on foundational machine learning concepts. This week we introduce Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David.
Directly from the book’s website:
The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides a theoretical account of the fundamentals underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics, the book covers a wide array of central topics unaddressed by previous textbooks.
Designed for advanced undergraduates or beginning graduates, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics and engineering.
This book is explanatory in nature, and focuses on the theory of a variety of machine learning concepts. There is no code to see here; you aren’t writing algorithms from scratch, nor are you using existing libraries to implement any. This is strictly for learning theory.
The book is split into four distinct parts, as outlined below:
Part I: Foundations
- A gentle start
- A formal learning model
- Learning via uniform convergence
- The bias-complexity trade-off
- The VC-dimension
- Non-uniform learnability
- The runtime of learning
Part II: From Theory to Algorithms
- Linear predictors
- Model selection and validation
- Convex learning problems
- Regularization and stability
- Stochastic gradient descent
- Support vector machines
- Kernel methods
- Multiclass, ranking, and complex prediction problems
- Decision trees
- Nearest neighbor
- Neural networks
Part III: Additional Learning Models
- Online learning
- Dimensionality reduction
- Generative models
- Feature selection and generation
Part IV: Advanced Theory
- Rademacher complexities
- Covering numbers
- Proof of the fundamental theorem of learning theory
- Multiclass learnability
- Compression bounds
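To give a flavor of the Part II material, here is a minimal, self-contained sketch of stochastic gradient descent — one of the algorithms the book derives — applied to one-dimensional least-squares regression. The function name, learning rate, and toy data are illustrative choices, not taken from the book:

```python
import random

# Minimal SGD sketch for one-dimensional least-squares regression.
# Model: y_hat = w * x + b; per-example loss: (y_hat - y) ** 2.
def sgd_linear(data, lr=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)              # visit examples in random order
        for x, y in data:
            err = (w * x + b) - y      # prediction error on one example
            w -= lr * 2 * err * x      # gradient step for the weight
            b -= lr * 2 * err          # gradient step for the bias
    return w, b

# Noise-free data drawn from y = 2x + 1; SGD should recover w ~ 2, b ~ 1.
points = [(x / 10, 2 * (x / 10) + 1) for x in range(-10, 11)]
w, b = sgd_linear(points)
```

The book's treatment, of course, goes well beyond this toy: it analyzes when and why such updates converge, which is exactly the theory-to-algorithms bridge the title promises.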
In its introduction, the book sets out the following pair of goals:
The first goal of this book is to provide a rigorous, yet easy to follow, introduction to the main concepts underlying machine learning: What is learning? How can a machine learn? How do we quantify the resources needed to learn a given concept? Is learning always possible? Can we know if the learning process succeeded or failed?
The second goal of this book is to present several key machine learning algorithms. We chose to present algorithms that on one hand are successfully used in practice and on the other hand give a wide spectrum of different learning techniques.
The book relies heavily on mathematics to explain its concepts. In reality, thoroughly understanding the theoretical underpinnings of these machine learning concepts is not possible without mathematics, but this often comes as a surprise and can be overwhelming to some readers, so consider yourself warned.
Once the possible shock of math-heavy theory wears off, you will find thorough treatments of topics ranging from the bias-variance trade-off and linear regression to model validation strategies, boosting, kernel methods, complex prediction problems, and beyond. The benefit of such a thorough treatment is that your understanding will go deeper than just grasping the abstract intuition.
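As a small illustration of one of those topics — model validation strategies — here is a hedged sketch of simple hold-out validation: split the data, fit on the training part, and estimate error on the held-out part. The helper names and the trivial mean-predictor "model" are my own illustrative choices, not the book's:

```python
import random

# Hold-out validation sketch: reserve a fraction of the data for training
# and estimate generalization error on the remainder.
def holdout_split(data, frac=0.8, seed=0):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * frac)
    return shuffled[:cut], shuffled[cut:]

def mean(xs):
    return sum(xs) / len(xs)

# Toy "model": predict the training-set mean; validation error is MSE.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
train, valid = holdout_split(data)
prediction = mean(train)
val_mse = mean([(y - prediction) ** 2 for y in valid])
```

The book's interest is in why an estimate like `val_mse` is trustworthy at all — the kind of guarantee its uniform-convergence machinery is built to provide.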
You can directly download a PDF of the book here.
If you are in the market for a rigorous deep dive into machine learning concepts and the theory behind them, be sure to add Understanding Machine Learning: From Theory to Algorithms to your short list.