Table of Contents
- From neurons to networks
- Learning from example data
- The Perceptron
- Beyond linear separability
- Feed-forward networks for regression and classification
- Distance-based classifiers
- Model evaluation and regularization
- Preprocessing and unsupervised learning
- Concluding quote
- Appendix A: Optimization
- List of figures
- List of algorithms
- Abbreviations and acronyms
About the Book
The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. It was clear from the beginning, however, that these notes could not cover this rapidly changing and growing field in its entirety. The focus is on classical machine learning techniques, with an emphasis on classification and regression. Other learning paradigms and many recent developments, for instance in Deep Learning, are not addressed or only briefly touched upon.
Biehl argues that a solid knowledge of the foundations of the field is essential, especially for anyone whose ambition goes beyond applying some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.
About the Contributors
Michael Biehl is Associate Professor of Computer Science at the Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence of the University of Groningen, where he joined the Intelligent Systems group in 2003. He also holds an honorary Professorship of Machine Learning at the Center for Systems Modelling and Quantitative Biomedicine of the University of Birmingham, UK. His research focuses on the modelling and theoretical understanding of neural networks and machine learning in general. The development of efficient training algorithms for interpretable, transparent systems is a topic of particular interest. A variety of interdisciplinary collaborations concern practical applications of machine learning in the biomedical domain, in astronomy and other areas.