Stability in Machine Learning through its algorithms
Objectives

Clarify fundamental concepts that appear across Machine Learning algorithms, such as overfitting, regularization, and computational cost.

Know in detail three well-known and useful Machine Learning algorithms (models): Neural Networks, Support Vector Machines, and Decision Trees.

Introduce students to the Boosting method, with a focus on its applications to Machine Learning as well as its limitations.

Study the details behind some stochastic algorithms in Machine Learning.

Once the above objectives have been met, we seek to initiate students in the ideas of stability within Machine Learning; we will do so through concrete examples with the algorithms and meta-algorithms detailed in the syllabus.
Syllabus

1. Neural network principles

Linear perceptron

Theoretical justification of the perceptron

Perceptron regularization

Gradient algorithm

Neural networks in general

Forecasting via neural networks
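As a preview of the perceptron topics above, here is a minimal sketch of the classic perceptron update rule (the toy data, learning rate, and epoch count are illustrative assumptions, not part of the course material):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Classic perceptron rule: adjust weights only on misclassified points.

    X: (n_samples, n_features) array; y: labels in {-1, +1}.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                w += lr * yi * xi        # move the separating hyperplane
                b += lr * yi
                errors += 1
        if errors == 0:                  # converged on linearly separable data
            break
    return w, b

# Toy linearly separable data (hypothetical)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```

On linearly separable data the loop stops once an entire pass makes no mistakes, which is the situation analyzed in the perceptron convergence theorem covered in this section.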
2. Support Vector Machines

Basic definitions and probability complements

Comparison with the perceptron

Stability in SVM
3. Decision trees

Basic algorithms

Entropy and its relation to information theory

Decision tree stability
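The entropy item above can be made concrete with a short sketch of information gain, the entropy-based criterion used to score decision-tree splits (the function names and toy labels are illustrative assumptions):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label vector, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    """Reduction in entropy from splitting `parent` into two children."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# A pure split of a balanced binary node gains one full bit of information.
parent = np.array([0, 0, 1, 1])
gain = information_gain(parent, parent[:2], parent[2:])  # left=[0,0], right=[1,1]
```

Greedy tree-building algorithms such as ID3 choose, at each node, the split maximizing this gain; the small perturbations that change the chosen split are exactly what the stability discussion in this section addresses.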
4. Boosting

Joint Boosting

Stochastic boosting

Composition of linear algorithms

Boosting stability
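To preview the boosting topics, here is a compact AdaBoost-style sketch using one-dimensional threshold stumps as weak learners (the stump search, toy data, and round count are illustrative assumptions):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """AdaBoost with threshold stumps on a 1-D feature; y in {-1, +1}."""
    n = len(X)
    w = np.full(n, 1.0 / n)                   # example weights
    ensemble = []                             # (alpha, threshold, sign)
    for _ in range(rounds):
        best = None
        for thr in X:                         # candidate thresholds
            for s in (1, -1):                 # orientation of the stump
                pred = s * np.sign(X - thr + 1e-12)
                err = w[pred != y].sum()      # weighted error of this stump
                if best is None or err < best[0]:
                    best = (err, thr, s, pred)
        err, thr, s, pred = best
        err = max(err, 1e-12)                 # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err) # stump's vote weight
        w *= np.exp(-alpha * y * pred)        # upweight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, thr, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.sign(X - thr + 1e-12) for a, thr, s in ensemble)
    return np.sign(score)

# Toy 1-D data (hypothetical): negatives below 2.5, positives above.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([-1, -1, 1, 1])
ensemble = adaboost_stumps(X, y, rounds=5)
```

The reweighting step is the heart of boosting: each round forces the next weak learner to focus on the examples the current ensemble still gets wrong.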
5. Stochastic Gradient Descent

Review of the gradient method

Stochastic gradient method for the perceptron

Stochastic gradient for SVM

Stochastic gradient for neural networks in general
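The stochastic gradient items above can be previewed with a Pegasos-style stochastic subgradient sketch for the SVM hinge loss (the regularization constant, step-size schedule, and toy data are illustrative assumptions):

```python
import numpy as np

def sgd_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Stochastic subgradient descent on the regularized hinge loss:
        min_w  lam/2 ||w||^2 + mean(max(0, 1 - y_i w.x_i)).
    One randomly ordered example is processed per step.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)             # decreasing step size
            if y[i] * (X[i] @ w) < 1:         # margin violated: hinge is active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                             # only the regularizer contributes
                w = (1 - eta * lam) * w
    return w

# Toy data separable through the origin (hypothetical)
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.5, -1.0], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = sgd_svm(X, y)
preds = np.sign(X @ w)
```

The same pattern (pick a random example, take a step along its loss gradient with a decaying step size) carries over to the perceptron and to neural networks; only the per-example gradient changes.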
6. Invitation to stochastic approximation algorithms (if time permits)

Pólya urns

Martingale principles

Lyapunov functions

Stochastic approximation
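The Pólya urn listed above already illustrates the martingale ideas in this section: the fraction of red balls is a martingale that converges almost surely. A minimal simulation (the initial urn composition and step count are illustrative assumptions):

```python
import random

def polya_urn(steps, seed=0):
    """Simulate a Pólya urn: draw a ball at random, return it together with
    one extra ball of the same color. Starts with one red and one blue ball;
    returns the final fraction of red balls."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1        # drew red: reinforce red
        else:
            blue += 1       # drew blue: reinforce blue
    return red / (red + blue)

frac = polya_urn(10_000)
```

Rerunning with different seeds shows the limiting fraction varies from run to run (for this 1+1 start it is uniformly distributed in the limit), a first taste of the stochastic approximation phenomena this section invites students to explore.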