Stability in Machine Learning through its algorithms

Objectives

  1. Clarify fundamental concepts of Machine Learning that arise across its algorithms, such as overfitting, regularization, and computational cost.

  2. Study in detail three well-known and widely used Machine Learning algorithms (models): Neural Networks, Support Vector Machines, and Decision Trees.

  3. Introduce students to the Boosting method, with a focus on its applications to Machine Learning as well as its limitations.

  4. Study the details behind some stochastic algorithms in Machine Learning.

  5. Once the above objectives have been met, initiate students in the ideas of stability within Machine Learning; we will do so through concrete examples with the algorithms and meta-algorithms detailed in the syllabus.

Syllabus

  1. Neural network principles

  • Linear perceptron

  • Theoretical justification of the Perceptron

  • Perceptron regularization

  • Gradient algorithm

  • Neural networks in general

  • Forecasting via neural networks
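The linear perceptron and its mistake-driven update rule, the first topics in this section, can be sketched in a few lines. The toy dataset and hyperparameters below are illustrative, not part of the course material:

```python
# Minimal linear perceptron: learn w, b so that sign(w·x + b) matches the labels.
def perceptron_train(X, y, epochs=20, lr=1.0):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Predict with the current weights.
            activation = sum(wj * xj for wj, xj in zip(w, xi)) + b
            pred = 1 if activation >= 0 else -1
            # Update only on mistakes (the classic perceptron rule).
            if pred != yi:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Tiny linearly separable toy data: the label is the sign of the first coordinate.
X = [(2.0, 1.0), (1.5, -0.5), (-1.0, 0.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = perceptron_train(X, y)
```

On linearly separable data such as this, the perceptron convergence theorem (part of the theoretical justification covered above) guarantees the loop stops making mistakes after finitely many updates.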

 

2. Support Vector Machines

  • Basic definitions and probability complements

  • Comparison with the Perceptron

  • Stability in SVM
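One way to make the comparison with the perceptron concrete is a linear SVM trained by subgradient descent on the regularized hinge loss: the perceptron updates only on misclassification, while the SVM also penalizes correctly classified points inside the margin. This is a minimal sketch with illustrative data and hyperparameters:

```python
def svm_train(X, y, lam=0.01, lr=0.1, epochs=500):
    # Full-batch subgradient descent on
    #   (1/n) sum_i max(0, 1 - y_i (w·x_i + b)) + lam * ||w||^2
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        gw = [2 * lam * wj for wj in w]  # gradient of the regularizer
        gb = 0.0
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: contributes a subgradient
                for j in range(d):
                    gw[j] -= yi * xi[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def svm_predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

X = [(2.0, 1.0), (1.5, -0.5), (-1.0, 0.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = svm_train(X, y)
```

The regularization term `lam * ||w||^2` is what makes the SVM's output stable under small perturbations of the training set, the theme of the last bullet above.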

 

3. Decision trees

  • Basic algorithms

  • Entropy and its relation to information theory

  • Decision tree stability
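The entropy bullet above has a short computational core: Shannon entropy of a label set, and the information gain of a candidate split, which the basic tree-growing algorithms maximize greedily. A minimal sketch:

```python
from math import log2
from collections import Counter

def entropy(labels):
    # Shannon entropy H = -sum_c p_c * log2(p_c) over class proportions.
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    # Reduction in entropy obtained by splitting `parent` into two children,
    # weighting each child's entropy by its share of the examples.
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted
```

A perfectly balanced binary label set has entropy 1 bit, and a split that separates the classes exactly recovers that full bit as information gain.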

 

4. Boosting

  • Joint Boosting

  • Stochastic boosting

  • Composition of linear algorithms

  • Boosting stability
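The reweighting idea behind boosting can be sketched with AdaBoost over decision stumps on one-dimensional data; this is an illustrative toy implementation, not the exact variants covered in the section:

```python
from math import log, exp

def stump_predict(threshold, polarity, x):
    # Weak learner: a single threshold on the feature.
    return polarity if x >= threshold else -polarity

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    weights = [1.0 / n] * n
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(rounds):
        # Pick the stump minimizing the weighted training error.
        best = None
        for t in sorted(xs):
            for pol in (1, -1):
                err = sum(w for w, x, y in zip(weights, xs, ys)
                          if stump_predict(t, pol, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # Reweight: misclassified points gain weight, correct ones lose it.
        weights = [w * exp(-alpha * y * stump_predict(t, pol, x))
                   for w, x, y in zip(weights, xs, ys)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return ensemble

def ensemble_predict(ensemble, x):
    s = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if s >= 0 else -1

xs = [1.0, 2.0, 3.0, 4.0]
ys = [-1, -1, 1, 1]
model = adaboost(xs, ys)
```

The final classifier is a weighted vote of weak learners, which is the "composition of linear algorithms" viewpoint in the list above.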

 

5. Stochastic Gradient Descent

  • Review of the gradient method

  • Stochastic gradient method for the perceptron

  • Stochastic gradient for SVM

  • Stochastic gradient for neural networks in general
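The common pattern in this section, one noisy gradient step per training example rather than a full-batch gradient, can be written once and reused across losses. A minimal sketch with an illustrative hinge-loss instance and toy data:

```python
import random

def sgd_linear(X, y, loss_grad, lr=0.05, epochs=100, seed=0):
    # Generic SGD for a linear model: one (sub)gradient step per example,
    # visiting the examples in a fresh random order each epoch.
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            gw, gb = loss_grad(w, b, X[i], y[i])
            w = [wj - lr * gj for wj, gj in zip(w, gw)]
            b -= lr * gb
    return w, b

def hinge_grad(w, b, x, y):
    # Subgradient of max(0, 1 - y(w·x + b)) at a single sample.
    margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
    if margin >= 1:
        return [0.0] * len(w), 0.0
    return [-y * xj for xj in x], -y

X = [(2.0, 1.0), (1.5, -0.5), (-1.0, 0.5), (-2.0, -1.0)]
y = [1, 1, -1, -1]
w, b = sgd_linear(X, y, hinge_grad)
```

Swapping `loss_grad` for the perceptron loss or a backpropagated network gradient gives the other variants listed above; the randomized visiting order is what distinguishes the stochastic method from the full gradient reviewed first.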

 

6. Invitation to stochastic approximation algorithms (if time permits)

  • Pólya urns

  • Martingale principles

  • Lyapunov functions

  • Stochastic approximation
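The Pólya urn is a convenient first example tying these topics together: the fraction of red balls is a bounded martingale, hence converges almost surely. A minimal simulation sketch, with illustrative parameters:

```python
import random

def polya_urn(steps, red=1, blue=1, seed=0):
    # Pólya urn: draw a ball uniformly, return it plus one more of the
    # same color. The running fraction of red balls is a bounded
    # martingale, so it converges almost surely (to a Beta-distributed
    # limit depending on the initial composition).
    rng = random.Random(seed)
    fractions = []
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
        fractions.append(red / (red + blue))
    return fractions

fracs = polya_urn(5000)
```

Note that each draw changes the fraction by at most 1/(total balls + 1), so the trajectory's fluctuations shrink over time, which is the intuition behind the martingale and Lyapunov-function arguments in this section.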
