Approximation stochastique et apprentissage en ligne // Stochastic Approximation: Theory and Applications to Online Learning
ABG-132181
ADUM-66253
Thesis topic
2025-05-24
Université Grenoble Alpes
Saint-Martin-d'Hères - Auvergne-Rhône-Alpes - France
- Computer science
Statistical learning, reinforcement learning, stochastic approximation, bandits
Stochastic approximation, online learning, reinforcement learning, policy gradient
Topic description
(see the English description below)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Stochastic approximation (SA) is a widely used algorithmic paradigm for solving fixed-point equations or optimization problems when only noisy observations are available. Introduced in the 1950s by Robbins and Monro, SA remains central to many modern algorithmic approaches in statistics, optimization, and machine learning. Fundamentally, a stochastic approximation algorithm aims to generate a sequence of estimates $\theta_n$ that converges to a target $\theta^*$—typically a solution to a root-finding problem or a minimizer of an expected loss—based on noisy observations of gradient-like information. SA forms the theoretical backbone of several key algorithmic classes in machine learning, including stochastic gradient descent (SGD), temporal-difference learning, $Q$-learning, and policy gradient.
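As a point of reference, the generic Robbins-Monro update can be written as follows (this display is our illustration; the notation beyond $\theta_n$ and $\theta^*$ is not taken from the posting):
\[
\theta_{n+1} = \theta_n + \gamma_{n+1}\, H(\theta_n, X_{n+1}), \qquad \sum_{n \ge 1} \gamma_n = \infty, \qquad \sum_{n \ge 1} \gamma_n^2 < \infty,
\]
where $X_{n+1}$ is the noisy observation available at step $n+1$ and $H$ is chosen so that the mean field $h(\theta) = \mathbb{E}[H(\theta, X)]$ vanishes at the target, i.e. $h(\theta^*) = 0$. SGD is the special case $H(\theta, X) = -\nabla_\theta \ell(\theta, X)$ for a per-sample loss $\ell$.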
The primary objective of this thesis is to advance the theoretical understanding of stochastic approximation schemes, with a focus on characterizing the error between the iterates $\theta_n$ and their limiting point $\theta^*$, especially in the presence of \emph{multi-scale dynamics}.
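A standard example of such multi-scale dynamics (again our illustration, not the posting's notation) is two-timescale stochastic approximation, in which a fast iterate $w_n$ tracks a quantity that depends on a slowly moving iterate $\theta_n$:
\[
w_{n+1} = w_n + \beta_{n+1}\, G(\theta_n, w_n, X_{n+1}), \qquad \theta_{n+1} = \theta_n + \gamma_{n+1}\, H(\theta_n, w_n, X_{n+1}),
\]
with step sizes satisfying $\gamma_n / \beta_n \to 0$, so that $w_n$ equilibrates quickly relative to $\theta_n$. Actor-critic policy-gradient methods are the canonical instance, with the critic updated on the fast timescale and the actor on the slow one.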
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Thesis start date: 01/10/2025
Funding category
Concours allocations (competitive doctoral allocation)
Funding further details
Presentation of host institution and host laboratory
Université Grenoble Alpes
Institution awarding doctoral degree
Université Grenoble Alpes
Graduate school
217 MSTII - Mathématiques, Sciences et technologies de l'information, Informatique
Candidate's profile
Master's degree in applied mathematics or computer science. Strong interest in statistical learning, with good knowledge of applied probability and stochastic processes (an advanced course in probability and stochastic processes is expected). Willingness to work on online learning and optimization.
Application deadline: 2025-06-09