
Physics-Informed Wearable AI for Predictive Human–Robot Interaction

ABG-138543 PhD Topic
17/04/2026 Doctoral contract
Université Paris Est Créteil
Vitry-sur-Seine - Île-de-France - France
  • Robotics
  • Computer science
Wearable Robotics, Machine Learning, Control Systems, Biomechanics

Subject description

Context and Challenges
The SIRIUS team develops intelligent robotic systems for human assistance, with a particular emphasis on mobility support and neuro-rehabilitation. Despite significant progress, current wearable robotic systems face fundamental limitations. Model-based controllers ensure interpretability and safety but often struggle to accommodate human variability and complex real-world conditions. Conversely, data-driven approaches demonstrate strong performance in locomotion recognition and state estimation but frequently lack physical consistency, transparency, and robustness when exposed to unseen scenarios.

Moreover, most existing systems remain inherently reactive, detecting locomotion changes only after they occur, typically following contact events, and are often restricted to simplified planar motion. A key challenge is therefore to move toward predictive systems capable of anticipating internal physiological states, such as joint loading and muscle fatigue, in order to provide safer, more efficient, and personalized assistance.
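
The reactive-vs-predictive distinction above can be illustrated with a toy sketch (illustrative only; the ramp signal and threshold stand in for, e.g., a foot-contact force channel, and the window/horizon values are arbitrary assumptions). A reactive detector fires only once the signal actually crosses the threshold, while a predictive one extrapolates the recent trend and fires as soon as the forecast crosses:

```python
import numpy as np

def reactive_detect(signal, threshold):
    """Fire at the first sample where the signal has already crossed."""
    idx = int(np.argmax(signal >= threshold))
    return idx if signal[idx] >= threshold else None

def predictive_detect(signal, threshold, window=5, horizon=10):
    """Fire as soon as a linear extrapolation of the last `window` samples
    is forecast to cross the threshold within `horizon` future samples."""
    for i in range(window, len(signal)):
        w = signal[i - window:i]
        slope = (w[-1] - w[0]) / (window - 1)          # linear trend estimate
        forecast = w[-1] + slope * np.arange(1, horizon + 1)
        if np.any(forecast >= threshold):
            return i                                    # fires before the actual crossing
    return None

# On a signal ramping up to the threshold, the predictive detector
# fires several samples earlier than the reactive one.
signal = np.linspace(0.0, 1.0, 101)
```

A real system would replace the linear extrapolation with a learned forecaster, but the anticipation benefit it buys is the same.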

 

Project Objectives
This PhD project aims to develop a physics-informed wearable AI framework for predictive human–robot interaction in assistive robotics. The core objective is to bridge model-based and learning-based approaches by embedding biomechanical knowledge into data-driven models to achieve robust, physically consistent, and context-aware control.

The project will address the following key objectives:

  • Advanced Multi-modal Sensing: Integration of lightweight wearable sensors, including inertial measurement units (IMUs), force sensors, and electromyography (EMG), to capture detailed motion, interaction forces, and muscle activity.
  • Predictive Physiological Modeling: Development of machine learning models capable of anticipating motion intent while estimating internal states such as joint load and muscle fatigue in real time.
  • 3D Locomotion State Estimation: Design of models for accurate estimation and prediction of locomotion phases and transitions in realistic three-dimensional environments.
  • Adaptive Assistance Modulation: Design of control strategies that anticipate movement transitions and adapt robotic assistance online to minimize physical strain. While the primary focus is on functional assistance and rehabilitation, ergonomic considerations will be incorporated as a secondary objective to enhance user comfort.
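
As a minimal sketch of what "embedding biomechanical knowledge into data-driven models" can mean in practice (an assumption for illustration, not the project's actual method), one can augment a standard data-fit loss with a penalty for violating a simple damped-pendulum model of a joint, theta'' + c*theta' + (g/l)*sin(theta) = 0:

```python
import numpy as np

def physics_informed_loss(theta_pred, theta_meas, dt, c=0.5, g_over_l=9.81,
                          lam=0.1):
    """Data-fit loss plus a weighted physics-residual penalty.

    theta_pred : predicted joint-angle trajectory (rad)
    theta_meas : measured trajectory (rad), same length
    dt         : sampling period (s)
    c, g_over_l: illustrative damped-pendulum parameters
    lam        : weight of the physics term
    """
    data_loss = np.mean((theta_pred - theta_meas) ** 2)
    # Finite-difference estimates of angular velocity and acceleration.
    dtheta = np.gradient(theta_pred, dt)
    ddtheta = np.gradient(dtheta, dt)
    # Residual of the assumed dynamics; zero for a physically consistent trajectory.
    residual = ddtheta + c * dtheta + g_over_l * np.sin(theta_pred)
    physics_loss = np.mean(residual ** 2)
    return data_loss + lam * physics_loss
```

Trained with such a loss, a predictor is steered toward trajectories that satisfy the prior dynamics, which is one common route to the physical consistency and robustness to unseen scenarios that the posting emphasizes.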

Methodology and Environment
The research will be conducted within a multidisciplinary framework at the intersection of robotics, biomechanics, control, and artificial intelligence. The work will follow a structured methodology:

  • Data Collection and Modeling: Acquisition of real-time biomechanical data using wearable sensors to develop physically consistent AI models.
  • Predictive Strategy Optimization: Use of learned models to perform simulation-based evaluation and optimization of assistance strategies prior to deployment.
  • Experimental Validation: Integration of the developed framework into the team’s ROS2-based real-time pipeline and validation on wearable robotic platforms.
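
The "optimize in simulation before deployment" step can be sketched with a toy model (all parameters, the PD tracking law, and the squared-torque effort proxy are illustrative assumptions, not the team's actual models). A simulated ankle tracks a reference trajectory; the assistance supplies a fraction of the required torque, the human supplies the rest, and a grid search picks the assistance gain that minimizes the human-effort proxy:

```python
import numpy as np

def simulate_effort(k_assist, dt=0.005, T=2.0, I=0.05, b=0.2):
    """Simulate tracking of a sinusoidal ankle reference and return an
    integrated squared-human-torque proxy for physical strain."""
    t = np.arange(0.0, T, dt)
    theta_ref = 0.3 * np.sin(2.0 * np.pi * t)        # reference ankle angle (rad)
    theta, omega = 0.0, 0.0
    effort = 0.0
    for i in range(len(t)):
        # Total torque demanded by a simple PD-like tracking law.
        tau_total = 5.0 * (theta_ref[i] - theta) - 1.0 * omega
        tau_assist = k_assist * tau_total             # assistance shares the load
        tau_human = tau_total - tau_assist
        effort += tau_human ** 2 * dt
        # Forward-Euler integration of I*theta'' = tau_total - b*omega.
        omega += dt * (tau_total - b * omega) / I
        theta += dt * omega
    return effort

def optimize_assistance(gains):
    """Return the candidate gain with the lowest simulated effort proxy."""
    return min(gains, key=simulate_effort)
```

In the project itself the plant would be a learned, physically consistent model and the controller far richer, but the pattern of evaluating candidate strategies offline before hardware trials is the same.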

Expected Outcomes
The project is expected to deliver novel methods for predictive and physically grounded human–robot interaction. These contributions will enable the development of more intuitive, adaptive, and personalized assistive systems. In particular, the outcomes will support improved rehabilitation strategies and enhanced mobility assistance, ultimately increasing the effectiveness, safety, and usability of wearable robotic technologies in healthcare and assistive applications.

 

References:

[1] H. Moon, O. Bey, A. Boubezoul, L. Oukhellou, S. Mohammed, "Real-Time LSTM-Driven Dynamic Gait Mode Detection for Enhanced Control of Actuated Ankle-Foot Orthosis", IEEE Transactions on Robotics, vol. 41, pp. 4794–4809, 2025, doi: 10.1109/TRO.2025.3593111.

[2] R. Jradi, O. Bey, H. Rifaï, Y. Amirat, S. Mohammed, "Adaptive Control Strategy for Ankle Assistance Using an Actuated Ankle-Foot Orthosis With Selective Adaptive Parameter Convergence", IEEE Robotics and Automation Letters, vol. 11, no. 2, pp. 1258–1265, February 2026, doi: 10.1109/LRA.2025.3641107.

[3] R. Jradi, H. Rifaï, J.F. Guerrero-Castellanos, S. Mohammed, "Contraction-based Active Disturbance Rejection Controller for an Active Ankle Foot Orthosis", Control Engineering Practice, vol. 169, 2026, 106757, doi: 10.1016/j.conengprac.2026.106757.

[4] O. Bey, Y. Amirat, S. Mohammed, "Adaptive Model-Free Control for Ankle-Assistive Orthosis: A Robust Approach to Real-Time Gait Tracking", Mechatronics, vol. 109, 2025, 103341.

[5] H. Moon, R. Maiti, K. Das Sharma, Y. Amirat, P. Siarry, S. Mohammed, "Hybrid Half Gaussian Adaptive Fuzzy Control of an Actuated-Ankle–Foot-Orthosis", IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 9635–9642, 2022.

[6] W. Huo, H. Moon, M. Alouane, V. Bonnet, J. Huang, Y. Amirat, R. Vaidyanathan, S. Mohammed, "Impedance Modulation Control of a Lower Limb Exoskeleton to Assist Sit-to-Stand Movements", IEEE Transactions on Robotics, vol. 38, no. 2, pp. 1230–1249, 2022.

[7] I. Jammeli, A. Chemori, H. Moon, S. Elloumi, S. Mohammed, "An Assistive Explicit Model Predictive Control Framework for a Knee Rehabilitation Exoskeleton", IEEE/ASME Transactions on Mechatronics, vol. 27, no. 5, pp. 3636–3647, 2022.

[8] S. Mohammed, A. Same, L. Oukhellou, K. Kong, W. Huo, Y. Amirat, "Recognition of Gait Cycle Phases Using Wearable Sensors", Robotics and Autonomous Systems, vol. 75, pp. 50–59, 2016.

[9] G. Khodabandelou, H. Moon, Y. Amirat, S. Mohammed, "A Fuzzy Convolutional Attention-Based GRU Network for Human Activity Recognition", Engineering Applications of Artificial Intelligence, vol. 118, 2023, doi: 10.1016/j.engappai.2022.105702.

Starting date:

01/10/2026

Type of funding

Doctoral contract


Host institution and laboratory

Université Paris Est Créteil

The Images, Signals and Intelligent Systems Laboratory (LISSI) is a research unit of Université Paris Est Créteil (UPEC) specializing in information and digital sciences, with a strategic focus on artificial intelligence and technologies for health.

Axis 1 of the SIRIUS team, entitled "Robotic systems for mobility assistance and rehabilitation", is dedicated to the modeling and control of wearable robotic systems such as exoskeletons and orthoses. These activities aim to create a symbiotic human–robot interaction by integrating the subject's movement intent into the control algorithms, in order to assist daily activities or neuromuscular rehabilitation. Research in this axis increasingly draws on computational intelligence for real-time recognition of gait modes and locomotor activities from wearable sensors, while seeking to guarantee the robustness and explainability of the models.

Candidate profile

Required Skills
Strong background in robotics, control, biomechanics, machine learning, or a related field. Solid programming skills (Python/C++) and experience with signal processing or time-series data. Good understanding of dynamical systems and/or machine learning methods. Ability to work in a multidisciplinary environment and strong English communication skills.

Preferred Skills
Experience with ROS/ROS2, wearable sensors (IMUs, EMG, force sensors), state estimation, or real-time systems. Knowledge of biomechanics or human motion analysis is a plus.

Application deadline: 08/05/2026