Où docteurs et entreprises se rencontrent
Menu
Connexion

Security and privacy in agentic AI systems: threat characterization, attack surface measurement, and human factors // Security and privacy in agentic AI systems: threat characterization, attack surface measurement, and human factors

ABG-138624
ADUM-74377
Sujet de Thèse
21/04/2026
Université Côte d'Azur
Sophia Antipolis Cedex - Provence-Alpes-Côte d'Azur - France
Security and privacy in agentic AI systems: threat characterization, attack surface measurement, and human factors // Security and privacy in agentic AI systems: threat characterization, attack surface measurement, and human factors
  • Informatique
LLM agents, prompt injection, Psychologie, Experiments
LLM agents, prompt injection, Psychologie, Experiments

Description du sujet

The subject is only described in English. B2/C1 level of English is mandatory to apply.

Large language models are increasingly deployed not as passive chat interfaces but as autonomous agents equipped with tools (web browsing, code execution, API calls, file systems) and capable of multi-step planning. This architectural shift from single-turn inference to agentic, goal-directed execution fundamentally changes the threat landscape. An agent capable of taking irreversible real-world actions such as sending emails, modifying databases, or invoking APIs is a materially different object of study from a chatbot. Yet the security and privacy communities have not yet produced systematic, empirical characterizations of this new attack surface, nor do we understand how human operators perceive and manage residual risk when interacting with such systems.

The central research question is: 'What are the measurable security and privacy vulnerabilities of agentic AI systems across their execution lifecycle, and how do human operators perceive, mitigate, and inadvertently amplify these risks?'

Two axes structure the investigation. The first, system-level, empirically characterizes vulnerabilities in real agentic pipelines through controlled adversarial experiments covering prompt injection, tool-call hijacking, memory poisoning, context exfiltration, and cross-agent manipulation in multi-agent settings. The second, human-level, studies how users and operators form mental models of agentic risk, when and why they over-delegate, how interface design mediates vigilance, and what human-in-the-loop mechanisms provide genuine oversight rather than the appearance of it.

What makes this thesis distinctive is its methodological pairing: combining adversarial system experiments with controlled human studies to address a problem space that neither the systems security community nor the human factors community has tackled in isolation. This pairing is rare in the LLM security literature and constitutes the core scientific bet of the thesis.

The thesis sits at the intersection of three active research communities: LLM security (prompt injection, jailbreaks), privacy-preserving machine learning (membership inference, data leakage), and human factors in automation (automation bias, situation awareness). Its distinguishing contribution is the exclusive focus on the agentic execution paradigm, where multi-step planning and tool use create a qualitatively different attack surface from single-turn inference.

The specific research directions, experimental designs, and target systems will be refined throughout the thesis in response to emerging results and ongoing collaboration with supervisors. The above framing defines the problem space and initial methodology, not a fixed program.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The subject is only described in English. B2/C1 level of English is mandatory to apply.

Large language models are increasingly deployed not as passive chat interfaces but as autonomous agents equipped with tools (web browsing, code execution, API calls, file systems) and capable of multi-step planning. This architectural shift from single-turn inference to agentic, goal-directed execution fundamentally changes the threat landscape. An agent capable of taking irreversible real-world actions such as sending emails, modifying databases, or invoking APIs is a materially different object of study from a chatbot. Yet the security and privacy communities have not yet produced systematic, empirical characterizations of this new attack surface, nor do we understand how human operators perceive and manage residual risk when interacting with such systems.

The central research question is: 'What are the measurable security and privacy vulnerabilities of agentic AI systems across their execution lifecycle, and how do human operators perceive, mitigate, and inadvertently amplify these risks?'

Two axes structure the investigation. The first, system-level, empirically characterizes vulnerabilities in real agentic pipelines through controlled adversarial experiments covering prompt injection, tool-call hijacking, memory poisoning, context exfiltration, and cross-agent manipulation in multi-agent settings. The second, human-level, studies how users and operators form mental models of agentic risk, when and why they over-delegate, how interface design mediates vigilance, and what human-in-the-loop mechanisms provide genuine oversight rather than the appearance of it.

What makes this thesis distinctive is its methodological pairing: combining adversarial system experiments with controlled human studies to address a problem space that neither the systems security community nor the human factors community has tackled in isolation. This pairing is rare in the LLM security literature and constitutes the core scientific bet of the thesis.

The thesis sits at the intersection of three active research communities: LLM security (prompt injection, jailbreaks), privacy-preserving machine learning (membership inference, data leakage), and human factors in automation (automation bias, situation awareness). Its distinguishing contribution is the exclusive focus on the agentic execution paradigm, where multi-step planning and tool use create a qualitatively different attack surface from single-turn inference.

The specific research directions, experimental designs, and target systems will be refined throughout the thesis in response to emerging results and ongoing collaboration with supervisors. The above framing defines the problem space and initial methodology, not a fixed program.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Début de la thèse : 01/10/2026

Nature du financement

Précisions sur le financement

Contrat doctoral EDSTIC-DS4H

Présentation établissement et labo d'accueil

Université Côte d'Azur

Etablissement délivrant le doctorat

Université Côte d'Azur

Ecole doctorale

84 STIC - Sciences et Technologies de l'Information et de la Communication

Profil du candidat

The profile is only described in English. B2/C1 level of English is mandatory to apply. The candidate must hold a Master or equivalent degree when starting the PhD. The candidate required skills are: - C1 level in English (possibly B2 close to reach C1) - Excellent programming and systems skills. We will work in a Linux environment with python and with its data science libraries (numpy, pandas, polars, seaborn, scikit-learn, statmodels). We also use Git. If the candidate is not fluent in Python, they must be *fluent* in another language and able to learn Python fast. - Excellent communication skills. An important part of the Ph.D. is to communicate on the results. The candidate must be ready to write high quality papers and give stunning talks. These skills will be nurtured during the Ph.D. thesis. - Curious, highly motivated, hard worker, autonomous, perfectionist. A good sign you have a profile to make an excellent Ph.D. thesis is when you cannot stand to do not understand something and can work hard to get it (or make your stuff work). Before deciding to make a Ph.D. thesis, you must read references in this page to be sure you made the right decision http://www-sop.inria.fr/members/Arnaud.Legout/phdstudents.html If you apply, I expect that you will directly get in touch with me very early in the process (arnaud.legout@inria.fr) to discuss if you are a good fit for the subject and I am a good fit as a supervisor. Discussing early with a potential supervisor about the subject and the supervision style (even if you are not sure to apply) is a sign of maturity and will be highly appreciated.
The profile is only described in English. B2/C1 level of English is mandatory to apply. The candidate must hold a Master or equivalent degree when starting the PhD. The candidate required skills are: - C1 level in English (possibly B2 close to reach C1) - Excellent programming and systems skills. We will work in a Linux environment with python and with its data science libraries (numpy, pandas, polars, seaborn, scikit-learn, statmodels). We also use Git. If the candidate is not fluent in Python, they must be *fluent* in another language and able to learn Python fast. - Excellent communication skills. An important part of the Ph.D. is to communicate on the results. The candidate must be ready to write high quality papers and give stunning talks. These skills will be nurtured during the Ph.D. thesis. - Curious, highly motivated, hard worker, autonomous, perfectionist. A good sign you have a profile to make an excellent Ph.D. thesis is when you cannot stand to do not understand something and can work hard to get it (or make your stuff work). Before deciding to make a Ph.D. thesis, you must read references in this page to be sure you made the right decision http://www-sop.inria.fr/members/Arnaud.Legout/phdstudents.html If you apply, I expect that you will directly get in touch with me very early in the process (arnaud.legout@inria.fr) to discuss if you are a good fit for the subject and I am a good fit as a supervisor. Discussing early with a potential supervisor about the subject and the supervision style (even if you are not sure to apply) is a sign of maturity and will be highly appreciated.
03/05/2026
Partager via
Postuler
Fermer

Vous avez déjà un compte ?

Nouvel utilisateur ?