Fighting deskilling in Human–AI systems: Behavioral experiments and interventions in groups for accurate evaluation of information
ABG-131738
ADUM-65314
Thesis topic
2025-05-07
Université Claude Bernard Lyon 1
BRON - Auvergne-Rhône-Alpes - France
- Psychology, neurosciences
Human-AI systems, behavioral economics, social networks
Topic description
Concerns have been raised about over-reliance on AI, cognitive offloading, and the erosion of critical cognitive skills. Deskilling describes the loss of skills due to technological change, including the use of AI systems. Here we focus on the skills required to make decisions in networked systems, in which over-reliance on AI may reduce the ability to critically evaluate the veracity of news headlines. First, we will test AI-assisted groups (15 teams, 150 participants) and human-only groups (15 teams, 150 participants). We hypothesize that: (a) the AI-assisted groups, which receive AI-generated credibility scores for each news headline, will show deskilling (reduced critical thinking) over a 4-week period; (b) subsequent removal of the AI will lead to reduced critical-thinking engagement compared with the human-only groups; and (c) the human-only groups will demonstrate better skill retention and more effective discussion strategies in the final phase of the task. This first experiment should demonstrate experimentally that assessing news veracity with AI can reduce critical thinking. Second, we propose a behavioral experiment to mitigate the deskilling effects of AI-assisted news evaluation. To do this, we will implement three psychological inoculation interventions, consisting of preemptive cognitive training that helps individuals resist misinformation and over-reliance on AI. Together, these large experiments and novel inoculation strategies should show how to strengthen critical thinking in networked groups, with or without AI assistance.
Thesis start date: 01/10/2025
WEB : https://www.pepr-ensemble.fr/proposition-de-these-fighting-deskilling-in-human-ai-systems-behavioral-experiments-and-interventions-in-groups-for-accurate-evaluation-of-information/
Funding category
Plan Investissement d'Avenir (Idex, Labex)
Presentation of host institution and host laboratory
Université Claude Bernard Lyon 1
Institution awarding doctoral degree
Université Claude Bernard Lyon 1
Graduate school
476 NSCo - Neurosciences et Cognition
Candidate's profile
Programming of behavioral experiments; statistical analyses. The ideal candidate is a highly motivated student with a strong background in computer science, human–machine interaction, behavioral economics, psychology, or related fields. A keen interest in behavioral and experimental methods is highly desirable. Experience with experimental design, programming, and statistical analysis (e.g., R, Python, Stata) is a great advantage. The candidate should be curious, independent, and able to work both autonomously and collaboratively. Strong communication skills and proficiency in academic writing in English are essential.
2025-05-15