
Dynamic Approximate Computing for Energy-Efficient AI Hardware Accelerators

ABG-138746 Thesis topic
28/04/2026 Doctoral contract
Centre de Recherche Inria Rennes - Bretagne - France
  • Computer science
  • Electronics
Hardware Accelerators, Energy Efficiency, AI, Approximate Computing

Subject description

Disclaimer

A PhD is not a continuation of coursework or a natural next step after a Master’s degree. A PhD is a long-term, research-focused commitment that requires deep curiosity, self-motivation, resilience, and a certain degree of autonomy.

By research, we mean creating new knowledge, not just applying existing theories. Your task is to discover, design, or prove something that no one has done before, work that will become what future students study.

If you are mainly looking for structured classes, predefined assignments, or a repeat of your Master’s experience, you will likely find this path unfulfilling. We welcome applications from candidates who are excited by uncertainty, driven to ask original questions, and eager to shape the frontier of their field.

Context and Background

As artificial intelligence expands across edge devices, data centers, and embedded systems, the demand for computational power has grown dramatically. Deep neural networks require billions of arithmetic operations and move vast amounts of data, which drives up both energy consumption and hardware complexity. Specialized accelerators like GPUs and TPUs have delivered significant improvements over general-purpose processors, yet the increasing scale of AI models continues to push power and thermal limits. This becomes particularly challenging for edge and battery-powered devices, where energy efficiency determines not just performance but practical viability and long-term sustainability.

Approximate computing offers a compelling approach to these energy constraints by trading some computational precision for improvements in power efficiency, performance, and chip area. The fundamental insight is that many AI applications, such as image recognition, speech processing, and recommendation systems, can tolerate a degree of error. Neural networks are particularly resilient, maintaining acceptable accuracy even when operations use reduced precision, simplified circuits, or probabilistic components. Leveraging this tolerance allows designers to reduce switching activity, memory bandwidth, and overall hardware complexity.
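The error tolerance described above can be made concrete with a small sketch. The snippet below is illustrative only (not part of the project): it quantizes the operands of a dot product, the core operation in neural-network layers, to 8-bit integers, so the multiply-accumulates run on cheap integer hardware, and shows that the rescaled result stays close to the full-precision value.

```python
# Illustrative sketch: uniform 8-bit quantization of a dot product.
# Integer MAC units are far cheaper in silicon than floating-point ones,
# and the induced error is small for error-tolerant workloads.

def quantize(values, bits=8):
    """Map floats in [-1, 1] to signed integers with 2**(bits-1) - 1 levels."""
    scale = 2 ** (bits - 1) - 1          # 127 for 8-bit
    return [round(v * scale) for v in values], scale

def approx_dot(x, w, bits=8):
    """Dot product computed on quantized integer operands, then rescaled."""
    qx, sx = quantize(x, bits)
    qw, sw = quantize(w, bits)
    acc = sum(a * b for a, b in zip(qx, qw))  # integer MACs, cheap in hardware
    return acc / (sx * sw)                    # rescale back to float

x = [0.12, -0.5, 0.33, 0.9]
w = [0.4, 0.1, -0.7, 0.25]
exact = sum(a * b for a, b in zip(x, w))
approx = approx_dot(x, w)
print(f"exact={exact:.4f} approx={approx:.4f} error={abs(exact - approx):.4f}")
```

On this toy input the absolute error is well below 1% of the operand range, which is the kind of deviation many inference workloads absorb without a visible accuracy drop.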

For AI hardware accelerators, approximate computing can be implemented across multiple design layers: algorithmic techniques like model pruning and quantization, architectural approaches such as approximate memory hierarchies, and circuit-level methods including approximate multipliers and adders. Together, these strategies can substantially reduce both dynamic and static power while keeping application performance within acceptable error margins. As AI deployment scales from resource-constrained edge devices to large data centers, approximate computing has become an essential research direction for achieving sustainable and energy-efficient acceleration.
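As a toy model of the circuit-level methods mentioned above, the following sketch (hypothetical, standard library only) imitates a truncation-based approximate multiplier: zeroing the low-order bits of each operand corresponds, in hardware, to removing the partial-product rows those bits would generate, which saves area and switching power at the cost of a bounded error.

```python
# Hypothetical model of a truncation-based approximate multiplier.
# Clearing the k least-significant bits of each operand before multiplying
# models a circuit that omits the corresponding partial-product rows.

def truncated_mult(a, b, k=4):
    """Approximate unsigned multiply that ignores the low k bits of each input."""
    mask = ~((1 << k) - 1)       # clear the k least-significant bits
    return (a & mask) * (b & mask)

a, b = 3000, 2750
exact = a * b
approx = truncated_mult(a, b, k=4)
rel_err = abs(exact - approx) / exact
print(f"exact={exact} approx={approx} relative error={rel_err:.3%}")
```

The parameter k is the knob designers tune: larger k removes more partial-product hardware but widens the worst-case error, which is exactly the accuracy-efficiency trade-off the paragraph above refers to.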

Thesis objectives

The primary objective of this thesis is to investigate the design of energy-efficient AI accelerators that apply approximate computing techniques dynamically, and to advance hardware-software co-design methodologies.

The research will build upon recent advancements in efficient domain-specific architectures for AI. The goal is to develop novel approaches that balance performance, energy efficiency, and accuracy, while addressing the unique challenges of implementing approximate computing in real-world AI systems.

This research explores the principles and practical implications of approximate computing as a pathway toward more energy-efficient AI hardware accelerators. It examines how different forms of approximation affect computational efficiency, prediction accuracy, and overall system-level performance. Rather than treating these techniques in isolation, the study considers their combined impact across the computing stack, with particular attention to how accuracy-efficiency trade-offs can be characterized and controlled.

A central theme of the work is the integration of hardware and software perspectives through a co-design approach. By closely aligning algorithmic characteristics with architectural features, the research aims to uncover strategies for embedding approximation mechanisms directly into accelerator designs. Emphasis is placed on adaptive and context-aware approximation techniques that can dynamically balance energy savings and output quality, ensuring that efficiency gains do not compromise application-level requirements.
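The adaptive idea can be sketched as follows. In this illustrative model (all names are ours, not the project's), a controller tries the cheapest precision first and escalates only while an error monitor reports that the quality budget is exceeded; a real accelerator would replace the exact reference with a lightweight runtime quality estimate rather than computing the full-precision result.

```python
# Illustrative sketch of context-aware precision selection: use the cheapest
# quantization whose observed error fits a per-request quality budget.

def quantize_dot(x, w, bits):
    """Dot product with both operand vectors quantized to `bits`-bit integers."""
    scale = 2 ** (bits - 1) - 1
    qx = [round(v * scale) for v in x]
    qw = [round(v * scale) for v in w]
    return sum(a * b for a, b in zip(qx, qw)) / (scale * scale)

def adaptive_dot(x, w, budget):
    """Return the cheapest approximation whose error fits the quality budget."""
    exact = sum(a * b for a, b in zip(x, w))   # stand-in for a runtime monitor
    for bits in (4, 6, 8):                     # cheapest configuration first
        approx = quantize_dot(x, w, bits)
        if abs(approx - exact) <= budget:
            return approx, bits                # energy saved, quality preserved
    return exact, 32                           # fall back to full precision
```

Tightening or loosening `budget` per application, per layer, or even per input is one way to realize the dynamic energy-quality balancing that the thesis targets.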

To ground these ideas in practice, the research involves modeling, simulation, and experimental prototyping using representative AI workloads, including deep learning inference and computer vision applications. Through systematic evaluation and validation, the study aims to assess the feasibility, robustness, and scalability of proposed approaches, contributing insights into the design of next-generation energy-efficient AI systems.

Starting date:

01/09/2026

Funding type

Doctoral contract

Funding details

ANR REAxION ANR-25-CE25-5926

Host institution and laboratory

Centre de Recherche Inria Rennes

The Inria center at the University of Rennes is one of eight Inria centers and has more than thirty research teams. The Inria center is a major and recognized player in the field of digital sciences. It is at the heart of a rich ecosystem of R&D and innovation, including highly innovative SMEs, large industrial groups, competitiveness clusters, research and higher education institutions, centers of excellence, and technological research institutes.

Candidate profile

Required Skills

We seek highly motivated and passionate candidates. Autonomy is a highly appreciated quality.

Candidates should possess the following qualifications:

  • Strong HW design skills: VHDL/Verilog and the HW synthesis flow (design, simulation, synthesis, and deployment through commercial tools for FPGA or ASIC)
  • Strong foundation in computer architecture and systems design. Knowledge of neural-network accelerator architectures is a plus.
  • Strong SW programming/scripting skills: C/C++, Python, Linux scripting
  • Familiarity or experience with machine learning fundamentals and deep neural network development frameworks, e.g., PyTorch/TensorFlow
  • Experience with approximate computing techniques (e.g., functional approximation, mixed-precision arithmetic, pruning) is a significant plus.
  • Excellent analytical and problem-solving abilities, with an interest in optimizing for energy efficiency.
  • Strong communication skills to articulate research findings clearly and effectively.
  • Languages: proficiency in written English and fluency in spoken English are required.
  • Interpersonal skills: the candidate will work in a research team with regular meetings and must be able to present the progress of their work clearly and in detail.

Essential qualities for completing a PhD are feeling at ease in a dynamic scientific environment and a willingness to learn, listen, share, and work in the unknown. There is no clear and definite answer, and often no clear-cut notion of “right” or “wrong” until the scientific community has weighed in. Expect long, probing discussions with your advisor, lab-mates, conference audiences, reviewers, and peers who may challenge or disagree with you. Debate is part of the process.

Other appreciated qualities are open-mindedness, the ability to integrate into a team, and team spirit.

Candidates must have a Master’s degree (or equivalent) in Computer Engineering or related areas relevant to the PhD topic.

Talented final-year Master’s students may start as 6-month interns and continue as PhD researchers after graduation.

30/06/2026