R&D ML Security & Privacy

ABG-82435 - Junior position
28/01/2019 - Permanent contract (CDI) - 45-55 K€ gross per year
Disaitek
Home-based - Île-de-France - France
  • Computer science
  • Mathematics
  • Physics
AI, IA, ML, Deep learning, Security, Privacy, Machine learning
Research and Development

Employer

Disaitek has been awarded Young Innovative Enterprise status by the French administration.

There is growing recognition that machine learning (ML) exposes new vulnerabilities in software systems. Some of the threat vectors explored so far include training data poisoning, adversarial examples, and model extraction. Yet the technical community's understanding of the nature and extent of the resulting vulnerabilities remains limited. To remedy this problem, Disaitek is creating a machine learning security auditing offer.
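To make the adversarial example vector mentioned above concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) against a differentiable classifier; the model, inputs, and epsilon value are illustrative placeholders rather than anything specified in this offer.

    # Minimal FGSM sketch (assumes a PyTorch classifier that outputs logits; names are illustrative).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return a copy of the inputs perturbed to increase the classification loss."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step once in the direction of the sign of the input gradient,
        # then clip back to a valid pixel range.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Attacks of this kind are the baseline against which the defenses described later in the posting (adversarial sample detection, robustness evaluation) are typically measured.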

Disaitek specializes in the development of machine learning oriented software. We believe the current generation of software will be replaced by a new one built on AI foundations rather than on rules, and that a software stack will be built on top of it to provide services to end users.

Our first product addresses the lack of efficiency in task tracking and knowledge management. Both are essential to executing projects in line with their initial ambition (business case, deadline, and budget).

To address that challenge, we connect our ML system to every collaboration flow within a company (email, chat, Slack, Confluence, etc.). Capturing exchanges between humans raises issues of confidentiality and GDPR compliance.

Disaitek has therefore been concerned with the safety and reliability of ML from the beginning. We have been studying state-of-the-art machine learning attacks and defenses for more than two years (our R&D lead is a member of the Intel innovation program and gives talks on these topics). We are also learning by experimentation.

We are convinced there is a huge market ahead of us for machine learning audits, given the current explosion of ML applications and the financial risks associated with these vulnerabilities. The discipline is highly technical, and its experts will have great opportunities.

We are looking for a PhD interested in the challenge of building this offer with the R&D team and collaborating on missions to help customers secure their ML systems.

 

Disaitek partners

ELS (https://www.lefebvre-sarrut.eu/en)

Disaitek has signed a strong partnership with ELS, the second-largest European legal publisher.

This partnership allows us to develop and train our deep learning models on a huge volume of real-world data produced in five languages (English, French, Dutch, Spanish, and Italian). Beyond that, ELS will host the Disaitek prototype in order to collect valuable feedback from end users.

 

ENSIIE (https://www.ensiie.fr/)

ENSIIE and Disaitek have signed a research and development collaboration contract on the topics of machine learning security and privacy. ENSIIE is an engineering school associated with Université Paris-Saclay.

Position and missions

We are looking for a PhD interested in the challenge of building the ML security and privacy offer with the R&D team and helping customers make their ML systems robust.

 

The tasks will be the following:

  1. Attack implementation:
    • Adversarial samples,
    • Confidential information extraction,
    • Membership inference (a minimal sketch follows this list),
    • Model poisoning,
    • Model backdooring,
    • Model “reprogramming” using adversarial samples.
  2. Defense implementation and mitigation strategies:
    • Adversarial sample detection,
    • Model robustness evaluation,
    • Data extraction prevention (via differential privacy),
    • Data pipeline and model use context auditing.
  3. Bias in models:
    • Creating a bias and discriminatory behavior discovery methodology using model exploration algorithms and tools such as Google “Facets” and the Google “What-If Tool” (links in the references),
    • Implementation of bias mitigation algorithms (at the dataset or model level),
    • Ensuring conformity with the GDPR and the European Commission guidelines on AI use.
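As a concrete illustration of the membership inference item above, the following is a minimal sketch of a common loss-threshold baseline: an example whose loss under the target model is unusually low is guessed to be a member of the training set. The model, data, and threshold calibration are illustrative assumptions, not part of this posting.

    # Minimal loss-threshold membership inference sketch (illustrative names and data).
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def per_example_loss(model, x, y):
        """Cross-entropy loss of each example under the target model."""
        return F.cross_entropy(model(x), y, reduction="none").cpu().numpy()

    def membership_guess(model, x, y, threshold):
        """Guess 'was in the training set' for examples whose loss falls below the threshold."""
        return per_example_loss(model, x, y) < threshold

    # The threshold would be calibrated on data the auditor knows to be inside
    # (or outside) the training set, for example the average training loss.

Mitigations listed under item 2, such as training with differential privacy, aim precisely at shrinking the train/test gap this test exploits.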

 

In the context of these tasks, the candidate will be expected to publish findings at scientific conferences and in journals, and to contribute to existing open-source projects or create new ones.

Geographical mobility:

No travel

Start date:

04/03/2019

Profile


The candidate should have the following technical skills:

  1. Strong knowledge of classical machine learning techniques (statistical learning algorithms such as random forest, SVM, logistic regression, …),
  2. Strong deep learning knowledge (MLP, CNN, RNN, classical DL techniques for NLP, …),
  3. Mathematical background necessary to understand model bias and attack/defense methods,
  4. Strong programming experience (ideally with Python),
  5. Familiarity with at least one Python DL framework (PyTorch, TensorFlow, …).

