Research
His main research interest is Deep Learning for multilingual Natural Language Processing (NLP), applied to dialogue systems and information retrieval (now often grouped under the label AI, Artificial Intelligence). He focuses on responsible models across several areas of NLP: semantic content analysis and modeling, machine translation, and intent detection in large-scale, real-life applications.
Teaching
For years I have been teaching NLP courses to master's and engineering students, for instance:
- Polytech'Saclay - Information Extraction (since 2023)
- EPITA - International Program (2019-2022)
- Polytech Grenoble - MOSIG - International Program (2016)
... and many other computer science courses such as programming, databases, etc.
Biography
Christophe Servan has been AI Senior Principal Scientist at Intescia Group since 2024. Previously, he was NLP Research Scientist and Manager at Qwant from 2018 to 2024, and Deputy Director of the INRIA-Qwant Lab from 2019 to 2021. He has led ATALA, the French national association for NLP, since 2020.
Before joining Qwant, he was an applied researcher at SYSTRAN, working on domain adaptation for NMT (2016-2018); a member of the GETALP team at the University of Grenoble, developing deep learning approaches for SMT evaluation (2015-2016); a member of the Machine Learning for Document Access and Translation (MLDAT) group at the Xerox Research Centre Europe (XRCE) (2013-2015); a postdoctoral researcher in the Speech and Language Technology group at LIUM, the laboratory of the University of Le Mans, France; and a research engineer at the Vision & Content Engineering Laboratory (LVIC) at CEA-LIST (2009-2010).
He received a PhD in Computer Science from the University of Avignon, France in 2008.
Since February 2022, he has also been an external collaborator at the LISN-CNRS lab.
Latest news
- Since 2024, I'm AI Senior Principal Scientist at Intescia Group.
- The French and multilingual versions of the ALBERT model (FrALBERT & mALBERT) are available on HuggingFace, check this out: https://huggingface.co/cservan
- Pierre Lepagnol, Thomas Gerald, Sahar Ghannay, Christophe Servan, Sophie Rosset. Small Language Models are Good Too: An Empirical Study of Zero-Shot Classification. In The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 2024, Turin, Italy. [PDF]
- Christophe Servan, Sahar Ghannay, Sophie Rosset. mALBERT: Is a Compact Multilingual BERT Model Still Worth It?. In The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), May 2024, Turin, Italy. [PDF]
Contact
QWANT: https://www.qwant.com
Address:
QWANT
10 Boulevard Haussmann
75009 Paris, France
Phone: +33 (0)1 83 64 89 37
Email: c dot servan at qwant dot com
Last updated on 2023-02-22