PhD position in computational neuroscience and robotics: Deadline May 22nd

A PhD position is available at the Inria Bordeaux Sud-Ouest center and the
Institute of Neurodegenerative Disease in Bordeaux, France.

What:  PhD position in computational neuroscience and robotics
Where: Inria, Bordeaux, France
When:  October 2020 (3-year duration)
Who:   Xavier Hinaut & Frédéric Alexandre

Application deadline: May 22nd (22/05/2020)
How to apply: https://jobs.inria.fr/public/classic/en/offres/2020-02637

Title of the PhD topic
NewSpeak: Neuro-computational models of language comprehension and production
grounded in robots

Keywords
Recurrent Neural Network (RNN), Reservoir Computing, Developmental Language
Learning, Neuro-Robotics, Multimodal Language Grounding, Computational
Neuroscience, Reinforcement Learning

Candidate profile
–    Good background in maths and computer science;
–    A strong interest in neuroscience and the physiological processes underlying
learning;
–    Python programming experience with scientific libraries (NumPy/SciPy) or a
similar language (e.g. Matlab);
–    Experience in machine learning or data mining is preferred;
–    Independence and the ability to manage a project;
–    Good English reading and speaking skills.

Proposed research
We aim to embody models in robots that will developmentally ground language.
The grounding of semantics should come from the robot experiencing the world
through its interactions with humans and the physical world. The goals are (1)
to test hypotheses with biologically plausible language learning models with the
Nao robot, (2) to extend the current model with unsupervised training and by
reinforcement learning, and (3) to propose a new kind of Generative Adversarial
Networks (GANs) for developmental language learning conditioned by grounded
modalities such as vision.

In order to model how a sentence can be processed, word by word (Hinaut &
Dominey 2013) or even phoneme by phoneme (Hinaut 2018), recurrent (artificial)
neural networks such as Reservoir Computing offer interesting advantages, in
particular the possibility of comparing the dynamics of the model with data
from neuroscience experiments (EEG, fMRI, …). This paradigm allows learning
from few examples and offers negligible runtime for human-robot interactions.
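As a rough illustration of the Reservoir Computing paradigm described above,
here is a minimal echo state network in NumPy. The dimensions, inputs, and
target are toy values chosen for the sketch; this is not the model of Hinaut
& Dominey (2013).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy, hypothetical dimensions: 5 input units (one-hot "words"),
# 100 reservoir units, 3 readout units.
n_in, n_res = 5, 100

# Input and recurrent weights are random and stay fixed; only the
# linear readout is trained (the core idea of reservoir computing).
W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(inputs, leak=0.3):
    """Return the reservoir state after each input vector of a sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# A toy "sentence": a sequence of one-hot word vectors.
sentence = np.eye(n_in)[[0, 2, 1, 4]]
states = run_reservoir(sentence)

# Train a linear readout (least squares) mapping the final reservoir
# state to a target "meaning" vector, here for a single toy example.
target = np.array([1.0, 0.0, 0.0])
W_out, *_ = np.linalg.lstsq(states[-1:], target[None, :], rcond=None)
prediction = states[-1] @ W_out
```

In a real model the readout would be trained over many sentences and time
steps (e.g. by ridge regression), and the evolving reservoir states could be
compared with neural recordings as the sentence unfolds.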

The use of linguistic models with robots is not only useful for validating
the models in real conditions; it also allows other hypotheses to be tested,
notably on the grounding of language and the emergence of symbols. This
involves finding out how a learning agent can link and categorise physical
stimuli (vision, hearing, proprioception, etc.) to put them in correspondence
with symbols (Harnad 1990), or even make these symbols emerge from the stimuli
coming from its sensors (Taniguchi et al. 2016).

We aim for a robot to process language from morphemes to sentences, similarly
to a child, in order to better model how children acquire language. One aim is
to obtain symbolic representations that are compositions of multimodal
grounded representations. We will investigate how the newly developed language
model can learn to understand utterances by exploring which meanings the
morphemes, words, … can have, based on the robot's other modalities (e.g.
vision, proprioception). Starting from preliminary results (Juven & Hinaut
2020), we will first merge these representations with visual representations
from a pre-trained CNN (Convolutional Neural Network). Then, reinforcement
learning experiments will explore how the robot can learn the meaning of
sentences: first by performing random actions for any user utterance, then by
bootstrapping from the user's feedback. We will use a concrete corpus of
sentences based on actions a robot can perform (Hinaut & Twiefel 2019). We
will implement
several variants of language models: (1) extending the reservoir computing
model linked with a grounded CNN, (2) adapting such a model to the GAN
paradigm in order to couple language comprehension and production in a
self-learning generative mechanism (thus creating more biologically plausible
GANs), and (3) exploring unsupervised (cross-situational) learning and
reinforcement learning with these models. In parallel, we will adapt the
models' features and behaviours to those observed in language acquisition
experiments in psychology and to neural evidence from neuro-linguistic
studies. In particular, we will explore how the models could shed light on
developmental language impairments. We will run the models in simulated
humanoid robots and on a Nao robot.
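To illustrate the cross-situational learning idea mentioned above, here is a
deliberately simple sketch: a word's meaning is guessed from which perceived
object co-occurs with it most often across situations. The data and the
plain co-occurrence-counting strategy are hypothetical, not the project's
model.

```python
from collections import Counter, defaultdict

# Hypothetical toy data: each "situation" pairs an utterance (a list
# of words) with the objects the robot perceives at that moment.
situations = [
    (["grab", "ball"], ["ball", "table"]),
    (["push", "ball"], ["ball", "box"]),
    (["grab", "box"], ["box", "table"]),
]

# Count word/object co-occurrences across all situations.
cooccurrence = defaultdict(Counter)
for words, objects in situations:
    for word in words:
        cooccurrence[word].update(objects)

# Guess each word's referent as its most frequent co-occurring object.
meaning = {word: counts.most_common(1)[0][0]
           for word, counts in cooccurrence.items()}
print(meaning["ball"])  # prints: ball
```

Note that this toy learner wrongly maps the verb "grab" to "table" (its most
frequent co-occurring object), which hints at why action and feedback signals,
such as the reinforcement learning described above, are needed on top of pure
co-occurrence statistics.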

More information
More information is available on the application web page:
https://jobs.inria.fr/public/classic/en/offres/2020-02637
Questions can be asked by email to Xavier Hinaut (xavier.hinaut@inria.fr).

Xavier Hinaut
Inria Researcher (CR)
Mnemosyne team, Inria
LaBRI, Université de Bordeaux
Institut des Maladies Neurodégénératives
+33 5 33 51 48 01
www.xavierhinaut.com