In face-to-face interaction, people refer to objects and events not only by means of speech but also by means of gesture. The present paper describes the construction of a corpus of referential gestures. The aim is to investigate gestural reference by incorporating insights from semantic ontologies and by taking a more holistic view of referential gestures. The paper focuses on presenting the data collection procedure and discussing the corpus's design; additionally, first insights from constructing the annotation scheme are described.
The way we see the objects around us determines the speech and gestures we use to ...
The study presented in this paper is dedicated to the integration of pointing gestures with...
We are interested in multimodal systems that use the following modes and modalities: speech (and na...
Referring actions in multimodal situations can be thought of as linguistic expressions well coordi...
While there has been a wealth of research that uses textually rendered spoken corpora (i.e. written ...
The gesture input modality considered in multimodal dialogue systems is mainly reduced to pointing o...
Pointing gestures are pervasive in human referring actions, and are often combined with spoken descr...
When deictic gestures are produced on a touch screen, they can take forms which can lead to several ...
In this paper an algorithm for the generation of referring expressions in a multimodal setting is pr...
In this thesis, I address the problem of producing semantically appropriate gestures in embodied lan...