This paper deals with the pragmatic interpretation of multimodal referring expressions in man-machine dialogue systems. We show the importance of building a structure of the visual context at a semantic level, in order to enrich the range of possible interpretations and to enable the fusion of this structure with those obtained from the semantic analyses of speech and gesture. Visual salience and perceptual grouping are two notions that guide this structuring. We thus propose a hierarchy of salience criteria linked to an algorithm that detects salient objects, as well as guidelines for grouping algorithms. We show that integrating the results of all these algorithms is a complex problem. We propose simple he...
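The abstract above mentions a hierarchy of salience criteria, an algorithm that detects salient objects, and guidelines for grouping, without giving details here. The minimal Python sketch below shows one way such a weighted criteria hierarchy and a naive proximity-based grouping pass could look; the VisualObject fields, the criteria, the weights and the thresholds are illustrative assumptions, not the scheme actually proposed in the paper.

# Minimal sketch of a salience hierarchy and a proximity-based grouping pass.
# All fields, criteria and weights are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class VisualObject:
    ident: str
    x: float                  # centre coordinates in screen space
    y: float
    area: float               # surface in pixels
    color_contrast: float     # 0..1, contrast with the background
    recently_mentioned: bool = False

# Hierarchy of salience criteria, ordered from strongest to weakest;
# each entry pairs a criterion with an assumed weight.
CRITERIA: list[tuple[Callable[[VisualObject], float], float]] = [
    (lambda o: 1.0 if o.recently_mentioned else 0.0, 0.5),   # dialogue history
    (lambda o: o.color_contrast,                      0.3),   # physical contrast
    (lambda o: min(o.area / 10_000.0, 1.0),           0.2),   # relative size
]

def salience(obj: VisualObject) -> float:
    """Weighted sum over the criteria hierarchy (illustrative scoring)."""
    return sum(weight * criterion(obj) for criterion, weight in CRITERIA)

def salient_objects(scene: list[VisualObject], threshold: float = 0.5) -> list[VisualObject]:
    """Keep the objects whose score exceeds a fixed threshold."""
    return [o for o in scene if salience(o) >= threshold]

def proximity_groups(scene: list[VisualObject], max_dist: float = 80.0) -> list[list[VisualObject]]:
    """Very naive perceptual grouping: single-link clustering on centre distance."""
    groups: list[list[VisualObject]] = []
    for obj in scene:
        for group in groups:
            if any(((obj.x - o.x) ** 2 + (obj.y - o.y) ** 2) ** 0.5 <= max_dist for o in group):
                group.append(obj)
                break
        else:
            groups.append([obj])
    return groups

In this toy version the hierarchy is just an ordered list of (criterion, weight) pairs, so reordering or reweighting the list changes which objects are judged salient and which groups emerge.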
Following the ecological approach to visual perception, this paper presents a framework that emphasi...
The way we see the objects around us determines the speech and gestures we use to refer to them. The ges...
We are interested in multimodal systems that use the following modes and modalities: speech (and na...
In this chapter we present a first attempt to score the relevance of multimodal referring expression...
In this paper, an algorithm for the generation of referring expressions in a multimodal setting is pr...
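The abstract does not reproduce the generation algorithm itself, so the sketch below falls back on a textbook-style incremental algorithm (in the spirit of Dale and Reiter) extended with a deictic fallback: when the selected attributes do not single out the target, a pointing gesture is planned. The attribute set, the preference order and the gesture fallback are assumptions for illustration only.

# Sketch of a Dale & Reiter-style incremental algorithm, extended so that a
# pointing gesture is planned when attributes alone do not single out the target.
# Attribute names, preference order and the gesture fallback are assumptions.

PREFERRED_ATTRIBUTES = ["type", "color", "size"]   # assumed preference order

def generate_description(target: dict, distractors: list[dict]) -> tuple[dict, bool]:
    """Return (selected attributes, need_pointing_gesture)."""
    description = {}
    remaining = list(distractors)
    for attr in PREFERRED_ATTRIBUTES:
        value = target.get(attr)
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                     # the attribute has discriminatory power
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:                 # referent is uniquely identified
            return description, False
    # Attributes were not enough: fall back on a deictic gesture ("this N").
    return description, True

# Example: an identical second red cube forces a pointing gesture.
target = {"type": "cube", "color": "red", "size": "small"}
others = [{"type": "cube", "color": "red", "size": "small"},
          {"type": "ball", "color": "blue", "size": "large"}]
print(generate_description(target, others))   # ({'type': 'cube'}, True)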
Following the ecological approach to visual perception, this paper investigates multimodal referring...
Referring actions in multimodal situations can be thought of as linguistic expressions well coordi...
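As an illustration of such coordination on the interpretation side, the hypothetical sketch below fuses a demonstrative noun phrase with the pointing gesture closest in time, then keeps the objects of the requested category inside the pointed area. The field names, the time window and the uncertainty radius are assumptions, not a reconstruction of the fusion mechanism described in these papers.

# Illustrative fusion of a demonstrative noun phrase with a pointing gesture.
from dataclasses import dataclass

@dataclass
class Gesture:
    t: float          # timestamp in seconds
    x: float          # pointed position on screen
    y: float
    radius: float     # uncertainty radius of the pointing

@dataclass
class SceneObject:
    ident: str
    category: str
    x: float
    y: float

def resolve_deictic(np_category: str, np_time: float,
                    gestures: list[Gesture], scene: list[SceneObject],
                    window: float = 1.5) -> list[SceneObject]:
    """Candidate referents for 'this <category>' uttered at np_time."""
    in_window = [g for g in gestures if abs(g.t - np_time) <= window]
    if not in_window:
        # No accompanying gesture: fall back on the linguistic constraint alone.
        return [o for o in scene if o.category == np_category]
    g = min(in_window, key=lambda g: abs(g.t - np_time))
    return [o for o in scene
            if o.category == np_category
            and ((o.x - g.x) ** 2 + (o.y - g.y) ** 2) ** 0.5 <= g.radius]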
We discuss ongoing work investigating how humans interact with multimodal systems, focusing on how s...
For automatic comprehension or generation of referring expressions, Relevance Theor...
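Relevance Theory ties relevance to a balance between contextual effect and processing effort. The toy scoring function below follows that intuition only: effect is approximated by how much the expression narrows the set of candidate referents, effort by the length of the expression and the presence of a gesture. These features and costs are assumptions made for the sketch, not the scoring proposed in the paper.

# Illustrative relevance score in the spirit of Relevance Theory:
# relevance increases with contextual effect and decreases with effort.

def processing_effort(expression: dict) -> float:
    """Rough effort estimate: longer descriptions and gestures cost more."""
    effort = 1.0 + 0.5 * expression.get("n_words", 0)
    if expression.get("uses_gesture", False):
        effort += 1.0
    return effort

def contextual_effect(candidates_before: int, candidates_after: int) -> float:
    """Effect grows with how much the expression narrows down the referents."""
    if candidates_after == 0:
        return 0.0                    # the expression rules everything out
    return candidates_before / candidates_after

def relevance(expression: dict, before: int, after: int) -> float:
    return contextual_effect(before, after) / processing_effort(expression)

# "this red cube" plus a pointing gesture narrows 6 candidates down to 1.
print(relevance({"n_words": 3, "uses_gesture": True}, before=6, after=1))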