The paper presents two different multimodal interfaces based on automatic recognition and interpretation of speech and of gestures of the user’s head and hands, developed within the framework of the SIMILAR European Network of Excellence. The architectures of the ICANDO and MOWGLI multimodal interfaces, modality recognition, information synchronization and fusion, as well as a qualitative comparison and a quantitative evaluation using Fitts’ law experiments, are described. The comparison of the two contactless interfaces shows that, despite the differences in the computer vision and ASR techniques applied, they provide similar performance in contactless human-computer interaction.
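The Fitts’ law evaluation mentioned above is not detailed in this abstract; as a point of reference, pointing-device studies commonly use the Shannon formulation of the index of difficulty and derive a throughput from it. The sketch below shows that standard computation only; the specific distances, target widths, and timings used in the ICANDO/MOWGLI experiments are not given here, so the example values are purely illustrative.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1), where D is the distance to the target
    and W is the target width (same units)."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits per second: ID divided by movement time (seconds)."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative values (not from the paper): a 512 px movement
# to a 32 px target, completed in 1.2 s.
id_bits = index_of_difficulty(512, 32)  # log2(17) ≈ 4.09 bits
tp = throughput(512, 32, 1.2)           # ≈ 3.41 bits/s
```

Comparing two contactless interfaces then reduces to comparing their mean throughputs over the same set of distance/width conditions.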
The interpretation process of complex data sets makes the integration of effective interaction techn...
This paper presents some recent developments at DISTInfoMus Lab on multimodal and cross-modal proces...
This volume brings together, through a peer-review process, the advanced research results obtained...
Publication in the conference proceedings of EUSIPCO, Lausanne, Switzerland, 200
Abstract. The development of interfaces has been a technology-driven process. However, the newly dev...
Conference paper with proceedings and peer review. The paper presents a common theoretical framework for expla...
Abstract. A growing body of research shows several advantages to multimodal interfaces including i...
With such a rapid advancement in powerful mobile devices and sensors in recent years, inclusion of m...
The modalities of speech and gesture have different strengths and weaknesses, but combined they crea...
Multi-modal interfaces can achieve more natural and effective human-computer interaction by integrat...
The use of multiple modes of user input to interact with computers and devices is an active area of ...
Multi-Modal Interface Systems (MMIS) have proliferated in the last few decades, since they provide ...
Multimodal interfaces are the emerging technology that offers expressive, transparent, efficient, ro...