Our goal is to build conversational agents that combine information from speech, gesture, handwriting, text and presentations to create an understanding of the ongoing conversation (e.g. by identifying the action items agreed upon), and that can make useful contributions to the meeting based on such an understanding (e.g. by confirming the details of the action items). To create a corpus of relevant data, we have implemented the Carnegie Mellon Meeting Recorder to capture detailed multi-modal recordings of meetings. This software differs somewhat from other meeting room architectures in that it focuses on instrumenting the individual rather than the room and assumes that the meeting space is not fixed in advance. Thus, most of the sensors ...
At ATR, we are collecting and analysing ‘meetings’ data using a table-top sensor device consisting ...
The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. It i...
Face-to-face meetings usually encompass several modalities including speech, gesture, handwriting, a...
Modern advances in multimedia and storage technologies have led to huge archives of human conversati...
Meetings play an important role in everyday life. Meeting minutes can serve as a summary of a meetin...
We investigate approaches to accessing information from the streams of audio data that result from m...
multimodal meeting assistants. In this chapter, we will show how human-computer interaction (HCI) ca...
This paper is about interpreting human communication in meetings using audio, video and other signal...
To support multi-disciplinary research in the AMI (Augmented Multi-party Interaction) project, a 100...