The aim of our research is to create a system whereby human members of a team can collaborate naturally with robots. In this paper we describe a Wizard of Oz (WOZ) study conducted to identify the natural speech and gestures people use when interacting with a mobile robot as a team member. The results show that participants initially used simple speech, but once they learned that the system understood more complicated speech, they began to use more spatially descriptive language. User responses indicate that gestures aided spatial communication, and the input mode combining speech and gestures was rated best. We first discuss previous work and detail how our study contributes to this body of ...
Salem M, Rohlfing K, Kopp S, Joublin F. A Friendly Gesture: Investigating the Effect of Multi-Modal ...
We present a within-subjects user study to compare robot teleoperation schemes based on traditional ...
We are designing and implementing a multi-modal interface to a team of dynamically autonomous robot...
The U.S. Army Research Laboratory (ARL) Autonomous Systems Enterprise has a vision for the future of...
When we begin to build and interact with machines or robots that either look like humans or have hum...
Multimodal communication between humans and autonomous robots is essential to enhance effectiveness ...
Abstract—Gestures support and enrich speech across various forms of communication. Effective use of ...
Invited paper. We have created an infrastructure that allows a human to collaborate in a natural manne...
This paper presents the first step in designing a speech-enabled robot that is capable of natural ma...
© 2016 IEEE. This paper investigates the problem of how humans understand and control human-robot co...
As humans, we are good at conveying intentions, working on shared goals. We coordinate actions, and ...