This article introduces the approach, the developed models, and ongoing research issues on expressiveness and physical interaction in interactive systems, with a special focus on human-robot communication. Research issues and related concrete examples from the interactive multimedia performance “L’Ala dei Sensi” (literally, “The Wing of the Senses”), held in Ferrara (Italy) in November 1999, are presented. “L’Ala dei Sensi” consists of different episodes, a subset of which include one or more dancers interacting in real time with a robot on stage. This interaction can influence and generate computer-generated music and video. Experiments have been conducted on the concept of musical and visual “c...
You Move You Interact (YMYI) is an interactive installation, in which a user engages in a body langu...
This demonstration paper describes the conception, design and implementation of a hardware/software ...
Abstract—In this paper we propose a general active audition framework for auditory-driven Human-Robo...
Abstract. This paper presents ongoing research on the modelling of expressive gesture in multimodal ...
Paper Session 3: AMH (Art/Theory/Embodiment) ISBN: 978-1-4673-4663-4, eISBN: 978-1-4673-4664-1. Internati...
In a robot theater, developing high-quality motions for a humanoid robot requires significant time a...
In this paper we describe ongoing work, which explores the physicality of human-computer interaction...
This paper considers the impact of visual art and performance on robotics and human-computer interac...
This paper describes a robotic system that uses dance as a form of social interaction to explore the...
This paper pre...
Abstract — Programming a humanoid robot to dance to live music is a complex task requiring contributi...
In the last twenty years, robotics has been applied in many heterogeneous contexts. Among them, the...
The augmented ballet project aims at gathering research from several fields and directing them towar...
For several decades, research at Waseda University has focused on developing anthropomor...
Abstract — We are currently investigating the use of rhythm and synchrony in human-robot interaction...