This work presents an interaction system for virtual environments that allows users to control the platform without a mouse or keyboard. Head movements and voice commands enable navigation and control of a virtual human (avatar), providing better human-computer integration. The control system, called SOMI (Sounds and Motion Interface), is based on speech recognition using artificial neural networks (ANNs): once the ANNs are trained for different users, each vocal command can be mapped to an avatar command, enabling control by voice. Head movements are recognised by an infrared (IR) head-tracking system, in which an IR camera detects the position of IR LEDs positioned ...
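As a rough illustration of the voice-command pipeline the abstract describes, the following sketch maps an ANN's output scores for a spoken utterance to an avatar command. The command names, the softmax step, and the confidence threshold are all illustrative assumptions, not details from the paper.

```python
import math

# Illustrative avatar commands (hypothetical; not taken from the paper).
COMMANDS = ["walk_forward", "turn_left", "turn_right", "stop"]

def softmax(scores):
    """Convert raw ANN output scores into class probabilities."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def to_avatar_command(ann_scores, threshold=0.5):
    """Map the ANN's output vector for one utterance to an avatar command.

    Returns None when no class is confident enough, so ambiguous
    speech does not move the avatar.
    """
    probs = softmax(ann_scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return COMMANDS[best] if probs[best] >= threshold else None

# Example: scores strongly favouring class 1 yield "turn_left".
print(to_avatar_command([0.2, 3.1, 0.5, 0.1]))
```

A rejection threshold of this kind is a common design choice in command-and-control speech interfaces, since acting on a low-confidence classification is usually worse than ignoring the utterance.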