Hand detection is one of the most explored areas in Egocentric Vision video analysis for wearable devices. Current methods focus on pixel-by-pixel hand segmentation, with the implicit assumption that hands are present in almost every activity. However, this assumption does not hold in many applications for wearable cameras. Ignoring this fact can degrade the overall performance of the device, since hand measurements are usually the starting point for higher-level inference, and can lead to inefficient use of computational resources and battery power. In this paper we propose a two-level sequential classifier, in which the first level, a hand-detector, deals with the possible presence of hands from a global perspective, and the second level, a hand-s...