The Microsoft SenseCam is a small multi-sensor camera worn around the user's neck. It was designed primarily for lifelog recording. At present, the SenseCam passively records up to 3,000 images per day, as well as logging data from several on-board sensors. The sheer volume of image and sensor data captured by the SenseCam creates a number of challenges in segmenting whole-day recordings into events and in searching for events. In this paper, we use content and contextual information to aid the automatic event segmentation of a user's SenseCam images. We also propose and evaluate a number of novel techniques that use Bluetooth and GPS context data to accurately locate and retrieve similar events within a user's lifelog photo set.