We present a new algorithm for the generation of multimodal referring expressions (combining language and deictic gestures). The approach differs from earlier work in that we allow for various gradations of preciseness in pointing, ranging from unambiguous to vague pointing gestures. The model predicts that the linguistic properties realized in the generated expression are co-dependent on the kind of pointing gesture included. The decision to point is based on a trade-off between the costs of pointing and the costs of linguistic properties, where both kinds of costs are computed in empirically motivated ways. The model has been implemented using a graph-based generation algorithm.
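To make the cost trade-off concrete, the following is a minimal Python sketch of the underlying idea, not the paper's actual graph-based algorithm: the target, distractors, property costs, gesture costs, and the distractors each gesture rules out are all invented for illustration. It brute-forces the cheapest combination of pointing gesture and linguistic properties that still distinguishes the target.

```python
from itertools import combinations

# Illustrative domain (all objects, costs, and gesture effects below are
# invented assumptions, not values from the paper): a target object and
# two distractors, each described by a set of properties.
TARGET = {"type:block", "color:red", "size:large"}
DISTRACTORS = [
    {"type:block", "color:red", "size:small"},
    {"type:ball", "color:red", "size:large"},
]

# More precise pointing costs more; each linguistic property also has a
# cost (empirically motivated in the paper, simply made up here).
GESTURE_COST = {None: 0.0, "vague": 1.0, "precise": 4.0}
PROPERTY_COST = {"type:block": 0.5, "color:red": 1.0, "size:large": 2.0}

# Which distractors (by index) each gesture rules out on its own: a
# precise gesture singles out the target, a vague one only narrows
# the candidate set, and omitting the gesture rules out nothing.
RULED_OUT = {None: set(), "vague": {0}, "precise": {0, 1}}

def distinguishes(props, gesture):
    """True if the chosen properties plus gesture leave no distractor."""
    remaining = [d for i, d in enumerate(DISTRACTORS)
                 if i not in RULED_OUT[gesture]]
    # A distractor "survives" if it matches every chosen property.
    return all(not props <= d for d in remaining)

def cheapest_expression():
    """Brute-force the cheapest distinguishing gesture/property mix
    (a stand-in for the paper's graph-based subgraph search)."""
    best = None
    for gesture, g_cost in GESTURE_COST.items():
        for k in range(len(TARGET) + 1):
            for props in map(set, combinations(sorted(TARGET), k)):
                if not distinguishes(props, gesture):
                    continue
                cost = g_cost + sum(PROPERTY_COST[p] for p in props)
                if best is None or cost < best[0]:
                    best = (cost, gesture, sorted(props))
    return best

if __name__ == "__main__":
    cost, gesture, props = cheapest_expression()
    print(f"gesture={gesture}, properties={props}, cost={cost}")
    # -> gesture=vague, properties=['type:block'], cost=1.5
```

In this toy setup the cheapest solution is a vague point combined with the type property ("that block"), which illustrates the co-dependence the abstract describes: including a pointing gesture, and how precise it is, changes which linguistic properties get realized.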