When genetic programming (GP) or other techniques are used to approximate an unknown function, the principle of Occam's razor is often applied: find the simplest function that explains the given data, on the assumption that it is the best approximation of the unknown function. Using a well-known result from learning theory, this paper shows how Occam's razor can help GP find functions such that the number of functions differing from the unknown function by more than a certain degree can be bounded theoretically. Experiments show how these bounds, although much too conservative, can be used to obtain guaranteed quality assurances in practical applications.
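The learning-theoretic result alluded to is presumably of the following standard Occam's-razor form from PAC learning (the exact statement used in the paper may differ): for a finite hypothesis class $H$ and $m$ i.i.d. training examples,

$$\Pr\bigl[\exists\, h \in H:\ \mathrm{err}(h) > \varepsilon \ \text{and}\ h \text{ is consistent with all } m \text{ examples}\bigr] \;\le\; |H|\,(1-\varepsilon)^m ,$$

so requiring the right-hand side to be at most $\delta$ yields the sample-size condition $m \ge \tfrac{1}{\varepsilon}\bigl(\ln|H| + \ln\tfrac{1}{\delta}\bigr)$. Preferring simple functions keeps the effective $|H|$ small, which is what makes such bounds applicable to GP.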