Both stochastic learning automata and genetic algorithms have previously been shown to have valuable global optimization properties. Learning automata have, however, been criticized for their perceived slow rate of convergence. In this paper these two techniques are combined to increase the rate of convergence of the learning automata and to improve escape from local minima. The technique separates the genotype and phenotype properties of the genetic algorithm and has the advantage that the degree of convergence can be quickly ascertained. It also provides the genetic algorithm with a stopping rule and enables bounds to be placed on the parameter values obtained.
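To make the learning-automaton side of the abstract concrete, the following is a minimal illustrative sketch, not the paper's hybrid algorithm: a pursuit-style learning automaton (a standard scheme from the LA literature) minimizing a toy cost function. The action-probability vector `p` plays the role described in the abstract: its largest entry directly measures the degree of convergence, which yields a natural stopping rule. The function names, parameters, and the toy objective are all assumptions made for illustration.

```python
import random

def pursuit_minimize(f, actions, a=0.05, threshold=0.95, max_steps=5000, seed=0):
    """Pursuit-style learning automaton (illustrative sketch only;
    not the paper's GA/LA hybrid). Assumes f(x) >= 0 for all actions,
    so the optimistic initial estimate of -1 forces each action to be
    tried at least once before the probabilities can collapse."""
    rng = random.Random(seed)
    n = len(actions)
    p = [1.0 / n] * n          # action-probability vector (the "degree of
                               # convergence" is read directly off max(p))
    est = [-1.0] * n           # optimistic cost estimates drive exploration
    for _ in range(max_steps):
        i = rng.choices(range(n), weights=p)[0]   # sample an action from p
        est[i] = f(actions[i])                    # deterministic f: exact
        m = min(range(n), key=lambda j: est[j])   # current best estimate
        # Pursuit update: shift probability mass toward action m.
        p = [(1 - a) * pj + (a if j == m else 0.0) for j, pj in enumerate(p)]
        if max(p) > threshold:                    # stopping rule from p
            break
    return actions[max(range(n), key=p.__getitem__)]
```

For example, minimizing `f(x) = (x - 3)**2` over the integer actions 0..9 converges on `x = 3`, with the loop terminating as soon as one entry of `p` exceeds the threshold rather than after a fixed number of generations.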