We consider zero-sum stochastic games under the limiting average reward criterion. We show that the existence of optimal strategies of any kind implies the existence of stationary epsilon-optimal strategies, for every epsilon > 0, as well as the existence of Markov optimal strategies. We present a construction for these strategies that does not even require knowledge of the given optimal strategies. Furthermore, an example demonstrates that the existence of optimal strategies does not imply the existence of stationary optimal strategies, so the result is sharp. More generally, one can evaluate a strategy pi of the maximizing player, player 1, by the reward phi(s)(pi) that pi guarantees to him when play starts in state s. A strategy pi is called nonimproving if phi(s)(pi) greater than or equal t...