Connections between function approximation and classes of functional optimization problems, whose admissible solutions may depend on a large number of variables, are investigated. The insights obtained in this context are exploited to analyze families of nonlinear approximation schemes that contain tunable parameters and enjoy the following property: when they are used to approximate the (unknown) solutions to optimization problems, the number of parameters required to guarantee a desired accuracy grows at most polynomially with respect to the number of variables in the admissible solutions. Both sigmoidal neural networks and networks with kernel units are considered as approximation structures to which the analysis applies. Finally, it is shown ...
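For concreteness, a standard instance of the approximation structures mentioned above (the abstract itself does not spell them out, so the notation here is illustrative) is the one-hidden-layer sigmoidal network

\[
f_n(x) \;=\; \sum_{i=1}^{n} c_i \,\sigma\!\big(\langle a_i, x\rangle + b_i\big), \qquad x \in \mathbb{R}^d,
\]

with tunable parameters $c_i \in \mathbb{R}$, $a_i \in \mathbb{R}^d$, $b_i \in \mathbb{R}$ and a sigmoidal activation $\sigma$, so that $n$ units involve $n(d+2)$ tunable parameters; the kernel counterpart replaces each unit by $K(x, \tau_i)$ for a kernel $K$ with tunable centers $\tau_i$. In this notation, the polynomial-growth property asserts that the number $n$ of units needed to approximate an admissible solution within accuracy $\varepsilon$ grows at most polynomially in the number of variables $d$.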