Abstract

A new method for unconstrained optimization in R^n is presented. The method reduces the dimension of the problem in such a way that it leads to an iterative approximate formula for computing (n − 1) components of the optimum, while the remaining component is computed separately using the final approximations of the other components. It converges quadratically to a local optimum and requires storage of order (n − 1) × (n − 1). In addition, it does not require a good initial guess for one component of the optimum and it does not directly perform gradient evaluations; thus it can be applied to problems with imprecise gradient values. Moreover, a procedure for transforming the matrix formed by our method into a symmetric as wel...
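The abstract does not state the iterative formula itself, so the following is only a rough illustrative sketch of the general dimension-reducing idea described above, not the paper's method: the last component x_n is treated as a dependent variable determined from the other (n − 1) components, and the outer iteration updates only those (n − 1) components, with x_n recovered at the end from their final approximations. The use of Python with SciPy, the nested inner solve, the finite-difference gradients, the bounds on x_n, and the quadratic test function are all assumptions introduced for illustration; in particular, this sketch does not reproduce the quadratic convergence or the (n − 1) × (n − 1) storage bound claimed for the actual method.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def dimension_reducing_sketch(f, y0, xn_bounds=(-10.0, 10.0)):
    """Illustrative only: minimize f(x_1, ..., x_n) by treating x_n as a
    dependent variable of the remaining (n - 1) components (hypothetical
    helper, not the paper's iterative formula)."""
    def reduced(y):
        # Inner solve: for fixed y = (x_1, ..., x_{n-1}),
        # pick x_n minimizing f(y, x_n).
        inner = minimize_scalar(lambda t: f(np.append(y, t)),
                                bounds=xn_bounds, method="bounded")
        return inner.fun

    # Outer iteration on the (n - 1) remaining components; with no analytic
    # gradient supplied, SciPy's BFGS falls back to finite differences.
    outer = minimize(reduced, y0, method="BFGS")
    y_star = outer.x

    # Recover the last component from the final approximations of the others.
    xn_star = minimize_scalar(lambda t: f(np.append(y_star, t)),
                              bounds=xn_bounds, method="bounded").x
    return np.append(y_star, xn_star)

if __name__ == "__main__":
    # Hypothetical separable quadratic in R^3 with optimum (1, -0.5, 2).
    f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + (x[2] - 2.0) ** 2
    print(dimension_reducing_sketch(f, y0=np.array([0.0, 0.0])))
```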