Boosting is a learning scheme that combines weak learners to produce a strong composite learner, the underlying intuition being that an accurate learner can be obtained by combining "rough" ones. This paper develops a new boosting strategy, called rescaled boosting (RBoosting), to accelerate the numerical convergence rate and, consequently, improve the learning performance of the original boosting. Our studies show that RBoosting possesses an almost optimal numerical convergence rate in the sense that, up to a logarithmic factor, it attains the minimax nonlinear approximation rate. We then use RBoosting to tackle classification problems and derive the corresponding statistical consistency and tight generalization error estimates. A series ...
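
As a schematic illustration of the rescaling idea (the display below is a sketch; the symbols $f_k$, $g_k$, $\beta_k$, and $\theta_k$ are illustrative and not taken from this excerpt), classical boosting greedily augments the current composite learner,
\[
f_{k+1} = f_k + \beta_k g_k,
\]
where $g_k$ is the weak learner selected at step $k$ and $\beta_k$ its step size, whereas a rescaled variant first shrinks the current composite before the greedy step,
\[
f_{k+1} = (1 - \theta_k)\, f_k + \beta_k g_k, \qquad \theta_k \in (0,1),
\]
a modification that, for suitable choices of the rescaling parameter $\theta_k$, is intended to accelerate the numerical convergence of the greedy iteration.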