Random forests were introduced by Leo Breiman (2001) as a new learning algorithm that extends decision trees by aggregating and randomising them. We explored the effects of introducing noise and irrelevant variables into the training set on the learning curve of a random forest classifier, and compared the results to those of a classical decision tree algorithm inspired by Breiman's CART (1984). The study was carried out by simulating 23 artificial binary concepts spanning a wide range of complexity and dimension (4 to 10 relevant variables), and adding different rates of noise and irrelevant variables to learning samples of various sizes (50 to 5000 examples). It appeared that random forests and individual decision ...
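As a rough illustration of this kind of experimental protocol (not the paper's original setup), the sketch below compares the learning curves of a random forest and a single CART-style tree on a synthetic binary concept, with irrelevant variables and class-label noise injected into the training set. The concept, noise rate, number of irrelevant variables, and sample sizes are placeholder choices.

```python
# Illustrative sketch: random forest vs. single decision tree under label noise
# and irrelevant variables. All numeric settings here are assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_data(n, n_relevant=5, n_irrelevant=10, noise_rate=0.1):
    """Binary concept defined on the relevant variables, padded with irrelevant ones."""
    X = rng.uniform(-1, 1, size=(n, n_relevant + n_irrelevant))
    y = (X[:, :n_relevant].sum(axis=1) > 0).astype(int)   # simple linear concept
    flip = rng.random(n) < noise_rate                      # class-label noise
    y[flip] = 1 - y[flip]
    return X, y

X_test, y_test = make_data(10_000, noise_rate=0.0)          # noise-free test set

for n_train in (50, 200, 1000, 5000):
    X_train, y_train = make_data(n_train, noise_rate=0.1)
    tree = DecisionTreeClassifier().fit(X_train, y_train)
    forest = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    print(f"n={n_train:5d}  tree={tree.score(X_test, y_test):.3f}  "
          f"forest={forest.score(X_test, y_test):.3f}")
```

Plotting test accuracy against training-set size for each noise and irrelevant-variable rate yields the kind of learning curves the study compares.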