Recursive neural networks are a connectionist model recently introduced for processing graphs. Linear recursive networks are a special subclass in which the neurons have linear activation functions. The approximation properties of recursive networks are tightly connected to their ability to distinguish patterns, i.e., to generate a different internal encoding for each input in the domain. In this paper, it is shown that, although linear recursive networks can distinguish the trees of any finite set, doing so requires a prohibitive amount of memory. However, it is also proved that the problem disappears when the domain is restricted to trees belonging to special subclasses.
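
For concreteness, the computation carried out by a linear recursive network on a tree can be sketched as follows; the notation ($x_v$, $u_v$, $A_i$, $B$, maximum outdegree $o$) is a generic formulation assumed here for illustration and may differ from the one adopted in the body of the paper:
\[
x_v = \sum_{i=1}^{o} A_i\, x_{\mathrm{ch}_i(v)} + B\, u_v,
\qquad x_{\mathrm{nil}} = 0,
\]
where $x_v \in \mathbb{R}^n$ is the state (internal encoding) of node $v$, $u_v$ is its label, $\mathrm{ch}_i(v)$ denotes the $i$-th child of $v$ (with $x_{\mathrm{nil}}$ used for missing children), and $A_i$, $B$ are the network weight matrices. The encoding of a whole tree is the state computed at its root, so distinguishing the trees of a set amounts to requiring that distinct trees yield distinct root states.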