It is widely accepted that the computational capability of recurrent neural networks (RNNs) is maximized on the so-called "edge of criticality." Once the network operates in this configuration, it performs efficiently on a given application in terms of both 1) low prediction error and 2) high short-term memory capacity. Since the behavior of recurrent networks is strongly influenced by the particular input signal driving the dynamics, a universal, application-independent method for determining the edge of criticality is still missing. In this paper, we aim to address this issue by proposing a theoretically motivated, uns...
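To make the notions of "edge of criticality" and "short-term memory capacity" concrete, the following is a minimal, illustrative sketch, not the method proposed in the paper: it assumes a random echo-state-style reservoir, sweeps the spectral radius of the recurrent weight matrix (a common proxy for the distance from criticality), and estimates short-term memory capacity by training ridge-regression readouts to reconstruct delayed copies of an i.i.d. input. The function name `memory_capacity` and all parameter values are illustrative choices, not taken from the original work.

```python
import numpy as np

rng = np.random.default_rng(0)

def memory_capacity(spectral_radius, n_units=100, n_steps=2000,
                    washout=200, max_delay=30, ridge=1e-6):
    """Estimate short-term memory capacity of a random reservoir (hypothetical setup)."""
    # Random recurrent weights, rescaled to the desired spectral radius.
    W = rng.standard_normal((n_units, n_units))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-0.5, 0.5, size=n_units)

    # Drive the reservoir with an i.i.d. input signal and collect states.
    u = rng.uniform(-1.0, 1.0, size=n_steps)
    x = np.zeros(n_units)
    states = np.zeros((n_steps, n_units))
    for t in range(n_steps):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x

    # Memory capacity: sum over delays of the squared correlation between
    # the readout reconstruction of u(t - k) and the true delayed input.
    mc = 0.0
    X = states[washout:, :]
    for k in range(1, max_delay + 1):
        y = u[washout - k: n_steps - k]
        w = np.linalg.solve(X.T @ X + ridge * np.eye(n_units), X.T @ y)
        y_hat = X @ w
        r = np.corrcoef(y_hat, y)[0, 1]
        mc += r ** 2
    return mc

for rho in [0.5, 0.9, 0.99, 1.1]:
    print(f"spectral radius {rho:4.2f}: memory capacity ~ {memory_capacity(rho):.2f}")
```

Under these assumptions, memory capacity typically grows as the spectral radius approaches 1 and degrades once the reservoir dynamics become unstable, which is the qualitative behavior the edge-of-criticality argument refers to.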