In this thesis we investigate the problem of optimal control in the domain of digital controllers. Digital controllers rely only on observations of the state taken at the sampling instants. In Chapter 1 we recall the basic terminology of system theory. We define linear systems, which are the only ones investigated in this work, and we introduce the standard quadratic cost of a control input. Finally, we recall an existence theorem for the problem of finding the input that minimizes the cost. In Chapter 2 we recall the Riccati differential equation, which provides the solution to the cost-minimization problem for continuous-time control systems. We compute the explicit solution for the simple case of a uni-dimensional sta...