Many problems in computer science and applied mathematics require rounding a vector $w$ of fractional values lying in the interval $[0,1]$ to a binary vector $x$ so that, for a given matrix $A$, $Ax$ is as close to $Aw$ as possible. For example, this problem arises in LP rounding algorithms used to approximate NP-hard optimization problems and in the design of uniformly distributed point sets for numerical integration. For a given matrix $A$, the worst-case error, over all choices of $w$, incurred by the best possible rounding is measured by the linear discrepancy of $A$, a quantity studied in discrepancy theory and introduced by Lovász, Spencer, and Vesztergombi (EJC, 1986). We initiate the study of the computational complexity of linear discrepancy. Our investigation...
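
The abstract describes linear discrepancy verbally; as a sketch of the underlying definition, one standard formalization is the following (the choice of the $\ell_\infty$ norm is the usual convention in discrepancy theory, stated here as an assumption since the abstract does not fix a norm):

\[
  \operatorname{lindisc}(A) \;=\; \max_{w \in [0,1]^n} \; \min_{x \in \{0,1\}^n} \; \bigl\| A(w - x) \bigr\|_\infty .
\]

For instance, for the $1 \times 1$ matrix $A = (1)$, the inner minimum equals $\min(w,\, 1-w)$, which is maximized at $w = 1/2$, giving $\operatorname{lindisc}(A) = 1/2$.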