Automatic evaluation of machine translation (MT) rests on the assumption that MT output is better the more closely it resembles human translation (HT). While automatic metrics built on this similarity assumption enable fast, large-scale evaluation of MT progress and are therefore widely used, they have certain limitations. One is that they cannot recognise acceptable differences between MT and HT. Such differences frequently arise from translation shifts: optional departures from theoretical formal correspondence between source- and target-language units, made to adapt the text to the norms and conventions of the target language. This work is based on the author’s own translatio...