A multi-objective multi-armed bandit (MOMAB) problem is a sequential decision process with stochastic reward vectors. We extend the knowledge gradient (KG) policy to the MOMAB problem and propose two algorithms, Pareto-KG and scalarized-KG. Pareto-KG trades off exploration and exploitation by combining the KG policy with Pareto dominance relations. Scalarized-KG uses a linear or non-linear scalarization function to convert the MOMAB problem into a single-objective multi-armed bandit problem and applies the KG policy to trade off exploration and exploitation. To measure the performance of the proposed algorithms, we introduce three regret measures. We empirically compare the performance of the KG policy with that of the UCB1 policy on a ...