We consider a classical finite-horizon optimal control problem for continuous-time pure jump Markov processes described by means of a rate transition measure depending on a control parameter and controlled by a feedback law. For this class of problems the value function can often be characterized as the unique solution to the corresponding Hamilton–Jacobi–Bellman equation. We prove a probabilistic representation for the value function, known as a nonlinear Feynman–Kac formula. It relates the value function to a backward stochastic differential equation (BSDE) driven by a random measure and with a sign constraint on its martingale part. We also prove existence and uniqueness results for this class of constrained BSDEs. The connection o...
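
For orientation, here is a schematic version of the objects mentioned above; the notation ($E$, $A$, $\lambda$, $f$, $g$, $X$, $I$, $q$, $K$) is illustrative and not necessarily the paper's. With state space $E$, control space $A$, controlled rate transition measure $\lambda(x,a,dy)$, running cost $f$ and terminal cost $g$, a Hamilton–Jacobi–Bellman equation for the value function $v$ in this pure jump setting typically takes the integro-differential form
\[
\partial_t v(t,x) + \sup_{a \in A} \Big( \int_E \big( v(t,y) - v(t,x) \big)\, \lambda(x,a,dy) + f(x,a) \Big) = 0,
\qquad v(T,x) = g(x),
\]
with $\sup$ replaced by $\inf$ for a minimization problem. A nonlinear Feynman–Kac formula then identifies $v(t,x)$ with the value $Y_t$ of a (suitably minimal) solution $(Y,Z,K)$ of a constrained BSDE of the schematic form
\[
Y_s = g(X_T) + \int_s^T f(X_r, I_r)\, dr + (K_T - K_s)
- \int_s^T \!\!\int_{E \times A} Z_r(y,a)\, q(dr\, dy\, da),
\qquad s \in [t,T],
\]
where $q$ is the compensated random measure associated with the jumps of a state-control pair $(X, I)$, $K$ is a nondecreasing process, and the sign constraint is imposed on the component of the martingale integrand $Z$ corresponding to jumps of the control component $I$.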