Artificial Neural Networks (ANNs) are widely used in supervised Machine Learning (ML) to analyse signals or images in many applications. Given a learning database, one of the main challenges is to optimize the network weights. This optimization step is generally performed with a gradient-based approach using a back-propagation strategy, and regularization is commonly added for the sake of efficiency. When non-smooth regularizers are used, especially to promote sparse networks, this optimization becomes challenging: classical gradient-based optimizers cannot be applied directly due to differentiability issues. In this paper, we propose an MCMC-based optimization scheme formulated in a Bayesian framework. Hamiltonian dynamics are...
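To illustrate the general idea behind Hamiltonian-dynamics-based MCMC sampling of network weights under a sparsity-promoting prior (this is a generic sketch, not the authors' exact scheme; all function names, parameter values, and the linear-model likelihood are illustrative assumptions), a minimal Hamiltonian Monte Carlo step could look as follows. Note how the Laplace (L1) prior, whose gradient is only defined almost everywhere, is handled via `np.sign`:

```python
import numpy as np

# Illustrative log posterior: Gaussian likelihood + Laplace (L1) prior.
# The L1 term is non-smooth at 0; sign(w) is its almost-everywhere gradient.
def log_post(w, X, y, lam=1.0, sigma2=1.0):
    resid = y - X @ w
    return -0.5 * resid @ resid / sigma2 - lam * np.abs(w).sum()

def log_post_grad(w, X, y, lam=1.0, sigma2=1.0):
    resid = y - X @ w
    return X.T @ resid / sigma2 - lam * np.sign(w)

def hmc_step(w, X, y, eps=1e-3, n_leap=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(w.shape)            # sample auxiliary momentum
    w_new, p_new = w.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new = p_new + 0.5 * eps * log_post_grad(w_new, X, y)
    for _ in range(n_leap - 1):
        w_new = w_new + eps * p_new
        p_new = p_new + eps * log_post_grad(w_new, X, y)
    w_new = w_new + eps * p_new
    p_new = p_new + 0.5 * eps * log_post_grad(w_new, X, y)
    # Metropolis accept/reject corrects the discretization error
    log_acc = (log_post(w_new, X, y) - 0.5 * p_new @ p_new
               - log_post(w, X, y) + 0.5 * p @ p)
    return w_new if np.log(rng.uniform()) < log_acc else w
```

Iterating `hmc_step` yields samples from the posterior over weights, from which point estimates (e.g. the posterior mean) can be computed without requiring the regularizer to be differentiable everywhere.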