The last decade has seen the re-emergence of machine learning methods based on formal neural networks under the name of deep learning. Although these methods have enabled major breakthroughs in machine learning, several obstacles to industrializing them persist, notably the need to collect and label very large amounts of data, as well as the computing power required to perform learning and inference with this type of neural network. In this thesis, we propose to study the match between inference and learning algorithms derived from biological neural networks and massively parallel hardware architectures. We show with three contributions that such a match drastically accelerates the computation times inherent to ...
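To make the hardware-matching idea concrete, below is a minimal sketch (our illustration, not the thesis's implementation) of one simulation step for a population of leaky integrate-and-fire neurons. The function name lif_step and all parameter values are assumptions chosen for the example; the point is that every operation is an elementwise array operation, the pattern that maps one-thread-per-neuron onto massively parallel hardware.

    import numpy as np

    # Hypothetical example: one Euler step of a leaky integrate-and-fire (LIF)
    # population. Each line is an elementwise operation over all neurons at
    # once, so the same update maps directly onto one-thread-per-neuron
    # parallel hardware (e.g. a GPU array library).
    def lif_step(v, input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
        v = v + dt / tau * (-v + input_current)  # leak and integrate, all neurons in parallel
        spikes = v >= v_thresh                   # parallel threshold test
        v = np.where(spikes, v_reset, v)         # reset only the neurons that fired
        return v, spikes

    # Usage: advance 100,000 independent neurons by one time step.
    rng = np.random.default_rng(0)
    v = np.zeros(100_000)
    v, spikes = lif_step(v, rng.uniform(0.0, 2.0, size=v.shape))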
Neural networks stand out among artificial intelligence methods because they can complete challenging tasks, ...
The high level of realism of spiking neural networks and their complexity require a considerable com...
Taking inspiration from machine learning libraries - where techniques such as parallel batch training ...
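The "parallel batch" idea can be sketched as follows; this is our own illustrative code under assumed names and shapes (batched_lif_step, B samples, N neurons), not the paper's API. Instead of simulating one stimulus at a time, B independent inputs are stacked along a leading batch axis and updated in lockstep, exactly as machine learning libraries batch training examples.

    import numpy as np

    # Hypothetical sketch of batched simulation: v has shape (B, N) for B
    # independent samples of an N-neuron population, and one update advances
    # the whole batch at once, just as ML libraries batch training examples.
    def batched_lif_step(v, spikes_in, w, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
        current = spikes_in.astype(float) @ w   # (B, N) synaptic input for every sample at once
        v = v + dt / tau * (-v + current)       # one fused elementwise update over the batch
        spikes = v >= v_thresh
        v = np.where(spikes, v_reset, v)
        return v, spikes

    # Usage: 64 samples through a 1,000-neuron recurrent population.
    rng = np.random.default_rng(0)
    B, N = 64, 1_000
    w = rng.normal(0.0, 0.1, size=(N, N))
    v = np.zeros((B, N))
    spikes = rng.random((B, N)) < 0.05
    v, spikes = batched_lif_step(v, spikes, w)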
Inference and training in deep neural networks require large amounts of computation, which in many c...
From image recognition to automated driving, machine learning nowadays is all around us and impacts ...
In this thesis, we study dedicated computational approaches for deep neural networks and more par...
Over the past 15 years, we have developed software image processing systems that attempt t...
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises aff...
Deep neural networks have gained popularity in recent years, obtaining outstanding results in a wide...
The generalization performance of deep neural networks comes from their ability to learn, which requ...
The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is...
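As a rough illustration of that intrinsic parallelism (our sketch, with invented names and sizes, not the paper's architecture), each layer of such a hierarchy applies the same operation independently to all of its units, so every layer is a single data-parallel map and only the layers themselves are sequential:

    import numpy as np

    # Illustrative only: a feedforward hierarchy in which all units within a
    # layer compute independently (one data-parallel map per layer), while
    # layers execute in sequence.
    def forward(x, weights):
        for w in weights:                 # layers run one after another...
            x = np.maximum(w @ x, 0.0)    # ...but all units in a layer run in parallel
        return x

    # Usage: a three-layer hierarchy reducing 4096 inputs to 10 outputs.
    rng = np.random.default_rng(0)
    sizes = [4096, 1024, 256, 10]
    weights = [rng.normal(0.0, 0.05, size=(m, n)) for n, m in zip(sizes, sizes[1:])]
    y = forward(rng.random(sizes[0]), weights)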
High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the ...