As a result of ongoing technological advancement, computers are growing faster every day. This paper provides information on CUDA technology and its applications, which save time in the implementation of deep learning processes in artificial intelligence. Using this technology, the available computing resources of heterogeneous systems can be fully utilized for extracting important features from large datasets in deep learning problems and for working with images. As research results, we present a comparison of tools and programming languages that support CUDA technology.
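To make the programming model this abstract refers to concrete, the following is a minimal, self-contained CUDA sketch (illustrative, not code from the cited paper): data is copied to the GPU, a kernel runs across many threads in parallel, and the result is copied back to the host. Names such as vecAdd are assumptions for illustration.

    #include <cuda_runtime.h>
    #include <stdio.h>

    // Minimal CUDA kernel: each thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Allocate device memory and copy inputs to the GPU.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256, blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and verify one element.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);  // expected 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }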
Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force...
CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming ...
With the intention of introducing newcomers to these fields, this article covers deep learning in addi...
The object of this research is to parallelize the learning process of artificial neural networks to autom...
There are many successful applications that take advantage of massive parallelization on GPUs for deep...
The needs of the entertainment industry in the field of personal computers always demand more realistic...
Deep Learning is a significant tool that communicates with the computer to perform tasks as a natural...
At the dawn of the 4th Industrial Revolution, the field of Deep Learning (a sub-field of Artificial ...
The main purpose of this survey is to present the potential of GPGPU technology for real-time markerl...
Artificial Intelligence (AI) has attracted the attention of researchers and users alike and is takin...
Mimicking the brain has been the most challenging task in the field of computer science since its origin. ...
Purpose: Deep learning is a predominant branch of machine learning, which is inspired by the operati...
In the last few years, the deep learning (DL) computing paradigm has been deemed the Gold Standard i...
We present a library that provides optimized implementations for deep learning primitives. Deep lear...
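As a rough illustration of how a descriptor-based library of optimized deep learning primitives such as NVIDIA's cuDNN is invoked from host code, the following minimal sketch applies a ReLU activation to a small tensor. The tensor shape, activation choice, and variable names are assumptions for illustration, not details from the paper.

    #include <cudnn.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        // Sketch: apply ReLU to an assumed 1x1x4x4 float tensor via cuDNN.
        cudnnHandle_t handle;
        cudnnCreate(&handle);

        // Describe the input/output tensor layout (NCHW, single precision).
        cudnnTensorDescriptor_t desc;
        cudnnCreateTensorDescriptor(&desc);
        cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                                   1, 1, 4, 4);  // N, C, H, W

        // Describe the primitive to run: a ReLU activation.
        cudnnActivationDescriptor_t act;
        cudnnCreateActivationDescriptor(&act);
        cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                     CUDNN_NOT_PROPAGATE_NAN, 0.0);

        float h_in[16], h_out[16];
        for (int i = 0; i < 16; ++i) h_in[i] = i - 8.0f;  // mix of signs

        float *d_in, *d_out;
        cudaMalloc(&d_in, sizeof(h_in));
        cudaMalloc(&d_out, sizeof(h_out));
        cudaMemcpy(d_in, h_in, sizeof(h_in), cudaMemcpyHostToDevice);

        // y = alpha * relu(x) + beta * y
        const float alpha = 1.0f, beta = 0.0f;
        cudnnActivationForward(handle, act, &alpha, desc, d_in,
                               &beta, desc, d_out);

        cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
        printf("out[0] = %f (negative input clamped to 0)\n", h_out[0]);

        cudaFree(d_in); cudaFree(d_out);
        cudnnDestroyActivationDescriptor(act);
        cudnnDestroyTensorDescriptor(desc);
        cudnnDestroy(handle);
        return 0;
    }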