Optimized SSD MKL backend performance (~3X boost version-over-version); Bumped aeon version to v1.3.0; Fixed inference performance issue of MKL batchnorm; Fixed batch prediction issue for gpu backend; Enabled subset_pct for MNIST_DCGAN example; Updated "make clean" to clean up mkl artifacts; Added dockerfile for IA mk
The fast execution speed and energy efficiency of analog hardware have made it a strong contender ...
Batch Normalization (BatchNorm) is a widely adopte...
Batch Normalization (BatchNorm) is a technique that enables the training of deep neural networks, es...
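Since both abstracts above describe the same normalization step, here is a minimal NumPy sketch of the BatchNorm forward pass they refer to: activations are normalized with the per-batch mean and variance, then rescaled by learned parameters gamma and beta. The array shapes and the epsilon value are illustrative assumptions, not details taken from the quoted papers.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Normalize each feature by its batch statistics, then scale and shift."""
    mu = x.mean(axis=0)                     # per-feature mean over the mini-batch
    var = x.var(axis=0)                     # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero-mean, unit-variance activations
    return gamma * x_hat + beta             # learned scale (gamma) and shift (beta)

# Toy usage: a mini-batch of 4 examples with 3 features each
x = np.random.randn(4, 3)
y = batchnorm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(y.mean(axis=0), y.std(axis=0))        # roughly zero mean and unit std per feature
```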
Update MKLML to version 20170908, which fixes a bug related to data conversions; Add SSD example for boun...
Further optimized MKL backend performance for SSD inference; Updated MKLML to version 20171227; Enable...
Added support for MKL backend (-b mkl) on Linux, which boosts neon CPU performance significantly; Add...
Optimized DeepSpeech2 MKL backend performance (~7X improvement over the CPU backend); Fused convoluti...
Faster RCNN model; Sequence to Sequence container and char_rae recurrent autoencoder model; Reshape La...
Update Data Loader to aeon (https://github.com/NervanaSystems/aeon) for flexible, multi-threaded data ...
Set MKL backend (-b mkl) as the default CPU backend on Linux (use -b cpu to specify original CPU bac...
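For readers who want to try the backend switch described above, here is a minimal sketch of the two usual ways a neon script selects its backend; the batch size and any keyword arguments beyond the backend name are illustrative assumptions, so check the neon documentation for the exact gen_backend signature.

```python
# Option 1: rely on neon's standard argument parser, e.g. `python train.py -b mkl`
# (use `-b cpu` for the original CPU backend, `-b gpu` for the GPU backend).
from neon.util.argparser import NeonArgparser
parser = NeonArgparser(__doc__)
args = parser.parse_args()  # parsing also instantiates the backend chosen with -b

# Option 2: construct the backend explicitly in code.
from neon.backends import gen_backend
be = gen_backend(backend='mkl', batch_size=128)  # backend='cpu' restores the original CPU backend
```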
Add support for 3D deconvolution; Generative Adversarial Networks (GAN) implementation, and MNIST DCG...
With the rapid growth of deep learning models and higher expectations for their accuracy and through...
Skip Thought Vectors (http://arxiv.org/abs/1506.06726) example; Dilated convolution support; Nesterov ...
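Assuming the truncated "Nesterov ..." item above refers to Nesterov momentum (one of the optimizer additions in this release series), here is a minimal NumPy sketch of the update rule: the gradient is evaluated at the look-ahead point before the velocity and parameters are updated. The learning rate, momentum value, and toy objective are illustrative.

```python
import numpy as np

def nesterov_step(theta, velocity, grad_fn, lr=0.01, momentum=0.9):
    """One Nesterov-momentum update: take the gradient at the look-ahead
    point theta + momentum * velocity, then update velocity and parameters."""
    lookahead = theta + momentum * velocity
    velocity = momentum * velocity - lr * grad_fn(lookahead)
    return theta + velocity, velocity

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta itself
theta, v = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(100):
    theta, v = nesterov_step(theta, v, grad_fn=lambda t: t)
print(theta)  # ends up close to the minimizer at the origin
```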
Python2/Python3 compatibility [#191]; Support for Pascal GPUs; Persistent RNN kernels [#262]; Implemen...