Data and Results associated with the journal article "Hardware-Efficient Compression of Neural Multi-Unit Activity Using Machine Learning Selected Static Huffman Encoders" by Oscar W. Savolainen, Zheng Zhang, Peilong Feng, and Timothy Constandinou.

Data, formatted for this work as .mat files, was originally and generously made public by:
- Flint dataset: https://pubmed.ncbi.nlm.nih.gov/22733013/
- Sabes dataset: https://zenodo.org/record/3854034#.Yhf5MejP3IV
- Brochier dataset: https://www.nature.com/articles/sdata201855#data-citations

Results:
- Analysed behavioral decoding performance (BDP) results (.pkl files)
- Bit rate (BR) compression results

Associated code and a link to the journal article are available at https://github.com/Next-Gener...
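The internal layout and variable names of the files are not specified in this description, so the following is only a minimal sketch of how the .mat data and .pkl result files could be inspected in Python. The file names flint_example.mat and bdp_results.pkl are hypothetical placeholders, not names from the repository.

    # Minimal inspection sketch; file names below are hypothetical placeholders.
    import pickle

    from scipy.io import loadmat

    # Load one of the .mat recordings (Flint/Sabes/Brochier data reformatted
    # for this work). loadmat returns a dict mapping variable names to arrays.
    # Note: files saved as MATLAB v7.3 would instead need an HDF5 reader such as h5py.
    mat_contents = loadmat("flint_example.mat")
    for name, value in mat_contents.items():
        if not name.startswith("__"):  # skip MATLAB header entries
            print(name, getattr(value, "shape", type(value)))

    # Load the analysed behavioral decoding performance (BDP) results.
    with open("bdp_results.pkl", "rb") as f:
        bdp_results = pickle.load(f)
    print(type(bdp_results))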