Abstract—Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products which can be much larger than the final result of the computation, or even the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit but do not require sparsity.
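To make the intermediate-blowup issue concrete, the NumPy sketch below computes a mode-1 MTTKRP (matricized tensor times Khatri-Rao product), the workhorse of CPD-ALS, by accumulating into the output slice by slice instead of materializing the dense (J·K × R) Khatri-Rao product that the naive formulation requires. This is a generic illustration of in-place accumulation under assumed shapes and unfolding conventions, not the specific scheme proposed in this paper; the function name mttkrp_mode1 and the toy sizes are made up for the example.

```python
import numpy as np

def mttkrp_mode1(X, B, C):
    """Mode-1 MTTKRP, M = X_(1) (C kr B), computed slice by slice so the
    dense (J*K x R) Khatri-Rao product is never materialized."""
    I, J, K = X.shape
    R = B.shape[1]
    M = np.zeros((I, R))
    for k in range(K):
        # In-place accumulation: frontal slice k contributes
        # X[:, :, k] @ (B scaled column-wise by C[k, :]).
        M += X[:, :, k] @ (B * C[k, :])
    return M

# Toy check against the naive computation that does build the Khatri-Rao product.
rng = np.random.default_rng(0)
I, J, K, R = 40, 30, 20, 5
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

KR = np.einsum('kr,jr->kjr', C, B).reshape(K * J, R)   # dense J*K x R intermediate
M_naive = X.reshape(I, J * K, order='F') @ KR          # mode-1 unfolding times Khatri-Rao
assert np.allclose(mttkrp_mode1(X, B, C), M_naive)
```

The largest buffer held by the slice-wise version is the I × R result itself, and the loop over k is a reduction that could be split across workers, which is in the spirit of the parallelizable, memory-efficient updates described in the abstract.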