Abstract. By a tensor problem in general, we mean one where all the data on input and output are given (exactly or approximately) in tensor formats, the number of data representation parameters being much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with the data only in tensor formats and maintain the same small number of representation parameters, at the price that every result of a computation is contaminated by the approximation (recompression) performed in each operation. Since the approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to make recompression inexpensive and reliable. We present fast recompression procedures with s...
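The abstract above breaks off mid-sentence, but the mechanism it describes, keeping all data in a compressed tensor format and re-truncating ("recompressing") after every operation so the number of representation parameters stays small, can be illustrated concretely. The sketch below is a generic Tucker-format recompression routine written for this note under that reading; it is not the procedure proposed in the paper, and the helper names (unfold, mode_mult, tucker_recompress) are ours.

```python
# Generic sketch, NOT the paper's algorithm: re-truncate a tensor that is
# already stored in Tucker form (core G plus factor matrices U1, U2, U3)
# without ever forming the full array.
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a 3-way array into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a target tensor of the given shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    new_shape = tuple(M.shape[0] if i == mode else s for i, s in enumerate(T.shape))
    return fold(M @ unfold(T, mode), mode, new_shape)

def tucker_recompress(G, factors, eps=1e-8):
    """Reduce the Tucker ranks of (G, factors); the cost depends only on the
    ranks and mode sizes, never on the full number of tensor entries."""
    # 1. Orthogonalize the factors and push the triangular parts into the core.
    Qs = []
    for k, U in enumerate(factors):
        Q, R = np.linalg.qr(U)
        Qs.append(Q)
        G = mode_mult(G, R, k)
    # 2. Truncated HOSVD of the small core decides the new ranks.
    W = []
    for k in range(3):
        u, s, _ = np.linalg.svd(unfold(G, k), full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))
        W.append(u[:, :r])
    for k in range(3):
        G = mode_mult(G, W[k].T, k)
    # 3. Absorb the rotations back into the orthogonal factors.
    return G, [Q @ Wk for Q, Wk in zip(Qs, W)]
```

Every step above works on matrices whose sizes are governed by the ranks, which is exactly the property the abstract emphasizes: the price paid is the truncation itself, which perturbs every intermediate result.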
Tensor decompositions, in particular the Tucker model, are a powerful family of techniques for dimen...
Abstract—Low-rank tensor decomposition has many applications in signal processing and machine learn...
Algorithms are proposed for the approximate calculation of the matrix product C̃ ≈ C = A · B, where ...
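The snippet is cut off before the structure of A and B is stated; a common setting for such approximate products (and the one assumed in the sketch below, which is ours and not necessarily the paper's algorithm) is that both matrices are kept in low-rank factored form, so C̃ can be built and re-truncated while touching only the small factors.

```python
# Hedged sketch: approximate product of two matrices stored in low-rank form,
# A = UA @ VA.T and B = UB @ VB.T, returned again as a truncated factorization.
import numpy as np

def lowrank_matmul(UA, VA, UB, VB, eps=1e-12):
    """Return U, V with A @ B ≈ U @ V.T, never forming A, B, or A @ B."""
    M = VA.T @ UB                            # small rA-by-rB coupling matrix
    Qu, Ru = np.linalg.qr(UA)                # orthogonalize the outer factors
    Qv, Rv = np.linalg.qr(VB)
    u, s, vt = np.linalg.svd(Ru @ M @ Rv.T)
    r = max(1, int(np.sum(s > eps * s[0])))  # relative truncation threshold
    U = (Qu @ u[:, :r]) * s[:r]              # absorb singular values on the left
    V = Qv @ vt[:r].T
    return U, V

# Quick check against the dense product.
rng = np.random.default_rng(0)
UA, VA = rng.standard_normal((500, 7)), rng.standard_normal((400, 7))
UB, VB = rng.standard_normal((400, 6)), rng.standard_normal((300, 6))
U, V = lowrank_matmul(UA, VA, UB, VB)
err = np.linalg.norm((UA @ VA.T) @ (UB @ VB.T) - U @ V.T) / np.linalg.norm(U @ V.T)
print(err)   # close to machine precision, since the product has exact rank <= 6
```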
Abstract. We consider Tucker-like approximations with an r × r × r core tensor for three-dimensional...
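For a concrete picture of a Tucker-like approximation with an r × r × r core, here is a plain truncated-HOSVD sketch in NumPy. It only illustrates the format; the quoted work concerns more refined approximations than this straightforward projection.

```python
# Illustrative only: rank-(r, r, r) Tucker approximation of a 3-way array via
# truncated HOSVD, T ≈ G x_1 U1 x_2 U2 x_3 U3 with an r x r x r core G.
import numpy as np

def hosvd_rank_r(T, r):
    factors = []
    for k in range(3):
        Mk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)   # mode-k unfolding
        u, _, _ = np.linalg.svd(Mk, full_matrices=False)
        factors.append(u[:, :r])                             # leading r left singular vectors
    G = np.einsum('abc,ai,bj,ck->ijk', T, *factors)          # project onto the bases
    return G, factors

# Sanity check on a random tensor of exact multilinear rank (2, 2, 2).
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 2, 2))
Us = [rng.standard_normal((10, 2)) for _ in range(3)]
T = np.einsum('ijk,ai,bj,ck->abc', G0, *Us)
G, F = hosvd_rank_r(T, 2)
T_hat = np.einsum('ijk,ai,bj,ck->abc', G, *F)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))         # essentially zero
```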
Linear algebra is the foundation of machine learning, especially for handling big data. We want to e...
Decomposing tensors into simple terms is often...
Dimensionality reduction is a fundamental idea in data science and machine learning. Tensors are ubiqu...
Low rank decomposition of tensors is a powerful tool for learning generative models. The uniqueness ...
Tensor decompositions permit ...
We describe a simple, black-box compression f...
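The abstract is cut off before the compression format itself is described, so the following only illustrates the general black-box idea: compressing an operator that we are allowed to touch solely through its action on vectors, here with a standard randomized range finder. The function names and the matvec-only access model are our assumptions, not details taken from the paper.

```python
# Hedged sketch: low-rank compression of a black-box linear operator using
# only products with A and A^T (a standard randomized range-finder pattern).
import numpy as np

def blackbox_lowrank(apply_A, apply_At, n_cols, rank, oversample=10, seed=None):
    """Return Q, B with A ≈ Q @ B, accessing A only through apply_A/apply_At."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n_cols, rank + oversample))  # random test matrix
    Y = apply_A(Omega)                                         # sample the range of A
    Q, _ = np.linalg.qr(Y)                                     # orthonormal range basis
    B = apply_At(Q).T                                          # B = Q^T A via black-box products
    return Q, B

# Usage with a hidden low-rank operator.
rng = np.random.default_rng(1)
L, R = rng.standard_normal((200, 5)), rng.standard_normal((5, 300))
A = L @ R
Q, B = blackbox_lowrank(lambda X: A @ X, lambda X: A.T @ X, n_cols=300, rank=5)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))           # close to machine precision
```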
The coming century is surely the century of high dimensional data. With the rapid growth of computat...
Abstract. New algorithms are proposed for the Tucker approximation of a 3-tensor, that access it usi...
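The access model is truncated mid-word, so only as a rough illustration: one family of such algorithms builds the Tucker factor bases from a small number of sampled fibers of the tensor rather than from its full unfoldings. The sketch below follows that generic idea with names of our own choosing; the dense projection used to form the core is a shortcut for brevity, not how a genuinely restricted-access method would finish the job.

```python
# Illustrative sketch: Tucker factor bases from a few randomly sampled mode-k fibers.
import numpy as np

def tucker_from_fibers(T, r, n_fibers=None, seed=None):
    rng = np.random.default_rng(seed)
    n_fibers = n_fibers or 2 * r
    factors = []
    for k in range(3):
        Mk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)         # columns are mode-k fibers
        cols = rng.choice(Mk.shape[1], size=n_fibers, replace=False)
        u, _, _ = np.linalg.svd(Mk[:, cols], full_matrices=False)
        factors.append(u[:, :r])                                   # basis spanned by sampled fibers
    # Shortcut: dense projection for the core (a real fiber/cross method avoids this).
    G = np.einsum('abc,ai,bj,ck->ijk', T, *factors)
    return G, factors
```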