In this thesis, we formulate the Gauss-Newton algorithm so that it is viable on distributed-memory architectures and competitive with the alternating least squares (ALS) algorithm for CP decomposition. ALS may converge slowly or not at all, especially when high accuracy is required. The CP decomposition problem can instead be formulated as a nonlinear least squares problem, which makes it amenable to iterative Newton-like methods. Directly solving the linear systems involving an approximated Hessian is expensive; however, recent work has shown that using an implicit representation of the linear system makes these methods competitive with ALS in terms of speed. We provide a parallel implementation of a Gauss-Newton ...
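The approach described above can be illustrated with a minimal single-node sketch (not the thesis implementation): a damped Gauss-Newton iteration for rank-R CP decomposition of a 3-way tensor, where the approximate Hessian JᵀJ is never materialized — its matrix-vector product is assembled from small R×R Gramians and the linear system is solved with conjugate gradient. All function names here are illustrative.

```python
import numpy as np

def khatri_rao(X, Y):
    # Column-wise Khatri-Rao product: row (i * Y.shape[0] + j) holds X[i, :] * Y[j, :].
    R = X.shape[1]
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, R)

def cp_gradient(T, A, B, C):
    # Gradient blocks of f = 0.5 * ||T - [[A, B, C]]||_F^2 via MTTKRP,
    # using C-order (row-major) unfoldings of T.
    I, J, K = T.shape
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    gA = A @ ((B.T @ B) * (C.T @ C)) - T1 @ khatri_rao(B, C)
    gB = B @ ((A.T @ A) * (C.T @ C)) - T2 @ khatri_rao(A, C)
    gC = C @ ((A.T @ A) * (B.T @ B)) - T3 @ khatri_rao(A, B)
    return gA, gB, gC

def gn_matvec(A, B, C, dA, dB, dC, lam):
    # Implicit (J^T J + lam*I) applied to (dA, dB, dC). Only R x R Gramians
    # are formed; the approximate Hessian itself is never built.
    GA, GB, GC = A.T @ A, B.T @ B, C.T @ C
    hA = dA @ (GB * GC) + A @ ((dB.T @ B) * GC) + A @ (GB * (dC.T @ C)) + lam * dA
    hB = dB @ (GA * GC) + B @ ((dA.T @ A) * GC) + B @ (GA * (dC.T @ C)) + lam * dB
    hC = dC @ (GA * GB) + C @ ((dA.T @ A) * GB) + C @ (GA * (dB.T @ B)) + lam * dC
    return hA, hB, hC

def cg_solve(matvec, b, iters=100, tol=1e-12):
    # Standard conjugate gradient for the SPD system matvec(x) = b.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def gauss_newton_cp(T, A, B, C, iters=10, lam=1e-4):
    # Damped Gauss-Newton: each step solves (J^T J + lam*I) d = -g with CG,
    # using only the implicit matvec above.
    shapes = [A.shape, B.shape, C.shape]
    sizes = [int(np.prod(s)) for s in shapes]
    def unpack(v):
        pa, pb, pc = np.split(v, np.cumsum(sizes)[:-1])
        return (pa.reshape(shapes[0]), pb.reshape(shapes[1]), pc.reshape(shapes[2]))
    for _ in range(iters):
        gA, gB, gC = cp_gradient(T, A, B, C)
        g = np.concatenate([gA.ravel(), gB.ravel(), gC.ravel()])
        def mv(v):
            hA, hB, hC = gn_matvec(A, B, C, *unpack(v), lam)
            return np.concatenate([hA.ravel(), hB.ravel(), hC.ravel()])
        dA, dB, dC = unpack(cg_solve(mv, -g))
        A, B, C = A + dA, B + dB, C + dC
    return A, B, C
```

The key point mirrored from the abstract is that the CG solver only ever touches `gn_matvec`, whose cost is dominated by a few dense products with R×R Gramians, rather than a direct factorization of the full approximate Hessian.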