Latent Semantic Analysis (LSA) is a method that automatically indexes and retrieves information from a set of objects by reducing the term-by-document matrix with the Singular Value Decomposition (SVD) technique. However, LSA incurs a high computational cost when analyzing large amounts of information. The goals of this work are (i) to improve the execution time of the semantic space construction, dimensionality reduction, and information retrieval stages of LSA using heterogeneous systems, and (ii) to evaluate the accuracy and recall of the information retrieval stage. We present a heterogeneous Latent Semantic Analysis (hLSA) system, which has been developed using General-Purpose computing on Graphics Processing Units (GPGPUs) archit...
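The core LSA pipeline the abstract describes (build a term-by-document matrix, truncate it with SVD, then retrieve by similarity in the reduced space) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the hLSA implementation: the tiny matrix, the choice of k, and the query vector are all hypothetical.

```python
import numpy as np

# Hypothetical tiny term-by-document matrix (rows = terms, cols = documents);
# in practice this would be built from a tokenized corpus with tf-idf weights.
A = np.array([
    [1., 0., 1., 0.],
    [0., 1., 0., 1.],
    [1., 1., 0., 0.],
    [0., 0., 1., 1.],
])

k = 2  # number of latent dimensions to keep (assumed, usually 100-300)

# Dimensionality reduction: A ~= U_k @ diag(s_k) @ Vt_k (rank-k SVD).
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]

# Documents in the latent semantic space: columns of diag(s_k) @ Vt_k.
docs_k = np.diag(sk) @ Vtk

# Information retrieval: fold a bag-of-words query into the same space
# (q_k = U_k^T q is consistent with the document columns above, since
# U_k^T A ~= diag(s_k) @ Vt_k), then rank documents by cosine similarity.
q = np.array([1., 0., 1., 0.])  # hypothetical query containing terms 0 and 2
q_k = Uk.T @ q

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

scores = [cosine(q_k, docs_k[:, j]) for j in range(docs_k.shape[1])]
best = int(np.argmax(scores))
```

The SVD call is exactly the stage the hLSA work offloads to the GPU, since its cost dominates as the term-by-document matrix grows.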
Data analysis is a rising field of interest for computer science research due to the growing amount ...
We present the design and implementation of GLDA, a library that utilizes the GPU (Graphics Processi...
applications, the main time-consuming process is string matching due to the large size of lexicon. I...
Latent Semantic Analysis (LSA) aims to reduce the dimensions of large term-document datasets using S...
The purpose of this article is to determine the usefulness of the Graphics Processing Unit (GPU) cal...
Probabilistic Latent Semantic Analysis (PLSA) has been successfully applied to many text mining task...
Abstract: In this paper, we propose a scheme to accelerate the Probabilistic Latent Semantic Indexing ...
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has bee...
Semantic indexing is a popular technique used to access and organize large amounts of unstructured t...
We develop a dynamic dictionary data structure for the GPU, supporting fast insertions and deletions...
Abstract: With recent advancements in hardware technologies, new general-purpose high-performance devi...
Latent Semantic Indexing (LSI) is one of the well-known techniques in the information retrieval fiel...
Datacenter workloads demand high throughput, low cost and power efficient solutions. In most data ce...
We are interested in the intensive use of Factorial Correspondence Analysis (F...
The general-purpose computing capabilities of the Graphics Processing Unit (GPU) have recently been ...