The direct and inverse projections (DIP) method is proposed to reduce the feature space to a given dimension; it is oriented to problems of randomized machine learning and is based on procedures of “direct” and “inverse” projection. The “projector” matrices are determined by maximizing the relative entropy. Information losses are estimated by the absolute error computed with the Kullback–Leibler function (SRC method). An illustrative example is given.
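As a rough illustration of the pipeline this abstract describes (reduce the features with a “direct” projector, map back with an “inverse” one, and score information loss with a Kullback–Leibler term), here is a minimal Python sketch. The PCA-based projector, the toy data, and the histogram KL estimate are assumptions chosen for illustration only; they are not the entropy-maximizing projectors or the SRC loss estimate of the DIP method itself.

```python
# Minimal sketch (not the DIP algorithm): project features to a lower
# dimension with a "direct" matrix, map back with an "inverse" matrix, and
# estimate information loss with a KL divergence between histograms of the
# original and reconstructed features. The projector here is a plain PCA
# basis, a stand-in for the entropy-optimal projectors of the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))           # toy data: 500 samples, 10 features

k = 3                                    # target dimension (assumption)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:k]                               # "direct" projector, shape (k, 10)
Z = Xc @ P.T                             # reduced representation, (500, k)
X_rec = Z @ P                            # "inverse" projection back to 10 dims

def kl_divergence(p, q, bins=30):
    """KL divergence between histogram estimates of two 1-D samples."""
    lo, hi = min(p.min(), q.min()), max(p.max(), q.max())
    hp, _ = np.histogram(p, bins=bins, range=(lo, hi), density=True)
    hq, _ = np.histogram(q, bins=bins, range=(lo, hi), density=True)
    hp, hq = hp + 1e-12, hq + 1e-12      # avoid log(0)
    hp, hq = hp / hp.sum(), hq / hq.sum()
    return float(np.sum(hp * np.log(hp / hq)))

# Per-feature information-loss proxy: KL between original and reconstruction
losses = [kl_divergence(Xc[:, j], X_rec[:, j]) for j in range(Xc.shape[1])]
print("mean KL-based loss estimate:", np.mean(losses))
```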
Massive high-dimensional data sets are ubiquitous in all scientific disciplines. Extracting meaningf...
Many recent (including adaptive) MCMC methods are associated in practice to unknown rates of converg...
The field of machine learning deals with a huge number of diverse algorithms, which are able...
Dimensionality reduction (DR) aims to reveal salient properties of high-dimensional (HD) data in a l...
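For context on the setting this (truncated) abstract opens with, a small hypothetical example of embedding high-dimensional data into a low-dimensional representation; scikit-learn's t-SNE and the digits dataset are used purely as familiar stand-ins, not as the method under discussion.

```python
# Illustrative DR example: embed 64-dimensional digit images into 2-D while
# trying to preserve local neighborhood structure.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)       # 64-dimensional digit images
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(emb.shape)                           # (1797, 2): one 2-D point per image
```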
Estimation of Distribution Algorithms (EDA) have been proposed as an extension of genetic algorithms...
Data reduction is crucial in order to turn large datasets into information, the major purpose of dat...
Approximation of entropies of various types using machine learning (ML) regression methods is shown...
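A toy sketch of the general idea named here, approximating an entropy with an ML regression model: the random discrete distributions, the sorted-histogram features, and the random-forest regressor below are all assumptions chosen for illustration, not the approach of the cited work.

```python
# Fit a regressor that maps a finite sample of a distribution to an estimate
# of its Shannon entropy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_dists, n_bins, n_draws = 2000, 8, 200

X_feat, y_ent = [], []
for _ in range(n_dists):
    p = rng.dirichlet(np.ones(n_bins))                # random discrete distribution
    y_ent.append(-np.sum(p * np.log(p + 1e-12)))      # its true Shannon entropy
    draws = rng.choice(n_bins, size=n_draws, p=p)     # finite sample from it
    hist = np.bincount(draws, minlength=n_bins) / n_draws
    X_feat.append(np.sort(hist))                      # order-invariant features

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(np.array(X_feat[:1500]), np.array(y_ent[:1500]))
pred = model.predict(np.array(X_feat[1500:]))
print("mean abs error:", np.mean(np.abs(pred - np.array(y_ent[1500:]))))
```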
This book presents a unified theory of random matrices for applications in machine learning, offerin...