This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors. These corrections should be quick and non-iterative. To solve this problem without modifying the legacy AI system, we propose special ‘external’ devices, correctors. An elementary corrector consists of two parts: a classifier that separates the situations with a high risk of error from the situations in which the legacy AI system works well, and a new decision that is recommended for the situations with potential errors. Input signals for the correctors can be the inputs of the legacy AI system, its internal signals, and its outputs. If the intrinsic dimensionality of the data is high enough, then the classifiers for correction of a small number of errors...
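A minimal sketch of this two-part corrector structure, assuming a generic legacy classifier exposed as a callable and a simple one-shot linear functional built from a single known error (the names ElementaryCorrector and legacy_predict, and the toy data, are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

class ElementaryCorrector:
    """Wraps a legacy AI system without modifying it.

    Two parts, as described above:
      * a linear classifier (w, b) that flags inputs with a high risk of error,
      * a replacement decision returned for the flagged inputs.
    """

    def __init__(self, legacy_predict, w, b, corrected_label):
        self.legacy_predict = legacy_predict    # untouched legacy system
        self.w = np.asarray(w, dtype=float)     # separating functional
        self.b = float(b)                       # decision threshold
        self.corrected_label = corrected_label  # new decision for risky inputs

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        if self.w @ x > self.b:                 # high-risk region: override
            return self.corrected_label
        return self.legacy_predict(x)           # otherwise keep the legacy output


# Toy usage: the "legacy" rule errs on one specific input; a single linear
# functional built from that error separates it from the rest of the inputs.
legacy = lambda x: int(x.sum() > 0)
bad_point = np.array([0.9, -0.5, 0.2])        # legacy answers 1, truth is 0
w = bad_point / np.linalg.norm(bad_point)     # one-shot separating direction
b = 0.95 * (w @ bad_point)                    # threshold just below the error
corrector = ElementaryCorrector(legacy, w, b, corrected_label=0)
print(corrector(bad_point), corrector(np.array([-1.0, 0.3, 0.1])))
```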
We discuss standard classification methods for high-dimensional data and a small number of observati...
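One standard method in this setting (far fewer observations than dimensions) is Fisher's linear discriminant with a regularised within-class covariance so that the estimate stays invertible; a minimal sketch, where the regularisation constant and the synthetic data are assumptions for illustration only:

```python
import numpy as np

def fisher_discriminant(X0, X1, reg=1e-3):
    """Fisher linear discriminant for two classes.

    X0, X1: (n0, d) and (n1, d) arrays with n0 + n1 possibly far below d,
    so the pooled scatter matrix is regularised before inversion.
    Returns (w, b) such that samples with w @ x > b are assigned class 1.
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = (np.cov(X0, rowvar=False) * (len(X0) - 1)
         + np.cov(X1, rowvar=False) * (len(X1) - 1))
    S += reg * np.eye(X0.shape[1])        # shrinkage towards the identity
    w = np.linalg.solve(S, m1 - m0)       # direction maximising class separation
    b = w @ (m0 + m1) / 2.0               # threshold halfway between the means
    return w, b

# Toy check: d = 200 features, only 10 samples per class.
rng = np.random.default_rng(0)
d = 200
X0 = rng.normal(0.0, 1.0, size=(10, d))
X1 = rng.normal(0.5, 1.0, size=(10, d))
w, b = fisher_discriminant(X0, X1)
print((X1 @ w > b).mean(), (X0 @ w <= b).mean())   # per-class training accuracy
```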
The enormous power of modern computers has made possible the statistical modelling of data with dime...
“The curse of dimensionality” is pertinent to many learning algorithms, and it denotes the drastic ...
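A small numeric sketch of one facet of this effect, the vanishing contrast between the nearest and farthest neighbour of a query point as the dimension grows (the choice of this particular illustration is ours, since the sentence above is truncated):

```python
import numpy as np

rng = np.random.default_rng(1)

def distance_contrast(d, n=1000):
    """Relative gap between the farthest and nearest neighbour distances
    for a query at the origin and n points drawn uniformly from [0, 1]^d."""
    X = rng.uniform(size=(n, d))
    dist = np.linalg.norm(X, axis=1)
    return (dist.max() - dist.min()) / dist.min()

# The contrast shrinks as the dimension grows: distances concentrate.
for d in (2, 10, 100, 1000):
    print(f"d = {d:5d}   contrast = {distance_contrast(d):.3f}")
```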
Artificial Intelligence (AI) systems sometimes make errors and will make errors in the future, from ...
Complexity is an indisputable, well-known, and broadly accepted feature of the brain. Despite the ap...
We consider the problem of efficient “on the fly” tuning of existing, or legacy, Artificial Intellig...
This paper presents a technology for simple and computationally efficient improvements of a generic ...
The problem of non-iterative one-shot and non-destructive correction of unavoidable mistakes arises ...
In this paper we present theory and algorithms enabling classes of Artificial Intelligence (AI) syst...
This project originates from research on methods and techniques for real-time analysis and handling ...
The concentrations of measure phenomena were discovered as the mathematical background to statistica...
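A minimal numeric sketch of one such phenomenon, the near-orthogonality of independent random directions in high dimension (the choice of illustration is an assumption on our part, since the abstract above is truncated):

```python
import numpy as np

rng = np.random.default_rng(2)

def cosine_spread(d, n=500):
    """Typical |cosine| between random unit vectors in R^d: it concentrates
    near 0 as d grows, so random directions become almost orthogonal."""
    X = rng.normal(size=(n, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    G = np.abs(X @ X.T)                      # pairwise |cosine similarities|
    off_diag = G[~np.eye(n, dtype=bool)]
    return off_diag.mean(), off_diag.max()

for d in (3, 30, 300, 3000):
    mean_c, max_c = cosine_spread(d)
    print(f"d = {d:5d}   mean |cos| = {mean_c:.3f}   max |cos| = {max_c:.3f}")
```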
Despite the widespread consensus on the brain's complexity, sprouts of the single-neuron revolution...
It is believed that if a machine can learn human-level invariant semantic concepts from highl...
Modern data-driven Artificial Intelligence models are based on large datasets which have been recent...