We present a novel hybrid technique for improving the predictive performance of an online machine learning system. Combining advantages of both memory-based and concept-based procedures, Selective Relearning tackles the problem of learning in gradually changing domains with delayed feedback. The idea is to train and retrain the model only on the subsegment of the historical dataset that has been identified as most similar to the current conditions. We demonstrate the effectiveness of our approach by evaluating it on a well-known artificial dataset and show that Selective Relearning is rather insensitive to noise. Additionally, we present preliminary experimental results for a complex synthetic dataset resembling an online...
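As a minimal sketch of the idea described in the abstract, the following Python snippet selects the historical window most similar to the current conditions and retrains only on it. It is not the authors' implementation: the fixed window length, the per-feature-mean distance used as the similarity measure, and the SGDRegressor model are all illustrative assumptions.

    # Sketch of selective relearning under assumed choices (window size,
    # similarity measure, model); not the method's reference implementation.
    import numpy as np
    from sklearn.linear_model import SGDRegressor

    def most_similar_segment(history_X, history_y, recent_X, window=200):
        """Return the historical window whose per-feature means are closest
        (in Euclidean distance) to those of the recent data."""
        recent_profile = recent_X.mean(axis=0)
        best_start, best_dist = 0, np.inf
        for start in range(0, len(history_X) - window + 1, window):
            segment_profile = history_X[start:start + window].mean(axis=0)
            dist = np.linalg.norm(segment_profile - recent_profile)
            if dist < best_dist:
                best_start, best_dist = start, dist
        return (history_X[best_start:best_start + window],
                history_y[best_start:best_start + window])

    def selective_relearn(history_X, history_y, recent_X):
        """Retrain a fresh model only on the segment identified as most
        similar to the current conditions."""
        seg_X, seg_y = most_similar_segment(history_X, history_y, recent_X)
        model = SGDRegressor(max_iter=1000, tol=1e-3)
        model.fit(seg_X, seg_y)
        return model

Any distributional distance (e.g., over feature histograms) could replace the mean-based criterion; the point of the sketch is only the selection-then-retraining loop, not the particular similarity measure.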
Rapid online adaptation to changing tasks is an important problem in machine learning and, recently,...
Reinforcement Learning (RL) is a popular method in machine learning. In RL, an agent learns a policy...
In this paper we propose several novel approaches for incorporating forgetting mechanisms into seque...
The research that constitutes this thesis was driven by two related goals. The first one...
Machine learning models are subject to changing circumstances, and will degrade over time. Nowadays,...
Rapid advancement of machine learning makes it possible to consider large amounts of...
Machine learning (ML) has become ubiquitous in various disciplines and applications, serving as a po...
Unlike their traditional offline counterparts, online machine learning models are capable of handling...
We establish connections from optimizing the Bellman Residual and the Temporal Difference loss to worst-case ...
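This snippet is truncated, but the two objectives it names have standard forms; as a reminder (textbook definitions, not taken from that work), for a parametric action-value estimate $Q_\theta$, discount $\gamma$, and Bellman operator $(TQ)(s,a) = r(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot \mid s,a)}\big[\max_{a'} Q(s',a')\big]$:
\[
\mathcal{L}_{\mathrm{BR}}(\theta) = \mathbb{E}_{(s,a)}\Big[\big(Q_\theta(s,a) - (T Q_\theta)(s,a)\big)^2\Big],
\qquad
\mathcal{L}_{\mathrm{TD}}(\theta) = \mathbb{E}_{(s,a,r,s')}\Big[\big(Q_\theta(s,a) - r - \gamma \max_{a'} Q_{\bar\theta}(s',a')\big)^2\Big],
\]
where $\bar\theta$ denotes target parameters held fixed when differentiating, so minimizing $\mathcal{L}_{\mathrm{TD}}$ corresponds to the semi-gradient temporal-difference update, while the Bellman residual differentiates through the bootstrapped target as well.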
Many estimation, prediction, and learning applications have a dynamic nature. One of the most import...
The fields of machine learning (ML) and cognitive science have developed complementary approaches to...
Computational power is constantly on the rise and makes for new possibilities in ...
A common assumption in machine learning is that training data is complete, and the data distribution...
Machine learning models nowadays play a crucial role in many applications in business and industry....
Today’s systems produce a rapidly exploding amount of data, and the data further derives mo...