Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct a portfolio. By filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of ...
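To fix ideas, the following is a minimal Python sketch of one way risk awareness can enter a bandit selection rule: a UCB1-style score penalized by sample variance. The function name risk_aware_ucb, the variance penalty, and the parameters c and risk_weight are illustrative assumptions; the excerpt above truncates before naming the quantity the paper's algorithm actually minimizes.

```python
import numpy as np

def risk_aware_ucb(returns_history, t, c=2.0, risk_weight=1.0):
    # returns_history: list of 1-D arrays of observed returns, one per asset.
    # t: current round (1-indexed). Illustrative sketch, not the paper's policy.
    scores = np.empty(len(returns_history))
    for i, obs in enumerate(returns_history):
        n = len(obs)
        if n == 0:
            return i  # pull every asset at least once before scoring
        bonus = np.sqrt(c * np.log(t) / n)      # UCB1-style exploration bonus
        penalty = risk_weight * np.var(obs)     # hedged stand-in for the risk term
        scores[i] = obs.mean() + bonus - penalty
    return int(np.argmax(scores))
```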
This paper proposes an ex-post comparison of portfolio selection strategies. These are applied to c...
The stochastic multi-armed bandit problem is a popular model of the exploratio...
In a multi-armed bandit (MAB) problem a gambler needs to choose at each round of play one of K arms,...
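For concreteness, here is a minimal UCB1 round loop in Python, with hypothetical names and rewards assumed to lie in [0, 1]: the gambler plays each of the K arms once, then repeatedly pulls the arm with the highest optimistic mean estimate.

```python
import math
import random

def ucb1(n_arms, horizon, pull):
    """Run UCB1 for `horizon` rounds; `pull(k)` returns a reward in [0, 1]."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize the estimates
        else:
            arm = max(range(n_arms), key=lambda k:
                      sums[k] / counts[k] + math.sqrt(2 * math.log(t) / counts[k]))
        counts[arm] += 1
        sums[arm] += pull(arm)
    return counts, sums

# Usage: three Bernoulli arms with hidden means 0.2, 0.5, 0.7.
means = [0.2, 0.5, 0.7]
counts, _ = ucb1(3, 10_000, lambda k: float(random.random() < means[k]))
```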
Stochastic multi-armed bandits solve the Exploration-Exploitation dilemma and ultimately maximize th...
Market-exposed assets such as stocks yield higher returns than cash but carry higher risk, while cash-equ...
A multi-armed bandit is the simplest problem for studying learning under uncertainty when decisions affe...
The stochastic multi-armed bandit problem is an important model for studying the exploration-exploit...
We devise a hierarchical decision-making architecture for portfolio optimization on multiple markets...
As a fundamental problem in algorithmic trading, portfolio optimization aims to maximize the cumulat...
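As one concrete instance of a cumulative objective, the sketch below computes the terminal wealth of a constantly rebalanced portfolio. The strategy and names are illustrative assumptions, since the excerpt truncates before stating which cumulative quantity is maximized.

```python
import numpy as np

def crp_wealth(price_relatives, weights):
    """Terminal wealth of a constantly rebalanced portfolio.

    price_relatives: (T, K) array, entry [t, k] = price_t / price_{t-1} of asset k.
    weights: length-K allocation, nonnegative and summing to one.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0)
    per_period = price_relatives @ w   # portfolio growth factor each period
    return float(np.prod(per_period))  # W_T = prod_t <w, x_t>

# Example: two assets over three periods, 60/40 split.
x = np.array([[1.02, 0.99], [0.97, 1.03], [1.01, 1.00]])
print(crp_wealth(x, [0.6, 0.4]))
```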
This dissertation examines portfolio selection under systemic risk using performance measures. In th...
Existing methods in portfolio management deterministically produce an optimal portfolio. However, ac...
In this study, we analyze portfolio selection in multiple-period consumption an...
Strategic decision-making over valuable resources should consider risk-averse objectives. Many pract...
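Risk-averse objectives of this kind are often expressed through Conditional Value-at-Risk; below is a small empirical CVaR estimator as a generic illustration, not the cited work's formulation.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.95):
    """Average of the worst (1 - alpha) fraction of losses (empirical CVaR)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * len(losses))))  # tail sample count
    return float(losses[-k:].mean())                     # mean of k largest losses

# Example: CVaR at the 95% level for simulated daily losses.
rng = np.random.default_rng(0)
print(empirical_cvar(rng.normal(0.0, 1.0, 10_000), alpha=0.95))
```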