Academic Journal Article

Automated Stock Trading Using Deep Reinforcement Learning And Portfolio Optimization.
Document Type
Article
Source
Webology. 2022, Vol. 19 Issue 2, p428-449. 22p.
ISSN
1735-188X
Abstract
Trading using intelligent agents has been widely researched for many years. The question that arises is whether a self-learning agent can take on the role of a human trader: can Reinforcement Learning (RL) trade effectively to maximize reward? When trading, an investor's goal is to optimize reward, usually profit. Online trading is usually depicted as a two-step decision-making process: (1) analyzing market conditions and (2) taking optimal action. In this report, we describe Deep Learning (DL) methods as a solution to automate this online trading process. We present a Deep Reinforcement Learning (DRL) algorithm in which the agent learns the environment, improves itself based on the rewards earned, and simultaneously takes correct actions (makes trades). We applied DRL in stock markets to train a single-stock trading agent with the goal of maximizing income in the short and long term. To imitate a human trader, we also introduced technical indicators and unrealized returns as part of the input state given to the agent. We compared the DQN algorithm with Double DQN and Dueling Double DQN to find the best agent for maximizing the profits of the investment strategy. The challenges are addressed using Deep Q-Network (DQN), Double DQN (DDQN), and Dueling Double DQN (DDDQN) agents in a simulated trading environment. Our model is inspired by off-policy learning and its application in video games. We trained and tested these agents on all S&P 500 stocks and show that, using only price and volume information, a deep reinforcement learning agent can make money. Moreover, we introduce a new workflow in algo-trading strategy development by incorporating risk modelling and strategy optimization into the reward engineering of the agent. We also employ the Auto ARIMA and Holt-Winters models for stock price prediction, and linear programming for portfolio optimization.
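The key distinction among the agents compared in the abstract is how the bootstrap target is computed. A minimal sketch of that difference, with illustrative numbers standing in for the online and target networks (the action set and all values are assumptions, not taken from the paper):

```python
import numpy as np

# Hedged sketch: how a Double DQN update target differs from a plain DQN
# target for a trading agent. The two arrays stand in for the online and
# target networks' Q(s', a) estimates; all numbers are illustrative.
rng = np.random.default_rng(0)
n_actions = 3  # hypothetical action set, e.g. buy / hold / sell

q_online = rng.normal(size=n_actions)   # online network's Q(s', a)
q_target = rng.normal(size=n_actions)   # target network's Q(s', a)
reward, gamma = 1.0, 0.99

# Plain DQN: max over the target network's own estimates,
# which is known to overestimate action values.
dqn_target = reward + gamma * q_target.max()

# Double DQN: the online network selects the action, the target network
# evaluates it, reducing the overestimation bias.
best_action = int(q_online.argmax())
ddqn_target = reward + gamma * q_target[best_action]

print(dqn_target, ddqn_target)
```

By construction the Double DQN target can never exceed the plain DQN target for the same networks, since `q_target[best_action] <= q_target.max()`; the dueling variant changes the network architecture rather than this target.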
Continuously retraining the model on the most recent market data keeps it current with the latest market behavior, so it can be deployed for a larger number of portfolios in the future; the patterns learned can likewise be analyzed to forecast better investment plans in the stock market. "Helping people optimally invest in stocks, based on present market analysis, would help them fetch maximum returns while suffering comparatively smaller losses." [ABSTRACT FROM AUTHOR]
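The linear-programming portfolio optimization mentioned in the abstract can be sketched as follows. The asset returns, the per-asset weight cap, and the three-asset universe are illustrative assumptions, not values from the paper:

```python
# Hedged sketch: choose portfolio weights by linear programming,
# maximizing expected return subject to full investment and a crude
# per-asset cap (in place of a variance-based risk model).
from scipy.optimize import linprog

expected_returns = [0.08, 0.12, 0.05]   # hypothetical annualized returns
# linprog minimizes, so negate the returns to maximize them.
c = [-r for r in expected_returns]
# Equality constraint: weights sum to 1 (fully invested).
A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]
# Bounds: no shorting, and at most 50% in any single asset.
bounds = [(0.0, 0.5)] * 3

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
weights = res.x
print(weights)
```

With these numbers the optimizer concentrates the allowed 50% in the highest-return asset and the remainder in the next best, which is the expected behavior of a return-maximizing LP under a cap constraint.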