Unrealized PnL Trading Environment

Let,

  • l(t_i) be the amount of long currency,

  • s(t_i) be the amount of short currency, and

  • p(t_i) be the price of the currency

at time instant t_i. The following assumptions are made:

  • the agent starts with zero initial capital

  • due to the short duration of episodes (the maximum allowed time range is 10 minutes):

    • the agent can borrow any amount of money at any timestep at a 0% interest rate, with a promise to settle at the end of the episode

    • future rewards are not discounted

When trading at time instant t_i, the agent is rewarded for its portfolio status between t_i and t_{i+1}, since the portfolio is kept unchanged over this entire duration.

Reward Function

At any timestep, the reward given to the agent is the unrealized profit and loss of its portfolio, i.e. the change in portfolio value produced by the net position held over [t_i, t_{i+1}]:

r(t_i) = \bigl(l(t_i) - s(t_i)\bigr)\,\bigl(p(t_{i+1}) - p(t_i)\bigr)

  • non-zero intermediate rewards allow the agent to converge to a trading strategy in fewer iterations than the realized PnL reward function

  • however, frequent intermediate rewards are often noisy and tend to destabilize the learning process
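As a concrete illustration, the sketch below computes this per-step reward in plain Python. The function name and arguments are illustrative only, they do not correspond to the environment's internal API, and the sketch assumes the reward takes the form given above (net position times the price change over the step).

def unrealized_pnl_reward(long_amount, short_amount, price_now, price_next):
    """Reward for holding the current portfolio from t_i to t_{i+1}."""
    net_position = long_amount - short_amount          # l(t_i) - s(t_i)
    return net_position * (price_next - price_now)     # change in portfolio value over the step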

Note

Given that future rewards are not discounted, the reward possesses the property that the sum of all intermediate rewards equals the single realized PnL reward given at the end of the episode. This guarantees convergence to an optimal policy.
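This identity can be checked numerically. The snippet below is illustrative only, with made-up prices and net positions: it sums the per-step rewards for an arbitrary sequence of positions and compares the total against the realized PnL obtained by settling every trade at the episode's final price.

prices = [100.0, 101.5, 100.8, 102.3, 103.0]   # p(t_0) .. p(t_4), made-up values
net_positions = [1.0, 2.0, -1.0, 0.5]          # l(t_i) - s(t_i) held over [t_i, t_{i+1})

# Undiscounted sum of the intermediate (unrealized PnL) rewards.
reward_sum = sum(
    n * (prices[i + 1] - prices[i]) for i, n in enumerate(net_positions)
)

# Realized PnL: each change in net position is a trade executed at p(t_i),
# and every open position is settled at the final price p(t_4).
trades = [net_positions[0]] + [
    net_positions[i] - net_positions[i - 1] for i in range(1, len(net_positions))
]
realized_pnl = sum(q * (prices[-1] - prices[i]) for i, q in enumerate(trades))

assert abs(reward_sum - realized_pnl) < 1e-9   # the two quantities coincide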

Usage

import gym
import gym_cryptotrading                 # importing the package registers the environment with gym
env = gym.make('UnRealizedPnLEnv-v0')
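Once the environment is created, it can be driven through the standard Gym interface. The loop below is a minimal sketch that continues from the snippet above using a random agent; the exact observation contents and any environment-specific configuration are not assumed here.

state = env.reset()                       # start a new episode
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()    # random action, purely for illustration
    state, reward, done, info = env.step(action)
    total_reward += reward                # undiscounted sum of intermediate rewards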