WT in the paper leaks info #6
I am skeptical of the results from the model as described in the paper - that is why I am attempting to replicate both the model and apply it to the raw data. The …
@JannyKul by applying the wavelet transform separately on each … Example applied to …
issue reopened - this has not been entirely resolved yet, but I am confident I am on the right track
Implemented as of v0.1.2 / b715d88
Hope to see the amazing results. Please keep refining :)
@JannyKul to prevent/side-step the issue of the wavelet transform leaking data into the rest of the model, I'm going to see if I can save the … While I'm almost certain this will lower the overall accuracy of the model, it is a technically more correct/accurate approach to prevent data from leaking. That being said, running the wavelet transform independently on each …
EDIT: …
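To make the leakage concrete, here is a minimal numpy sketch - illustrative only, not code from this repo; `haar_denoise` is a simplified one-level Haar stand-in for the paper's wavelet step. Denoising the full series before splitting lets statistics from the test period influence training rows; denoising each split (or rolling window) separately keeps the transform causal:

```python
import numpy as np

def haar_denoise(x):
    """One-level Haar wavelet denoise with soft thresholding.
    Simplified, illustrative stand-in for the paper's WT step."""
    x = np.asarray(x, dtype=float)
    n = len(x) - len(x) % 2                     # truncate to even length
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)      # detail coefficients
    thr = np.std(d)                             # simple threshold for illustration
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft threshold
    out = np.empty(n)
    out[0::2] = (a + d) / np.sqrt(2)            # inverse Haar transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

prices = np.cumsum(np.random.default_rng(0).normal(size=400)) + 100

# Leaky: denoise the full series, then split -- the threshold is
# estimated from the whole history, so test-period values influence
# the denoised training rows.
leaky = haar_denoise(prices)
train_leaky, test_leaky = leaky[:300], leaky[300:]

# Causal: denoise each split (or each rolling window) separately,
# so no future observation touches a training row.
train_causal = haar_denoise(prices[:300])
test_causal = haar_denoise(prices[300:])
```

The training rows come out different in the two regimes, which is exactly the leakage being discussed: a model trained on `train_leaky` has already "seen" the test period through the transform.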
@timothyyu I agree with your thought process but if we just take a step back for a moment and start with what we know - we can use ML to give us either a momentum signal or a mean-reversion signal. Which we construct depends on the features we extract from the time series. If we smooth out the over-reaction/under-reaction movements using a simple moving average and train with this, we're creating a momentum signal. If we feature-engineer with volatility/range then we create a mean-reversionary signal. A WT doesn't actually help us with either; we cut off the outliers, so trying to get our model to find a mean-reversionary signal will fail, and the direction changes are so erratic that a momentum signal will fail too. I suspect your technique of applying WT on train/val/test separately will produce a model that fits very well on the train set but never generalises to the val/test set. Saving …
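The two feature families described above can be sketched in a few lines of numpy (window length and helper names are illustrative, not from the repo): a trailing SMA gap as the momentum-style feature, and a rolling z-score as the mean-reversion-style feature.

```python
import numpy as np

def sma(x, w):
    """Trailing simple moving average; first w-1 values are NaN,
    so each value uses only past/current observations."""
    out = np.full(len(x), np.nan)
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    out[w - 1:] = (c[w:] - c[:-w]) / w
    return out

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=500)) + 100

# Momentum-style feature: price relative to its smoothed trend
# (price above its SMA suggests continuation).
momentum = prices - sma(prices, 20)

# Mean-reversion-style feature: z-score of price against rolling
# mean/volatility -- a large |z| suggests an over-extension.
mean = sma(prices, 20)
sq_mean = sma(prices ** 2, 20)
vol = np.sqrt(np.maximum(sq_mean - mean ** 2, 1e-12))
zscore = (prices - mean) / vol
```

Both features are computed from trailing windows only, so neither introduces the look-ahead problem that a full-series WT does.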
@JannyKul For a streaming/online model, a dynamic … In attempting to replicate the results of the paper, I fully intend to go beyond what is described in the original paper (and existing attempts to implement said model) - if there are errors in implementation or design, I will use my best academic/empirical judgement to evaluate said errors and address them. Also, see issue #7 - it is highly relevant to this issue, specifically in the application of scaling/denoising.
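The past-only scaling that a streaming/online model needs can be sketched like this (the `ExpandingScaler` class is hypothetical, built on Welford's online mean/variance update; it is not code from this repo or issue #7):

```python
import numpy as np

class ExpandingScaler:
    """Online z-score scaler: statistics are updated only from
    observations already seen, so scaling at time t never peeks
    ahead into the future of the stream."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford)

    def update(self, x):
        # Welford's online mean/variance update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def transform(self, x):
        std = np.sqrt(self.m2 / self.n) if self.n > 1 else 1.0
        return (x - self.mean) / (std if std > 0 else 1.0)

rng = np.random.default_rng(2)
stream = rng.normal(loc=5.0, scale=2.0, size=1000)

scaler = ExpandingScaler()
scaled = []
for x in stream:
    scaler.update(x)                     # incorporate the new point...
    scaled.append(scaler.transform(x))   # ...then scale with past-only stats
```

The same pattern applies to denoising: fit whatever statistics the transform needs on data up to time t, never on the full series.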
incomplete/work in progress: |
Hey Timothy, I added a comment on DeepLearning_Financial too about this and tried to expand here. There's no other way they get to the results they do.
Interested in your thoughts.