add_baostock_collector (#1641)
* add_baostock_collector

* modify_comments

* fix_pylint_error

* solve_duplication_methods

* modified the logic of update_data_to_bin

* modified the logic of update_data_to_bin

* optimize code

* optimize pylint issue

* fix pylint error

* changes suggested by the review

* fix CI failed

* fix CI failed

* fix issue 1121

* format with black

* optimize code logic

* optimize code logic

* fix error code

* drop warning during code runs

* optimize code

* format with black

* fix bug

* format with black

* optimize code

* optimize code

* add comments
SunsetWolf authored Nov 21, 2023
1 parent ceff886 commit 98f569e
Showing 17 changed files with 722 additions and 318 deletions.
1 change: 1 addition & 0 deletions .github/workflows/test_qlib_from_source.yml
@@ -102,6 +102,7 @@ jobs:
- name: Check Qlib with pylint
run: |
pylint --disable=C0104,C0114,C0115,C0116,C0301,C0302,C0411,C0413,C1802,R0401,R0801,R0902,R0903,R0911,R0912,R0913,R0914,R0915,R1720,W0105,W0123,W0201,W0511,W0613,W1113,W1514,E0401,E1121,C0103,C0209,R0402,R1705,R1710,R1725,R1735,W0102,W0212,W0221,W0223,W0231,W0237,W0612,W0621,W0622,W0703,W1309,E1102,E1136 --const-rgx='[a-z_][a-z0-9_]{2,30}$' qlib --init-hook "import astroid; astroid.context.InferenceContext.max_inferred = 500; import sys; sys.setrecursionlimit(2000)"
pylint --disable=C0104,C0114,C0115,C0116,C0301,C0302,C0411,C0413,C1802,R0401,R0801,R0902,R0903,R0911,R0912,R0913,R0914,R0915,R1720,W0105,W0123,W0201,W0511,W0613,W1113,W1514,E0401,E1121,C0103,C0209,R0402,R1705,R1710,R1725,R1735,W0102,W0212,W0221,W0223,W0231,W0237,W0246,W0612,W0621,W0622,W0703,W1309,E1102,E1136 --const-rgx='[a-z_][a-z0-9_]{2,30}$' scripts --init-hook "import astroid; astroid.context.InferenceContext.max_inferred = 500; import sys; sys.setrecursionlimit(2000)"
# The following flake8 error codes were ignored:
# E501 line too long
81 changes: 81 additions & 0 deletions scripts/data_collector/baostock_5min/README.md
@@ -0,0 +1,81 @@
## Collector Data

### Get Qlib data (`bin file`)

- get data: `python scripts/get_data.py qlib_data`
- parameters:
- `target_dir`: save dir, by default *~/.qlib/qlib_data/cn_data_5min*
- `version`: dataset version, value from [`v2`], by default `v2`
- `v2` end date is *2022-12*
- `interval`: `5min`
- `region`: `hs300`
- `delete_old`: delete existing data from `target_dir` (*features, calendars, instruments, dataset_cache, features_cache*), value from [`True`, `False`], by default `True`
- `exists_skip`: if data already exists in `target_dir`, skip `get_data`, value from [`True`, `False`], by default `False`
- examples:
```bash
# hs300 5min
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/hs300_data_5min --region hs300 --interval 5min
```

### Collect *Baostock high frequency* data to qlib
> Collect *Baostock high frequency* data and *dump* it into `qlib` format.
> If the above ready-made data can't meet users' requirements, users can follow this section to crawl the latest data and convert it to qlib-data.
1. download data to csv: `python scripts/data_collector/baostock_5min/collector.py download_data`

This will download the raw data (date, symbol, open, high, low, close, volume, amount, adjustflag) from Baostock to a local directory, one file per symbol.
- parameters:
- `source_dir`: directory in which to save the downloaded csv files
- `interval`: `5min`
- `region`: `HS300`
- `start`: start datetime, by default *None*
- `end`: end datetime, by default *None*
- examples:
```bash
# cn 5min data
python collector.py download_data --source_dir ~/.qlib/stock_data/source/hs300_5min_original --start 2022-01-01 --end 2022-01-30 --interval 5min --region HS300
```
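The download step above can be sketched directly against the baostock API (`bs.login` and `bs.query_history_k_data_plus` are real baostock calls; the helper names and symbol mapping here are illustrative, not qlib's actual collector code):

```python
import pandas as pd

# Fields requested from baostock for 5min bars (matches the list above).
FIELDS = "date,time,code,open,high,low,close,volume,amount,adjustflag"

def to_baostock_code(symbol: str) -> str:
    # Map e.g. "sh600000" to baostock's "sh.600000" format (illustrative helper).
    return f"{symbol[:2].lower()}.{symbol[2:]}"

def download_symbol(symbol: str, start: str, end: str) -> pd.DataFrame:
    import baostock as bs  # imported lazily so the helper above works offline
    bs.login()
    rs = bs.query_history_k_data_plus(
        to_baostock_code(symbol),
        FIELDS,
        start_date=start,
        end_date=end,
        frequency="5",   # "5" means 5-minute bars in baostock
        adjustflag="3",  # unadjusted prices; normalization happens in step 2
    )
    rows = []
    while rs.error_code == "0" and rs.next():
        rows.append(rs.get_row_data())
    bs.logout()
    return pd.DataFrame(rows, columns=rs.fields)
```

Each returned frame would then be written to `source_dir` as one csv per symbol.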
2. normalize data: `python scripts/data_collector/baostock_5min/collector.py normalize_data`

This will:
1. Adjust the high, low, close, and open prices using adjclose.
2. Rescale the high, low, close, and open prices so that the close price on the first valid trading date is 1.
- parameters:
- `source_dir`: csv directory
- `normalize_dir`: result directory
- `interval`: `5min`
> if **`interval == 5min`**, `qlib_data_1d_dir` cannot be `None`
- `region`: `HS300`
- `date_field_name`: column *name* identifying time in csv files, by default `date`
- `symbol_field_name`: column *name* identifying symbol in csv files, by default `symbol`
- `end_date`: if not `None`, data is normalized up to and including `end_date`; if `None`, this parameter is ignored; by default `None`
- `qlib_data_1d_dir`: qlib directory (1d data);
if `interval==5min`, `qlib_data_1d_dir` cannot be `None`, because normalizing 5min data requires the 1d data;
```
# qlib_data_1d can be obtained like this:
python scripts/get_data.py qlib_data --target_dir ~/.qlib/qlib_data/cn_data --interval 1d --region cn --version v3
```
- examples:
```bash
# normalize 5min cn
python collector.py normalize_data --qlib_data_1d_dir ~/.qlib/qlib_data/cn_data --source_dir ~/.qlib/stock_data/source/hs300_5min_original --normalize_dir ~/.qlib/stock_data/source/hs300_5min_nor --region HS300 --interval 5min
```
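The two normalization steps can be sketched in pandas (a minimal illustration using a synthetic `factor` adjustment column; the real collector derives the adjustment from the 1d qlib data, which is why `qlib_data_1d_dir` is required):

```python
import pandas as pd

PRICE_COLS = ["open", "high", "low", "close"]

def normalize_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Step 1: adjust raw prices; step 2: rescale so the first valid close == 1."""
    out = df.copy()
    # Step 1: apply the adjustment factor (adjclose / close) to all price columns.
    out[PRICE_COLS] = out[PRICE_COLS].mul(out["factor"], axis=0)
    # Step 2: divide by the close price of the first valid trading bar.
    first_close = out["close"].dropna().iloc[0]
    out[PRICE_COLS] = out[PRICE_COLS] / first_close
    return out
```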
3. dump data: `python scripts/dump_bin.py dump_all`
This will convert the normalized csv files into numpy arrays and store them under the `features` directory, one directory per symbol and one binary file per field.
- parameters:
- `csv_path`: stock data path or directory, **normalize result(normalize_dir)**
- `qlib_dir`: qlib (dump) data directory
- `freq`: transaction frequency, by default `day`
> `freq_map = {1d: day, 5min: 5min}`
- `max_workers`: number of threads, by default *16*
- `include_fields`: fields to dump, by default `""`
- `exclude_fields`: fields not to dump, by default `""`
> dump_fields = `include_fields if include_fields else set(symbol_df.columns) - set(exclude_fields) if exclude_fields else symbol_df.columns`
- `symbol_field_name`: column *name* identifying symbol in csv files, by default `symbol`
- `date_field_name`: column *name* identifying time in csv files, by default `date`
- examples:
```bash
# dump 5min cn
python dump_bin.py dump_all --csv_path ~/.qlib/stock_data/source/hs300_5min_nor --qlib_dir ~/.qlib/qlib_data/hs300_5min_bin --freq 5min --exclude_fields date,symbol
```
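The `include_fields`/`exclude_fields` selection rule quoted in the parameter list can be restated as a small helper (an illustrative sketch mirroring the quoted rule, not qlib's actual `dump_bin.py` code):

```python
def resolve_dump_fields(columns, include_fields=(), exclude_fields=()):
    """Pick which columns to dump, following the rule from the note above."""
    if include_fields:
        # Explicit include list wins.
        return set(include_fields)
    if exclude_fields:
        # Otherwise dump everything except the excluded columns.
        return set(columns) - set(exclude_fields)
    # No filters: dump every column.
    return set(columns)
```

For example, the `--exclude_fields date,symbol` flag in the dump command above leaves only the price/volume columns to be written as bin files.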
