
DRL-TaskMapping

A deep reinforcement learning method for solving task mapping problems.

Requirements

Containerized Environment (Recommended)

Ensure you meet the following system requirements:

Bare Metal

Installation

Download the DRL-TaskMapping Source Code

$ git clone https://github.com/NTHU-LSALAB/DRL-TaskMapping.git
$ cd DRL-TaskMapping
$ git submodule update --init --recursive --progress

Setting up the demo environment

Build the docker image

$ bash scripts/build.sh

Extract demo train/test cases

$ tar -xf data/testcases/sample-test.tar.xz -C data/testcases
$ tar -xf data/testcases/sample-train.tar.xz -C data/testcases

Launch the container

$ bash scripts/launch.sh

In the demo, we use an MPI program to explore the communication pattern. Compile it:

$ make -C /data/src
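If you want to exercise the MPI application yourself rather than rely only on the bundled sample testcases, the compiled binary can be launched under mpirun. The executable name and rank count below are placeholders, not values documented by the repository, so check data/src and its Makefile for the actual target:

$ ls /data/src                      # check which executable the Makefile produced
$ mpirun -np 4 /data/src/<mpi_app>  # placeholder name; run the MPI application to observe its communication pattern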

Training

Enter the DRL-TaskMapping directory and run the training script. The demo trains the model for only 1024 steps; increase the num_timesteps parameter to train longer (see the sketch after the commands below).

$ cd workspace/DRL-TaskMapping
$ bash scripts/train.sh
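The demo's step budget is set inside scripts/train.sh. If the script drives the vendored Baselines runner, the budget is typically exposed through its --num_timesteps flag; the entry point, algorithm, and environment id below are assumptions rather than the repository's documented interface, so substitute the values actually used in scripts/train.sh:

# Sketch only: assumes train.sh wraps the bundled Baselines runner.
# The algorithm and environment id are placeholders.
$ python -m baselines.run --alg=ppo2 --env=<task-mapping-env-id> --num_timesteps=1e6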

Inference

Run play.sh to perform inference; the output is logged to logs/<num_env>/<num_eval>/<checkpoint>/runtime-*.

$ bash scripts/play.sh
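After play.sh finishes, the logged output can be inspected directly from the log directory; the placeholder path segments below depend on your run configuration:

$ ls logs/                                               # list available runs
$ cat logs/<num_env>/<num_eval>/<checkpoint>/runtime-*   # view the logged inference output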

Code Structure

DRL-TaskMapping
├── data
│   ├── src                # MPI application 
│   ├── testcases          # Communication pattern
│   └── xmldescs           # Architecture description
├── baselines              # Modified Baselines library with our environment
│   ├── scripts            # Demo scripts
│   ├── baselines          # Baselines library
│   └── ...
├── docker
│   └── Dockerfile         # Dockerfile
└── scripts                # Scripts to build the Docker image and launch the container
