Can't get attribute 'PlayableAction_l2rpn_case14_sandbox_l2rpn_case14_sandbox' on <module 'grid2op.Space.GridObjects' #514
Hi, please follow the README (https://github.com/rte-france/Grid2Op#known-issues) or the documentation https://grid2op.readthedocs.io/en/latest/gym.html#python-complains-about-pickle for this particular issue. The problem is known (unfortunately), but I can't find a proper solution at the moment.
Hi, thanks for letting me know about the issue and pointing me to the solution. I tried modifying the two functions based on the solution provided to the following, but now I am getting a different error. Essentially, instead of initializing the environment again within
And the Env_RLLIB class is
The error message I am getting is
Alternatively, do you have an updated training script for running Grid2Op with the latest version of RLLIB, or any plans to make one in the near future? That would solve this issue of mine. Thanks!
Hello. Well, I cannot update everything every time there is a problem with an external library, unfortunately. I would very much like to have the time to do so, however. There is no plan as of today to make things work with ray "in the near future" as far as I know. Maybe if you look at the l2rpn-baselines package there is some material to help you get started. You also have some examples here https://github.com/rte-france/Grid2Op/tree/dev_multiagent/examples/multi_agents if you like (in a multi-agent context, but you get the idea). I can spot a few things, however: please consider doing exactly as the documentation says. You need to define the environment in a first script:

```python
import grid2op
from lightsim2grid import LightSimBackend   # backend used below
from grid2op.Reward import L2RPNReward      # reward used below

env_name = "l2rpn_case14_sandbox"
custom_path_dir = "/Users/paula/data_grid2op/"
backend_class = LightSimBackend
reward_class = L2RPNReward

if custom_path_dir is not None:
    # env = grid2op.MakeEnv.make_from_dataset_path(custom_path_dir + env_name, ...)
    env = grid2op.make(env_name,
                       reward_class=reward_class,
                       backend=backend_class())
else:
    env = grid2op.make(env_name,
                       reward_class=reward_class,
                       backend=backend_class())

env.generate_classes()
```

Run it. Then you can load your classes without calling env.generate_classes() again. Also, another tip to make your issues clearer:
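For reference, a minimal sketch of what the second script (the one that actually trains) could look like, using the `experimental_read_from_local_dir` flag described in the linked documentation to reload the generated classes (the environment name, backend and reward are the ones from the first script):

```python
# Second script: reload the classes that generate_classes() wrote to disk
# instead of regenerating them in every process.
import grid2op
from lightsim2grid import LightSimBackend
from grid2op.Reward import L2RPNReward

env_name = "l2rpn_case14_sandbox"
env = grid2op.make(env_name,
                   reward_class=L2RPNReward,
                   backend=LightSimBackend(),
                   experimental_read_from_local_dir=True)
```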
Your problem seems to be an issue with grid2op and not with ray (your environment does not converge). Have you tried to simply create an instance with `env_rllib = Env_RLLIB(your_config)` and see the issue? It's often 2 or 3 orders of magnitude easier and faster to debug without "multi processing" involved.
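A minimal single-process smoke test along those lines might look like this (a sketch assuming Env_RLLIB takes a config dict and follows the classic gym reset/step API; the config keys are placeholders, not the ones actually used in this thread):

```python
# Single-process debugging: build the environment directly, no ray actors involved.
config = {"env_name": "l2rpn_case14_sandbox"}   # placeholder keys, adapt to your Env_RLLIB
env_rllib = Env_RLLIB(config)

obs = env_rllib.reset()
done = False
while not done:
    action = env_rllib.action_space.sample()            # random actions are enough to exercise step()
    obs, reward, done, info = env_rllib.step(action)    # 4-tuple gym API assumed here
print("episode finished, last reward:", reward)
```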
Hi, thanks for updating the script on the l2rpn-baselines repo for PPO_RLLIB. Based on this I made a new script and that one works now. I tried developing the script I shared above based on an older version of that repo. I wasn't aware that it is possible to call the
Although I think the train script can be improved further, as we are creating the environment twice: the first one just to convert the environment observation and action space into gym format and then pass into the
There was a minor issue and I had to add the following line
Thanks for the other suggestions, I will incorporate them next time to make it easier for you to debug.
Thanks, can you put these issues in the appropriate GitHub repo please? If it's here I'll 100% forget about them (and right now I don't have much time to work on l2rpn-baselines :-/).
Sure, I have created this issue on the l2rpn repo. Yes, I understand, take a look when possible.
Thanks a lot, I'll give it a try as soon as possible (which might take a while unfortunately :-/).
Environment
- 1.9.3
- osx
- 2.6.1
Hi, I was trying to simplify the training script of Grid2Op with RLLIB and wrote the following code based on the example specified here on the Ray website for custom environments. But I am getting the following error. Not sure if the issue is happening because I am not using an agent derived from the BaseAgent class or if it is something else. Any guidance in resolving it would be appreciated. The code that I am using is specified below.
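(The original snippet is not preserved in this copy of the thread; the sketch below only illustrates what such a training script typically looks like with the Ray 2.x API. Env_RLLIB and the env_config keys are placeholders, not the code actually used.)

```python
import ray
from ray.rllib.algorithms.ppo import PPOConfig

ray.init()

config = (
    PPOConfig()
    .environment(env=Env_RLLIB,                                     # custom env class, see below
                 env_config={"env_name": "l2rpn_case14_sandbox"})   # placeholder keys
    .rollouts(num_rollout_workers=1)
    .framework("torch")
)
algo = config.build()

for _ in range(5):
    result = algo.train()
    print(result["episode_reward_mean"])
```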
The Env_RLLIB class is
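(Likewise not preserved; below is a hypothetical sketch of a grid2op-to-gym wrapper of the kind usually passed to RLlib, built on grid2op's GymEnv compatibility class.)

```python
import gym
import grid2op
from grid2op.gym_compat import GymEnv


class Env_RLLIB(gym.Env):
    """Hypothetical sketch only, not the class actually used in this issue."""

    def __init__(self, env_config):
        env_name = env_config.get("env_name", "l2rpn_case14_sandbox")
        # experimental_read_from_local_dir=True could be added here once the
        # classes have been generated (see the discussion above).
        self._g2op_env = grid2op.make(env_name)
        # GymEnv is grid2op's gym compatibility wrapper; it exposes Dict spaces.
        self._gym_env = GymEnv(self._g2op_env)
        self.observation_space = self._gym_env.observation_space
        self.action_space = self._gym_env.action_space

    def reset(self):
        # Depending on the gym/gymnasium version installed, this returns
        # either obs or (obs, info).
        return self._gym_env.reset()

    def step(self, action):
        return self._gym_env.step(action)
```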