
TypeError: No to_python (by-value) converter found for C++ type: class std::vector<unsigned char,class std::allocator<unsigned char> > #3

Open
RimelMj opened this issue Aug 5, 2021 · 11 comments


@RimelMj

RimelMj commented Aug 5, 2021

Hi, thank you for sharing your work! It's been really helpful.
Unfortunately, I get this error when I run "python straight_lane_agent_c51_training.py":

    lambda sensor_data: self._sensor_callback(SensorType.COLLISION_DETECTOR, sensor_data)
  File "D:\Carla9.9\WindowsNoEditor\PythonAPI\code\agent\simulation\simulation.py", line 140, in _sensor_callback
    data = sensor_data.other_actor.semantic_tags
TypeError: No to_python (by-value) converter found for C++ type: class std::vector<unsigned char,class std::allocator<unsigned char> >

Can you please help me? Thank you.

@kochlisGit
Owner

Hello @RimelMj

This is a weird error. Maybe it's the Carla version you are using. If you download the latest version of Carla (0.9.11), which I used when writing the scripts, the error should be fixed.

@RimelMj
Author

RimelMj commented Aug 5, 2021

I am using Carla 0.9.9.4; I'll try the latest one, thank you!

@kochlisGit
Owner

Yes, please, and let me know if this solution works for you.

@RimelMj
Author

RimelMj commented Aug 5, 2021

It worked fine, thank you for your help!

@RimelMj
Author

RimelMj commented Sep 4, 2021

Hi again! I have another question.
Is it normal that, after more than two weeks of training this code with transfer learning (sometimes the simulator freezes, so I resume training from the latest checkpoints), I still get negative average returns?
Thank you!

@kochlisGit
Owner

kochlisGit commented Sep 4, 2021

Hello,
No, that's fine! I have run into this myself, because it's too hard for the GPU to both render the environment and train the agent. However, if you want to solve this problem, you will have to run the Carla simulator in a Docker container (on Linux or WSL) and disable the simulation rendering, as described in the documentation here:

https://carla.readthedocs.io/en/latest/build_docker/
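
For reference, a rough sketch of what that setup can look like. The image name and flags below follow the linked docs for 0.9.11, but treat the exact flags as assumptions and check the documentation for your version:

```shell
# Pull the CARLA server image (version assumed; match it to your client).
docker pull carlasim/carla:0.9.11

# Run the server in the background, exposing the default RPC ports.
# The --gpus flag requires the NVIDIA container toolkit to be installed.
docker run -d --gpus all -p 2000-2002:2000-2002 \
    carlasim/carla:0.9.11 \
    /bin/bash ./CarlaUE4.sh -opengl
```

Rendering can additionally be disabled from the client side via the world settings (`no_rendering_mode = True` in the Python API), per the CARLA rendering docs.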

Another solution is to store the replay buffer on disk alongside the agent's policy. When you start the simulation again, the agent resumes training from the point where it crashed. It is demonstrated here:

https://github.com/tensorflow/agents/blob/master/docs/tutorials/10_checkpointer_policysaver_tutorial.ipynb
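
The pattern in that tutorial boils down to: checkpoint the replay buffer and step counter together, restore them on startup, and save periodically. A minimal stand-in sketch of the same save/restore pattern in plain Python, using pickle instead of the TF-Agents Checkpointer purely for illustration (the file name `training_state.pkl` and the helper names are assumptions, not part of the repo):

```python
import os
import pickle

STATE_PATH = "training_state.pkl"  # hypothetical checkpoint file

def save_state(replay_buffer, step):
    # Persist everything needed to resume: buffer contents and step count.
    with open(STATE_PATH, "wb") as f:
        pickle.dump({"replay_buffer": replay_buffer, "step": step}, f)

def load_state():
    # Restore from disk if a checkpoint exists, otherwise start fresh.
    if os.path.exists(STATE_PATH):
        with open(STATE_PATH, "rb") as f:
            state = pickle.load(f)
        return state["replay_buffer"], state["step"]
    return [], 0

# Simulated crash/resume cycle:
buffer, step = load_state()      # fresh start: empty buffer, step 0
buffer.append("transition-1")
save_state(buffer, step + 1)

buffer2, step2 = load_state()    # "after the crash": state is recovered
print(step2, buffer2)            # → 1 ['transition-1']
```

With TF-Agents, the `Checkpointer` utility shown in the tutorial handles the same bookkeeping (agent, policy, replay buffer, global step) for you.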

@RimelMj
Author

RimelMj commented Sep 4, 2021

Thank you for your solutions :) May I ask how long it took you to reach convergence?

@kochlisGit
Owner

The C51 agent performs complex computations, which means training takes a lot of time. More than 250,000 steps are required. If you have a fast GPU, that won't be a problem. If you can't wait that long, you will have to disable the rendering, as I mentioned above.

Another thing you might want to try is a different agent. From my experience, the PPO agent is considerably faster than C51 and can achieve impressive results just as quickly. I have made an example of how to use this agent here:

https://github.com/kochlisGit/DRL-Frameworks/blob/main/tf-agents/ppo_train.py

I haven't tested it on autonomous driving yet, but on OpenAI Gym the PPO agent sometimes achieves the same results as C51, only faster.

@RimelMj
Author

RimelMj commented Sep 4, 2021

Well, I am working on Windows and using a CPU, so I guess that's why the training is so slow.

@kochlisGit
Owner

kochlisGit commented Sep 4, 2021

Well, unless you are willing to wait 1-2 weeks, you will have to use one of the solutions above. You are training a vehicle here...

@RimelMj
Author

RimelMj commented Sep 4, 2021

Yes, I'll use the checkpoint policy saver! Thank you!
