Questions #204
Comments
Hi, thanks for your interest and the questions.
I hope this answers some of your questions. If something is still unclear, please don't hesitate to ask :)
Thanks for your answers so far. I have a few more questions:
As Giskard is a local (gradient-based) planner, it may not be able to find a collision-free trajectory.
So these designators are not yet fully implemented?
I'm using IPython 8.26.0 and Python 3.10.12, and the problem definitely exists (again).
In the case that the trajectory is not collision-free, we sample another seed state for the robot and try again until a limit is reached, in which case the goal is deemed unreachable.
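In pseudocode, the scheme is roughly the following (the three callables are placeholders for whatever the planner provides, not actual PyCRAM or Giskard API):

```python
from typing import Any, Callable, Optional

# Rough sketch of the retry scheme described above. The three callables are
# placeholders for whatever the planner actually provides, not real PyCRAM/Giskard API.
def find_collision_free_trajectory(
    goal: Any,
    sample_seed_state: Callable[[], Any],
    plan_trajectory: Callable[[Any, Any], Any],
    is_collision_free: Callable[[Any], bool],
    max_attempts: int = 10,  # illustrative limit
) -> Optional[Any]:
    for _ in range(max_attempts):
        seed = sample_seed_state()                # new random seed configuration
        trajectory = plan_trajectory(seed, goal)  # local, gradient-based planning
        if is_collision_free(trajectory):
            return trajectory
    return None  # limit reached: the goal is deemed unreachable
```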
Yes, they are not completely implemented yet. However, I'm positive that this will change soon.
Then this is definitely a problem. For the moment, I would suggest you use the VizMarkerPublisher instead.
Doesn't starting from another seed state invalidate the trajectory for execution? How do you connect the current robot state to the new seed state?
The new seed state is only used for calculating that specific trajectory, so the rest of the plan is unaffected by it.
Thanks a lot for providing this software. I'm feeling much more comfortable with the Python code than with the Lisp code.
Over the last few days I worked my way through the docs, and I've got a couple of open questions, both fundamental and concrete ones:
What is the goal/roadmap for PyCRAM? Is it intended as a replacement for CRAM?
How feature-complete is it compared to CRAM?
In particular, I was missing the connection to RoboKudo (how is the BulletWorld populated from perception?) and KnowRob's logical reasoning. Other KnowRob features, like the inner world model and OWL, are directly implemented in PyCRAM.
I'm missing the internal simulation of a whole plan. All given examples directly `perform` a plan's individual actions. Is there a way to simulate a plan before execution? Should one `perform` the plan first in a prospection world and then on the real robot?

CRAM seems to operate on a symbolic level only: plans just consider key frames, and existing motion designators just teleport the robot to the desired configuration. Do you also consider trajectory planning, i.e. generating a collision-free trajectory? IMO, this should be part of a planning framework.
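Regarding the simulate-before-execution question: roughly, what I have in mind is something like the following (all names here are invented for illustration; I'm asking whether an equivalent mechanism exists in PyCRAM):

```python
# Illustration only: run_plan_safely, prospection_world, real_world and execute()
# are invented names, not actual PyCRAM API.
def run_plan_safely(plan, prospection_world, real_world):
    # 1. Dry-run the complete plan in an internal (prospection) world first.
    result = plan.execute(world=prospection_world)
    if not result.successful:
        raise RuntimeError("Plan failed in simulation; not executing on the real robot")
    # 2. Only if the simulated run succeeds, execute the same plan on the real robot.
    return plan.execute(world=real_world)
```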
The difference between Motion and Action Designators did not become very clear to me.
Sure, motion designators are supposed to represent atomic low-level motions, while the latter are more abstract.
Considering navigation, for example, there exist `NavigateAction` and `MoveMotion`, which seem to perform exactly the same task: navigating the robot to a target location. The only difference is that `MoveMotion` takes a single target, while `NavigateAction` takes a list, but only the first element is used (pycram/src/pycram/designators/action_designator.py, line 305 in f73867a).
So, seemingly both designators serve exactly the same purpose.
More generally, all action designators seem to return just the first element of their targets list.
Is this intended? If so, what is the purpose of having a list?
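For concreteness, this is roughly how I understand the two are meant to be used; the imports, the parameter names (`target` vs. `target_locations`) and the resolve/perform chain are my reading of the examples and may not be exact:

```python
# How I understand the two designators are used; parameter names and the
# resolve()/perform() chain follow the examples as I read them and may differ.
from pycram.designators.action_designator import NavigateAction
from pycram.designators.motion_designator import MoveMotion

goal_pose = ...  # some target pose for the robot base

# Motion designator: takes a single target
MoveMotion(target=goal_pose).perform()

# Action designator: takes a list of targets, of which apparently only the first is used
NavigateAction(target_locations=[goal_pose]).resolve().perform()
```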
Other designators also just resolve to the first item (pycram/src/pycram/designator.py, line 679 in f73867a).
Is it possible (and if so, how) to iterate through all possible plan resolutions (and verify them via internal simulation) instead of always greedily choosing the first parameterization?
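What I would like to be able to write is something like this (purely hypothetical, just to illustrate the question):

```python
# Purely hypothetical usage, only to illustrate the question: iterate over every
# parameterization a designator could resolve to, validate each one in an internal
# simulation, and perform the first one that checks out.
def perform_first_valid(action_designator, validate_in_simulation):
    for resolved in action_designator:        # hypothetical: yields all resolutions
        if validate_in_simulation(resolved):  # e.g. a prospection-world check
            return resolved.perform()
    raise RuntimeError("No valid parameterization found")
```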
The difference between the `Reachable` and `Accessing` location designators didn't become clear either. What is their semantic difference? Both seem to evaluate the reachability of an object.
What is meant by the ExecutionType `SEMI_REAL`?

I noticed regular crashes of the Python kernel when running a Jupyter notebook with a world running in `GUI` mode. These crashes didn't occur on particular actions, but during idle time. Hence, I assume this is related to the visualization. Is that a known problem? Do you suggest using RViz visualization via the `VizMarkerPublisher` instead?