Survey planner: respect when problem instance specifies actions were completed #132

Merged: 4 commits into nasa:develop on Feb 8, 2024

Conversation

@trey0 (Contributor) commented Feb 7, 2024

What doesn't change: The edited scripts plan_survey.py and problem_generator.py should continue to support previously used interfaces without changes. These changes are intended to improve feature coverage and output quality.

Substantive changes:

  • plan_survey.py: Now respects completed predicates asserted in the problem instance and will not try to perform the relevant actions again. This could be important if we need to replan partway through an ISS activity after some goals have been completed. The PlanSys2 problem expert should already be tracking which goals were completed, meaning the operator shouldn't need to handle this issue manually (e.g., edit the relevant problem config to remove completed goals).
  • plan_survey.py: While testing the above change, I noticed and fixed a bug where the collision avoidance check in plan_survey.py was not specified as strictly as it is in the PDDL model. When a robot requests to move to a location to_pos, the PDDL model specifies that the move should be blocked if another robot has reserved either to_pos or one of its immediate neighbors. Before this fix, the check of to_pos itself was omitted in plan_survey.py. It's an edge case, and it's hard to construct a scenario where the omission causes a dangerous move except with stereo actions. (A simplified sketch of this check, together with the completed-goal handling above, appears after this list.) I should add that it's possible there are other collision check edge cases, especially for the stereo action, that are present both in plan_survey.py and in the PDDL model. But I think they're very unlikely to cause any dangerous moves in practice.
  • problem_generator.py: Added an optional completed flag to goals specified in the input config. This causes the corresponding completed-x predicates to be asserted in the output problem instance. It also removes the relevant need-stereo predicates -- this may affect OPTIC/POPF but is ignored by plan_survey.py. This new feature is not intended for use during ops (you could just as easily delete a goal as mark it completed), but is helpful for testing the planner.
  • Added test cases with completed goals in the new data/test_cases folder. See the README there for more.
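To make the first two bullets concrete, here is a minimal Python sketch of the intended behavior. It is illustrative only: the function names, goal dictionaries, predicate strings, and bay identifiers are hypothetical simplifications, not the actual plan_survey.py code or data.

```python
# Illustrative sketch only -- not the actual plan_survey.py implementation.
# Goal/predicate strings and bay names below are made up for the example.

def pending_goals(goals, completed_predicates):
    """Drop goals whose completed-x predicate is already asserted in the
    problem instance, so the planner does not plan their actions again."""
    return [g for g in goals if g["completed_predicate"] not in completed_predicates]


def move_is_blocked(to_pos, reserved_positions, neighbors_of):
    """A requested move to to_pos is blocked if another robot has reserved
    to_pos itself or any of its immediate neighbors, as in the PDDL model."""
    if to_pos in reserved_positions:  # the check of to_pos itself was the one previously omitted
        return True
    return any(n in reserved_positions for n in neighbors_of(to_pos))


if __name__ == "__main__":
    goals = [
        {"name": "goal1", "completed_predicate": "(completed-panorama robot1 o0 bay5)"},
        {"name": "goal2", "completed_predicate": "(completed-panorama robot1 o1 bay6)"},
    ]
    completed = {"(completed-panorama robot1 o0 bay5)"}
    print([g["name"] for g in pending_goals(goals, completed)])  # ['goal2']

    adjacency = {"bay5": ["bay4", "bay6"]}
    print(move_is_blocked("bay5", {"bay6"}, adjacency.get))  # True: a neighbor is reserved
    print(move_is_blocked("bay5", {"bay5"}, adjacency.get))  # True: to_pos itself is reserved
```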

Tangential changes:

  • Some previous changes had violated linter rules, so I brought the files I touched back in line with the coding standards I'm working toward.
  • Fixed some inaccurate comments.

@trey0 (Contributor, Author) commented Feb 7, 2024

Closes #130

@bcoltin (Member) commented Feb 8, 2024

Looks good to me. @marinagmoreira, I know you were thinking about this, what do we want to do about the CI running out of memory? (does this happen every time or should we try re-running?)

@trey0 (Contributor, Author) commented Feb 8, 2024

> Looks good to me. @marinagmoreira, I know you were thinking about this, what do we want to do about the CI running out of memory? (does this happen every time or should we try re-running?)

I think the CI is running out of storage, not memory.

I skimmed through how we are doing these builds with an eye to saving space. My initial take: currently, ci_pr.yml is doing a fancy docker-compose build that builds all of our Docker images in a single GitHub CI worker. But we already have an example in ci_push.yml of how to build each image individually, and if these builds were separated into different jobs so they used different workers, the space required for any single worker would be greatly reduced. It's possible the fancy build is also pulling down an irrelevant baseline ros image that's not needed to build our custom images.

I'm curious why this space issue is only starting now. Possibly GitHub reduced how much storage they provide to workers, or the overhead of their default built-in worker tooling has grown, leaving less space for us.

@trey0 (Contributor, Author) commented Feb 8, 2024

The same CI failure happens on develop. See https://github.com/nasa/isaac/actions/runs/7835088932

This failure seems like it should be 100% consistent: the job runs out of space while downloading base Docker images, before it even begins building anything new.

@bcoltin (Member) left a review comment

Looks good once CI passes, thanks.

@trey0 merged commit 0726540 into nasa:develop on Feb 8, 2024
4 checks passed