1d encoder implementations #266

Draft · wants to merge 59 commits into base: master
Conversation

retinfai (Collaborator)

No description provided.

Collaborator:

These should be added to .gitignore.

odom[5] = odom[5] - self.offset[5]

limited_odom = odom[-2:]
# if self.firstOdom:
Collaborator:

I know it looks ugly, but this needs to stay for the real-car odom to work. It can probably be restructured to read more cleanly, but the logic is necessary.
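A hedged sketch of how the same logic could be kept but expressed more tidily, assuming odom and self.offset are six-element pose lists and self.firstOdom flags the first real-car message (names follow the snippet above):

def _apply_odom_offset(self, odom):
    # Capture the first real-car reading as the reference origin.
    if self.firstOdom:
        self.offset = list(odom)
        self.firstOdom = False
    # Subtract the stored offset from every pose component in one pass.
    odom = [value - offset for value, offset in zip(odom, self.offset)]
    # Keep only the last two entries, as in the original snippet.
    return odom[-2:]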


scan = create_lidar_msg(lidar, len(lidar_range), lidar_range)
self.processed_publisher.publish(scan)
Collaborator:

Not all cases seem to produce a variable called scan, so publishing here could break the code with a NameError.
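One way to guard against that, as a sketch only (the condition below is a hypothetical stand-in for whichever branches above actually build a scan):

scan = None
if lidar_range is not None:  # hypothetical stand-in for the real branching above
    scan = create_lidar_msg(lidar, len(lidar_range), lidar_range)

# Only publish when a scan message was actually constructed.
if scan is not None:
    self.processed_publisher.publish(scan)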

##############################################################
## TEMPORARILY OVERRIDING NETWORK CONFIG FOR TD3AE AND SACAE
##############################################################
_,_,network_config = parse_args_from_file()
Collaborator:

If network_config is assigned here, it shouldn't also be receiving the value from parse_args.
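A sketch of keeping a single source of truth for network_config (USE_FILE_OVERRIDE is a hypothetical flag for the temporary TD3AE/SACAE override, and the names of the other returned values are assumed; parse_args and parse_args_from_file are the functions referenced above):

env_config, algorithm_config, network_config = parse_args()

if USE_FILE_OVERRIDE:  # hypothetical flag: only override for TD3AE/SACAE
    _, _, network_config = parse_args_from_file()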

#####################################################################################################################
# CHANGE SETTINGS HERE, might be specific to environment, therefore not moved to config file (for now at least).

# Reward configuration
self.BASE_REWARD_FUNCTION:Literal["goal_hitting", "progressive"] = 'progressive'
self.EXTRA_REWARD_TERMS:List[Literal['penalize_turn']] = []
self.REWARD_MODIFIERS:List[Tuple[Literal['turn','wall_proximity'],float]] = [('turn', 0.3), ('wall_proximity', 0.7)] # [ (penalize_turn", 0.3), (penalize_wall_proximity, 0.7) ]
self.REWARD_MODIFIERS:List[Tuple[Literal['turn','wall_proximity','lin_acc'],float]] = [('turn', 0.2), ('wall_proximity', 0.6), ('lin_acc',0.2)] # [ (penalize_turn", 0.3), (penalize_wall_proximity, 0.7) ]
Collaborator:

Not sure if it's just GitHub rendering it oddly, but if not, these lines need to be indented.
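For reference, a sketch of the block correctly indented inside __init__ (the class name is a hypothetical stand-in, typing imports assumed; the diff assigns REWARD_MODIFIERS twice and the second assignment overwrites the first, so only that one is kept here):

from typing import List, Literal, Tuple

class Environment:  # hypothetical stand-in for the actual environment class
    def __init__(self):
        # Reward configuration
        self.BASE_REWARD_FUNCTION: Literal["goal_hitting", "progressive"] = "progressive"
        self.EXTRA_REWARD_TERMS: List[Literal["penalize_turn"]] = []
        self.REWARD_MODIFIERS: List[Tuple[Literal["turn", "wall_proximity", "lin_acc"], float]] = [
            ("turn", 0.2),
            ("wall_proximity", 0.6),
            ("lin_acc", 0.2),
        ]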

Collaborator:

Is there a way to modularize the AE stuff? It seems a bit clunky to put all of it inside __init__, since every algorithm then gets all of these variables even when it doesn't need them.
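One possible shape for that, as a sketch only (the helper name and layer sizes are hypothetical, not the project's actual networks), so the autoencoder is built solely by the algorithms that use it, e.g. TD3AE/SACAE:

import torch.nn as nn

def build_1d_autoencoder(observation_size: int, latent_size: int):
    # Hypothetical helper: called only by AE-based algorithms instead of
    # instantiating these components in every algorithm's __init__.
    encoder = nn.Sequential(
        nn.Linear(observation_size, 256), nn.ReLU(),
        nn.Linear(256, latent_size),
    )
    decoder = nn.Sequential(
        nn.Linear(latent_size, 256), nn.ReLU(),
        nn.Linear(256, observation_size),
    )
    return encoder, decoder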

Collaborator:

Update to the latest submodule.

Collaborator:

Need to ensure compatibility with the latest version of cares_rl.
