
Remove redundant code in tutorials #1159

Open · wants to merge 7 commits into master
Conversation

jjshoots (Member)

Description

This removes some redundant code in the tutorials that adds unnecessary confusion for new users.

Type of change


  • This change requires a documentation update

@jjshoots jjshoots marked this pull request as draft January 21, 2024 12:05
@elliottower (Contributor)

It may be good to mention the LRU caching somewhere in the tutorials and then link to an environment that uses it, but I agree that having it in the tutorials is a bit confusing and overly verbose.

@jjshoots (Member, Author)

@elliottower I may have to take a different approach to this, actually. Tests are failing because the space accessors have to return the exact same object on every call.
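A minimal sketch of the identity check being described, using a plain placeholder class as a stand-in for a gymnasium Space (all names here are hypothetical, not PettingZoo API):

```python
class Space:
    """Stand-in for a gymnasium Space; only object identity matters here."""
    pass

# Returning a freshly constructed space on every call fails an
# `action_space(agent) is action_space(agent)` identity check:
def action_space_fresh(agent):
    return Space()  # new object each call

# Reading from a dict that was built once always returns the same object:
_spaces = {"player_0": Space()}
def action_space_shared(agent):
    return _spaces[agent]

print(action_space_fresh("player_0") is action_space_fresh("player_0"))    # False
print(action_space_shared("player_0") is action_space_shared("player_0"))  # True
```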

@elliottower (Contributor)

> @elliottower I may have to take a different approach to this, actually. Tests are failing because the space accessors have to return the exact same object on every call.

Ah right, yeah, because of the random seeding the lru_cache stuff is necessary. Maybe we could make a helper function that defines the static space objects and does the caching for you, instead of all this boilerplate code.
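One possible shape for such a helper (purely a sketch; `cached_spaces` is a hypothetical name, not an existing PettingZoo API):

```python
import functools

def cached_spaces(space_factory):
    """Hypothetical helper: wrap a per-agent space factory so that each
    agent's space is constructed once, and every subsequent call returns
    that same object (satisfying the identity check in the API tests)."""
    @functools.lru_cache(maxsize=None)
    def get_space(agent):
        return space_factory(agent)
    return get_space

# Usage sketch: a real factory would construct a gymnasium Space per agent.
build_space = cached_spaces(lambda agent: object())
assert build_space("player_0") is build_space("player_0")
```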

@jjshoots jjshoots marked this pull request as ready for review January 22, 2024 19:25
@elliottower (Contributor)

Interesting commit messages hahah. Looks good though; I guess the LRU cache is not necessary for the API test to pass here?

It may still be worth mentioning the situations in which you do need the cache code (I'm not 100% certain myself when this is; I feel like even when the spaces are separate fixed objects I've had to add the cache code).

The tutorials might also benefit from explaining what exactly happens when you call env.seed(), and that in some cases you want to seed action spaces or even observation spaces if there is anything stochastic in them (e.g., our generated agents test envs).

@jjshoots (Member, Author)

jjshoots commented Jan 24, 2024

Tbh the cache code seems redundant: lru_cache internally uses a dict to retrieve past results, which is no different from just looking the items up in the dict directly. I don't think there's any conceivable speed gain.

lru_cache would make sense if the outputs of the function had to go through some preprocessing, but in this case they don't.

@jjshoots (Member, Author)

Ok, turns out I was wrong; lru_cache is doing something else under the hood that makes things marginally faster:

```python
import timeit
import functools

stuff: dict[str, int] = dict()
for i in range(1000):
    stuff[str(i)] = i

@functools.lru_cache(maxsize=None)
def get_stuff_lru(_id: str):
    return stuff[_id]

get_stuff_no_lru = lambda _id: stuff[_id]

def test_lru():
    for key in stuff.keys():
        get_stuff_lru(key)

def test_no_lru():
    for key in stuff.keys():
        get_stuff_no_lru(key)

print("Using lru: ", timeit.timeit("test_lru()", number=100000, setup="from __main__ import test_lru"))
print("Just a dict: ", timeit.timeit("test_no_lru()", number=100000, setup="from __main__ import test_no_lru"))
```

Result:

```
Using lru:  3.5688032259986358
Just a dict:  4.305833258000348
```

@elliottower (Contributor)

Your call on how to handle the lru caching stuff. I honestly don't know too much about which parts of the environment are the bottlenecks; my guess is this is a super minor component that probably doesn't matter (I imagine Gymnasium doesn't use the caching anywhere, for example, but they don't have dicts of agent IDs).

I did a search through the pz repo for lru_cache and found some error messages that mention it in the API test and parallel API test, but it isn't used in any internal environments, and is only used in the two environment-creation tutorials (aec_rps.py/parallel_rps.py; the prisoners tutorial has it as well). Honestly not sure what the best thing to do is here, and I would prefer to get some other opinions. I feel like when I tested it out in the past, there were some cases where an environment shouldn't have needed it but the lru cache was the only thing that made the test pass. So that's a reason to at least keep it in the error message, but there may be a better way to fix it in that case, or a more succinct way to explain the best practice.

```python
assert env.observation_space(agent) is env.observation_space(agent), (
    "observation_space should return the exact same space object (not a copy) for an agent (ensures that observation space seeding works as expected). "
    "Consider decorating your observation_space(self, agent) method with @functools.lru_cache(maxsize=None) to enable caching, or changing it to read from a dict such as self.observation_spaces."
)
assert env.action_space(agent) is env.action_space(agent), (
    "action_space should return the exact same space object (not a copy) for an agent (ensures that action space seeding works as expected). "
    "Consider decorating your action_space(self, agent) method with @functools.lru_cache(maxsize=None) to enable caching, or changing it to read from a dict such as self.action_spaces."
)
```

@jjshoots (Member, Author)

Actually, I'm indifferent about leaving it in the error message. I think having the error message say the function must return the same object is sufficient. The lru cache idea just seems like extra fluff in terms of what PZ should be; if users want to make their environments faster, this should be the least of their concerns. I view it as legacy code that was added to fix the seeding tests that were introduced later.

@elliottower (Contributor)

> Actually, I'm indifferent about leaving it in the error message. I think having the error message say the function must return the same object is sufficient. The lru cache idea just seems like extra fluff in terms of what PZ should be; if users want to make their environments faster, this should be the least of their concerns. I view it as legacy code that was added to fix the seeding tests that were introduced later.

Makes sense; agreed that it is pretty hacky fluff. I think if it does get removed, it should be very clear how to return the correct action space or observation space, and what the best practices are for defining them, as removing the functools suggestion from the error message may leave people confused about how to make the test pass.

Gymnasium's errors linking to its documentation and specific pages seem like a great thing to have here, but the problem is that our starter scripts (the AEC and parallel RPS ones, at least) are pretty old and not necessarily best practice either.

@jjshoots (Member, Author)

I think the error message about returning the exact same object is reasonable. That's a pretty basic OOP concept.

@elliottower (Contributor)

> I think the error message about returning the exact same object is reasonable. That's a pretty basic OOP concept.

Fair enough. I was just thinking about how some of the tutorials and envs define self._action_spaces as a dict, some just use self.action_spaces, and some omit that and just use a lambda function; but if you use a lambda function, you'd want to have this lru cache (if I'm understanding correctly). Those are things we shouldn't be too heavy-handed about, though; best to let people do whatever they want and keep the error messages as explicit and simple as possible.

@jjshoots (Member, Author)

jjshoots commented Jan 25, 2024

With the lambda function, there is no need for lru cache; I think that's kind of the most pure-Python way of doing it that satisfies PZ's requirements.

Your suggestion is fair: we can just leave the lru cache error message and remove lru cache from the tutorials.
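A sketch of that lambda pattern (class and agent names are hypothetical): because the dict is built once in `__init__`, the lambda returns the same object on every call, so the identity test passes without any caching.

```python
class ToyEnv:
    """Hypothetical minimal env illustrating the pattern: spaces are
    created once, and the accessor just indexes into the dict."""
    def __init__(self):
        # Stand-in objects; a real env would store gymnasium Spaces here.
        self.action_spaces = {"player_0": object(), "player_1": object()}
        self.action_space = lambda agent: self.action_spaces[agent]

env = ToyEnv()
# Same object on every call, so no lru_cache is needed:
assert env.action_space("player_0") is env.action_space("player_0")
```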

@elliottower (Contributor)

> With the lambda function, there is no need for lru cache; I think that's kind of the most pure-Python way of doing it that satisfies PZ's requirements.
>
> Your suggestion is fair: we can just leave the lru cache error message and remove lru cache from the tutorials.

Could you fully remove it from the other tutorials in this PR as well? This tutorial also has it: https://pettingzoo.farama.org/tutorials/custom_environment/2-environment-logic/#code

@jjshoots (Member, Author)

Sure, got it. I'll do a ripgrep and nuke all instances of it except in the warnings once I'm back behind a bigger monitor.

@jjshoots (Member, Author)

jjshoots commented Feb 2, 2024

What the fk is that error? Looks like it's related to #1160.

Edit: doesn't look like it.

@elliottower (Contributor)

Oh, didn't see this, but that's highly bizarre; no idea why pytest plugins would be getting imported from pz classic.

@David-GERARD (Collaborator)

Hi @jjshoots ,

I'm PettingZoo's new project manager. I see that this PR is quite old; let me know if you think you'll come back to it, or if I should close it as a wontfix.

Best,
David

@David-GERARD David-GERARD self-assigned this Dec 9, 2024
@David-GERARD David-GERARD added the enhancement New feature or request label Dec 9, 2024
@David-GERARD David-GERARD modified the milestone: 1.24.4 release Dec 9, 2024