This PR adds the Momentum strategy #1469
Conversation
Thanks for this @dongwonmoon. The CI is failing with:
I have been able to recreate this locally by rebuilding the tox environments:
The error is not caused by anything you're adding; it's something to do with an upstream update to task (I think!). I have fixed the error by removing this line in
I also noted when running things locally that you're going to need to run
Thank you for the help with resolving the previous CI issue. However, a new issue has emerged. How should I go about fixing this? @drvinceknight
Hey 👋🏻 @dongwonmoon, This isn’t exactly where the tests are failing. These are just warnings that invoke a warning message but don’t raise an error. The tests are actually failing because of some other code that was recently added to the codebase. More specifically:
In another strategy that was introduced recently, one of the attributes' types is not specified, and this is causing the MyPy tests to fail. Could you tweak line 65 to read `self.frequency_table: dict = dict()`? The tests should pass after that 😅 (I'm about 90% sure). I assume it'll be okay with the rest of the team if you include this fix in this pull request.
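For illustration, the kind of MyPy fix being suggested looks like this. The surrounding class is hypothetical; only the annotated line comes from the comment above:

```python
# Sketch of the suggested fix: annotate the attribute so MyPy knows its type.
# The class name and context here are assumptions for illustration.

class SomeStrategy:
    def __init__(self):
        # Before (unannotated, which strict MyPy settings can reject):
        #     self.frequency_table = dict()
        # After, with the explicit annotation suggested above:
        self.frequency_table: dict = dict()
```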
Thank you for the detailed explanation. @Nikoleta-v3 This is my first PR, so I wanted to check—am I allowed to fix the error myself, or should I leave it for the maintainers to handle? |
I can do it if you'd like me to do it BUT usually what happens is you would make the change on the same branch you already have (just adding a new commit). Then when you push it here it will just update. Let me know if I can help with anything.
Tests have been completed. Please let me know if there's anything else I should do.
This looks good to me, I've made some minor stylistic suggestions as well as a request for a bit more documentation.
c1abcb3 to ccf81ca
I tried running black and isort locally, but nothing changed.
This could be an upstream update of black:
The CI installed
Thank you. The version was different.
axelrod/strategies/momentum.py (Outdated)

```python
self.momentum = 1.0

def __repr__(self):
    return f"Momentum: {self.alpha}, {self.threshold}"
```
Include `{self.momentum}` in the name as well for completeness.
Thank you, @dongwonmoon, again for your PR. One small comment: since the player depends on `alpha`, `threshold`, and `momentum`, all three values can be included in the name. Everything else looks great!
Thank you. I applied that.
Summary: This pull request introduces a new strategy, Momentum, inspired by the Gradual strategy and by the Momentum optimizer used in deep learning. The strategy models the dynamics of trust evolution in a repeated game, with momentum reflecting how trust shifts based on previous interactions. Just as the optimizer adjusts the velocity of a model's parameters by combining previous updates with the current gradient, the Momentum strategy adapts its internal momentum value based on the opponent's previous actions.
Key Features:
Momentum Formula: The strategy uses an alpha parameter to weight the influence of the previous momentum value against the opponent's current action (cooperate or defect). This formula is inspired by the Momentum optimizer in deep learning, which accelerates convergence by combining previous gradients with the current gradient.
Threshold: A threshold value determines the decision to cooperate or defect. If the momentum is above the threshold, the strategy cooperates; otherwise, it defects.
Initial Momentum: When the history is empty (first round), the momentum is set to 1.0, and cooperation is guaranteed.
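The update rule described above can be sketched as a standalone function. This is a minimal illustrative sketch (not the library code), assuming cooperation is encoded as 1.0 and defection as 0.0:

```python
def update_momentum(momentum: float, alpha: float, opponent_cooperated: bool) -> float:
    """Blend the previous momentum with the opponent's latest action.

    Mirrors the deep-learning Momentum update: alpha weights the past
    value, (1 - alpha) weights the new observation.
    """
    action_value = 1.0 if opponent_cooperated else 0.0
    return alpha * momentum + (1 - alpha) * action_value

# Starting from full trust (1.0), repeated defection decays momentum:
m = 1.0
for _ in range(3):
    m = update_momentum(m, alpha=0.5, opponent_cooperated=False)
# m has halved each round: 1.0 -> 0.5 -> 0.25 -> 0.125
```

With alpha close to 1, past behaviour dominates and trust changes slowly; with small alpha, the most recent action dominates.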
Implementation Details:
The strategy uses an alpha (momentum decay factor) and a threshold to determine the behavior.
The momentum is updated after each round based on the opponent's previous action, using the same principle as the Momentum optimizer.
The strategy's behavior evolves over time, with a more cooperative stance when momentum is high and a defection tendency when momentum is low.
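Putting the pieces together, the described behaviour could be sketched as a minimal standalone class. The class name, default parameter values, and the plain-list history interface are all assumptions for illustration; the real strategy lives in axelrod/strategies/momentum.py and follows the Axelrod library's Player API:

```python
from enum import Enum


class Action(Enum):
    C = "cooperate"
    D = "defect"


class MomentumSketch:
    """Minimal sketch of the described strategy (not the library code)."""

    def __init__(self, alpha: float = 0.9, threshold: float = 0.5):
        # alpha and threshold defaults here are placeholder assumptions.
        self.alpha = alpha
        self.threshold = threshold
        self.momentum = 1.0  # first round: full trust, cooperation guaranteed

    def strategy(self, opponent_history: list) -> Action:
        if opponent_history:  # after the first round, update momentum
            last = 1.0 if opponent_history[-1] == Action.C else 0.0
            self.momentum = self.alpha * self.momentum + (1 - self.alpha) * last
        # Cooperate only while momentum is strictly above the threshold,
        # matching the "above the threshold" wording in the summary.
        return Action.C if self.momentum > self.threshold else Action.D
```

With these placeholder parameters the sketch cooperates on an empty history (momentum starts at 1.0) and drifts toward defection as the opponent keeps defecting.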