extended_stimulus_presentations reward_rate calculation is incorrect #759

Open
alexpiet opened this issue May 21, 2021 · 9 comments
Labels: bug (Something isn't working)

@alexpiet (Collaborator) commented May 21, 2021

  • The rewarded column doesn't match the rewards table. I see this by comparing len(session.rewards) and np.sum(esp['rewarded']).
  • The units of reward_rate are rewards/image, not rewards/second.
@alexpiet (Collaborator, Author)

@matchings @dougollerenshaw @yavorska-iryna This bug scares me because it calls into question other fields in extended stimulus processing! Yikes!

@matchings (Collaborator)

@alexpiet can you specify how you loaded the extended_stimulus_presentations table to see this error? Did you use loading.get_ophys_dataset(), then dataset.extended_stimulus_presentations? A code block to reproduce would be helpful.

@dougollerenshaw (Contributor)

I agree with @matchings. Some more specifics would be helpful.

@alexpiet (Collaborator, Author) commented May 21, 2021

Here is a minimal working example of the discrepancy for the "rewarded" column. From spot checking, it looks like it might only happen on some sessions.

oeid = 1010556662
session = loading.get_ophys_dataset(oeid, include_invalid_rois=False)
esp = session.extended_stimulus_presentations
print(np.sum(esp['rewarded']) # 164
print(len(session.rewards)) #138 

As for the units of reward_rate: by visual inspection of the code, I determined that it needs to be divided by 0.75 s, like the lick_rate. You can verify this by loading the model output for a session and plotting the reward_rate column.
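
To make the proposed fix concrete, here is a minimal sketch (the 0.75 s value is the inter-flash interval cited above; IMAGE_INTERVAL_SECONDS and reward_rate_per_second are illustrative names, not code from the repo):

```python
# Sketch: reward_rate is currently in rewards/image. Dividing by the
# 0.75 s flash interval converts it to rewards/second, mirroring the
# lick_rate handling mentioned above.
IMAGE_INTERVAL_SECONDS = 0.75  # assumed inter-flash interval

reward_rate_per_second = esp['reward_rate'] / IMAGE_INTERVAL_SECONDS
```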

@dougollerenshaw (Contributor)

OK, I'm able to replicate the mismatch in the reward count (164 vs 138). Investigating now. I had to add imports and fix a parenthesis typo to get your code block to run. Here's a full working version for reference:

```python
from visual_behavior.data_access import loading
import numpy as np

oeid = 1010556662
session = loading.get_ophys_dataset(oeid, include_invalid_rois=False)
esp = session.extended_stimulus_presentations

print(np.sum(esp['rewarded']))  # 164
print(len(session.rewards))  # 138
```

@alexpiet (Collaborator, Author)

Yes, sorry, I dropped the imports. My head is in a different codebase right now, and I was just flagging this issue for later.

@alexpiet (Collaborator, Author)

How come your code block has nice colors and mine doesn't?

@dougollerenshaw (Contributor)

I just learned this trick from @jsiegle. You can make GitHub render your code with Python-specific syntax highlighting by doing this:

[image: the code block below, wrapped in a fence that opens with ```python and closes with ```]

Which gives:

```python
import some_package

print(some_variable)
```

@dougollerenshaw (Contributor)

Hmm, this is weird. Looking at extended_stimulus_presentations, I see some repeats. Look at rows 3/4 (index 60).
[image: extended_stimulus_presentations dataframe with a duplicated row at index 60]
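
One quick way to confirm the repeats programmatically, a sketch assuming they show up as duplicated index values in esp (the check itself is illustrative, not code from the repo):

```python
# Rows whose index value occurs more than once; such repeats would
# inflate np.sum(esp['rewarded']) relative to len(session.rewards).
dupes = esp[esp.index.duplicated(keep=False)]
print(dupes.index.unique())             # index 60 should show up here
print(len(esp) - esp.index.nunique())   # count of extra (repeated) rows
```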

alexpiet added the bug (Something isn't working) label on Mar 30, 2022