Adding spherized profiles #60
Conversation
Note: I didn't perform any sanity checks on these data.
I added batch two spherized profiles after merging #60.
In #48 (comment), @shntnu noted:
I agree that an empirical test that reproduces the improvement Ted saw in regards to non-spherized vs. spherized data would make us all set. However, I don't think we do it in this pull request, and not even in this repo. Instead, @AdeboyeML can take these profiles and run them through the pipeline he created in https://github.com/broadinstitute/lincs-profiling-comparison. So, I propose the following:
Edit: I'll add the PR-specific steps to the beginning of this PR.
@shntnu - this is now ready for your eyes, when you get a chance.
@@ -30,7 +30,7 @@ This repository and workflow begins after we applied cytominer-database.
| Level 5 | Consensus Perturbation Profiles | `.csv.gz` | Yes |
Importantly, we include files for _two_ different types of normalization: whole-plate normalization and DMSO-specific normalization.
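The difference between the two flavors can be sketched as follows: whole-plate normalization fits the per-feature scaling statistics on every well of a plate, while DMSO-specific normalization fits them only on the DMSO control wells and then applies them to all wells. This is a minimal illustration under assumed names (`mad_robustize`, the 384-well DMSO layout), not the repository's actual pipeline code:

```python
import numpy as np

def mad_robustize(profiles, reference_mask=None):
    """Per-feature robust scaling: (x - median) / (1.4826 * MAD).

    reference_mask=None       -> whole-plate normalization (statistics from all wells)
    reference_mask=dmso_wells -> DMSO-specific normalization (statistics from DMSO only)
    Illustrative helper only; the repo's profiles come from its own pipeline.
    """
    ref = profiles if reference_mask is None else profiles[reference_mask]
    med = np.median(ref, axis=0)
    mad = 1.4826 * np.median(np.abs(ref - med), axis=0)
    return (profiles - med) / mad

rng = np.random.default_rng(1)
plate = rng.normal(loc=5.0, scale=2.0, size=(384, 3))  # toy 384-well plate, 3 features
dmso = np.zeros(384, dtype=bool)
dmso[::16] = True  # hypothetical DMSO well layout

whole_plate = mad_robustize(plate)                     # fit on all wells
dmso_norm = mad_robustize(plate, reference_mask=dmso)  # fit on DMSO wells only
```

Both calls return arrays with the same shape as the input; only the subset used to fit the median and MAD differs.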
I've not read the whole thing yet, but I wonder if you should mention the existence of spherized data somewhere here.
ah, good point
LGTM
Did you mean to propose the PR merge after this step, not before?
@AdeboyeML runs these data through his replicate reproducibility assessment pipeline
We merge first, then Adeniyi checks using data from the merge.
Got it. My concern was bloating the repo in case you need to reprocess, but I trust your judgment in figuring out what order works best. Excited to have this in the repo!! PS – if you do end up needing to replace the files, I'd recommend actually deleting them as I did here.
Awesome, this is good to keep in mind. We might at some point also consider moving from git LFS to DVC; it was super easy to get set up, and it plays very nicely with AWS. I did this in the grit-benchmark repo (in broadinstitute/grit-benchmark#28). In the most recent commit, I added a bunch of comments to two different README files. We might want to edit them before the first official release, but we can open a new, documentation-focused PR then. I am going to merge!
The notebook says
but it should say
It's not worth updating anything; I'm just adding a note here for ourselves.
Good catch. I added #77 so we can make sure to improve it (I agree it is not urgent, but someone new could start there; good practice for editing a file using GitHub 😄).
I spherize all plates of batch 1 data using all DMSO profiles as a reference. I apply feature selection to the full dataframe of concatenated level 4a data, and output the spherized data.
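The spherizing step described above can be sketched as a whitening transform fit only on the DMSO reference profiles and then applied to every well. This is a minimal NumPy illustration, assuming ZCA whitening as the spherize method; the `spherize` helper, variable names, and toy data below are hypothetical, not the repository's actual code:

```python
import numpy as np

def spherize(profiles, dmso_mask, eps=1e-8):
    """ZCA-whiten all profiles using only DMSO wells to fit the transform.

    After the transform, the DMSO reference wells have approximately
    identity covariance; treatment wells are mapped through the same matrix.
    """
    ref = profiles[dmso_mask]
    mu = ref.mean(axis=0)
    cov = np.cov(ref - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    # ZCA whitening matrix: C^(-1/2), regularized by eps for tiny eigenvalues
    W = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return (profiles - mu) @ W

rng = np.random.default_rng(0)
# Toy level 4a-style matrix: 100 wells x 5 correlated features
data = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))
mask = np.zeros(100, dtype=bool)
mask[:40] = True  # hypothetical DMSO wells

sph = spherize(data, mask)
```

Feature selection in the actual workflow is applied to the full concatenated level 4a dataframe before the spherized output is written; it is omitted here to keep the sketch focused on the whitening step.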
I also needed to change the name of a script from `profile.py` to `profile_cells.py`. This solves the issues I described in #59 (comment).

Merge steps