
Add PCA Centering method (experimental) #214

Merged 1 commit into paninski-lab:main on Nov 7, 2024

Conversation

ksikka
Collaborator

@ksikka ksikka commented Nov 6, 2024

Config:

losses.pca_singleview.centering_method = null | mean | median
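As a rough illustration of what the three `centering_method` options could do to keypoints before PCA is fit — this is a hedged, numpy-only sketch, not the lightning-pose implementation; the function name and `(n_samples, n_keypoints, 2)` array layout are assumptions:

```python
import numpy as np

def center_keypoints(keypoints, method):
    """Subtract a per-sample center from keypoints of shape
    (n_samples, n_keypoints, 2) before fitting PCA.

    method: None (no centering), "mean", or "median".
    Hypothetical sketch; not the lightning-pose code.
    """
    if method is None:
        return keypoints
    if method == "mean":
        # center each sample on the mean of its keypoints
        center = np.nanmean(keypoints, axis=1, keepdims=True)
    elif method == "median":
        # median is more robust to outlier keypoints
        center = np.nanmedian(keypoints, axis=1, keepdims=True)
    else:
        raise ValueError(f"unknown centering_method: {method}")
    return keypoints - center
```

With `method="mean"`, the per-sample keypoint mean of the output is zero, so PCA then captures pose shape rather than absolute position.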

@ksikka ksikka marked this pull request as ready for review November 6, 2024 18:28
@ksikka ksikka changed the title Pca test Add losses.pca_singleview.centering_method (experimental) Nov 7, 2024
@ksikka ksikka changed the title Add losses.pca_singleview.centering_method (experimental) Add PCA Centering method (experimental) Nov 7, 2024
@ksikka
Collaborator Author

ksikka commented Nov 7, 2024

This is ready for review. pytest passed.

Collaborator

@themattinthehatt themattinthehatt left a comment


looks good!

@themattinthehatt themattinthehatt merged commit dc755b2 into paninski-lab:main Nov 7, 2024
@hummuscience
Contributor

hummuscience commented Jan 9, 2025

Quick question, does centering in this case solve the problem of the animal having different "sizes" when images come from different datasets? Or would that require an additional scaling step?

@ksikka
Collaborator Author

ksikka commented Jan 9, 2025

Hi Muad, this is different. It was an attempt to improve the PCA loss that did not produce the expected results, so it remains experimental.

Aside from this PR, we're actively workshopping a two-stage pipeline that detects and crops the animal, then passes it to the pose estimation model. However in our current plans, the crop size was fixed. It's interesting to learn about the use-case of animals being different sizes from different datasets.

Animals of different sizes could perhaps be handled, to some extent, by image augmentation: i.e., randomly cropping the image prior to the resize transform (which resizes to a fixed height and width). This would effectively scale the animal.
https://github.com/paninski-lab/lightning-pose/blob/main/lightning_pose/data/augmentations.py#L119
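The crop-then-resize idea can be sketched with a minimal numpy-only stand-in; the function, parameters, and nearest-neighbor resize are assumptions for illustration, while the actual LP pipeline applies imgaug transforms at the link above:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crop_resize(img, crop_frac=0.15, out_hw=(256, 256)):
    """Randomly crop up to crop_frac from each side, then resize back
    to a fixed (height, width) with nearest-neighbor sampling.

    Because the crop is random but the output size is fixed, the
    animal ends up at a slightly different scale each time.
    Illustrative sketch; not the lightning-pose code.
    """
    h, w = img.shape[:2]
    top = rng.integers(0, int(h * crop_frac) + 1)
    bottom = h - rng.integers(0, int(h * crop_frac) + 1)
    left = rng.integers(0, int(w * crop_frac) + 1)
    right = w - rng.integers(0, int(w * crop_frac) + 1)
    cropped = img[top:bottom, left:right]
    ch, cw = cropped.shape[:2]
    # nearest-neighbor resize via integer index maps
    rows = np.arange(out_hw[0]) * ch // out_hw[0]
    cols = np.arange(out_hw[1]) * cw // out_hw[1]
    return cropped[rows][:, cols]
```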

@hummuscience
Contributor

I thought LP already implemented scaling augmentation through dlc_top_down, but that does not seem to be the case. I was surprised, since all the other augmentations are implemented, until I realized that DLC does "scale augmentation" not through imgaug but by resizing all images with a resizing factor sampled for each batch: https://github.com/DeepLabCut/DeepLabCut/blob/main/deeplabcut/pose_estimation_tensorflow/datasets/pose_imgaug.py

I wonder what would be the best way to do that. The nice thing about the DLC approach is that the random rescaling happens for every batch.

@hummuscience
Contributor

hummuscience commented Jan 9, 2025

I think I will give this a try in the pipeline:

import imgaug.augmenters as iaa

data_transform.append(
    iaa.Sometimes(
        1.0,  # probability of applying the scaling augmentation
        iaa.Affine(
            scale=(0.5, 1.5),  # sample a scale factor between 50% and 150%
            keep_size=False,   # let the image dimensions change with the scale
        ),
    )
)

This should rescale each image by a factor between 50% and 150%. Since the images will be resized back to a fixed height/width, the result will probably mainly be affected by the 0.4 probability of cropping.

What do you think?

@themattinthehatt
Collaborator

themattinthehatt commented Jan 10, 2025

@hummuscience can you clarify what you mean by "The nice thing about the DLC approach is that the random rescaling happens for every batch."? As @ksikka mentioned we do a random crop and pad for each image in each batch, which amounts to a version of scaling.

@hummuscience
Contributor

If I understand the LP dlc-top-down augmentation correctly, in 40% of cases the image is cropped by up to 15% from each side, and because of keep_size=False this leads to a rescaling of the image in the final resize transformation, right?

The DLC augmentation does an additional step in every batch and that is the general scaling, set with scale_jitter_up/down (https://github.com/DeepLabCut/DeepLabCut/blob/d4da23a44c4e969eac437c1425b7670bc6c4ce14/deeplabcut/pose_estimation_tensorflow/datasets/pose_imgaug.py#L50).

It is a general "augmentation" that is applied before any of the augmentation options: it samples from the scaling range (https://github.com/DeepLabCut/DeepLabCut/blob/d850b5e70c6c1905d5f53bf0069ac0eda680e37d/deeplabcut/pose_estimation_tensorflow/datasets/pose_base.py#L33) and then applies that scale to the data before any other augmentation (https://github.com/DeepLabCut/DeepLabCut/blob/d4da23a44c4e969eac437c1425b7670bc6c4ce14/deeplabcut/pose_estimation_tensorflow/datasets/pose_imgaug.py#L304).
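The per-batch jitter described above can be sketched as follows; this is a hedged numpy-only illustration of the idea (one scale factor drawn per batch, applied to every image), with hypothetical names and defaults, not DLC's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_scale_jitter(batch, scale_down=0.5, scale_up=1.5):
    """Sample ONE scale factor for the whole batch and resize every
    image in it by that factor (nearest-neighbor), mimicking the
    scale_jitter_up/down behavior described for DLC.

    Returns the rescaled images and the sampled factor.
    Illustrative sketch only.
    """
    s = rng.uniform(scale_down, scale_up)
    out = []
    for img in batch:
        h, w = img.shape[:2]
        nh, nw = max(1, int(h * s)), max(1, int(w * s))
        # nearest-neighbor resize via integer index maps
        rows = np.arange(nh) * h // nh
        cols = np.arange(nw) * w // nw
        out.append(img[rows][:, cols])
    return out, s
```

The key difference from a per-image transform like iaa.Affine is that every image in the batch shares the same sampled scale.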

I am only discussing this because I am trying to figure out why I am getting these "side switches" in the predictions. This happens only in models I train using the backbone I trained on the Superanimal dataset from DLC (#158 (comment)). That dataset has the animals at very different sizes, and I am wondering whether the difference in scaling between DLC and LP is what is causing it.

@themattinthehatt
Collaborator

Ah I see. One question I have is whether or not you've looked at the outputs of the DLC Superanimal model - does it have the same side-switching problem? If not then that would suggest the issue lies in a difference between DLC and LP; if so that would suggest another potential issue, either with both models or with the dataset itself.

@hummuscience
Contributor

hummuscience commented Jan 13, 2025

Yes, the side-switching does not happen in the DLC-superanimal model.
It also does not happen if I train LP on just a single dataset (where the scale is fixed). I haven't tested this thoroughly, but more and more is pointing towards that.

@themattinthehatt
Collaborator

Interesting - quite unexpected, I would say. Are you interested in adding a scaling augmentation to the pipeline to see if that resolves the problem?

@hummuscience
Contributor

I am currently testing a scaling augmentation (dlc-top-down vs. dlc-top-down-scale) and will report.
