Commit

FAQ dropdown menus (#137)
themattinthehatt authored Apr 4, 2024
1 parent b2de0df commit b0d5a4a
Showing 4 changed files with 43 additions and 47 deletions.
1 change: 1 addition & 0 deletions docs/conf.py
@@ -33,6 +33,7 @@
'sphinx.ext.githubpages', # allows integration with github
'sphinx_automodapi.automodapi',
'sphinx_copybutton', # add copy button to code blocks
'sphinx_design', # dropdowns
'sphinx_rtd_dark_mode',
]

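For context on the `conf.py` change above: `sphinx_design` registers a `dropdown` directive, which the `faqs.rst` changes in this commit use to collapse each FAQ entry. A minimal usage sketch (title and body text are illustrative):

```rst
.. dropdown:: Frequently asked question title goes here

   The answer body is indented under the directive. It renders collapsed
   by default and expands when the reader clicks the title.
```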
1 change: 1 addition & 0 deletions docs/requirements.txt
@@ -3,6 +3,7 @@ sphinx_rtd_theme
sphinx-rtd-dark-mode
sphinx-automodapi
sphinx-copybutton
sphinx-design
fiftyone
h5py
hydra-core
87 changes: 40 additions & 47 deletions docs/source/faqs.rst
@@ -2,50 +2,43 @@
FAQs
#############

* :ref:`Can I import a pose estimation project in another format? <faq_can_i_import>`
* :ref:`What if I encounter a CUDA out of memory error? <faq_oom>`
* :ref:`Why does the network produce high confidence values for keypoints even when they are occluded? <faq_nan_heatmaps>`

.. _faq_can_i_import:

**Q: Can I import a pose estimation project in another format?**

We currently support conversion from DLC projects into Lightning Pose projects
(if you would like support for another format, please
`open an issue <https://github.com/danbider/lightning-pose/issues>`_).
You can find more details in the :ref:`Organizing your data <directory_structure>` section.

.. _faq_oom:

**Q: What if I encounter a CUDA out of memory error?**

We recommend a GPU with at least 8GB of memory.
Note that both semi-supervised and context models will increase memory usage
(with semi-supervised context models needing the most memory).
If you encounter this error, reduce batch sizes during training or inference.
You can find the relevant parameters to adjust in :ref:`The configuration file <config_file>`
section.

.. _faq_nan_heatmaps:

**Q: Why does the network produce high confidence values for keypoints even when they are occluded?**

Generally, when a keypoint is briefly occluded and its location can be resolved by the network, we are fine with
high confidence values (this will happen, for example, when using temporal context frames).
However, there may be scenarios where the goal is to explicitly track whether a keypoint is visible or hidden using
confidence values (e.g., quantifying whether a tongue is in or out of the mouth).
In this case, if the confidence values are too high during occlusions, try the suggestions below.

First, note that including a keypoint in the unsupervised losses - especially the PCA losses -
will generally increase confidence values even during occlusions (by design).
If a low confidence value is desired during occlusions, ensure the keypoint in question is not
included in those losses.

If this does not fix the issue, another option is to set the following field in the config file:
``training.uniform_heatmaps_for_nan_keypoints: true``.
[This field is not visible in the default config but can be added.]
This option will force the model to output a uniform heatmap for any keypoint that does not have
a ground truth label in the training data.
The model will therefore not try to guess where the occluded keypoint is located.
This approach requires a set of training frames that include both visible and occluded examples
of the keypoint in question.
.. dropdown:: Can I import a pose estimation project in another format?

We currently support conversion from DLC projects into Lightning Pose projects
(if you would like support for another format, please
`open an issue <https://github.com/danbider/lightning-pose/issues>`_).
You can find more details in the :ref:`Organizing your data <directory_structure>` section.

.. dropdown:: What if I encounter a CUDA out of memory error?

We recommend a GPU with at least 8GB of memory.
Note that both semi-supervised and context models will increase memory usage
(with semi-supervised context models needing the most memory).
If you encounter this error, reduce batch sizes during training or inference.
You can find the relevant parameters to adjust in :ref:`The configuration file <config_file>`
section.
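   As a concrete starting point for the batch-size advice above, the relevant fields sit in the
   training and DALI sections of the config file. The field names below are illustrative
   assumptions and may differ across Lightning Pose versions, so check them against your own
   config:

   .. code-block:: yaml

      training:
        train_batch_size: 8    # reduce this first if you hit CUDA OOM during training
        val_batch_size: 16
      dali:                    # video-reading pipeline; affects unsupervised losses and inference
        base:
          train:
            sequence_length: 8 # shorter sequences lower memory use for video batches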

.. dropdown:: Why does the network produce high confidence values for keypoints even when they are occluded?

Generally, when a keypoint is briefly occluded and its location can be resolved by the network,
we are fine with high confidence values (this will happen, for example, when using temporal
context frames).
However, there may be scenarios where the goal is to explicitly track whether a keypoint is
visible or hidden using confidence values (e.g., quantifying whether a tongue is in or out of
the mouth).
In this case, if the confidence values are too high during occlusions, try the suggestions
below.

First, note that including a keypoint in the unsupervised losses - especially the PCA losses -
will generally increase confidence values even during occlusions (by design).
If a low confidence value is desired during occlusions, ensure the keypoint in question is not
included in those losses.

If this does not fix the issue, another option is to set the following field in the config file:
``training.uniform_heatmaps_for_nan_keypoints: true``.
[This field is not visible in the default config but can be added.]
This option will force the model to output a uniform heatmap for any keypoint that does not
have a ground truth label in the training data.
The model will therefore not try to guess where the occluded keypoint is located.
This approach requires a set of training frames that include both visible and occluded examples
of the keypoint in question.
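   For reference, the override described above is a one-line addition to the training section of
   the config file (as noted, the field is absent from the default config and must be added by
   hand):

   .. code-block:: yaml

      training:
        uniform_heatmaps_for_nan_keypoints: true  # uniform heatmap for unlabeled keypoints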
1 change: 1 addition & 0 deletions setup.py
@@ -84,6 +84,7 @@ def get_cuda_version():
"sphinx-rtd-dark-mode",
"sphinx-automodapi",
"sphinx-copybutton",
"sphinx-design",
},
"extra_models": {
"lightning-bolts", # resnet-50 trained on imagenet using simclr