FIX: Fixed unformatted link, wrong title, and image links
itellaetxe committed Aug 22, 2024
1 parent 60f1229 commit ebab5e9
Showing 1 changed file with 11 additions and 11 deletions.
22 changes: 11 additions & 11 deletions posts/2024/2024_08_22_inigo_final_report.rst
@@ -38,7 +38,7 @@

.. |conclusions-title| raw:: html

-   <span class="gsoc-title">Pull Requests</span>
+   <span class="gsoc-title">Conclusions</span>

.. |timeline-title| raw:: html

@@ -112,7 +112,7 @@ The objective of the project is to generate synthetic human tractograms with tun
* **Replicated the** `FINTA <https://doi.org/10.1016/j.media.2021.102126>`_ **architecture originally implemented in PyTorch (vanilla AutoEncoder, not variational) using TensorFlow2+Keras.**
Validated the results using the FiberCup dataset. The model is found in the ``ae_model.py`` module of the `TractoEncoder GSoC <https://github.com/itellaetxe/tractoencoder_gsoc>`_ repository. The architecture can be summarized as in the following image:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/inigo_vanilla_autoencoder.png
+.. image:: /_static/images/gsoc/2024/inigo/inigo_vanilla_autoencoder.png
:alt: AE architecture diagram.
:align: center
:width: 800
@@ -121,7 +121,7 @@ The objective of the project is to generate synthetic human tractograms with tun
* The upsampling layers of the Decoder block use linear interpolation in PyTorch by default, whereas in TensorFlow2 there is no native implementation for this, and nearest neighbor (NN) interpolation is used instead. This is a significant difference between the implementations, and to replicate the PyTorch behavior, a custom linear-interpolation upsampling layer was implemented in TF2. However, after training with both NN and linear interpolation, the results were very similar, so the custom layer was not used in the final implementation. This work was developed in a `separate branch <https://github.com/itellaetxe/tractoencoder_gsoc/tree/feature_linear_upsampling>`_.
* Training was run for 120 epochs on a dataset containing plausible and implausible streamlines; all the training experiments in my GSoC work used this dataset. The figure below shows a summary of the replication results, which consist of running a set of unseen plausible streamlines through the model (encoder and decoder):

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/fibercup_replicated.png
+.. image:: /_static/images/gsoc/2024/inigo/fibercup_replicated.png
:alt: Replication of the FINTA architecture results on the FiberCup dataset.
:align: center
:width: 800
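The interpolation difference described above can be sketched in NumPy (illustrative only — the actual custom layer in the linked branch is a TF2/Keras layer, and PyTorch's exact output also depends on its ``align_corners`` setting):

```python
import numpy as np

def upsample_nn(x, factor=2):
    # Nearest-neighbour upsampling: repeat each sample `factor` times.
    return np.repeat(x, factor)

def upsample_linear(x, factor=2):
    # Linear-interpolation upsampling: resample the signal on a denser
    # grid spanning the same endpoints, interpolating between neighbours.
    n = len(x)
    old_grid = np.arange(n)
    new_grid = np.linspace(0, n - 1, n * factor)
    return np.interp(new_grid, old_grid, x)

signal = np.array([0.0, 1.0, 0.0])
print(upsample_nn(signal))      # steps: each value duplicated
print(upsample_linear(signal))  # a smooth ramp up and back down
```

On a streamline's coordinate sequence, NN upsampling produces staircase artifacts while linear interpolation preserves smooth geometry, which is why the difference seemed worth testing before it turned out not to matter in practice.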
@@ -135,7 +135,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* The model is found in the ``vae_model.py`` module of the `TractoEncoder GSoC <https://github.com/itellaetxe/tractoencoder_gsoc>`_ repository. The architecture can be summarized in the figure below:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/inigo_variational_autoencoder.png
+.. image:: /_static/images/gsoc/2024/inigo/inigo_variational_autoencoder.png
:alt: VAE architecture diagram.
:align: center
:width: 800
@@ -144,7 +144,7 @@ The objective of the project is to generate synthetic human tractograms with tun
* Weighting the Kullback-Leibler loss component was also implemented, based on the `Beta-VAE <https://openreview.net/forum?id=Sy2fzU9gl>`_ work, aiming for a stronger disentanglement of the latent space. However, this weight was never explored (it was always set to 1.0) due to its trade-off with reconstruction accuracy, but it is a potential improvement for future work in the context of hyperparameter optimization.
* The figure below shows a summary of the VAE results, also using the FiberCup dataset:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/vanilla_vae_120_epoch_results.png
+.. image:: /_static/images/gsoc/2024/inigo/vanilla_vae_120_epoch_results.png
:alt: VAE architecture results on the FiberCup dataset.
:align: center
:width: 600
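As a sketch of the mathematics involved (not the project's actual Keras code), the reparameterization trick and the β-weighted KL term discussed above look like this in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays
    # differentiable with respect to the encoder outputs mu, log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var, beta=1.0):
    # Closed-form KL(q(z|x) || N(0, I)) per sample, summed over latent
    # dimensions and averaged over the batch; beta weights it as in Beta-VAE.
    kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)
    return beta * np.mean(kl)

mu = np.zeros((4, 32))       # a batch of 4 latent means (32-D latent space)
log_var = np.zeros((4, 32))  # log-variances of the approximate posterior
z = reparameterize(mu, log_var)
print(kl_divergence(mu, log_var))  # 0.0: q already matches the prior
```

Raising ``beta`` above 1.0 pushes the posterior toward the isotropic prior (stronger disentanglement) at the cost of reconstruction accuracy, which is exactly the trade-off that kept it at 1.0 here.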
@@ -156,7 +156,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* The model is found in the ``cond_vae_model.py`` module of the `TractoEncoder GSoC <https://github.com/itellaetxe/tractoencoder_gsoc>`_ repository. The model was trained on the FiberCup dataset, and the conditioning variable in this case was chosen to be the length of the streamlines, hypothesizing that this is a relatively simple feature for the model to capture from their geometry. The majority of the architecture is based on the VAE from the previous point (also based on the `FINTA <https://doi.org/10.1016/j.media.2021.102126>`_ architecture), to which I added two dense layers to output the :math:`\sigma_r` and :math:`\text{log}\sigma^2_r` of the regressed attribute, as well as the *generator* block. A diagram of the architecture can be seen below:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/conditional_vae_architecture_diagram.png
+.. image:: /_static/images/gsoc/2024/inigo/conditional_vae_architecture_diagram.png
:alt: condVAE architecture diagram.
:align: center
:width: 800
@@ -166,7 +166,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* By exploring the latent space of the VAE and condVAE models, we can compare the organization of the samples in the latent space, and see whether there is a difference aligned with the conditioning variable. After training for 64 epochs just to check how the model was progressing, I projected the 32-dimensional latent space using the t-SNE algorithm, to visualize it easily. This particular algorithm was chosen due to its popularity, speed, and availability in widespread libraries like ``scikit-learn``. The projections only show the plausible fibers. The results are shown in the figures below:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/latent_space_comparison_VAE_cVAE_colored_by_streamline_length.png
+.. image:: /_static/images/gsoc/2024/inigo/latent_space_comparison_VAE_cVAE_colored_by_streamline_length.png
:alt: t-SNE latent space comparison between condVAE and VAE models.
:align: center
:width: 600
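The projection step can be reproduced in miniature with ``scikit-learn`` (random vectors stand in here for the trained encoder's latents):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
# Stand-in for encoder outputs: 100 latent vectors in a 32-D latent space.
latents = rng.standard_normal((100, 32))

# Project to 2-D for visualization; perplexity must be smaller than the
# number of samples, and results vary with the random seed.
tsne = TSNE(n_components=2, perplexity=30.0, init="random", random_state=0)
embedding = tsne.fit_transform(latents)
print(embedding.shape)  # (100, 2)
```

Coloring ``embedding`` by streamline length is then what reveals (or fails to reveal) the length-aligned organization compared between the VAE and condVAE latent spaces.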
@@ -175,7 +175,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* Another aspect to validate was the capability of the model to correctly capture the conditioning variable in the training data. To do so, we retrained the model until we got a close-to-zero *label loss* (in charge of capturing this variability), and computed the :math:`MSE` and the :math:`R^2` metrics between the predicted and the true conditioning variable. In addition, we plotted the latent space 2D projection again. The results are shown in the figure below:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/vae_conditioning_validation.png
+.. image:: /_static/images/gsoc/2024/inigo/vae_conditioning_validation.png
:alt: condVAE conditioning validation results.
:align: center
:width: 600
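Both validation metrics are standard; a minimal NumPy version (the length values below are hypothetical, purely for illustration) is:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean squared error between true and predicted conditioning values.
    return np.mean((y_true - y_pred) ** 2)

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - residual SS / total SS.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical streamline lengths: true vs. regressed by the condVAE.
y_true = np.array([30.0, 60.0, 120.0, 300.0])
y_pred = np.array([32.0, 58.0, 118.0, 295.0])
print(mse(y_true, y_pred), r2_score(y_true, y_pred))  # MSE 9.25, R2 near 1
```

An :math:`R^2` close to 1 alongside a small MSE indicates the label head has captured the conditioning variable on the training distribution.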
@@ -186,7 +186,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* Since the generator block tries to predict the whole latent vector :math:`z` from a single number (:math:`r`), the model was likely to struggle to capture the necessary geometrical variability, and more generally to generate diverse samples from the same number. The model was designed not to generate samples but rather to regress their associated conditioning variable, and this was a major issue that had not been foreseen. It was a good lesson, and a good opportunity to reflect on the importance of the generative process in model design, and on how it should be aligned with the model's objectives. The figure below shows a set of generated samples of lengths 30 and 300 (left and right), showing that the model generated an almost constant shape, only scaled in length:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/streamlines_short_long.png
+.. image:: /_static/images/gsoc/2024/inigo/streamlines_short_long.png
:alt: condVAE conditioning validation results.
:align: center
:width: 600
@@ -203,7 +203,7 @@ The objective of the project is to generate synthetic human tractograms with tun

* The model is found in the ``adv_ae_model.py`` module of the `TractoEncoder GSoC <https://github.com/itellaetxe/tractoencoder_gsoc>`_ repository. The proposed architecture can be summarized in the figure below:

-.. image:: https://github.com/dipy/dipy.org/blob/master/_static/images/gsoc/2024/inigo/adversarial_ae_with_abr.png
+.. image:: /_static/images/gsoc/2024/inigo/adversarial_ae_with_abr.png
:alt: condVAE conditioning validation results.
:align: center
:width: 600
@@ -233,7 +233,7 @@ Apart from my project, I got to contribute to the DIPY project in other ways too

* I opened a PR that got merged into the main DIPY repository, which was a `nomenclature issue with spherical harmonics <https://github.com/dipy/dipy/issues/2970>`_. It took some time to agree on how to solve it, but it was very nice to see that the community was open to discussing the issue and to find a solution that was good for everyone. This was the contribution that gave me access to GSoC with DIPY, and it was a very nice start to the journey. Link to the PR: https://github.com/dipy/dipy/pull/3086
* I reviewed the code of my fellow GSoC students `Kaustav <https://github.com/deka27>`_, `Wachiou <https://github.com/WassCodeur>`_ and `Robin <https://github.com/robinroy03>`_. It felt very good to understand their projects and to be able to help them with their work, which is completely different from my project. I also reviewed their blogs and participated in the reviews I got from them. It was very pleasant to see how engaging the community is, and how everyone is willing to help each other.
-* Lastly, I opened an `issue in the dipy.org repository <https://github.com/dipy/dipy.org/issues/40>`_ that got solved thanks to a contribution from my mentor `Serge Koudoro <https://github.com/skoudoro>`.
+* Lastly, I opened an `issue in the dipy.org repository <https://github.com/dipy/dipy.org/issues/40>`_ that got solved thanks to a contribution from my mentor `Serge Koudoro <https://github.com/skoudoro>`_.

|future-work-title|
-------------------
