[DOC] Adds Iñigo's Community Bonding Period Blog #36

Merged: 5 commits, merged May 29, 2024. The diff below shows changes from 1 commit.
posts/2023/2023_05_29_Shilpi_Week_0_1.rst: 2 changes (1 addition & 1 deletion)
@@ -13,7 +13,7 @@ About Myself
Hey there! I'm Shilpi, a Computer Science and Engineering undergrad at Dayananda Sagar College of Engineering, Bangalore. I'm on track to grab my degree in 2024.
My relationship with Python started just before I started college - got my hands dirty with this awesome Python Specialization course on Coursera.
When it comes to what makes me tick, it's all things tech. I mean, new technology always excites me. Ubuntu, with its fancy terminal and all, used to intimidate me at first, but now, I get a thrill out of using it to do even the simplest things.
-Up until 2nd year I used to do competitive programming and a bit of ML. But from 3rd year I've been into ML very seriously, doing several courses on ML as well solving ML problems on kaggle. ML is very fun and I've done a few project on ML as well.
+Up until 2nd year I used to do competitive programming and a bit of ML. But from 3rd year I've been into ML very seriously, doing several courses on ML as well solving ML problems on Kaggle. ML is very fun and I've done a few project on ML as well.
Coding? Absolutely love it. It's like, this is what I was meant to do, y'know? I got introduced to git and GitHub in my first year - was super curious about how the whole version control thing worked. And then, I stumbled upon the world of open source in my second year and made my first contribution to Tardis: (`<https://github.com/tardis-sn/tardis/pull/1825>`_)
Initially, I intended on doing GSoC during my second year but ended up stepping back for reasons. This time, though, I was fired up to send in a proposal to at least one organization in GSoC. And, well, here we are!

posts/2023/2023_05_29_vara_week1.rst: 2 changes (1 addition & 1 deletion)
@@ -18,7 +18,7 @@ Learning models helped me quickly learn Tensorflow. As the next step, I read VQ-
understood the tensorflow open source implementation. VQ-VAE addresses 'posterior collapse'
seen in traditional VAEs and overcomes it by discretizing latent space. This in turn also
improved the generative capability by producing less blurrier images than before.
-Familiarizing about VQ-VAE early on helps in understading the latents used in Diffusion models
+Familiarizing about VQ-VAE early on helps in understanding the latents used in Diffusion models
in later steps. I also explored a potential dataset - `IXI (T1 images) <https://brain-development.org/ixi-dataset/>`_
- and performed some exploratory data analysis, such as age & sex distribution. The images contain
entire skull information, it may require brain extraction & registration. It maybe more useful
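The hunk above describes how VQ-VAE avoids the posterior collapse of traditional VAEs by replacing the continuous latent space with a discrete codebook. Purely as an illustration of that quantization step, here is a minimal, hypothetical TensorFlow sketch; the class name, codebook size, and commitment weight are assumptions for the example, not the open-source implementation the post refers to.

.. code-block:: python

    import tensorflow as tf

    class VectorQuantizer(tf.keras.layers.Layer):
        """Illustrative sketch of a vector-quantization layer: maps encoder
        outputs to their nearest entries in a learned discrete codebook."""

        def __init__(self, num_embeddings=512, embedding_dim=64, beta=0.25, **kwargs):
            super().__init__(**kwargs)
            self.embedding_dim = embedding_dim
            self.num_embeddings = num_embeddings
            self.beta = beta  # commitment loss weight (assumed value)
            init = tf.random_uniform_initializer()
            self.codebook = tf.Variable(
                init(shape=(num_embeddings, embedding_dim)), name="codebook")

        def call(self, z_e):
            # Flatten encoder output to (N, embedding_dim) vectors.
            flat = tf.reshape(z_e, [-1, self.embedding_dim])
            # Squared Euclidean distance from each vector to every codebook entry.
            distances = (
                tf.reduce_sum(flat ** 2, axis=1, keepdims=True)
                - 2.0 * tf.matmul(flat, self.codebook, transpose_b=True)
                + tf.reduce_sum(self.codebook ** 2, axis=1)
            )
            indices = tf.argmin(distances, axis=1)
            z_q = tf.reshape(tf.gather(self.codebook, indices), tf.shape(z_e))
            # Codebook and commitment losses pull codes and encoder outputs together.
            self.add_loss(
                tf.reduce_mean((tf.stop_gradient(z_e) - z_q) ** 2)
                + self.beta * tf.reduce_mean((z_e - tf.stop_gradient(z_q)) ** 2)
            )
            # Straight-through estimator: reconstruction gradients bypass the
            # non-differentiable nearest-code lookup and reach the encoder.
            return z_e + tf.stop_gradient(z_q - z_e)

In a full VQ-VAE, a layer like this sits between the encoder and decoder, which is what yields the discretized latent space mentioned in the post.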
posts/2023/2023_08_21_vara_week_12_13.rst: 2 changes (1 addition & 1 deletion)
@@ -21,7 +21,7 @@ Using existing training parameters, carried out two experiments, one on CC359 al
:alt: Combined trainings plots for all experiments
:width: 800

-Inference results on the best performing model, B12-both, is shown below, where every two rows correspond to reconstructions & original volumes respectively, with equally spaced slices in each row. These slices visualised are anterior-posterior topdown & ventral-dorsal LR.
+Inference results on the best performing model, B12-both, is shown below, where every two rows correspond to reconstructions & original volumes respectively, with equally spaced slices in each row. These slices visualized are anterior-posterior topdown & ventral-dorsal LR.

.. image:: /_static/images/vqvae-monai-B12-both.png
:alt: VQVAE-Monai-B12-both reconstructions & originals showing equally spaced 5 slices for 2 different test samples
posts/2024/2024_05_27_Inigo_week_0.rst: 19 changes (17 additions & 2 deletions)
@@ -1,4 +1,4 @@
-Community Bonding Period Summary
+Community Bonding Period Summary and first impressions
======================================================

.. post:: May 27 2024
@@ -7,9 +7,24 @@ Community Bonding Period Summary
:category: gsoc


About Iñigo
~~~~~~~~~~~~~~~~~~~~
Hi everyone! I am Iñigo Tellaetxe Elorriaga, BSc in Biomedical Engineering and MSc in Biomedical Technologies at Mondragon Unibertsitatea, Basque Country. I am a first-year PhD student at the Computational Neuroimaging Laboratory of the Biobizkaia Health Research Institute, also in the Basque Country. In the lab, our main paradigm is brain connectivity, so I am familiar with diffusion MRI and tractography. My main lines of research are brain aging, age modelling, and neurorehabilitation, all in the presence of neurodegenerative diseases and acute brain injuries.
As for my programming skills, I am mainly a Python developer, and I am one of the main contributors to the `ageml` library, which we are developing at our lab as part of my PhD thesis.
I also worked in industry as a research engineer in medical computer vision at Cyber Surgery, developing new methods that use generative diffusion models to synthesize CT images from MRI, reducing the ionizing radiation received by spinal surgery patients.
I have been using DIPY for a while now for my research and other projects, so I am obviously really excited to contribute to the project this summer.

How did I get involved with DIPY
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
My `thesis supervisor <https://github.com/erramuzpe>`__, who was a professor in my master's programme and has been a participant and mentor in previous editions, told me about GSoC. As someone naturally attracted to research and open science, I got really interested in open source software. I was also lucky enough to meet `@drombas <https://github.com/drombas>`__, who took part in GSoC in 2021 with DIPY. He told me about his work and encouraged me to participate in DIPY, as he valued his experience very positively.
After starting my PhD, I saw the perfect opportunity to contribute to the organization and potentially also to my research field. That is why I wanted to fuse tractography with age modelling in the context of Alzheimer's Disease.

-What I did this week
+What I did this week and in the Community Bonding Period
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
During this period, I had the opportunity to meet my mentors and the other GSoC participants in the organization. It was a great way to learn how we should contribute to DIPY beyond our own projects, and to get up and running with the development environment, the coding style guidelines, and the community guidelines.

Briefly, the objective of my project is to implement a new feature in DIPY for generating synthetic tractograms, allowing the user to specify the "age" and the clinical status (healthy or affected by Alzheimer's Disease) of the requested tractogram.

I talked with my mentors and we agreed on the first tasks to carry out. Jon Haitz provided me with the data he used to train his AutoEncoder (AE) network: two datasets, Tractoinferno and FiberCup.
I forked the Tractolearn repo and translated his AE architecture from PyTorch to TensorFlow. I updated the Dockerfile in my fork and built a working Docker image of the repo so that I can run experiments on the DIPC cluster once the experiments phase starts.
I also started writing the training loop for the AE; this is still a work in progress (a rough sketch of the idea is shown below).
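As an illustration of where that work is heading, here is a minimal, hypothetical sketch of an AE training loop in TensorFlow. The ``model`` and ``dataset`` objects, the optimizer choice, and the mean-squared-error reconstruction loss are assumptions made for the example, not the actual Tractolearn-derived code.

.. code-block:: python

    import tensorflow as tf

    def train_autoencoder(model, dataset, epochs=10, learning_rate=1e-3):
        """Illustrative AE training loop; not the project's actual code."""
        optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
        mse = tf.keras.losses.MeanSquaredError()
        for epoch in range(epochs):
            epoch_loss = tf.keras.metrics.Mean()
            for batch in dataset:  # e.g. batches of streamline arrays
                with tf.GradientTape() as tape:
                    reconstruction = model(batch, training=True)
                    loss = mse(batch, reconstruction)  # reconstruction error
                grads = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(grads, model.trainable_variables))
                epoch_loss.update_state(loss)
            print(f"epoch {epoch + 1}: loss = {float(epoch_loss.result()):.4f}")

A custom loop like this (rather than ``model.fit``) makes it easier to add extra loss terms or logging later on, which is likely to be useful once the architecture is extended for conditional generation.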