Adding more loss options during training #32

Open · wants to merge 2 commits into base: dev
Conversation

@gattia (Collaborator) commented Jul 14, 2022

The first commit just fixes a documentation error I noted.

The main changes are thoroughly described in the second commit. Briefly:

  • I added the softmax option for building `'avg_dice_no_reduce'`.
  • I also added new functionality that enables calling multiple additional losses during training.
  • To go along with the multiple losses, I added the ability to use `DiceLoss` to return the loss of just a single class/tissue, so we can monitor how individual tissues are progressing during training.

This is still a bit of a work in progress, but I wanted to see if there are any thoughts or feedback as I work on this.

I've copied the second commit message, which covers the majority of the changes, below to make the content easier to see in this pull request:

  • `config.py` updated to take a list called `LOSS_METRICS`. This is a list of extra loss metrics to run during train/val steps. Each item in the list is structured as `[[loss_type, activation], weights]`: the first item is a list that mimics the current `cfg.LOSS` input into `build_loss`, and the second item is the weights (see the config sketch after this list).

  • `losses.py` now explicitly includes a "softmax" version of `"avg_dice_no_reduce"` (`("avg_dice_no_reduce", "softmax")`) which calls `DiceLoss`. This might be redundant, though, because it does the exact same thing as calling `("avg_dice_no_reduce", "sigmoid")`. From this standpoint, it might be useful to break `LOSS` into the actual loss part (`'avg_dice_no_reduce'`) and the activation (`'softmax'`/`'sigmoid'`).

    The current implementation was a bit confusing to me: I thought I had to pass `softmax` as the activation to `DiceLoss`, which caused issues. Breaking it up into these parts would be clearer.

  • `losses.py` got a new function that creates a one-hot-encoded set of `weights` if an integer is passed in instead of a list of weights. This is useful if the goal is to just find the loss of a single tissue/class during training (see the helper sketch after this list).

  • `build_loss` was updated to enable building the additional loss functions mentioned in `config.py` above (see the build sketch after this list).

  • `trainer.py` was updated so that it builds a list of the loss metrics mentioned in `config.py`, enabled by the updates to `build_loss`.

  • `reduce_tensor` in `utils.py` was updated so that if the reduce is `'none'` and the weights are one-hot encoded, it scales the weight value so the result makes sense. Otherwise, when Keras does its built-in reduce, it averages over all of the zero dims, making the loss seem artificially low (by a factor of the number of categories/classes in the loss). See the scaling sketch after this list.
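
For concreteness, here is a minimal sketch of what this could look like in a config. The class count and weight values are hypothetical, and `LOSS` is shown only for comparison with the existing behavior:

```python
# Hypothetical config sketch for a 4-class segmentation problem.
# Existing behavior: cfg.LOSS is a (loss_type, activation) pair.
LOSS = ("avg_dice_no_reduce", "sigmoid")

# New: extra loss metrics to run during train/val steps.
# Each entry is [[loss_type, activation], weights].
LOSS_METRICS = [
    # Weighted Dice over all 4 classes.
    [["avg_dice_no_reduce", "softmax"], [0.25, 0.25, 0.25, 0.25]],
    # An integer is expanded to one-hot weights, so this entry
    # monitors class/tissue 2 on its own.
    [["avg_dice_no_reduce", "sigmoid"], 2],
]
```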
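A minimal sketch of the one-hot weight helper described above; the function name and signature are mine, not necessarily what the commit uses:

```python
def make_one_hot_weights(weights, num_classes):
    """Expand an integer class index into one-hot weights.

    make_one_hot_weights(2, 4) -> [0.0, 0.0, 1.0, 0.0], so DiceLoss
    reports the loss for class/tissue 2 only. A list of weights is
    returned unchanged.
    """
    if isinstance(weights, int):
        one_hot = [0.0] * num_classes
        one_hot[weights] = 1.0
        return one_hot
    return weights
```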
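And a rough sketch of how `build_loss` could be looped over `cfg.LOSS_METRICS` to produce the list of metrics that `trainer.py` attaches. This is illustrative only: the real `build_loss` signature in this repo may differ.

```python
# Illustrative only: assumes build_loss accepts the same
# (loss_type, activation) pair as cfg.LOSS plus optional weights.
def build_loss_metrics(cfg, num_classes):
    metrics = []
    for loss_spec, weights in cfg.LOSS_METRICS:
        weights = make_one_hot_weights(weights, num_classes)
        metrics.append(build_loss(loss_spec, weights=weights))
    return metrics
```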
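Finally, the `reduce_tensor` change amounts to compensating for Keras averaging over the zeroed-out classes. A sketch of the idea (the actual `utils.py` implementation may differ):

```python
def rescale_one_hot_weights(weights, reduce):
    # With reduce == 'none' and one-hot weights, Keras's built-in
    # reduction later averages over all classes, including the zeroed
    # ones, shrinking the reported loss by a factor of num_classes.
    # Scaling the single nonzero weight by num_classes compensates.
    if reduce == "none" and sum(w != 0 for w in weights) == 1:
        return [w * len(weights) for w in weights]
    return weights
```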
