From 0eff82dd12f00458d48ba00897d2ecf15ad358cf Mon Sep 17 00:00:00 2001
From: Benjamin Morris
Date: Fri, 3 Nov 2023 10:22:13 -0700
Subject: [PATCH 1/3] clarify no equiv on windows

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 28e5777b8..b126607eb 100644
--- a/README.md
+++ b/README.md
@@ -29,11 +29,11 @@ As part of the [Allen Institute for Cell Science's](allencell.org) mission to un

 The bulk of `CytoDL`'s underlying structure bases the [lightning-hydra-template](https://github.com/ashleve/lightning-hydra-template) organization - we highly recommend that you familiarize yourself with their (short) docs for detailed instructions on running training, overrides, etc.

-Our currently available code is roughly split into two domains: image-to-image transformations and representation learning. The image-to-image code (denoted im2im) contains configuration files detailing how to train and predict using models for resolution enhancement (e.g. predicting 100x images from 20x images), semantic and instance segmentation, and label-free prediction. Representation learning code includes a wide variety of Variational Auto Encoder (VAE) architectures. Note that these default models are very small and by default run on heavily downsampled data in order to make tests run efficiently - for best performance, the model size should be increased and downsampling removed.
+Our currently available code is roughly split into two domains: image-to-image transformations and representation learning. The image-to-image code (denoted im2im) contains configuration files detailing how to train and predict using models for resolution enhancement using conditional GANs (e.g. predicting 100x images from 20x images), semantic and instance segmentation, and label-free prediction. Representation learning code includes a wide variety of Variational Auto Encoder (VAE) architectures. Due to dependency issues, equivariant autoencoders are not currently supported on Windows.

 As we rely on recent versions of pytorch, users wishing to train and run models on GPU hardware will need up-to-date NVIDIA drivers. Users with older GPUs should not expect code to work out of the box. Similarly, we do not currently support training/predicting on Mac GPUs. In most cases, cpu-based training should work when GPU training fails.

-For Im2Im models, we provide a handful of example 3D images for training the basic image-to-image tranformation-type models and default model configuration files for users to become comfortable with the framework and prepare them for training and applying these models on their own data.
+For im2im models, we provide a handful of example 3D images for training the basic image-to-image tranformation-type models and default model configuration files for users to become comfortable with the framework and prepare them for training and applying these models on their own data. Note that these default models are very small and train on heavily downsampled data in order to make tests run efficiently - for best performance, the model size should be increased and downsampling removed from the data configuration.
 ## How to run


From f4481d467a2b464e1703dbf5f561c15d443df37c Mon Sep 17 00:00:00 2001
From: Benjamin Morris
Date: Fri, 3 Nov 2023 10:22:33 -0700
Subject: [PATCH 2/3] add normal lie learn

---
 pyproject.toml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pyproject.toml b/pyproject.toml
index b6d11f15e..7eee26846 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -50,7 +50,7 @@ requires-python = ">=3.8,<3.11"

 [project.optional-dependencies]
 equiv = [
-  "lie_learn @ git+https://github.com/colobas/lie_learn.git@d5be2ab",
+  "lie_learn==0.0.1.post1",
   "escnn~=1.0.7",
   "py3nj==0.1.2",
   "e3nn~=0.5.1"

From ce63095a4ded23bf1ea974e2dd7208b88150d2af Mon Sep 17 00:00:00 2001
From: Benjamin Morris
Date: Fri, 3 Nov 2023 10:25:24 -0700
Subject: [PATCH 3/3] lint

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index b126607eb..bf06c274f 100644
--- a/README.md
+++ b/README.md
@@ -29,7 +29,7 @@ As part of the [Allen Institute for Cell Science's](allencell.org) mission to un

 The bulk of `CytoDL`'s underlying structure bases the [lightning-hydra-template](https://github.com/ashleve/lightning-hydra-template) organization - we highly recommend that you familiarize yourself with their (short) docs for detailed instructions on running training, overrides, etc.

-Our currently available code is roughly split into two domains: image-to-image transformations and representation learning. The image-to-image code (denoted im2im) contains configuration files detailing how to train and predict using models for resolution enhancement using conditional GANs (e.g. predicting 100x images from 20x images), semantic and instance segmentation, and label-free prediction. Representation learning code includes a wide variety of Variational Auto Encoder (VAE) architectures. Due to dependency issues, equivariant autoencoders are not currently supported on Windows.
+Our currently available code is roughly split into two domains: image-to-image transformations and representation learning. The image-to-image code (denoted im2im) contains configuration files detailing how to train and predict using models for resolution enhancement using conditional GANs (e.g. predicting 100x images from 20x images), semantic and instance segmentation, and label-free prediction. Representation learning code includes a wide variety of Variational Auto Encoder (VAE) architectures. Due to dependency issues, equivariant autoencoders are not currently supported on Windows.

 As we rely on recent versions of pytorch, users wishing to train and run models on GPU hardware will need up-to-date NVIDIA drivers. Users with older GPUs should not expect code to work out of the box. Similarly, we do not currently support training/predicting on Mac GPUs. In most cases, cpu-based training should work when GPU training fails.
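For reference, here is a minimal sketch of how the `equiv` extra in `pyproject.toml` would read once PATCH 2/3 is applied. The `[project.optional-dependencies]` header and the four entries come from the hunk itself; the indentation and the closing bracket are assumptions, since the hunk ends at the `e3nn` entry.

```toml
# Sketch of the "equiv" optional-dependency group after PATCH 2/3.
# Closing bracket and indentation are assumed; the hunk above ends at the "e3nn" entry.
[project.optional-dependencies]
equiv = [
  "lie_learn==0.0.1.post1",  # pinned PyPI release, replacing the git reference
  "escnn~=1.0.7",
  "py3nj==0.1.2",
  "e3nn~=0.5.1"
]
```

From a local checkout, something like `pip install ".[equiv]"` would then resolve `lie_learn` from PyPI rather than from the pinned git commit, which appears to be the point of the "add normal lie learn" change.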