Pipeline for segmenting and quantifying nuclear marker expression in organoid slices. Written by Damian Dalle Nogare at the BioImage Analysis Infrastructure Unit of the National Facility for Data Handling and Analysis at Human Technopole, Milan. Licensed under BSD-3.
## Installation

- Copy the contents of the pipeline to a folder (you can pull the latest version into a git repository using the command `git pull https://github.com/nobias-fht/harschnitz-organoid-nuclei`)
- In the terminal, navigate to that folder
- Create a conda environment by typing `conda env create -f environment.yml`
- In Fiji, go to Help → Update and then click “Manage Update Sites”
- Add the following update sites:
  - BaSiC
  - Labkit
  - IJPB-Plugins
  - Local Z Projector
- Move the file `local_z.py` into the Fiji folder, under the `scripts/plugins` folder. If this folder does not exist, you may have to create it.

Before running, ensure that you have the latest version of the scripts by running the terminal command `git pull` from the folder the scripts are installed in.
## Renaming the images and converting from nd2 to tif

- Open an Anaconda terminal and navigate to where you stored the pipeline
- Activate the environment by typing `conda activate harschnitz_pipeline`
- Create a folder to store the pre-processed images by typing `mkdir folder_name` (e.g. `mkdir step_0_output`)
- Run the file `step_0_rename_files.py` by typing `python3 step_0_rename_files.py`
- When prompted, select the location of the nd2 files, and then the output folder you made above
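The renaming script's internals are not shown in this guide, but the file mapping it performs can be sketched as follows. This is a minimal illustration, not the actual implementation: the helper name and the stem-preserving naming rule are assumptions, and `step_0_rename_files.py` may rename files differently.

```python
from pathlib import Path


def tif_output_path(nd2_file: str, output_dir: str) -> Path:
    """Map an input .nd2 file to a .tif path in the output folder.

    Hypothetical helper: the naming scheme used by step_0_rename_files.py
    may differ from this simple stem-preserving rule.
    """
    return Path(output_dir) / (Path(nd2_file).stem + ".tif")
```

Converting the pixel data itself would additionally require an nd2 reader and a tif writer; the point here is only that each input file ends up as a correspondingly named tif in the output folder you created.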
## Pre-processing the data

- Open the script `step_1_preprocessing.ijm` by dragging it into Fiji
- Run the script and fill in the parameters:
  - Select the folder that the processed images from the previous step are in
  - Choose a folder to store the output images in (important: this will be used in the following steps)
  - For the Labkit classifier, navigate to the `models` folder and select `whole_slide_classifier.classifier`
## Segmentation and Quantification

- Confirm that the conda environment is activated. If not, activate it by typing `conda activate harschnitz_pipeline`
- Open the `config.yaml` file:
  - Update the `channel_names` entry
  - Update the `weighting` entry (if necessary)
  - Update the `channels` entry (if necessary)
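The three entries above live in `config.yaml`. As a purely illustrative fragment (the marker names, weights, and channel indices below are placeholders, and your file's exact layout may differ):

```yaml
# Placeholder values: edit to match your experiment
channel_names: [DAPI, marker_1, marker_2]  # hypothetical channel names
weighting: [1.0, 1.0, 1.0]                 # per-channel threshold weights
channels: [0, 1, 2]                        # channel indices to process
```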
- Go back to the terminal
- Make an output folder to store the results in by typing `mkdir folder_name` (e.g. `mkdir step_2_output`)
- Run the postprocessing and segmentation step by typing `python3 step_2_postprocesing_and_segmentation.py` into the terminal
- Follow the prompts to select the output folder from step 1, and then the output folder you just made
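Because this step reads from the step-1 output folder and writes into the folder you just created, it can save time to sanity-check both before launching the script. This is a hypothetical pre-flight check, not part of the pipeline:

```python
from pathlib import Path


def preflight(step1_output: str, step2_output: str) -> list[str]:
    """Hypothetical sanity check before running step 2: both folders must
    exist, and the step-2 output folder should start out empty."""
    problems = []
    if not Path(step1_output).is_dir():
        problems.append(f"missing step-1 output folder: {step1_output}")
    if not Path(step2_output).is_dir():
        problems.append(f"missing step-2 output folder: {step2_output}")
    elif any(Path(step2_output).iterdir()):
        problems.append(f"step-2 output folder is not empty: {step2_output}")
    return problems
```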
## Checking the results

- Check files and thresholds using `python3 check_thresholds.py`
  - Click the “Load an Image” button and open the `restitched` folder. Within this folder, open a folder containing the image to check and select OK
  - Select a channel using the dropdown and click the “apply new threshold” button
  - Check the predictions of the pipeline
  - If necessary, adjust the appropriate weight using the slider
  - Repeat as necessary for other channels and other files (opening a new file using the “Load an Image” button when needed)
  - Update the weights in the `config.yaml` file when finished
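Exactly how a channel's weight interacts with its threshold is decided by the pipeline code; as a rough illustration of the idea, a weight that rescales a base cutoff could look like this (the function name and formula are assumptions, not the pipeline's actual implementation):

```python
def positive_cells(intensities, base_threshold, weight=1.0):
    """Mark cells as positive if their intensity exceeds the weight-scaled
    threshold. Hypothetical: the real pipeline's rule for combining the
    weighting entry with its thresholds may differ."""
    cutoff = base_threshold * weight
    return [value > cutoff for value in intensities]
```

Under this sketch, lowering a weight below 1.0 lowers the cutoff, so more cells in that channel are called positive; raising it has the opposite effect.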
## Quantify the data

- Run the quantification by typing `python3 step_3_quantification.py`
- When prompted, select the output folder from step 2
- Within the output folder, there will be two new folders:
  - `report` will contain summary `csv` files that contain the results of the analysis
  - `positive_cell_images` will contain a single binary image for each image and channel, in which the positive cells are marked