- mp4 to png frame extraction done with convert_video.bat (rough Python equivalent sketched below)
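Not the repo's code, but roughly what convert_video.bat presumably does, as a Python sketch (paths and naming here are assumptions):

```python
# Hedged sketch of the mp4 -> png step, assuming a plain frame dump
import cv2
from pathlib import Path

def video_to_frames(video_path: str, out_dir: str) -> None:
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # zero-padded names keep the frames sorted for the later steps
        cv2.imwrite(f"{out_dir}/frame_{i:05d}.png", frame)
        i += 1
    cap.release()

video_to_frames("path/to/hedgehog_video.mp4", "frames")
```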
- segmentations hand-annotated with 3D Slicer
- The annotations were saved as .nrrd files in the mask directory, then converted to .png files for training using prepare_dataset.py (conversion sketched below)
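A minimal sketch of what the .nrrd -> .png conversion could look like, assuming binary label maps from Slicer and using pynrrd; the directory names are assumptions, not the actual prepare_dataset.py:

```python
# Hedged sketch of the nrrd -> png mask conversion
import nrrd                      # pip install pynrrd
import numpy as np
import imageio.v2 as imageio
from pathlib import Path

def nrrd_to_png(nrrd_path: Path, out_dir: Path) -> None:
    data, _header = nrrd.read(str(nrrd_path))
    # class indices (0 = background, 1 = hedgehog), not 0/255,
    # since fastai segmentation masks expect class codes per pixel
    mask = (np.squeeze(data) > 0).astype(np.uint8)
    imageio.imwrite(out_dir / f"{nrrd_path.stem}.png", mask)

out_dir = Path("masks_png")
out_dir.mkdir(exist_ok=True)
for f in Path("mask").glob("*.nrrd"):
    nrrd_to_png(f, out_dir)
```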
- The current version uses transfer learning and data augmentation with the fastai library to train a segmentation model on a small hand-annotated dataset (training sketched below)
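A minimal sketch of the fastai setup, not the actual training script; the paths, class codes, and epoch count are assumptions:

```python
# Hedged sketch: transfer learning + data augmentation with fastai
from fastai.vision.all import *

path = Path("data")

def label_func(f):
    # assumes each mask png shares the image's file stem
    return path / "masks_png" / f"{f.stem}.png"

dls = SegmentationDataLoaders.from_label_func(
    path,
    fnames=get_image_files(path / "images"),
    label_func=label_func,
    codes=["background", "hedgehog"],   # pixel values 0 / 1 in the masks
    batch_tfms=aug_transforms(),        # the data augmentation
    bs=4,
)
learn = unet_learner(dls, resnet34)     # transfer learning from ImageNet weights
learn.fine_tune(8)
learn.export("hedgiefinder.pkl")
```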
- turned into a command-line program, hedgiefinder:
python segmentation/hedgiefinder.py path/to/hedgehog_video.mp4
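A hypothetical shape for the entry point; the real script's flags and internals may differ:

```python
# Hedged sketch of a hedgiefinder.py CLI entry point
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(
        description="Segment the hedgehog in every frame of a video")
    parser.add_argument("video", help="path to the input .mp4")
    parser.add_argument("--out", default="segmented.mp4",
                        help="where to write the segmented video")
    args = parser.parse_args()
    print(f"segmenting {args.video} -> {args.out}")
    # model loading + per-frame prediction would go here

if __name__ == "__main__":
    main()
```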
- gif made with make_preview_gif.bat (rough Python equivalent below)
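Rough Python equivalent of the gif step, assuming it stitches the frames back together; the frame directory and fps are guesses:

```python
# Hedged sketch: frames -> gif
import imageio.v2 as imageio
from pathlib import Path

frames = [imageio.imread(p) for p in sorted(Path("frames").glob("*.png"))]
imageio.mimsave("preview.gif", frames, fps=10)
```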
- prepare_dataset.py includes an option to convert corrected .nrrd files
- using the segmentation maps generated with hedgiefinder, Xiaomi's location over time can be tracked as the (x, y) coordinates at the center of each segmentation (found using scikit-image's regionprops)
- label_centers is the relevant script for finding these coordinates (a sketch of the idea below)
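A minimal sketch of the centroid extraction with scikit-image's regionprops, assuming one binary mask png per frame; label_centers itself may differ, and the file names are assumptions:

```python
# Hedged sketch: per-frame segmentation centers via regionprops
import imageio.v2 as imageio
import numpy as np
from pathlib import Path
from skimage.measure import label, regionprops

centers = []
for p in sorted(Path("masks_png").glob("*.png")):
    mask = imageio.imread(p) > 0
    regions = regionprops(label(mask))
    if regions:
        # take the largest region in case of spurious blobs
        r = max(regions, key=lambda reg: reg.area)
        cy, cx = r.centroid           # regionprops returns (row, col)
        centers.append((cx, cy))      # store as (x, y)

np.save("centers.npy", np.array(centers))
```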
- Finally, the sum of all these points over time is overlaid on a single frame using where_is_xiaomi.py to get a heat map of the night's activity (sketched below)
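A hedged sketch of the heat-map idea: bin the centers into a 2D histogram, blur it, and draw it over one background frame. The file names carry over from the sketches above and are assumptions, not where_is_xiaomi.py itself:

```python
# Hedged sketch: activity heat map over a single frame
import imageio.v2 as imageio
import matplotlib.pyplot as plt
import numpy as np
from scipy.ndimage import gaussian_filter

frame = imageio.imread("frames/frame_00000.png")
centers = np.load("centers.npy")                 # (x, y) per frame
h, w = frame.shape[:2]

# histogram2d's first axis follows its first input, so pass rows (y) first
heat, _, _ = np.histogram2d(centers[:, 1], centers[:, 0],
                            bins=[h, w], range=[[0, h], [0, w]])
heat = gaussian_filter(heat, sigma=15)           # smooth points into a heat map

plt.imshow(frame)
plt.imshow(heat, cmap="hot", alpha=0.5)
plt.axis("off")
plt.savefig("night_activity.png", bbox_inches="tight")
```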
- example: use predict_overlay_url to overlay an image from a URL (here, a poop emoji) on the video:

from hedgiefinder import predict_overlay_url
predict_overlay_url('old/vid1.mp4',
                    'https://emojipedia-us.s3.dualstack.us-west-1.amazonaws.com/thumbs/240/mozilla/36/pile-of-poo_1f4a9.png',
                    videoname='stinky_girl.mp4', sz=(100, 100))
- consider making a Docker image of the model and a Streamlit app (rough sketch after this list)
  - upload option for a single photo, a group of photos, or a video
  - see the result, then option to download as a video or gif
  - option for overlay or segmentation output
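A hedged sketch of the planned Streamlit front end; nothing like this exists in the repo yet, and it assumes the model was exported to hedgiefinder.pkl as in the training sketch above:

```python
# Hedged sketch of a possible Streamlit app (photos only, to start)
import numpy as np
import streamlit as st
from fastai.vision.all import load_learner, PILImage

learn = load_learner("hedgiefinder.pkl")   # exported model; path is an assumption

st.title("Where is Xiaomi?")
uploads = st.file_uploader("Upload photo(s)", type=["png", "jpg"],
                           accept_multiple_files=True)

for f in uploads or []:
    img = PILImage.create(f)
    pred, _, _ = learn.predict(img)        # pred is the predicted mask
    st.image(img, caption=f.name)
    st.image((np.array(pred) * 255).astype("uint8"), caption="segmentation")
# video upload, overlay mode, and gif/video download would come next
```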
- could host using DigitalOcean again: https://cloud.digitalocean.com/droplets/new?i=f7a007&size=s-4vcpu-8gb&region=sfo3
- Alternatively, check out this blog: https://towardsdatascience.com/deploy-machine-learning-model-on-google-kubernetes-engine-94daac85108b
docker run --ipc=host --gpus all -p 8888:8888 fastdotai/fastai ./run_jupyter.sh
- got errors if I didn't have --ipc=host
- to use GPUs, needed special WSL NVIDIA CUDA drivers installed