
# SAM2_1_fine_tune

The Segment Anything Model 2 (SAM 2) is a foundation model for prompt-based visual segmentation in both images and videos. It uses a simple transformer architecture with streaming memory for efficient processing. In this project, SAM 2 is adapted to a customized dataset through targeted fine-tuning and achieves robust performance on the target segmentation task.

*Figure: SAM 2 model architecture diagram (source).*
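
As a quick illustration of what prompt-based segmentation means in practice, the sketch below runs single-image inference with a point prompt using the upstream SAM 2 API (`build_sam2` and `SAM2ImagePredictor`). The config name, checkpoint name, and image path are placeholders, not files shipped with this repository; adjust them to what you actually downloaded.

```python
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Illustrative config/checkpoint names; use the ones you downloaded.
model = build_sam2(
    "configs/sam2.1/sam2.1_hiera_s.yaml",
    ckpt_path="checkpoints/sam2.1_hiera_small.pt",
    device="cuda",
)
predictor = SAM2ImagePredictor(model)

# Placeholder image path; any 8-bit RGB image works for this demo.
image = np.array(Image.open("example.png").convert("RGB"))

with torch.inference_mode():
    predictor.set_image(image)
    # A single foreground point prompt at (x, y); label 1 = foreground.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[256, 256]]),
        point_labels=np.array([1]),
        multimask_output=False,
    )

print(masks.shape, scores)  # one binary mask at the image resolution, plus its score
```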

## Description

The key distinction between fine-tuning a model and training one from scratch lies in the initial state of the weights and biases. When training from scratch, these parameters are randomly initialized according to some initialization scheme, so the model starts with no prior knowledge of the task and initially performs poorly. Fine-tuning instead begins from pretrained weights and biases, allowing the model to adapt more effectively to the custom dataset. A minimal sketch of this difference is shown below.
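
The sketch loads pretrained SAM 2.1 weights as the starting point for fine-tuning instead of relying on random initialization. The config and checkpoint names are illustrative, and freezing the image encoder is shown only as one common fine-tuning choice, not necessarily the configuration used in this repository.

```python
import torch
from sam2.build_sam import build_sam2

# Fine-tuning: start from pretrained SAM 2.1 weights (names are illustrative).
model = build_sam2(
    "configs/sam2.1/sam2.1_hiera_s.yaml",
    ckpt_path="checkpoints/sam2.1_hiera_small.pt",
    device="cuda",
)

# Training from scratch would instead skip the checkpoint and keep the randomly
# initialized weights:
#   model = build_sam2("configs/sam2.1/sam2.1_hiera_s.yaml", ckpt_path=None, device="cuda")

# One common choice (an assumption here, not necessarily this repository's setup):
# freeze the heavy image encoder and update only the remaining components.
for name, param in model.named_parameters():
    param.requires_grad = not name.startswith("image_encoder")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```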

The dataset used for fine-tuning SAM 2 consists of 8-bit RGB images with a 50 cm spatial resolution, annotated for binary segmentation.
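
As a small, hedged sketch of that data format, one image/mask pair could be loaded as follows; the file paths are placeholders and not the repository's actual layout.

```python
import numpy as np
from PIL import Image

# Illustrative loading of a single training pair; file names are placeholders.
image = np.array(Image.open("data/images/tile_0001.png").convert("RGB"))  # 8-bit RGB tile (50 cm/pixel)
mask = np.array(Image.open("data/masks/tile_0001.png").convert("L"))      # single-channel annotation

# Binary segmentation target: 0 = background, 1 = foreground.
binary_mask = (mask > 0).astype(np.uint8)

assert image.dtype == np.uint8
assert image.shape[:2] == binary_mask.shape
```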

## Getting Started

### Dependencies

### Installation

#### Installation Instructions (For Windows)

The commands below use `mv`, `cp`, and shell scripts, so on Windows run them in a Bash-compatible shell such as Git Bash.

```bash
# 1. Create and activate the Conda environment
conda create -n sam2_1 python=3.11
conda activate sam2_1

# 2. Install PyTorch and its dependencies
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# 3. Go to the main folder where the script is located
cd ../sam2_1_fine_tune-main

# 4. Clone the SAM2 repository and rename the folder to avoid conflicts
git clone https://github.com/facebookresearch/sam2.git
mv sam2 sam2_conf

# 5. Change into the 'sam2_conf' directory and copy its 'sam2' package into the 'sam2_1_fine_tune-main' folder
cd sam2_conf
cp -r sam2 ../sam2/

# 6. Install the SAM2 package in editable mode
pip install -e .

# 7. Download the model checkpoints into both checkpoint folders
cd checkpoints && ./download_ckpts.sh
cd ../..
cd checkpoints_sam2 && ./download_ckpts.sh

# 8. Go to the 'environment' folder and install additional dependencies
cd ../environment
pip install -r requirements.txt
```

#### Installation Instructions (For Linux)

```bash
# 1. Create and activate the Conda environment
conda create -n sam2_1 python=3.11
conda activate sam2_1

# 2. Install PyTorch and its dependencies
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

# 3. Go to the main folder where the script is located
cd ../sam2_1_fine_tune-main

# 4. Clone the SAM2 repository and rename the folder to avoid conflicts
git clone https://github.com/facebookresearch/sam2.git
mv sam2 sam2_conf

# 5. Change into the 'sam2_conf' directory and copy its 'sam2' package into the 'sam2_1_fine_tune-main' folder
cd sam2_conf
cp -r sam2 ../sam2/

# 6. Install the SAM2 package in editable mode
pip install -e .

# 7. Download the model checkpoints into both checkpoint folders
cd checkpoints && ./download_ckpts.sh
cd ../..
cd checkpoints_sam2 && ./download_ckpts.sh

# 8. Go to the 'environment' folder and install additional dependencies
cd ../environment
pip install -r requirements_2.txt
```
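
After either installation path, a quick way to confirm the environment is usable is to run a minimal import check inside the activated `sam2_1` environment (a sanity check added here, not part of the original instructions):

```python
# Quick sanity check: run inside the activated 'sam2_1' environment.
import torch
from sam2.build_sam import build_sam2  # import succeeds only if the SAM 2 package installed correctly

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
```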

### Executing program

Set the parameters in `run_pipeline.py`, then run the script (e.g. `python run_pipeline.py`) from the activated `sam2_1` environment; a purely hypothetical example of such a parameter block is sketched below.
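
The variable names below are illustrative placeholders only and are not taken from the actual script; consult `run_pipeline.py` for the real parameters.

```python
# Hypothetical parameter block; the real names in run_pipeline.py may differ.
MODEL_CFG = "configs/sam2.1/sam2.1_hiera_s.yaml"        # SAM 2.1 model config (illustrative)
CHECKPOINT = "checkpoints_sam2/sam2.1_hiera_small.pt"   # pretrained checkpoint to start from
DATA_DIR = "data"                                       # folder with 8-bit RGB tiles and binary masks
NUM_EPOCHS = 50
LEARNING_RATE = 1e-5
```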

## Authors

## Acknowledgments