COCO Annotator next generation

This version of COCO Annotator is a straight port of the official JsBroks COCO Annotator to Vue 3.2+.

Before going further: if you already use JsBroks COCO Annotator and want to switch to this version, you will have to change the user password hash method in the Mongo database (a Werkzeug 3 breaking change).
To do so, install an old, compatible Werkzeug Python library and run the change_password_hash_type.py script:

pip install werkzeug==2.0.3
pip install pymongo
python change_password_hash_type.py

By default change_password_hash_type.py only changes the password hash type for the admin user.
Edit this file and run it again to migrate the other user accounts.
You will need to know every user's password to migrate all of them.
You can then use any recent Werkzeug version.
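
For reference, the idea behind the migration is simply to verify each known password against its old hash using the legacy Werkzeug, re-hash it with a method recent Werkzeug versions still support, and write it back to MongoDB. Below is a minimal sketch of that idea, not the actual script; the connection URI, database, collection and field names are assumptions, adjust them to your deployment:

    # Migration sketch: run with werkzeug==2.0.3 and pymongo installed.
    from pymongo import MongoClient
    from werkzeug.security import check_password_hash, generate_password_hash

    # username -> known plain-text password; add the other accounts here
    users_to_migrate = {"admin": "known-admin-password"}

    client = MongoClient("mongodb://localhost:27017")   # assumed URI
    users = client["flask"]["user"]                     # assumed database/collection names

    for username, plain_password in users_to_migrate.items():
        doc = users.find_one({"username": username})
        if doc is None or not check_password_hash(doc["password"], plain_password):
            print(f"skipping {username}: not found or password mismatch")
            continue
        # pbkdf2:sha256 hashes remain verifiable with recent Werkzeug versions
        new_hash = generate_password_hash(plain_password, method="pbkdf2:sha256")
        users.update_one({"_id": doc["_id"]}, {"$set": {"password": new_hash}})
        print(f"migrated {username}")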

Features

Now we can talk about what this version will provide:

  • Vue 3.2+ style code
  • upgraded Python library versions
  • upgraded JavaScript packages
  • uses Bootstrap 5
  • uses Pinia instead of Vuex
  • uses Vite instead of vue-cli
  • added a Segment Anything tool (SAM-HQ) to help segment objects (1 click to segment an object)
  • activated GPU support (it seems this was not really enabled even when using docker-compose.gpu.yml)
  • fixed some bugs and JavaScript errors
  • can use Detectron2 models to help segment objects
  • moved the AI tools to a dedicated container to make coco-annotator lighter when not using them
  • moved DEXTR from TensorFlow to PyTorch
  • added SAM, SAM2 (Segment Anything Model) and ZIM (Zero-Shot Image Matting for Anything, aka ZIM Anything) support for 1-click object segmentation
  • maybe more ...

Features you will lose and bugs that have been introduced:

  • the watchdog that detects new images has been disabled (it was freezing the application; this feature may be reactivated later)
  • pinch zoom has been removed (a replacement library still needs to be found and tested on a tablet)
  • exported JSON annotation files seem OK but are not yet fully tested
  • hopefully no other features are lost or bugs introduced...

Build docker images

There are currently no pre-built docker images.
You will have to build the docker images yourself.
Note that the docker images take around 15 GB of disk space. Make sure you have at least 30 GB of free disk space to build all the images.

First, build the base image:

    bash ./build_base_image.sh

Then build the component images depending on your needs.

Production images without AI support:

docker compose -f ./docker-compose.build.yml build
docker compose -f ./docker-compose.yml up

Dev images with AI support:

docker compose -f ./docker-compose.dev.yml build
docker compose -f ./docker-compose.dev.yml up

Production images with AI support:

docker compose -f ./docker-compose.gpu.yml build
docker compose -f ./docker-compose.gpu.yml up



COCO Annotator is a web-based image annotation tool designed for versatile and efficient labeling of images to create training data for image localization and object detection. It provides many distinct features, including the ability to label an image segment (or part of a segment), track object instances, label objects with disconnected visible parts, and efficiently store and export annotations in the well-known COCO format. The annotation process is delivered through an intuitive and customizable interface and provides many tools for creating accurate datasets.


Join our growing Discord community of ML practitioners


Image annotations using COCO Annotator

Check out the video for a basic guide on installing and using COCO Annotator.


Note: This video is from v0.1.0 and many new features have been added.


If you enjoy my work please consider supporting me


Features

Several annotation tools are currently available, most of them as desktop installations. Once installed, users can manually define regions in an image and create a textual description. Generally, objects can be marked by a bounding box, either directly, through a masking tool, or by marking points to define the containing area. COCO Annotator allows users to annotate images using free-form curves or polygons and provides many additional features where other annotation tools fall short.

  • Directly export to COCO format
  • Segmentation of objects
  • Ability to add key points
  • Useful API endpoints to analyze data
  • Import datasets already annotated in COCO format
  • Annotate disconnected objects as a single instance
  • Label image segments with any number of labels simultaneously
  • Allow custom metadata for each instance or object
  • Advanced selection tools such as DEXTR, MaskRCNN and Magic Wand
  • Annotate images with semi-trained models
  • Generate datasets using Google Images
  • User authentication system
  • Auto Annotate using MaskRCNN, MaskFormer (thanks to rune-l's work) or Detectron2 models (a rough sketch of what such a model returns is shown below)
  • Auto Annotate using SAM (Facebook's Segment Anything)

For examples and more information check out the wiki.
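
The auto-annotate entries above rely on instance-segmentation models producing masks that can be turned into COCO annotations. As a rough illustration (not the annotator's integration code; the model choice, score threshold and image path are placeholders), this is what a Detectron2 model returns for an image:

    # Sketch: run a pretrained Detectron2 Mask R-CNN and collect its masks.
    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # keep reasonably confident detections
    predictor = DefaultPredictor(cfg)

    image = cv2.imread("example.jpg")            # BGR, as DefaultPredictor expects by default
    instances = predictor(image)["instances"].to("cpu")
    masks = instances.pred_masks.numpy()         # N x H x W boolean masks, one per detected object
    classes = instances.pred_classes.numpy()     # COCO class ids

Each boolean mask can then be converted to COCO polygons or RLE before being stored as an annotation.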

Vue3 porting attempt

  • Make source code work with Vue3 in Vue2 compatibility mode
  • Modify Vuex store files to Vuex 4 style
  • Convert mixins to Composition API composables and components
  • Fix eslint errors 
  • Remove JQuery library
  • Find a library to replace vue-touch for pinch zoom
  • Make source code work without Vue2 compatibility mode
  • Understand Keypoints and make them fully work
  • Restore all shortcuts
  • Fix undefined Category error when clicking on some annotations
  • Fix recursive warnings and make prod version work
  • Understand why categories and annotations are not updated in some objects until we click on a category or an annotation after going to the next or previous image (annotation ids are not the right ones in this case)

At this stage, the source code has only been tested using docker-compose.dev.yml. A lot of eslint errors appear, but the application is functional.

Using SAM (Segment Anything) - deprecated in favor of SAM-HQ

This coco-annotator version is a Vue 3 port of the original jsbroks coco-annotator, which is based on Vue 2. This version still has some bugs, mainly introduced by new Vue 3 behaviour or the Vue 3 conversion, so use it at your own risk. Most libraries have been updated to more recent versions.

To use SAM you will need a CUDA-capable graphics card (or modify the sources to use the CPU; untested). First rebuild the base image using the following command from build_base_image.sh:

docker build -f ./backend/Dockerfile . -t jsbroks/coco-annotator:python-env

Download the SAM model:

cd models;bash sam_model.sh

Then build (or rebuild) the coco-annotator images using the docker-compose.dev.yml or docker-compose.gpu.yml file:

docker compose -f ./docker-compose.dev.yml build

You can then run coco-annotator:

docker compose -f ./docker-compose.dev.yml up

Now select or create a new annotation.
Select the new SAM button in the left panel (under the DEXTR button).
Click on the object you want to create a mask for.
A new mask should be created.
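
Under the hood, this 1-click workflow is a single point prompt to the segment-anything predictor. Below is a minimal sketch of that call, assuming the checkpoint path from docker-compose and a placeholder click position; the backend wires the real click coordinates from the canvas:

    # 1-click point prompt with the segment-anything library (sketch).
    import cv2
    import numpy as np
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="/models/sam_vit_b_01ec64.pth")
    sam.to("cuda")                                # requires a CUDA-capable GPU
    predictor = SamPredictor(sam)

    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
    predictor.set_image(image)

    masks, scores, _ = predictor.predict(
        point_coords=np.array([[320, 240]]),      # the clicked pixel (x, y)
        point_labels=np.array([1]),               # 1 = foreground click
        multimask_output=True,
    )
    best_mask = masks[np.argmax(scores)]          # boolean H x W mask for the clicked object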

Using SAM-HQ (Segment Anything in High Quality)

SAM-HQ is a drop-in replacement for SAM with more precision: https://github.com/SysCV/sam-hq

Simply download the SAM-HQ model:

cd models;bash sam_hq_model.sh

Rebuild the docker images:

docker compose -f ./docker-compose.dev.yml build

You can then run coco-annotator:

docker compose -f ./docker-compose.dev.yml up

SAM-HQ is now the default, but you can still use the original SAM by modifying backend/webserver/Dockerfile:
uncomment the segment-anything install line and comment out the sam-hq line, which currently read:

    # RUN pip install segment-anything
    RUN git clone https://github.com/SysCV/sam-hq.git && cd sam-hq && pip install -e .

And modify docker-compose.dev.yml:

- SAM_MODEL_FILE=/models/sam_vit_b_01ec64.pth
# - SAM_MODEL_FILE=/models/sam_hq_vit_b.pth

Download the SAM model (if not already done), rebuild the docker images and restart coco-annotator using docker-compose.

Using SAM2 (Segment Anything 2)

SAM2 is the latest segmentation tool from Meta: https://github.com/facebookresearch/sam2/

Download the SAM2 model:

cd models;bash sam2_model.sh

Rebuild the docker images:

docker compose -f ./docker-compose.dev.yml build

You can then run coco-annotator:

docker compose -f ./docker-compose.dev.yml up

In coco-annotator, select a class, click on the SAM2 button, then click an object.
The object should be segmented.

The default model is sam2.1_hiera_base_plus. It seems to be a good compromise between speed and accuracy.
You can use any other SAM2 model if needed.
Simply download the model and adapt the docker-compose lines.
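
For reference, the equivalent point prompt with the sam2 library looks roughly like this; the checkpoint and config names below follow the upstream sam2 repository and may differ in this project, and the click coordinates are placeholders:

    # 1-click point prompt with the sam2 library (sketch).
    import cv2
    import numpy as np
    import torch
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    checkpoint = "/models/sam2.1_hiera_base_plus.pt"     # assumed path
    model_cfg = "configs/sam2.1/sam2.1_hiera_b+.yaml"    # config name from the sam2 repo
    predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

    image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
    with torch.inference_mode():
        predictor.set_image(image)
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[320, 240]]),         # the clicked pixel (x, y)
            point_labels=np.array([1]),                  # 1 = foreground click
            multimask_output=True,
        )
    best_mask = masks[np.argmax(scores)]                 # mask for the clicked object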

Using ZIM (ZIM Anything, Zero-Shot Image Matting for Anything)

ZIM is the latest segmentation tool from NAVER Cloud Corp: https://github.com/naver-ai/ZIM/

Download the ZIM model:

cd models;bash zim_model.sh

Rebuild the docker images:

docker compose -f ./docker-compose.dev.yml build

You can then run coco-annotator:

docker compose -f ./docker-compose.dev.yml up

In coco-annotator, select a class, click on the ZIM button, then click an object.
The object should be segmented.

The default model is vit_b. It seems to be a good compromise between speed, memory and accuracy.
You can use any other ZIM model if needed.
Simply download the model and adapt the docker-compose lines.

Demo

Login Information
Username: admin
Password: password

https://annotator.justinbrooks.ca/

Backers

If you enjoy the development of coco-annotator or are looking for an enterprise annotation tool, consider checking out DataTorch.

https://datatorch.io · [email protected] · Next generation of coco-annotator

Built With

Thanks to all these wonderful libraries/frameworks:

Backend

  • Flask - Python web microframework
  • MongoDB - Cross-platform document-oriented database
  • MongoEngine - Python object data mapper for MongoDB

Frontend

  • Vue - JavaScript framework for building user interfaces
  • Axios - Promise based HTTP client
  • PaperJS - HTML canvas vector graphics library
  • Bootstrap - Frontend component library

License

MIT

Citation

  @MISC{cocoannotator,
    author = {Justin Brooks},
    title = {{COCO Annotator}},
    howpublished = "\url{https://github.com/jsbroks/coco-annotator/}",
    year = {2019},
  }
