From 59a840ece70c1a002ccbbc99f9ccf152a0385334 Mon Sep 17 00:00:00 2001
From: RizwanMunawar
Date: Sun, 24 Nov 2024 15:17:11 +0000
Subject: [PATCH] deploy: 3ad34b420e8f7a714b7c223364ce248b65f9e5d4

---
 404.html | 2 +
 index.html | 23 +
 search/search_index.json | 2 +-
 sitemap.xml | 4 +
 sitemap.xml.gz | Bin 257 -> 275 bytes
 usecases/apple-counting/index.html | 23 +
 usecases/bread-counting/index.html | 41 +
 usecases/crowd-density-estimation/index.html | 1133 +++++++++++++++++
 usecases/items-counting/index.html | 23 +
 .../index.html | 23 +
 .../segmentation-masks-detect-sam2/index.html | 23 +
 11 files changed, 1296 insertions(+), 1 deletion(-)
 create mode 100644 usecases/crowd-density-estimation/index.html

diff --git a/404.html b/404.html
index acf7c8b..104065b 100644
--- a/404.html
+++ b/404.html
diff --git a/index.html b/index.html
index cd5f3ec..ef13879 100644
--- a/index.html
+++ b/index.html
diff --git a/search/search_index.json b/search/search_index.json
index 4272186..fd4a682 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Computer Vision Use Cases | Detection, Segmentation and More","text":"

    Watch a quick overview of our top computer vision projects in action \ud83d\ude80

    "},{"location":"#object-detection-advanced-applications","title":"Object Detection | Advanced Applications","text":"

    Object detection is a pivotal computer vision technique that identifies and locates objects in images or videos. It integrates classification and localization to recognize object types and mark positions using bounding boxes. Common applications include autonomous driving, surveillance, and industrial automation.

    Object Detection Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases","title":"Featured Use Cases:","text":"

    Explore key object detection projects we\u2019ve implemented, complete with technical insights:

    "},{"location":"#object-tracking-monitoring-movement","title":"Object Tracking | Monitoring Movement","text":"

    Object tracking monitors object movement across video frames. Starting with detection in the first frame, it tracks positions and interactions in subsequent frames. Common applications include surveillance, traffic monitoring, and sports analysis.

    Object Tracking Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_1","title":"Featured Use Cases:","text":"

    Explore our object tracking projects, showcasing technical depth and practical applications:

    "},{"location":"#pose-estimation-key-point-analysis","title":"Pose Estimation | Key Point Analysis","text":"

    Pose estimation predicts spatial positions of key points on objects or humans, enabling machines to interpret dynamics. This technique can be used in sports analysis, healthcare, and animation.

    Pose Estimation Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_2","title":"Featured Use Cases:","text":"

    Uncover our pose estimation projects with practical applications:

    1. Dog Pose Estimation: \ud83d\udc3e Learn how to estimate dog poses using Ultralytics YOLO11, unlocking new possibilities in animal behavior analysis.
    "},{"location":"#object-counting-automation-at-scale","title":"Object Counting | Automation at Scale","text":"

    Object counting identifies and tallies objects in images or videos. Leveraging detection or segmentation techniques, it\u2019s widely used in industrial automation, inventory tracking, and crowd management.

    Object Counting Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_3","title":"Featured Use Cases:","text":"

    Explore our object counting projects, complete with practical applications:

    "},{"location":"#image-segmentation-precise-pixel-level-analysis","title":"Image Segmentation | Precise Pixel-Level Analysis","text":"

    Image segmentation divides an image into meaningful regions to identify objects or areas of interest. Unlike object detection, it provides a precise outline of objects by labeling individual pixels. This technique is widely used in medical imaging, autonomous vehicles, and scene understanding.

    Instance Segmentation Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_4","title":"Featured Use Cases:","text":"

    Delve into our instance segmentation projects, featuring technical details and real-world applications:

    "},{"location":"#faq","title":"FAQ","text":""},{"location":"#what-makes-ultralytics-yolo-models-unique","title":"What makes Ultralytics YOLO models unique?","text":"

    Ultralytics YOLO models excel in real-time performance, high accuracy, and versatility across tasks like detection, tracking, segmentation, and counting. They are optimized for edge devices and seamlessly integrate into diverse workflows.

    "},{"location":"#how-does-the-tracking-module-enhance-object-detection","title":"How does the tracking module enhance object detection?","text":"

    The tracking module goes beyond detection by monitoring objects across video frames, providing trajectories and interactions. It's ideal for real-time applications like traffic monitoring, surveillance, and sports analysis.

    "},{"location":"#can-the-object-counting-implementation-handle-dynamic-environments","title":"Can the Object Counting implementation handle dynamic environments?","text":"

Yes. The Object Counting implementation is designed for dynamic settings such as conveyor belts and crowded scenes: it accurately detects and counts objects in real time, ensuring operational efficiency.

    "},{"location":"usecases/apple-counting/","title":"Apple Counting on Conveyor Belt using Ultralytics YOLO11","text":"

    Accurate counting of apples in agricultural setups plays a significant role in yield estimation, supply chain optimization, and resource planning. Leveraging computer vision and AI, we can automate this process with impressive accuracy and efficiency.

    Apple Counting on Conveyor Belt using Ultralytics YOLO11"},{"location":"usecases/apple-counting/#hardware-model-and-dataset-information","title":"Hardware, Model and Dataset Information","text":""},{"location":"usecases/apple-counting/#real-world-applications-for-fruit-counting","title":"Real-World Applications for Fruit Counting","text":""},{"location":"usecases/apple-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/bread-counting/","title":"Bread Counting on Conveyor Belt Using Ultralytics YOLO11","text":"

    Automating bread counting on conveyor belts enhances efficiency in bakeries, ensuring accurate packaging, minimizing wastage, and optimizing production workflows. Leveraging Ultralytics YOLO11 for object detection, this solution streamlines the bread production process with precision and speed.

    Bread Counting on Conveyor Belt using Ultralytics YOLO11"},{"location":"usecases/bread-counting/#hardware-model-and-dataset-information","title":"Hardware, Model, and Dataset Information","text":""},{"location":"usecases/bread-counting/#real-world-applications-for-bread-counting","title":"Real-World Applications for Bread Counting","text":""},{"location":"usecases/bread-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/items-counting/","title":"Item Counting in Trolleys for Smart Shopping Using Ultralytics YOLO11","text":"

    Efficiently counting items in shopping trolleys can transform the retail industry by automating checkout processes, minimizing errors, and enhancing customer convenience. By leveraging computer vision and AI, this solution enables real-time item detection and counting, ensuring accuracy and efficiency.

    Item Counting in Trolleys for Smart Shopping using Ultralytics YOLO11"},{"location":"usecases/items-counting/#hardware-model-and-dataset-information","title":"Hardware, Model, and Dataset Information","text":""},{"location":"usecases/items-counting/#real-world-applications-for-item-counting-in-retail","title":"Real-World Applications for Item Counting in Retail","text":""},{"location":"usecases/items-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/items-segmentation-supermarket-ai/","title":"Revolutionizing Supermarkets: Items Segmentation and Counting with Ultralytics YOLO11 \u2764\ufe0f\u200d\ud83d\udd25","text":"

    Discover how to leverage the power of Ultralytics YOLO11 to achieve precise object segmentation and counting. In this guide, you'll learn step-by-step how to use YOLO11 to streamline processes, enhance accuracy, and unlock new possibilities in computer vision applications.

    "},{"location":"usecases/items-segmentation-supermarket-ai/#system-specifications-used-to-create-this-demo","title":"System Specifications Used to Create This Demo","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#how-to-perform-items-segmentation-and-counting","title":"How to Perform Items Segmentation and Counting","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#step-1-train-or-fine-tune-yolo11","title":"Step 1: Train or Fine-Tune YOLO11","text":"

To get started, you can train the YOLO11 model on a custom dataset tailored to your specific use case. However, if the pre-trained YOLO11 model already performs well for your application, there's no need for customization; you can use the pre-trained weights directly for faster, more efficient deployment. Explore the full details in the Ultralytics Documentation.

    "},{"location":"usecases/items-segmentation-supermarket-ai/#step-2-how-to-draw-the-segmentation-masks","title":"Step 2: How to draw the segmentation masks","text":"
    # Install ultralytics package\n# pip install ultralytics\n\nimport cv2\nfrom ultralytics import YOLO\nfrom ultralytics.utils.plotting import Annotator, colors\nfrom collections import Counter\n\nmodel = YOLO(model=\"path/to/model/file.pt\")\ncap = cv2.VideoCapture(\"path/to/video/file.mp4\")\nw, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,\n                                       cv2.CAP_PROP_FRAME_HEIGHT,\n                                       cv2.CAP_PROP_FPS))\n\nvwriter = cv2.VideoWriter(\"instance-segmentation4.avi\",\n                          cv2.VideoWriter_fourcc(*\"MJPG\"), fps, (w, h))\nwhile True:\n    ret, im0 = cap.read()\n    if not ret:\n        break\n\n    object_counts = Counter()  # Initialize a counter for objects detected\n    results = model.track(im0, persist=True)\n    annotator = Annotator(im0, line_width=3)\n\n    if results[0].boxes.id is not None and results[0].masks is not None:\n        masks = results[0].masks.xy\n        track_ids = results[0].boxes.id.int().cpu().tolist()\n        clss = results[0].boxes.cls.cpu().tolist()\n        boxes = results[0].boxes.xyxy.cpu()\n        for mask, box, cls, t_id in zip(masks, boxes, clss, track_ids):\n            if mask.size>0:\n                object_counts[model.names[int(cls)]] += 1\n                color = colors(t_id, True)\n                mask_img = im0.copy()\n                cv2.fillPoly(mask_img, [mask.astype(int)], color)\n                cv2.addWeighted(mask_img, 0.7, im0, 1 - 0.7, 0, im0)\n                annotator.seg_bbox(mask=mask, mask_color=color,\n                                   label=str(model.names[int(cls)]),\n                                   txt_color=annotator.get_txt_color(color))\n\n    vwriter.write(im0)\n    cv2.imshow(\"Ultralytics FastSAM\", im0)\n\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\nvwriter.release()\ncap.release()\ncv2.destroyAllWindows()\n
    Fig-1: Image segmentation using YOLO11."},{"location":"usecases/items-segmentation-supermarket-ai/#step-3-count-segmented-objects","title":"Step 3: Count Segmented Objects","text":"

Now that the object masks have been drawn, we can count the objects.

    import cv2\nfrom ultralytics import YOLO\nfrom ultralytics.utils.plotting import Annotator, colors\nfrom collections import Counter\n\nmodel = YOLO(model=\"path/to/model/file.pt\")\ncap = cv2.VideoCapture(\"path/to/video/file.mp4\")\nw, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,\n                                       cv2.CAP_PROP_FRAME_HEIGHT,\n                                       cv2.CAP_PROP_FPS))\n\nvwriter = cv2.VideoWriter(\"instance-segmentation4.avi\",\n                          cv2.VideoWriter_fourcc(*\"MJPG\"), fps, (w, h))\nwhile True:\n    ret, im0 = cap.read()\n    if not ret:\n        break\n\n    object_counts = Counter()  # Initialize a counter for objects detected\n    results = model.track(im0, persist=True)\n    annotator = Annotator(im0, line_width=3)\n\n    if results[0].boxes.id is not None and results[0].masks is not None:\n        masks = results[0].masks.xy\n        track_ids = results[0].boxes.id.int().cpu().tolist()\n        clss = results[0].boxes.cls.cpu().tolist()\n        boxes = results[0].boxes.xyxy.cpu()\n        for mask, box, cls, t_id in zip(masks, boxes, clss, track_ids):\n            if mask.size>0:\n                object_counts[model.names[int(cls)]] += 1\n                color = colors(t_id, True)\n                mask_img = im0.copy()\n                cv2.fillPoly(mask_img, [mask.astype(int)], color)\n                cv2.addWeighted(mask_img, 0.7, im0, 1 - 0.7, 0, im0)\n                annotator.seg_bbox(mask=mask, mask_color=color,\n                                   label=str(model.names[int(cls)]),\n                                   txt_color=annotator.get_txt_color(color))\n\n    # Display total counts in the top-right corner\n    x, y = im0.shape[1] - 200, 30\n    margin = 10\n\n    for i, (label, count) in enumerate(object_counts.items()):\n        text = f\"{label}={count}\"\n        font_scale = 1.4\n        font_thickness = 4\n        padding = 15\n        text_size, _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX,\n                                       font_scale, font_thickness)\n        rect_x2 = im0.shape[1] - 10\n        rect_x1 = rect_x2 - (text_size[0] + padding * 2)\n\n        y_position = y + i * (text_size[1] + padding * 2 + 10)\n        if y_position + text_size[1] + padding * 2 > im0.shape[0]:\n            break\n        rect_y1 = y_position\n        rect_y2 = rect_y1 + text_size[1] + padding * 2\n        cv2.rectangle(im0, (rect_x1, rect_y1), (rect_x2, rect_y2),\n                      (255, 255, 255), -1)\n        text_x = rect_x1 + padding\n        text_y = rect_y1 + padding + text_size[1]\n        cv2.putText(im0, text, (text_x, text_y), cv2.FONT_HERSHEY_SIMPLEX,\n                    font_scale, (104, 31, 17), font_thickness)\n\n    vwriter.write(im0)\n    cv2.imshow(\"Ultralytics FastSAM\", im0)\n\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\nvwriter.release()\ncap.release()\ncv2.destroyAllWindows()\n
    Fig-2: Object Counting in Shopping Trolley using YOLO11."},{"location":"usecases/items-segmentation-supermarket-ai/#applications-in-retail","title":"Applications in Retail","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#explore-more","title":"Explore More","text":"

    Transform your retail operations with YOLO11 today! \ud83d\ude80

    "},{"location":"usecases/segmentation-masks-detect-sam2/","title":"How to Generate Accurate Segmentation Masks Using Object Detection and SAM2 Model","text":"

    Segmentation masks are vital for precise object tracking and analysis, allowing pixel-level identification of objects. By leveraging a fine-tuned Ultralytics YOLO11 model alongside the Segment Anything 2 (SAM2) model, you can achieve unparalleled accuracy and flexibility in your workflows.

    Fig-1: Instance segmentation using Ultralytics YOLO11 and SAM2 model."},{"location":"usecases/segmentation-masks-detect-sam2/#hardware-and-software-setup-for-this-demo","title":"Hardware and Software Setup for This Demo","text":""},{"location":"usecases/segmentation-masks-detect-sam2/#how-to-generate-segmentation-masks","title":"How to Generate Segmentation Masks","text":""},{"location":"usecases/segmentation-masks-detect-sam2/#step-1-prepare-the-model","title":"Step 1: Prepare the Model","text":"

    Train or fine-tune a custom YOLO11 model, or use the Ultralytics Pretrained Models for object detection tasks.

    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-2-auto-annotation-with-sam2","title":"Step 2: Auto Annotation with SAM2","text":"

    Integrate the SAM2 model to convert bounding boxes into segmentation masks.

    # Install the necessary library\n# pip install ultralytics\n\nfrom ultralytics.data.annotator import auto_annotate\n\n# Automatically annotate images using YOLO and SAM2 models\nauto_annotate(data=\"Path/to/images/directory\",\n              det_model=\"yolo11n.pt\",\n              sam_model=\"sam2_b.pt\")\n
    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-3-generate-and-save-masks","title":"Step 3: Generate and Save Masks","text":"

    Run the script to save segmentation masks as .txt files in the images_auto_annotate_labels folder.

    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-4-visualize-the-results","title":"Step 4: Visualize the Results","text":"

    Use the following script to overlay segmentation masks on images.

    import os\nimport cv2\nimport numpy as np\nfrom ultralytics.utils.plotting import colors\n\n# Define folder paths\nimage_folder = \"images_directory\"   # Path to your images directory\nmask_folder = \"images_auto_annotate_labels\" # Annotation masks directory\noutput_folder = \"output_directory\"  # Path to save output images\n\nos.makedirs(output_folder, exist_ok=True)\n\n# Process each image\nfor image_file in os.listdir(image_folder):\n    image_path = os.path.join(image_folder, image_file)\n    mask_file = os.path.join(mask_folder, \n                             os.path.splitext(image_file)[0] + \".txt\")\n\n    img = cv2.imread(image_path)   # Load the image\n    height, width, _ = img.shape\n\n    with open(mask_file, \"r\") as f:  # Read the mask file\n        lines = f.readlines()\n\n    for line in lines:\n        data = line.strip().split()\n        color = colors(int(data[0]), True)\n\n        # Convert points to absolute coordinates\n        points = np.array([(float(data[i]) * width, float(data[i + 1])*height) \n                           for i in range(1, len(data), 2)], \n                           dtype=np.int32).reshape((-1, 1, 2))\n\n        overlay = img.copy()\n        cv2.fillPoly(overlay, [points], color=color)\n        alpha = 0.6\n        cv2.addWeighted(overlay, alpha, img, 1 - alpha, 0, img)\n        cv2.polylines(img, [points], isClosed=True, color=color, thickness=3)\n\n    # Save the output\n    output_path = os.path.join(output_folder, image_file)\n    cv2.imwrite(output_path, img)\n    print(f\"Processed {image_file} and saved to {output_path}\")\n\nprint(\"Processing complete.\")\n
    Fig-2: Visualization of instance segmentation results.

That's it! After completing Step 4, you'll have segmentation masks generated for your images and overlaid on them, ready for visual inspection.

    "},{"location":"usecases/segmentation-masks-detect-sam2/#real-world-applications","title":"Real-World Applications","text":" Fig-3: Applications of instance segmentation in various fields."},{"location":"usecases/segmentation-masks-detect-sam2/#explore-more","title":"Explore More","text":"

Start building your object segmentation workflow today! \ud83d\ude80

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Computer Vision Use Cases | Detection, Segmentation and More","text":"

    Watch a quick overview of our top computer vision projects in action \ud83d\ude80

    "},{"location":"#object-detection-advanced-applications","title":"Object Detection | Advanced Applications","text":"

    Object detection is a pivotal computer vision technique that identifies and locates objects in images or videos. It integrates classification and localization to recognize object types and mark positions using bounding boxes. Common applications include autonomous driving, surveillance, and industrial automation.
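As a minimal sketch (assuming the ultralytics package; the image path is a placeholder), running a pre-trained YOLO11 detector takes only a few lines:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # pre-trained detection weights
results = model("path/to/image.jpg")  # placeholder image path
for box, cls in zip(results[0].boxes.xyxy, results[0].boxes.cls):
    print(model.names[int(cls)], [round(v) for v in box.tolist()])  # class name and bounding box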

    Object Detection Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases","title":"Featured Use Cases:","text":"

    Explore key object detection projects we\u2019ve implemented, complete with technical insights:

    "},{"location":"#object-tracking-monitoring-movement","title":"Object Tracking | Monitoring Movement","text":"

    Object tracking monitors object movement across video frames. Starting with detection in the first frame, it tracks positions and interactions in subsequent frames. Common applications include surveillance, traffic monitoring, and sports analysis.
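A minimal tracking sketch (the video path is a placeholder) using the Ultralytics track API, which assigns persistent IDs to objects across frames:

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
# stream=True yields results frame by frame; track IDs persist across frames
for result in model.track(source="path/to/video.mp4", stream=True):
    if result.boxes.id is not None:
        print(result.boxes.id.int().tolist())  # track IDs present in this frame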

    Object Tracking Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_1","title":"Featured Use Cases:","text":"

    Explore our object tracking projects, showcasing technical depth and practical applications:

    "},{"location":"#pose-estimation-key-point-analysis","title":"Pose Estimation | Key Point Analysis","text":"

    Pose estimation predicts spatial positions of key points on objects or humans, enabling machines to interpret dynamics. This technique can be used in sports analysis, healthcare, and animation.
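For illustration (image path is a placeholder), keypoints can be read from a pre-trained YOLO11 pose model as follows:

from ultralytics import YOLO

model = YOLO("yolo11n-pose.pt")  # pre-trained pose weights
results = model("path/to/image.jpg")  # placeholder image path
print(results[0].keypoints.xy)  # per-person keypoint (x, y) coordinates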

    Pose Estimation Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_2","title":"Featured Use Cases:","text":"

    Uncover our pose estimation projects with practical applications:

    1. Dog Pose Estimation: \ud83d\udc3e Learn how to estimate dog poses using Ultralytics YOLO11, unlocking new possibilities in animal behavior analysis.
    "},{"location":"#object-counting-automation-at-scale","title":"Object Counting | Automation at Scale","text":"

    Object counting identifies and tallies objects in images or videos. Leveraging detection or segmentation techniques, it\u2019s widely used in industrial automation, inventory tracking, and crowd management.
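As a minimal sketch (image path is a placeholder), per-class counts can be tallied directly from detection results:

from collections import Counter

from ultralytics import YOLO

model = YOLO("yolo11n.pt")
results = model("path/to/image.jpg")  # placeholder image path
counts = Counter(model.names[int(c)] for c in results[0].boxes.cls)
print(counts)  # e.g. Counter({'person': 4, 'car': 2})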

    Object Counting Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_3","title":"Featured Use Cases:","text":"

    Explore our object counting projects, complete with practical applications:

    "},{"location":"#image-segmentation-precise-pixel-level-analysis","title":"Image Segmentation | Precise Pixel-Level Analysis","text":"

    Image segmentation divides an image into meaningful regions to identify objects or areas of interest. Unlike object detection, it provides a precise outline of objects by labeling individual pixels. This technique is widely used in medical imaging, autonomous vehicles, and scene understanding.
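A minimal segmentation sketch (image path is a placeholder) using pre-trained YOLO11 segmentation weights:

from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # pre-trained segmentation weights
results = model("path/to/image.jpg")  # placeholder image path
if results[0].masks is not None:
    print(len(results[0].masks.xy), "instance masks found")  # one polygon per instance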

    Instance Segmentation Use Cases using Ultralytics YOLO"},{"location":"#featured-use-cases_4","title":"Featured Use Cases:","text":"

    Delve into our instance segmentation projects, featuring technical details and real-world applications:

    "},{"location":"#faq","title":"FAQ","text":""},{"location":"#what-makes-ultralytics-yolo-models-unique","title":"What makes Ultralytics YOLO models unique?","text":"

    Ultralytics YOLO models excel in real-time performance, high accuracy, and versatility across tasks like detection, tracking, segmentation, and counting. They are optimized for edge devices and seamlessly integrate into diverse workflows.

    "},{"location":"#how-does-the-tracking-module-enhance-object-detection","title":"How does the tracking module enhance object detection?","text":"

    The tracking module goes beyond detection by monitoring objects across video frames, providing trajectories and interactions. It's ideal for real-time applications like traffic monitoring, surveillance, and sports analysis.

    "},{"location":"#can-the-object-counting-implementation-handle-dynamic-environments","title":"Can the Object Counting implementation handle dynamic environments?","text":"

Yes. The Object Counting implementation is designed for dynamic settings such as conveyor belts and crowded scenes: it accurately detects and counts objects in real time, ensuring operational efficiency.

    "},{"location":"usecases/apple-counting/","title":"Apple Counting on Conveyor Belt using Ultralytics YOLO11","text":"

    Accurate counting of apples in agricultural setups plays a significant role in yield estimation, supply chain optimization, and resource planning. Leveraging computer vision and AI, we can automate this process with impressive accuracy and efficiency.

    Apple Counting on Conveyor Belt using Ultralytics YOLO11"},{"location":"usecases/apple-counting/#hardware-model-and-dataset-information","title":"Hardware, Model and Dataset Information","text":""},{"location":"usecases/apple-counting/#real-world-applications-for-fruit-counting","title":"Real-World Applications for Fruit Counting","text":""},{"location":"usecases/apple-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/bread-counting/","title":"Bread Counting on Conveyor Belt Using Ultralytics YOLO11","text":"

    Automating bread counting on conveyor belts enhances efficiency in bakeries, ensuring accurate packaging, minimizing wastage, and optimizing production workflows. Leveraging Ultralytics YOLO11 for object detection, this solution streamlines the bread production process with precision and speed.

    Bread Counting on Conveyor Belt using Ultralytics YOLO11"},{"location":"usecases/bread-counting/#hardware-model-and-dataset-information","title":"Hardware, Model, and Dataset Information","text":""},{"location":"usecases/bread-counting/#real-world-applications-for-bread-counting","title":"Real-World Applications for Bread Counting","text":""},{"location":"usecases/bread-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/crowd-density-estimation/","title":"Accurate Crowd Density Estimation Using Ultralytics YOLO11 \ud83c\udfaf","text":"

Discover how to utilize Ultralytics YOLO11 for accurate crowd density estimation. This guide walks you through a step-by-step implementation of a YOLO11-based system that measures and monitors crowd density in various environments, improving safety and event-management capabilities.

    "},{"location":"usecases/crowd-density-estimation/#system-specifications-used-for-this-implementation","title":"System Specifications Used for This Implementation","text":""},{"location":"usecases/crowd-density-estimation/#how-to-implement-crowd-density-estimation","title":"How to Implement Crowd Density Estimation","text":""},{"location":"usecases/crowd-density-estimation/#step-1-setup-and-model-initialization","title":"Step 1: Setup and Model Initialization","text":"

    To get started, the code utilizes a pre-trained YOLO11 model for person detection. This model is loaded into the CrowdDensityEstimation class, which is designed to track individuals in a crowd and estimate crowd density in real time.

    "},{"location":"usecases/crowd-density-estimation/#code-to-initialize-and-track-with-yolo11","title":"Code to Initialize and Track with YOLO11","text":"
import cv2\nfrom estimator import CrowdDensityEstimation\n\ndef main():\n    estimator = CrowdDensityEstimation()\n\n    # Open video capture (0 for webcam, or video file path)\n    cap = cv2.VideoCapture(\"path/to/video/file.mp4\")\n\n    # Get video properties for output\n    frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))\n    frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))\n    fps = int(cap.get(cv2.CAP_PROP_FPS))\n    fourcc = cv2.VideoWriter_fourcc(*'mp4v')\n    out = cv2.VideoWriter('path/to/video/output-file.mp4', fourcc, fps, (frame_width, frame_height))\n\n    while True:\n        ret, frame = cap.read()\n        if not ret:\n            break\n\n        # Process frame\n        processed_frame, density_info = estimator.process_frame(frame)\n\n        # Display output\n        estimator.display_output(processed_frame, density_info)\n\n        # Write output frame\n        out.write(processed_frame)\n\n        # Break loop on 'q' press\n        if cv2.waitKey(1) & 0xFF == ord('q'):\n            break\n\n    # Cleanup\n    cap.release()\n    out.release()\n    cv2.destroyAllWindows()\n\nif __name__ == \"__main__\":\n    main()\n

    This setup captures frames from a video source, processes them using YOLO11 to detect people, and calculates crowd density.

    "},{"location":"usecases/crowd-density-estimation/#step-2-real-time-crowd-detection-and-tracking","title":"Step 2: Real-Time Crowd Detection and Tracking","text":"

    The core of the implementation relies on tracking individuals in each frame using the YOLO11 model and estimating the crowd density. This is achieved through a series of steps, which include detecting people, calculating density, and classifying the crowd level.

    "},{"location":"usecases/crowd-density-estimation/#code-for-crowd-density-estimation","title":"Code for Crowd Density Estimation","text":"

    The main class CrowdDensityEstimation includes the following functionality:

    # Install ultralytics package\n# pip install ultralytics\n\nimport cv2\nimport numpy as np\nfrom ultralytics import YOLO\nfrom collections import defaultdict\n\nclass CrowdDensityEstimation:\n    def __init__(self, model_path='yolo11n.pt', conf_threshold=0.3):\n        self.model = YOLO(model_path)\n        self.conf_threshold = conf_threshold\n        self.track_history = defaultdict(lambda: [])\n        self.density_levels = {\n            'Low': (0, 0.2), # 0-0.2 persons/m\u00b2\n            'Medium': (0.2, 0.5), # 0.2-0.5 persons/m\u00b2\n            'High': (0.5, 0.8), # 0.5-0.8 persons/m\u00b2\n            'Very High': (0.8, float('inf')) # >0.8 persons/m\u00b2\n        }\n\n    def extract_tracks(self, im0):\n        results = self.model.track(im0, persist=True, conf=self.conf_threshold, classes=[0])\n        return results\n\n    def calculate_density(self, results, frame_area):\n        if not results or len(results) == 0:\n            return 0, 'Low', 0\n\n        person_count = len(results[0].boxes)\n        density_value = person_count / frame_area * 10000  \n\n        density_level = 'Low'\n        for level, (min_val, max_val) in self.density_levels.items():\n            if min_val <= density_value < max_val:\n                density_level = level\n                break\n\n        return density_value, density_level, person_count\n
    "},{"location":"usecases/crowd-density-estimation/#step-3-visualizing-density-and-results","title":"Step 3: Visualizing Density and Results","text":"

    Once density is calculated, the processed frame is annotated with information like density level, person count, and a tracking visualization. This enhances situational awareness by providing clear visual cues.

    "},{"location":"usecases/crowd-density-estimation/#displaying-density-information-on-video-frames","title":"Displaying Density Information on Video Frames","text":"
    def display_output(self, im0, density_info):\n    density_value, density_level, person_count = density_info\n\n    cv2.rectangle(im0, (0, 0), (350, 150), (0, 0, 0), -1)\n\n    cv2.putText(im0, f'Density Level: {density_level}', (10, 30),\n                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n    cv2.putText(im0, f'Person Count: {person_count}', (10, 70),\n                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n    cv2.putText(im0, f'Density Value: {density_value:.2f}', (10, 110),\n                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)\n\n    # Display the frame\n    cv2.imshow('Crowd Density Estimation', im0)\n
    "},{"location":"usecases/crowd-density-estimation/#applications-of-crowd-density-estimation","title":"Applications of Crowd Density Estimation","text":""},{"location":"usecases/crowd-density-estimation/#explore-more","title":"Explore More","text":"

    Unlock the potential of advanced crowd monitoring using YOLO11 and streamline operations for various sectors! \ud83d\ude80

    "},{"location":"usecases/items-counting/","title":"Item Counting in Trolleys for Smart Shopping Using Ultralytics YOLO11","text":"

    Efficiently counting items in shopping trolleys can transform the retail industry by automating checkout processes, minimizing errors, and enhancing customer convenience. By leveraging computer vision and AI, this solution enables real-time item detection and counting, ensuring accuracy and efficiency.

    Item Counting in Trolleys for Smart Shopping using Ultralytics YOLO11"},{"location":"usecases/items-counting/#hardware-model-and-dataset-information","title":"Hardware, Model, and Dataset Information","text":""},{"location":"usecases/items-counting/#real-world-applications-for-item-counting-in-retail","title":"Real-World Applications for Item Counting in Retail","text":""},{"location":"usecases/items-counting/#social-resources","title":"Social Resources","text":""},{"location":"usecases/items-segmentation-supermarket-ai/","title":"Revolutionizing Supermarkets: Items Segmentation and Counting with Ultralytics YOLO11 \u2764\ufe0f\u200d\ud83d\udd25","text":"

    Discover how to leverage the power of Ultralytics YOLO11 to achieve precise object segmentation and counting. In this guide, you'll learn step-by-step how to use YOLO11 to streamline processes, enhance accuracy, and unlock new possibilities in computer vision applications.

    "},{"location":"usecases/items-segmentation-supermarket-ai/#system-specifications-used-to-create-this-demo","title":"System Specifications Used to Create This Demo","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#how-to-perform-items-segmentation-and-counting","title":"How to Perform Items Segmentation and Counting","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#step-1-train-or-fine-tune-yolo11","title":"Step 1: Train or Fine-Tune YOLO11","text":"

To get started, you can train the YOLO11 model on a custom dataset tailored to your specific use case. However, if the pre-trained YOLO11 model already performs well for your application, there's no need for customization; you can use the pre-trained weights directly for faster, more efficient deployment. Explore the full details in the Ultralytics Documentation.
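For reference, here is a minimal fine-tuning sketch; the dataset YAML name and hyperparameters are placeholders, not values from this project:

from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # start from pre-trained segmentation weights
model.train(data="custom-dataset.yaml", epochs=100, imgsz=640)  # placeholder dataset config
metrics = model.val()  # validate the fine-tuned weights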

    "},{"location":"usecases/items-segmentation-supermarket-ai/#step-2-how-to-draw-the-segmentation-masks","title":"Step 2: How to draw the segmentation masks","text":"
    # Install ultralytics package\n# pip install ultralytics\n\nimport cv2\nfrom ultralytics import YOLO\nfrom ultralytics.utils.plotting import Annotator, colors\nfrom collections import Counter\n\nmodel = YOLO(model=\"path/to/model/file.pt\")\ncap = cv2.VideoCapture(\"path/to/video/file.mp4\")\nw, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,\n                                       cv2.CAP_PROP_FRAME_HEIGHT,\n                                       cv2.CAP_PROP_FPS))\n\nvwriter = cv2.VideoWriter(\"instance-segmentation4.avi\",\n                          cv2.VideoWriter_fourcc(*\"MJPG\"), fps, (w, h))\nwhile True:\n    ret, im0 = cap.read()\n    if not ret:\n        break\n\n    object_counts = Counter()  # Initialize a counter for objects detected\n    results = model.track(im0, persist=True)\n    annotator = Annotator(im0, line_width=3)\n\n    if results[0].boxes.id is not None and results[0].masks is not None:\n        masks = results[0].masks.xy\n        track_ids = results[0].boxes.id.int().cpu().tolist()\n        clss = results[0].boxes.cls.cpu().tolist()\n        boxes = results[0].boxes.xyxy.cpu()\n        for mask, box, cls, t_id in zip(masks, boxes, clss, track_ids):\n            if mask.size>0:\n                object_counts[model.names[int(cls)]] += 1\n                color = colors(t_id, True)\n                mask_img = im0.copy()\n                cv2.fillPoly(mask_img, [mask.astype(int)], color)\n                cv2.addWeighted(mask_img, 0.7, im0, 1 - 0.7, 0, im0)\n                annotator.seg_bbox(mask=mask, mask_color=color,\n                                   label=str(model.names[int(cls)]),\n                                   txt_color=annotator.get_txt_color(color))\n\n    vwriter.write(im0)\n    cv2.imshow(\"Ultralytics FastSAM\", im0)\n\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\nvwriter.release()\ncap.release()\ncv2.destroyAllWindows()\n
    Fig-1: Image segmentation using YOLO11."},{"location":"usecases/items-segmentation-supermarket-ai/#step-3-count-segmented-objects","title":"Step 3: Count Segmented Objects","text":"

Now that the object masks have been drawn, we can count the objects.

    import cv2\nfrom ultralytics import YOLO\nfrom ultralytics.utils.plotting import Annotator, colors\nfrom collections import Counter\n\nmodel = YOLO(model=\"path/to/model/file.pt\")\ncap = cv2.VideoCapture(\"path/to/video/file.mp4\")\nw, h, fps = (int(cap.get(x)) for x in (cv2.CAP_PROP_FRAME_WIDTH,\n                                       cv2.CAP_PROP_FRAME_HEIGHT,\n                                       cv2.CAP_PROP_FPS))\n\nvwriter = cv2.VideoWriter(\"instance-segmentation4.avi\",\n                          cv2.VideoWriter_fourcc(*\"MJPG\"), fps, (w, h))\nwhile True:\n    ret, im0 = cap.read()\n    if not ret:\n        break\n\n    object_counts = Counter()  # Initialize a counter for objects detected\n    results = model.track(im0, persist=True)\n    annotator = Annotator(im0, line_width=3)\n\n    if results[0].boxes.id is not None and results[0].masks is not None:\n        masks = results[0].masks.xy\n        track_ids = results[0].boxes.id.int().cpu().tolist()\n        clss = results[0].boxes.cls.cpu().tolist()\n        boxes = results[0].boxes.xyxy.cpu()\n        for mask, box, cls, t_id in zip(masks, boxes, clss, track_ids):\n            if mask.size>0:\n                object_counts[model.names[int(cls)]] += 1\n                color = colors(t_id, True)\n                mask_img = im0.copy()\n                cv2.fillPoly(mask_img, [mask.astype(int)], color)\n                cv2.addWeighted(mask_img, 0.7, im0, 1 - 0.7, 0, im0)\n                annotator.seg_bbox(mask=mask, mask_color=color,\n                                   label=str(model.names[int(cls)]),\n                                   txt_color=annotator.get_txt_color(color))\n\n    # Display total counts in the top-right corner\n    x, y = im0.shape[1] - 200, 30\n    margin = 10\n\n    for i, (label, count) in enumerate(object_counts.items()):\n        text = f\"{label}={count}\"\n        font_scale = 1.4\n        font_thickness = 4\n        padding = 15\n        text_size, _ = cv2.getTextSize(text, cv2.FONT_HERSHEY_SIMPLEX,\n                                       font_scale, font_thickness)\n        rect_x2 = im0.shape[1] - 10\n        rect_x1 = rect_x2 - (text_size[0] + padding * 2)\n\n        y_position = y + i * (text_size[1] + padding * 2 + 10)\n        if y_position + text_size[1] + padding * 2 > im0.shape[0]:\n            break\n        rect_y1 = y_position\n        rect_y2 = rect_y1 + text_size[1] + padding * 2\n        cv2.rectangle(im0, (rect_x1, rect_y1), (rect_x2, rect_y2),\n                      (255, 255, 255), -1)\n        text_x = rect_x1 + padding\n        text_y = rect_y1 + padding + text_size[1]\n        cv2.putText(im0, text, (text_x, text_y), cv2.FONT_HERSHEY_SIMPLEX,\n                    font_scale, (104, 31, 17), font_thickness)\n\n    vwriter.write(im0)\n    cv2.imshow(\"Ultralytics FastSAM\", im0)\n\n    if cv2.waitKey(1) & 0xFF == ord(\"q\"):\n        break\n\nvwriter.release()\ncap.release()\ncv2.destroyAllWindows()\n
    Fig-2: Object Counting in Shopping Trolley using YOLO11."},{"location":"usecases/items-segmentation-supermarket-ai/#applications-in-retail","title":"Applications in Retail","text":""},{"location":"usecases/items-segmentation-supermarket-ai/#explore-more","title":"Explore More","text":"

    Transform your retail operations with YOLO11 today! \ud83d\ude80

    "},{"location":"usecases/segmentation-masks-detect-sam2/","title":"How to Generate Accurate Segmentation Masks Using Object Detection and SAM2 Model","text":"

    Segmentation masks are vital for precise object tracking and analysis, allowing pixel-level identification of objects. By leveraging a fine-tuned Ultralytics YOLO11 model alongside the Segment Anything 2 (SAM2) model, you can achieve unparalleled accuracy and flexibility in your workflows.

    Fig-1: Instance segmentation using Ultralytics YOLO11 and SAM2 model."},{"location":"usecases/segmentation-masks-detect-sam2/#hardware-and-software-setup-for-this-demo","title":"Hardware and Software Setup for This Demo","text":""},{"location":"usecases/segmentation-masks-detect-sam2/#how-to-generate-segmentation-masks","title":"How to Generate Segmentation Masks","text":""},{"location":"usecases/segmentation-masks-detect-sam2/#step-1-prepare-the-model","title":"Step 1: Prepare the Model","text":"

    Train or fine-tune a custom YOLO11 model, or use the Ultralytics Pretrained Models for object detection tasks.

    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-2-auto-annotation-with-sam2","title":"Step 2: Auto Annotation with SAM2","text":"

    Integrate the SAM2 model to convert bounding boxes into segmentation masks.

    # Install the necessary library\n# pip install ultralytics\n\nfrom ultralytics.data.annotator import auto_annotate\n\n# Automatically annotate images using YOLO and SAM2 models\nauto_annotate(data=\"Path/to/images/directory\",\n              det_model=\"yolo11n.pt\",\n              sam_model=\"sam2_b.pt\")\n
    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-3-generate-and-save-masks","title":"Step 3: Generate and Save Masks","text":"

    Run the script to save segmentation masks as .txt files in the images_auto_annotate_labels folder.
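Each generated .txt file follows the standard Ultralytics segmentation label format: one object per line, a class index followed by normalized x y polygon pairs, for example (values illustrative only):

0 0.521 0.384 0.530 0.392 0.538 0.409 0.545 0.431

Real label lines typically contain many more coordinate pairs, one pair per polygon vertex; the Step 4 script below parses exactly this format.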

    "},{"location":"usecases/segmentation-masks-detect-sam2/#step-4-visualize-the-results","title":"Step 4: Visualize the Results","text":"

    Use the following script to overlay segmentation masks on images.

    import os\nimport cv2\nimport numpy as np\nfrom ultralytics.utils.plotting import colors\n\n# Define folder paths\nimage_folder = \"images_directory\"   # Path to your images directory\nmask_folder = \"images_auto_annotate_labels\" # Annotation masks directory\noutput_folder = \"output_directory\"  # Path to save output images\n\nos.makedirs(output_folder, exist_ok=True)\n\n# Process each image\nfor image_file in os.listdir(image_folder):\n    image_path = os.path.join(image_folder, image_file)\n    mask_file = os.path.join(mask_folder, \n                             os.path.splitext(image_file)[0] + \".txt\")\n\n    img = cv2.imread(image_path)   # Load the image\n    height, width, _ = img.shape\n\n    with open(mask_file, \"r\") as f:  # Read the mask file\n        lines = f.readlines()\n\n    for line in lines:\n        data = line.strip().split()\n        color = colors(int(data[0]), True)\n\n        # Convert points to absolute coordinates\n        points = np.array([(float(data[i]) * width, float(data[i + 1])*height) \n                           for i in range(1, len(data), 2)], \n                           dtype=np.int32).reshape((-1, 1, 2))\n\n        overlay = img.copy()\n        cv2.fillPoly(overlay, [points], color=color)\n        alpha = 0.6\n        cv2.addWeighted(overlay, alpha, img, 1 - alpha, 0, img)\n        cv2.polylines(img, [points], isClosed=True, color=color, thickness=3)\n\n    # Save the output\n    output_path = os.path.join(output_folder, image_file)\n    cv2.imwrite(output_path, img)\n    print(f\"Processed {image_file} and saved to {output_path}\")\n\nprint(\"Processing complete.\")\n
    Fig-2: Visualization of instance segmentation results.

That's it! After completing Step 4, you'll have segmentation masks generated for your images and overlaid on them, ready for visual inspection.

    "},{"location":"usecases/segmentation-masks-detect-sam2/#real-world-applications","title":"Real-World Applications","text":" Fig-3: Applications of instance segmentation in various fields."},{"location":"usecases/segmentation-masks-detect-sam2/#explore-more","title":"Explore More","text":"

Start building your object segmentation workflow today! \ud83d\ude80

    "}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index a70fbbc..5ac385b 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -12,6 +12,10 @@ https://visionusecases.com/usecases/bread-counting/ 2024-11-24 + + https://visionusecases.com/usecases/crowd-density-estimation/ + 2024-11-24 + https://visionusecases.com/usecases/items-counting/ 2024-11-24 diff --git a/sitemap.xml.gz b/sitemap.xml.gz index ed7b07acb6a97e54406bba29042d4d5ac6602a3b..a37d063ff312176b2952babec5298c62f0c20dd8 100644 GIT binary patch delta 261 zcmV+g0s8)d0+Rxe7JtQ4!EVDK488X&BJMyE^{`DzdfOMQAHax}q6Ubt({z77C+Vi` zxLxoCY(2gAuyJ#``?0t|LKj0TDpiUCf{nckLo3d2ukuNpxVyzbp6 L_s6X79s~dYI=6kj delta 243 zcmVV;=5QXU2gUq|8OJ$(AU{>R zY&Y&~U`9{xeF*6GS2tA`NN9uaStDvzfj42We(c%d^Mmi$UTwNLO$bzFvB#dBh~m4H z%d!Zx1Wrd3A&nBvDK{veLs^SArZ!csV({4AtDAyR#Rap?#cY=F8Np}-6ec*?$s+i! z*GMi5s;%2CZyMfiv*bB@1jw(1UuL};sbidgn=pIIr(cKRm_QFJGnC82f5Ct-I`E_^ t>p0F4lGEu7#I;=q>jzMN>1X5v5E$a9UHf;Y`;tGHx&htT`cset001kSb*caW diff --git a/usecases/apple-counting/index.html b/usecases/apple-counting/index.html index b5c1b60..92b83f3 100644 --- a/usecases/apple-counting/index.html +++ b/usecases/apple-counting/index.html @@ -358,6 +358,8 @@ + + @@ -581,6 +583,27 @@ + + + + + + +
diff --git a/usecases/bread-counting/index.html b/usecases/bread-counting/index.html
index 26bc5ee..2efa06b 100644

diff --git a/usecases/crowd-density-estimation/index.html b/usecases/crowd-density-estimation/index.html
new file mode 100644
index 0000000..600dd08
--- /dev/null
+++ b/usecases/crowd-density-estimation/index.html
@@ -0,0 +1,1133 @@

    Accurate Crowd Density Estimation Using Ultralytics YOLO11 🎯


Discover how to utilize Ultralytics YOLO11 for accurate crowd density estimation. This guide walks you through a step-by-step implementation of a YOLO11-based system that measures and monitors crowd density in various environments, improving safety and event-management capabilities.


    System Specifications Used for This Implementation

• CPU: Intel® Core™ i7-10700 @ 2.90 GHz for efficient processing.
• GPU: NVIDIA RTX 3060 for faster object detection.
• RAM & Storage: 32 GB RAM and a 512 GB SSD for optimal performance.
• Model: Pre-trained YOLO11 model for person detection.
• Dataset: Custom dataset covering various crowd scenarios, used to fine-tune YOLO11 performance.

    How to Implement Crowd Density Estimation


    Step 1: Setup and Model Initialization


    To get started, the code utilizes a pre-trained YOLO11 model for person detection. This model is loaded into the CrowdDensityEstimation class, which is designed to track individuals in a crowd and estimate crowd density in real time.


    Code to Initialize and Track with YOLO11

    import cv2
    +from estimator import CrowdDensityEstimation
    +
    +def main():
    +    estimator = CrowdDensityEstimation()  
    +
    +    # Open video capture (0 for webcam, or video file path)
    +    cap = cv2.VideoCapture("path/to/video/file.mp4")
    +
    +    # Get video properties for output
    +    frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    +    frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    +    fps = int(cap.get(cv2.CAP_PROP_FPS))
    +    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    +    out = cv2.VideoWriter('path/to/video/output-file.mp4', fourcc, fps, (frame_width, frame_height))
    +
    +    while True:
    +        ret, frame = cap.read()
    +        if not ret:
    +            break
    +
    +        # Process frame
    +        processed_frame, density_info = estimator.process_frame(frame)
    +
    +        # Display output
    +        estimator.display_output(processed_frame, density_info)
    +
    +        # Write output frame
    +        out.write(processed_frame)
    +
    +        # Break loop on 'q' press
    +        if cv2.waitKey(1) & 0xFF == ord('q'):
    +            break
    +
    +    # Cleanup
+    cap.release()
+    out.release()
+    cv2.destroyAllWindows()
    +
    +if __name__ == "__main__":
    +    main()

This setup captures frames from a video source, processes them with YOLO11 to detect people, and calculates crowd density. The process_frame method that ties these steps together is sketched at the end of Step 2.


    Step 2: Real-Time Crowd Detection and Tracking


    The core of the implementation relies on tracking individuals in each frame using the YOLO11 model and estimating the crowd density. This is achieved through a series of steps, which include detecting people, calculating density, and classifying the crowd level.


    Code for Crowd Density Estimation


    The main class CrowdDensityEstimation includes the following functionality:

• Person Detection: YOLO11 detects individuals in each frame.
• Density Calculation: Density is computed from the number of detected persons relative to the frame area.
• Tracking: A tracking history is kept for each detected person for visualization.
    # Install ultralytics package
    +# pip install ultralytics
    +
    +import cv2
    +import numpy as np
    +from ultralytics import YOLO
    +from collections import defaultdict
    +
    +class CrowdDensityEstimation:
    +    def __init__(self, model_path='yolo11n.pt', conf_threshold=0.3):
    +        self.model = YOLO(model_path)
    +        self.conf_threshold = conf_threshold
    +        self.track_history = defaultdict(lambda: [])
    +        self.density_levels = {
    +            'Low': (0, 0.2), # 0-0.2 persons/m²
    +            'Medium': (0.2, 0.5), # 0.2-0.5 persons/m²
    +            'High': (0.5, 0.8), # 0.5-0.8 persons/m²
    +            'Very High': (0.8, float('inf')) # >0.8 persons/m²
    +        }
    +
    +    def extract_tracks(self, im0):
    +        results = self.model.track(im0, persist=True, conf=self.conf_threshold, classes=[0])
    +        return results
    +
    +    def calculate_density(self, results, frame_area):
    +        if not results or len(results) == 0:
    +            return 0, 'Low', 0
    +
    +        person_count = len(results[0].boxes)
+        density_value = person_count / frame_area * 10000  # persons per 10,000 px; mapping to persons/m² assumes camera calibration
    +
    +        density_level = 'Low'
    +        for level, (min_val, max_val) in self.density_levels.items():
    +            if min_val <= density_value < max_val:
    +                density_level = level
    +                break
    +
    +        return density_value, density_level, person_count
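The main() function in Step 1 calls estimator.process_frame, which is not shown in the post. Below is a minimal sketch, assuming it simply chains extract_tracks and calculate_density and draws the detected person boxes; the annotation details are an assumption, not the author's exact code.

+    def process_frame(self, im0):
+        # Hypothetical sketch: chain detection/tracking and density estimation
+        results = self.extract_tracks(im0)
+        frame_area = im0.shape[0] * im0.shape[1]  # frame area in pixels
+        density_info = self.calculate_density(results, frame_area)
+
+        # Draw detected person boxes (annotation style assumed)
+        if results and results[0].boxes is not None:
+            for x1, y1, x2, y2 in results[0].boxes.xyxy.int().tolist():
+                cv2.rectangle(im0, (x1, y1), (x2, y2), (0, 255, 0), 2)
+
+        return im0, density_info

With this method in place, the frame returned to main() carries the box overlays, and density_info feeds display_output in Step 3.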

    Step 3: Visualizing Density and Results


    Once density is calculated, the processed frame is annotated with information like density level, person count, and a tracking visualization. This enhances situational awareness by providing clear visual cues.


    Displaying Density Information on Video Frames

# Method of CrowdDensityEstimation, shown here as an excerpt
def display_output(self, im0, density_info):
    +    density_value, density_level, person_count = density_info
    +
    +    cv2.rectangle(im0, (0, 0), (350, 150), (0, 0, 0), -1)
    +
    +    cv2.putText(im0, f'Density Level: {density_level}', (10, 30),
    +                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    +    cv2.putText(im0, f'Person Count: {person_count}', (10, 70),
    +                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    +    cv2.putText(im0, f'Density Value: {density_value:.2f}', (10, 110),
    +                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    +
    +    # Display the frame
    +    cv2.imshow('Crowd Density Estimation', im0)
    +

    Applications of Crowd Density Estimation

• Public Safety:
  • Early Warning System: Detecting unusual crowd formations.
  • Emergency Response: Identifying areas of high density for quick intervention.
• Event Management:
  • Capacity Monitoring: Real-time tracking of crowd sizes in venues.
  • Safety Compliance: Ensuring attendance stays within safe limits.
  • Flow Analysis: Understanding movement patterns for better event planning.
• Urban Planning:
  • Space Utilization: Analyzing how people use public spaces.
  • Infrastructure Planning: Designing facilities based on crowd patterns.

    Explore More


    Unlock the potential of advanced crowd monitoring using YOLO11 and streamline operations for various sectors! 🚀

\ No newline at end of file
diff --git a/usecases/items-counting/index.html b/usecases/items-counting/index.html
index 996ac91..2af19ce 100644
diff --git a/usecases/items-segmentation-supermarket-ai/index.html b/usecases/items-segmentation-supermarket-ai/index.html
index 1fc64d2..6b34112 100644
diff --git a/usecases/segmentation-masks-detect-sam2/index.html b/usecases/segmentation-masks-detect-sam2/index.html
index 3c75c50..4cd461e 100644