
Building an Advanced Object Tracking and Analytics System with Roboflow Supervision

In this advanced Roboflow Supervision tutorial, we build a complete object detection and tracking pipeline with the Supervision library. We start by setting up real-time tracking with ByteTrack, add detection smoothing, and define polygon zones to monitor specific regions of a video. As we process frames, we annotate bounding boxes, object IDs, and speed data, enabling us to track and analyze object behavior over time. Our goal is to show how we can combine detection, tracking, and zone-based analytics into a complete video intelligence solution.

!pip install supervision ultralytics opencv-python
!pip install --upgrade supervision 


import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO
import matplotlib.pyplot as plt
from collections import defaultdict


model = YOLO('yolov8n.pt')

We begin by installing the required packages, including supervision, ultralytics, and OpenCV. After confirming that we have the latest version of Supervision, we import all the required libraries. We then load the YOLOv8n model, which serves as the base detector in our pipeline.

try:
   tracker = sv.ByteTrack()
except AttributeError:
   try:
       tracker = sv.ByteTracker()
   except AttributeError:
       print("Using basic tracking - install latest supervision for advanced tracking")
       tracker = None


try:
   smoother = sv.DetectionsSmoother(length=5)
except AttributeError:
   smoother = None
   print("DetectionsSmoother not available in this version")


try:
   box_annotator = sv.BoundingBoxAnnotator(thickness=2)
   label_annotator = sv.LabelAnnotator()
   if hasattr(sv, 'TraceAnnotator'):
       trace_annotator = sv.TraceAnnotator(thickness=2, trace_length=30)
   else:
       trace_annotator = None
except AttributeError:
   try:
       box_annotator = sv.BoxAnnotator(thickness=2)
       label_annotator = sv.LabelAnnotator()
       trace_annotator = None
   except AttributeError:
       print("Using basic annotators - some features may be limited")
       box_annotator = None
       label_annotator = None 
       trace_annotator = None


def create_zones(frame_shape):
   h, w = frame_shape[:2]
  
   try:
       entry_zone = sv.PolygonZone(
           polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]]),
           frame_resolution_wh=(w, h)
       )
      
       exit_zone = sv.PolygonZone(
           polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]]),
           frame_resolution_wh=(w, h)
       )
   except TypeError:
       entry_zone = sv.PolygonZone(
           polygon=np.array([[0, h//3], [w//3, h//3], [w//3, 2*h//3], [0, 2*h//3]])
       )
       exit_zone = sv.PolygonZone(
           polygon=np.array([[2*w//3, h//3], [w, h//3], [w, 2*h//3], [2*w//3, 2*h//3]])
       )
  
   return entry_zone, exit_zone

We set up the core components of the library, including the ByteTrack tracker, optional detection smoothing via DetectionsSmoother, and annotators for boxes, labels, and traces. To stay compatible across Supervision versions, we use try/except blocks to fall back to alternative classes or basic functionality when needed. We also define polygon zones within the frame to monitor specific regions, such as entry and exit areas, enabling advanced spatial analytics.
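To make the zone logic concrete, here is a small standalone sketch of what a zone trigger does: it checks which detection centers fall inside a zone polygon. This is a simplified, hypothetical helper that only handles the axis-aligned rectangles used in `create_zones` above; the real `sv.PolygonZone` supports arbitrary polygons.

```python
import numpy as np

def in_rect_zone(polygon, centers):
    """Return a boolean mask: which box centers fall inside a rectangular zone.

    Simplified stand-in for sv.PolygonZone.trigger, valid only when the
    polygon is an axis-aligned rectangle (as in this tutorial's zones).
    """
    xs, ys = polygon[:, 0], polygon[:, 1]
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    return (
        (centers[:, 0] >= x_min) & (centers[:, 0] <= x_max)
        & (centers[:, 1] >= y_min) & (centers[:, 1] <= y_max)
    )

h, w = 480, 640
# Same entry-zone polygon as in create_zones: the left-middle third of the frame.
entry_polygon = np.array([[0, h // 3], [w // 3, h // 3],
                          [w // 3, 2 * h // 3], [0, 2 * h // 3]])

centers = np.array([[100, 240],   # inside the entry zone
                    [500, 240]])  # outside it
print(in_rect_zone(entry_polygon, centers))  # [ True False]
```

In the full pipeline, `entry_zone.trigger(detections)` performs the same membership test per frame, which is what drives the zone-crossing counts.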

class AdvancedAnalytics:
   def __init__(self):
       self.track_history = defaultdict(list)
       self.zone_crossings = {"entry": 0, "exit": 0}
       self.speed_data = defaultdict(list)
      
   def update_tracking(self, detections):
       if hasattr(detections, 'tracker_id') and detections.tracker_id is not None:
           for i in range(len(detections)):
               track_id = detections.tracker_id[i]
               if track_id is not None:
                   bbox = detections.xyxy[i]
                   center = np.array([(bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2])
                   self.track_history[track_id].append(center)
                  
                   if len(self.track_history[track_id]) >= 2:
                       prev_pos = self.track_history[track_id][-2]
                       curr_pos = self.track_history[track_id][-1]
                       speed = np.linalg.norm(curr_pos - prev_pos)
                       self.speed_data[track_id].append(speed)
  
   def get_statistics(self):
       total_tracks = len(self.track_history)
       avg_speed = np.mean([np.mean(speeds) for speeds in self.speed_data.values() if speeds])
       return {
           "total_objects": total_tracks,
           "zone_entries": self.zone_crossings["entry"],
           "zone_exits": self.zone_crossings["exit"],
           "avg_speed": avg_speed if not np.isnan(avg_speed) else 0
       }


def process_video(source=0, max_frames=300):
   """
   Process video source with advanced supervision features
   source: video path or 0 for webcam
   max_frames: limit processing for demo
   """
   cap = cv2.VideoCapture(source)
   analytics = AdvancedAnalytics()
  
   ret, frame = cap.read()
   if not ret:
       print("Failed to read video source")
       return
  
   entry_zone, exit_zone = create_zones(frame.shape)
  
   try:
       entry_zone_annotator = sv.PolygonZoneAnnotator(
           zone=entry_zone,
           color=sv.Color.GREEN,
           thickness=2
       )
       exit_zone_annotator = sv.PolygonZoneAnnotator(
           zone=exit_zone,
           color=sv.Color.RED,
           thickness=2
       )
   except (AttributeError, TypeError):
       entry_zone_annotator = sv.PolygonZoneAnnotator(zone=entry_zone)
       exit_zone_annotator = sv.PolygonZoneAnnotator(zone=exit_zone)
  
   frame_count = 0
   results_frames = []
  
   cap.set(cv2.CAP_PROP_POS_FRAMES, 0) 
  
   while ret and frame_count < max_frames:
       ret, frame = cap.read()
       if not ret:
           break
          
       results = model(frame, verbose=False)[0]
       detections = sv.Detections.from_ultralytics(results)
      
       detections = detections[detections.class_id == 0]
      
       if tracker is not None:
           detections = tracker.update_with_detections(detections)
      
       if smoother is not None:
           detections = smoother.update_with_detections(detections)
      
       analytics.update_tracking(detections)
      
       entry_zone.trigger(detections)
       exit_zone.trigger(detections)
      
       labels = []
       for i in range(len(detections)):
           confidence = detections.confidence[i] if detections.confidence is not None else 0.0
          
           if hasattr(detections, 'tracker_id') and detections.tracker_id is not None:
               track_id = detections.tracker_id[i]
               if track_id is not None:
                   speed = analytics.speed_data[track_id][-1] if analytics.speed_data[track_id] else 0
                   label = f"ID:{track_id} | Conf:{confidence:.2f} | Speed:{speed:.1f}"
               else:
                   label = f"Conf:{confidence:.2f}"
           else:
               label = f"Conf:{confidence:.2f}"
           labels.append(label)
      
       annotated_frame = frame.copy()
      
       annotated_frame = entry_zone_annotator.annotate(annotated_frame)
       annotated_frame = exit_zone_annotator.annotate(annotated_frame)
      
       if trace_annotator is not None:
           annotated_frame = trace_annotator.annotate(annotated_frame, detections)
      
       if box_annotator is not None:
           annotated_frame = box_annotator.annotate(annotated_frame, detections)
       else:
           for i in range(len(detections)):
               bbox = detections.xyxy[i].astype(int)
               cv2.rectangle(annotated_frame, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (0, 255, 0), 2)
      
       if label_annotator is not None:
           annotated_frame = label_annotator.annotate(annotated_frame, detections, labels)
       else:
           for i, label in enumerate(labels):
               if i < len(detections):
                   bbox = detections.xyxy[i].astype(int)
                   cv2.putText(annotated_frame, label, (bbox[0], bbox[1]-10),
                              cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 2)
      
       stats = analytics.get_statistics()
       y_offset = 30
       for key, value in stats.items():
           text = f"{key.replace('_', ' ').title()}: {value:.1f}"
           cv2.putText(annotated_frame, text, (10, y_offset),
                      cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
           y_offset += 30
      
       if frame_count % 30 == 0:
           results_frames.append(cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB))
      
       frame_count += 1
      
       if frame_count % 50 == 0:
           print(f"Processed {frame_count} frames...")
  
   cap.release()
  
   if results_frames:
       fig, axes = plt.subplots(2, 2, figsize=(15, 10))
       axes = axes.flatten()
      
       for i, (ax, frame) in enumerate(zip(axes, results_frames[:4])):
           ax.imshow(frame)
           ax.set_title(f"Frame {i*30}")
           ax.axis('off')
      
       plt.tight_layout()
       plt.show()
  
   final_stats = analytics.get_statistics()
    print("\n=== FINAL ANALYTICS ===")
   for key, value in final_stats.items():
       print(f"{key.replace('_', ' ').title()}: {value:.2f}")
  
   return analytics


print("Starting advanced supervision demo...")
print("Features: Object detection, tracking, zones, speed analysis, smoothing")

We define the AdvancedAnalytics class to track object movement, compute speeds, and count zone crossings, enabling rich real-time video insights. Inside the process_video function, we read each frame from the video source and run it through our detection, tracking, and smoothing pipeline. We annotate frames with bounding boxes, labels, zone overlays, and live statistics, giving us a powerful, customizable object tracking and analytics view. Throughout the loop we also collect sample frames for visualization and print the final statistics at the end of the run.
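The speed logic inside AdvancedAnalytics can be isolated into a few lines: speed is simply the Euclidean distance, in pixels, between consecutive box centers of the same track. A minimal standalone sketch (the `update` helper here is illustrative, not part of the tutorial's class):

```python
import numpy as np
from collections import defaultdict

track_history = defaultdict(list)

def update(track_id, bbox):
    """Append the bbox center for a track and return the latest per-frame speed."""
    x1, y1, x2, y2 = bbox
    center = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
    track_history[track_id].append(center)
    if len(track_history[track_id]) < 2:
        return 0.0  # no previous position yet
    prev, curr = track_history[track_id][-2], track_history[track_id][-1]
    return float(np.linalg.norm(curr - prev))

# A box moving 3 px right and 4 px down per frame -> speed 5 px/frame.
print(update(1, (0, 0, 10, 10)))    # 0.0 (first observation)
print(update(1, (3, 4, 13, 14)))    # 5.0
```

This is exactly the per-frame displacement that ends up in the `Speed:` field of the on-screen labels.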

def create_demo_video():
   """Create a simple demo video with moving objects"""
   fourcc = cv2.VideoWriter_fourcc(*'mp4v')
   out = cv2.VideoWriter('demo.mp4', fourcc, 20.0, (640, 480))
  
   for i in range(100):
       frame = np.zeros((480, 640, 3), dtype=np.uint8)
      
       x1 = int(50 + i * 2)
       y1 = 200
       x2 = int(100 + i * 1.5)
       y2 = 250
      
       cv2.rectangle(frame, (x1, y1), (x1+50, y1+50), (0, 255, 0), -1)
       cv2.rectangle(frame, (x2, y2), (x2+50, y2+50), (255, 0, 0), -1)
      
       out.write(frame)
  
   out.release()
   return 'demo.mp4'


demo_video = create_demo_video()
analytics = process_video(demo_video, max_frames=100)


print("\nTutorial completed! Key features demonstrated:")
print("✓ YOLO integration with Supervision")
print("✓ Multi-object tracking with ByteTracker")
print("✓ Detection smoothing")
print("✓ Polygon zones for area monitoring")
print("✓ Advanced annotations (boxes, labels, traces)")
print("✓ Real-time analytics and statistics")
print("✓ Speed calculation and tracking history")

To test the full pipeline, we generate a demo video with moving rectangles that imitate tracked objects. This lets us validate detection, tracking, zone monitoring, and speed analysis without requiring real camera input. We then run process_video on the generated clip. Finally, we print a summary of all the key features we have used, reflecting the power of Roboflow Supervision for real-time analytics.
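Note that the speeds reported by the pipeline are in pixels per frame. If the demo's frame rate and an approximate scene scale are known, converting to real-world units is a one-line formula; the frame rate below matches the 20 fps demo video, while the pixels-per-metre factor is a hypothetical calibration value you would measure for your own scene.

```python
def pixels_per_frame_to_mps(speed_px, fps=20.0, px_per_meter=50.0):
    """Convert per-frame pixel displacement to metres per second.

    fps matches the demo video's writer; px_per_meter is an assumed
    calibration factor and must be measured for a real camera setup.
    """
    return speed_px * fps / px_per_meter

# e.g. 5 px/frame at 20 fps with 50 px/m -> 2.0 m/s
print(pixels_per_frame_to_mps(5.0))  # 2.0
```

This kind of conversion is a natural next step if the analytics are meant to report physically meaningful speeds rather than raw pixel motion.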

In conclusion, we have implemented a complete pipeline that combines object detection, tracking, zone monitoring, and real-time analytics. We have shown how to visualize key insights such as speed, zone crossings, and tracking history directly on annotated video frames. This setup lets us go beyond basic detection and build toward a full surveillance or analytics system using open-source tools. Whether for research or production, we now have a solid foundation on which to add more advanced capabilities.




