
Multiple Object Tracking of Military Vehicles


Introduction

This is the code for my project “Multiple Object Tracking for Military Vehicles”, which is part of my BSc in Electrical Engineering at the Technion, Israel. The project was done under the supervision of Gabi Davidov, PhD. Thanks for his guidance and support during the whole process.

The project can be divided into roughly 3 parts:

  1. Create a Custom Dataset: A search online shows that no public dataset for military vehicle detection is available. Therefore, a custom dataset of around 4,000 images was collected and labeled for the project.

  2. Train an Object Detection Model: This project uses YOLOv5[3] for object detection. Since the popular training datasets (such as COCO and ImageNet) contain few images of military vehicles, training an object detector on the custom dataset is necessary.

  3. Combine with an Object Tracking Model: Given the per-frame detections (a bounding box and class for each object), the goal of the tracking phase is to associate objects across frames. For this purpose, the DeepSORT[1] algorithm was chosen, using a pre-trained PyTorch implementation[2].
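DeepSORT associates detections across frames using a combination of appearance features, Kalman-filter motion prediction, and bounding-box overlap. As a simplified illustration of the association idea only (not the project's actual implementation, which lives in `deep_sort_pytorch`), a greedy IoU-based matcher could look like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing track boxes to new detections by IoU.

    Returns a list of (track_index, detection_index) pairs; unmatched
    detections would typically spawn new tracks.
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True,
    )
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < iou_threshold:
            break  # all remaining pairs overlap even less
        if ti not in matched_t and di not in matched_d:
            matched_t.add(ti)
            matched_d.add(di)
            matches.append((ti, di))
    return matches
```

DeepSORT improves on this baseline by also comparing learned appearance embeddings, which keeps track identities stable through occlusions.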

Repository Structure

├─ src
│  ├─ deep_sort_pytorch
│  ├─ utils
│  │  ├─ common_images_dataset_downloader.ipynb
│  │  ├─ download_Udacity_self_driving_car_dataset.ipynb
│  │  ├─ feature_matching_LoFTR.ipynb
│  │  ├─ Google_images.ipynb
│  │  ├─ Google_images.py
│  │  ├─ super_resolution.ipynb
│  │  ├─ super_resolution.py
│  │  └─ README.md
│  ├─ data_utils.py
│  ├─ plot_utils.py
│  ├─ tracker.py
│  └─ video.py
├─ figures
├─ notebooks
│  ├─ Compare Detectors.ipynb
│  ├─ test.ipynb
│  └─ Train YOLOv5.ipynb
└─ README.md

Use the Object Tracker

Using the object tracking model is straightforward and should look similar to the following snippet:

import torch
from src.video import Video
from src.tracker import MultiObjectTracker
from src.plot_utils import plot_bounding_boxes

# initialize a video
video = Video(f'{test_videos_path}/{video_name}')

# initialize object detector
detector = torch.hub.load('ultralytics/yolov5', 'custom', weights_path).to(device)

# initialize object tracker
tracker = MultiObjectTracker(video, results_path, detector)

# iterate over the frames in the video
for frame, bounding_boxes in tracker:
    plot_bounding_boxes(frame, bounding_boxes)
    tracker.video_writer.write(frame)

# save the results
tracker.video_writer.release()
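The snippet above assumes that `test_videos_path`, `video_name`, `results_path`, `weights_path`, and `device` are defined beforehand. To show why `tracker` can be looped over directly, here is a hypothetical sketch of the iterator pattern a tracker wrapper like this typically uses; the class and names below are illustrative assumptions, not the repository's actual code:

```python
class IteratorTrackerSketch:
    """Hypothetical sketch: yields (frame, bounding_boxes) one frame at a time."""

    def __init__(self, frames, detect_fn):
        self.frames = frames        # any iterable of frames (e.g. decoded video)
        self.detect_fn = detect_fn  # frame -> list of (x1, y1, x2, y2, track_id)

    def __iter__(self):
        # Running detection lazily per frame keeps memory usage flat,
        # since only one frame is held at a time.
        for frame in self.frames:
            yield frame, self.detect_fn(frame)

# usage with dummy stand-ins for frames and a detector
frames = ['frame0', 'frame1']
tracker = IteratorTrackerSketch(frames, lambda f: [(0, 0, 10, 10, 1)])
results = [(frame, boxes) for frame, boxes in tracker]
```

In the real pipeline, each yielded frame is drawn on with `plot_bounding_boxes` and written out via the tracker's `video_writer`, as shown in the snippet above.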

It’s recommended to check out the example notebooks in the notebooks directory.


References