Exploring AI-based Video Segmentation and Saliency Computation to Optimize Imagery-acquisition From Moving Vehicles

Overview

Mobile sensing offers efficient, cost-effective data collection procedures that have opened new research frontiers, particularly in urban sensing and transportation. In the past, because data collection was highly costly and time-consuming, only a limited number of urban indicators were measured and made available to researchers. Hence, our understanding of cities on many fronts was bounded by the ability to collect, record, manage, and store data. Recent advances in low-cost sensing devices, together with new techniques in computer vision and machine learning, have led to massive data sets collected by fleets of sensor-equipped vehicles moving through city streets.

Objectives

In this project, we propose to employ machine learning techniques to create adaptive sampling profiles and a data-driven, opportunistic approach to data acquisition from moving sensors. Our immediate goal is to drastically cut the cost of deploying video and image sensors, making them more practical. To this end, we plan to explore a novel research direction: detecting the salient frames in sensor-captured video data using computer vision and video segmentation algorithms. A data-driven machine learning approach will then be used to find the control features that improve sensor data acquisition and avoid wasting memory and storage resources. We plan to evaluate the proposed methods by demonstrating their effectiveness in a pedestrian mobility analysis: we provide a method to count pedestrians from a moving car instead of relying on conventional methods based on fixed sensors or human counters, which, due to their high cost, suffer from very limited spatial coverage.
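
To make the salient-frame idea concrete, the sketch below shows one way a capture pipeline could score incoming frames and keep only those that differ noticeably from the last retained frame. This is a minimal Python/OpenCV illustration under stated assumptions, not the project's actual method: the frame-differencing score, the diff_threshold and min_gap parameters, and the select_salient_frames name are placeholders standing in for the learned saliency and video segmentation models described above.

import cv2
import numpy as np

def select_salient_frames(video_path, diff_threshold=12.0, min_gap=5):
    """Return indices of frames to keep, using the mean absolute pixel
    difference from the last retained frame as a crude saliency proxy.
    Threshold and gap values are illustrative, not tuned parameters."""
    cap = cv2.VideoCapture(video_path)
    kept_indices = []
    prev_gray = None
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is None:
            # Always keep the first frame.
            kept_indices.append(idx)
            prev_gray = gray
        else:
            # Crude saliency score: mean absolute change since the last kept frame.
            score = float(np.mean(cv2.absdiff(gray, prev_gray)))
            if score > diff_threshold and idx - kept_indices[-1] >= min_gap:
                kept_indices.append(idx)
                prev_gray = gray
        idx += 1
    cap.release()
    return kept_indices

In the proposed system, such a fixed heuristic would give way to the learned, data-driven saliency and segmentation models, so that sampling adapts to scene content rather than to raw pixel change.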

Personnel

Claudio Silva

PRINCIPAL INVESTIGATOR

Kaan Ozbay

COLLABORATOR

Erik K. Tokuda

STUDENT RESEARCHER

Jianzhe Lin

STUDENT RESEARCHER

Maryam Hosseini

STUDENT RESEARCHER

Principal Investigator: Claudio Silva
Funding Source: C2SMART, NYU Tandon, and VIDA
Total Project Cost: $88,674
USDOT Award #: 69A3551747124
Implementation of Research Outcomes

We will work with our partner Carmera on transitioning the technology once we have demonstrated its accuracy and usefulness. The benchmark data will be provided by Carmera, and we will purchase two sets of sensing cameras (Vantrue N2 Pro, https://www.vantrue.net/Goods/detail/gid/29.html) to test the effectiveness of our proposed optimization procedures. The models/algorithms for the proposed salient frame detection method will be trained on our own machines.

Impacts/Benefits of Implementation

First, our proposed work has the potential to make large-scale sensing from moving vehicles drastically cheaper and more efficient. By using salient frame detection algorithms to allocate far fewer resources, we minimize the chance of losing relevant information while reducing the volume of collected data and the costs associated with data cleaning and maintenance. Second, our practical application revolves around pedestrian mobility, a pressing issue that, given current social distancing concerns, has become the subject of popular debate. By laying the foundations for estimating pedestrian volumes from moving cars, we provide a reliable method for large-scale estimation of pedestrian volume at a significantly lower cost.
