Distributed semantic processing in camera networks

Overview

In many applications, sensor networks can be used to monitor large geographical regions. This typically produces large quantities of data that need to be associated, summarized, and classified in order to arrive at semantically meaningful descriptions of the phenomena being monitored. The long-term guiding vision of this project is a distributed network that can perform this analysis autonomously, over long periods of time, and in a scalable way. As a concrete application, this research focuses on smart camera networks with nodes that are either static or part of robotic agents. The planned work will result in systems that are more efficient, accurate, and resilient. The algorithms developed will find wide application, including in security (continuously detecting suspicious individuals in real time) and the Internet of Things. As part of the broader impacts, the project will produce educational material to explain the scientific results of the project to a K-12 audience.

QuickMatch: Fast Multi-Image Matching via Density-Based Clustering

Illustration of the idea of finding multi-image correspondences by viewing each match as a cluster C_c (blue squares). This view automatically prevents inconsistent matches (red lines).

The first result of this project is an algorithm, QuickMatch, that performs consistent matching across multiple images. QuickMatch formulates the problem as a clustering problem (see figure) and then uses a modified density-based algorithm to separate the points into clusters that represent consistent matches across images.

In particular, with respect to previous work, QuickMatch: 1) represents a novel application of density-based clustering; 2) directly outputs consistent multi-image matches without explicit pre-processing (e.g., initial pairwise decisions) or post-processing (e.g., thresholding of a matrix); 3) is non-iterative, deterministic, and initialization-free; 4) produces better results in a small fraction of the time (it is up to 62 times faster in some benchmarks); 5) scales to large datasets that previous methods cannot handle (it has been tested with more than 20,000 features); 6) takes advantage of the distinctiveness of the descriptors, as done in traditional matching, to counteract the problem of repeated structures; 7) does not assume a one-to-one correspondence of features between images; 8) does not need a priori knowledge of the number of entities (i.e., clusters) present in the images. Code is available on the Software page.
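To make the clustering view above concrete, here is a minimal, hypothetical sketch of a density-based multi-image matcher in the same spirit (this is an illustration, not the actual QuickMatch implementation; the function name, the Gaussian kernel density, and the simple same-image link constraint are all assumptions made for the example). Each feature descriptor links to its nearest neighbor of higher density, links between features of the same image are disallowed, and the resulting trees are the multi-image matches.

```python
import numpy as np

def quickmatch_sketch(descriptors, image_ids, bandwidth=1.0, break_dist=2.0):
    """Toy density-based multi-image matching (illustrative only)."""
    X = np.asarray(descriptors, dtype=float)
    ids = np.asarray(image_ids)
    n = len(X)
    idx = np.arange(n)
    # Pairwise squared distances between all descriptors.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Gaussian kernel density estimate at each descriptor.
    density = np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)
    parent = idx.copy()
    for i in range(n):
        # "Higher" points: greater density, ties broken by index, so the
        # parent links form an acyclic forest.
        higher = (density > density[i]) | ((density == density[i]) & (idx > i))
        # Disallow links between features of the same image (a simplified
        # stand-in for match consistency) and links longer than break_dist.
        mask = higher & (ids != ids[i]) & (np.sqrt(d2[i]) < break_dist)
        if mask.any():
            cands = idx[mask]
            parent[i] = cands[np.argmin(d2[i, cands])]
    # Each tree of the parent forest is one multi-image match (cluster).
    def root(i):
        while parent[i] != i:
            i = parent[i]
        return int(i)
    return np.array([root(i) for i in range(n)])
```

For example, with two images that each contain one feature near the origin and one near (10, 10), the sketch groups the two nearby descriptors from different images into the same cluster while keeping the two distant ones apart.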

“NetMatch: the Game”, an educational board game

We developed an alpha version of a board game, called NetMatch, that provides a tangible and fun way to explain the main research challenges of the project. The game is for two to four players, whose goal is to move their pawns across a network (one hop at a time), starting from the edges, in order to match pawns with similar symbols. When all the pawns for a symbol are matched, a letter of a secret word is revealed. The player who discovers all the letters of their word is the winner.

To start playing, simply download, print, and cut the pieces from the PDF document.

The majority of the game’s components (board, cards, pawns) are procedurally generated, and the code is freely available, so it is easy to generate variations of the game. The code is available in a git repository (https://bitbucket.org/tronroberto/pythonnetmatchgame).
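As a flavor of what procedural generation of components could look like, here is a toy sketch (hypothetical; not the repository's actual code, and the function name and layout are invented for illustration) that assigns a set of pawns to each symbol and ties one letter of a secret word to each symbol:

```python
import random

def generate_components(symbols, secret_word, pawns_per_symbol=3, seed=None):
    """Toy generator for NetMatch-style pieces (illustrative only)."""
    if len(secret_word) != len(symbols):
        raise ValueError("need one letter of the secret word per symbol")
    rng = random.Random(seed)  # seeded for reproducible game variants
    # One pawn per (symbol, copy) pair, shuffled to randomize start positions.
    pawns = [(s, i) for s in symbols for i in range(pawns_per_symbol)]
    rng.shuffle(pawns)
    # Completing all pawns of a symbol reveals that symbol's letter.
    letters = dict(zip(symbols, secret_word))
    return pawns, letters

pawns, letters = generate_components(["circle", "star", "moon"], "cat", seed=7)
```

Because generation is driven by a seed, the same inputs always produce the same board setup, which makes variants easy to share and reprint.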

If you play the game, and have comments or suggestions, please email them to tron@bu.edu.

Publications and other resources

You can also follow the journey of one of the undergraduate students involved in the project, Brandon Sookraj, on his blog (https://sookrajrobotics.wordpress.com).

Funding and support

This project is supported by the National Science Foundation grant “III: Small: Distributed Semantic Information Processing Applied to Camera Sensor Networks” (Award number 1717656).

Disclaimer: Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.