{"id":254,"date":"2015-10-14T15:54:00","date_gmt":"2015-10-14T19:54:00","guid":{"rendered":"https:\/\/sites.bu.edu\/data\/?page_id=254"},"modified":"2015-10-14T15:54:00","modified_gmt":"2015-10-14T19:54:00","slug":"reid","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/data\/reid\/","title":{"rendered":"Person Re-identification"},"content":{"rendered":"<p>Person re-identification (re-id) is an emerging problem in visual surveillance that deals with maintaining the identities of individuals as they traverse various locations across a camera network. From a visual perspective, re-id is challenging due to significant changes in the visual appearance of individuals across cameras with different pose, illumination and calibration.<\/p>\n<p>Researchers approach these difficulties by designing distinctive view-invariant person representations and by learning effective distance\/similarity metrics. Along these two lines, we propose two algorithms: one designs a novel appearance model that takes into account visual pattern co-occurrence across different views; the other formulates the problem in a global structured matching setting.<\/p>\n<h2>Person Re-identification with Visual Word Co-occurrence Model<\/h2>\n<p><strong>Downloads:<\/strong> [<a href=\"http:\/\/arxiv.org\/abs\/1410.6532\">Paper<\/a>]<\/p>\n<p><strong>Summary:<\/strong> We propose a novel visual word co-occurrence model to deal with appearance variations across different views. We first map each pixel of an image to a visual word using a codebook learned in an unsupervised manner. The appearance transformation between camera views is then encoded by a co-occurrence matrix of visual word joint distributions in probe and gallery images. Our appearance model naturally accounts for spatial similarities and for variations caused by pose, illumination and configuration changes across camera views. 
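<\/p>\n<p><em>For illustration only, the pixel-to-codeword mapping and co-occurrence descriptor described above can be sketched as follows. This is a much-simplified sketch, not the authors' code: the paper additionally pools co-occurrences over spatial neighborhoods, and the function and variable names here are hypothetical.<\/em><\/p>

```python
import numpy as np

def assign_visual_words(pixels, codebook):
    """Map each pixel feature to the index of its nearest codebook entry.

    pixels:   (n, d) array of per-pixel feature vectors
    codebook: (k, d) array of codewords (e.g. from unsupervised k-means)
    """
    dists = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

def cooccurrence_descriptor(words_probe, words_gallery, num_words):
    """Joint distribution of codewords between a probe and a gallery image.

    Counts how often codeword pairs co-occur at corresponding positions
    and normalizes the counts; the flattened matrix is the descriptor
    a linear classifier can then be trained on.
    """
    cooc = np.zeros((num_words, num_words))
    for wp, wg in zip(words_probe, words_gallery):
        cooc[wp, wg] += 1
    cooc /= max(cooc.sum(), 1)  # normalize to a joint distribution
    return cooc.ravel()
```

<p>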
Linear SVMs are then trained as classifiers on these co-occurrence descriptors. On the VIPeR and CUHK Campus benchmark datasets, our method achieves 83.86% and 85.49% rank-15 accuracy on the Cumulative Match Characteristic (CMC) curves, beating the previous state-of-the-art results by 10.44% and 22.27%, respectively.<\/p>\n<p><strong>Illustration:<\/strong><\/p>\n<p><a href=\"\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence-636x249.png\" alt=\"20150301_PersonReID_Co-occurrence\" class=\"alignnone size-medium wp-image-306\" height=\"249\" width=\"636\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence-636x249.png 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence.png 967w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/a>Illustration of codeword co-occurrence in positive image pairs (i.e., the two images from different camera views in each column belong to the same person) and negative image pairs (i.e., the two images in each column belong to different persons). 
For positive (or negative) pairs, in each row the enclosed regions are assigned the same codeword.<\/p>\n<p><strong>Results:<\/strong><\/p>\n<p><a href=\"\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence_results.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence_results-636x233.png\" alt=\"20150301_PersonReID_Co-occurrence_results\" class=\"alignnone size-medium wp-image-309\" height=\"233\" width=\"636\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence_results-636x233.png 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence_results-1024x376.png 1024w, https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_Co-occurrence_results.png 1043w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/a><\/p>\n<h2>Person Re-identification via Structured Matching<\/h2>\n<p><strong>Downloads:<\/strong> [<a href=\"http:\/\/arxiv.org\/abs\/1406.4444\">Paper<\/a>]<\/p>\n<p><strong>Summary:<\/strong> From a visual perspective, re-id is challenging due to significant changes in the visual appearance of individuals across cameras with different pose, illumination and calibration. Globally, the challenge arises from the need to maintain consistent matches among all the individual entities across different camera views. We propose PRISM, a structured matching method that jointly accounts for these challenges. We view the global problem as a weighted graph matching problem and learn the edge weights (pairwise similarity scores) from the co-occurrence of visual patterns in the training examples. These co-occurrence-based scores in turn account for appearance changes by inferring likely and unlikely visual co-occurrences from the training instances. We evaluate PRISM in both single-shot and multi-shot scenarios. 
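<\/p>\n<p><em>As a toy illustration of the structured matching idea (not the paper's algorithm: PRISM learns the edge weights and handles non-bijective cases, whereas this hypothetical sketch assumes a square similarity matrix and brute-forces the assignment):<\/em><\/p>

```python
from itertools import permutations
import numpy as np

def match_entities(similarity):
    """Globally consistent one-to-one matching of probe to gallery entities.

    similarity: square (n, n) matrix of pairwise similarity scores, i.e.
    the edge weights of the bipartite graph. Exhaustively searches all
    assignments and returns the one maximizing the total score, so each
    entity is matched at most once -- the global consistency constraint.
    Only feasible for tiny n; real solvers use e.g. the Hungarian algorithm.
    """
    n = similarity.shape[0]
    best = max(permutations(range(n)),
               key=lambda p: sum(similarity[i, p[i]] for i in range(n)))
    return list(enumerate(best))
```

<p>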
PRISM uniformly outperforms the state of the art by as much as 10%-30% in matching rate while remaining computationally efficient.<\/p>\n<p><strong>Illustration:<\/strong><\/p>\n<p><a href=\"\/data\/files\/2015\/03\/20150301_PersonReID.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/03\/20150301_PersonReID-636x306.png\" alt=\"20150301_PersonReID\" class=\"alignnone size-medium wp-image-310\" height=\"306\" width=\"636\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID-636x306.png 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID.png 717w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/a>An overview of our method, PRISM, consisting of two levels, where (a) entity-level structured matching is imposed on top of (b) image-level visual word deformable matching. In (a), each color represents an entity; the example illustrates the general situations encountered in real-world re-id, including single-shot, multi-shot, and no matches. 
In (b), the idea of visual word co-occurrence for measuring image similarity is illustrated probabilistically, where l<sub>1<\/sub>, l<sub>2<\/sub> denote the person entities, u<sub>1<\/sub>, u<sub>2<\/sub>, v<sub>1<\/sub>, v<sub>2<\/sub> denote different visual words, and h<sub>1<\/sub>, h<sub>2<\/sub> denote two locations.<\/p>\n<p><strong>Results:<\/strong><\/p>\n<p><a href=\"\/data\/files\/2015\/03\/20150301_PersonReID_structured_results.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/03\/20150301_PersonReID_structured_results-636x204.png\" alt=\"20150301_PersonReID_structured_results\" class=\"alignnone size-medium wp-image-308\" height=\"204\" width=\"636\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_structured_results-636x204.png 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/03\/20150301_PersonReID_structured_results.png 966w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Person re-identification (re-id) is an emerging problem in visual surveillance that deals with maintaining the identities of individuals as they traverse various locations across a camera network. From a visual perspective, re-id is challenging due to significant changes in the visual appearance of individuals across cameras with different pose, illumination and calibration. 
Researchers approach these difficulties by designing distinctive view-invariant [&hellip;]<\/p>\n","protected":false},"author":9349,"featured_media":0,"parent":0,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/254"}],"collection":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/users\/9349"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/comments?post=254"}],"version-history":[{"count":14,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/254\/revisions"}],"predecessor-version":[{"id":488,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/254\/revisions\/488"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/media?parent=254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}