{"id":18,"date":"2014-09-07T20:14:22","date_gmt":"2014-09-08T01:14:22","guid":{"rendered":"https:\/\/sites.bu.edu\/data\/?page_id=18"},"modified":"2016-02-01T12:15:36","modified_gmt":"2016-02-01T17:15:36","slug":"projects","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/data\/research\/projects\/","title":{"rendered":"Vision &#038; Learning Projects"},"content":{"rendered":"<p>The goal of this effort is to develop statistical learning methods for applications arising from visual media. Fundamental issues such as visual ambiguity and spatial distortion arise in vision and must be accounted for when developing statistical learning methods. We are particularly interested in interactions that arise as a consequence of multiple views. These problems include zero-shot learning and person re-identification.<\/p>\n<table style=\"width: 100%; height: 100%;\" cellpadding=\"0\" cellspacing=\"50\">\n<colgroup>\n<col width=\"140\" \/>\n<col width=\"300\" \/><\/colgroup>\n<tbody>\n<tr style=\"width: 50px;\" align=\"left\" valign=\"top\">\n<td scope=\"row\" dir=\"ltr\" style=\"width: 200px; vertical-align: top;\" align=\"left\"><a href=\"\/data\/files\/2015\/10\/gc-ex2.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/10\/gc-ex2.png\" alt=\"gc-ex2\" class=\"alignnone  wp-image-490\" height=\"163\" width=\"236\" \/><\/a><a href=\"\/data\/files\/2015\/10\/gc-ex1.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/10\/gc-ex1.png\" alt=\"gc-ex1\" class=\"alignnone  wp-image-491\" height=\"168\" width=\"244\" \/><\/a><\/td>\n<td scope=\"colgroup\" dir=\"ltr\" style=\"width: 1px; vertical-align: top;\" align=\"left\">\n<p style=\"text-align: center;\"><strong><span style=\"color: #0000ff;\">Group Membership Prediction<\/span><\/strong><\/p>\n<p>Illustration of group membership prediction (GMP) in the visual recognition tasks of kinship verification and <a 
href=\"https:\/\/sites.bu.edu\/data\/reid\">person re-identification<\/a>. In kinship verification our task is to decide whether the facial images are from the same family. In person re-identification our task is to decide whether the four pedestrians are the same person. These images are borrowed from the VIPeR and Family101 datasets.<\/p>\n<\/td>\n<\/tr>\n<tr align=\"left\" valign=\"top\">\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\"><a href=\"\/data\/files\/2015\/10\/zsl_semantic.png\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/10\/zsl_semantic-636x284.png\" alt=\"zsl_semantic\" class=\"  wp-image-483 aligncenter\" height=\"192\" width=\"430\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/10\/zsl_semantic-636x284.png 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/10\/zsl_semantic-1024x457.png 1024w\" sizes=\"(max-width: 430px) 100vw, 430px\" \/><\/a><\/td>\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\">\n<p style=\"text-align: center;\"><strong><span style=\"color: #0000ff;\">Zero Shot Learning<\/span><\/strong><\/p>\n<div title=\"Page 1\">\n<div class=\"page\" title=\"Page 1\">\n<div class=\"section\">\n<div class=\"layoutArea\">\n<div class=\"column\">\n<p><span>In zero-shot learning we observe source- and target-domain data from a subset of classes. 
The goal at test time is to accurately predict the class label of an unobserved class.<\/span><span> Our method views each source or target instance as a mixture of seen-class proportions, and we postulate that the mixture patterns must be similar if two instances belong to the same unseen class.<\/span><\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/td>\n<\/tr>\n<tr align=\"left\" valign=\"top\">\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\"><a href=\"\/data\/files\/2015\/10\/zs_retrieval.jpg\"><img loading=\"lazy\" src=\"\/data\/files\/2015\/10\/zs_retrieval-636x371.jpg\" alt=\"zs_retrieval\" class=\"alignnone  wp-image-505\" height=\"276\" width=\"473\" srcset=\"https:\/\/sites.bu.edu\/data\/files\/2015\/10\/zs_retrieval-636x371.jpg 636w, https:\/\/sites.bu.edu\/data\/files\/2015\/10\/zs_retrieval-1024x598.jpg 1024w\" sizes=\"(max-width: 473px) 100vw, 473px\" \/><\/a>The graphical representation and retrieval of a \u201cmeeting\u201d event.<\/td>\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\">\n<p style=\"text-align: center;\"><a href=\"http:\/\/blogs.bu.edu\/yutingch\/exploratory-video-search\/\"><span style=\"text-decoration: underline; color: #0000ff;\">Zero Shot Search &amp; Retrieval<\/span><\/a><\/p>\n<p style=\"text-align: left;\">In the archival step, we ingest incoming data, extract attributes and relationships, and store them in hash tables. 
In the query-creation step, a user uses our GUI to create a query graph. We then calculate the maximally discriminative spanning tree from the query graph, retrieve matches to it, and assemble them into ranked search results for the user.<\/p>\n<\/td>\n<\/tr>\n<tr align=\"left\" valign=\"top\">\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\"><\/td>\n<td scope=\"colgroup\" dir=\"ltr\" style=\"vertical-align: top;\" align=\"left\">\n<p style=\"text-align: center;\"><a href=\"http:\/\/blogs.bu.edu\/yutingch\/video-anomaly-detection\/\">Anomalous Activity Detection<\/a><\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>The goal of this effort is to develop statistical learning methods for applications arising from visual media. Fundamental issues such as visual ambiguity and spatial distortion arise in vision and must be accounted for when developing statistical learning methods. 
We are particularly interested in interactions that arise as a [&hellip;]<\/p>\n","protected":false},"author":9227,"featured_media":0,"parent":410,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"page-templates\/no-sidebars.php","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/18"}],"collection":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/users\/9227"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/comments?post=18"}],"version-history":[{"count":44,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/18\/revisions"}],"predecessor-version":[{"id":536,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/18\/revisions\/536"}],"up":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/pages\/410"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/data\/wp-json\/wp\/v2\/media?parent=18"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}