{"id":19,"date":"2016-09-19T17:38:28","date_gmt":"2016-09-19T21:38:28","guid":{"rendered":"https:\/\/sites.bu.edu\/lavalab\/?page_id=19"},"modified":"2019-08-02T13:52:59","modified_gmt":"2019-08-02T17:52:59","slug":"projects","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/lavalab\/research\/projects\/","title":{"rendered":"Current Projects"},"content":{"rendered":"<h4><strong>What does joint attention look like in interactions between deaf children and their parents?<\/strong><\/h4>\n<table width=\"100%\">\n<tbody>\n<tr>\n<td width=\"99%\"><strong>Joint attention refers to moments within an interaction when children and their parent or caregiver are both attending to the same thing at the same time. In spoken language, joint attention occurs when parents and children are looking at an object and parents label or talk about it. How does this multi-modal process adapt when all input (language and information about the world) is perceived visually? We examine interactions between deaf children and their deaf and hearing caregivers to see how joint visual attention is achieved. We do this by recording parents and children during naturalistic play, and then coding these interactions to understand how eye gaze, object handling, and language input are coordinated.<\/strong>\n<\/td>\n<td width=\"1%\"><div style=\"width: 500px;\" class=\"wp-video\"><!--[if lt IE 9]><script>document.createElement('video');<\/script><![endif]-->\n<video class=\"wp-video-shortcode\" id=\"video-19-1\" width=\"500\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"\/lavalab\/files\/2019\/08\/Joint-Attention-3.0-1.mp4?_=1\" \/><a href=\"\/lavalab\/files\/2019\/08\/Joint-Attention-3.0-1.mp4\">\/lavalab\/files\/2019\/08\/Joint-Attention-3.0-1.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"bu-slideshow-container parent-&amp;-child-interaction. autoplay\" id=\"bu-slideshow-container-1715\" data-slideshow-name=\"parent-&-child-interaction.\" data-slideshow-delay=\"5000\" style=\"width: auto; \"><div class='slideshow-loader active'><div class='loader-animation'><\/div><p>loading slideshow...<\/p><\/div><div class=\"bu-slideshow-slides\"><ul class=\"bu-slideshow transition-slide\" id=\"bu-slideshow-1715\"><li id=\"bu-slideshow-1715_0\" class=\"slide \"><div class=\"bu-slide-container slide-caption-bottom-right\"><img src=\"\/lavalab\/files\/2019\/08\/Screen-Shot-2019-08-02-at-12.09.13-PM-768x496.png\" alt=\"\" \/><\/div><\/li><li id=\"bu-slideshow-1715_1\" class=\"slide \"><div class=\"bu-slide-container slide-caption-bottom-right\"><img src=\"\/lavalab\/files\/2019\/08\/Screen-Shot-2019-08-02-at-12.06.42-PM-768x475.png\" alt=\"\" \/><\/div><\/li><\/ul><\/div><div class=\"bu-slideshow-navigation-container\"><ul class=\"bu-slideshow-navigation nav-icon\" id=\"bu-slideshow-nav-1715\" aria-hidden=\"true\"><li><a href=\"#\" id=\"pager-1\" class=\" active\" aria-hidden=\"true\"><span>1<\/span><\/a><\/li> <li><a href=\"#\" id=\"pager-2\" class=\"\" aria-hidden=\"true\"><span>2<\/span><\/a><\/li> <\/ul><\/div><\/div>\n<p>&nbsp;<\/p>\n<h4><strong>How do deaf children learn new ASL signs?<\/strong><\/h4>\n<table width=\"100%\">\n<tbody>\n<tr>\n<td width=\"99%\"><strong>Early childhood is a time of rapid word learning. How do children map new labels to new objects? We study the process of word learning in ASL. In particular, we investigate how deaf children learn to manage their eye gaze and visual attention so that they can connect language and objects. Our studies use eye-tracking technology, which allows us to monitor children&#8217;s gaze as they perceive signs, pictures, and videos on a computer.
We also record parents and children interacting with novel objects to determine how new labels are introduced during naturalistic play.<\/strong>\n<\/td>\n<td width=\"1%\"><div style=\"width: 500px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-19-2\" width=\"500\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"\/lavalab\/files\/2019\/08\/Deaf-children-New-ASL-3.1.mp4?_=2\" \/><a href=\"\/lavalab\/files\/2019\/08\/Deaf-children-New-ASL-3.1.mp4\">\/lavalab\/files\/2019\/08\/Deaf-children-New-ASL-3.1.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>&nbsp;<\/p>\n<h4><strong>How do we know if deaf children are reaching their language milestones?<\/strong><\/h4>\n<table width=\"100%\">\n<tbody>\n<tr>\n<td width=\"99%\"><strong>Vocabulary development in early childhood is an important predictor of later language outcomes. Yet few measures exist to track the development of ASL in very young children. With our collaborators Dr. Naomi Caselli and Dr.
Jennie Pyers, we are developing measures of productive and receptive language for use with infants, toddlers, and children learning ASL.<\/strong>\n<\/td>\n<td width=\"1%\"><div style=\"width: 500px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-19-3\" width=\"500\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"\/lavalab\/files\/2019\/08\/Deaf-Children-Milestones-3.1.mp4?_=3\" \/><a href=\"\/lavalab\/files\/2019\/08\/Deaf-Children-Milestones-3.1.mp4\">\/lavalab\/files\/2019\/08\/Deaf-Children-Milestones-3.1.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"bu-slideshow-container deaf-children-&amp;-language-milestones autoplay\" id=\"bu-slideshow-container-1721\" data-slideshow-name=\"deaf-children-&-language-milestones\" data-slideshow-delay=\"5000\" style=\"width: auto; \"><div class='slideshow-loader active'><div class='loader-animation'><\/div><p>loading slideshow...<\/p><\/div><div class=\"bu-slideshow-slides\"><ul class=\"bu-slideshow transition-slide\" id=\"bu-slideshow-1721\"><li id=\"bu-slideshow-1721_0\" class=\"slide \"><div class=\"bu-slide-container slide-caption-bottom-right\"><img src=\"\/lavalab\/files\/2019\/08\/Screen-Shot-2019-08-02-at-12.23.38-PM-768x510.png\" alt=\"\" \/><\/div><\/li><li id=\"bu-slideshow-1721_1\" class=\"slide \"><div class=\"bu-slide-container slide-caption-bottom-right\"><img src=\"\/lavalab\/files\/2019\/08\/Screen-Shot-2019-08-02-at-12.23.05-PM-768x502.png\" alt=\"\" \/><\/div><\/li><li id=\"bu-slideshow-1721_2\" class=\"slide \"><div class=\"bu-slide-container slide-caption-bottom-right\"><img src=\"\/lavalab\/files\/2019\/08\/Screen-Shot-2019-08-02-at-12.25.10-PM-768x510.png\" alt=\"\" \/><\/div><\/li><\/ul><\/div><div class=\"bu-slideshow-navigation-container\"><ul class=\"bu-slideshow-navigation nav-icon\" id=\"bu-slideshow-nav-1721\" aria-hidden=\"true\"><li><a href=\"#\" id=\"pager-1\" class=\" active\"
aria-hidden=\"true\"><span>1<\/span><\/a><\/li> <li><a href=\"#\" id=\"pager-2\" class=\"\" aria-hidden=\"true\"><span>2<\/span><\/a><\/li> <li><a href=\"#\" id=\"pager-3\" class=\"\" aria-hidden=\"true\"><span>3<\/span><\/a><\/li> <\/ul><\/div><\/div>\n<p>&nbsp;<\/p>\n<h4><strong>How do adult ASL signers process language?<\/strong><\/h4>\n<table width=\"100%\">\n<tbody>\n<tr>\n<td width=\"99%\"><strong>We study ASL production, comprehension, and processing in deaf adults from a range of backgrounds. We are interested in how adults process ASL as they perceive signs; how ASL phonology and semantics influence comprehension; and how signers choose to express various concepts in ASL. We use a range of approaches, primarily eye-tracking, to understand adult ASL processing.<\/strong>\n<\/td>\n<td width=\"1%\"><div style=\"width: 500px;\" class=\"wp-video\"><video class=\"wp-video-shortcode\" id=\"video-19-4\" width=\"500\" height=\"360\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"\/lavalab\/files\/2019\/08\/Adult-ASL-signers-3.1.mp4?_=4\" \/><a href=\"\/lavalab\/files\/2019\/08\/Adult-ASL-signers-3.1.mp4\">\/lavalab\/files\/2019\/08\/Adult-ASL-signers-3.1.mp4<\/a><\/video><\/div><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>What does joint attention look like in interactions between deaf children and their
parents?<\/p>\n","protected":false},"author":12525,"featured_media":0,"parent":12,"menu_order":2,"comment_status":"closed","ping_status":"closed","template":"page-templates\/no-sidebars.php","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/pages\/19"}],"collection":[{"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/users\/12525"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/comments?post=19"}],"version-history":[{"count":51,"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/pages\/19\/revisions"}],"predecessor-version":[{"id":2020,"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/pages\/19\/revisions\/2020"}],"up":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/pages\/12"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/lavalab\/wp-json\/wp\/v2\/media?parent=19"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}