{"id":70,"date":"2018-09-28T14:26:03","date_gmt":"2018-09-28T18:26:03","guid":{"rendered":"https:\/\/sites.bu.edu\/emsconf\/?page_id=70"},"modified":"2018-12-11T13:46:21","modified_gmt":"2018-12-11T18:46:21","slug":"face-off-facial-recognition-technologies-and-humanity-in-an-era-of-big-data","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/emsconf\/past-conferences\/face-off-facial-recognition-technologies-and-humanity-in-an-era-of-big-data\/","title":{"rendered":"April 18, 2018 &#8211; Face-Off: Facial Recognition Technologies and Humanity in an Era of Big Data"},"content":{"rendered":"<p><a name=\"about\"><\/a><\/p>\n<h3>About the Event<\/h3>\n<p>As facial recognition technology becomes increasingly sophisticated, and the presence of such devices proves ubiquitous in both public and private spheres, it is critical for researchers to examine the potential effects on both individuals and society as a whole. To this end, the Division of Emerging Media Studies of Boston University\u2019s College of Communication is holding an international symposium to bring together diverse perspectives from social scientists, philosophers, policy-makers, and computer scientists to explore the social, behavioral, and psychological dimensions of this new technological terrain. This unique collection of voices intends to illuminate the various and often competing dimensions of a challenging, complex area of research. Ultimately, it hopes to trace out the implications for society, and the choices that we must collectively and individually make.<\/p>\n<p>Organized and chaired by:<br \/>\nJames E. 
Katz, Boston University <\/p>\n<hr class=\"no-line\"\/>\n<p><a name=\"speakers\"><\/a><\/p>\n<h3>Speakers &#038; Abstracts<\/h3>\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Margrit Betke, Boston University<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>How Does the Face Recognition Technology Work?<\/em><\/strong><br \/>\nFace recognition is the task of identifying or verifying a person from an image or video. In recent years, advances in artificial intelligence, in particular the use of deep neural networks, have produced astonishingly accurate face recognition systems. This talk will present a high-level introduction to how current face recognition systems work and what they are capable of. <\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Mark Frank, University at Buffalo<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>The Biology of Facial Recognition<\/em><\/strong><br \/>\nA greatly underappreciated aspect of facial recognition is that it was important for survival throughout our evolutionary history. This presentation will discuss the history, the reasons, and the social importance of our hard-wired abilities to recognize others. It will then extend those findings and ideas to suggest that any new technology that moderates this role would have important implications for our social interaction and internal feelings. The presentation will end with informed speculation as to what those implications may be. 
<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Vanessa Nurock, Epidapo CNRS-UCLA &#038; Universit\u00e9 Paris 8<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>The Risks and Benefits of Facial Recognition? A New Direction Needed<\/em><\/strong><br \/>\nWhat are the risks and benefits of facial recognition? This question is usually considered the most relevant and efficient way to deal with the ethical and political issues of facial recognition. This talk suggests that a different approach may be useful for developing a fuller analysis of these ethical issues. I develop two different but related arguments. First, I argue that it is both necessary and urgent to go beyond this risk\/benefit stance in order to examine facial recognition as a particular case of an emerging technology. Second, I rely on philosophy and cognitive science to argue that a full-fledged ethics of facial recognition should deal with the three meanings of recognition: &#8216;recognition&#8217; means, first, a form of identification; second, an act of intellectual apprehension; and third, an act of acknowledging or respecting someone. Together, these two arguments will help suggest a few directions for an ethics of facial recognition. 
<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Laura Specker Sullivan, Harvard Medical School\u2019s Center for Bioethics <\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>The Ethical Significance of the Face and AI Facial Recognition<\/em><\/strong><br \/>\nMany analyses of facial recognition focus on how to use this technology without ethical missteps. Some of these real and potential issues include biases in data sets making their way into algorithm operation, consent for the use of a face as a piece of biometric data, privacy of faces as intimate information that is nevertheless externally apparent, and accountability for facial recognition algorithms to be transparent and fair. While these are all very real and pressing issues, I would like to take a slightly different approach. I will consider the effects that facial recognition technology might have on ethics itself, where ethics is understood as a social practice of caring for and considering each other. I argue that part of this social practice is attunement and sensitivity to the nuances of other humans\u2019 facial expressions. I will then consider two questions: whether algorithms can be trained to interpret facial expressions in the same way, and the implications of offloading human facial recognition practices onto artificial systems. 
<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\"tabindex=\"0\" role=\"button\">Derek Christensen, Accenture<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>An Industry Perspective on Facial Recognition Technology<\/em><\/strong><br \/>\nFacial recognition and cognitive services are advancing at a rapid pace and driving real change in industry. This segment will explore industry applications of facial, image, and video recognition, the underlying trend of mass personalization, and the combinatory power of facial recognition with adjacent technologies.<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\"tabindex=\"0\" role=\"button\">Pierre Piazza, Cergy-Pontoise University<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>Alphonse Bertillon: Issues in the \u201cScientific\u201d Identification of Persons by Means of Facial Features at the Turn of the 20th Century<\/em><\/strong><br \/>\nAuthorities had been attempting to determine the true identity of individuals based on written descriptions of their faces since the Middle Ages at least, as evidenced by the development of various forms of descriptions and wanted notices at the time. Attempts at formalizing and\/or codifying this can be observed in the early 19th century. Reliance on facial features by policing authorities to identify individuals was not, however, the most rigorous of processes yet, as both its terminology and practices focused essentially on the most conspicuous physical characteristics\u2014even going as far as assessing an individual\u2019s degree of dangerousness from certain elements of their facial morphology.  
This talk specifically explores the portrait parl\u00e9 (French for \u201cspoken portrait\u201d), the real \u201crevolution\u201d in the field, which was a method for describing facial features developed and implemented by Alphonse Bertillon over the last three decades of the 19th century \u2013 initially as he worked for the Paris pr\u00e9fecture de police (police department). The signalement descriptif (\u201cdescriptive portrayal\u201d), as it was also called, was intended to supplement the anthropometric measurements that were already being relied upon to discriminate among individuals. The objective was to capture human physiognomy as accurately as possible, and the tool soon proved extremely useful as an evidence-based method for establishing the uniqueness of each repeat offender\u2019s identity.\n<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Clare Garvie and Alvaro Bedoya, Georgetown Center on Privacy &#038; Technology; Jonathan Frankle, Massachusetts Institute of Technology <\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>The Perpetual Line-Up: Unregulated Police Face Recognition in America.<\/em><\/strong><br \/>\nPeople often think of face recognition technology as some science-fiction future. In reality, half of all American adults are in a police or FBI face recognition network. Police face recognition technology is much more pervasive than people realize &#8211; yet it is not subject to any meaningful system of regulation. In this talk, Clare Garvie, Alvaro Bedoya, and Jonathan Frankle will discuss The Perpetual Line-Up: Unregulated Police Face Recognition in America, the Center&#8217;s year-long investigation into police use of face recognition. 
In a world where we have no (formal) reasonable expectation of privacy in public, how should the law protect Americans&#8217; privacy, civil liberties, and civil rights?<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Luke Stark, Dartmouth College<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>Emotion, Classification, and Race in Facial Recognition Systems <\/em><\/strong><br \/>\nFacial recognition systems are increasingly common components of commercial smartphones such as the iPhone X and the Samsung Galaxy S9. These technologies are also increasingly being put to use in consumer-facing social media video-sharing applications, such as Apple\u2019s Animoji, Facebook Messenger\u2019s masks and filters, and Samsung\u2019s AR Emoji. These animations serve as technical phenomena translating moments of affective and emotional expression into mediated, socially legible forms. Through an analysis of company patents, technical and promotional materials, and the broader literature on digital animation, this paper considers the ways these facial recognition systems classify and categorize racial identities in human faces. The paper considers both the potential for racializing logics within these systems of classification and the ways data regarding emotional expression gathered through these systems might interact with identity-based forms of classification. 
<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Daniel Halpern, Pontifical Catholic University of Chile<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\"><\/p>\n<p><strong><em>Public Perceptions and Concerns: A View from Chile<\/em><\/strong><br \/>\nThis experiment, conducted with 2,380 Chilean professionals, aimed to understand how individuals react to positive and negative scenarios about the impact of facial recognition technologies, and which psychological factors affect subjects&#8217; attitudes toward the acceptance of this technology. Results showed that in the negative condition individuals are less willing to accept its use, and that this relationship is mediated by attitudes toward the technology (Cyber-Utopianism and Dystopianism) and the level of security elicited by the experiment. The study also confirmed that several predictors, such as exposure to sci-fi series or movies, recognition of human characteristics in Artificial Intelligence, religiosity, and conspiracy mentality, impact the acceptance of facial recognition.<\/p>\n<p><\/div>\n<\/div>\n\n<div class=\"bu_collapsible_container \" aria-live=\"polite\" data-customize-animation=\"false\"><h4 class=\"bu_collapsible\" aria-expanded=\"false\" tabindex=\"0\" role=\"button\">Lora Appel, OpenLab, University Health Network, Toronto; York University, Toronto<\/h4><div class=\"bu_collapsible_section\" style=\"display: none;\">\n<p><strong><em>Putting a Face to a Name in Clinical Settings: Recognizing Faces in Manifestation and Outcomes of Clinician Anonymity.<\/em><\/strong><br \/>\nHealthcare is characterized by \u201cmore to do, more to know, more to manage, and more people involved than ever before.\u201d This reality brings challenges to staff working in rapidly changing teams and stressful environments. 
The inability to recognize colleagues (by face, for example) and the general lack of familiarity within teams undermine the effective collaboration that is critical to the delivery of patient care. This phenomenon, termed \u2018clinician anonymity,\u2019 was neither well defined nor understood, and few studies focused on interventions that addressed its causes and effects. This study focused on two novel elements: studying the phenomenon of \u2018clinician anonymity\u2019 and its effects on inter-professional communication in hospital practice, and applying design science theory and methodology to understand and address communication problems.\n<\/p>\n<p><\/div>\n<\/div>\n<br \/>\n<hr class=\"no-line\"\/><\/p>\n<h4>Speaker Biographies<\/h4>\n<p><em><strong>Lora Appel<\/strong><\/em> is an assistant professor of Health Informatics at the Faculty of Health at York University, and a Research Scientist at OpenLab, an innovation centre housed at University Health Network, the largest medical research organization in Canada. She leads \u201cPrescribing Virtual Reality (VRx),\u201d a collection of studies that introduce and evaluate AR\/VR\/MR interventions for patients, caregivers, and healthcare providers. Lora has received several grants from the Centre for Aging in Brain Health Innovation (CABHI) to pursue this work in aging and dementia care. She is also involved in the design of a new curriculum incorporating VR for the School of Nursing at York University. Lora received her PhD from the School of Communication and Information at Rutgers University and was awarded the Gerald R. Miller Outstanding Doctoral Dissertation Award in 2017 for her work defining clinician anonymity and designing \u201cFace2Name,\u201d a tool to improve interprofessional communication in clinical settings. 
Her expertise is in applying design thinking and science methodologies to healthcare innovation; she is passionate about designing new technological interventions that provide care in the pursuit of a cure. www.PrescribingVR.com<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1lMLDt0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Alvaro M. Bedoya <\/strong><\/em> is the founding executive director of the Center on Privacy and Technology at Georgetown Law, where he teaches a joint privacy course with the Massachusetts Institute of Technology. Prior to joining Georgetown Law, he served as Chief Counsel to the Senate Judiciary Subcommittee on Privacy, Technology and the Law. In 2016, he co-authored the Center&#8217;s year-long investigation into police use of face recognition, The Perpetual Line-Up: Unregulated Police Face Recognition in America. He is a graduate of Harvard College and Yale Law School, where he received the Paul &#038; Daisy Soros Fellowship for New Americans and was an editor of the Yale Law Journal. You can follow him on Twitter @alvarombedoya.<\/p>\n<hr class=\"no-line\"\/>\n<p><em><strong>Margrit Betke <\/strong><\/em> is a Professor in the Department of Computer Science and a Co-Director of the Artificial Intelligence Research (AIR) Initiative at Boston University.  
Her research interests include video-based human-computer interfaces, facial expressivity and gesture analysis, and assistive technology.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=z5jSG0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Derek Christensen <\/strong><\/em> is the Innovation Lead at Accenture\u2019s Boston Liquid Studio, a rapid prototyping group focused on new and emerging technologies. In this role he facilitates conversations and workshops with companies to discuss how their business problems could be addressed with current or future-state technology, as well as exploring non-technical innovative solutions. Derek brings a unique blend of functional knowledge and technical capabilities. His work experience includes virtual agent delivery, API design, mobile application development, and extensive program and project management. He has a BS and MS in Information Systems from Brigham Young University and joined Accenture in 2008.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1tEOyn0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Mark G. Frank <\/strong><\/em> received his Ph.D. in Social Psychology from Cornell University, followed by a National Research Service Award postdoc in the Psychiatry Department at the University of California at San Francisco Medical School.  From there he joined the School of Psychology at the University of New South Wales in Sydney, Australia, where he worked for 4 years until he joined the Communication Department at Rutgers University.  
In 2005 he accepted a position at the University at Buffalo. He has published numerous papers on facial expressions, emotion, interpersonal deception, and violence in extremist groups, and has recently won the SUNY Chancellor\u2019s Award for Excellence in Scholarship and Creative Activities. He has had research funding from the National Science Foundation, the US Department of Homeland Security, and the US Department of Defense to examine deception and hidden emotion behaviors in checkpoint, law enforcement, and counter-terrorism situations, as well as aggression in extremist groups. He is also the co-developer of a patented automated computer system to read facial expressions, for which he won a Visionary Innovator Award. He has used these findings to lecture to, consult with, and train virtually all US Federal Law Enforcement\/Intelligence Agencies, as well as local\/state agencies and those of select foreign countries such as Canada, Australia, and the UK. He has also given briefings on deception and counter-terrorism to the US Congress as well as the US National Academies of Sciences. He was also an original member of the FBI Behavioral Science Unit\u2019s Terrorism Research and Analysis Project.  <\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1aLXAo0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Jonathan Frankle <\/strong><\/em> is a PhD student in the Computer Science and Artificial Intelligence Laboratory at MIT, where he is a member of the Internet Policy Research Initiative. He studies the basic science of neural networks, applications of homomorphic encryption, and connections between the two. 
Prior to arriving at MIT, he served as the first staff technologist at Georgetown Law&#8217;s Center on Privacy and Technology, where he worked on The Perpetual Line-Up and taught the inaugural offering of Computer Programming for Lawyers. He earned his bachelor&#8217;s and master&#8217;s degrees in computer science at Princeton. He has spent summers at Google (encryption key management, cryptography research) and Microsoft; he will spend this summer at Google Brain.<\/p>\n<hr class=\"no-line\"\/>\n<p><em><strong>Clare Garvie <\/strong><\/em> is an associate with the Center on Privacy &#038; Technology at Georgetown Law. Her current research focuses on how government use of face recognition impacts the average citizen, and the ways citizens, public defenders, and policymakers can ensure that the technology is under control. She is a co-author of The Perpetual Line-Up: Unregulated Police Face Recognition in America. She received her J.D. from Georgetown Law and her B.A. from Barnard College. Prior to her current position she worked on human rights issues and international criminal law with the International Center for Transitional Justice. You can follow her on Twitter @clareangelyn.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1cji6x0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Daniel Halpern <\/strong><\/em> is an associate professor in the School of Communications at the Catholic University of Chile and Director of TrenDigital (www.tren-digital.cl), a think tank where he studies, teaches and does consulting work on social media and online behavior. His research focuses on the social consequences of the use of Information and Communication Technologies. 
He has published several books, as well as articles in journals such as the Journal of Computer-Mediated Communication, International Journal of Communication, Computers in Human Behavior, Personality and Individual Differences, and Behaviour &#038; Information Technology.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1Vsn2W0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Vanessa Nurock <\/strong><\/em> is an associate professor in Political Theory and Ethics at Paris 8 University and a visiting associate researcher in philosophy at EPIDAPO (CNRS-UCLA) in 2016-2018. Her research stands at the interface between ethics, politics, and emerging science. Her books and articles address issues concerning bioethics, nanoethics, neuroethics, environmental and animal ethics, robot ethics, as well as the ethics and politics of care and justice. Selected books: Sommes-nous naturellement moraux (in French: PUF 2012) and Rawls, pour une soci\u00e9t\u00e9 juste (in French: Michalon 2008; in Spanish: Jusbaires 2015). Selected articles on new and emergent technologies (nanotechnologies) in English: \u2018Nanoethics: ethics for, from or with nanotechnologies?\u2019, Hyl\u00e9 2010, 16(1), pp. 31-42; S. Pell\u00e9 &#038; V. Nurock, \u2018Of nanochips and persons: toward an ethics of diagnostic technology in personalized medicine\u2019, Nanoethics 2012, 6(3), pp. 155-165; V. Nurock &#038; N. Panissal, \u2018Teaching a care approach to Nanotechnologies\u2019, in From Nanotechnologies to Emerging Technologies: Towards a Global Responsibility, D. M. Bowman et al. (eds.), AKA Verlag, 2016, pp. 
125-137.\n<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1GkBS60 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Pierre Piazza <\/strong><\/em> is a lecturer in political science at Cergy-Pontoise University (CESDIP-LEJEP-CLAMOR), near Paris. He is a specialist in the social history of state identification systems and techniques. He has published several papers and books on the Bertillon system (anthropometry), fingerprinting (dactyloscopy), identity cards, police files, and biometrics.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=1rRwdK0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Laura Specker Sullivan<\/strong><\/em>, PhD, a specialist in interdisciplinary and cross-cultural ethics, is a research fellow at the Center for Bioethics, Harvard Medical School. From 2015 to 2017 she was a postdoctoral fellow at the Center for Sensorimotor Neural Engineering, University of Washington, and the National Core for Neuroethics, University of British Columbia. She received her PhD in philosophy from the University of Hawaii at Manoa in 2015, after spending two years as an international researcher at the Kokoro Research Center, Kyoto University. She is currently the chair of the Neuroethics Affinity Group for the American Society for Bioethics and Humanities and a member of the Philosophy and Medicine committee of the American Philosophical Association. 
She has published articles in the Journal of Neural Engineering, Science, Science and Engineering Ethics, the American Journal of Bioethics &#8211; Neuroscience, the Journal of Medical Ethics, the Kennedy Institute of Ethics Journal, the International Journal of Philosophical Studies, and Social Science and Medicine.<\/p>\n<p>\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=27Tq690 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div><br \/>\n<hr class=\"no-line\"\/><\/p>\n<p><em><strong>Luke Stark <\/strong><\/em> is a Postdoctoral Fellow in the Department of Sociology at Dartmouth College, and a Fellow at the Berkman Klein Center for Internet &#038; Society at Harvard University. Luke studies the historical and contemporary intersections of digital media and behavioral science, and how psychological techniques are incorporated into social media platforms, mobile apps, and artificial intelligence (AI) systems. His broader research interrogates how these behavioral technologies affect human privacy, emotional expression, and digital labor, and the social and political challenges that technologists, policymakers, and the wider public face as a result. Luke holds a PhD from the Department of Media, Culture, and Communication at New York University, and an Honours BA and MA from the University of Toronto; he has been a Fellow of the NYU School of Law\u2019s Information Law Institute (ILI), and an inaugural Fellow with the University of California Berkeley\u2019s Center for Technology, Society, and Policy (CTSP). 
He tweets @luke_stark; learn more at https:\/\/starkcontrast.co.<\/p>\n\t<div  class=\"responsiveVideo-wrapper col-full\">\n\t\t<div class=\"responsiveVideo\">\n\t\t\t<iframe src=https:\/\/www.bu.edu\/buniverse\/interface\/embed\/embed.html?v=6BRSM0 width=550 height=310 frameborder=0><\/iframe>\n\t\t<\/div>\n\t\t\n\t<\/div>\n<hr class=\"no-line\"\/>\n<h3>Overview<\/h3>\n<p>Efforts to understand facial expressions and determine identity through technological means have existed since at least the 1960s (Gates, 2011). Decades of technological advancement have amplified the capacity for machines to discern individual identities, and today, facial recognition technology offers promising opportunities in sundry domains; algorithmically informed predictability can offer substantial benefits in policing and security (Ricanek &#038; Boehnen, 2012), medicine (Tan, Gilani, Mayberry, Mian, Hunt, Walters, &#038; Whitehouse, 2017), and commercial endeavors (Deng, Navarathna, Carr, Mandt, Yue, Matthews, &#038; Mori, 2017). However, these opportunities are simultaneously met with several challenges, such as the lack of regulation (Garvie, Bedoya, &#038; Frankle, 2016), the potential for flawed data through algorithmic bias (Introna, 2005; Introna &#038; Wood, 2004), and infringements on personal privacy, particularly with the influx of photo sharing via social media platforms and the resultant access to big data (Gasser, 2016; Mohapatra, 2016; Nakar &#038; Greenbaum, 2017; Shaw, 2012).<\/p>\n<p>To more fully understand the complexities of facial recognition technology and its consequences, the Division of Emerging Media Studies at Boston University presents an international symposium, where scholars from a variety of fields will discuss the promises and perils. An interdisciplinary, cross-cutting approach will help to facilitate an in-depth examination of the topic through paper presentations, panel discussions, and a poster session. 
The symposium will encourage the audience, both in-person and via virtual livestream, to participate actively with questions and debate. The goal of the event is for participants to not only develop a deep understanding of the competing issues at play but also identify actionable next steps within their fields of study.\n<\/p>\n<hr\/>\n<p><a name=\"agenda\"><\/a><\/p>\n<h3>Agenda (Subject to Change)<\/h3>\n<ul>\n<li>April 17th: Welcome Reception for Speakers and Panelists, location TBD<\/li>\n<li>April 18th: Hillel Center, 213 Bay State Rd., Boston, MA, USA<\/li>\n<\/ul>\n<table>\n<caption style=\"font-size: 1em;\">April 18 Schedule<\/caption>\n<colgroup>\n<col style=\"width:25%;\">\n<col style=\"width:75%;\">\n<\/colgroup>\n<thead>\n<tr>\n<th>Time<\/th>\n<th>Session<\/th>\n<\/tr>\n<\/thead>\n<tr>\n<td><strong>9:00 &#8211; 9:30 <\/strong><\/td>\n<td>Coffee &#038; Registration<\/td>\n<\/tr>\n<tr>\n<td><strong>9:30 &#8211; 9:40 <\/strong><\/td>\n<td>Welcoming Remarks<\/td>\n<\/tr>\n<tr>\n<td><strong>9:40 &#8211; 10:40 <\/strong><\/td>\n<td>Panel 1: Understanding Facial Recognition <\/td>\n<\/tr>\n<tr>\n<td><strong>10:40 &#8211; 10:50<\/strong><\/td>\n<td>Coffee Break<\/td>\n<\/tr>\n<tr>\n<td><strong>10:50 &#8211; 12:00pm <\/strong><\/td>\n<td>Panel 2: Ethical Concerns and Practical Benefits<\/td>\n<\/tr>\n<tr>\n<td><strong>12pm &#8211; 12:40 <\/strong><\/td>\n<td>Lunch<\/td>\n<\/tr>\n<tr>\n<td><strong>12:40 &#8211; 2:00 <\/strong><\/td>\n<td>Panel 3: Historical and Contemporary Uses and Abuses<\/td>\n<\/tr>\n<tr>\n<td><strong>2:00 &#8211; 2:10 <\/strong><\/td>\n<td>Coffee Break<\/td>\n<\/tr>\n<tr>\n<td><strong>2:10 &#8211; 3:10 <\/strong><\/td>\n<td>Panel 4: Applications and Perceptions<\/td>\n<\/tr>\n<tr>\n<td><strong>3:10 &#8211; 3:15 <\/strong><\/td>\n<td>Conclusions &#038; Final Remarks<\/td>\n<\/tr>\n<tr>\n<td><strong>3:20 <\/strong><\/td>\n<td>Adjournment<\/td>\n<\/tr>\n<tr>\n<td><strong>3:30 &#8211; 4:30 <\/strong><\/td>\n<td>Attendees invited to <a 
href=\"http:\/\/www.bu.edu\/com\/calendar\/?eid=208237\" rel=\"noopener\">DeFleur Distinguished Lectureship<\/a> or the Computer Science Distinguished Lectureship<\/td>\n<\/tr>\n<\/table>\n<p><a name=\"board\"><\/a><\/p>\n<h3>International Scientific Advisory Board<\/h3>\n<ul>\n<li>Appel, Lora &#8211; OpenLab, University Health Network, Toronto<\/li>\n<li>Betke, Margrit \u2013 Boston University <\/li>\n<li>Brito, Eliane P. Z. \u2013 Funda\u00e7\u00e3o Getulio Vargas, S\u00e3o Paulo <\/li>\n<li>Caronia, Letizia &#8211; Universit\u00e0 degli Studi di Bologna<\/li>\n<li>Chen, Yi-Fan \u2013 Miami University<\/li>\n<li>Cushman, Ellen \u2013 Northeastern University<\/li>\n<li>Floyd, Juliet \u2013 Boston University<\/li>\n<li>Laugier, Sandra \u2013 Sorbonne University<\/li>\n<li>Lim, Sun Sun \u2013 Singapore University of Technology &#038; Design<\/li>\n<li>Neff, Gina \u2013 Oxford University<\/li>\n<li>Poiger, Uta \u2013 Northeastern University<\/li>\n<li>Soysal, Zeynep \u2013 Boston University<\/li>\n<li>Takahashi, Toshie &#8211; Waseda University<\/li>\n<\/ul>\n<hr\/>\n<p><iframe style=\"width:100%; margin-bottom:1em;\" src=\"https:\/\/www.google.com\/maps\/embed?pb=!1m18!1m12!1m3!1d2948.6113913654767!2d-71.10529968431112!3d42.35080794367792!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x89e379fa0b91817d%3A0x8487448a54c2a3c3!2sBoston+University+Hillel+Foundation!5e0!3m2!1sen!2sus!4v1515703145697\" height=\"450\" frameborder=\"0\" style=\"border:0\" allowfullscreen><\/iframe><br \/>\n<a href=\"#top\">Back to top<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>About the Event As facial recognition technology becomes increasingly sophisticated, and the presence of such devices proves ubiquitous in both public and private spheres, it is critical for researchers to examine the potential effects on both individuals and society as a whole. 
To this end, the Division of Emerging Media Studies of Boston University\u2019s College [&hellip;]<\/p>\n","protected":false},"author":14812,"featured_media":0,"parent":40,"menu_order":3,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/pages\/70"}],"collection":[{"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/users\/14812"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/comments?post=70"}],"version-history":[{"count":13,"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/pages\/70\/revisions"}],"predecessor-version":[{"id":216,"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/pages\/70\/revisions\/216"}],"up":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/pages\/40"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/emsconf\/wp-json\/wp\/v2\/media?parent=70"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}