{"id":116,"date":"2018-05-16T23:35:50","date_gmt":"2018-05-17T03:35:50","guid":{"rendered":"https:\/\/sites.bu.edu\/aiem\/?page_id=116"},"modified":"2023-08-10T16:38:49","modified_gmt":"2023-08-10T20:38:49","slug":"research","status":"publish","type":"page","link":"https:\/\/sites.bu.edu\/aiem\/research\/","title":{"rendered":"Research"},"content":{"rendered":"<p>Please check out our research projects:<\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/visual-journalism-dalle\/\">Affective Response towards AI Generated Multi-modal News<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Sejin Paik, Sarah Bonna, Ekaterina Novozhilova, Ge Gao, Jongin Kim, Derry Tanti Wijaya, Margrit Betke<\/span><\/p>\n<p style=\"padding-left: 30px;\">This study explores the affective responses and newsworthiness perceptions of generative AI for visual journalism. While generative AI offers advantages for newsrooms in terms of producing unique images and cutting costs, the potential misuse of AI-generated news images is a cause for concern. For our study, we designed a 3-part news image codebook for affect-labeling news images based on journalism ethics and photography guidelines. We collected 200 news headlines and images retrieved from a variety of U.S. news sources on the topics of gun violence and climate change, generated corresponding news images from DALL-E 2 and asked study participants to annotate their emotional responses to the human-selected and AI-generated news images following the codebook. We also examined the impact of modality on emotions by measuring the effects of visual and textual modalities on emotional responses. The findings of this study provide insights into the quality and emotional impact of generative news images produced by humans and AI. Further, results of this work can be useful in developing technical guidelines as well as policy measures for the ethical use of generative AI systems in journalistic production. 
The codebook, images, and annotations are made publicly available to facilitate future research in affective computing, specifically tailored to civic and public-interest journalism.<\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/multi-modal-emotion-reaction-prediction-towards-gun-violence-news\/\">Multi-modal Emotion Prediction towards Gun Violence News<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Ge Gao, Sejin Paik, Carley Reardon, Yanling Zhao, Lei Guo, Prakash Ishwar, Margrit Betke, Derry Tanti Wijaya<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>This study created a novel dataset, BU-NEmo+, and provided a benchmark for predicting people&#8217;s emotional reactions towards multi-modal (images and headlines) news content related to gun violence. In curating the dataset, we developed methods to identify news items that will trigger similar versus divergent emotional responses. <\/span>All prediction models outperformed our baselines by significant margins across several metrics. News consumers and social media platforms could use our models to safeguard against manipulative news content and to predict whether a post is likely to be clickbait.<\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/detecting-frames-in-news-headlines-and-its-application-to-analyzing-news-framing-trends-surrounding-u-s-gun-violence\/\">Detecting frames in news headlines and its application to analyzing news framing trends surrounding U.S. gun violence<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Siyi Liu, Lei Guo, Kate Mays, Margrit Betke, Derry Tanti Wijaya<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>The Gun Violence Frame Corpus (GVFC) was curated and annotated by journalism and communication experts. 
Our proposed approach sets a new state of the art for multiclass news frame detection, significantly outperforming a recent baseline by an absolute difference of 35.9% in accuracy. We apply our frame detection approach in a large-scale study of 88k news headlines about the coverage of gun violence in the U.S. between 2016 and 2018.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/evaluatingcrowdcoding\/\">Accurate, Fast, But Not Always Cheap: Evaluating \u201cCrowdcoding\u201d as an Alternative Approach to Analyze Social Media Data<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Lei Guo, Kate Mays, Sha Lai, Mona Jalal, Prakash Ishwar, Margrit Betke<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>This study evaluated the validity and efficiency of crowdcoding based on the analysis of 4,000 tweets about the 2016 U.S. presidential election. The results show that, compared with traditional quantitative content analysis, crowdcoding yielded comparably valid results and was superior in efficiency, but was more expensive under most circumstances.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/dynamic-alloc\/\">Dynamic Allocation of Crowd Contributions for Sentiment Analysis during the 2016 U.S. Presidential Election<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Mehrnoosh Sameki, Mattia Gentil, Kate K. Mays, Lei Guo, Margrit Betke<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>We explore two dynamic-allocation methods: (1) the number of workers queried to label a tweet is computed offline, based on the predicted difficulty of discerning the sentiment of a particular tweet; (2) the number of crowd workers is determined online, during an iterative crowdsourcing process, based on inter-rater agreement between labels. We applied our approach to 1,000 Twitter messages about the four U.S. 
presidential candidates Clinton, Cruz, Sanders, and Trump, collected during February 2016.\u00a0<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/dict-based-analysis-and-unsupervised-modeling\/\">Big Social Data Analytics in Journalism and Mass Communication: Comparing Dictionary-Based Text Analysis and Unsupervised Topic Modeling<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Lei Guo, Chris J. Vargo, Zixuan Pan, Weicong Ding, Prakash Ishwar<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>By applying two \u201cbig data\u201d methods to make sense of the same dataset\u201477 million tweets about the 2012 U.S. presidential election\u2014the study provides a starting point for scholars to evaluate the efficacy and validity of different computer-assisted methods for conducting journalism and mass communication research, especially in the area of political communication.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/crowdsourcing-to-crowdcoding\/\">From Crowdsourcing to Crowdcoding: An Alternative Approach to Annotate Big Data in Communication Research<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Lei Guo, Kate Mays, Sha Lai, Mona Jalal, Prakash Ishwar, Margrit Betke<\/span><\/p>\n<p style=\"padding-left: 30px;\"><span>This study evaluated the validity and efficiency of crowdcoding based on the analysis of 4,000 tweets about the 2016 U.S. presidential election. 
The results show that, compared with traditional quantitative content analysis, crowdcoding yielded comparably valid results and was superior in efficiency, but was more expensive under most circumstances.<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/sites.bu.edu\/aiem\/performance-comparison-of-tools\/\">Performance Comparison of Crowdworkers and NLP Tools on Named-Entity Recognition and Sentiment Analysis of Political Tweets<\/a><\/li>\n<\/ul>\n<p style=\"padding-left: 30px;\"><span style=\"color: #008080;\">Mona Jalal, Kate K. Mays, Lei Guo, Margrit Betke<\/span><\/p>\n<p style=\"padding-left: 30px;\">Our experiments show that, for our dataset of political tweets, the most accurate NER system, Google Cloud NL, performed almost on par with crowdworkers, but the most accurate ELS analysis system, TensiStrength, fell short of crowdworker accuracy by a large margin of more than 30 percentage points.<\/p>\n<p>&nbsp;<\/p>\n<p style=\"padding-left: 30px;\">This research has been sponsored to date by:<\/p>\n<p style=\"padding-left: 30px;\"><img loading=\"lazy\" src=\"\/aiem\/files\/2020\/04\/download.jpeg\" alt=\"\" width=\"224\" height=\"225\" class=\"alignnone size-full wp-image-325\" srcset=\"https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/download.jpeg 224w, https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/download-150x150.jpeg 150w, https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/download-100x100.jpeg 100w\" sizes=\"(max-width: 224px) 100vw, 224px\" \/><img loading=\"lazy\" src=\"\/aiem\/files\/2020\/04\/share-9cd7266ef5001b20f98e01062c26189fa69ed6c784df04caf809668887fd339a-636x151.png\" alt=\"\" width=\"636\" height=\"151\" class=\"alignnone size-medium wp-image-326\" srcset=\"https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/share-9cd7266ef5001b20f98e01062c26189fa69ed6c784df04caf809668887fd339a-636x151.png 636w, https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/share-9cd7266ef5001b20f98e01062c26189fa69ed6c784df04caf809668887fd339a-768x183.png 768w, 
https:\/\/sites.bu.edu\/aiem\/files\/2020\/04\/share-9cd7266ef5001b20f98e01062c26189fa69ed6c784df04caf809668887fd339a-1024x244.png 1024w\" sizes=\"(max-width: 636px) 100vw, 636px\" \/><\/p>\n<p style=\"padding-left: 30px;\"><img loading=\"lazy\" src=\"\/aiem\/files\/2020\/04\/Screen-Shot-2020-04-10-at-3.58.03-AM.png\" alt=\"\" width=\"564\" height=\"107\" class=\"alignnone size-full wp-image-327\" \/><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Please check out our research projects: Affective Response towards AI Generated Multi-modal News Sejin Paik, Sarah Bonna, Ekaterina Novozhilova, Ge Gao, Jongin Kim, Derry Tanti Wijaya, Margrit Betke This study explores the affective responses and newsworthiness perceptions of generative AI for visual journalism. While generative AI offers advantages for newsrooms in terms of producing unique [&hellip;]<\/p>\n","protected":false},"author":13014,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":[],"_links":{"self":[{"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/pages\/116"}],"collection":[{"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/users\/13014"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/comments?post=116"}],"version-history":[{"count":33,"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/pages\/116\/revisions"}],"predecessor-version":[{"id":379,"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/pages\/116\/revisions\/379"}],"wp:attachment":[{"href":"https:\/\/sites.bu.edu\/aiem\/wp-json\/wp\/v2\/media?parent=116"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}