News

AIEM Created OpenFraming.org to Help People With or Without a Computational Background Do Automated Framing Analysis

By Yimeng Sun, July 25th, 2020


When journalists cover a news story, they can cover it from multiple angles or perspectives. One news article about COVID-19, for example, might focus on personal preventative actions such as mask wearing, while another might focus on COVID-19’s impact on the economy. These perspectives are called “frames,” and their use may influence public perception and opinion of an issue. We introduce a Web-based system for analyzing and classifying frames in text documents. Our goal is to make effective tools for automatic frame discovery and labeling, based on topic modeling and deep learning, widely accessible to researchers from a diverse array of disciplines. To this end, we provide both state-of-the-art pre-trained frame classification models on various issues and a user-friendly pipeline for training novel classification models on user-provided corpora. Researchers can submit their documents and obtain the frames of those documents. The degree of user involvement is flexible: they can run models that have been pre-trained on select issues; submit labeled documents and train a new model for frame classification; or submit unlabeled documents and obtain potential frames of the documents. The code making up our system is also open-sourced and well documented, making the system transparent and expandable. The system is available online at http://www.openframing.org and via our GitHub page https://github.com/davidatbu/openFraming

 

AIEM crowdcoding workshop at #ICA2018

By guolei, May 16th, 2018

BU AIEM will host a pre-conference on crowdcoding at the 2018 annual conference of the International Communication Association in Prague, the Czech Republic.

ICA Preconference: Crowdsourcing as a Content Analysis Tool

May 24, 13:00 - 17:00

(Location: The Hilton Prague – Istanbul)

Crowdsourcing is a popular method in computer science for categorizing and classifying text and objects. This preconference introduces crowdsourcing for communication researchers as an emerging content analysis method. Rather than rely on a few human coders to carry out a content analysis, the crowdsourcing approach outsources coding tasks to numerous people online (e.g., multiple people code the same item), and applies an aggregation policy to make a decision on a given item.
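The aggregation policy mentioned above can be as simple as majority voting: each item is coded by several crowdworkers, and the most common label wins. The sketch below illustrates that policy; the item names and labels are made up for illustration, not taken from any actual crowdcoding project.

```python
# Minimal sketch of a majority-vote aggregation policy for crowdcoding.
# Items and labels below are illustrative only.
from collections import Counter

def majority_vote(labels):
    """Return the most common label among the workers' codes for one item."""
    return Counter(labels).most_common(1)[0][0]

# Each item was coded by three workers.
codings = {
    "article_1": ["economy", "economy", "health"],
    "article_2": ["health", "health", "health"],
}

decisions = {item: majority_vote(labels) for item, labels in codings.items()}
print(decisions)  # {'article_1': 'economy', 'article_2': 'health'}
```

In practice, researchers often combine such a policy with quality checks (e.g., test questions) and an odd number of coders per item to avoid ties.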

Through a hands-on workshop and research presentations, this preconference explores:

1) Different crowdsourcing platforms to carry out a project. A demonstration will show how to use two different crowdsourcing platforms: Amazon’s Mechanical Turk (MTurk), a well-established service, and Figure Eight (previously CrowdFlower), an emerging service. In the hands-on workshop, attendees will set up their own project on Figure Eight and be guided through the steps, from set-up to extracting and analyzing the crowdworkers’ results.

2) Crowdsourcing as a method that is both cheaper and more efficient than manual content analysis. Can it also be as valid and reliable? Research presentations on both “crowdcoding” and traditional manual coding will be given, as well as a discussion about cost structure and features to ensure quality results.

Organizer:

Artificial Intelligence & Emerging Media (AIEM) Research Team at Boston University

Lei Guo, Assistant Professor of Emerging Media Studies

Kate Mays, PhD student of Emerging Media Studies

Margrit Betke, Professor of Computer Science

Mona Jalal, PhD student of Computer Science

Presenters:

Lei Guo, Assistant Professor of Emerging Media Studies

Kate Mays, PhD student of Emerging Media Studies

Margrit Betke, Professor of Computer Science

Hajo Boomgaarden, Professor of Empirical Social Science Methods, University of Vienna

Brendan Watson, Assistant Professor of Journalism, Michigan State University
