By Burt Glass

More Americans are adopting tools such as ChatGPT, Gemini and Claude, but a new opinion survey suggests that how they score their own ability to evaluate the accuracy, reliability, completeness, and biases of text generated by artificial intelligence is cause for concern.

Yi Grace Ji, assistant professor at Boston University’s College of Communication and the primary investigator of the survey, conducted in partnership with Ipsos, said the average result – a mean score of 3.26 out of 5, where 5 indicates that respondents strongly agree they can perform a set of specified tasks in critically evaluating AI-generated responses – is worrisome, especially because respondents tend to overestimate their own abilities.

Read full story here.