The advancement of technology has expanded the reach and scope of modern journalism, but it has also created an environment in which misinformation and libel are increasingly prevalent. Dedicated newsrooms and journalists are working to combat this issue, but given the breadth of the internet, it’s no easy task. Fortunately, a pair of journalists is harnessing the power of AI to protect overstretched newsrooms and spot misinformation before it makes its way out into the world.
CaliberAI is an AI program, launched in November 2020, that works as a warning system for potential libel, especially for European journalists, who are more vulnerable to libel suits than American journalists protected by the First Amendment. Defamation is never good, and even unintentional defamation can expose publications to costly legal action.
The program was built by father-son duo Conor Brady and Neil Brady with help from Carl Vogel, a professor of computational linguistics at Trinity College Dublin. The AI looks for key signs of libel: the explicitly stated name of an individual or group, a claim presented as fact, and the use of taboo language or ideas. Although the program cannot discern truth from fiction with 100 percent accuracy, it can flag potential issues for review by journalists.
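CaliberAI has not published its model, but a toy rule-based sketch in Python can illustrate the three signals described above. Everything here, including the word lists, the regex, and the function name `flag_potential_libel`, is hypothetical and greatly simplified; the real system relies on a trained classifier rather than hand-written rules.

```python
import re

# Hypothetical word list for illustration only; CaliberAI's real
# training data is proprietary.
TABOO_TERMS = {"fraud", "liar", "corrupt", "thief"}

# Naive named-entity check: two or more consecutive capitalized words.
NAME_PATTERN = re.compile(r"\b[A-Z][a-z]+(?:\s+[A-Z][a-z]+)+\b")

# Crude phrasing cues: assertive wording presents a claim as fact,
# while hedged wording signals an allegation rather than an assertion.
ASSERTIVE_MARKERS = (" is a ", " was a ", " has been ")
HEDGED_MARKERS = ("allegedly", "reportedly", "accused of", "claims")

def flag_potential_libel(sentence: str) -> list[str]:
    """Return the reasons a sentence might warrant human review."""
    reasons = []
    if NAME_PATTERN.search(sentence):
        reasons.append("names an identifiable person or group")
    lowered = sentence.lower()
    if any(term in lowered for term in TABOO_TERMS):
        reasons.append("contains taboo or accusatory language")
    if any(m in lowered for m in ASSERTIVE_MARKERS) and not any(
        h in lowered for h in HEDGED_MARKERS
    ):
        reasons.append("states a claim as fact without hedging")
    return reasons

print(flag_potential_libel("John Smith is a fraud."))
# All three signals fire, so this sentence would be flagged for review.
```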
The AI was trained by Conor and Neil Brady themselves, who generated defamatory statements to feed into the program and taught it not only to flag potentially problematic content but also to score it from 0 to 100, with 0 being sound and accurate and 100 being wildly defamatory.
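One plausible reading of that 0-to-100 scale, sketched below under assumed details, is a binary classifier whose probability output is mapped to the published range. The classifier, the mapping, and the `REVIEW_THRESHOLD` cutoff are all hypothetical stand-ins, not CaliberAI's actual design.

```python
# Minimal sketch of the 0-100 scoring idea, assuming a model that
# outputs the probability (0.0-1.0) that a statement is defamatory.

def defamation_score(prob_defamatory: float) -> int:
    """Map a model probability to the 0-100 scale, where 0 is
    sound and accurate and 100 is wildly defamatory."""
    return round(prob_defamatory * 100)

REVIEW_THRESHOLD = 60  # hypothetical cutoff for escalating to an editor

for prob in (0.05, 0.42, 0.93):
    score = defamation_score(prob)
    action = "escalate to editor" if score >= REVIEW_THRESHOLD else "pass"
    print(f"score={score:3d} -> {action}")
```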
Right now, the system works much like spell-check. Users can try a demo version on the company’s website. The creators hope the system will help improve news accuracy as a whole and protect small and medium-sized publications that do not have legal counsel on retainer.
This story is part of our ‘Best of 2021’ series highlighting our top solutions from the year. Today we’re featuring science solutions.