Fake news offers unexpected opportunities for trusted media
June 17, 2024
Disinformation is acknowledged as one of journalism's, if not the democratic world's, biggest problems. Fake news and misleading visuals have deepened social divisions and interfered with elections, among other destructive effects. And generative artificial intelligence, or AI, which, for example, allows users to make a minutes-long video from a single photograph of a politician, is only going to make things worse.
However, at DW's annual Global Media Forum (GMF) in Bonn, there was some unexpectedly positive news regarding the growth and spread of disinformation.
People are much more conscious of it, Renate Nikolay, deputy director-general for communications networks, content and technology at the European Commission, told delegates at the international conference on Monday.
"If you look at where we were five years ago, people are so much more aware these days," Nikolay said, citing the various awareness and information campaigns that European Union counties have undertaken. "Just informing people that, watch out, there might be disinformation, has had a really important effect," she argued.
More knowledge of 'fake news'
Nikolay was not the only expert to have noticed this. Representatives from a diverse set of media outlets attending the GMF also remarked on growing awareness of disinformation.
That is supported by research in the Reuters Institute's 2024 Digital News Report, released this week. Around 59% of people are worried about what's real and what's fake, the report said. People are also increasingly worried about the use of AI to create "fake news" related to politics or conflicts and are particularly concerned about how to recognize untrustworthy content on platforms such as TikTok and X (formerly Twitter).
For some of the GMF delegates, this has actually translated into an opportunity. "I think people [now] have to differentiate between propaganda and factual, professional news," said Tom Rhodes, a veteran journalist and editor based in East Africa, who manages Sudan's Ayin Media. As a result of that search for trusted sources, Rhodes said the outlet's audience reach had been "skyrocketing."
"Our audience see us as something they can trust," he explained.
Impact on elections
"Kenyans learned a lot during the last elections," added Emmanuel Chenze, chief operating officer of the pan-African investigative journalism outlet, Africa Uncensored, based in Nairobi, Kenya.
For example, during the 2017 elections, Kenya was infamously used by the now defunct and disgraced British consultancy Cambridge Analytica as a test case for how social media could influence politics. "They [media users] didn't know what it was then but they do now," Chenze pointed out.
Similar things happened in Taiwan, said Bay Fang, chief executive officer of the Washington-based outlet, Radio Free Asia. Chinese misinformation around the Taiwan elections didn't work as well recently because Taiwanese voters had already seen a lot of it during the last election. "And they had learned from that," she noted.
"[In India] audiences are beginning to look far more carefully for trusted sources in a very crowded media environment," added Anant Goenka, executive director of the Indian Express Group.
"The top 10 or 20 newspapers in India have really benefitted from that because people are very aware of where they consume their news now, which wasn't the case three or four years ago. Credibility is our main asset," he stated.
That has also affected how his journalists deal with AI-related issues. "When it comes to AI, we decided to compromise on the speed in order to get it right," he said, so that the technology would not damage the group's reputation for trustworthy reporting.
Is a smarter audience enough?
Renewed enthusiasm for trusted sources and professional journalism may be an unexpected byproduct of troubling misinformation. But there's still plenty of work to do, argued another of the speakers at the Global Media Forum, freelance technology consultant Madhav Chinnappa.
"Tech is a tool," the former director of news ecosystem development at Google told the audience. "It's not good. It's not bad. It's how you use it. That is just the way it's going to be and we need to acknowledge that."
When it comes to audiences becoming savvier and more skeptical about misinformation and unreal AI-generated content, it's also important to acknowledge there is no single audience, Chinnappa told DW on the sidelines of the GMF. "Any one individual could be well versed on this, or distinctly not [well versed]," he explained. "For me the foundational element is more media literacy for everybody."
Chinnappa recounted how he too had recently been taken in by an AI-generated video clip, and said the most worrying thing is what he calls "good enough" content — fake AI-generated material that still shows signs of being fake if you look closely, but that manages to fool most people. This leads to a slow erosion of trust.
In that sense, media literacy is fundamental, Chinnappa concluded. "I live in London and there all the kids are taught how to be safe online. I think they should also be being taught how to be smart online, how to think critically about the information they're getting."
Edited by: Anne Thomas