Navigating online misinformation and disinformation

Dr Andrea Carson's research explored the voluntary and legislative approaches governments around the world are taking to tackle online misinformation and disinformation.

Writes Rei Fortes

False information online is a prominent issue around the world as digital technologies advance, making it easier than ever to read and share information. In the last two years, the COVID-19 pandemic brought a new wave of online misinformation and disinformation that posed new threats, including to public health.

Social media platforms such as Facebook, Instagram, Snapchat and TikTok are avenues through which individuals or organisations can intentionally spread information to target specific groups of people. This was evident during the 2016 US presidential election, when a private Russian company manipulated social media posts to support a US presidential candidate.

The dangerous effects of ‘fake news’ have also been displayed across Asia, such as in the 2019 post-election riots in Indonesia, when false theories and information about the presidential election sparked rage on the streets of Jakarta. In Singapore, the government has implemented laws to take action against online misinformation and disinformation, but has also used them to target the reporting of government critics such as Singaporean-Australian blogger Alex Tan.

“Fake news has been around for centuries and we’ve always known that it is a problem. The difference is propaganda was used by leaders during the world wars and now individuals can spread fake news around the world just as much as organisations do. All you need is a keyboard and an internet connection,” says Dr Andrea Carson, Associate Professor in Journalism from the Department of Politics, Media and Philosophy at La Trobe University.

Dr Carson led an independent research study funded by Facebook, exploring the voluntary and legislative approaches of governments around the world to tackling online misinformation and disinformation. Working with Liam Fallon, who is now a researcher at the Brotherhood of St Laurence, Dr Carson aimed to better understand how Australia could take preventive measures to reduce the spread of false information online. The study compared government actions in Indonesia and Singapore with the responses of the European Union.

“There’s not much agreement on how to best define misinformation. I use a fairly simple, broad definition which is verifiably false information that may or may not cause harm,” says Dr Carson.

“Disinformation is more specific—it is the intentional spread of misinformation. And that is to operationalise intention by looking at an individual’s actions or an organisation’s actions.”

The spread of false information online, or "hoaxes", has been an issue for Indonesia, with disinformation targeting the political opposition and the country's diverse ethnic, religious and cultural groups. In September 2019, the government discussed the possibility of adding laws on online misinformation and disinformation to Indonesia's Kitab Undang-Undang Hukum Pidana (KUHP) criminal code.

Indonesia also has the 2016 Information and Electronic Transactions Law (ITE), which focuses on removing or taking down false information in the country.

“Anyone accused of spreading misinformation or hoaxes, which is more akin to disinformation, risks being jailed or heavily fined,” says Dr Carson.

The nation’s Ministry of Communication and Information Technology (KOMINFO) has persuaded social media platforms to do their part in preventing the spread of false information online. But human rights organisations have criticised KOMINFO and the National Cyber and Encryption Agency (BSSN), arguing that the KUHP and ITE laws are being used against journalists and opposition groups.

“The laws have been criticised because political opponents of the government, journalists and minority groups have been caught up in that legislation and are being penalised. So the law in that way has been weaponised, which can stifle free speech and diverse political expressions of journalists and critics of the government,” says Dr Carson.

A similar issue has arisen in Singapore with the government’s approach to tackling false information online through the introduction of the Protection from Online Falsehoods and Manipulation Act (POFMA) in April 2019.

The government presented POFMA as a way to prevent the spread of false information while preserving people’s freedom of speech. In practice, however, the law has been used to silence political opponents, journalists and academics who criticise the Singaporean government, as was evident during the 2020 elections.

“Singapore has a number of strategies to deal with misinformation, from simply labelling it as fake news but not taking it down, right through to ordering the platforms to remove it. This gives the government a lot of power and they get to determine what is and what isn’t fake news,” says Dr Carson.

“This particularly affects freelance journalists in Singapore, who find themselves needing to self-censor when approaching sensitive topics such as capital punishment. At times, their only safe option for reaching their Singaporean readers is through international newspapers like The Guardian.”

Overall, the study shows that a legislative approach to online misinformation and disinformation can endanger free speech if misused by governments, and is not suitable for Australia. The advisable route is for Australia to continue with a voluntary approach, built on multi-sectoral action involving government, tech companies and the public.

In February, DIGI, a not-for-profit industry association representing the major tech companies, launched the Disinformation Code. The new Code is voluntary and is designed to help digital platforms curb the spread of misinformation and disinformation in Australia.

“Combating online misinformation needs to involve sincere collaboration between tech platforms and civil society actors, especially human rights activists, who are often at the forefront and can recognise when there’s misinformation and disinformation campaigns happening,” says Dr Carson.

Dr Carson is currently working on another research study, also funded by Facebook, to determine the most effective fact-checking methods for mitigating COVID-19 misinformation. She and her team from three universities will be conducting surveys, with results expected by late 2022.