Managing truth and misinformation in online forums
Researchers from Monash University Malaysia are developing a platform to moderate and verify content shared on popular online forums and discussion boards. Social media platforms and online discussion forums have enabled individuals to spread misinformation and fake news intentionally, without being held accountable for what they say.
The platform uses a combination of graph algorithms and machine learning technology to extract valuable tacit information from platforms like Reddit, StackExchange and Quora, and to assign each post a score that estimates its reliability.
Project lead Dr Ian Lim Wern Han said this score technique can offer users insight into the content they consume online.
“By assigning numbers to users of various online discussion forums we’re able to reward those people who are sharing credible and trustworthy content, while punishing others who are pushing incorrect and misinformed content. The reward or punishment aspect is tied to the visibility and engagement of someone’s profile or content,” Dr Lim said.
Dr Lim explained that if users are credible, their content is placed higher on the page for greater visibility, and their Reddit votes are worth more when they vote on other threads or comments.
Users deemed untrustworthy will have their posts placed lower on the page, or hidden from the public. Their votes will also have less worth.
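The visibility and vote-weighting mechanism described above can be sketched in a few lines. This is an illustrative assumption of how such a system might work, not the platform's actual implementation; all names, weights and thresholds are hypothetical.

```python
# Illustrative sketch of credibility-weighted ranking and voting.
# The weight formula and the 0.2 trust threshold are assumptions.

def vote_weight(credibility: float) -> float:
    """Scale a user's vote by their credibility score in [0, 1]."""
    return 0.5 + credibility  # a fully trusted user's vote counts 1.5x

def rank_posts(posts):
    """Sort posts so content from credible users appears first;
    posts from users below a trust threshold are hidden from public view."""
    visible = [p for p in posts if p["credibility"] >= 0.2]
    return sorted(visible,
                  key=lambda p: (p["credibility"], p["score"]),
                  reverse=True)

posts = [
    {"id": 1, "credibility": 0.9, "score": 10},
    {"id": 2, "credibility": 0.1, "score": 50},  # untrustworthy: hidden
    {"id": 3, "credibility": 0.6, "score": 10},
]
ranked = rank_posts(posts)  # post 2 is hidden despite its high raw score
```

Note that ranking by credibility first, raw score second, means popularity alone cannot push an untrustworthy post to the top of a thread.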
A study by the Annenberg Public Policy Center of the University of Pennsylvania found that people who relied on social media for information were more likely to be misinformed about vaccines than those who relied on traditional media platforms.
The dissemination of fake news related to health and politics will remain a challenge, unless there is an application that can moderate and verify content online.
Dr Lim validated the accuracy of his approach against a dataset of more than 700,000 threads collected from almost two million users across a variety of online forums.
The research assigned each individual a rating, which was then used to predict that user’s contribution on the following day. The ratings were updated daily and the process was repeated on subsequent days.
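The daily profiling loop described above can be sketched as follows. The update rule here (an exponential moving average blending the previous rating with the day's observed contribution quality) is an assumption chosen for illustration; the article does not specify the actual formula.

```python
# Hypothetical sketch of a daily rating-update loop: each user's rating
# is revised from the day's activity, then carried forward as the
# prediction for the next day. The EMA rule and alpha are assumptions.

def update_rating(prev_rating: float, day_quality: float,
                  alpha: float = 0.3) -> float:
    """Blend yesterday's rating with today's observed contribution quality."""
    return (1 - alpha) * prev_rating + alpha * day_quality

def run_days(initial: float, daily_quality):
    """Repeat the update over successive days, recording each rating."""
    rating = initial
    history = []
    for quality in daily_quality:
        rating = update_rating(rating, quality)
        history.append(rating)
    return history

history = run_days(0.5, [0.8, 0.9, 0.2])
```

A smoothing update like this keeps ratings stable against one-off good or bad days, which matters when the rating controls a user's visibility.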
Dr Lim noted that there is an abundance of social media platforms with hundreds of thousands of threads, making it difficult and expensive to process each thread, especially given their unstructured nature.
“I decided to review these threads from a user’s point of view and identified trustworthy users. I measured the value of trust and reliability of other online profiles based on my profiling methods,” Dr Lim said.
Using measures of confidence and volatility across a complex network of interactions ensures the most credible sources of information appear at the top of a thread on online forums. The same rating can also be used to match a questioner with suitable and reliable responses.
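One plausible way to combine confidence and volatility into a single ranking score is sketched below. The formula (mean past rating discounted by its standard deviation) and the sample data are assumptions for illustration, not Dr Lim's published method.

```python
# Illustrative assumption: score responders by a high mean rating
# (confidence) penalised by variance in past ratings (volatility).

from statistics import mean, pstdev

def credibility(ratings, k: float = 1.0) -> float:
    """Mean past rating, discounted by its volatility (std deviation)."""
    if not ratings:
        return 0.0  # no history: no established credibility
    return mean(ratings) - k * pstdev(ratings)

# Match a question to the most reliable responder.
responders = {
    "alice": [0.9, 0.85, 0.88],  # consistently good: low volatility
    "bob":   [1.0, 0.2, 0.9],    # erratic: high volatility
}
best = max(responders, key=lambda u: credibility(responders[u]))
```

Penalising volatility this way prefers a consistently good responder over one whose quality swings wildly, even if their average ratings are similar.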
This methodology can also be applied to social media influencers, to ensure those with influence do not disseminate incorrect or misleading information or public service announcements.
“How can we classify the social media influence of a person who could potentially be spreading misinformation? Recently in the US, players in the National Basketball Association made headlines for their beliefs that the COVID-19 pandemic was being overblown and that there was a hidden agenda to it. Each time these players share a tweet, they have the ability to influence millions of people — for this reason it’s essential that we prevent the spread of misinformation online,” Dr Lim said.