Social media, for better or worse, has united generations of people in communication. Nearly every generation today participates in some form of social media, often daily.
Facebook/Meta and Twitter are two of the largest social media companies in the world, with billions of users between them. While these platforms have brought people together and revolutionized the way we communicate, that communication can also carry harm, misinformation, or propaganda.
Misinformation is a complex issue that has serious consequences. False information can undermine public trust in institutions, sow division and conflict, and even pose a threat to public health and safety.
Platforms like Facebook and Twitter have recognized this problem and have taken steps to address it. Both companies have implemented policies and procedures to identify and remove false information from their platforms.
Yet, how well do they work? Let’s take a closer look at what these policies entail and how they are being used.
Facebook’s Policies to Curb Misinformation
Facebook’s policies on misinformation are outlined in its Community Standards document. This document provides information on what types of content are prohibited, how Facebook identifies and removes false information, and what users can do if they believe a post is false.
One of the ways Facebook identifies and removes false information is through its third-party fact-checking program. Facebook partners with third-party fact-checking organizations to help identify and label false information on its platform.
These fact-checkers use a combination of human and automated tools to review posts that have been flagged by users or identified by Facebook’s algorithms. If a post is found to be false, it is labeled as such and downgraded in Facebook’s algorithm, making it less likely to be seen by users.
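To picture what that flag, review, label, and downgrade flow might look like, here is a minimal, purely hypothetical sketch in Python. The `Post` class, the `review_by_fact_checker` function, and the ranking penalty are my own illustrative assumptions; they are not Facebook's actual code, API, or thresholds.

```python
# Hypothetical sketch of a flag -> review -> label -> downgrade pipeline.
# Everything here (Post, review_by_fact_checker, the 0.2 penalty) is invented
# for illustration and does not reflect Facebook's real system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    flagged: bool = False           # flagged by users or by detection algorithms
    label: Optional[str] = None     # e.g. "False information"
    ranking_score: float = 1.0      # relative weight in the feed-ranking algorithm

def review_by_fact_checker(post: Post) -> bool:
    """Stand-in for a human/automated fact-check; returns True if the post is judged false."""
    return "miracle cure" in post.text.lower()  # toy heuristic, illustration only

def moderate(post: Post) -> Post:
    if post.flagged and review_by_fact_checker(post):
        post.label = "False information - reviewed by independent fact-checkers"
        post.ranking_score *= 0.2   # downgrade so fewer users see the post
    return post

if __name__ == "__main__":
    p = moderate(Post(text="This miracle cure works!", flagged=True))
    print(p.label, p.ranking_score)
```

The key point of this flow is that a post judged false is usually not deleted; it is labeled and shown to far fewer people.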
Facebook also uses warning labels to help users identify false information. Facebook users have probably seen these labels often. I am not a HUGE Facebook user myself, but Meta also owns Instagram, and I use that app a lot.
Look at this AP article, which fact-checks a video shared across social media, including Facebook, after the Beirut explosion; the video claimed to show a missile striking the city. It's unclear whether warning labels can be applied to videos, and it took the work of a professional to dismantle this video frame by frame.
Facebook's labels appear on posts and provide users with additional context and information to help them evaluate a post's accuracy. I believe this is a good policy because it pushes users to think critically, which is imperative in identifying and fighting misinformation.
Facebook has also implemented group moderation policies to prevent false information from spreading in closed groups. These policies require group administrators to monitor and remove false information posted in their groups.
Again, though, what qualifications does a group administrator have to spot this type of content?
Twitter’s Policies to Curb Misinformation
Twitter’s policies on misinformation are outlined in its Rules and Policies document. This document describes what types of content are prohibited, how Twitter identifies and removes false information, and what users can do if they believe a tweet is false.
Twitter uses a variety of labels and warning messages to help identify false information on its platform. For example, tweets that contain false or misleading information about COVID-19 are labeled with a warning message that directs users to credible sources of information. Similarly, tweets that contain manipulated media are labeled with a warning message indicating that the media has been altered.
Twitter also uses its algorithm to identify and remove false information. The platform’s algorithms prioritize content from credible sources and downgrade content that is identified as false or misleading.
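As a rough, hypothetical illustration of that idea (the source list and weights below are invented, since Twitter's actual ranking system is not public), prioritizing credible sources and downgrading flagged content could look something like this:

```python
# Illustrative sketch only: a toy ranking adjustment that boosts posts from
# sources on an assumed "credible" allow-list and penalizes posts marked false
# or misleading. The domains and multipliers are invented for this example.

CREDIBLE_SOURCES = {"who.int", "cdc.gov", "apnews.com"}   # assumed allow-list

def adjusted_score(base_score: float, source_domain: str, marked_misleading: bool) -> float:
    score = base_score
    if source_domain in CREDIBLE_SOURCES:
        score *= 1.5        # prioritize content from credible sources
    if marked_misleading:
        score *= 0.1        # downgrade content identified as false or misleading
    return score

print(adjusted_score(1.0, "apnews.com", False))   # boosted
print(adjusted_score(1.0, "example.com", True))   # heavily downgraded
```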
Furthermore, Twitter has implemented a policy that allows users to report tweets that they believe contain false information. Once a tweet has been reported, it is reviewed by Twitter’s team of fact-checkers.
If the tweet is found to be false, it is labeled as such and downgraded in Twitter’s algorithm, reducing how widely it can be seen by the public.
Evaluating the Merits of Facebook and Twitter Policies
While these policies have the potential to be effective in reducing the spread of misinformation, it’s important to note that misinformation is a complex issue that cannot be solved by policies alone. Critical thinking and misinformation tools, which you can find on my blog here, are needed to aid this fight.
Misinformation can take many forms, including false information, distorted information, and incomplete information. It is often built to spread rapidly through social media networks.
Therefore, these policies must be accompanied by a broader effort to promote digital literacy and critical thinking. This means educating users on how to spot misinformation, providing reliable sources of information, and encouraging users to engage with diverse perspectives.
But the platforms could be doing more. Who are their third-party fact-checkers? What are the qualifications of those who work on the misinformation team? Why can’t a post containing misinformation be removed from the platform entirely?
To improve their existing efforts to curb misinformation, Facebook and Twitter could increase transparency by providing more information about their policies, procedures, and enforcement actions.
