WASHINGTON – The day after Mark Zuckerberg testified before a congressional committee examining the spread of misinformation on social media, U.S. Sen. Mark R. Warner (D-VA) pressed the Facebook CEO about the continued proliferation of anti-vaccine content on the company’s platforms, particularly Instagram.
In a letter, Sen. Warner wrote, “Anti-vaccination groups and other health conspiracy groups have long utilized – and been enabled by – Facebook’s platforms to disseminate misinformation. Studies show a rapid increase in the spread of health misinformation online since the start of the pandemic. Yet on the very day that Facebook introduced its updated standards touted to address health misinformation, media organizations noted that several of the top-ranked search results for ‘covid vaccine’ on Instagram were anti-vaccine accounts. I am deeply concerned that Facebook’s new policies will continue to lack the adequate enforcement needed to reduce the spread of harmful misinformation on its platforms.”
In the correspondence, Warner noted the importance of promoting accurate information about vaccine safety, with the Centers for Disease Control and Prevention (CDC) recently finding that nearly a third of U.S. adults surveyed reported they did not intend to get vaccinated, despite the proven safety and effectiveness of multiple COVID-19 vaccines. Experts estimate that 70 to 90 percent of Americans will need to be vaccinated before herd immunity can be achieved.
“Facebook has previously committed to reducing the spread of misinformation on its platforms, implementing a ban on false claims about vaccines in groups, pages, and ads in April 2020 and promising to remove COVID-19 and vaccine misinformation from the platform in an effort to promote authoritative health information in February 2021,” Warner explained. “However, despite these promises, Facebook’s enforcement of its own policies is consistently and demonstrably insufficient, a trend we have seen in other areas where Facebook has pledged to address misuse of its products or instances of its products amplifying harmful content.”
A recent report from the Center for Countering Digital Hate found that Instagram’s algorithm promoted unsolicited content featuring anti-vaccine and COVID-19 misinformation to users across several features of the platform, including the “Suggested Posts” section, a feature introduced in August 2020 that directs users to recommended posts from accounts they do not follow, based on their engagement with related posts.
“The events of January 6th prove that there are real-world consequences when harmful misinformation is allowed to run rampant online, and I am concerned that Instagram – a platform which has generally escaped the level of scrutiny directed at Facebook itself – is similarly enabling the spread of harmful misinformation that could hinder COVID-19 mitigation efforts and, ultimately, result in lives lost,” Warner wrote.
In the letter, Warner pressed Zuckerberg to respond to a series of questions about the platform’s policies and procedures for dealing with health misinformation, and requested that the CEO produce the company’s internal research into Instagram’s amplification of anti-vaccine content, groups, pages, and verified figures by April 23, 2021.
Warner has long pressed social media platforms to crack down on the rapid proliferation of extremist content and harmful misinformation. In June 2020, Warner pressed Facebook regarding its failure to prevent the propagation of white supremacist groups online and its role in providing these extremist groups with a platform to organize and radicalize other users. In October 2020, Warner urged Facebook, Twitter and Google to implement robust transparency and accountability standards before the November election to minimize the spread of political misinformation.
Sen. Warner has written and introduced a series of bipartisan bills designed to protect consumers and reduce the power of giant social media platforms like Facebook, Twitter and Google. Among these are the Designing Accounting Safeguards to Help Broaden Oversight And Regulations on Data (DASHBOARD) Act – bipartisan legislation to require data harvesting companies to tell consumers and financial regulators exactly what data they are collecting from consumers and how it is being leveraged by the platform for profit; the Deceptive Experiences To Online Users Reduction (DETOUR) Act – bipartisan legislation to prohibit large online platforms from using deceptive user interfaces to trick consumers into handing over their personal data; and the Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act – bipartisan legislation to encourage market-based competition to dominant social media platforms by requiring the largest companies to make user data portable – and their services interoperable – with other platforms, and to allow users to designate a trusted third-party service to manage their privacy and account settings, if they so choose.
Last month, Sen. Warner introduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (SAFE TECH) Act to reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, targeted harassment, and discrimination on their platforms.
A copy of today’s letter is available here, and the text appears in full below.
Dear Mr. Zuckerberg,
I write to you today to express my concern regarding your company’s continued amplification of harmful misinformation, particularly the spread of COVID-19 and vaccine misinformation promoted by the Instagram algorithm.
As the pandemic endures, the importance of promoting reliable health information only grows. A recent study from the Centers for Disease Control and Prevention (CDC) found that nearly a third of U.S. adults surveyed reported they did not intend to get vaccinated. With experts estimating that 70 to 90 percent of Americans will need to be immunized before achieving herd immunity, it is critical that individuals experiencing COVID-19 vaccine hesitancy are exposed to accurate information that will help them make informed decisions about the vaccine.
Facebook has previously committed to reducing the spread of misinformation on its platforms, implementing a ban on false claims about vaccines in groups, pages, and ads in April 2020 and promising to remove COVID-19 and vaccine misinformation from the platform in an effort to promote authoritative health information in February 2021. However, despite these promises, Facebook’s enforcement of its own policies is consistently and demonstrably insufficient, a trend we have seen in other areas where Facebook has pledged to address misuse of its products or instances of its products amplifying harmful content. Indeed, a coalition of State Attorneys General, including the Attorney General of Virginia, just last week wrote to you and the CEO of Twitter, accusing your companies of not taking “sufficient action to identify violations and enforce [existing] guidelines.”
Anti-vaccination groups and other health conspiracy groups have long utilized – and been enabled by – Facebook’s platforms to disseminate misinformation. Studies show a rapid increase in the spread of health misinformation online since the start of the pandemic. Yet on the very day that Facebook introduced its updated standards touted to address health misinformation, media organizations noted that several of the top-ranked search results for “covid vaccine” on Instagram were anti-vaccine accounts. I am deeply concerned that Facebook’s new policies will continue to lack the adequate enforcement needed to reduce the spread of harmful misinformation on its platforms.
Further, a recent report from the Center for Countering Digital Hate found that Instagram’s algorithm promoted unsolicited content featuring anti-vaccine and COVID-19 misinformation to users across several features of the platform, including the “Suggested Posts” section, which was only introduced in August 2020 and directs users to recommended posts from accounts they do not follow, based on their engagement with related posts. If Facebook is truly committed to “[removing] false claims on Facebook and Instagram about COVID-19, COVID-19 vaccines and vaccines in general during the pandemic,” as the company has stated, its own algorithms should not be amplifying misinformation and promoting harmful content to users.
For several years now, I have raised concerns that your content recommendation algorithms have disproportionately surfaced disinformation, misinformation, violent extremist content, and other harmful content. In June 2020, I wrote to you with concern that white supremacist and violent right-wing extremist groups were radicalizing users on your platforms and that Facebook’s algorithms – including its group recommendation feature – aided in that radicalization. In October of the same year, I wrote again to urge Facebook and other social media companies to implement robust transparency and accountability standards before the November election to minimize the spread of political misinformation. The events of January 6th prove that there are real-world consequences when harmful misinformation is allowed to run rampant online, and I am concerned that Instagram – a platform which has generally escaped the level of scrutiny directed at Facebook itself – is similarly enabling the spread of harmful misinformation that could hinder COVID-19 mitigation efforts and, ultimately, result in lives lost.
These examples demonstrate Facebook’s continued unwillingness or inability to enforce its own Community Standards and take action to reduce the spread of misinformation on its platforms. More concerning, a recent report suggests that Facebook has failed to address the ways in which its products directly contribute to radicalization, the proliferation of misinformation, and hate speech – deprioritizing or dismissing a range of proposed product reforms and interventions because of their tendency to depress user engagement with your products.
Eliminating misinformation on your platforms is a valuable and necessary undertaking, as online health misinformation can have a substantive impact on users’ intent to get vaccinated: people exposed to COVID-19 and vaccine misinformation have been shown to be more likely to express vaccine hesitancy than those who were not. Further, public health authorities shoulder an even greater burden – at a time of profound resource and budget strain – to combat misinformation amplified by platforms like Instagram, Facebook and WhatsApp. Given that over half of Americans rely on social media to get their news, with Facebook in particular serving as a “regular source of news” for about a third of Americans, it is critical that Facebook take seriously its influence on users’ health decisions.
To address these concerns, I request that you provide responses to the following questions by April 23, 2021:
1. What procedures does Facebook have in place to exclude misinformation from its recommendation algorithm, specifically on Instagram?
2. Please provide my office with Facebook’s internal research on the platform’s amplification of anti-vaccine content, groups, pages, and verified figures.
3. Why were posts carrying content warnings about health misinformation nonetheless promoted in users’ Instagram feeds?
4. When developing the new Suggested Posts function, what efforts did Facebook make to ensure that the new tool recommended only reliable information?
5. What is the process for the removal of prominent anti-vaccine accounts, and what is the rationale for disabling such users’ accounts from one of Facebook’s platforms but not others?
6. How often are you briefed on COVID-19 misinformation on Instagram and across Facebook’s other platforms?
7. Did Facebook perform safety checks to prevent the algorithmic amplification of COVID-19 misinformation? What did those safety protocols entail?
8. Will anti-vaccine content continue to be monitored and removed after the COVID-19 pandemic?
9. Please provide my office with Facebook’s policies for informing users that they were exposed to misinformation and how Facebook plans to remedy those harms.
10. Combating health misinformation amplified by large social media platforms puts an additional strain on the time, resources, and budgets of public health agencies – often requiring them to purchase ads on the very platforms amplifying and propelling the misinformation they must counter. Will you commit to providing free advertising for state and local public health authorities working to combat health misinformation?
Health misinformation on social media platforms like Facebook is a serious threat to COVID-19 mitigation efforts and could ultimately prolong this public health emergency. Given the urgency and severity of these consequences, I appreciate your prompt attention to this matter.
Sincerely,
###