Introducing CREDBOT: Reliable Source Evaluation Across the Wikiverse

‘Credibility Bot’ makes it easier for Wikipedians to collaborate and consistently improve article source quality.

It’s been a year since NewsQ first announced a project aimed at helping achieve more robust, defensible consensus around reliable sources on Wikipedia, and we are excited to provide an update! ‘Credibility Bot,’ or CREDBOT, makes it easier for Wikipedians to increase the use of credible citations in Wikipedia articles in a consistent way.

The topic of vaccines is one example of the need to evaluate article sources. While the Wikipedia community works to maintain the reliability and quality of vaccine-related articles, assessment of these sources typically happens ad-hoc on the Talk pages of individual articles. 

Talk pages are familiar to most Wikipedia contributors, but as meta-commentary they are effectively hidden from readers and do not offer an at-a-glance assessment of an article’s sources and citations. Talk discussions are also not always top of mind for editors, so finding and replacing poor sources can be laborious.

CREDBOT, a new toolkit under development for source tracking across Wikimedia projects, aims to address this challenge by integrating a templating framework that editors can use to easily coordinate using WikiProjects. Using CREDBOT, Wikipedia editors will be able to set up a page for alerts about sources for any Wikipedia subject area, with detailed reports on what sources are being used in all related articles. NewsQ has been pleased to collaborate with Knowledge Futures Group (KFG) over the past year on this project. 


Wikipedia already has guidelines intended to ensure that articles are supported by reliable sources. But how are these guidelines implemented in practice? On WikiProjects such as vaccine safety, Wikipedia editors and administrators currently review and edit articles to maintain or improve quality, for example by addressing inaccurate information. Because Wikipedia is a large online community powered by volunteers, practices for discussing and maintaining quality are uneven and impromptu, and are therefore difficult to scale.

Because the community of editors and authors is decentralized, Wikipedia lacks centralized tracking of citations and centralized task management, so editors struggle to get the signals they need to collectively discuss and prioritize their work assessing and improving article quality. Even when volunteer editors can assess sources in one topic area, they lack the tools to do so across other topics and in other languages on Wikipedia.

One method for more efficiently maintaining reliability at scale across different Wikipedia topics is compiling lists of rated domains. These “perennial sources lists” are where Wikipedians discuss and achieve consensus for evaluations around contentious sources that generate debate. These source lists help Wikipedia editors understand if and when certain sources are regarded as reliable and should be cited, or are unreliable and should be avoided, all while referring to previous discussions to support this validation. 
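Conceptually, a perennial sources list maps a domain to the community’s consensus rating. The sketch below is purely illustrative — the `PERENNIAL_SOURCES` dictionary, its example entries, and the `rating_for` helper are hypothetical names, not CREDBOT’s actual data model; the rating labels echo the categories used on English Wikipedia’s perennial sources list.

```python
# Hypothetical sketch: a perennial sources list modeled as a
# domain -> consensus-rating map. Entries are illustrative only.
PERENNIAL_SOURCES = {
    "who.int": "generally reliable",
    "nature.com": "generally reliable",
    "exampleblog.com": "generally unreliable",
}

def rating_for(domain: str) -> str:
    """Return the project's consensus rating for a domain,
    or "unlisted" if the community has not yet assessed it."""
    return PERENNIAL_SOURCES.get(domain, "unlisted")
```

An unlisted domain is itself a useful signal: it marks a source the project has not yet discussed.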

In order to make it easier for editors to discuss and evaluate source reliability, CREDBOT is designed to provide proactive alerts for Wikipedia projects as the toolkit tracks how often sources are cited. CREDBOT also compares citations against a WikiProject’s own source-quality assessment list, providing participants an opportunity to reflect on whether the citations meet Wikipedia’s guidelines on reliable sourcing. Source-quality assessments themselves are valuable, and can be archived as citable, timestamped snapshots. Finally, these assessments can then be entered in Wikidata where they are globally available, are interlinked with other metadata, and are openly accessible.
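The alert workflow described above — comparing the domains cited in articles against a project’s own source-quality list — can be sketched as follows. This is a minimal illustration under assumed names: `alerts`, `domain_of`, and the `FLAGGED` rating set are hypothetical, not CREDBOT’s real interface.

```python
from urllib.parse import urlparse

# Illustrative assumption: ratings a project would want surfaced for review.
FLAGGED = {"generally unreliable", "deprecated"}

def domain_of(url: str) -> str:
    """Extract the host from a citation URL, with any leading 'www.' stripped."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def alerts(citation_urls, ratings):
    """Yield (domain, reason) pairs for citations a project may want to review:
    domains not yet in the project's list, and domains rated as flagged."""
    for url in citation_urls:
        domain = domain_of(url)
        rating = ratings.get(domain)
        if rating is None:
            yield (domain, "new domain: not yet assessed by the project")
        elif rating in FLAGGED:
            yield (domain, f"flagged domain: rated '{rating}'")
```

Domains the project already rates as reliable generate no alert, keeping the report focused on sources that actually need discussion.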

In order to make sure we were building a capable, functional toolkit, NewsQ decided to work on vaccine safety as an initial test project. Ultimately, CREDBOT is intended to expand scope beyond the topic of vaccines, to improve article source quality for any subject on Wikipedia.

Making Vaccines “WP:VSAFE”

We overhauled WP:VSAFE, the Wikipedia vaccine safety and sources project, in order to track and monitor the use of new sources. The project now highlights statistics about sources in the vaccines category, including the proportion of sources flagged as reliable when known, and recent changes to source usage. Whenever a source that is not already in the project’s perennial sources list appears in a vaccine-related article, a notification is generated for the project.

CREDBOT alerts (e.g., for flagged domains, frequent domains, new domains, etc.) trigger discussions among Wikipedia contributors and editors. Image license: CC-BY-4.0.

An alert or report is also triggered by the addition of sources considered unreliable, as defined in the project’s own list by its participants. To kick off the CREDBOT project, the list of quality ratings was seeded from previous discussions and judgements among Wikipedians within English-language Wikipedia and WikiProject Medicine. External sources such as the World Health Organization’s Vaccine Safety Net were also used to seed quality ratings.

Approximately 800 articles were selected and reviewed for all sources; WP:VSAFE currently tracks over 5,000 unique domains. The Internet Archive Reference Inventory tool (IARI, on GitHub) is used to keep the data up to date, as new additions are immediately included and reported for checking.
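Aggregating the domains cited across a set of tracked articles is the kind of bookkeeping behind a count like “over 5,000 unique domains.” The sketch below is a simplified illustration, not the IARI tool itself; `domain_counts` and the input shape (article title mapped to its citation URLs) are assumptions for the example.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative sketch: tally how often each domain is cited across a
# collection of tracked articles, as a project report might summarize.
def domain_counts(articles: dict[str, list[str]]) -> Counter:
    """Map each cited domain (with 'www.' stripped) to its citation count
    across all articles."""
    counts: Counter = Counter()
    for urls in articles.values():
        for url in urls:
            host = urlparse(url).netloc.lower().removeprefix("www.")
            counts[host] += 1
    return counts
```

From such a tally, a report can list the most-cited domains and surface any that the project’s quality list has not yet assessed.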

Excerpt of perennial sources list for vaccine-safety related Wikipedia articles, as labeled by Wikipedian community participants. Image license: CC-BY-4.0.


The intent of the CREDBOT project is to ensure source assessment is easy, expanded, integrated, and automated for all Wikipedia contributors for any topic – not just for vaccine-related topics.

As a proof-of-concept, CREDBOT demonstrates the ability to establish a source credibility reviewing system for any subject, if desired, by members of a Wikipedia community who are engaged in addressing and editing a particular topic, or in different languages across Wikipedia. This workflow and interface are both maintained through a scalable, modular template system paired with an automated bot. The software toolchain is generalizable, setting the stage for future work in other subjects and languages.
Beyond these capabilities, integrating source credibility ratings into Wikidata has far-reaching benefits in linking initiatives across Wikimedia: Wikipedia authors and editors will be able to analyze, track, compare, and improve source usage between projects, providing the Wikipedia community with a credibility vantage point that wasn’t possible before.

CREDBOT builds these processes into a toolkit that brings source-quality assessments and source prevalence on Wikipedia together with actionable reports and alerts. On top of that, the data can be archived as a citable, timestamped snapshot, and made globally available in Wikidata; this data can then be interlinked with other metadata to be accessible anywhere on the web.

CREDBOT Next Steps

In terms of what’s next, the CREDBOT team is collecting support and feedback from the community to assess the need for an expanded and adaptable beta version of the project. Interested editors or extended community members can comment at

CREDBOT was developed with support from CUNY Newmark Graduate School of Journalism and Craig Newmark Philanthropies. This blog post and any derivative works are licensed under CC BY-SA 4.0.