Valurank
We are developing the first natural language processing engine for web content evaluation.

Getting Information in Order
Our mission is to improve the world's information ecosystem.
Our vision is a world in which:
Users
See objective quality scores for the webpage they are browsing
Search engines & social media
Use objective quality scores to decide what to show and promote
Content creators
Use objective quality scores to decide how to optimize their content
FAQ
What is the valurank score?
The Valurank score is a quantitative assessment of an article's "informative" quality on a 100-point scale. Think of it as a scale that runs from pure conjecture to encyclopedia articles.
How is the valurank score calculated?
At Valurank, we are developing an automated system that reads article text and uses natural language processing (NLP) algorithms to identify markers of poor quality, such as offensive or sensationalist language, biased language, propaganda or persuasion techniques, unsupported claims, and other indicators.
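As a rough illustration of how per-indicator signals could roll up into a single 100-point score, here is a minimal sketch. The indicator names, weights, and the subtract-from-100 scheme are all assumptions for illustration only, not Valurank's actual model or scoring formula.

```python
# Hypothetical sketch: combine per-indicator severities into a 100-point score.
# Indicator names and weights are illustrative assumptions, not Valurank's model.

# Each NLP detector is assumed to return a severity in [0, 1] for its indicator.
indicator_scores = {
    "sensationalist_language": 0.10,
    "biased_language": 0.25,
    "propaganda_techniques": 0.00,
    "unsupported_claims": 0.40,
}

# Illustrative weights: the maximum number of points each indicator can subtract.
weights = {
    "sensationalist_language": 30,
    "biased_language": 25,
    "propaganda_techniques": 25,
    "unsupported_claims": 20,
}

def combined_score(scores, weights):
    """Start from 100 and subtract weighted penalties, floored at 0."""
    penalty = sum(weights[name] * severity for name, severity in scores.items())
    return max(0, round(100 - penalty))

print(combined_score(indicator_scores, weights))  # prints 83
```

A weighted-penalty scheme like this keeps the score interpretable: each indicator's contribution to the final number can be reported separately, which is what makes a per-indicator breakdown (see the next question) possible.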
Can I see a breakdown of the score into individual indicators?
Yes! You can always click "View full report" in the extension to see the outputs of all the NLP models we ran on the content.
I thought that the AI algorithms used to rank content are making the problem worse. Why should I trust your algorithm?
The algorithmic feeds that we rely on for content recommendations are designed to maximize user engagement. As a result, these algorithms learn to identify content that is attention-grabbing and polarizing: content is rewarded for evoking strong reactions, which often stem from outrage, anger, disbelief, and other human emotions that do not necessarily coincide with content quality.

The algorithms used to calculate the Valurank score learn to identify content that is presented objectively, free of biases, and supported by verifiable sources. The system aims to evaluate the intrinsic quality of content, in the same way that the academic peer review process evaluates the quality of scientific papers before publication.

Furthermore, we believe that the algorithms used to evaluate content should not be veiled in secrecy, but should rather be transparent and auditable. We are planning to open-source our models (source code & training data) so anyone can audit them and make sure they are free of bias.
I disagree with the score given to an article I read. What should I do?
No system is perfect, and ours is no exception. We love to get specific feedback on cases where you disagree with the score given to an article. You can give feedback directly from the browser extension by clicking the "Provide feedback" button, or you can reach out to us directly.
Our Team

A small group of thoughtful people could change the world. Indeed, it's the only thing that ever has.