The algorithmic feeds that we rely on for content recommendations are designed to maximize user engagement. As a result, these algorithms learn to favor attention-grabbing content: content is rewarded for evoking strong reactions, which often stem from outrage, anger, disbelief, and other emotions that do not necessarily coincide with content quality.
The algorithms used to calculate the Valurank score instead learn to identify content that is presented objectively, free of bias, and supported by verifiable sources. The system aims to evaluate the intrinsic quality of content, much as the academic peer review process evaluates the quality of scientific papers before publication.
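To make the idea concrete, here is a minimal, purely illustrative sketch of how per-dimension quality signals could be combined into a single score. The dimension names, weights, and formula below are assumptions for illustration; they are not Valurank's actual model.

```python
# Hypothetical sketch only: combine per-dimension quality signals
# (each in [0, 1]) into a single score via a weighted average.
# Dimensions and weights are illustrative, not Valurank's real model.

def quality_score(signals: dict, weights: dict) -> float:
    """Weighted average of per-dimension scores."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total_weight

# Illustrative dimensions mirroring the qualities named above:
weights = {"objectivity": 0.4, "bias_freedom": 0.3, "sourcing": 0.3}
signals = {"objectivity": 0.9, "bias_freedom": 0.8, "sourcing": 0.7}

score = quality_score(signals, weights)
# 0.9*0.4 + 0.8*0.3 + 0.7*0.3 = 0.81
```

A weighted average is just the simplest possible aggregation; a real system would likely learn both the signals and how to combine them from labeled data.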
Furthermore, we believe that the algorithms used to evaluate content should not be veiled in secrecy, but should be transparent and auditable. We plan to open-source our models (source code and training data) so that anyone can audit them and verify they are free of bias.