Perplexity releases open-source R1 1776 reasoning model without China censorship


Last month, DeepSeek's reasoning model R1 made a big splash, but it was criticized for censoring topics related to China. While this may seem like a niche issue, the censorship can affect other China-related queries, such as asking what would happen to Nvidia's share price if China invaded Taiwan, which makes the model less useful for finance use cases.

To address this, Perplexity has developed a new open-source version of R1 called R1 1776, which has been “post-trained to provide unbiased, accurate, and factual information.” The model is now available in a Hugging Face repository.

Perplexity's post-training focused mainly on addressing the China-related censorship. It outlined its approach as follows:

  • We employed human experts to identify approximately 300 topics known to be censored by the CCP.
  • Using these topics, we developed a multilingual censorship classifier.
  • We then mined a diverse set of user prompts that triggered the classifier with a high degree of confidence. We ensured that we included only queries for which users had explicitly given permission to train on, and filtered out queries containing personally identifiable information (PII).
  • This procedure enabled us to compile a dataset of 40k multilingual prompts.
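The steps above can be sketched as a simple filtering pipeline. This is a minimal illustrative sketch, not Perplexity's actual code: the real multilingual classifier, the roughly 300 censored topics, and the PII filter are not public, so the keyword-based classifier, the tiny topic list, and the e-mail-only PII regex below are all hypothetical stand-ins.

```python
import re

# Hypothetical stand-in for the ~300 expert-identified censored topics.
CENSORED_TOPICS = {"tiananmen", "taiwan independence", "xinjiang"}

def classifier_confidence(prompt: str) -> float:
    """Toy stand-in for the multilingual censorship classifier:
    returns 1.0 if the prompt mentions any listed topic, else 0.0."""
    text = prompt.lower()
    return 1.0 if any(topic in text for topic in CENSORED_TOPICS) else 0.0

# Toy PII filter: matches e-mail addresses only, for brevity.
PII_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def mine_prompts(prompts, threshold=0.9):
    """Keep prompts the classifier flags with high confidence
    and that contain no detected PII."""
    return [
        p for p in prompts
        if classifier_confidence(p) >= threshold and not PII_PATTERN.search(p)
    ]

prompts = [
    "What happened at Tiananmen Square in 1989?",
    "Email me at user@example.com about Taiwan independence.",  # dropped: PII
    "What is the capital of France?",                           # dropped: not censored
]
print(mine_prompts(prompts))  # → ['What happened at Tiananmen Square in 1989?']
```

At the scale Perplexity describes, this kind of filter would be run over a large pool of opt-in user queries to yield the 40k-prompt dataset.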

Interestingly, R1 1776's benchmark scores differ slightly from R1's, though not by much. You can see the differences in the image below:

R1 1776 benchmark comparison (image: Perplexity)

If you want to try R1 1776, you can download it now from Hugging Face.

Source and image via Perplexity
