Report: Chinese researchers used public Meta AI to build model for military purposes

Reuters, citing academic papers and analysts, reports that some top Chinese institutions with ties to the People’s Liberation Army (PLA) have built an AI tool for potential military applications using Meta’s publicly available large language model Llama. The publication reviewed a June research paper in which six Chinese researchers from three institutions discussed using an early version of Llama as a base to create "ChatBIT."

The researchers used Llama 2 13B as a base and incorporated their own parameters to build a military-focused AI tool meant to collect and process intelligence and provide reliable information for operational decision-making. According to the paper, ChatBIT was "optimised for dialogue and question-answering tasks in the military field".

According to the paper, ChatBIT outperformed some other AI models that were roughly 90% as capable as GPT-4. However, the researchers didn't explain how they measured performance or say whether the model has actually been put into use.

While the exact capabilities of ChatBIT remain unknown, the researchers noted that the model incorporates only 100,000 military dialogue records, far fewer than the amounts of data used to train other large language models.

Joelle Pineau, a vice president of AI Research at Meta, commented: "That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities."

A company spokesperson pointed out that China is making trillion-dollar investments to surpass the US in AI and that the field already sees fierce global competition, calling the alleged role of an outdated American open-source model "irrelevant."

While Meta promotes the openness of its AI models, the company places some restrictions on their use. For instance, it requires services with more than 700 million monthly users to obtain a license from the company. Meta also prohibits the use of its models for "military, warfare, nuclear industries or applications, espionage," the publication notes.

The public availability of Meta's AI models, however, makes such rules difficult to enforce. Meta's director of public policy Molly Montgomery told Reuters: "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy."

Source: Reuters
