By James Pomfret and Jessie Pang
(Reuters) – Leading Chinese research institutions linked to the People’s Liberation Army have used Meta’s (NASDAQ: META) publicly available Llama model to develop an artificial intelligence tool for potential military applications, according to academic papers and analysts.
In a June article reviewed by Reuters, six Chinese researchers from three institutions, including two at the People’s Liberation Army’s (PLA) leading research facility, the Academy of Military Sciences (AMS), detailed how they used an early version of Meta’s Llama as the basis for what they call “ChatBIT”.
The researchers used Meta’s earlier Llama 2 13B large language model (LLM), incorporating their own parameters, to construct a military-focused AI tool that can gather and process intelligence and provide accurate and reliable information for operational decision-making.
According to the article, ChatBIT was refined and “optimized for tasks related to dialogue and answering questions in the military field.” It was found to outperform some other AI models that were roughly 90% as capable as OpenAI’s powerful ChatGPT-4. The researchers did not describe in detail how they defined performance, nor did they specify whether the AI model had been put into use.
“For the first time, there is significant evidence that PLA military experts in China are systematically exploring and trying to exploit the potential of open-source LLM solutions, especially those from Meta, for military purposes,” said Sunny Cheung, an associate at the Jamestown Foundation specializing in emerging dual-use technologies in China, including artificial intelligence.
Meta has embraced the open release of many of its AI models, including Llama. It places restrictions on their use, including a requirement that services with more than 700 million users obtain a license from the company.
Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defense export controls, as well as for the development of weapons and content intended to “incite and promote violence.”
However, because Meta’s models are public, the company has limited ability to enforce these provisions.
In response to questions from Reuters, Meta cited its acceptable use policy and said it had taken steps to prevent misuse.
“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Molly Montgomery, director of public policy at Meta, told Reuters.
Meta added that the United States must embrace open innovation.
“In the global competition for artificial intelligence, the alleged role of a single, outdated version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to overtake the United States in artificial intelligence,” a Meta spokesperson said in a statement.
The Chinese researchers include Geng Guotong and Li Weiwei from the AMS Military Information Science Research Center and the National Institute of Defense Technology Innovation, as well as researchers from the Beijing Institute of Technology and Minzu University.
“In the future, with technological improvements, ChatBIT will not only be used for intelligence analysis, but also… strategic planning, simulation training and command decision-making will be studied,” the article says.
China’s Ministry of Defense did not respond to a request for comment, nor did any of the institutions or researchers.
Reuters could not confirm ChatBIT’s capabilities or computing power, although the researchers noted that its model drew on only 100,000 military dialogue records, a relatively small number compared with other LLMs.
“It’s a drop in the bucket compared to most of these models (which) are trained with trillions of tokens, so… I really wonder what they’re actually achieving here in terms of different capabilities,” said Joelle Pineau, vice president of AI Research at Meta and professor of computer science at McGill University in Canada.
The study comes amid a heated debate in U.S. national security and technology circles over whether companies like Meta should make their models publicly available.
In October 2023, U.S. President Joe Biden signed an executive order seeking to manage the development of artificial intelligence, noting that while innovation can bring significant benefits, there are also “significant security risks, such as the removal of safeguards within the model.”
This week, Washington said it was finalizing rules aimed at curbing U.S. investment in China’s artificial intelligence and other technology sectors that could threaten national security.
Pentagon spokesman John Supple said the Defense Department recognizes that open source models have both advantages and disadvantages and that “we will continue to closely monitor and evaluate competitor capabilities.”
“COOKIE JAR”
Some observers say China’s progress in developing indigenous artificial intelligence, including the creation of dozens of research labs, has already made it difficult to stop the country from closing the technology gap with the United States.
In a separate academic paper reviewed by Reuters, two researchers from the Aviation Industry Corporation of China (AVIC), which the United States has identified as a PLA-affiliated company, described the use of Llama 2 for “training in air warfare interference strategies.”
China’s use of Western-developed artificial intelligence has also extended to domestic security. A June article described how Llama was used for “intelligence policing” to process large amounts of data and improve police decision-making.
In April, the state-owned PLA Daily published a commentary on how artificial intelligence could help “accelerate the research and development of weapons and equipment, help develop combat simulations and improve the effectiveness of military training.”
“Can you keep them (China) out of the cookie jar? No, I don’t know how you can do that,” William Hannas, principal analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), told Reuters. A 2023 paper published by CSET identified 370 Chinese institutions whose researchers had published papers related to general artificial intelligence, supporting China’s national strategy to lead the world in AI by 2030.
“There is too much collaboration between China’s top AI scientists and America’s top AI scientists for them to be excluded from development efforts,” Hannas added.