Chinese Researchers Develop Military AI Model Using Meta’s Llama
Top Chinese research institutions associated with the People’s Liberation Army (PLA) have used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.
In a paper reviewed in June by a news agency, six Chinese researchers from three institutions, including two affiliated with the PLA’s leading research body, the Academy of Military Science (AMS), described how they built an AI tool called “ChatBIT” using an early version of Meta’s Llama. They incorporated their own parameters into the Llama 2 13B large language model to create a military-focused tool for gathering and processing intelligence, aiming to provide accurate information for operational decision-making.
According to the researchers, ChatBIT was fine-tuned and optimised for dialogue and question-answering tasks in military contexts. They reported that it outperformed other AI models that were approximately 90% as capable as OpenAI’s ChatGPT-4, though they did not specify how they measured performance or whether the model is currently in use.
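For readers unfamiliar with the technique the paper describes, the sketch below shows what supervised fine-tuning of Llama 2 13B on a domain dialogue corpus commonly looks like using the Hugging Face transformers, datasets, and peft libraries. It is a minimal illustration of the general approach, not the researchers’ actual pipeline: the file name dialogues.jsonl, the LoRA settings, and all hyperparameters are assumptions made for the example.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "meta-llama/Llama-2-13b-hf"  # gated checkpoint; requires accepting Meta's licence

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# LoRA freezes the 13B base weights and trains small adapter matrices,
# a common way to fine-tune a model this size on a modest dataset.
model = get_peft_model(model, LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# "dialogues.jsonl" is a hypothetical file: one {"text": "..."} record per line.
dataset = load_dataset("json", data_files="dialogues.jsonl", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="dialogue-finetune",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    train_dataset=dataset,
    # mlm=False gives the standard causal-LM objective: labels mirror the inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```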
Sunny Cheung, an associate fellow at the Jamestown Foundation, remarked that this is the first substantial evidence of PLA experts in China systematically researching open-source LLMs, particularly Meta’s, for military purposes.
Meta has openly released many of its AI models, including Llama, but imposes restrictions on their use, prohibiting applications in military or warfare contexts, among other areas. However, as these models are publicly available, Meta has limited means of enforcing these provisions.
In response to inquiries, Molly Montgomery, Meta’s director of public policy, stated, “Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy.”
The Chinese researchers included Geng Guotong and Li Weiwei from the AMS and the National Innovation Institute of Defence Technology, alongside researchers from the Beijing Institute of Technology and Minzu University. They indicated that, in the future, ChatBIT could be used not only for intelligence analysis but also for strategic planning and command decision-making.
The researchers noted that ChatBIT was built using only 100,000 military dialogue records, a small dataset next to the trillions of tokens on which other LLMs are typically trained. Joelle Pineau, a vice president of AI Research at Meta, expressed scepticism about what capabilities could be achieved with such a limited dataset.
This research comes amid ongoing debate in U.S. national security circles about the implications of making AI models publicly available. In October 2023, President Joe Biden signed an executive order to manage AI development, citing potential security risks.
Some experts contend that China’s advancements in indigenous AI development and the establishment of numerous research labs may already be closing the technology gap with the U.S.
A separate paper reviewed by the news agency noted that researchers from the Aviation Industry Corporation of China (AVIC), identified by the U.S. as having ties to the PLA, described using Llama 2 to train airborne electronic warfare strategies.
Additionally, a June paper discussed the use of Llama for “intelligence policing” in domestic security, enhancing data processing and police decision-making.
William Hannas, lead analyst at Georgetown University’s Centre for Security and Emerging Technology, remarked, “Can you keep them (China) out of the cookie jar? No, I don’t see how you can.”
A 2023 paper found that 370 Chinese institutions had researchers publishing work related to general artificial intelligence, furthering China’s strategy to lead globally in AI by 2030.
Hannas added that there is significant collaboration between top Chinese and U.S. AI scientists, complicating efforts to exclude China from advancements in the field.