China Telecom’s AI Research Institute has successfully trained TeleChat2-115B, an AI model in the 100-billion-parameter class, using exclusively domestic computing resources. The achievement signals that Chinese entities remain resilient in the face of Western technology sanctions. The institute announced that the model was trained on 10 trillion tokens of Chinese and English text and has been open-sourced. A notable aspect of the project is its use of Huawei’s Ascend Atlas 800T A2 training servers, whose Kunpeng host processors are Arm-based chips built on a 7nm process, demonstrating China’s self-reliance in AI infrastructure.
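Because the model has been open-sourced, it can in principle be loaded with standard tooling. Below is a minimal sketch using Hugging Face transformers; the repository id Tele-AI/TeleChat2-115B is an assumption inferred from the model’s name rather than a detail from the announcement, and a model of this size would need multiple accelerators or aggressive offloading to run.

```python
# Minimal sketch: loading the open-sourced checkpoint with Hugging Face
# transformers. The repo id below is an assumption, not confirmed by the
# announcement; check the official release for the exact identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tele-AI/TeleChat2-115B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # custom modeling code, if the release ships any
    device_map="auto",       # shard across available devices (needs accelerate)
    torch_dtype="auto",      # use the dtype stored in the checkpoint
)

prompt = "Summarize the benefits of bilingual training data."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```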

Compared with models such as Meta’s Llama family or OpenAI’s GPT series, whose parameter counts are reported or estimated to be significantly higher, TeleChat2-115B may not have required equivalent computational power (a rough comparison follows below). The announcement’s silence on GPUs and its reliance on alternative infrastructure suggest that Chinese AI development continues despite sanctions, showcasing homegrown technological capability at enterprise scale. China Telecom, a major telecommunications player with vast resources, further underscores the nation’s ability to innovate and grow in the AI sector without relying on international hardware. The company also leverages OpenStack extensively, reaffirming its commitment to open-source solutions to amplify its R&D efforts.
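To give the compute claim some scale, a common rule of thumb estimates dense-transformer training cost at roughly 6 × N × D FLOPs, where N is the parameter count and D the number of training tokens. The sketch below applies it to TeleChat2-115B and, for contrast, to Meta’s Llama 3.1 405B using Meta’s published token count; the comparison is illustrative and not drawn from the article.

```python
# Back-of-envelope training-compute estimate via the ~6*N*D FLOPs rule of
# thumb (N = parameters, D = training tokens). Illustrative only.
def train_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

telechat2 = train_flops(115e9, 10e12)   # TeleChat2-115B: 115B params, 10T tokens
llama31 = train_flops(405e9, 15.6e12)   # Llama 3.1 405B: published ~15.6T tokens

print(f"TeleChat2-115B: ~{telechat2:.1e} FLOPs")  # ~6.9e+24
print(f"Llama 3.1 405B: ~{llama31:.1e} FLOPs")    # ~3.8e+25, roughly 5-6x more
```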