China's Options to Cushion against U.S.-Led AI Chip Restrictions

3Q 2023 | IN-7014

There have been rumors of impending restrictions on Artificial Intelligence (AI) chip exports to China, meaning that Chinese companies could be prevented from training large AI models. If this happens, China could still develop domestic capabilities in AI inference and edge computing that will bring interesting opportunities for AI software and hardware vendors. Players in the region may also want to develop smaller generative AI models for specific use cases.

U.S. Authorities Want to Reduce the Export of AI Chips to China

NEWS


Recent weeks have seen speculation that U.S. authorities could enforce stricter regulations on Artificial Intelligence (AI) chip exports to China. The push began last year, when U.S. officials moved to restrict China’s access to advanced semiconductors that exceed certain performance thresholds. Chip manufacturing equipment and designs were also restricted, with firms like Arm unable to secure a license to export their Intellectual Property (IP) to China. In response, market-leading U.S. chipmaker NVIDIA created new versions of its chips (the A800 and H800) with reduced capabilities that were permitted for export to the Chinese market. However, a new ban currently under consideration could prohibit even these less powerful chips. It is also rumored that cloud service providers like Amazon and Microsoft could be blocked from working with Chinese companies, further limiting China’s ability to build state-of-the-art AI models.

China relies extensively on U.S. vendors when it comes to sourcing and designing AI chips. Chinese clients have accounted for about 20% to 25% of revenue for U.S. chip vendors, such as NVIDIA and Micron. Despite the size of this client base, ABI Research forecasts that growing global demand for AI computing will allow U.S. vendors to sustain their revenue and competitiveness, even if business from Chinese customers declines. This is because some of the fastest-growing fields of AI, such as Computer Vision (CV), Natural Language Processing (NLP), and generative AI, rely on large models trained using powerful chips. If the U.S. chip export restrictions go ahead, the negative impact will be felt less by U.S. vendors and more by Chinese firms, which will be constrained in their ability to train large models in a timely and competitive manner.

China has already issued a swift retaliation to the United States. In May, it banned operators of critical domestic infrastructure from purchasing products from U.S. chipmaker Micron. In early July, China restricted the export of gallium and germanium—metals used in Light-Emitting Diodes (LEDs), solar cells, and the optical fibers used in telecommunications and smart lighting equipment. It is true that China currently dominates gallium and germanium production, but other suppliers in the United States and Germany could ramp up production if needed. More importantly, these metals are not critical to the production of computing chips, so the restrictions will not directly impact AI players in the United States or elsewhere. What global vendors must watch for is the direction that China chooses for its domestic AI efforts. If Chinese firms are denied access to the hardware necessary to train large AI models, where will their expertise and investment flow instead?

China's Ability to Train Large AI Models May Be Stunted, but AI Inference Will Thrive

IMPACT


AI is a huge industry with many sub-fields. When considering the impact of restricted AI chip supplies, it is helpful to distinguish between AI tasks that are computationally intensive (training) and those that require far less compute (inference).

Computing power is a limiting factor mainly in AI training tasks, where models for language, speech, or image recognition have to learn hundreds of billions of parameters. While firms in the United States have spent decades researching and developing AI hardware that can accommodate such models, Chinese chipmakers have relatively little experience in developing advanced chips suitable for AI training. The Chinese government started to subsidize domestic semiconductor firms last year, but building fabrication plants (fabs) is a long and complex process, and it could take years before these investments translate into commercial reality.

However, China’s lack of access to advanced AI chips may not exclude it from the global AI race. A good strategy for Chinese firms would be to focus on AI applications that are less computationally intensive. One possibility is to train small models using context-specific datasets, instead of pursuing giant general-purpose models similar to OpenAI’s GPT-3.5, Google’s Bard, or Meta’s LLaMA. Recent research has shown that small, context-specific models can outperform foundation models on narrow tasks. They also tend to have better transparency around data sources, which can help with accountability and regulatory compliance.

Another option, which ABI Research believes would be even more advantageous for Chinese firms, is to specialize in AI inference tasks. AI inference applies a trained model to new data in order to classify it or make predictions in real time. In theory, this could include deploying pre-trained models to evaluate new information (e.g., to detect objects in streaming video data). However, China’s regulators are unlikely to let businesses use foreign-trained image classifiers or language models, because they pose the risk of political interference and misinformation. More common use cases would involve domestically trained numerical models, such as those that use customer transaction data to predict future purchases or risk of fraud. AI inference tasks like this do not require advanced AI chips and would not be directly impacted by chip restrictions.
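To illustrate how lightweight this kind of numerical inference is, the sketch below scores a transaction for fraud risk with a small logistic model. The feature names and weights are invented for illustration; a real model would learn them from domestic transaction data.

```python
import math

# Hypothetical weights for a small, domestically trained fraud model
# (values invented for illustration; a real model would learn these).
WEIGHTS = {"amount_usd": 0.004, "overnight": 1.2, "new_merchant": 0.9}
BIAS = -3.0

def fraud_risk(transaction: dict) -> float:
    """Score one transaction with a logistic model: cheap, CPU-only inference."""
    z = BIAS + sum(WEIGHTS[k] * transaction.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # probability between 0 and 1

# Inference on new data needs only a few multiply-adds per transaction,
# so it runs comfortably on commodity hardware with no advanced AI chip.
score = fraud_risk({"amount_usd": 250.0, "overnight": 1, "new_merchant": 1})
print(f"fraud risk: {score:.3f}")
```

The point of the sketch is the cost profile: each prediction is a handful of multiply-adds, which is why inference workloads of this kind sit outside the reach of the chip restrictions.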

AI inference applications hold huge potential in the Chinese market. Unlike U.S. firms, which have focused on AI technologies that help the public create content (e.g., via chatbots and image generators), Chinese companies have strong competence in developing commercial applications for the consumer market. Examples include the use of surveillance and facial recognition technology for citizen monitoring, retail, and personalization. China’s demand for inference chips is expected to increase, with companies like Enflame Tech, Iluvatar, and Denglin already showcasing their inference chip products to the Chinese market. Chinese firms can play to their domestic strengths by further improving the speed, accuracy, and cost of AI inference operations.

One of the main hurdles to running AI inference in consumer settings is latency, or the delay between querying an AI model and receiving a result. A good way to reduce this lag is to forgo centralized data centers or cloud services in favor of “edge” AI applications, where data processing occurs in servers and devices closer to the end user (e.g., smart cameras or sensors). Hardware companies like FII and Huawei are already tailoring their product offerings to these use cases, with energy efficiency and temperature management methods promising to optimize AI inference chips for edge applications. Low power and low bandwidth consumption are among the distinguishing features of edge AI. Moreover, data are not required to leave the device, so edge technology makes it easier for enterprises to protect sensitive information.

If necessity is the mother of invention, then U.S. sanctions on chip exports may inadvertently help Chinese firms to strengthen their competence in AI inference and edge computing. Once regulations around AI transparency and security start to come into effect, other countries may start looking to China as a leader in AI inference and edge computing technology.

How to Capitalize on China's Demand for AI Inference Technology

RECOMMENDATIONS


ABI Research forecasts that the chip trade tensions between the United States and China will translate into greater adoption of edge AI hardware and resource-efficient software in China. Software vendors and chip suppliers that operate in this market should consider the following recommendations:

  • Companies that develop or procure AI models will need to prioritize performance metrics related to computational cost and power consumption, which may sometimes come at the expense of accuracy. Efficiency metrics will also matter in AI inference tasks that run on edge devices with limited memory and power. Software vendors that sell to China should position minimal data architectures and low power requirements (e.g., Tiny Machine Learning (TinyML)) as a unique selling point.
  • Chinese companies that still want to train computationally intensive models (e.g., generative AI) with limited access to advanced chips should consider using small context-specific datasets, quantization techniques to reduce model size, or federated learning to train models using distributed edge devices.
  • Chinese organizations that build AI can minimize their reliance on powerful processors by focusing on nimble models that are contextualized to specific markets or narrow use cases. These models can be built with billions of parameters or fewer, rather than the hundreds of billions used by generalized models like the ones behind ChatGPT.
  • Retrieval-based models are a great alternative to generative AI in enterprises where the required output can be defined in advance (e.g., chatbots that can answer common queries using internal documents and response templates).
  • If more Chinese companies start to rely on context-specific solutions, there will be greater demand for high-quality Chinese text and multimedia datasets, data processing software to transform incoming data to fit with the required standards, and runtime tools to monitor data values for contextual drift. Software-as-a-Service (SaaS) providers that specialize in data quality solutions should consider targeting the Chinese market.
  • The deployment of AI on edge devices will require specialized optimization tools and integration software to help developers make optimal use of the available hardware. Inference chip suppliers can gain a competitive advantage by offering integration and optimization software as part of their solution.
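As a rough illustration of the quantization technique mentioned in the recommendations above, the sketch below compresses floating-point weights to 8-bit integers with a single symmetric scale factor. This is a deliberately simplified form of post-training quantization; production toolchains typically use per-channel scales and calibration data.

```python
def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] with one shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -1.27, 0.05, 0.4]  # toy example weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4, at a small accuracy cost:
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max reconstruction error: {max_err:.4f}")
```

Shrinking weights to a quarter of their size is what makes larger models fit into the constrained memory of edge inference hardware.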
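The retrieval-based approach recommended above can be as simple as matching a query against pre-approved response templates instead of generating free-form text. The sketch below uses word overlap as a stand-in for real retrieval techniques such as TF-IDF or embeddings; the FAQ entries are invented for illustration.

```python
# Invented internal FAQ entries; a real deployment would index company documents.
FAQ = {
    "How do I reset my password?": "Visit the account portal and choose 'Reset password'.",
    "What are your support hours?": "Support is available 09:00-18:00, Monday to Friday.",
    "How do I update billing details?": "Billing details can be changed under Settings > Billing.",
}

def answer(query: str) -> str:
    """Return the stored answer whose question shares the most words with the query."""
    q_words = set(query.lower().replace("?", "").split())
    best = max(
        FAQ,
        key=lambda question: len(q_words & set(question.lower().replace("?", "").split())),
    )
    return FAQ[best]

print(answer("how can I reset a password"))
```

Because every possible output is authored in advance, this design sidesteps both the training cost of generative models and the compliance risk of uncontrolled generated text.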
