Broadcom’s Trillion-Dollar Valuation Drives Chip Innovation
In a remarkable turn of events, Broadcom, a leading player in the custom AI chip industry, has exceeded fourth-quarter expectations, boosting its market capitalization to over $1 trillion. This milestone was further fueled by the CEO's announcement of collaborations with three major American cloud computing companies to develop tailored AI chips. Such partnerships not only underscore Broadcom's pivotal role in advancing AI technologies but also reflect a broader market trend that treats custom AI chips as a catalyst for growth, particularly amid rising global demand for artificial intelligence solutions.
On December 13, 2024, Broadcom's total market value surged past the trillion-dollar mark after an impressive single-day gain. The following trading day, the stock climbed further to an unprecedented peak of $251.88. In contrast, Nvidia, its competitor in the general-purpose AI chip market, saw a slight decline in its shares, highlighting the shifting dynamics within the tech sector.
The reasons behind Broadcom's substantial market capitalization surge stem primarily from its robust fourth-quarter financial report and optimistic projections.
The company reported revenues of $14.054 billion for the fourth quarter, a staggering 51.20% year-over-year increase, alongside a net profit of $4.324 billion, up by 22.70%. This impressive performance has prompted numerous investment banks on Wall Street to raise their target prices for Broadcom's stock, with Goldman Sachs increasing its target from $190 to $240, and Barclays revising it from $200 to $205.
Moreover, the CEO's remarks about collaborations with major cloud computing firms on custom AI chip solutions have set the stage for a transformed industry landscape. The anticipated breadth of these partnerships illustrates not only Broadcom's strategic vision but also a clear demand for specialized AI hardware that addresses the unique needs of emerging technologies.
Within China, leading firms such as Cambricon Technologies (688256.SH), Baidu (09888.HK), and Tencent Holdings (00700.HK) have also begun laying the groundwork in the custom AI chip sector, achieving notable milestones in their development journeys.
These companies are continuously refining their offerings to match the evolving requirements of the booming AI landscape. Additionally, domestic cloud giants such as Alibaba Cloud, Tianyi Cloud, China Mobile Cloud, and Tencent Cloud rank among the top ten global cloud computing vendors, creating a robust demand ecosystem for custom AI chips.
AI chips are, by design, specialized processors optimized for efficiently running AI algorithms. They emulate the functioning of biological neural networks and, equipped with numerous processing units, can execute complex mathematical computations and manage large data loads, making them essential for both training and inference in AI applications.
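As a rough illustration, the core workload these chips accelerate is little more than vast numbers of multiply-accumulate operations. Here is a minimal pure-Python sketch of one fully connected neural-network layer; all dimensions and values are illustrative and not tied to any real chip:

```python
# Minimal sketch of the multiply-accumulate workload at the heart of AI
# chips: one fully connected neural-network layer. An accelerator runs
# thousands of these multiply-accumulates in parallel; here they are
# spelled out sequentially for clarity.

def dense_layer(inputs, weights, biases):
    """Compute outputs[j] = max(0, sum_i inputs[i] * weights[i][j] + biases[j])."""
    outputs = []
    for j in range(len(biases)):
        acc = biases[j]
        for i, x in enumerate(inputs):
            acc += x * weights[i][j]
        # ReLU activation, loosely mimicking a neuron "firing"
        outputs.append(max(0.0, acc))
    return outputs

# Toy example: 3 inputs mapped to 2 outputs
y = dense_layer([1.0, 2.0, 3.0],
                [[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]],
                [0.0, -1.0])
```

A real model chains thousands of such layers over far larger matrices, which is why raw parallel arithmetic throughput dominates chip design.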
In the realm of AI chips, Nvidia's GPUs dominate the market, often leading users to equate GPU technology directly with AI capability.
Contrary to this perception, the mainstream AI chip landscape comprises three main types: general-purpose chips, represented by GPUs; application-specific integrated circuits (ASICs), designed for specific functions; and field-programmable gate arrays (FPGAs), which offer semi-custom solutions. Notably, GPUs and ASICs are the two primary technological paths in AI chip development.
According to insights from HuiBo Investment Research, GPUs are particularly advantageous for AI training because their computational capacity, memory bandwidth, and parallel-processing capability let them handle the voluminous data involved in training efficiently. Their flexibility also suits the ongoing adjustments needed to fine-tune AI models, making them well adapted to tasks involving rapid iteration and debugging.
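A sketch of why training parallelizes so well: each sample's gradient contribution is independent of every other sample's, so an accelerator can compute them all simultaneously before summing them into one update. The toy one-parameter linear model and numbers below are illustrative only:

```python
# Toy gradient computation for a 1-parameter linear model y = w * x with
# squared-error loss. Each sample's gradient term is independent of the
# rest -- exactly the structure GPUs exploit by computing them in parallel.

def per_sample_grads(w, samples):
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x, one independent term per sample
    return [2.0 * (w * x - y) * x for x, y in samples]

samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 5.0)]
w = 1.0
grads = per_sample_grads(w, samples)   # independent -> parallelizable
step = sum(grads) / len(grads)         # averaged into a single update
w -= 0.1 * step                        # one gradient-descent step
```

On real hardware the per-sample list comprehension becomes one batched matrix operation executed across thousands of cores at once.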
However, GPUs are not without their downsides.
They are often characterized by relatively high power consumption, a consequence of their generalized architecture. This can lead to inefficiencies on tasks that would benefit from a more optimized design.
In response to the limitations of GPUs, ASICs have emerged as the preferred choice for cloud computing vendors focusing on inference tasks. ASICs are custom-designed chips tailored to specific tasks or algorithms, and they further divide into types such as Tensor Processing Units (TPUs), Data Processing Units (DPUs), and Neural Processing Units (NPUs). The TPU, developed by Google, is designed primarily for tensor computations, while DPUs accelerate data movement and processing within data centers. NPUs, in turn, target the convolutional neural network workloads of the previous wave of AI and are widely integrated into edge devices.
Research from Guotai Junan indicates that ASICs are more cost-effective and energy-efficient than GPUs thanks to their simpler hardware structure: designed for a targeted task, they omit the superfluous circuitry that a general-purpose accelerator must carry.
It is pertinent to point out that while ASICs may not yet match the single-card computational power of GPUs, they offer significant price advantages.
For instance, Google’s TPU v6 and Microsoft’s Maia 100 reach around 90% and 80% of Nvidia's H100’s performance, yet the cost per unit remains significantly lower, allowing ASICs to deliver favorable cost-performance ratios in inference scenarios.
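The cost-performance argument reduces to simple arithmetic. In the sketch below, the ~90% relative-performance figure comes from the article (TPU v6 versus the H100), but the unit prices are hypothetical placeholders, since the article quotes no prices, purely to show how a chip with lower absolute performance can still win on performance per dollar:

```python
# Hypothetical cost-performance comparison. The 0.90 relative-performance
# figure is from the article (TPU v6 vs. Nvidia H100); the prices are
# ILLUSTRATIVE PLACEHOLDERS, not real list prices.

gpu_perf, gpu_price = 1.00, 30_000     # baseline GPU, normalized performance
asic_perf, asic_price = 0.90, 12_000   # ~90% of GPU perf, assumed cheaper

gpu_value = gpu_perf / gpu_price       # performance per dollar
asic_value = asic_perf / asic_price

# Ratio > 1 means the ASIC delivers more inference per dollar despite
# lower absolute performance (under these assumed prices).
advantage = asic_value / gpu_value
```

With these assumed numbers the ASIC delivers 2.25x the performance per dollar; the real ratio depends entirely on actual procurement prices, which vary by buyer and volume.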
Marvell estimates that in 2023 custom chips constituted only 16% of the data-center accelerated-computing chip market, or approximately $6.6 billion. However, as AI computing demands escalate, the share of custom chips is predicted to soar, with the data-center custom computing chip market projected to reach $42.9 billion by 2028, corresponding to a remarkable compound annual growth rate of 45% from 2023.
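The 45% figure can be checked directly from the two market sizes: a compound annual growth rate is the constant yearly rate linking a start and end value over a number of years.

```python
# Verify the compound annual growth rate implied by the Marvell figures:
# $6.6B in 2023 growing to a projected $42.9B in 2028, i.e. over 5 years.

start, end, years = 6.6, 42.9, 5   # billions of dollars

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end / start) ** (1 / years) - 1   # roughly 0.45, i.e. ~45% per year
```

The computed value is about 45.4% per year, consistent with the rounded 45% cited in the projection.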
Broadcom's CEO holds an even more optimistic view of the ASIC market, mentioning in the earnings call the ambitious plans of three major cloud providers to establish million-unit XPU clusters.
Consequently, the anticipated market opportunity for Broadcom's AI ASIC solutions stands to reach between $60 billion and $90 billion by 2027.
The global cloud computing market continues its rapid growth and serves as a primary application arena for AI chips. According to the China Academy of Information and Communications Technology, China's cloud computing market reached ¥616.5 billion in 2023, a 35.5% annual increase that outpaced global growth.
Additionally, with cutting-edge advancements in cloud computing spurred by AI technological innovations and large-scale model applications, China's cloud computing industry is poised for another growth wave, with projections indicating that the market will surpass ¥2.1 trillion by 2027.
As per Choice data for the first three quarters of 2024, Alibaba Cloud reported revenues of ¥81.754 billion, marking year-over-year growth of 5.55%. During the first half of 2024, China Telecom's Tianyi Cloud recorded revenues of ¥55.2 billion, a 20.4% year-over-year increase; China Unicom's cloud business generated ¥31.7 billion, growth of 24.3%; and China Mobile's cloud business amassed ¥50.4 billion, up 19.3%. Meanwhile, other domestic players such as Tencent Cloud, Huawei Cloud, and Baidu Cloud also reported consistent revenue growth.
This rapid acceleration in the Chinese cloud computing market mirrors trends seen in North America, offering lucrative opportunities for domestic ASIC chip manufacturers.
Notably, Cambricon Technologies has emerged as a unicorn in China's AI chip sector: terminal devices incorporating its intelligent-processor IP have surpassed 100 million units shipped. Its cloud-oriented intelligent chips and acceleration cards are now widely integrated into mainstream server offerings within the country.
In 2024, Cambricon's shares swung dramatically, from a low of ¥95.85 to a high of ¥700, with market capitalization nearing ¥300 billion. Despite the upward trajectory, however, the company is still grappling with losses. Revenue for 2022, 2023, and the first three quarters of 2024 came to ¥729 million, ¥709 million, and ¥185 million respectively, representing year-on-year growth of 1.11%, -2.70%, and 27.09%. Adjusted for non-recurring items, net income attributable to shareholders remained in the red at -¥1.579 billion, -¥1.043 billion, and -¥862 million, though the narrowing figures reflect improvement.
Sustained high investment in research and development is a key driver behind Cambricon's losses.
R&D expenditure from 2022 to mid-2024 came to ¥1.523 billion, ¥1.118 billion, and ¥447 million, equal to a striking 208.92%, 157.53%, and 690.92% of revenue in the respective periods.
By mid-2024, Cambricon employed 727 research and development personnel, about 74.79% of the entire company. Notably, 78.82% of these researchers hold master's degrees or higher, underscoring Cambricon's commitment to academic and technical excellence.
The investment in research is bearing fruit: Cambricon has unveiled several series of inference chips, including the MLU100, MLU200, MLU370, and XuanSi 1001, with the MLU370 particularly noteworthy for its 7nm process technology and chiplet architecture. It is the first domestically released cloud AI chip to support LPDDR5 memory, and it essentially doubles the computational power of Cambricon's previous generation, the MLU270.
Importantly, the MLU370 series demonstrates distinct advantages in high-density cloud inference, achieving peak computational capability of 256 TOPS and surpassing Nvidia's L20. Among cloud inference chips, the MLU370-X4 matches the performance of other domestic offerings such as the Kunlun R200 and Suyuan I20, each delivering 256 TOPS at 150W power consumption.
The MLU370-S4 further excels with a power-efficiency ratio of 2.56 for high-density inference tasks, giving it a competitive edge in this specific application area.
In addition to Cambricon, industry giants such as Huawei and Baidu have also developed proprietary ASIC chips. Huawei unveiled its Ascend 310 and Ascend 910 in October 2018, the former a low-power AI chip consuming at most 8W and aimed at inference tasks. In September 2023, Huawei introduced the Ascend 910B, with substantially enhanced single-precision computing power approaching that of Nvidia's A100.
Baidu similarly unveiled its first-generation Kunlun chip in 2018 and has since completed two rounds of iteration. Unlike Cambricon's and Huawei's more cloud-focused offerings, the Kunlun chip is versatile across both cloud and edge applications, proving its capabilities in fields such as autonomous driving and intelligent transportation.
Tencent has also made its mark by releasing three proprietary chips: Zixiao, Canghai, and Xuanling.