Insights
In late December 2023, reports surfaced that OpenAI CEO Sam Altman intended to raise funds to build a semiconductor fab in order to secure a supply of AI chips.
According to a January 24, 2024 report from the Washington Post, Sam Altman has engaged with members of the US Congress to discuss building the semiconductor plant, including its timing and location, underscoring his growing ambition to establish the facility.
TrendForce’s Insights:
The rapid emergence of AI-generated content (AIGC) was undoubtedly a highlight of 2023, and its quality and efficiency are closely tied to the large language models (LLMs) behind it. Take OpenAI’s ChatGPT, for instance, which is built on GPT-3.5, a refinement of the GPT-3 model released in 2020. With 175 billion parameters, GPT-3 surpasses its predecessor, GPT-2, by more than 100 times, and GPT-2 is itself more than 10 times larger than the original GPT from 2018.
In pursuit of better content quality, diversified outputs, and enhanced efficiency, the continuous expansion of model training parameters becomes an inevitable trend. While efforts are made to develop lightweight versions of language models for terminal devices, the cloud-based AI computing arena anticipates a continued expansion of language model training parameters, moving towards the “trillion” scale.
Because AI chip performance improves at a limited rate, keeping pace with rapidly growing model parameter counts and the vast amounts of data generated by flourishing cloud-based AIGC applications inevitably requires more AI chips. This continues to strain the chip supply chain.
Given that the demand for AI computing is escalating faster than the growth rate of chip performance and capacity, it’s understandable why Sam Altman is concerned about chip supply.
The construction of advanced-process fabs is immensely costly, with estimates suggesting that a single 3nm fab can cost tens of billions of dollars. Even if Sam Altman manages to raise sufficient funds for construction, he would still lack advanced semiconductor process and packaging technology, not to mention capacity, yield, and operational experience.
Therefore, Sam Altman is expected to continue seeking collaboration with established foundries to achieve his factory construction goal.
Looking at foundries worldwide, TSMC is undoubtedly the preferred partner. After all, TSMC not only holds a leading position in advanced processes and packaging technologies but also boasts the most extensive experience in producing customized AI chips.
While Samsung and Intel are also suitable partners from a localization perspective, TSMC appears the more cost-effective choice once production schedules and yield rates are taken into account.
(Photo credit: OpenAI)
News
According to sources cited by the Financial Times, South Korean chip manufacturer SK Hynix is reportedly planning to establish a packaging facility in Indiana, USA. This move is expected to significantly advance the US government’s efforts to bring more artificial intelligence (AI) chip supply chains into the country.
SK Hynix’s new packaging facility will specialize in stacking standard dynamic random-access memory (DRAM) chips to create high-bandwidth memory (HBM) chips. These chips will then be integrated with NVIDIA’s GPUs for training systems like OpenAI’s ChatGPT.
According to a source close to SK Hynix cited in the report, growing demand for HBM from American customers and the need for close collaboration with chip designers make establishing advanced packaging facilities in the US essential.
Regarding this, SK Hynix reportedly responded, “Our official position is that we are currently considering a possible investment in the US but haven’t made a final decision yet.”
The report quoted Kim Yang-paeng, a researcher at the Korea Institute for Industrial Economics and Trade, as saying, “If SK Hynix establishes an advanced HBM memory packaging facility in the United States, along with TSMC’s factory in Arizona, this means Nvidia can ultimately produce GPUs in the United States.”
The United States was previously reported to be preparing to announce substantial chip subsidies by the end of March, aiming to pave the way for chip manufacturers like TSMC, Samsung, and Intel by providing billions of dollars to accelerate the expansion of domestic chip production.
These subsidies are a core component of the US 2022 “CHIPS and Science Act,” which allocates a budget of USD 39 billion to directly subsidize and revitalize American manufacturing.
(Photo credit: SK Hynix)
Insights
According to Bloomberg, Apple is quietly catching up with its competitors in the AI field. Observing Apple’s layout for the AI field, in addition to acquiring AI-related companies to gain relevant technology quickly, Apple is now developing its large language model (LLM).
TrendForce’s insights:
As the smartphone market matures, brands are not only upgrading hardware, particularly camera modules, to stimulate device replacement; many are also keen to introduce new AI features in smartphones to reignite growth. Some Chinese brands have already made notable progress in the AI field, especially in large language models.
For instance, Xiaomi introduced its large language model MiLM-6B, ranking tenth in the C-Eval list (a comprehensive evaluation benchmark for Chinese language models developed in collaboration with Tsinghua University, Shanghai Jiao Tong University, and the University of Edinburgh) and topping the list in its category in terms of parameters. Meanwhile, Vivo has launched the large model VivoLM, with its VivoLM-7B model securing the second position on the C-Eval ranking.
As for Apple, while it may appear to have played a largely observational role as other Silicon Valley companies like OpenAI released ChatGPT, and Google and Microsoft introduced AI-powered search, the reality is that since 2018 Apple has quietly acquired more than 20 AI-related companies. Apple’s approach is characterized by extreme discretion, with only a few of these transactions publicly disclosing their final acquisition prices.
On another front, Apple has been discreetly developing its own large language model, called Ajax, spending millions of dollars per day on training with the aim of making it more capable than OpenAI’s GPT-3.5 and Meta’s LLaMA.
Analyzing the current most common usage scenarios for smartphones among general consumers, these typically revolve around activities like taking photos, communication, and information retrieval. While there is potential to enhance user experiences with AI in some functionalities, these usage scenarios currently do not fall under the category of “essential AI features.”
However, if a killer application involving large language models were to emerge on smartphones in the future, Apple is poised to have an exclusive advantage in establishing such a service as a subscription-based model. This advantage is due to recent shifts in Apple’s revenue composition, notably the increasing contribution of “Service” revenue.
In August 2023, Apple CEO Tim Cook highlighted in Apple’s third-quarter financial report that Apple’s subscription services, which include Apple Arcade, Apple Music, iCloud, AppleCare, and others, had achieved record-breaking revenue and amassed over 1 billion paying subscribers.
In other words, compared to other smartphone brands, Apple is better positioned to monetize a large language model service through subscription due to its already substantial base of paying subscription users. Other smartphone brands may find it challenging to gain consumer favor for a paid subscription service involving large language models, as they lack a similarly extensive base of subscription users.
News
Following Saudi Arabia’s $13 billion investment, the UK government is dedicating £100 million (about $130 million) to acquire thousands of NVIDIA AI chips, aiming to establish a strong global AI foothold. Potential beneficiaries include Wistron, GIGABYTE, Asia Vital Components, and Supermicro.
Projections foresee a $150 billion AI application opportunity within 3-5 years, propelling the semiconductor market to $1 trillion by 2030. Taiwan covers the full industry value chain. Players like TSMC, Alchip, GUC, Auras, Asia Vital Components, SUNON, EMC, Unimicron, Delta, and Lite-On are poised to gain.
Reports suggest the UK is in advanced talks with NVIDIA for up to 5,000 GPU chips, but models remain undisclosed. The UK government recently engaged with chip giants NVIDIA, Supermicro, Intel, and others through the UK Research and Innovation (UKRI) to swiftly acquire necessary resources for Prime Minister Sunak’s AI development initiative. Critics question the adequacy of the £100 million investment in NVIDIA chips, urging Chancellor Jeremy Hunt to allocate more funds to support the AI project.
NVIDIA’s high-performance GPU chips are widely used in AI. Notably, the AI chatbot ChatGPT relies heavily on NVIDIA chips to meet its substantial computational demands, and the latest iteration of the model, GPT-4, reportedly required 25,000 NVIDIA chips for training. Consequently, experts contend that the quantity of chips procured by the UK government is notably insufficient.
Of the UK’s £1 billion investment in supercomputing and AI, £900 million is earmarked for a traditional supercomputer, with the remainder allocated to AI chip procurement; amid global chip demand, that chip budget was recently raised from £70 million to £100 million.
Saudi Arabia and the UAE also ordered thousands of NVIDIA AI chips, and Saudi Arabia’s order includes at least 3,000 of the latest H100 chips. Prime Minister Sunak’s AI initiative begins next summer, aiming for a UK AI chatbot like ChatGPT and AI tools for healthcare and public services.
As emerging AI applications proliferate, countries are actively racing to bolster AI data centers, turning the acquisition of AI chips into a new kind of arms race. Compal anticipates significant growth in the AI server sector in 2024, primarily within hyperscale data centers, with a focus on European expansion in the first half of the year and a shift toward the US market in the second half.
Insights
Semiconductor manufacturing leader TSMC held its annual shareholder meeting on June 6, addressing issues including advanced process development, revenue, and capital expenditure. TSMC Chairman Mark Liu and President C.C. Wei fielded a series of questions; the key points are summarized as follows:
2023 Capital Expenditure Leaning towards $32 Billion
For Q2 of this year, TSMC forecasts consolidated revenue of $15.2-16 billion, a decrease of 5%-10% from the first quarter, with gross margin of 52%-54% and operating margin of 39.5%-41.5%. Chairman Mark Liu revealed that this year’s capital expenditure is expected to lean toward $32 billion.
TSMC President C.C. Wei lowered the 2023 growth forecast for the overall semiconductor market (excluding memory), expecting a mid-single-digit percentage decline, with wafer manufacturing industry revenue expected to fall by a high single-digit percentage. At this stage, TSMC’s overall revenue for 2023 is expected to decline by a low-to-mid single-digit percentage, roughly 1%-6%.
Advanced Process N4P to be Mass Produced this Year
TSMC’s total R&D expenditure for 2022 reached $5.47 billion, which expanded its technical lead and differentiation. The 5-nanometer technology family entered its third year of mass production, contributing 26% to the revenue. The N4 process began mass production in 2022, with plans to introduce the N4P and N4X processes. The N4P process technology R&D is progressing smoothly and is expected to be mass-produced this year. The company’s first high-performance computing (HPC) technology, N4X, will finalize product design for customers this year.
Advanced Packaging Demand Far Exceeds Capacity
Due to the generative AI trend sparked by ChatGPT, demand for TSMC’s advanced packaging has surged and now far exceeds existing capacity, forcing the company to expand production as quickly as possible. Chairman Mark Liu stated that current R&D investment focuses on two main areas: 3D IC (chip stacking) and advanced packaging.
At present, three-quarters of TSMC’s R&D expenditure is used for advanced processes, and one quarter for mature and special processes, with advanced packaging falling under mature and special processes.
(Photo credit: TSMC)