News
Foxconn’s annual Technology Day kicked off today at the Nangang Exhibition Center, following the tradition of celebrating founder Terry Gou’s birthday. Although Mr. Gou was not present at the event, NVIDIA CEO Jensen Huang made a surprise appearance, introducing the production version of Model B and highlighting its appeal to young couples.
Jensen Huang announced that he and Foxconn Chairman Young Liu are collaborating to establish an AI Factory. He shared hand-drawn sketches, emphasizing that they are adopting a novel approach to software development, enabling computers to create software based on human preferences and experiences. For a computer to learn effectively, it must have access to abundant data from which it can derive valuable insights and experiences.
He also highlighted the vital role of robust computers in facilitating AI learning. With NVIDIA’s assistance, Foxconn may collect enough data for AI to process and shape network models, paving the way for innovative intelligence.
Jensen Huang then emphasized that this groundbreaking system is set to empower any factory or company, with Foxconn’s cutting-edge electric vehicle Model B enabling interaction between drivers and passengers. The AI Factory will offer a wide array of tools, including software solutions, to enhance the overall quality of life.
Chairman Liu reiterated their determination to bring the AI Factory to fruition, backed by three core platforms: Smart City, Smart Manufacturing, and Smart EVs, all of which are driven by the innovative AI Factory.
Huang noted that these efforts will all be fueled by AI intelligence, ultimately creating substantial value. In a light-hearted tone, he concluded, “Let’s meet at the night market!”
Adding an interesting twist to the day’s events, Foxconn’s Tech Day featured three Foxconn N7 electric cars in deep blue, light blue, and white. Notably, the white vehicle showcased a handwritten message from Jensen Huang that read, “To Young and my friend at Foxconn, beautiful and amazing EVs! Jensen Huang.”
(Photo credit: Foxconn’s Stream)
Insights
Nvidia hosted its fall GTC (GPU Technology Conference) in early November, during which the company shared details regarding the progress that it had made on products and services such as AI software, data centers, automotive applications, and healthcare. In particular, Nvidia’s foray into virtual worlds and digital twins, both of which are closely tied to the metaverse, garnered significant attention from the public. By leveraging diverse simulation tools that reflect real-life circumstances, Nvidia has extended the application of virtual worlds from the local scale to the planetary scale, thereby reflecting the metaverse’s pioneering qualities and developmental progress.
Along with the ongoing metaverse craze, Nvidia also released its Omniverse Avatar technology platform as well as its Omniverse Replicator, which is a “synthetic data-generation engine” according to the company. Both of these releases are based on the Nvidia Omniverse, a platform that specializes in virtual collaboration. Whereas the Omniverse Avatar platform enables the creation of interactive virtual characters through synergies among voice AI technology, machine vision, and NLP (natural language processing), the Omniverse Replicator constructs more realistic, lifelike virtual worlds by training DNN (deep neural networks) using such synthetic data as velocity, depth, and weather conditions.
Digital twin-based virtual factories are starting to show the first hints of the metaverse
The metaverse value chain primarily revolves around commonly seen infrastructural backbones formed by telecommunications and cloud/edge computing. The virtual space that is then built on top of this infrastructure comprises HMI (human machine interface), decentralization, application creation, and user experiences. More specifically, HMI produces an AI-empowered immersive experience by combining multiple interactive technologies with an AR/VR base layer. At the moment, companies such as Nvidia, Meta (formerly known as Facebook), Microsoft (including Xbox), and Vive are heavily invested in HMI development. Application creation, on the other hand, refers to mechanisms that make the metaverse more lively, reliable, diverse, and attractive. Some examples include graphical tools and cryptocurrency technologies. Representative groups focusing on this field include Roblox, IBM, Google AI, Epic, and Unity.
Beyond the Omniverse Avatar and Replicator, Nvidia also released CloudXR, Showroom, and other Omniverse-based tools for optimizing immersive experiences during GTC, as well as the Modulus neural network model, which accelerates the build-out of digital twins. These releases, in turn, demonstrate Nvidia’s competency and leadership in creating AI-driven software tools for the metaverse value chain. With regard to real-life use cases, digital twins currently represent most of Nvidia’s applications. For instance, BMW and Nvidia have partnered to construct a digital twin-based factory via the Omniverse platform capable of connecting ERP (enterprise resource planning), shipment volume simulation, remote-controlled robots, production line simulation, etc. This partnership is indicative of promising early-stage growth of the metaverse.
Nvidia is extending its simulation application from factories to planets
While smart city development has remained one of the main use cases of simulation in recent years, Nvidia has further extended its simulation applications beyond use cases previously limited to singular offices or factory facilities. For instance, BIM (building information modeling) specialist Bentley Systems has teamed up with Nvidia to apply digital twins to public property management and maintenance. Ericsson, on the other hand, is utilizing Nvidia’s technology to construct a digital replica of an entire city for the purpose of checking 5G signal coverage, optimizing base station placement, and improving antenna designs. During the GTC, Nvidia unveiled the Earth-2 system, a supercomputer that generates a digital twin of planet Earth for weather forecasts.
As a matter of fact, most products and services announced by Nvidia during GTC represent either a partial or entry-level application of the metaverse. However, as the post-pandemic new normal continues to drive up the demand for contactless and digital transformation applications, strengthening CPS (cyber physical systems) will remain one of the most significant trends in the market. As real-world environments become increasingly complex due to interactions among an increasing number of tools and use cases, Nvidia will aim to create a comprehensive framework for metaverse development through products/services based on more intelligent, comprehensive, and instant virtual worlds. Hence, TrendForce believes that Nvidia will need to address certain major challenges going forward, including lowering its tools’ usage barriers, strengthening its ecosystem, and attracting new users.
(Image credit: NVIDIA)