Nvidia is watching the throne. One day after the king of artificial intelligence chips unveiled new hardware and software updates aimed at furthering its leadership in the booming corner of the technology world, the stock dropped more than 4%, adding to the nearly 1.7% decline in the prior session. Perhaps it's not surprising to see Club name Nvidia (NVDA) give back some gains lately, as shares have nearly tripled in 2023. The stock, which closed at a record high of $474.94 on July 18, has been rather flat over the past month.

After the year the company has had, these latest AI developments are reassuring for shareholders, like us, whose investment thesis rests on Nvidia's AI dominance today and tomorrow. It's apparent that Nvidia sees the mounting competition — from fellow Club holding and semiconductor maker Advanced Micro Devices (AMD) and others — but intends to keep innovating and improving its offerings.

New AI hardware

Case in point: At an industry conference Tuesday, Nvidia announced a next-generation GH200 Grace Hopper Superchip, just over two months after its first version entered full production and before it has even fully shipped to customers. Beyond hardware, Nvidia also said it was partnering with AI community startup Hugging Face, signaling a willingness to embrace parts of the open-source artificial intelligence software landscape. "No one is coming for Nvidia on AI, period, any time soon," Stifel analyst Ruben Roy told CNBC in an interview Wednesday. The GH200 Grace Hopper Superchip — which links together traditional central processing units (CPUs) and graphics processing units (GPUs) — is designed to power the large language models that undergird generative AI applications, such as ChatGPT from Microsoft-backed OpenAI.
While Nvidia's H100 GPU has become the leading chip for training AI models, the company has positioned the Grace Hopper Superchip to run the models on a day-to-day basis, a process known as inference. The new version of Grace Hopper primarily delivers improvements in memory capacity and bandwidth compared with the first-generation product. This is important because memory plays a key role in how a generative AI application returns an answer to a user prompt. When a user asks ChatGPT to write a poem on a particular topic, that initial inquiry is fielded by the CPU. "Then it's going to go out and search for the best poem based on whatever parameters you're giving it," Roy explained. "Where does it find all those parameters? Memory." In practice, improving the memory capacity and bandwidth of the Grace Hopper Superchip should help AI applications run more efficiently and smoothly. Second-gen Grace Hopper uses a new type of premium high-bandwidth memory, known as HBM3e, to deliver these upgrades.

Volume production of the improved Grace Hopper Superchip is expected in the second quarter of 2024, according to Nvidia. At that point, Roy said, to his knowledge, the Grace Hopper Superchip will be "the only platform out there that's capable of handling HBM3e memory," adding a wrinkle to the competitive landscape. In a note to clients Tuesday, Bank of America argued the technical specs Nvidia released suggest the upgraded Grace Hopper "exceeds" AMD's CPU-plus-GPU superchip offering, the MI300A. Nvidia's forthcoming chip will have 282 gigabytes of HBM3e memory, compared with 128GB of the older HBM3 memory for the AMD chip. The MI300A, along with the MI300X chip — designed specifically for generative AI applications — is on track for a fourth-quarter rollout, AMD management said last week on its post-earnings call.
"The highest-end, flagship workloads that [cloud service providers] are going to try and sell, this is going to be the only game in town," Roy said of second-gen Grace Hopper. The Stifel analyst also said that Nvidia's decision to apparently accelerate the Grace Hopper roadmap — before the first generation is widely deployed — suggests "a lot of customer interest."

Open-source software

Nvidia's closed-source CUDA software platform — along with a library of pre-trained AI models — has helped establish the company's AI dominance. It's a secret sauce of hardware and software. Then came Tuesday's revelation that Nvidia was partnering with Hugging Face — seen as a leader in open-source AI — to give software developers using that startup's platform access to Nvidia DGX Cloud, a supercomputer accessible through a web browser that can be used to train and fine-tune models. AMD and semiconductor rival Intel (INTC) have marketed their AI strategies as embracing open-source models, in contrast to Nvidia. For example, on Intel's latest earnings call, CEO Pat Gelsinger highlighted the firm's goal to "democratize AI" four times, according to a FactSet transcript. AMD bills ROCm — its rival to Nvidia's CUDA — as an "open software platform." In June, when it debuted a fresh AI-targeted chip, AMD also announced a noteworthy collaboration with Hugging Face, beating Nvidia to the punch. "Nvidia hadn't done it [open source] until now because they didn't have to," Roy said. "They had a proprietary, closed operating system called CUDA. Why do they have to go and do an open ecosystem if there's no competition out there?" With more competitors now jockeying for a slice of the AI market, the Stifel analyst said it's only "logical" that Nvidia decided to embrace open source as well. Nvidia on Tuesday also teased an update to Omniverse, its 3D graphics platform that companies such as BMW can use to build digital versions of factories before building them in the real world.
Those digital factories can also be used to train autonomous warehouse robots, as in the case of Club name Amazon (AMZN). Among the new features coming soon to Omniverse are generative AI capabilities. Earlier Wednesday, Jim Cramer highlighted the benefits of the Omniverse for industrial use, along with the other software and hardware announcements at the conference, in predicting that Nvidia stock has more room to run. The Club price target remains at $450 per share ahead of Nvidia's latest quarterly earnings release two weeks from now, on Aug. 23.

Bottom line

Our investment in Nvidia hasn't gone on autopilot just because Jim designated the stock an "own it, don't trade it" position. We constantly test our long-term bullishness against things we read and see in the here and now. Shares of Nvidia may be slumping Wednesday, due in part to a supply warning Tuesday night from server maker Super Micro Computer. Jim reached out to Nvidia for comment, but the company is too close to its quarterly earnings report to address it. Bigger picture, however, Nvidia's recent updates on both the hardware and software sides only reinforce our view that the chipmaker — already worth over $1 trillion — will persist as the dominant AI enabler and, as a result, grow more valuable over time. To be sure, we expect AMD to eventually play in the expanding AI marketplace, offering an accelerated computing alternative to Nvidia. But it's early innings on that journey for AMD, and the traditional data-center market — where it's been stealing share from Intel thanks to its high-quality CPUs — remains a larger source of near-term revenue. Those dynamics justify our small position in AMD. When it comes to AI, though, Nvidia still wears the crown. (Jim Cramer's Charitable Trust is long NVDA, AMD, MSFT, AMZN. See here for a full list of the stocks.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade.
Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust's portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade. THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.