Thursday, March 19, 2026

Nvidia Speeds Up: New AI Chips to Roll Out Annually!

Yonhap News

Jensen Huang, CEO of Nvidia, the leading player in Artificial Intelligence (AI) chips, has officially announced that the company's next-generation AI semiconductor, Rubin, will adopt sixth-generation high-bandwidth memory (HBM), HBM4. By shortening the AI chip release cycle from two years to one, the move is also expected to intensify the race among memory makers to develop HBM faster.

Huang partially revealed the specifications of Rubin, which is set to be unveiled in 2026, ahead of Computex 2024 in Taipei, Taiwan, on June 4. Rubin is the successor to Blackwell, which was introduced this year. It is named after Vera Rubin, the American astronomer who studied dark matter and the rotation speeds of galaxies. Nvidia also plans to launch its own Central Processing Unit (CPU), called Vera.

Until now, Nvidia has introduced a new architecture every two years: the Ampere-based A100 in 2020 and the Hopper-based H100 in 2022. The H100 is the GPU most favored as an AI accelerator by the global big tech companies investing in AI infrastructure. The Blackwell-based B100, unveiled in March, is scheduled to enter mass production in the third quarter and begin shipping at the end of the year. Yet Nvidia has announced a new model even before its predecessor enters mass production. "GPU development will proceed every year after Rubin," Huang said.

Accordingly, Nvidia will release Blackwell Ultra next year, Rubin in 2026, and Rubin Ultra in 2027. Blackwell Ultra dramatically improves on the server GPU Nvidia released in March 2024, while Rubin features a completely redesigned internal architecture.

Specifically, Rubin will pair HBM4 with its Graphics Processing Unit (GPU). This is the first time Nvidia has confirmed that HBM4 will be mounted on its next-generation chip. The most advanced HBM currently produced by SK Hynix, Samsung Electronics, and Micron is the fifth-generation HBM3E; HBM4 is still under development by the memory companies and is expected to be completed by 2025.

Rubin will carry 8 HBM4 stacks and Rubin Ultra 12, so demand for HBM is expected to rise, drawing attention to whether SK Hynix and Samsung Electronics will benefit. Unlike Blackwell, which is mass-produced on a 4-nanometer process, Rubin will use a 3-nanometer process.

Meanwhile, Nvidia has unveiled a new product lineup of RTX AI PCs equipped with GeForce RTX GPUs. The newly introduced RTX AI PCs include the Asus TUF A14/A16, Zephyrus G16, ProArt PX13/P16, MSI Stealth A16 AI+, and more.

Nvidia explained that content creators can simplify and automate their workflows using RTX-based AI PCs, while streamers can use AI-based background and noise removal features. The company added that the RTX AI Toolkit, which helps developers optimize and personalize their models, allows tasks to be processed faster and with a smaller memory footprint.
