Sam Altman replies to Nvidia CEO Jensen Huang’s expanded ‘Like Mad’ for OpenAI comment on stage; says ‘very …’



OpenAI CEO Sam Altman has publicly thanked Nvidia CEO Jensen Huang after Huang revealed aggressive efforts to expand computing capacity for the ChatGPT maker across multiple cloud platforms. In a post on social media platform X (formerly known as Twitter), Altman wrote, “Very grateful to Jensen for working to expand Nvidia’s capacity at AWS so much for us!” Altman’s remarks came shortly after Huang said that Nvidia will invest $30 billion in OpenAI, describing it as one of the last opportunities to back a ‘consequential company’ before it goes public. Huang made the comments earlier this month at the Morgan Stanley Technology, Media & Telecom Conference, where he emphasised Nvidia’s role in scaling AI infrastructure.

Expanding across clouds

Huang explained that Nvidia has been working ‘like mad’ to ramp up OpenAI’s computing power not only on Microsoft Azure, but also on Amazon Web Services (AWS) and Oracle Cloud Infrastructure. The expansion is meant to ensure that OpenAI has the GPU resources required to support its rapidly growing AI systems.


Beyond OpenAI, Nvidia is also expanding infrastructure support for other leading AI companies, including Anthropic and Meta Platforms, as competition intensifies to secure the computing backbone for next-generation AI models.

Nvidia is reportedly planning new chip for OpenAI

Nvidia is said to be working on a new processor for OpenAI built specifically for AI inference computing – the type of processing that allows AI models to respond to user queries. The announcement is reportedly expected at Nvidia’s GTC developer conference in San Jose next month, and the ChatGPT maker has already agreed to become one of the chip’s largest customers.

According to a report by The Wall Street Journal, this marks one of the most significant shifts in Nvidia’s business strategy since the start of the AI boom. Nvidia has long dominated the market for GPUs – specialised chips used for training AI models – with its Hopper, Blackwell and Rubin series, and most analysts reportedly estimate that the company controls over 90% of the GPU market.

But GPUs were designed with training in mind, and the AI industry is now shifting from building models to actually running them, which limits their usefulness. The new processor is designed around inference computing rather than training, and Nvidia will incorporate technology from Groq, a chip startup it acquired in a roughly $20 billion deal late last year.

