Musk’s Grok-3 slightly outperforms Chinese DeepSeek AI: Report

By IANS | Updated: April 5, 2025 15:46 IST

New Delhi, April 5 As the artificial intelligence (AI) turf war escalates, Elon Musk-owned Grok and Chinese DeepSeek models now stand at the forefront of AI capability -- one optimised for accessibility and efficiency and the other for brute-force scale -- despite the vast disparity in training resources, a report showed on Saturday.

Grok-3 represents scale without compromise -- 200,000 NVIDIA H100s chasing frontier gains, while DeepSeek-R1 delivers similar performance using a fraction of the compute, signalling that innovative architecture and curation can rival brute force, according to Counterpoint Research.

Since February, DeepSeek has grabbed global headlines by open-sourcing its flagship reasoning model DeepSeek-R1 to deliver performance on a par with the world’s frontier reasoning models.

“What sets it apart isn’t just its elite capabilities, but the fact that it was trained using only 2,000 NVIDIA H800 GPUs — a scaled-down, export-compliant alternative to the H100, making its achievement a masterclass in efficiency,” said Wei Sun, principal analyst in AI at Counterpoint.

Musk’s xAI has unveiled Grok-3, its most advanced model to date, which slightly outperforms DeepSeek-R1, OpenAI’s o1 and Google’s Gemini 2.

“Unlike DeepSeek-R1, Grok-3 is proprietary and was trained using a staggering 200,000 H100 GPUs on xAI’s supercomputer Colossus, representing a giant leap in computational scale,” said Sun.

Grok-3 embodies the brute-force strategy — massive compute scale (representing billions of dollars in GPU costs) driving incremental performance gains. It’s a route only the wealthiest tech giants or governments can realistically pursue.

“In contrast, DeepSeek-R1 demonstrates the power of algorithmic ingenuity by leveraging techniques like Mixture-of-Experts (MoE) and reinforcement learning for reasoning, combined with curated and high-quality data, to achieve comparable results with a fraction of the compute,” explained Sun.
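The Mixture-of-Experts approach the analyst describes can be illustrated with a toy sketch: a gating network routes each input to only a few of many expert sub-networks, so compute per token scales with the number of activated experts rather than total parameters. The sizes and structure below are illustrative only, not DeepSeek-R1’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8   # total expert networks in the layer
top_k = 2       # experts activated per token (sparse activation)
d_model = 16    # hidden dimension

# Each "expert" is a small linear layer; a gate decides which ones run.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_model, n_experts))

def moe_forward(x):
    """Route one token vector x through only the top-k experts."""
    logits = x @ gate_w                 # gating score per expert
    top = np.argsort(logits)[-top_k:]   # indices of the k highest scores
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the selected experts
    # Only top_k of the n_experts matrices are multiplied, so compute
    # grows with k, not with the total parameter count.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d_model)
y = moe_forward(x)
print(y.shape)  # (16,)
```

Here only 2 of 8 experts fire per input, which is the source of the efficiency gain the report attributes to DeepSeek-R1.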

Grok-3 proves that throwing 100x more GPUs at a problem can rapidly yield marginal performance gains. But it also highlights sharply diminishing returns on investment (ROI), as most real-world users see minimal benefit from incremental improvements.
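The 100x figure follows directly from the GPU counts the report cites; treating an H800 as roughly comparable to an H100 is a simplifying assumption (the H800 is the scaled-down, export-compliant variant).

```python
# Back-of-the-envelope check of the "100x more GPUs" claim,
# using the GPU counts quoted by Counterpoint Research.
grok3_gpus = 200_000       # NVIDIA H100s on xAI's Colossus supercomputer
deepseek_r1_gpus = 2_000   # NVIDIA H800s used to train DeepSeek-R1

ratio = grok3_gpus / deepseek_r1_gpus
print(ratio)  # 100.0
```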

In essence, DeepSeek-R1 is about achieving elite performance with minimal hardware overhead, while Grok-3 is about pushing boundaries by any computational means necessary, said the report.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor

