
Apple researchers question AI’s reasoning ability in mathematics

By IANS | Updated: October 12, 2024 10:35 IST


New Delhi, Oct 12 A team of Apple researchers has questioned the formal reasoning capabilities of large language models (LLMs), particularly in mathematics.

They found that LLMs exhibit noticeable variance when responding to different instantiations of the same question.

Literature suggests that the reasoning process in LLMs is probabilistic pattern-matching rather than formal reasoning.

Although LLMs can match more abstract reasoning patterns, they fall short of true logical reasoning. Small changes in input tokens can drastically alter model outputs, indicating a strong token bias and suggesting that these models are highly sensitive and fragile.

“Additionally, in tasks requiring the correct selection of multiple tokens, the probability of arriving at an accurate answer decreases exponentially with the number of tokens or steps involved, underscoring their inherent unreliability in complex reasoning scenarios,” said Apple researchers in their paper titled “GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models.”
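The compounding effect the researchers describe can be illustrated with a back-of-the-envelope calculation (an illustrative sketch, not from the paper): if each token or step is selected correctly with some fixed probability and errors are independent, the chance of a fully correct chain shrinks exponentially with its length.

```python
def chain_accuracy(per_step: float, steps: int) -> float:
    """Probability that every step in a reasoning chain is correct,
    assuming each step succeeds independently with probability per_step."""
    return per_step ** steps

# Even a 95%-reliable step compounds badly over a longer chain:
print(round(chain_accuracy(0.95, 10), 3))   # ~0.599
print(round(chain_accuracy(0.95, 50), 3))   # ~0.077
```

The independence assumption is a simplification, but it captures why per-step reliability alone is a poor guarantee of end-to-end accuracy on complex problems.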

The ‘GSM8K’ benchmark is widely used to assess the mathematical reasoning of models on grade-school level questions.

While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics.

To address these concerns, the researchers conducted a large-scale study on several state-of-the-art open and closed models.

“To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions,” the authors wrote.

GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models.
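A minimal sketch of the template idea (hypothetical, in the spirit of GSM-Symbolic rather than its actual code): names and numbers in a grade-school word problem become variables, so many surface variants of one question can be generated, each with a ground-truth answer computed from the same underlying formula.

```python
import random
import re

# Hypothetical symbolic template: {name}, {x}, {y} are the variables.
TEMPLATE = ("{name} picks {x} apples on Monday and {y} apples on Tuesday. "
            "How many apples does {name} have in total?")

def instantiate(seed: int) -> tuple[str, int]:
    """Generate one concrete variant of the template with its answer."""
    rng = random.Random(seed)
    name = rng.choice(["Asha", "Ben", "Chloe"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    question = TEMPLATE.format(name=name, x=x, y=y)
    answer = x + y  # ground truth follows from the symbolic formula
    return question, answer

q, a = instantiate(0)
print(q, "->", a)
```

Evaluating a model across many such instantiations, rather than on one fixed wording, is what lets the benchmark measure variance in responses to the "same" question.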

“Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question,” said the researchers, adding that overall, “our work provides a more nuanced understanding of LLMs’ capabilities and limitations in mathematical reasoning”.

Disclaimer: This post has been auto-published from an agency feed without any modifications to the text and has not been reviewed by an editor
