
I distrust claims about LLMs' reasoning and math abilities.

I'm not just skeptical; the way we measure and report these things is majorly broken.

I just read a paper (arxiv.org/abs/2410.05229) that discusses a popular math skills dataset (GSM8K), why it's inadequate, and how LLM performance tanks on a more robust test.

Two big problems here:

Evaluating "mathematical reasoning," should include things like: an equation works the same way regardless of what numbers you plug in. These models tend to just memorize patterns of number tokens without generalization, but GSM8K can't detect that. It's embarrassing that we proudly report success, without considering if the benchmark actually tests the thing we care about.

Worse, this whole math test has leaked into the models' training data. We know this, and we can demonstrate that the models are memorizing the answers. Yet folks still report steady gains as if that means something. It's either willfully ignorant or deceitful.
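
The kind of check I have in mind, as a rough sketch (the "model" callable and the item lists are stand-ins for whatever API and data you actually use): score a model on the published GSM8K items, then on the same items with only the numbers swapped, and look at the gap.

def accuracy(model, items):
    # Fraction of (question, answer) pairs the model gets exactly right.
    return sum(1 for question, answer in items if model(question) == answer) / len(items)

def memorization_gap(model, original_items, perturbed_items):
    # Accuracy on the published benchmark items minus accuracy on number-swapped
    # rewrites of the same items. A large positive gap suggests memorization, not reasoning.
    return accuracy(model, original_items) - accuracy(model, perturbed_items)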

GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models (arXiv.org)

Recent advancements in Large Language Models (LLMs) have sparked interest in their formal reasoning capabilities, particularly in mathematics. The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade-school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics. To address these concerns, we conduct a large-scale study on several SOTA open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions. GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models.

Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and show that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is because current LLMs cannot perform genuine logical reasoning; they replicate reasoning steps from their training data. Adding a single clause that seems relevant to the question causes significant performance drops (up to 65%) across all state-of-the-art models, even though the clause doesn't contribute to the reasoning chain needed for the final answer. Overall, our work offers a more nuanced understanding of LLMs' capabilities and limitations in mathematical reasoning.
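
The "single added clause" result is easy to picture. A toy version of the manipulation, with a distractor sentence of my own invention rather than one from the paper, looks like this:

def add_distractor(question):
    # Append a plausible-sounding detail that does not change the arithmetic.
    return question.rstrip() + " Note that five of the pencils are slightly shorter than the others."

If the model's answer moves after that, the extra sentence was steering token-pattern matching rather than being filtered out by anything resembling reasoning.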