Debating If LLM Reasoning Is "Actually Reasoning" Is Meaningless


Ever since chain-of-thought (CoT) prompting was proposed, one of the most heated debates in AI has been whether LLMs can truly reason. The debate is meaningless without a clear definition of reasoning. In our context, what we call LLM reasoning is simply the model generating more intermediate tokens before reaching a final answer. Nothing more is promised.
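
To make "more intermediate tokens" concrete, here is a minimal sketch contrasting a direct-answer prompt with a CoT-style prompt. The `generate` function and its canned completions are placeholders standing in for a real LLM call, not output from any actual model.

```python
# Minimal sketch: the only operational difference between "reasoning" and
# "non-reasoning" output, in this framing, is how many intermediate tokens
# the model emits before the final answer.

def generate(prompt: str) -> str:
    """Placeholder for an LLM completion call; returns canned text for illustration."""
    canned = {
        "Q: If a train travels 60 km in 45 minutes, what is its speed in km/h? A:":
            "80 km/h",
        "Q: If a train travels 60 km in 45 minutes, what is its speed in km/h? "
        "Let's think step by step. A:":
            "45 minutes is 0.75 hours. Speed = 60 km / 0.75 h = 80 km/h. "
            "Final answer: 80 km/h",
    }
    return canned[prompt]

question = "Q: If a train travels 60 km in 45 minutes, what is its speed in km/h? "

direct = generate(question + "A:")
cot = generate(question + "Let's think step by step. A:")

# Word counts serve as a crude proxy for token counts: the CoT prompt elicits
# more intermediate text before the same final answer.
print(len(direct.split()), "words (direct):", direct)
print(len(cot.split()), "words (CoT):   ", cot)
```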