Hacker News with Generative AI: Reasoning

Grok 3 Beta – The Age of Reasoning Agents (x.ai)
We are thrilled to unveil an early preview of Grok 3, our most advanced model yet, blending superior reasoning with extensive pretraining knowledge.
Reasoning models are just LLMs (antirez.com)
It’s not new, but it’s accelerating. People who used to say that LLMs were a fundamentally flawed way to reach any useful reasoning, and in general to develop any useful tool with some degree of generality, are starting to shuffle the deck in the hope of looking less wrong. They say: “the progress we are seeing is due to the fact that models like OpenAI o1 or DeepSeek R1 are not just LLMs”.
LIMO: Less Is More for Reasoning (arxiv.org)
We present a fundamental discovery that challenges our understanding of how complex reasoning emerges in large language models.
Demystifying Long Chain-of-Thought Reasoning in LLMs (arxiv.org)
Scaling inference compute enhances reasoning in large language models (LLMs), with long chains-of-thought (CoTs) enabling strategies like backtracking and error correction.
Understanding Reasoning LLMs (sebastianraschka.com)
This article describes the four main approaches to building reasoning models, or how we can enhance LLMs with reasoning capabilities. I hope this provides valuable insights and helps you navigate the rapidly evolving literature and hype surrounding this topic.
Microsoft Phi 4 with R1 Reasoning (huggingface.co)
These LoRA adapters were trained using diverse reasoning datasets that incorporate structured Thought and Solution responses to enhance logical inference.
A Visual Guide to Reasoning LLMs (maartengrootendorst.com)
DeepSeek-R1, OpenAI o3-mini, and Google Gemini 2.0 Flash Thinking are prime examples of how LLMs can be scaled to new heights through “reasoning” frameworks.
Efficient Reasoning with Hidden Thinking (arxiv.org)
Chain-of-Thought (CoT) reasoning has become a powerful framework for improving complex problem-solving capabilities in Multimodal Large Language Models (MLLMs). However, the verbose nature of textual reasoning introduces significant inefficiencies.
OpenAI launches o3-mini, its latest 'reasoning' model (techcrunch.com)
OpenAI on Friday launched a new AI “reasoning” model, o3-mini, the newest in the company’s o family of reasoning models.
DeepSeek-R1 at 3,872 tokens / second on a single Nvidia HGX H200 (nvidia.com)
DeepSeek-R1 is an open model with state-of-the-art reasoning capabilities. Instead of offering direct responses, reasoning models like DeepSeek-R1 perform multiple inference passes over a query, applying chain-of-thought, consensus, and search methods to generate the best answer.
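The “consensus” idea mentioned above is often implemented as self-consistency: sample several chain-of-thought completions and majority-vote on the final answers. A minimal sketch, assuming a `generate` callable that stands in for a sampling LLM API (the stub below is hypothetical, not DeepSeek-R1’s actual interface):

```python
import itertools
from collections import Counter

def self_consistency(generate, prompt, n_samples=5):
    """Consensus over several sampled completions: draw n final answers,
    then return the most common one (majority vote)."""
    answers = [generate(prompt) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for a sampling LLM call: yields varying final answers.
fake_answers = itertools.cycle(["42", "42", "41", "42", "40"])
def fake_generate(prompt):
    return next(fake_answers)

print(self_consistency(fake_generate, "What is 6*7?"))  # prints "42"
```

With real models, each sample would be an independent high-temperature chain-of-thought generation, and only the extracted final answer is voted on.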
Open-R1: an open reproduction of DeepSeek-R1 (huggingface.co)
OpenAI’s o1 model showed that when LLMs are trained to do the same—by using more compute during inference—they get significantly better at solving reasoning tasks like mathematics, coding, and logic.
Bespoke-Stratos: The unreasonable effectiveness of reasoning distillation (bespokelabs.ai)
We trained Bespoke-Stratos-32B, our reasoning model distilled from DeepSeek-R1 using Berkeley NovaSky’s Sky-T1 data pipeline.
Official DeepSeek R1 Now on Ollama (ollama.com)
DeepSeek’s first-generation reasoning models, achieving performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
Mulberry: Empowering MLLM with o1-like Reasoning (arxiv.org)
In this work, we aim to develop an MLLM that understands and solves questions by learning to create each intermediate step of the reasoning involved until the final answer.
Coconut by Meta AI – Better LLM Reasoning with Chain of Continuous Thought? (aipapersacademy.com)
Large language models (LLMs) have demonstrated incredible reasoning abilities, penetrating an increasing number of domains in our lives.
Reimagining mathematics in a world of reasoning machines [video] (youtube.com)
Are LLMs capable of non-verbal reasoning? (arstechnica.com)
Processing in the "latent space" could help AI with tricky logical questions.
Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning (microsoft.com)
Today we are introducing Phi-4, our 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing.
Training LLMs to Reason in a Continuous Latent Space (arxiv.org)
Large language models (LLMs) are restricted to reasoning in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem.
DeepThought-8B: A small, capable reasoning model (ruliad.co)
Today we're releasing DeepThought-8B, a small, capable AI reasoning model built on LLaMA-3.1 8B.
The Problem with Reasoners (notion.site)
LLaVA-O1: Let Vision Language Models Reason Step-by-Step (arxiv.org)
Large language models have demonstrated substantial advancements in reasoning capabilities, particularly through inference-time scaling, as illustrated by models such as OpenAI's o1. However, current Vision-Language Models (VLMs) often struggle to perform systematic and structured reasoning, especially when handling complex visual question-answering tasks.
Reasonable Person Principle (cs.cmu.edu)
Everyone will be reasonable. Everyone expects everyone else to be reasonable. No one is special. Do not be offended if someone suggests you are not being reasonable.
Detecting when LLMs are uncertain (thariq.io)
This post tries to explain the new reasoning techniques developed by XJDR in a new project called Entropix.
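One core signal Entropix-style techniques rely on is the entropy of the model’s next-token distribution: a flat distribution means the model is uncertain and may benefit from sampling differently. A minimal sketch of that measurement (illustrative only, not Entropix’s actual code):

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a next-token probability distribution.
    Higher entropy = the model is less certain which token comes next."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.01, 0.01, 0.01]   # one token dominates
uncertain = [0.25, 0.25, 0.25, 0.25]   # uniform over four tokens

print(token_entropy(confident) < token_entropy(uncertain))  # prints True
```

In practice the probabilities come from softmaxed logits at each decoding step, and high-entropy steps can trigger branching, re-sampling, or injected "thinking" tokens.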
Use Prolog to improve LLM's reasoning (shchegrikovich.substack.com)
On one side, LLMs show unprecedented capabilities in reasoning; on the other, reasoning in LLMs is not ideal.
Google is working on AI software with human-like reasoning ability (msn.com)
LLMs still can't reason like humans (freethink.com)
Imagine what would happen if you attempted the following experiment: First, place a washed, fresh tomato and an equally clean carrot on top of a normal kitchen plate. With one hand behind your back, flip the non-stick plate upside-down, inspecting the underside of the plate for marks. Now, slowly turn the plate right-side up and count the number of vegetables remaining on top. How many are on the plate?
Deductive Verification for Chain-of-Thought Reasoning in LLMs (arxiv.org)
Large Language Models (LLMs) significantly benefit from Chain-of-Thought (CoT) prompting in performing various reasoning tasks.
Inductive or deductive? Rethinking the fundamental reasoning abilities of LLMs (arxiv.org)
OpenAI working on reasoning tech under code name 'Strawberry' (reuters.com)