Hacker News with Generative AI: Reasoning

Mulberry: Empowering MLLM with o1-like Reasoning (arxiv.org)
In this work, we aim to develop an MLLM that understands and solves questions by learning to create each intermediate step of the reasoning involved, up to the final answer.
Coconut by Meta AI – Better LLM Reasoning with Chain of Continuous Thought? (aipapersacademy.com)
Large language models (LLMs) have demonstrated incredible reasoning abilities, reaching into an increasing number of domains in our lives.
Reimagining mathematics in a world of reasoning machines [video] (youtube.com)
Are LLMs capable of non-verbal reasoning? (arstechnica.com)
Processing in the "latent space" could help AI with tricky logical questions.
Phi-4: Microsoft's Newest Small Language Model Specializing in Complex Reasoning (microsoft.com)
Today we are introducing Phi-4, our 14B parameter state-of-the-art small language model (SLM) that excels at complex reasoning in areas such as math, in addition to conventional language processing.
Training LLMs to Reason in a Continuous Latent Space (arxiv.org)
Large language models (LLMs) are restricted to reasoning in the "language space", where they typically express the reasoning process as a chain of thought (CoT) to solve a complex reasoning problem.
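To make the "continuous thought" idea concrete, here is a minimal decoding sketch assuming a Hugging Face causal LM: instead of sampling a discrete token at each reasoning step, the last hidden state is appended back to the input embeddings as the next "thought". The model choice and the number of latent steps are illustrative assumptions, not the paper's training recipe.

```python
# Minimal sketch of "continuous thought" decoding, assuming a Hugging Face
# causal LM (Coconut's actual training procedure is more involved).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Q: If x + 2 = 5, what is x? Reason, then answer."
embeds = model.get_input_embeddings()(tok(prompt, return_tensors="pt").input_ids)

NUM_LATENT_STEPS = 4  # assumed hyperparameter
with torch.no_grad():
    for _ in range(NUM_LATENT_STEPS):
        out = model(inputs_embeds=embeds, output_hidden_states=True)
        # Feed the last hidden state back as the next input embedding
        # instead of sampling a discrete token: reasoning stays in the
        # continuous latent space rather than the language space.
        thought = out.hidden_states[-1][:, -1:, :]
        embeds = torch.cat([embeds, thought], dim=1)

    # After the latent steps, decode normally from the augmented context.
    out = model(inputs_embeds=embeds)
    next_id = out.logits[:, -1, :].argmax(dim=-1)
print(tok.decode(next_id))
```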
DeepThought-8B: A small, capable reasoning model (ruliad.co)
Today we're releasing DeepThought-8B, a small, capable AI reasoning model built on LLaMA-3.1 8B.
The Problem with Reasoners (notion.site)
LLaVA-O1: Let Vision Language Models Reason Step-by-Step (arxiv.org)
Large language models have demonstrated substantial advancements in reasoning capabilities, particularly through inference-time scaling, as illustrated by models such as OpenAI's o1. However, current Vision-Language Models (VLMs) often struggle to perform systematic and structured reasoning, especially when handling complex visual question-answering tasks.
Reasonable Person Principle (cs.cmu.edu)
Everyone will be reasonable. Everyone expects everyone else to be reasonable. No one is special. Do not be offended if someone suggests you are not being reasonable.
Detecting when LLMs are uncertain (thariq.io)
This post explains the reasoning techniques developed by XJDR in a project called Entropix.
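The core signal, as the post describes it, is the entropy (and the variance of surprisal, "varentropy") of the next-token distribution: when both are high, the model is torn between distinct continuations. A minimal sketch follows; the thresholds and the decision rule are illustrative assumptions, not Entropix's actual values.

```python
# Minimal sketch of entropy-based uncertainty detection over next-token
# logits, in the spirit of Entropix (thresholds here are made up).
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    p = np.exp(z)
    return p / p.sum()

def entropy_and_varentropy(logits):
    p = softmax(logits)
    logp = np.log(p + 1e-12)
    ent = -(p * logp).sum()                 # Shannon entropy (nats)
    varent = (p * (logp + ent) ** 2).sum()  # variance of surprisal around the mean
    return ent, varent

logits = np.random.randn(50_000)  # stand-in for a model's next-token logits
ent, varent = entropy_and_varentropy(logits)

# Assumed decision rule: high entropy AND high varentropy suggests the
# model is genuinely uncertain, a good point to branch or slow down
# rather than sampling greedily.
uncertain = ent > 3.0 and varent > 5.0
print(ent, varent, uncertain)
```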
Use Prolog to improve LLM's reasoning (shchegrikovich.substack.com)
On one hand, LLMs show unprecedented capabilities in reasoning; on the other, their reasoning is far from reliable.
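The pattern, as I read the post, is to have the LLM translate a word problem into Prolog facts and rules, then let a real Prolog engine do the exact deduction. A sketch using pyswip (which assumes SWI-Prolog is installed); the hard-coded clauses below stand in for LLM output.

```python
# Sketch of the LLM -> Prolog pattern: the model emits facts and rules as
# text, and a Prolog engine performs the deduction. Requires SWI-Prolog
# and `pip install pyswip`.
from pyswip import Prolog

prolog = Prolog()

# Imagine these lines came back from an LLM asked to formalize:
# "Alice is Bob's parent. Bob is Carol's parent. Who are Carol's ancestors?"
llm_generated_clauses = [
    "parent(alice, bob)",
    "parent(bob, carol)",
    "ancestor(X, Y) :- parent(X, Y)",
    "ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y)",
]
for clause in llm_generated_clauses:
    prolog.assertz(clause)

# The engine, not the LLM, carries out the multi-step inference.
for solution in prolog.query("ancestor(A, carol)"):
    print(solution["A"])  # -> bob, alice
```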
Google is working on AI software with human-like reasoning ability (msn.com)
LLMs still can't reason like humans (freethink.com)
Imagine what would happen if you attempted the following experiment: First, place a washed, fresh tomato and an equally clean carrot on top of a normal kitchen plate. With one hand behind your back, flip the non-stick plate upside-down, inspecting the underside of the plate for marks. Now, slowly turn the plate right-side up and count the number of vegetables remaining on top. How many are on the plate?
Deductive Verification for Chain-of-Thought Reasoning in LLMs (arxiv.org)
Large Language Models (LLMs) significantly benefit from Chain-of-Thought (CoT) prompting in performing various reasoning tasks.
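In the spirit of the paper, one way to verify a chain of thought deductively is to check each step against only the question and the steps before it, rather than judging the whole chain at once. A hedged sketch; `llm` is a hypothetical completion callable, not a real API.

```python
# Sketch of step-by-step deductive verification of a chain of thought.
# `llm` is a hypothetical text-completion callable to be plugged in.
def llm(prompt: str) -> str:
    raise NotImplementedError("wire up any completion endpoint here")

def verify_chain_of_thought(question: str, steps: list[str]) -> bool:
    """Validate each reasoning step against only the question and the
    steps that precede it, so errors are localized to a single step."""
    for i, step in enumerate(steps):
        context = "\n".join(steps[:i])
        verdict = llm(
            f"Premises:\n{question}\n{context}\n\n"
            f"Candidate step: {step}\n"
            "Does this step follow deductively from the premises alone? "
            "Answer yes or no."
        )
        if not verdict.strip().lower().startswith("yes"):
            return False  # the chain is unsound at this step
    return True
```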
Inductive or deductive? Rethinking the fundamental reasoning abilities of LLMs (arxiv.org)
OpenAI working on reasoning tech under code name 'Strawberry' (reuters.com)
Claude uses hidden chains of thought to plan artifact use (ycombinator.com)
Q*: Improving Multi-Step Reasoning for LLMs with Deliberative Planning (arxiv.org)
RAR-B: Reasoning as Retrieval Benchmark (arxiv.org)
Simple tasks showing reasoning breakdown in state-of-the-art LLMs (arxiv.org)
Can large language models reason? (arnaldur.be)
GitHub: Awesome-reasoning, a curated list of datasets for reasoning AIs (github.com/neurallambda)