36 points by lawrenceyan 31 days ago | 105 comments
Most leading chatbots routinely exaggerate science findings (uu.nl) It seems so convenient: when you are short of time, asking ChatGPT or another chatbot to summarise a scientific paper to quickly get the gist of it. But in up to 73 per cent of cases, these large language models (LLMs) produce inaccurate conclusions, a new study by Uwe Peters (Utrecht University) and Benjamin Chin-Yee (Western University and University of Cambridge) finds.
Neural Thermodynamic Laws for Large Language Model Training (arxiv.org) Beyond neural scaling laws, little is known about the laws underlying large language models (LLMs). We introduce Neural Thermodynamic Laws (NTL) -- a new framework that offers fresh insights into LLM training dynamics.
Getting a paper accepted (maxwellforbes.com) In 2019, I submitted a paper that was rejected with review scores 2.5, 3, 3. One week later, I resubmitted it with minor changes, and it was accepted with scores 4, 4.5, 4.5. For context, that's an almost unspeakably dramatic jump in scores, from "middling reject" to "strong accept."