Hacker News with Generative AI: Computer Science

SQL, Homomorphisms and Constraint Satisfaction Problems (philipzucker.com)
Database queries are a surprisingly powerful tool that can solve seemingly intractable problems.
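The connection the title alludes to is that evaluating a conjunctive query amounts to finding a homomorphism, which is also what solving a finite-domain CSP is. As a minimal illustration (my own sketch, not code from the article), here is graph 3-coloring of a triangle written as a SQLite self-join, where each table alias plays the role of a variable and each inequality is a constraint:

```python
# Sketch (not from the article): a CSP solved as a SQL query.
# Each alias of the `color` table is a variable; WHERE clauses are the constraints.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE color(c TEXT)")
con.executemany("INSERT INTO color VALUES (?)", [("red",), ("green",), ("blue",)])

# Triangle graph: nodes x, y, z, all pairwise adjacent, so all colors must differ.
rows = con.execute("""
    SELECT x.c, y.c, z.c
    FROM color AS x, color AS y, color AS z
    WHERE x.c <> y.c AND y.c <> z.c AND x.c <> z.c
""").fetchall()

print(len(rows), "valid colorings, e.g.", rows[0])  # 6 valid colorings of a triangle
```

The join enumerates every satisfying assignment; an empty result set means the CSP is unsatisfiable.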
Grant Sanderson: Visualizing transformers and attention [video] (youtube.com)
U of T computational imaging researchers harness AI to fly with light in motion (cs.toronto.edu)
This is now possible, thanks to new research from University of Toronto computer scientists who have built an advanced camera setup that — for the first time — can visualize light in motion from any perspective, opening avenues for further inquiry into new types of 3D sensing techniques.
Decisions and Dragons (decisionsanddragons.com)
A guide to the perilous world of reinforcement learning.
Understanding the BM25 full text search algorithm (emschwartz.me)
BM25, or Best Match 25, is a widely used algorithm for full text search. It is the default in Lucene/Elasticsearch and SQLite, among others. Recently, it has become common to combine full text search and vector similarity search into "hybrid search". I wanted to understand how full text search works, and specifically BM25, so here is my attempt at understanding by re-explaining.
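For reference, the core of BM25 is a saturating term-frequency score weighted by inverse document frequency and normalized by document length. A rough Python sketch of the classic formula (the defaults k1 = 1.2 and b = 0.75 are conventional choices here, not necessarily what Lucene, Elasticsearch, or SQLite's FTS5 use):

```python
# Sketch of the classic BM25 scoring formula; parameters k1 and b are assumed defaults.
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    N = len(corpus)                                  # number of documents
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        n_t = sum(1 for d in corpus if term in d)    # documents containing the term
        idf = math.log((N - n_t + 0.5) / (n_t + 0.5) + 1)
        f = tf[term]
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

corpus = [["full", "text", "search"], ["vector", "search"], ["bm25", "ranking", "function"]]
print(bm25_score(["text", "search"], corpus[0], corpus))
```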
Large-Scale Dimension Reduction with Both Global and Local Structure (2021) [pdf] (jmlr.org)
Creating Your Own Programming Language – Laurence Tratt [video] (youtube.com)
Flattening ASTs with Arena Allocation (cs.cornell.edu)
Arenas, a.k.a. regions, are everywhere in modern language implementations.
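The flattening idea is to store every AST node in one contiguous array and replace child pointers with plain integer indices into that array. The article's examples are in Rust; the tiny Python sketch below only illustrates the same data layout:

```python
# Sketch (assumed example, not from the article): an arithmetic AST "flattened"
# into one growable array; child pointers become integer indices into the arena.
from dataclasses import dataclass

@dataclass
class Num:
    value: int

@dataclass
class Add:
    lhs: int   # index into the arena
    rhs: int   # index into the arena

arena = []          # the arena: one contiguous list owning every node

def alloc(node):
    arena.append(node)
    return len(arena) - 1   # an expression reference is just an index

def evaluate(ref):
    node = arena[ref]
    if isinstance(node, Num):
        return node.value
    return evaluate(node.lhs) + evaluate(node.rhs)

# (1 + 2) + 3, built bottom-up so children are allocated before their parents
e = alloc(Add(alloc(Add(alloc(Num(1)), alloc(Num(2)))), alloc(Num(3))))
print(evaluate(e))  # 6
```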
1/0 = 0 (hillelwayne.com)
Have a tweet:
You could have designed state of the art positional encoding (fleetwood.dev)
This post walks you through the step-by-step discovery of state-of-the-art positional encoding in transformer models. We will achieve this by iteratively improving our approach to encoding position, arriving at the Rotary Positional Encoding (RoPE) used in the latest Llama 3.2 release and most modern transformers. This post aims to limit the mathematical knowledge required to follow along, but some basic linear algebra, trigonometry and an understanding of self-attention are expected.
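For a taste of where the post ends up, here is a hedged numpy sketch of RoPE on a single query/key vector, rotating pairs of dimensions by position-dependent angles (the split-half pairing used below is one common convention, not necessarily the exact layout the post derives):

```python
# Minimal sketch of Rotary Positional Encoding (RoPE): pairs of dimensions are
# rotated by an angle proportional to the token position, so dot products of
# rotated queries and keys depend only on the relative offset between positions.
import numpy as np

def rope(x, position, base=10000.0):
    """Apply RoPE to a single vector x of even dimension at a given position."""
    d = x.shape[-1]
    half = d // 2
    freqs = base ** (-np.arange(half) * 2.0 / d)   # one frequency per dimension pair
    angles = position * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:half], x[half:]                    # pair dimension i with i + d/2
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

q = np.random.randn(8)
k = np.random.randn(8)
# Relative-position property: the score depends only on the offset (here, 2)
print(np.isclose(rope(q, 5) @ rope(k, 3), rope(q, 7) @ rope(k, 5)))  # True
```

The printed check illustrates RoPE's key property: attention scores are a function of relative, not absolute, position.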
Everything Is Just Functions: 1 week with David Beazley and SICP (notion.site)
Our brains are vector databases – here's why that's helpful when using AI (venturebeat.com)
In 2017, a breakthrough at Google transformed how machines understand language: the self-attention model. This innovation allowed AI to grasp context and meaning in human communication by treating words as mathematical vectors — precise numerical representations that capture relationships between ideas. Today, this vector-based approach has evolved into sophisticated vector databases, systems that mirror how our own brains process and retrieve information.
Distributed Systems 4th Edition (distributed-systems.net)
This is the fourth edition of “Distributed Systems.” We have stayed close to the setup of the third edition, including examples of (parts of) existing distributed systems close to where general principles are discussed. For example, we have included material on blockchain systems and discuss their various components throughout the book. We have, again, used special boxed sections for material that can be skipped at first reading.
Convolutional Differentiable Logic Gate Networks (arxiv.org)
With the increasing inference cost of machine learning models, there is a growing interest in models with fast and efficient inference.
SICP: The only computer science book worth reading twice? (2010) (simondobson.org)
I was talking to one of my students earlier, and lent him a book to read over summer. It was only after he’d left that I realised that — for me at any rate — the book I’d given him is probably the most seminal work in the whole of computer science, and certainly the book that’s most influenced my career and research interests.
280K Indian international students in the US, the majority doing CS (twitter.com)
Check if your performance intuition still works with CUDA (wordsandbuttons.online)
For those of you who don't know what CUDA is, let me explain. Imagine buses were never invented. There are cars, trains, planes, and motorcycles, just not buses. And one day someone smart asks himself: “wouldn't it be splendid to have cars that would fit a lot of people? One guy could be driving, and all the rest will enjoy the ride.” “Right, like trucks but for people!” “No-no-no, who on earth would ever want to travel by truck?
The Fallacies of Distributed Systems (francofernando.com)
More than 20 years ago, Peter Deutsch and others at Sun Microsystems came up with a list of false assumptions that many developers new to distributed applications always make.
Manual for PUB (a markup language in 1971) – Larry Tesler (nomodes.com)
PUB is a compiler which translates a manuscript into a document.
Thomas E. Kurtz has died (computerhistory.org)
With deep sadness, we say goodbye to computer pioneer Thomas Kurtz.
Wirth's Law (wikipedia.org)
Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.
Show HN: Knight's Graph – game based on the Knight's tour problem (apple.com)
Step into the world of logic and strategy with "Knight's Graph," an addictive puzzle game inspired by the centuries-old knight’s tour problem.
Inverse Symbolic Calculator (cecm.sfu.ca)
3rd edition of Ross Anderson's Security Engineering now free to download (lightbluetouchpaper.org)
Ross Anderson had agreed with his publisher, Wiley, that he would be able to make all chapters of the 3rd edition of his book Security Engineering available freely for download from his website. These PDFs are now available there.
A stubborn computer scientist accidentally launched the deep learning boom (arstechnica.com)
Ignoring negative feedback, Li pursued the project for more than two years. It strained her research budget and the patience of her graduate students. When she took a new job at Stanford in 2009, she took several of those students—and the ImageNet project—with her to California.
The Lost Reading Items of Ilya Sutskever's AI Reading List (tensorlabbet.com)
In this post: An attempt to reconstruct Ilya Sutskever's 2020 AI reading list (8 min read)
Steven Rudich (1961-2024) (computationalcomplexity.org)
Complexity theorist Steven Rudich passed away on October 29 at the age of 63.
The surprising effectiveness of test-time training for abstract reasoning [pdf] (mit.edu)
Brian Kernighan Reflects on Unix: A History and a Memoir [video] (youtube.com)
Binary vector embeddings are so cool (emschwartz.me)
Vector embeddings by themselves are pretty neat. Binary quantized vector embeddings are extra impressive. In short, they can retain 95+% retrieval accuracy with 32x compression and ~25x retrieval speedup. Let's get into how this works and why it's so crazy.
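Mechanically, binary quantization keeps only the sign bit of each embedding dimension, which is where the 32x figure comes from (1 bit in place of a 32-bit float), and retrieval becomes a Hamming-distance scan. A small numpy sketch of the idea (illustrative only; the accuracy and speedup numbers above are the article's):

```python
# Sketch of binary quantization: keep only the sign of each dimension, pack the
# bits, and rank candidates by Hamming distance instead of cosine similarity.
import numpy as np

def binarize(embeddings):
    """float32 embeddings of shape (n, d) -> packed bits of shape (n, d // 8)."""
    return np.packbits(embeddings > 0, axis=-1)

def hamming_search(query_bits, corpus_bits, k=5):
    """Return indices of the k corpus vectors closest to the query in Hamming distance."""
    xor = np.bitwise_xor(corpus_bits, query_bits)
    dists = np.unpackbits(xor, axis=-1).sum(axis=-1)
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
corpus = rng.standard_normal((1000, 1024)).astype(np.float32)   # 1000 docs, 1024-dim
query = corpus[42] + 0.1 * rng.standard_normal(1024).astype(np.float32)

top = hamming_search(binarize(query[None, :]), binarize(corpus))
print(top)  # document 42 should rank at or near the top
```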