Hacker News with Generative AI: Performance Optimization

Train faster static embedding models with sentence transformers (huggingface.co)
This blog post introduces a method to train static embedding models that run 100x to 400x faster on CPU than state-of-the-art embedding models, while retaining most of the quality. This unlocks a lot of exciting use cases, including on-device and in-browser execution, edge computing, and low-power and embedded applications.
Fedora 42 Looks to Ship Optimized Executables for Different x86_64 Capabilities (phoronix.com)
Fedora Linux already supports glibc HWCAPs, allowing libraries to be built for different x86_64 micro-architecture feature levels for performance-sensitive code, where it can pay off by leveraging AVX/AVX2 or other newer Intel/AMD CPU instruction set extensions. For Fedora 42, there is now a proposal to extend that further and allow binary executables to also leverage glibc HWCAPs for better performance.
CSSWind: Bloat-Free Component Styling (xeiaso.net)
What you need when even HTMX is too much.
YJIT 3.4: Even Faster and More Memory-Efficient (railsatscale.com)
It’s 2025, and once again the YJIT team brings you a new version of YJIT that is even faster, more stable, and more memory-efficient.
Double-keyed caching: Browser cache partitioning (addyosmani.com)
The web’s caching model served us well for over two decades. Recently, in the name of privacy, it’s undergone a fundamental shift that challenges many of our performance optimization assumptions. This is called Double-keyed Caching or cache-partitioning more generally. Here’s what changed, why it matters, and how to adapt.
Expressive Vector Engine – SIMD in C++ (github.com/jfalcou)
EVE is a re-implementation of the old EVE SIMD library by Falcou et al., which for a while was named Boost.SIMD. It's a C++20-and-onward implementation of a type-based wrapper around SIMD extension sets for most current architectures. It aims to show how C++20 can be used to design and implement an efficient, low-level, high-abstraction library suited for high performance.
Breaking Up with Long Tasks or: how I learned to group loops and wield the yield (perfplanet.com)
Arrays are in every web developer’s toolbox, and there are a dozen ways to iterate over them. Choose wrong, though, and all of that processing time will happen synchronously in one long, blocking task. The thing is, the most natural ways are the wrong ways. A simple for..of loop that processes each array item is synchronous by default, while Array methods like forEach and map can ONLY run synchronously.
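The remedy the title hints at is to chunk the work and yield control between chunks. Here is a minimal Python sketch of that idea, using `asyncio.sleep(0)` as the yield point (the browser analogue would be awaiting `scheduler.yield()` or a zero-delay timeout); `process_in_chunks` and `chunk_size` are illustrative names, not from the article:

```python
import asyncio

async def process_in_chunks(items, handle, chunk_size=100):
    """Process items in fixed-size chunks, yielding to the event loop
    between chunks so one long loop cannot block everything else."""
    results = []
    for i, item in enumerate(items):
        results.append(handle(item))
        if (i + 1) % chunk_size == 0:
            # Give the event loop a chance to run other tasks.
            await asyncio.sleep(0)
    return results
```

The total work is unchanged; it is just sliced into short tasks so the scheduler (or, in a browser, the main thread) can breathe between slices.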
MPTCP: Revolutionizing connectivity, one path at a time (cloudflare.com)
The Internet is designed to provide multiple paths between two endpoints. Attempts to exploit multi-path opportunities are almost as old as the Internet, culminating in RFCs documenting some of the challenges. Still, today, virtually all end-to-end communication uses only one available path at a time.
Speeding Up SQLite Inserts (julik.nl)
In my work I tend to reach for SQLite more and more. The type of work I find it most useful for these days is quickly amalgamating, dissecting, collecting, and analyzing large data sets. As I outlined in my Euruko talk on scheduling, a key element of the project was writing a simulator. That simulator outputs metrics - lots and lots of metrics, which resemble what our APM solution collects.
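Whatever the post's specific findings, the single biggest generic lever for SQLite insert speed is batching many inserts into one transaction, so the journal is synced once rather than once per row. A hedged sketch using Python's built-in `sqlite3` module (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (name TEXT, value REAL)")

rows = [("latency_ms", float(i)) for i in range(10_000)]

# One explicit transaction around the whole batch: committing per row
# forces a journal sync per insert, which dominates the runtime.
with conn:
    conn.executemany("INSERT INTO metrics (name, value) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM metrics").fetchone()[0]
```

`executemany` also reuses one prepared statement across all rows, avoiding re-parsing the SQL ten thousand times.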
Postgres UUIDv7 and per-backend monotonicity (brandur.org)
An implementation of UUIDv7 was committed to Postgres earlier this month. These have all the benefits of a v4 (random) UUID, but are generated in a more deterministic order using the current time, and perform considerably better on inserts into ordered structures like B-trees.
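To make the layout concrete, here is a minimal Python sketch of a UUIDv7 generator following RFC 9562: a 48-bit millisecond Unix timestamp, then the version and variant bits, then random bits. This illustrates the format only; it is not the Postgres implementation and makes no per-backend monotonicity guarantee:

```python
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    """Illustrative UUIDv7: 48 timestamp bits, 4 version bits,
    2 variant bits, and the remaining 74 bits random."""
    ts_ms = time.time_ns() // 1_000_000
    raw = bytearray(ts_ms.to_bytes(6, "big") + os.urandom(10))
    raw[6] = (raw[6] & 0x0F) | 0x70  # set version to 7
    raw[8] = (raw[8] & 0x3F) | 0x80  # set the RFC 4122 variant
    return uuid.UUID(bytes=bytes(raw))
```

Because the most significant bits are the timestamp, values generated in different milliseconds sort in creation order, which is exactly what keeps B-tree inserts appending near the rightmost leaf instead of splattering across the index.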
Static search trees: faster than binary search (curiouscoding.nl)
In this post, we will implement a static search tree (S+ tree) for high-throughput searching of sorted data, as introduced on Algorithmica.
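For flavor, here is a Python sketch of the simpler Eytzinger-layout binary search that such static trees generalize: the sorted array is reordered into BFS order so that each comparison's children sit at predictable, cache-friendly indices. This is not the post's S+ tree, just the underlying layout idea; the function names are mine:

```python
def eytzinger(sorted_vals):
    """Reorder a sorted list into Eytzinger (BFS) layout, 1-indexed."""
    n = len(sorted_vals)
    out = [None] * (n + 1)
    it = iter(sorted_vals)

    def fill(k):
        if k <= n:
            fill(2 * k)       # left subtree takes the smaller values
            out[k] = next(it)
            fill(2 * k + 1)

    fill(1)
    return out

def lower_bound(tree, x):
    """Smallest value >= x, or None if every value is < x."""
    n = len(tree) - 1
    k = 1
    while k <= n:
        k = 2 * k if tree[k] >= x else 2 * k + 1
    # k's trailing 1-bits count how many final steps went right;
    # shifting past the lowest 0-bit lands on the answer node.
    k >>= (~k & (k + 1)).bit_length()
    return tree[k] if k else None
```

The loop body is branch-predictable (`k` is computed arithmetically from the comparison), which is a large part of why this layout outruns classic binary search at high throughput.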
Optimizing Ruby's JSON, Part 4 (byroot.github.io)
In the previous post, we established that as long as ruby/json wasn’t competitive on micro-benchmarks, public perception wouldn’t change. Since what made ruby/json appear so bad on micro-benchmarks was its setup cost, we had to find ways to reduce it further.
Peephole optimizations: adding `opt_respond_to` to the Ruby VM, part 4 (jpcamara.com)
In The Ruby Syntax Holy Grail: adding opt_respond_to to the Ruby VM, part 3, I found what I referred to as the “Holy Grail” of Ruby syntax. I’m way overstating it, but it’s a readable, sequential way of viewing how a large portion of the Ruby syntax is compiled.
Subprocess: Don't close all file descriptors by default (close_fds=False) (python.org)
To make subprocess faster, I propose to no longer close all file descriptors by default in subprocess: change the Popen close_fds parameter's default to False (close_fds=False).
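For context, this is the parameter in question. With `close_fds=True` (today's default), the child sweeps and closes every inherited descriptor above stdin/stdout/stderr before exec, a per-spawn cost; since Python 3.4 (PEP 446), new descriptors are non-inheritable by default anyway, which is the basis of the proposal:

```python
import subprocess
import sys

# close_fds=True is the current default being debated; the proposal
# would flip it to False, relying on descriptors already being
# non-inheritable (O_CLOEXEC) by default since PEP 446.
out = subprocess.run(
    [sys.executable, "-c", "print('hi')"],
    close_fds=True,
    capture_output=True,
    text=True,
)
```

Code that deliberately passes inheritable descriptors to children (e.g. via `pass_fds`) is unaffected either way; the change would only skip the defensive close-everything sweep.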
The intricacies of implementing memoization in Ruby (denisdefreyne.com)
In the never-ending quest to write code that is performant, we have many techniques at our disposal. One of those techniques is memoization (that’s memoization, not memorization — there’s no “r”!), which boils down to storing the results of expensive function calls, so that these expensive functions do not need to be called more often than absolutely necessary.
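The article digs into Ruby-specific intricacies, but the core idea translates to any language. A minimal Python sketch using the standard library's cache decorator (`call_count` is only there to show the body runs once):

```python
import functools

call_count = 0

@functools.lru_cache(maxsize=None)
def expensive(n: int) -> int:
    """Stand-in for an expensive computation; results are cached by
    argument, so repeat calls skip the body entirely."""
    global call_count
    call_count += 1
    return n * n

expensive(21)
expensive(21)  # second call is served from the cache
```

The "intricacies" the post's title promises tend to show up exactly where this sketch is too simple: distinguishing "cached nil/None" from "not yet computed", thread safety, and cache invalidation.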
Native FlameGraphViewer (laladrik.xyz)
There is something in Rust Analyzer that I would like to fix, and doing so requires understanding its interaction with Chalk. To find a starting point, I ran Rust Analyzer under Linux perf to get the call tree represented as a flame graph. The flame graph was so big that the browser took several seconds to render it, hover events were delayed, and nothing happened when I tried to open a frame of the graph.
Reads Causing Writes in Postgres (jesipow.com)
It is good practice to regularly inspect the statements running in the hot path of your Postgres instance. One way to do this is to examine the pg_stat_statements view, which shows various statistics about the SQL statements executed by the Postgres server.
Show HN: Bodo – high-performance compute engine for Python data processing (github.com/bodo-ai)
Bodo is a cutting-edge compute engine for large-scale Python data processing. Powered by an innovative auto-parallelizing just-in-time compiler, Bodo transforms Python programs into highly optimized, parallel binaries without requiring code rewrites, which makes it 20x to 240x faster than alternatives!
Optimizing Ruby's JSON, Part 1 (byroot.github.io)
I was recently made maintainer of the json gem, and aside from fixing some old bugs, I focused quite a bit on its performance, so that it is now the fastest JSON parser and generator for Ruby on most benchmarks.
Valhalla – Java's Epic Refactor (inside.java)
Project Valhalla wants to heal the rift in Java’s type system between classes and primitives by introducing value classes, which “code like a class, work like an int” and offer a flat and dense memory layout.
In Search of a Faster SQLite (avi.im)
SQLite is already fast. But can we make it even faster? Researchers at the University of Helsinki and Cambridge began with this question and published a paper, “Serverless Runtime / Database Co-Design With Asynchronous I/O”. They demonstrate up to a 100x reduction in tail latency. These are my notes on the paper.
Algorithms for high performance terminal apps (textualize.io)
I've had the fortune of being able to work full-time on a FOSS project for the last three-plus years.
Fair Go vs. Elixir Benchmarks (github.com/antonputra)
The code previously used Jason.encode!, but Jason.encode_to_iodata! should be preferred when writing to IO devices. This should increase performance and reduce memory usage, and is what frameworks such as Phoenix use by default.
My wish for VFS or filesystem level cgroup (v2) IO limits (utoronto.ca)
I wish Linux cgroups (v2 of course) had an option/interface that limited *filesystem* IO that you could do, read and/or write.
Turning Off Zen 4's Op Cache for Curiosity and Giggles (chipsandcheese.com)
CPUs start executing instructions by fetching those instruction bytes from memory and decoding them into internal operations (micro-ops). Getting data from memory and operating on it consumes power and incurs latency. Micro-op caching is a popular technique to improve on both fronts, and involves caching micro-ops that correspond to frequently executed instructions.
Building a Tiny CDN with Pyinfra and Chimera Linux (wezm.net)
In my quest to make linkedlist.org—my link blog—faster, I set up multiple deployments around the world. I used pyinfra to automate the process and Chimera Linux as the host operating system. Join me on this adventure in over-engineering to see how I dropped the average response time across nine global locations from 807ms to 189ms without spending a fortune.
Using vectorization in C# to boost performance (btburnett.com)
As an application developer, I rarely get to dig into low-level C# code. I feel like this is probably true for most of my fellow C# developers as well. We build our applications on top of the excellent work of the .NET team and others and watch performance improve each year as if by magic. However, every now and then I get my hands a bit dirtier, and it’s a lot of fun.
F# developer stories: how we've fixed a 9-year-old performance issue (microsoft.com)
Programming language authors have to think about many things at once: overall language design, runtime dangers, possible feature misuse, backward compatibility, forward compatibility, and so on. All these aspects, together with communication hiccups and time constraints, might get in the way of some seemingly clear and manageable problems.
Linux EFI Zboot Abandoning "Compression Library Museum", Focusing on Gzip, ZSTD (phoronix.com)
The Linux kernel EFI Zboot code for carrying the Linux kernel image for EFI systems in compressed form is doing away with its "compression library museum" of offering Gzip, LZ4, LZMA, LZO, XZ, and Zstd compression options to instead just focus on Gzip and Zstd compression support.