Hacker News with Generative AI: Computer Graphics

Show HN: 3DGS implementation in Nvidia Warp: clean, minimal, runs on CPU and GPU (github.com/guoriyue)
This project reimplements the core ideas of 3D Gaussian Splatting in a clean, minimalist Python codebase using NVIDIA Warp.
Running GPT-2 in WebGL: Rediscovering the Lost Art of GPU Shader Programming (nathan.rs)
Preface: A few weeks back, I implemented GPT-2 using WebGL and shaders (GitHub repo) which made the front page of Hacker News (discussion). By popular demand, here is a short write-up of the main ideas behind GPU shader programming (for general-purpose computing).
Particle Life simulation in browser using WebGPU (lisyarus.github.io)
You might know that I'm a sucker for physics simulations, and particle simulations in particular. Usually I implement something based on conventional physics, but recently I've stumbled upon a funny non-physical model that can display...well, let's call it life-like behavior.
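A minimal sketch of the kind of interaction rule "Particle Life" models use: each pair of particle types gets an attraction/repulsion coefficient, with hard repulsion at very short range. The constants and exact force shape here are assumptions for illustration, not necessarily what the post implements.

```python
import random

N_TYPES = 4
R_MAX = 0.1          # interaction radius (assumed)
# Random attraction/repulsion matrix between particle types.
attraction = [[random.uniform(-1.0, 1.0) for _ in range(N_TYPES)]
              for _ in range(N_TYPES)]

def pair_force(r, a, beta=0.3):
    """Force magnitude between two particles at distance r (normalized to
    R_MAX) for attraction coefficient a. Always repulsive when overlapping,
    attraction-matrix-driven at mid range, zero beyond R_MAX."""
    if r < beta:
        return r / beta - 1.0                                  # strong short-range repulsion
    if r < 1.0:
        return a * (1.0 - abs(2.0 * r - 1.0 - beta) / (1.0 - beta))
    return 0.0
```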
GPU-Driven Clustered Forward Renderer (logdahl.net)
Real-Time Grass Simulation in the Browser – Over 1M Blades at 60 FPS (techredux.co)
FLOWING GRASS FIELDS
A Zero-Level Set Preserving Technique for SDF Computation (jcgt.org)
Traditional and Neural Order-Independent Transparency (tobias-franke.eu)
Order independent transparency (OIT) is a technique in computer graphics that allows for accurate rendering of transparent objects without the need to sort them in a specific order based on their depth.
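For context on why sorting matters in the first place, here is a tiny demonstration that the standard "over" compositing operator is not commutative, which is exactly the problem OIT techniques work around (illustrative values, not from the article).

```python
def over(src_rgb, src_a, dst_rgb):
    """Composite a source color with alpha src_a over a destination color."""
    return tuple(src_a * s + (1.0 - src_a) * d for s, d in zip(src_rgb, dst_rgb))

background = (0.0, 0.0, 0.0)
red,  red_a  = (1.0, 0.0, 0.0), 0.5
blue, blue_a = (0.0, 0.0, 1.0), 0.5

print(over(red, red_a, over(blue, blue_a, background)))   # blue behind red -> (0.5, 0.0, 0.25)
print(over(blue, blue_a, over(red, red_a, background)))   # red behind blue -> (0.25, 0.0, 0.5)
# Different results for the same two layers, hence the usual need to sort.
```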
Inigo Quilez: computer graphics, mathematics, shaders, fractals, demoscene (iquilezles.org)
Please visit the landing page to find video tutorials on computer graphics and other resources; this page contains only the written tutorials.
How Computer Graphics Will Change the World [video] (1981) (youtube.com)
Precomputing Transparency Order in 3D (jacobdoescode.com)
Transparency — or more precisely, translucency — remains a problem when rendering in 3D. When you have translucent shapes, the order in which they get rendered is very important. Consider what happens if this is done incorrectly.
The Journal of Computer Graphics Techniques (jcgt.org)
Implicit UVs: Real-time semi-global parameterization of implicit surfaces [pdf] (baptiste-genest.github.io)
Mipmap selection in too much detail (pema.dev)
In this post, I want to shed some light on something I’ve been wondering about for a while: How exactly are mipmap levels selected when sampling textures on the GPU?
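As a baseline before the post's hardware details: a back-of-the-envelope version of the textbook mip level formula (the one the GLSL/D3D specs describe, computed from screen-space UV derivatives). Real GPUs approximate the derivatives from 2x2 pixel quads, which is part of what the post digs into.

```python
import math

def mip_level(duvdx, duvdy, tex_width, tex_height):
    """duvdx / duvdy: screen-space derivatives of the UV coordinates."""
    # Scale UV derivatives into texel space.
    ddx = (duvdx[0] * tex_width, duvdx[1] * tex_height)
    ddy = (duvdy[0] * tex_width, duvdy[1] * tex_height)
    # rho: the larger of the two footprint axis lengths.
    rho = max(math.hypot(*ddx), math.hypot(*ddy))
    return max(0.0, math.log2(rho))   # LOD 0 when magnifying
```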
15 Years of Shader Minification (ctrl-alt-test.fr)
How do demosceners create complex computer animations in just a few kilobytes? One of our secret weapons is Shader Minifier, a tool that minifies GLSL code. Over the years, it has evolved to pack more data into tiny executables, pushing the boundaries of what’s possible. In this blog post, we’ll go through its evolution.
A Taxonomy for Rendering Engines (c0de517e.com)
It's time to grow up.
Load-Store Conflicts (zeux.io)
meshoptimizer implements several geometry compression algorithms that are designed to take advantage of redundancies common in mesh data and decompress quickly - targeting many gigabytes per second in decoding throughput.
A Pixel Is Not a Little Square (1995) [pdf] (alvyray.com)
Pixel is a unit of length and area (nayuki.io)
Building a Fast, SIMD/GPU-Friendly Random Number Generator for Fun and Profit (vectrx.substack.com)
When writing shaders, SIMD code, or GPU kernels, one typically doesn’t need a cryptographically secure random number generator — something fast and statistically decent is often good enough.
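One well-known generator in this "fast and statistically decent" category is the 32-bit PCG hash popular in shader code; the sketch below is that hash transcribed to Python with explicit 32-bit masking, not necessarily the generator the post builds.

```python
def pcg_hash(state):
    # 32-bit PCG-style hash: LCG step followed by an output permutation.
    state = (state * 747796405 + 2891336453) & 0xFFFFFFFF
    word = (((state >> ((state >> 28) + 4)) ^ state) * 277803737) & 0xFFFFFFFF
    return ((word >> 22) ^ word) & 0xFFFFFFFF

def rand01(seed):
    # Map the 32-bit hash to a float in [0, 1).
    return pcg_hash(seed) / 2**32
```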
Monte Carlo Crash Course: Rendering (thenumb.at)
So far, we’ve explored Monte Carlo methods using simple examples, like sampling the unit disk and sphere. Now, we’ll apply Monte Carlo to a more realistic task: simulating light traveling through a scene, or rendering.
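The generic estimator the series builds on: draw samples x_i from a density p and average f(x_i) / p(x_i). Rendering swaps f for the light-transport integrand; the toy integral below is just to show the shape of the estimator.

```python
import math
import random

def mc_estimate(f, sample, pdf, n=100_000):
    """Monte Carlo estimate of the integral of f using samples from pdf."""
    return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

# Example: integrate cos(x) over [0, pi/2] with uniform samples (exact answer: 1).
estimate = mc_estimate(
    f=math.cos,
    sample=lambda: random.uniform(0.0, math.pi / 2),
    pdf=lambda x: 2.0 / math.pi,
)
print(estimate)
```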
Turing-Drawings (github.com/maximecb)
Randomly generated Turing machines draw images and animations on a 2D canvas.
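A minimal sketch of the idea: a randomly generated machine whose tape is a 2D grid of symbols (colors). The state/symbol counts, grid size, and transition format below are illustrative assumptions, not taken from the repo.

```python
import random

W = H = 64
N_STATES, N_SYMBOLS = 4, 3
grid = [[0] * W for _ in range(H)]

# transition[(state, symbol)] = (new_state, new_symbol, move)
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]
transition = {(s, c): (random.randrange(N_STATES),
                       random.randrange(N_SYMBOLS),
                       random.choice(MOVES))
              for s in range(N_STATES) for c in range(N_SYMBOLS)}

state, x, y = 0, W // 2, H // 2
for _ in range(100_000):
    state, grid[y][x], (dx, dy) = transition[(state, grid[y][x])]
    x, y = (x + dx) % W, (y + dy) % H   # wrap around the canvas
# `grid` now holds symbol indices that can be mapped to colors and drawn.
```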
Procedural Textures with Hash Functions (douglasorr.github.io)
I'm the sort of person who gets very excited when simple rules create complex behaviour. The other day, I needed a simple hash function that maps $(x, y)$ coordinates to a colour, and found a straightforward equation that ended up being astoundingly rich. Hence this post; to talk about and play with this function.
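A hedged stand-in for the kind of function the post plays with: a cheap integer hash of (x, y) mapped straight to an RGB color. The constants and mixing steps here are illustrative, not the post's equation.

```python
def hash_xy(x, y):
    h = (x * 374761393 + y * 668265263) & 0xFFFFFFFF   # mix with large primes
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) & 0xFFFFFFFF

def color(x, y):
    h = hash_xy(x, y)
    return (h & 0xFF, (h >> 8) & 0xFF, (h >> 16) & 0xFF)   # 8-bit RGB channels
```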
OmniSVG (github.com/OmniSVG)
OmniSVG is the first family of end-to-end multimodal SVG generators that leverage pre-trained Vision-Language Models (VLMs), capable of generating complex and detailed SVGs, from simple icons to intricate anime characters.
Bilinear interpolation on a quadrilateral using Barycentric coordinates (gpuopen.com)
In computer graphics, we rarely encounter continuous data.
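As a reminder of the operation in question: bilinear interpolation blends four corner values with weights (1-u)(1-v), u(1-v), (1-u)v, uv. The article's contribution is recovering suitable (u, v) on an arbitrary quadrilateral from barycentric coordinates; this sketch assumes (u, v) are already known.

```python
def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear blend of the four corner values at parameter (u, v) in [0,1]^2."""
    return ((1 - u) * (1 - v) * c00 + u * (1 - v) * c10 +
            (1 - u) * v * c01 + u * v * c11)

print(bilerp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # -> 1.5, the average of the corners
```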
TVMC: Time-Varying Mesh Compression (github.com/SINRG-Lab)
This repository contains the official authors implementation associated with the paper "TVMC: Time-Varying Mesh Compression Using Volume-Tracked Reference Meshes".
ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes (arxiv.org)
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required, posing challenges for deployment on lightweight devices.
Gaussian Splatting Alternative: WebGL Implementation of Nvidia's SVRaster (github.com/samuelm2)
A WebGL-based viewer for visualizing sparse voxel scenes from the Nvidia Sparse Voxels Rasterization paper. This viewer provides an interactive way to explore and visualize the voxel radiance field from the web. You can try the viewer at vid2scene.com/voxel
Images trapped in a feedback loop and analog fractals create each other (youtube.com)
Measuring Acceleration Structures (zeux.io)
Hardware accelerated raytracing, as supported by DirectX 12 and Vulkan, relies on an abstract data structure that stores scene geometry, known as “acceleration structure” and often referred to as “BVH” or “BLAS”. Unlike geometry representation for rasterization, rendering engines can not customize the data layout; unlike texture formats, the layout is not standardized across vendors.