Building a Fast, SIMD/GPU-Friendly Random Number Generator for Fun and Profit
(vectrx.substack.com)
When writing shaders, SIMD code, or GPU kernels, one typically doesn’t need a cryptographically secure random number generator — something fast and statistically decent is often good enough.
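The generators this refers to are typically stateless integer hashes that every lane, pixel, or thread can evaluate independently from its own index. A minimal Python sketch using the widely known PCG-style hash (an illustration of the idea, not necessarily the post's exact construction):

```python
# A minimal sketch of a stateless, hash-style PRNG of the kind used in
# shaders and SIMD kernels: the widely used "pcg_hash" integer mix.
MASK32 = 0xFFFFFFFF

def pcg_hash(x: int) -> int:
    """Mix a 32-bit integer into a well-scrambled 32-bit integer."""
    state = (x * 747796405 + 2891336453) & MASK32
    word = (((state >> ((state >> 28) + 4)) ^ state) * 277803737) & MASK32
    return ((word >> 22) ^ word) & MASK32

def rand01(seed: int) -> float:
    """Map the hash to a float in [0, 1)."""
    return pcg_hash(seed) / 2**32

# Each pixel/lane derives its own stream from its index: no shared state.
print([round(rand01(i), 4) for i in range(5)])
```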
Monte Carlo Crash Course: Rendering
(thenumb.at)
So far, we’ve explored Monte Carlo methods using simple examples, like sampling the unit disk and sphere. Now, we’ll apply Monte Carlo to a more realistic task: simulating light traveling through a scene, or rendering.
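As a toy illustration of the kind of estimator involved (not the article's code): estimate the irradiance at a point under a constant sky by uniformly sampling directions over the hemisphere; for a sky of radiance 1 the analytic answer is pi, so the result is easy to check.

```python
import math, random

def sample_hemisphere():
    """Uniformly sample a direction on the upper hemisphere (pdf = 1 / (2*pi))."""
    u, v = random.random(), random.random()
    z = u                                   # cos(theta), uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * v
    return (r * math.cos(phi), r * math.sin(phi), z)

def sky_radiance(direction):
    """Toy environment: constant radiance of 1.0 from every direction."""
    return 1.0

def estimate_irradiance(n_samples=100_000):
    """Monte Carlo estimate of E = integral of L(w) * cos(theta) over the hemisphere."""
    pdf = 1.0 / (2.0 * math.pi)
    total = 0.0
    for _ in range(n_samples):
        w = sample_hemisphere()
        total += sky_radiance(w) * w[2] / pdf   # w[2] is cos(theta)
    return total / n_samples

# For a constant sky of radiance 1, the analytic answer is pi.
print(estimate_irradiance(), math.pi)
```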
Turing-Drawings
(github.com/maximecb)
Randomly generated Turing machines draw images and animations on a 2D canvas.
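One common way to set such a machine up (a "turmite"-style sketch; the project's exact state/action encoding may differ): a randomly generated transition table maps the current state and the symbol under the head to a new state, a symbol to write, and a move, with the symbols doubling as pixel colours.

```python
import random

# A randomly generated 2D Turing machine walking over a pixel grid.
WIDTH, HEIGHT = 64, 64
N_STATES, N_SYMBOLS = 4, 3
MOVES = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left

# Transition table: (state, symbol) -> (next state, symbol to write, move)
table = {
    (s, c): (random.randrange(N_STATES), random.randrange(N_SYMBOLS), random.choice(MOVES))
    for s in range(N_STATES) for c in range(N_SYMBOLS)
}

grid = [[0] * WIDTH for _ in range(HEIGHT)]
x, y, state = WIDTH // 2, HEIGHT // 2, 0

for _ in range(100_000):
    new_state, new_symbol, (dx, dy) = table[(state, grid[y][x])]
    grid[y][x] = new_symbol
    state = new_state
    x, y = (x + dx) % WIDTH, (y + dy) % HEIGHT   # wrap around the canvas

# Each symbol would be mapped to a colour when rendering the grid.
print(sum(cell != 0 for row in grid for cell in row), "cells touched")
```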
Procedural Textures with Hash Functions
(douglasorr.github.io)
I'm the sort of person who gets very excited when simple rules create complex behaviour. The other day, I needed a simple hash function that maps $(x, y)$ coordinates to a colour, and found a straightforward equation that ended up being astoundingly rich. Hence this post: to talk about and play with this function.
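Not the post's equation, but a generic example of the idea: a cheap integer hash of the coordinates whose bits are reused as RGB channels.

```python
# Generic illustration of mapping (x, y) integer coordinates to a colour
# with a cheap hash; the constants are arbitrary mixing values.
def hash2d(x: int, y: int) -> int:
    """Mix two integer coordinates into a 32-bit hash."""
    h = ((x * 0x27D4EB2D) ^ (y * 0x165667B1)) & 0xFFFFFFFF
    h ^= h >> 15
    h = (h * 0x2C1B3C6D) & 0xFFFFFFFF
    h ^= h >> 12
    return h

def colour(x: int, y: int) -> tuple:
    """Derive an (r, g, b) triple from the hash bits."""
    h = hash2d(x, y)
    return (h & 0xFF, (h >> 8) & 0xFF, (h >> 16) & 0xFF)

# A tiny patch of the resulting "texture".
for y in range(3):
    print([colour(x, y) for x in range(3)])
```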
OmniSVG
(github.com/OmniSVG)
OmniSVG is the first family of end-to-end multimodal SVG generators that leverage pre-trained Vision-Language Models (VLMs), capable of generating complex and detailed SVGs, from simple icons to intricate anime characters.
Bilinear interpolation on a quadrilateral using Barycentric coordinates
(gpuopen.com)
In computer graphics, we rarely encounter continuous data.
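As background, plain bilinear interpolation over a unit square is the two-step lerp sketched below; per its title, the article tackles the harder case of an arbitrary quadrilateral using barycentric coordinates.

```python
def bilerp(f00, f10, f01, f11, u, v):
    """Bilinear interpolation of corner values over the unit square.

    f00, f10, f01, f11 are the values at (0,0), (1,0), (0,1), (1,1);
    u, v in [0, 1] are the interpolation parameters.
    """
    a = f00 * (1 - u) + f10 * u      # interpolate along the bottom edge
    b = f01 * (1 - u) + f11 * u      # interpolate along the top edge
    return a * (1 - v) + b * v       # then blend between the two edges

# The value at the centre of the square is the average of the corners.
print(bilerp(0.0, 1.0, 2.0, 3.0, 0.5, 0.5))  # -> 1.5
```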
TVMC: Time-Varying Mesh Compression
(github.com/SINRG-Lab)
This repository contains the official authors' implementation associated with the paper "TVMC: Time-Varying Mesh Compression Using Volume-Tracked Reference Meshes".
ProtoGS: Efficient and High-Quality Rendering with 3D Gaussian Prototypes
(arxiv.org)
3D Gaussian Splatting (3DGS) has made significant strides in novel view synthesis but is limited by the substantial number of Gaussian primitives required, posing challenges for deployment on lightweight devices.
Gaussian Splatting Alternative: WebGL Implementation of Nvidia's SVRaster
(github.com/samuelm2)
A WebGL-based viewer for visualizing sparse voxel scenes from the Nvidia Sparse Voxels Rasterization paper. This viewer provides an interactive way to explore and visualize the voxel radiance field from the web. You can try the viewer at vid2scene.com/voxel
Measuring Acceleration Structures
(zeux.io)
Hardware-accelerated raytracing, as supported by DirectX 12 and Vulkan, relies on an abstract data structure that stores scene geometry, known as an “acceleration structure” and often referred to as a “BVH” or “BLAS”. Unlike geometry representations for rasterization, rendering engines cannot customize the data layout; unlike texture formats, the layout is not standardized across vendors.
The DDA Algorithm, explained interactively
(aaaa.sh)
I've written a number of voxel raytracers, and all of them (all the good ones, at least) use the Digital Differential Analyzer Algorithm for raycasting.
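The traversal in question steps a ray from cell to cell by always crossing whichever grid boundary comes next along the ray. A sketch of one common formulation (Amanatides & Woo style) in 2D; the article's interactive version may differ in details:

```python
import math

def dda_traverse(origin, direction, max_steps=64):
    """Yield the integer (x, y) cells a 2D ray passes through, in order."""
    x, y = int(math.floor(origin[0])), int(math.floor(origin[1]))
    step_x = 1 if direction[0] > 0 else -1
    step_y = 1 if direction[1] > 0 else -1

    # Ray-parameter distance between successive x/y boundaries (tDelta) and
    # to the first boundary in each axis (tMax). Guard against zero components.
    inf = float("inf")
    t_delta_x = abs(1.0 / direction[0]) if direction[0] != 0 else inf
    t_delta_y = abs(1.0 / direction[1]) if direction[1] != 0 else inf
    next_x = (x + 1) - origin[0] if step_x > 0 else origin[0] - x
    next_y = (y + 1) - origin[1] if step_y > 0 else origin[1] - y
    t_max_x = next_x * t_delta_x if direction[0] != 0 else inf
    t_max_y = next_y * t_delta_y if direction[1] != 0 else inf

    for _ in range(max_steps):
        yield x, y
        if t_max_x < t_max_y:        # the next boundary crossed is vertical...
            x += step_x
            t_max_x += t_delta_x
        else:                        # ...or horizontal
            y += step_y
            t_max_y += t_delta_y

# Cells visited by a ray from (0.5, 0.5) heading mostly to the right.
print(list(dda_traverse((0.5, 0.5), (1.0, 0.4), max_steps=8)))
```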
The demoscene as a UNESCO heritage in Sweden
(goto80.com)
The demoscene has become a national UNESCO heritage in Sweden, thanks to an application that Ziphoid and I submitted last year.
Actually drawing some ovals – that are not ellipses (2017)
(medium.com)
In the last part I hopefully made it clear why you wouldn’t want to use an actual ellipse when making a real object: curves constructed from multiple fixed-radius arcs are much more useful and look just the same.
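For a concrete example of the arc construction (one standard four-centre recipe, not necessarily the article's): pick the small end-arc radius, then solve the tangency condition so the small and large arcs join smoothly.

```python
import math

def four_centre_oval(a, b, r):
    """Four-centre oval with semi-axes a > b: two small end arcs of radius r
    (r < b) centred at (+-c, 0), and two large arcs of radius R, the top arc
    centred at (0, -d) and the bottom at (0, d), chosen so the arcs meet
    tangentially. Returns ((c, r), (d, R)).
    """
    c = a - r                                    # end arc reaches x = a
    d = (c * c - (b - r) ** 2) / (2 * (b - r))   # from the tangency condition
    R = b + d                                    # large arc reaches y = b
    # Tangency requires the centre distance to equal R - r.
    assert math.isclose(math.hypot(c, d), R - r)
    return (c, r), (d, R)

# An oval 4 wide and 2 tall (a=2, b=1), with end arcs of radius 0.5.
print(four_centre_oval(2.0, 1.0, 0.5))
```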
The Wrong Way to Use a Signed Distance Function (SDF)
(winterbloed.be)
Disclaimer: there’s nothing wrong with using an SDF this way.
When Gaussian Splatting Meets 19th Century 3D Images
(shkspr.mobi)
Depending on which side of the English Channel / La Manche you sit on, photography was invented either by Englishman Henry Fox Talbot in 1835 or Frenchman Louis Daguerre in 1839.
Bolt3D: Generating 3D Scenes in Seconds
(szymanowiczs.github.io)
Feed-forward 3D scene generation in 6.25s on a single GPU.
Laplacian Mesh Smoothing by Throwing Vertices
(nosferalatu.com)
In this blog post, I’ll talk about smoothing and blurring 3D meshes using Laplacian mesh smoothing.
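The core idea is simple: repeatedly move each vertex a fraction of the way toward the average of its one-ring neighbours. A minimal sketch (the post's exact weighting and boundary handling may differ):

```python
def laplacian_smooth(vertices, triangles, lam=0.5, iterations=10):
    """One simple Laplacian smoothing scheme.

    vertices: list of (x, y, z); triangles: list of (i, j, k) index triples.
    Each iteration moves every vertex a fraction lam toward the average of
    its one-ring neighbours.
    """
    # Build one-ring neighbourhoods from the triangle list.
    neighbours = [set() for _ in vertices]
    for i, j, k in triangles:
        neighbours[i].update((j, k))
        neighbours[j].update((i, k))
        neighbours[k].update((i, j))

    verts = [list(v) for v in vertices]
    for _ in range(iterations):
        new_verts = []
        for i, v in enumerate(verts):
            if not neighbours[i]:
                new_verts.append(v)
                continue
            avg = [sum(verts[n][c] for n in neighbours[i]) / len(neighbours[i])
                   for c in range(3)]
            new_verts.append([v[c] + lam * (avg[c] - v[c]) for c in range(3)])
        verts = new_verts
    return verts

# Example: smooth a single "spiky" tetrahedron a few times.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 5)]
tris = [(0, 1, 2), (0, 1, 3), (1, 2, 3), (0, 2, 3)]
print(laplacian_smooth(tet, tris, lam=0.5, iterations=3)[3])  # spike pulled inward
```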
Ask HN: 2x Arc A770 or 1x Radeon 7900 XTX for llama.cpp
(ycombinator.com)
Can't find an "apples to apples" performance comparison on QwQ 32B (4-bit); can anyone help me with the decision on which solution to pick?
A GS-Cache Inference Framework for Large-Scale Gaussian Splatting Models
(arxiv.org)
Rendering large-scale 3D Gaussian Splatting (3DGS) models faces significant challenges in achieving real-time, high-fidelity performance on consumer-grade devices.
The Early History of Deferred Shading and Lighting
(sites.google.com)
At the 2004 Game Developers Conference, Matt Pritchard (one of my coworkers at the now-closed Ensemble Studios, who wrote Age of Empires 2's graphics code), John Brooks (CTO and my boss at Blue Shift Inc., and the graphics programmer on "Super Mario Wacky Worlds"), and I came out of the cold and gave a fairly rushed but groundbreaking one-hour presentation on real-time deferred rendering techniques to an audience of 300-400 people.
Show HN: Rust Vector and Quaternion Lib
(github.com/David-OConnor)
Vectors, quaternions, and matrices for general-purpose use and computer graphics.
Get Started with Neural Rendering Using Nvidia RTX Kit (Vulkan)
(nvidia.com)
Neural rendering is the next era of computer graphics. By integrating neural networks into the rendering process, we can take dramatic leaps forward in performance, image quality, and interactivity to deliver new levels of immersion.