Hacker News with Generative AI: Hugging Face

Train faster static embedding models with sentence transformers (huggingface.co)
This blog post introduces a method to train static embedding models that run 100x to 400x faster on CPU than state-of-the-art embedding models while retaining most of their quality. This unlocks exciting use cases such as on-device and in-browser execution, edge computing, and low-power embedded applications. A hedged usage sketch follows below.
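As a rough illustration of why these models are attractive on CPU, here is a minimal sketch of loading and querying a static embedding model with the sentence-transformers library. The model id and the example sentences are assumptions for illustration, not taken from the post; substitute whichever static embedding model you actually train or download.

```python
# Minimal sketch: running a static embedding model on CPU with
# sentence-transformers. The model id below is an assumption for
# illustration; replace it with the static model you intend to use.
from sentence_transformers import SentenceTransformer

# Static embedding models have no transformer layers, so they load
# quickly and encode fast even on CPU-only machines.
model = SentenceTransformer(
    "sentence-transformers/static-retrieval-mrl-en-v1",  # assumed model id
    device="cpu",
)

sentences = [
    "Static embeddings trade some accuracy for a large speedup.",
    "They suit on-device, in-browser, and edge deployments.",
]

# encode() returns an array of shape (len(sentences), embedding_dim).
embeddings = model.encode(sentences)
print(embeddings.shape)

# Cosine similarity between the two example sentences.
print(model.similarity(embeddings[0], embeddings[1]))
```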
Spaces ZeroGPU: Dynamic GPU Allocation for Spaces (huggingface.co)
Improving Parquet Dedupe on Hugging Face Hub (huggingface.co)
The Xet team at Hugging Face is working on improving the efficiency of the Hub's storage architecture so that users can store and update data and models more easily and quickly.
Exponential growth brews 1M AI models on Hugging Face (arstechnica.com)
On Thursday, AI hosting platform Hugging Face surpassed 1 million AI model listings for the first time, marking a milestone in the rapidly expanding field of machine learning.
Hugging Face replacing Git LFS storage back end (xethub.com)
DevRel at HuggingFace (dx.tips)
Join Super AI Agent Hackathon at Stanford, Hosted by HuggingFace and Nexa AI (twitter.com)
ML for 3D Course on Hugging Face (huggingface.co)