dev7halo (HaloKim)
5 followers · 12 following
AI & ML interests
None yet
Recent Activity
Upvoted an article, 7 days ago: Accelerate ND-Parallel: A Guide to Efficient Multi-GPU Training
Reacted to badaoui's post with 🚀, 8 days ago:
Is there a "one-size-fits-all" recipe for quantizing Large Language Models? 🤔

As part of my ongoing work in mixed-precision quantization, I've been exploring this question by measuring layer-by-layer sensitivity. The goal is to see whether we can find universal rules for which layers can be quantized aggressively without impacting performance.

The results are fascinating and reveal two key insights:

1️⃣ Sensitivity profiles are like architectural "fingerprints." Models from the same family share strikingly similar sensitivity patterns. As you can see in the charts below for the Gemma and SmolLM families, the ranking and relative sensitivity of the layers remain remarkably consistent. This suggests that the underlying architecture is a primary driver of a model's quantization behavior.

2️⃣ A "universal" mixed-precision quantization strategy is challenging. While models within a family are similar, these "fingerprints" change dramatically across architectures such as LLaMA, Qwen, and StableLM. This highlights the difficulty of creating a generalized mixed-precision configuration that works optimally across all model families.

However, there is one near-universal truth we uncovered: the mlp.down_proj layer consistently emerges as one of the most sensitive components across all models studied. This finding strongly resonates with "The Super Weight in Large Language Models" (Mengxia Yu et al.), which identifies that functionally critical parameters, or "super weights," are concentrated in these down_proj layers. Our empirical results validate this theory, showing that these layers are highly intolerant to precision loss.

In short, while every architecture has a unique sensitivity profile (a fingerprint shaped not only by its core design but also by its training data and optimization approach), some components remain universally critical! What are your thoughts?
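One way to picture the layer-by-layer measurement the post describes is a quantization round-trip error per layer: quantize a layer's weights, dequantize them, and score the damage. The sketch below is a hypothetical toy, not the author's code; `fake_quantize`, `sensitivity`, and the example weights are all illustrative assumptions.

```python
# Hypothetical sketch of layer-by-layer quantization sensitivity,
# in the spirit of the post. The post does not publish its method,
# so this uses a simple weight round-trip error as a stand-in.

def fake_quantize(weights, bits=4):
    """Symmetric round-to-nearest quantize, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax
    if scale == 0:
        return list(weights)
    return [round(w / scale) * scale for w in weights]

def sensitivity(weights, bits=4):
    """Mean squared round-trip error: a cheap proxy for how much
    a layer degrades when its weights are quantized to `bits` bits."""
    dq = fake_quantize(weights, bits)
    return sum((w - d) ** 2 for w, d in zip(weights, dq)) / len(weights)

# Toy weights: down_proj is given an outlier-heavy profile, mirroring
# the post's observation that it tolerates precision loss poorly.
layers = {
    "self_attn.q_proj": [0.01, -0.02, 0.03, -0.01],
    "mlp.down_proj": [0.9, -0.05, 0.02, -1.2],
}
ranking = sorted(layers, key=lambda name: sensitivity(layers[name]),
                 reverse=True)
```

Ranking layers by this score reproduces the post's qualitative finding on the toy data: the outlier-heavy `mlp.down_proj` sits at the top, because a single large weight inflates the quantization scale and wastes precision on everything else.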
Reacted to mrs83's post with 👀, 8 days ago:
Introducing the Computer Says No Dataset: https://huggingface.co/datasets/ethicalabs/computer-says-no

An LLM can do almost anything, but should it? This dataset provides clear examples of when LLMs should decline requests, such as:
- Counting characters (e.g., "number of 'r's in 'raspberry'" – seriously, you've got this)
- Solving basic equations (like *5.9 = x + 5.11* – please, show that calculator some love)

Inspired by Little Britain's iconic "Computer Says No" sketch, we address a critical issue in AI systems today: the waste of using a rocket launcher to swat flies (aka powerful models for trivial tasks).

Goals:
- Reduce waste by saving compute for tasks that actually need it
- Guide users to better tools
- Spark discussion about ethical AI

This isn't a training set. It's a provocation: if we don't define AI's limits, who will?
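The post's point is easy to demonstrate: both of its example tasks reduce to a single line of ordinary code, no model required (a minimal sketch of the two examples above).

```python
# The two example tasks from the dataset, solved deterministically.

r_count = "raspberry".count("r")   # counting characters
x = 5.9 - 5.11                     # solving 5.9 = x + 5.11 for x

print(r_count)        # 3
print(round(x, 2))    # 0.79
```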
Organizations
dev7halo's activity
New activity in skt/A.X-4.0 (about 2 months ago)
ax vllm mcp template, jinja
👍 4 · #1 opened about 2 months ago by dev7halo
New activity in kakaocorp/kanana-1.5-8b-instruct-2505 (3 months ago)
kanana vllm mcp template, jinja
#1 opened 3 months ago by dev7halo
New activity in naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-1.5B (4 months ago)
Thanks for sharing!
👍 2 · 1 · #2 opened 4 months ago by dev7halo
New activity in meta-llama/Meta-Llama-Guard-2-8B (over 1 year ago)
Why do I get results without filling out the prompt?
1 · #11 opened over 1 year ago by dev7halo