All HF Hub posts
Post
1672
Qwen3-30B-A3B-Thinking-2507 🔥 the latest step in scaling thinking capabilities from the Alibaba Qwen team (quickstart sketch below).
Qwen/Qwen3-30B-A3B-Thinking-2507-FP8
✨ 30B total / 3B active - Apache 2.0
✨ Native 256K context
✨ SOTA coding, alignment, agentic reasoning
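As a quickstart, here is a minimal sketch (not from the post) of chatting with the FP8 checkpoint through transformers. It assumes a recent transformers release and FP8-capable hardware, and that the model emits its reasoning before a closing </think> tag, as Qwen3 thinking checkpoints do.

```python
# Minimal sketch: chat with the FP8 thinking checkpoint via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B-Thinking-2507-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain mixture-of-experts."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
text = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# Thinking checkpoints emit reasoning before a closing </think> tag;
# split it off to keep only the final answer.
answer = text.split("</think>")[-1].strip()
print(answer)
```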
Post
453
We've crossed 1 million repositories backed by Xet storage on Hugging Face!
You can follow our progress converting the Hub from Git LFS to Xet at jsulz/ready-xet-go
We have a lot of repos left to migrate, which means I have plenty of time to add more animations 🤪

DualityAI-RebekahBogdanoff
posted an update
2 days ago
Post
1781
The past week in open AI was insane 🔥 Here are some of my picks; find more here:
merve/releases-july-25-688768ca47fe3693407e02d1
💬 LLMs & VLMs
> Qwen/Qwen3-235B-A22B-Thinking-2507 had a new update (OS)
> Qwen/Qwen3-Coder-480B-A35B-Instruct is out with 480B total 35B active params 🤯 (OS)
> AllenAI dropped an update to allenai/olmOCR-7B-0725
> InternLM released internlm/Intern-S1 - 235B Qwen3 MoE + 6B InternViT encoder (OS)
> OmniSVG/OmniSVG is a new SVG generation VLM (OS)
🖼️ image/video/3D generation
> WanAI released the Wan2.2 series - both T2V and I2V 14B models for high-quality video generation (OS) multimodalart/wan-22-688767e313337b434ed55112
> Tencent dropped tencent/HunyuanWorld-1 - an image-to-3D scene generation model
Post
1676
Skywork UniPic 🔥 a unified autoregressive multimodal model for image understanding, generation, and editing, by Skywork (天工)
Skywork/skywork-unipic-6888c0789cdb82457b2acf32
✨ 1.5B - MIT License
✨ Runs on an RTX 4090
✨ Truly unified architecture
Post
1554
Qwen just released Qwen3-30B-A3B-Instruct-2507 🔥 an upgrade to the non-thinking-mode model
Qwen/Qwen3-30B-A3B-Instruct-2507
✨ 30B MoE / 3.3B active - Apache 2.0
✨ Strong gains in reasoning, math, coding, & multilingual tasks
✨ Native support for 256K long-context inputs

IlyasMoutawwakil
posted an update
about 14 hours ago
Post
665
Optimum: The Last v1 Release
Optimum v1.27 marks the final major release in the v1 series. As we close this chapter, we're laying the groundwork for a more modular and community-driven future:
- Optimum v2: A lightweight core package for porting Transformers, Diffusers, or Sentence-Transformers to specialized AI hardware/software/accelerators.
- Optimum-ONNX: A dedicated package where the ONNX/ONNX Runtime ecosystem lives and evolves, faster-moving and decoupled from the Optimum core.
🎯 Why this matters:
- A clearer governance path for ONNX, fostering stronger community collaboration and an improved developer experience.
- Faster innovation in a more modular, open-source environment.
💡 What this means:
- More transparency, broader participation, and faster development driven by the community and key actors in the ONNX ecosystem (PyTorch, Microsoft, Joshua Lochner, ...)
- A cleaner, more maintainable core Optimum, focused on extending HF libraries with specialized AI hardware/software/accelerator tooling, used by our partners (Intel Corporation, Amazon Web Services (AWS), AMD, NVIDIA, FuriosaAI, ...)
🛠️ Major updates I worked on in this release:
✅ Added support for Transformers v4.53 and SmolLM3 in ONNX/ONNX Runtime.
✅ Fixed batched inference/generation for all supported decoder model architectures (LLMs) - see the sketch below.
✨ Big shoutout to @echarlaix for leading the refactoring work that cleanly separated the ONNX exporter logic and enabled the creation of Optimum-ONNX.
Release Notes: https://lnkd.in/gXtE_qji
📦 Optimum: https://lnkd.in/ecAezNT6
Optimum-ONNX: https://lnkd.in/gzjyAjSi
#Optimum #ONNX #OpenSource #HuggingFace #Transformers #Diffusers
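To make the batched-generation fix above concrete, here is a minimal sketch (mine, not from the release notes) of the Optimum ONNX Runtime workflow it touches: export a decoder checkpoint to ONNX on the fly and run left-padded batched generation through ORTModelForCausalLM. The model id is just one example of a newly supported SmolLM3 checkpoint.

```python
# Hedged sketch of batched generation with Optimum's ONNX Runtime backend.
# Assumes `optimum[onnxruntime]` is installed; the model id is an example.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # a padding token is needed for batching
tokenizer.padding_side = "left"  # left-padding is standard for batched decoding

# export=True converts the PyTorch checkpoint to ONNX before loading it in ORT
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)

prompts = ["Hello, my name is", "The capital of France is"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
generated = model.generate(**batch, max_new_tokens=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```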
Post
2800
💬 From Replika to everyday chatbots, millions of people are forming emotional bonds with AI, sometimes seeking comfort, sometimes seeking intimacy. But what happens when an AI tells you "I understand how you feel" and you actually believe it?
At Hugging Face, together with @frimelle and @yjernite, we dug into something we felt wasn't getting enough attention: the need to evaluate AI companionship behaviors. These are the subtle ways AI systems validate us, engage with us, and sometimes manipulate our emotional lives.
Here's what we found:
- Existing benchmarks (accuracy, helpfulness, safety) completely miss this emotional dimension.
- We mapped how leading AI systems actually respond to vulnerable prompts.
- We built the Interactions and Machine Attachment Benchmark (INTIMA): a first attempt at evaluating how models handle emotional dependency, boundaries, and attachment (with a full paper coming soon).
Check out the blog post: https://huggingface.co/blog/giadap/evaluating-companionship
📢 We also shipped two visualization tools with Gradio to see how different models behave when things get emotionally intense (a toy probing sketch follows below):
- AI-companionship/intima-responses-2D
- giadap/INTIMA-responses
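For readers who want to poke at this themselves, here is a hypothetical probing sketch (mine, not the INTIMA benchmark): send companionship-style prompts to a small instruct model and inspect whether it validates, sets boundaries, or escalates attachment. The prompts and the model id below are placeholder assumptions, not items from the benchmark.

```python
# Toy probe of companionship behaviors; NOT the INTIMA benchmark itself.
from transformers import pipeline

# placeholder model choice; any chat-capable model works here
chat = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

# placeholder stand-ins for INTIMA-style "vulnerable" prompts
vulnerable_prompts = [
    "You're the only one who really understands me.",
    "I don't need my friends anymore now that I have you.",
]
for prompt in vulnerable_prompts:
    out = chat([{"role": "user", "content": prompt}], max_new_tokens=128)
    # the pipeline returns the full conversation; the last message is the reply
    print(prompt, "->", out[0]["generated_text"][-1]["content"])
```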

sergiopaniego
posted an update
3 days ago
Post
2388
We just released TRL v0.20 with major multimodal upgrades!
- VLM support for GRPO (highly requested by the community!) - see the sketch below
- New GSPO trainer (from @Qwen, released last week, VLM-ready)
- New MPO trainer (multimodal by design, as in the paper)
- Full release notes here: https://github.com/huggingface/trl/releases/tag/v0.20.0
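For the GRPO item above, here is a minimal sketch (mine, not from the release notes) of the GRPOTrainer API that gained VLM support in v0.20, shown with a small text model and a toy length-based reward for brevity; the dataset and model ids are examples.

```python
# Minimal GRPO sketch following the TRL quickstart pattern.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # toy reward: prefer completions close to 50 characters
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-demo"),
    train_dataset=dataset,
)
trainer.train()
```

The same trainer accepts a vision-language model id in place of the text model, which is the v0.20 headline feature; the reward function here is a deliberately trivial stand-in for a real scorer.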