PINNsFormer: A Transformer-Based Framework For Physics-Informed Neural Networks • Paper • 2307.11833 • Published Jul 21, 2023
DINOv3 Collection • DINOv3: foundation models producing excellent dense features, outperforming SotA w/o fine-tuning • https://arxiv.org/abs/2508.10104 • 13 items
Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation • Paper • 2507.10524 • Published Jul 14
Physical AI Collection • Collection of commercial-grade datasets for physical AI developers • 22 items
Space • The Ultra-Scale Playbook • The ultimate guide to training LLMs on large GPU clusters
Post • The AMD Instinct MI50 (~$110) is surprisingly fast at inference with quantized models. It runs Llama 3.1 8B Q8 with llama.cpp: https://huggingface.co/spaces/DevQuasar/Mi50 • A short blog post about the hardware: http://devquasar.com/uncategorized/amd-radeon-instinct-mi50-cheap-inference/