aws-neuron/optimum-neuron-cache
Organization: AWS Inferentia and Trainium
License: apache-2.0
Branch: main
Directory: optimum-neuron-cache/inference-cache-config
2 contributors · History: 62 commits
Latest commit: "Add phi4 cached configurations" (c564534, verified) by dacorvo (HF Staff), about 1 month ago
File                    Size        Last commit                                                                                   Age
gpt2.json               398 Bytes   Add more gpt2 configurations                                                                  about 1 year ago
granite.json            1.3 kB      Add configuration for granite models                                                          4 months ago
llama-variants.json     1.45 kB     Add DeepSeek distilled versions of LLama 8B                                                   3 months ago
llama.json              1.67 kB     Update inference-cache-config/llama.json                                                      7 months ago
llama2-70b.json         287 Bytes   Create llama2-70b.json                                                                        10 months ago
llama3-70b.json         584 Bytes   Add DeepSeek distilled model                                                                  3 months ago
llama3.1-70b.json       289 Bytes   Rename inference-cache-config/Llama3.1-70b.json to inference-cache-config/llama3.1-70b.json  7 months ago
mistral-variants.json   1.04 kB     Remove obsolete mistral variants                                                              7 months ago
mistral.json            1.87 kB     Update inference-cache-config/mistral.json                                                    4 months ago
mixtral.json            583 Bytes   Update inference-cache-config/mixtral.json                                                    7 months ago
phi4.json               556 Bytes   Add phi4 cached configurations                                                                about 1 month ago
qwen2.5-large.json      849 Bytes   Update inference-cache-config/qwen2.5-large.json                                              3 months ago
qwen2.5.json            2.69 kB     Add DeepSeek distilled models                                                                 3 months ago
stable-diffusion.json   1.91 kB     Update inference-cache-config/stable-diffusion.json                                           7 months ago