AIGUYCONTENT
AI & ML interests: None yet
Recent Activity
New activity 1 day ago in mradermacher/llama-33B-instructed-i1-GGUF: Is there slop in this?
New activity 1 day ago in mradermacher/model_requests: https://huggingface.co/soob3123/amoral-gemma3-27B-v2/tree/main
New activity 2 days ago in TheDrummer/Moistral-11B-v3: WAR ON MINISTRATIONS
Organizations: None yet
AIGUYCONTENT's activity
Is there slop in this? · 3 · #1 opened 1 day ago by AIGUYCONTENT
https://huggingface.co/soob3123/amoral-gemma3-27B-v2/tree/main · 1 · #871 opened 1 day ago by AIGUYCONTENT
WAR ON MINISTRATIONS · 11 · 12 · #1 opened 12 months ago by TheDrummer
First review, Q5-K-M require 502Gb RAM, better than Meta 405billions · 1 · 6 · #11 opened 3 months ago by krustik
Interview request: Thoughts on genAI evaluation & documentation · 1 · #6 opened 4 months ago by evatang
Nemotron 51B · 15 · #436 opened 5 months ago by AIGUYCONTENT
Q6_K vs. Q5_K_L · 3 · #6 opened 6 months ago by AIGUYCONTENT
Model suggestions and requests · 19 · #5 opened 7 months ago by jukofyork
Llama 3.1 70B Instruct Lorablated Creative Writer GGUF please · 13 · #351 opened 7 months ago by AIGUYCONTENT
Prompt format · 3 · #3 opened 7 months ago by AIGUYCONTENT
GGUF wen? · 5 · #3 opened 7 months ago by AIGUYCONTENT
Model creating gibberish after certain amount of tokens. · 22 · #2 opened 8 months ago by AIGUYCONTENT
Low quants don't seem to work (no <reflection> tags) · 6 · #3 opened 8 months ago by MrHillsss
5.0bpw quants · 2 · 3 · #1 opened 9 months ago by bullerwins
I love this model but why does it keeps calling me Anon? · 3 · #7 opened 8 months ago by AliceThirty
How much VRAM do you need (you = Anthracite) · 3 · #3 opened 8 months ago by AIGUYCONTENT
136GB VRAM enough to run Q8? · 10 · #1 opened 8 months ago by AIGUYCONTENT
Files empty? · 3 · #2 opened 8 months ago by AIGUYCONTENT