To my new Model-Mixing Magician!
Hey YnTec,
I've started following you on HuggingFace and I'm totally into your discoveries, your model mixes, and especially... the list of models in the app.py of your "Toy World" Space.
So much so that, since I love programming in my free time and my favorite language is C#, I whipped up a little WinForms project to make it easier to generate several images at once. I'm using your model list and HuggingFace's free Inference API.
I even found out that during an API call we can not only provide the "inputs" but also pass an "options" object with 2 boolean properties: use_cache and wait_for_model. use_cache defaults to true, but if set to false it generates a new image with each call; wait_for_model is false by default, but setting it to true avoids the 504 error codes (model is loading...), we just have to wait, like... 2 more minutes, that's all!
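My actual app is C#, but in Python terms the request body looks roughly like this (just a sketch; the model id and token are placeholders):

```python
# Rough Python equivalent of the call; the token and model id here are placeholders.
import requests

API_URL = "https://api-inference.huggingface.co/models/wavymulder/Analog-Diffusion"  # any model id from the list
HEADERS = {"Authorization": "Bearer hf_..."}  # your HF token

payload = {
    "inputs": "a watercolor painting of a lighthouse at sunset",
    "options": {
        "use_cache": False,      # default is true; false forces a new image on every call
        "wait_for_model": True,  # default is false; true waits for the model to load instead of erroring
    },
}

response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=300)
response.raise_for_status()

with open("output.jpg", "wb") as f:
    f.write(response.content)  # the API answers with raw image bytes
```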
Anyway, I was wondering, which Spaces should I check out to get your most up-to-date list of the best models :)
And if possible, I'd love to chat (Discord or something) with you about how you mix models and the other magic tricks you're doing!
Looking forward to it!
Hey BlinkSun! Welcome aboard! It's great to have your feedback, as sometimes it feels kind of lonely here, and I don't even know if I'm going in the right direction. What I get back is confusing, like, getting lots of likes on a model I made... but... it rarely gets used, and then no hearts at all but 20000 downloads in a week?! o_O Things like that.
Let me get something off my chest: I think I'm reaching some kind of model saturation. When I started this, or rather, continued from what the giants of huggingface spaces had created, I dreamed of being able to provide models that could do everything you could think of! If a single model couldn't do it, I could have 5 that put together could get close. And then Dalle 3 happened and it was a shock, I could never get close, no matter what I did, no matter what models I mixed, the outputs just weren't there, and so I gave up, and compromised, and started delivering the best I could, even if it wasn't what I wanted.
My dream was to get to 1000 models and throw a party, but what I noticed was that most of the models I was planning to release were actually worse versions of what I already had, or they didn't bring anything new to the table. Civitai previews are tricky as most depend on negative prompts, and those just take a big chunk out of the space of possible outputs, so I optimize my models to not require them, but then many, many models become mediocre without them. You may have seen big gaps between one upload and the next, but I'm not on vacation, the other day I tested 11... Yeah, eleven models, and none of them were up to my standards for release, so I spent more days searching until I found one that did it...
Hey, let me show you something from today...
Those are outputs from 4 models I didn't release... so the REASON I merged the Memento model was to have a model to use as a base for Analog Diffusion (https://huggingface.co/wavymulder/Analog-Diffusion), because what bothered me about it is that if you don't put "Analog Style" in the prompt it just has bad outputs... what if we had a model that produced great things no matter what AND that analog look if you added "Analog Style" to it? But no matter what I tried, nothing worked, because we already have this:
On the left we have the output of Analog Diffusion, and on the right we have Memento's. As you can see, none of my merges really improved over these outputs, she's basically the same as in Memento alone, and none of them have the style of Analog Diffusion, so I parked this idea along with many others that haven't worked yet, because Memento is clearly not suitable for the job...
Anyway, just mentioning it because I really have no one to talk to about this, I guess I could have a blog or something, but on the theme of "my failed attempts at merging models" I guess it's better if people don't know about all this, ha!
About the best models: I always try to keep the best one at the top, and whenever a model does something that is not up to the standards of Toy World, I sink it deep in the list, sometimes near the bottom. For the most recent models you can check this space: https://huggingface.co/spaces/Yntec/PrintingPress - here I try to upload models daily, if I find them, as I become more and more critical about what I upload, and I also alternate with digiplay's models. digiplay is my hero, I learned everything about uploading models from him, and my greatest inspiration to merge models was to match his model merges.
About cached models: What kills me is generating images and then losing them before I can save them, which is what clicking Generate does without a cache, and I keep doing it accidentally. With a cache, there's the image again, nothing is lost. Plus, sometimes the errors attack after an image has been generated but not shown, so clicking Generate after such an error restores it from the cache instantly. I can't sacrifice all that for different images from the same prompt.
About wait_for_model: Yeah, I'm using that on my space that allows you to generate with up to 6 models at a time: https://huggingface.co/spaces/Yntec/Diffusion60XX - though it's always behind Toy World and the Printing Press because selecting and unselecting models makes the whole list jump around. But one could just select 1 and use it like this, with the cool feature of seeing what actually happens when the errors show up.
About chatting and Discord: I'm definitely anti-Discord, and these days, anti-private-messages and anything that remains concealed from people. I'm glad Huggingface has nothing of that, so we have to talk publicly and whatever we talk about can benefit people. Feel free to use this comment section like that: when you post, a yellow circle goes around my avatar and I can check what you said quickly.
About secrets: Sure, I'll reveal all my secrets, just beware that they'll be here in the open so everyone will know. Huh, I guess a "secret" is that I could create spaces that allow 6 images at a time with any model, all different, like this one: https://huggingface.co/spaces/Yntec/DreamAnything - or that I have code that would allow you to use negative prompts and adjust the size of your generations, so you can have landscapes or portraits in your outputs, and 1024x1024 pictures instead of the default 768x768 ones, something I may do for selected models.
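For the curious: as far as I understand it, the negative prompt and size are just extra fields in the same request, a "parameters" object the serverless API forwards to the pipeline. A rough sketch (not the exact code of my spaces; the model id, token and values are only examples):

```python
# Sketch of the same Inference API call with a negative prompt and a custom size.
# Model id, token and values are examples, not the code of my spaces.
import requests

API_URL = "https://api-inference.huggingface.co/models/wavymulder/Analog-Diffusion"
HEADERS = {"Authorization": "Bearer hf_..."}  # placeholder token

payload = {
    "inputs": "analog style, portrait of an astronaut in a sunflower field",
    "parameters": {
        "negative_prompt": "blurry, lowres, watermark",
        "width": 1024,   # instead of the default 768x768
        "height": 1024,  # use e.g. 768x1024 for portrait, 1024x768 for landscape
    },
    "options": {"use_cache": False, "wait_for_model": True},
}

response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=300)
with open("output.jpg", "wb") as f:
    f.write(response.content)
```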
And any question you have about merging models, I can answer. I don't have the hardware, so I do all my merging in huggingface spaces, and it takes a while because each image I test takes 15 minutes to generate on there, but I'm not in a hurry and do other things in the meantime. If you want some kind of tutorial about how to merge models and upload them to huggingface, I can do that too.
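To give you the flavor right away: the most basic merge is just a weighted average of two checkpoints' weights. A minimal sketch with diffusers (the model ids and the 50/50 ratio are placeholders, not an actual recipe of mine):

```python
# Minimal weighted-sum merge of two SD 1.5 checkpoints; ids and ratio are placeholders.
from diffusers import StableDiffusionPipeline

RATIO = 0.5  # how much of model B to blend into model A

pipe_a = StableDiffusionPipeline.from_pretrained("Yntec/Memento")                 # model A (base), placeholder id
pipe_b = StableDiffusionPipeline.from_pretrained("wavymulder/Analog-Diffusion")   # model B (style)

# Interpolate the UNet weights; the text encoder (and VAE) can be merged the same way.
state_a = pipe_a.unet.state_dict()
state_b = pipe_b.unet.state_dict()
merged = {k: (1.0 - RATIO) * state_a[k] + RATIO * state_b[k] for k in state_a}
pipe_a.unet.load_state_dict(merged)

pipe_a.save_pretrained("./my-merge")  # then upload this folder to huggingface as a new model repo
```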
I'm like a music box that you turn once and then I play music for a long time, known for my big walls of text like this one! Hope I didn't forget anything, see you!
you are not alone
neither behind your walls of text,
or in your unguided direction
you are not 404
or lost under your pile of models
nor within the ocean of the unforeseen problems
that is known as huggingface
i gotta say one thing about the air of solitude here in HF
like i have tons of spaces, miles of code, mostly all just me failing and learning and generally just being lame and noobish
models, and python are new concepts to me, but im getting my time in, im learning
but out of hundreds of spaces, ive made and deleted
i've only publicly shared like 2 (more like 10, but 8 of them i never looked back at)
only 1 really, with about 200 of your models and John's
(just this week ive started smithing my own, merging and playing pin the LoRA on the Models, blindfolded)
but you know
so in that 1 space, (48 likes)
i was told by chatgpt to add this debug logging line so i could pull error codes, so i could aim for 100% generation rates for all the models
and so i did
and the console log would bounce to the bottom,
over and over, eventually i was like fine
i'll lock it in place with this checkbox here
and when i did
i was like,
wait, i didnt type that prompt that i see on the screen in this log
nor the following one nor
none of them that kept scrolling the page
prompt after prompt after prompt after
these people, look at some crazy stuff
like stuff that even made me cringe
cringe so badly that i just wanted the scrolling to stop again
but it kept going and going and going
120 hours later still going
over like 250,000 prompts
that i didnt prompt
all from 1 space
with 200 models (half yours)
and only 48 likes
and ive only had 1 person speak to me in my inbox,
trying to show me how easy it (wasn't) to update its gradio
they backtracked out once they realized it though,
leaving me there,
by myself to resolve it
(and i killed that debugging output too.)
but hey
Yntec
if it feels so quiet here on huggingface
just think of me and my one space
250,000 prompts just silently rolling by
just be happy you've NOT BEEN Cruc- i - f i e d
Oh, hey charliebaby2023! Nice to hear from you. My last post in this thread remains valid, except that we got seeds since then, so we can reproduce images at will, I killed Diffusion60XX because it wasn't different enough from this one, people can tweak the models at Blitz Diffusion, and the most successful space like this was advertised as Sexy Diffusion or something like that.
Interest has gone downhill since much more capable models like Flux appeared. Right now you can go to chatgpt.com and ask for an image, and it'll probably be light years ahead of what any of these models could come up with, perhaps like Dalle 3 but with perfect text and prompt understanding. It's as if image generation has been solved. I'm just glad I released AnythingV7 back when I did, if I released it today nobody would have cared, and if I had released Shampoo back then it would have been groundbreaking!
But it's great to continue, like broadcasting a radio station for the few people listening; there's no pressure anymore, my next release doesn't need to be better than the last, and I never saw the images generated with my models unless people shared them. And, ha! My models... The only time I put myself in the credits of a model was for DreamAnything, and I'll probably take myself off if I update it. The only point of authorship is to let others find more things released by the same person (I wouldn't know how to find more stuff by Lyriel's anonymous author), but when it comes to AI, it's never about "hey! look at what I did!", because I just pressed some buttons and pasted some text, at the end of the day.
My advice to you is to never chase the likes, never seek out people that enjoy your work; pretend that at the end of the day, everything you did that day was deleted and seen by no one. Would you still do it? Because you enjoy doing it? Then go ahead, that's the answer, and the reason for doing it, not some number on a screen that goes up.
Yeah, I've never actually been attracted by the numbers, just perplexed. I actually never release any of my code, well, all of my codepen is public, but it's all just scratch space for my own access.
I keep my hf spaces private, just because of speed, but made that one public only so an irl friend could experience image gen easily.
Was still mind blown when I could see the active log of prompts flying by. A number that was a far cry from what the likes had indicated.
Anyhow,
Recent issues:
Have you noticed the sudden wave of 404s on SD models the other day?
I've gotten some to respond, but the majority do not.
I can get them to a 200, but then it's "too many requests, please try again later."
I think they're now selecting some SD models over others for use through the new hf inference api.
Do you have any thoughts on the subject, or solution proposals other than buying CUDA cores?
Honestly, I don't know what they're doing with their Inference API. If you have to knock on a door to use it, that's like expecting people to go to the salesman's door to buy his stuff, when the stuff should be available at your own door, which is the whole point of a salesman.
Anyway, about the 404 errors: it's as if huggingface can't find itself, so I don't think it's intentional, something seems to have broken like the last time the Inference APIs of all models shut down. I'm hoping it's a temporary problem that will eventually fix itself, so I'll be pretending the problem doesn't exist for now.
I think this is the page to keep an eye on:
https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5
That's the model you'd expect to work, and it doesn't, so when the problem is resolved, it'll work again, along with the others.
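If you'd rather watch it from a script than keep reloading that page, a quick probe like this works (just a sketch; the token is a placeholder and the model list is whatever you care about):

```python
# Probe each model and print the HTTP status the Inference API returns
# (200 = image came back, 404 = not found, 503 = loading, 429 = rate limited).
import time
import requests

HEADERS = {"Authorization": "Bearer hf_..."}  # placeholder token
MODELS = [
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    "wavymulder/Analog-Diffusion",  # swap in whatever models you care about
]

for model in MODELS:
    url = f"https://api-inference.huggingface.co/models/{model}"
    r = requests.post(url, headers=HEADERS, json={"inputs": "a teapot"}, timeout=120)
    print(f"{r.status_code}  {model}")
    time.sleep(5)  # don't let the probe itself trigger rate limits
```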
As for the volume, yeah, I never expected a model like Insane Realistic v2 to get to 111000 downloads in a month. It's tempting to measure its success by that metric, but I've seen many models outperform it that lack the brand to make them popular, and that stable-diffusion-v1-5 provides worse outputs than anything here, yet it's the most recognizable.
i guess im just trying to confirm 3 things:
1. this kind of problem happens here, even at a large scope? just because life and reality can never be perfect
2. if so, what would you say has been the average life span of this kind of down time? is this possibly tied directly to issues regarding 1.5? technically not all 1.5 models, just one nova-something version of 1.5.
3. if so, i dont blame hf one bit, i just hope they're reasonable about it. or maybe its both that and $$$?
HF im sure needs to pay off some infrastructure, and i guess i can respect that
but
i dont know anything about when or how im triggering inferencing, like only 1 time in the year ive been here have i hit my limit on inferencing, after 10 months of not even knowing there was a limit on such a thing. i didnt even know what it was, and still dont. yeah, i feel kinda dumb but im confident that will easily change.
so, do you know where i can go for any comprehensive info or links on inferencing and the rates here on HF?
or instead of 3,
4. your guess is as good as mine?
i tend to be pretty pessimistic about what im seeing,
you seem to be pretty optimistic in general and knowledgeable.
its a breath of fresh air
i hope im not bugging you with all my questions
Hey charlie, just got here:
It's the very first time something like this has happened; in previous instances, the spaces had problems that wouldn't allow them to access the models, but this time around the models themselves can't be accessed.
No, it's affecting all models no matter the architecture. The models that can still be used are those that have alternative Inference Providers.
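As I understand it, the huggingface_hub client can route a request through those providers, roughly like this (assuming a recent huggingface_hub; the provider, model and token are only examples):

```python
# Sketch of calling a model through an alternative Inference Provider.
# Requires a recent huggingface_hub; provider, model and token are examples.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai", api_key="hf_...")  # billed/routed through your HF token
image = client.text_to_image(
    "a watercolor painting of a toy world",
    model="black-forest-labs/FLUX.1-dev",
)
image.save("output.png")  # text_to_image returns a PIL image
```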
Shutting down like this would be the equivalent of shutting down TV ads; people are supposed to use these things, come around, and probably stay and buy accounts or other things, so it wouldn't be about the $$$ part.
Anyway, it's time to summon @John6666 , perhaps he can give us some insight about what's happening and whether it's permanent (so I can take down these spaces? It's not my first time retiring them, anyway.)
As for me, LOL, the only thing I was using these for was to release new models, and I found a way of doing that without these spaces, so if it were to remain like this, I could just continue as normal and ignore them. But it's like a snake that eats its own tail: at first the point was to release models so people could use them in these spaces, so it would be weird to continue, especially as without them I'm getting 13 downloads daily instead of 1000. But again, I'd be a hypocrite if I cared about that after what I've said about numbers in this thread.
I remain optimistic because I'm still able to do what I've been doing and nothing of value was lost; the optimism isn't about the models coming back, who knows?
Well, to be honest, I haven't heard anything (as is basically always the case) and I'm just piecing together bits of information...
I think there was an outage about 10 days ago. HF fixed it 8 days ago. Has it been broken ever since?
For example, Meta's Llama 3.2 Vision seems to be broken as well.
There haven't been any announcements since then, including on Discord...
Serverless Inference API glitch
https://discuss.huggingface.co/t/500-internal-error-were-working-hard-to-fix-this-as-soon-as-possible/150333/32
https://discuss.huggingface.co/t/inference-api-stopped-working/150492
Spaces glitch
https://discuss.huggingface.co/t/my-space-suddenly-went-offline-the-cpu-cannot-restart/151121/26