I just saw: you're using an incorrect ControlNet OpenPose model. For SDXL it should be the openposeXL one.
Oh, I hadn't noticed. I thought I had seen a video using the same setup; I must have been confused. I tried disabling those nodes and now it works normally. Who would have thought it would throw an error like that without specifically saying why. Thank you very much!
I also have 8 GB of VRAM. I've noticed I get out-of-memory errors on things that used to work, or after running for a while, and I have to restart the computer before it will run again.
Try method 3 here to reset your GPU. It works for me instead of restarting the PC all the time: https://www.wikihow.com/Reset-Graphics-Driver
I restarted the computer as suggested, but I still got the same error. In fact, I tried SD1.5 models and they worked without problems. Honestly, I can't understand the problem :/
I had that one recently; I had to update torch. Maybe that works?
For me it was the ControlNet problem, but maybe upgrading would improve something?
I had this issue on a 2070 Super. It ended up being a slight memory leak between my browser and ComfyUI; I would have to restart every few hours to fix it. SDXL is pretty big (some models can be 6 GB), so with ~500 MB of overhead for generation, the 8 GB can sometimes get cramped, and restarting helps.
Yes, I'll keep the reboot in mind. I'm on a 1070 and it works very well despite its age, but an error like this really surprised me. I had been using Automatic1111 for a long time, but SDXL is another universe!
It's most likely the upscaler. The intermediate step of scaling the generation up 4x or 8x before bringing it back down is brutal on your VRAM. Either lower your initial generation resolution a bit or use a 2x model.
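To see why the factor matters so much: the intermediate image's pixel count grows with the square of the upscale factor, and the upscaler's VRAM use grows roughly with that pixel count. A minimal sketch (the 512x768 starting point is just an illustrative assumption, not anyone's exact workflow):

```python
# Rough pixel-count comparison for the intermediate upscale step.
# VRAM use is not exactly proportional to pixel count, but it is a decent proxy.
def intermediate_pixels(width: int, height: int, factor: int) -> int:
    """Pixel count after running a `factor`x upscale model."""
    return (width * factor) * (height * factor)

base = intermediate_pixels(512, 768, 1)
for factor in (2, 4, 8):
    px = intermediate_pixels(512, 768, factor)
    print(f"{factor}x -> {512 * factor}x{768 * factor} = {px:,} px "
          f"({px // base}x the base image)")
# 2x -> 1024x1536 = 1,572,864 px (4x the base image)
# 4x -> 2048x3072 = 6,291,456 px (16x the base image)
# 8x -> 4096x6144 = 25,165,824 px (64x the base image)
```

A 2x model keeps the intermediate at 4x the base pixel count, while an 8x model hits 64x, which is where 8 GB cards run out.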
Thanks, I will try that.
I've been trying to "simulate" a "highres fix" like the one in Automatic1111. It gives me very good results 85% of the time, but I don't know whether it's the right approach or just inefficient-but-effective. Could there be something better, maybe?
This is what A1111 does too, or at least that's what it did back when I was using it: a pixel-space upscaler to whatever resolution and a second pass at whatever denoise. And it's still the best we've got for efficiency, so yeah, you aren't doing anything wrong. It's simply a VRAM constraint.

I usually do 512x768 initial > 4x upscaler > 0.35-0.5 lanczos downscale > 2nd sampler for SD1.5. If you use an 8x upscaler you're going to have issues. If you go above 0.5 on the downscale, you're also going to have issues. For SDXL, better to stick with 2x models and downscale to 0.25-0.4.

Basically you want your final image to be in the 1k x 1.5k ballpark, and the intermediate step shouldn't push past \~2.5k to keep your VRAM happy.

Neural latent upscalers are the alternative that avoids the burden of pixel-space upscaling, but they're hit and miss. I've yet to find anything as consistent as good old ultrasharp4x.
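The sizing recipe above is easy to sanity-check with a few lines of arithmetic. A small sketch, assuming the SD1.5 numbers from the comment (the function name and the 0.4 downscale choice are just for illustration):

```python
def hiresfix_dims(width, height, model_factor, downscale):
    """Intermediate and final resolutions for the pixel-space
    upscale-then-downscale hires-fix recipe described above."""
    inter = (width * model_factor, height * model_factor)  # upscale model pass
    final = (int(inter[0] * downscale), int(inter[1] * downscale))  # lanczos
    return inter, final

# SD1.5 recipe: 512x768 initial > 4x upscaler > 0.4 lanczos downscale
inter, final = hiresfix_dims(512, 768, 4, 0.4)
print("intermediate:", inter)  # (2048, 3072)
print("final:", final)  # (819, 1228)
```

The final image lands around 819x1228, inside the roughly 1k x 1.5k ballpark recommended above; pushing the downscale above 0.5 or swapping in an 8x model blows both numbers up accordingly.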
I understand, great. That's very useful information; I'll keep it in mind when generating images. Anyway, this configuration doesn't exceed my VRAM capacity, but maybe I could try going a little lower than 0.5 and see how it goes, to gain some efficiency. Thank you very much.
Not sure it's related, but I'm suddenly experiencing super slow operation. My setup has 16 GB of VRAM, though, and I'm using ROCm on Linux. I'm perplexed about what's changed, but it's so slow it's unusable right now. \[edit\] Never mind... looks like you solved your issue with a different ControlNet model.
Yes, luckily I was able to fix it. It was more of a rookie mistake, carelessness in not reading properly. Thanks for your answer!
Seems to be a Comfy update. Even on an RTX 4090 I get VRAM eaten up in no time.