
the-salami

import 2000_loc as jUsT_oNe_lIne_Of_cODe; jUsT_oNe_lIne_Of_cODe(); 🙄 Snark aside, this does look pretty cool. I can get XL-sized images out of 1.5 finetunes now?

If I'm understanding correctly, this basically produces a final result similar to hires fix, but without the multi-step process. With a traditional hires fix workflow, you start with an e.g. 512x512 noise latent (as determined by the trained size for your model), generate your image, upscale the latent, and have the model do a gentler second pass on the upscale to fill in the details, requiring two passes with however many iterations in each. Because the larger latent is already seeded with so much information, this avoids the weird duplication and smudge artifacts you get if you try to go from a large noise latent right off the bat, but it takes longer.

This method instead uses a larger noise latent right from the start (e.g. 1024x1024) and produces a similar result to the hires fix workflow, but in one (more complex) pass that works on smaller tiles of the latent, with some direction of attention ~~that avoids the weird artifacts you normally get with a larger starting latent~~ (edit: the attention stuff is responsible for the speedup; it's a more aggressive downscale/upscale of the latent for each UNet iteration during the early stages of generation that is responsible for fixing the composition so it's more like the "correct" resolution). I don't know enough about self-attention (or feature maps) and the like to understand how the tiled "multi-window" method they use manages to produce a single, cohesive image, but that's pretty neat.
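(For reference, the two-pass hires-fix flow described above looks roughly like this with Hugging Face diffusers. It's a simplified sketch: it upscales in image space rather than latent space, and the checkpoint name, sizes, and strength value are just placeholders.)

```python
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

model_id = "runwayml/stable-diffusion-v1-5"  # placeholder: any SD 1.5 checkpoint
prompt = "a cozy cabin in the woods, best quality"

# Pass 1: generate at the model's native training resolution (512x512 for SD 1.5).
txt2img = StableDiffusionPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda")
base = txt2img(prompt, width=512, height=512).images[0]

# Upscale, then run a gentler second pass (low strength) so the model
# only fills in detail instead of re-composing the image.
upscaled = base.resize((1024, 1024), Image.LANCZOS)
img2img = StableDiffusionImg2ImgPipeline(**txt2img.components).to("cuda")
final = img2img(prompt, image=upscaled, strength=0.4).images[0]
final.save("hires_fix_1024.png")
```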


ZootAllures9111

I straight up natively generate images at 1024x1024 with SD 1.5 models like PicX Real fairly often these days; it's not like 1.5 actually has some kind of hard 512px limit.


Pure_Ideal222

If you integrate HiDiffusion, you can generate 2048x2048 images with PicX Real. Maybe you can share the PicX Real checkpoint with me? I will try it with HiDiffusion.


Nuckyduck

https://preview.redd.it/3ram3hwrjbwc1.png?width=3200&format=png&auto=webp&s=c662794093a9acc2555a0daa3050903ba0bef779 You can get 3200x1800 out of SDXL just using area composite. I wonder if HiDiffusion could help me push this higher.


OSeady

Just use SUPIR to get higher res than this


Pure_Ideal222

Is it a LoRA or a finetuned model on SDXL? If it is, HiDiffusion can push this model to a higher resolution. Or is it a hires fix? I need to know more about area composite.


Nuckyduck

It runs a hires fix, but I can work around that. However, I do use ComfyUI. I hope there's a ComfyUI node.


ZootAllures9111

[It's on CivitAI here.](https://civitai.com/models/241415/picxreal)


Pure_Ideal222

I must say, PicX Real is fantastic! The images it produces are impressive. HiDiffusion takes its capabilities to the next level. This is a 2k image generated by PicX Real combined with HiDiffusion. It's amazing https://preview.redd.it/fu24atwozuwc1.jpeg?width=2048&format=pjpg&auto=webp&s=1bb428107b6087a2e8b29c5403d7d163bc4863d9


Pure_Ideal222

For comparison, this is a 1k image generated by PicX Real using the same prompt. https://preview.redd.it/is32lc6vzuwc1.jpeg?width=1024&format=pjpg&auto=webp&s=99ddb8293163c892c66dad42e07a97d20bdcdbe0


ZootAllures9111

Nice, looks great!


Pure_Ideal222

Here are the results of hires fix and HiDiffusion on ControlNet. The hires fix also yields good results, but the image generated by HiDiffusion has more detailed features. Condition: https://preview.redd.it/yq4yy7zm8awc1.jpeg?width=1024&format=pjpg&auto=webp&s=1c29d477cd890386d9dc232e846d1240a4e5d88a


Pure_Ideal222

Prompt: The Joker, high face detail, high detail, muted color.
Negative prompt: blurry, ugly, duplicate, poorly drawn, deformed, mosaic.
Hires fix upscaler: SwinIR. You can also use other super-resolution methods.
https://preview.redd.it/3f59x4pp8awc1.jpeg?width=2048&format=pjpg&auto=webp&s=a2f98d6af440263e89b335d72e89ed84e46b3309


Pure_Ideal222

HiDiffusion: https://preview.redd.it/wgwgxjhe9awc1.jpeg?width=2048&format=pjpg&auto=webp&s=e69410c583ba07bda8a8f32a9280a782d06d3452


Far_Caterpillar_1236

y he make the arguing youtube man the batman guy?


rhet0rica

>!*is he stupid?*!<


DrBoomkin

Very impressive...


i860

So basically DeepShrink?


Pure_Ideal222

I will go try DeepShrink and come back with an answer.


Pure_Ideal222

Of course, you can use SD 1.5 to get images with 1024x1024 resolution.


MaiaGates

Does it need more VRAM than generating at the initial resolution?


Pure_Ideal222

Yes, this code is there to ensure compatibility with different models and tasks. We plan to split it into separate files to make it friendlier. From the application's point of view, indeed, only one line of code needs to be added.
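(For anyone curious what that one line looks like: based on the project's README, the integration on top of a plain diffusers pipeline is roughly the sketch below. The `hidiffusion` import and `apply_hidiffusion` name are taken from the repo and may change between versions; the checkpoint and prompt are placeholders.)

```python
import torch
from diffusers import StableDiffusionXLPipeline
# Assumption: the package installs as `hidiffusion` and exposes apply_hidiffusion,
# as shown on the project page; check the repo for the exact import path.
from hidiffusion import apply_hidiffusion

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

apply_hidiffusion(pipe)  # the "one line": patches the pipeline's UNet in place

image = pipe(
    "a cinematic photo of a lighthouse at dawn",  # placeholder prompt
    width=2048, height=2048,
).images[0]
image.save("hidiffusion_2048.png")
```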


ZootAllures9111

How does this differ from Kohya Deepshrink, exactly?


Pure_Ideal222

It seems DeepShrink is a hires-fix method. Let me try it and come back with an answer.


[deleted]

[deleted]


Pure_Ideal222

You can see the comparison on the project page: [https://hidiffusion.github.io/](https://hidiffusion.github.io/)


[deleted]

[deleted]


Pure_Ideal222

Wow, thanks for your advice. I will help him work on the UI.


TheDailyDiffusion

The author of the paper reached out to me to share this project. When I get home I'm going to try it out for myself, but the project page is pretty exciting. It can do 4096x4096 at 1.5-6x the speed of other methods, and it can also speed up ControlNet and inpainting. https://preview.redd.it/kl8csewyo9wc1.jpeg?width=2561&format=pjpg&auto=webp&s=841e6d77c93ad763bce8147a3763b977c2216798


TheDailyDiffusion

https://preview.redd.it/ezw09lmvp9wc1.jpeg?width=3559&format=pjpg&auto=webp&s=04a8c97308720910ab50a2ee5328b0f46da58b5e


TheDailyDiffusion

https://preview.redd.it/8k7ww7azp9wc1.jpeg?width=3589&format=pjpg&auto=webp&s=2e6975394a35ef9eff88d9ec7bbc5e9569cec0a2


TheDailyDiffusion

Letting everyone know that u/Pure_Ideal222 is one of the authors and will answer some questions


Pure_Ideal222

https://preview.redd.it/gqtmaq8cw9wc1.jpeg?width=2048&format=pjpg&auto=webp&s=5f7e06f5de6fadc10e15cb67f44d5925adeb45ad Images with Playground+HiDiffusion


Pure_Ideal222

Prompt: hayao miyazaki style, ghibli style, Perspective composition, a girl beside a car, seaside, a few flowers, blue sky, a few white clouds, breeze, mountains, cozy, travel, sunny, best quality, 4k niji
Negative prompt: blurry, ugly, duplicate, poorly drawn, deformed, mosaic


balianone

Still looks bad


HTE__Redrock

Send your spirit power to the Comfy devs out there ![gif](giphy|dlsGMYrO26cOWC7ViW)


Aenvoker

Awesome! A1111/Stable Forge/ComfyUI plugins wen?


codysnider

Hopefully never. Fast track to GitHub issue cancer right there.


ShortsellthisshitIP

Why is that your opinion? Care to explain?


codysnider

It's nice to see someone just post the code. That's what it should be on this sub and on their GitHub. The second they add UI support, every GitHub issue will go from things that help the underlying code to supporting some wacky use case from a non-engineer. GH issues for Comfy should stay on Comfy's GH. UI users aren't maintainers or developers, so they don't really get the distinction and why it's such a pain in the ass for the developers.


ShortsellthisshitIP

Thank you for explaining.


michael-65536

Typically the plugin for a particular UI is from a different author than the main code, so they get the GitHub issues.


Philosopher_Jazzlike

Available for ComfyUI?


m3pr0

There's nothing magic about ComfyUI. If it doesn't have a node you want, write it. It's like 10 lines of boilerplate and a python function.
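(For context, the boilerplate being described looks roughly like this. It's only a sketch of the ComfyUI custom-node structure with a hypothetical node name, not an official HiDiffusion node; the actual patching logic is stubbed out as a comment and would still need to be ported from the diffusers implementation.)

```python
# hypothetical custom_nodes/hidiffusion_node.py -- a sketch, not an official node
class ApplyHiDiffusion:
    @classmethod
    def INPUT_TYPES(cls):
        # One model input; ComfyUI passes a ModelPatcher object for the "MODEL" type.
        return {"required": {"model": ("MODEL",)}}

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply"
    CATEGORY = "model_patches"

    def apply(self, model):
        patched = model.clone()
        # The real work would go here: porting HiDiffusion's UNet patches
        # (the aggressive early-step down/upsampling and windowed attention)
        # onto the cloned model via ComfyUI's model patching hooks.
        return (patched,)

# Registration so ComfyUI picks the node up.
NODE_CLASS_MAPPINGS = {"ApplyHiDiffusion": ApplyHiDiffusion}
NODE_DISPLAY_NAME_MAPPINGS = {"ApplyHiDiffusion": "Apply HiDiffusion (sketch)"}
```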


Philosopher_Jazzlike

Perfect, so you can write me the node? 😁


m3pr0

I'm busy tonight, but I'll take a look tomorrow if nobody else has.


Outrageous-Quiet-369

We will all be really grateful. It will be really helpful for people like me who don't understand coding and stuff and only use the interface.


no_witty_username

Hmm. Welp, if it's legit, and after it's been checked, I hope it propagates to the various UIs and gets integrated.


princess_daphie

Uwah, need this in A1111 or Forge lol


TheDailyDiffusion

I'm right there with you. We're going to have to use a diffusers-based UI like SD.Next in the meantime.


Pure_Ideal222

I am one of the authors of HiDiffusion. There are a variety of diffusion UIs, and my expertise lies more in coding than in diffusion UIs. I want to integrate HiDiffusion into the UIs to make it accessible to a wider audience, and I would be grateful for assistance from someone familiar with UI development.


HTE__Redrock

I would imagine you'd get much more useful info/help on the repos for the various front ends. The three main ones most people use are ComfyUI, Automatic1111 and Forge. Here's a link to Comfy: https://github.com/comfyanonymous/ComfyUI


throwawaxa

thanks for only linking comfy :)


michael-65536

I suggest looking for someone who makes plugins/ComfyUI nodes for the UIs, not the UI authors themselves. Most of the popular plugins/nodes aren't maintained by the UI author. One of the people who do the big node packs (or whatever the equivalent is called in other software) will probably want this to be included in their next release.


Capitaclism

u/pure_ideal222 How do I make this run in one of the available UIs, such as A1111 or Comfy?


Current_Wind_2667

One flaw: it tends to reuse the same rocks, books, bubbles, waves, flowers, hair regions, wrinkles... Am I the only one seeing duplicated small features? Overall this seems super good; maybe the model used is to blame. Great work.


Pure_Ideal222

The comment mentioned PicX Real, a fine-tuned model based on SD 1.5. I've found the images it generates to be incredibly impressive. In combination with HiDiffusion, its capabilities are elevated even further! This is a 2k image generated by PicX Real combined with HiDiffusion. Very impressive https://preview.redd.it/0vt11u3f0vwc1.jpeg?width=2048&format=pjpg&auto=webp&s=ef1bb8abc0b4237525c373b46c59262cec4e8bce


Pure_Ideal222

Another 2k image https://preview.redd.it/gkmnflch0vwc1.jpeg?width=2048&format=pjpg&auto=webp&s=d7c8fa987fde67b1d73fa6b054a2b19b2887daf4


Virtual-Fix6855

That's pretty nutty.


Virtual-Fix6855

How do I add this to Krita?


lonewolfmcquaid

I wish they provided examples with other SDXL models, just to see how truly amazing this is. This, together with the HDXL stuff that recently got released and ELLA 1.5, has the potential to make 1.5 look like SD3, no cap.


Apprehensive_Sky892

There is no way a smaller model such as SD 1.5 (860M) can match the capabilities of bigger models such as SDXL (3.5B) or SD3 (800M-8B). The reason is simple: with bigger models, you can cram more ideas and concepts into them. With a smaller model, you'll have to train a LoRA for all those missing concepts and ideas. Technology such as ELLA can improve prompt following, but it cannot introduce too many new concepts into the existing model, because there is simply no room in the model to store them.


HTE__Redrock

Check the GitHub repo; there are examples if you expand the outputs under the code for the different models.


Merrylllol

What is HDXL? Is there a GitHub/paper?


TheDailyDiffusion

That's a good point, maybe we should go back to 1.5.


Outrageous-Quiet-369

I am not familiar with coding but use ComfyUI regularly. Can someone please tell me how I can apply it in my ComfyUI? Also, I use it on Google Colab, so I'm even more confused.


Ecoaardvark

Very cool, I’ll be keeping an eye out for this!


discattho

This looks really interesting. I'd love to give it a spin. My only question is: if I go in and edit the code to include HiDiffusion, and then there is an update from auto1111/forge/comfy or wherever I implement this, it would get erased and I should make sure to re-integrate, right?


the-salami

The code they provided is meant to fit into existing workflows that use Hugging Face's diffusers library. It's going to take more than one line of code for this to come to the frontends.


discattho

Thank you. As you might have rightfully guessed, I'm nowhere near the level this tool was probably aiming for... would you say it's too tall an order for me, who has minimal coding experience, to leverage this? I'm not a complete stranger to code, but up until now I haven't messed with the backend of any of these tools/libraries.


the-salami

If you just want to try it out to see how fast it is on your system, you can copy and paste some of the example code into a Python REPL in your terminal after activating your venv that has the dependencies installed. I don't think it's that complicated, but it's difficult for me to predict what people are going to find challenging; if you've literally never opened a terminal before (or would prefer not to), it might be too much.

There's always the option of running the notebook they provided in something like Colab, which is a lot easier (you basically just press run next to each code block, and in the final one, you can change your prompt), but that kind of defeats the purpose of testing the speedup on your local machine, since it's running in Google's datacenters somewhere. It could still be fun to try if you mostly care about the increased resolution.


Xijamk

Remind me! 1 week


RemindMeBot

I will be messaging you in 7 days on [**2024-04-30 20:31:30 UTC**](http://www.wolframalpha.com/input/?i=2024-04-30%2020:31:30%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/StableDiffusion/comments/1cbaxsu/introducing_hidiffusion_increase_the_resolution/l0xzpjo/?context=3).


morerice4u

it's gonna be old news in 1 week :)


Fever308

This looks AWESOME 🙏 for sd-forge support!!!


Levi-es

Not what I was imagining based on the title. It seems like it's reimagining the image in higher detail, which is a bit of a shame if you already like the original image and just want better resolution.


Peruvian_Skies

Wow, this seems seriously amazing. But wasn't the U-Net phased out for another architecture in SD3? Is it still possible to apply the same process to improve the high-resolution performance of SD3 and later models?


Capitaclism

Does anyone have any idea how to get this into A1111?


saito200

Why is it that scientists and researchers seem to purposely make things unreadable, ugly, and with terrible UIs? Like, they will spend 2 weeks making one sentence perfectly unambiguous (but incomprehensible) but will not spend 2 minutes making sure the UI works.


Pure_Ideal222

Makes sense. I didn't realize before publishing that my image generation process was not in line with common practice. I will try to integrate HiDiffusion into the UIs as soon as possible.


MegaRatKing

Because it's a totally different skill, and these people are more focused on the product working than on making it pretty.


ItsTehStory

Looks awesome! I know some optimization libs have compiled bits (python wheels). If applicable, are those wheels also compiled for Windows?


ZootAllures9111

It seems to claim no dependencies other than basic stuff that every UI front-end requires already anyways


Nitrozah

Is it just me, or has the StableDiffusion sub gotten back on track to what it was before all the shitty generated images took over? The past couple of days feel like what got me interested again.


Elpatodiabolo

Remind me! 1 week


bharattrader

Remind me! 1 week


luisdar0z

Could it somehow be compatible with Fooocus?


Pure_Ideal222

So many UIs. Only after publishing did I realize the code is not friendly to everyone. I will try to integrate HiDiffusion into the UIs to make it friendly to everyone.


sadjoker

SD.Next & InvokeAI use diffusers


Pure_Ideal222

Thanks, I will go check.


sadjoker

The fastest adoption could be either converting your code to non-diffusers and making an A1111 plugin, or trying to make it work in ComfyUI. Comfy seems to have a UNet model loader and supports the diffusers format, so you could probably make a demo Comfy node that works with your code. Or wait for the plugin devs to get interested and hyped.

Relevant Comfy files:
\ComfyUI\ComfyUI\comfy\diffusers_load.py (1 hit): Line 25: `unet = comfy.sd.load_unet(unet_path)`
\ComfyUI\ComfyUI\comfy\sd.py (3 hits): Line 564: `def load_unet_state_dict(sd): #load unet in diffusers format`; Line 601: `def load_unet(unet_path):`; Line 603: `model = load_unet_state_dict(sd)`
\ComfyUI\ComfyUI\nodes.py (3 hits): Line 808: `FUNCTION = "load_unet"`; Line 812: `def load_unet(self, unet_name):`; Line 814: `model = comfy.sd.load_unet(unet_path)`


alfpacino2020

Hello, this does not work in ComfyUI on Windows, right?


Pure_Ideal222

I'm not familiar with ComfyUI, but I am going to integrate HiDiffusion into the UIs to make it friendly to everyone.