ComfyUI examples: tips collected from Reddit

The challenge is less about learning ComfyUI itself and more about understanding how Stable Diffusion works; once the underlying concepts click, the node graph makes sense. Not every custom node is essential either - plenty of people never touch things like the ttN xyPlot.

A few scattered tips from the threads: in a ControlNet stacker node, click each of the model names to pick the actual model files; if a custom node misbehaves, sometimes simply uninstalling and reinstalling it fixes the problem; and "how do I inpaint in ComfyUI?" remains one of the most common beginner questions.

Area composition: one example image contains four different areas - night, evening, day and morning. The idea is to create a tall canvas and render four vertical sections separately, compositing them as it goes.

SDXL notes: 896x1152 and 1536x640 are good resolutions, and SDXL does not play well with SD 1.5 LoRAs or models, so mixing the two causes many of the errors people report. A full SDXL 1.0 workflow with the refiner typically uses about 8 GB of VRAM, and generations take roughly 1:50 to 2:25 minutes at 1024x1024 or 1024x768. Read the readme yourself and you will see the refiner is in fact intended as img2img, which is exactly how the official ComfyUI example workflow uses it.

Other examples people shared: Advanced Merging CosXL; a workflow for very detailed 2K images of real people (cosplayers in this case) using LoRAs, with renders of around ten minutes; a LoRA trained on the token "txwx woman"; attempts to reproduce the official Stable Cascade examples with better parameters; platforming-game graphics generated with SD (see the sub history); a Save Text File node from the WAS suite hooked up to the prompt so every generation logs its prompt; and sites where you can share, discover and run thousands of ComfyUI workflows. AuraFlow also deserves a mention as one of the only truly open-source models, with both code and weights under a FOSS license. One recurring oddity: the nodes in the official examples are not always identical to the ones you get when you add them yourself.

On clip skip: a higher clip skip in A1111 terms (a lower, more negative value in ComfyUI terms) means less detail is taken from CLIP - not to be confused with detail in the final image - and some models (SD2.x, for example) give nothing useful if you change it. The example LoRA loaders floating around rarely demonstrate clip skip at all. In Automatic1111 you load a LoRA and control its strength directly in the prompt, e.g. <lora:Dragon_Ball_Backgrounds_XL:0.8>.
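To make the clip skip comparison above concrete, here is a tiny helper converting between the two conventions. This is a sketch of the common community convention (A1111 counts skipped layers from 1 upward, ComfyUI's CLIP Set Last Layer node uses negative indices counted from the end), not an official API:

```python
def a1111_clip_skip_to_comfyui(clip_skip: int) -> int:
    """Convert an A1111 'Clip skip' value (1, 2, 3, ...) into the value you
    would type into ComfyUI's 'CLIP Set Last Layer' node (-1, -2, -3, ...).

    A1111 clip skip 1 means "use the last CLIP layer", which in ComfyUI is
    stop_at_clip_layer = -1; clip skip 2 corresponds to -2, and so on.
    """
    if clip_skip < 1:
        raise ValueError("A1111 clip skip values start at 1")
    return -clip_skip

# Example: the 'Clip skip: 2' recommended by many anime checkpoints
assert a1111_clip_skip_to_comfyui(2) == -2
```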
As far as I understand, unlike A1111, ComfyUI has no GPU acceleration on Mac, so you have to run it on the CPU: add --cpu after main.py when launching it in the terminal. On Windows the portable build ships two launchers, run_nvidia_gpu.bat and run_cpu.bat; the GPU one is the one you normally want.

Nodes are the rectangular blocks - Load Checkpoint, CLIP Text Encode, and so on. For workflow examples of what ComfyUI can do, the best starting point is the official ComfyUI Examples page on GitHub (comfyanonymous.github.io), which includes the ConditioningSetArea (area composition) examples and the 2-pass "hires fix" examples; note there are two different 2-pass methods there, one that scales the latent and one that does not, and there is now also a `PatchModelAddDownscale` node. The UI, together with the manager, will handle adding models and pip-installing missing nodes, which makes wildcard runs and other custom-node-heavy setups much easier to reproduce. People still ask for working examples of BREAK in ComfyUI, node-based or prompt-based.

On outpainting and compositing: it is solvable, but no matter what you do there is usually some artifacting, and there tends to be a clear visible distinction between the original image and the newly created parts; a practical workaround is to generate the subject smaller, then crop in and upscale. For img2img-style replication, the image you are trying to replicate should be plugged into the pixels input of VAE Encode, and the VAE of whatever model feeds the KSampler should be plugged into the same node. Step one, though, is always: make sure you can generate images at all with your chosen checkpoint.

For Stable Cascade, the multi-gigabyte Stage C checkpoint goes under \models\unet\SD Cascade; even with the recommended settings, models and workflow, some people still report mediocre results - objects fusing together, mangled faces and so on.

Finally, every image generated through the main ComfyUI frontend has the full workflow embedded in its metadata (right now anything generated through the ComfyUI API does not). All the images in the examples repo contain that metadata, so if you ever want the same effect as an OP, you just load their image: drop the picture from the linked page into ComfyUI and the whole setup, wired exactly as shown, appears. Several people maintain public collections of hundreds of workflows and example art shared this way.
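Since several of the tips above rely on the workflow being embedded in the PNG, here is a small sketch of how to check for it yourself. It assumes Pillow is installed; the "workflow" and "prompt" keys are the ones the stock Save Image node writes, and a re-encoded or WebP-converted upload will have lost them:

```python
import json
from PIL import Image

def extract_workflow(path: str) -> dict:
    """Return the workflow JSON embedded in a ComfyUI-generated PNG, if present."""
    img = Image.open(path)
    # ComfyUI writes two PNG text chunks: "workflow" (editor graph) and
    # "prompt" (API-format graph). Either one is enough to rebuild the setup.
    raw = img.info.get("workflow") or img.info.get("prompt")
    if raw is None:
        raise ValueError("No ComfyUI metadata found - the host may have stripped it.")
    return json.loads(raw)

if __name__ == "__main__":
    wf = extract_workflow("example_output.png")
    print(f"{len(wf.get('nodes', wf))} nodes/entries in the embedded graph")
```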
To load a workflow, click the Load button on the right sidebar and select the workflow .json file. The depth T2I-Adapter and the depth ControlNet examples each show an input image and how to wire the corresponding nodes; the order in which you install things can also make a difference when custom nodes conflict.

ComfyUI can feel like a powerful but genuinely complicated tool to learn, especially when a custom node's only documentation is a YouTube video with no audio, no captions and no written examples. A few things that help: the Area Composition examples; curated lists of ten or so cool workflows you can simply download and try; and the high-res (hires fix) example from the GitHub page. Remember the basic vocabulary too: a checkpoint is your main model, and LoRAs are smaller models layered on top to vary the output in specific ways. If ComfyUI will not start, rename suspect custom node folders to .disabled and restart to see if it runs normally, and note that what some custom scripts do (switching to a new checkpoint mid-workflow, for example) can usually be done manually with a few more nodes.

Because it is modular, ComfyUI lets everyone build workflows for their own needs or experiment with whatever they want: Vid2QR2Vid by Fictiverse is a powerful, creative use of ControlNet, more steps generally increase quality (as in the example), and the WAS suite installs cleanly through the ComfyUI custom node manager.
Example question: "I want to use only the floor from the canny pass so I can mix it with OpenPose." The documentation here is hard to decipher and most people end up finding a method by trial and error. Likewise, in the IPAdapter-style examples - the cat image with a rounded table and different backgrounds - it would help beginners to have more examples covering different use cases.

Assorted points: the Txt/Img2Vid + Upscale/Interpolation workflow by Kaïros is a very nicely refined example featuring upscaling and interpolation; an ESRGAN upscaler can be used for the upscaling step; typing "There is no horse in this image" and getting a picture of a horse is a good reminder that prompts are not parsed as logic; in ComfyUI, txt2img and img2img are the same node; one reported bug is that dragging an image onto the page loads the wrong positive prompt; some people prefer to generate and inpaint in one session, others generate several images first and inpaint later; and while usability is exactly why A1111 is so popular, A1111 feels bloated compared to Comfy, whose examples repo lets you load all the native workflows just by dragging a picture in.

For installation (translated from the Japanese notes): install ComfyUI using one of the methods below, and if you already have it, update to the latest version first; method 1 is to install ComfyUI directly. The WAS suite has some workflow material in its GitHub repository, and with the ComfyUI Manager extension you can install most missing custom nodes almost automatically. To disable a problematic custom node, rename its folder - for example ComfyUI-Manager becomes ComfyUI-Manager.disabled - and to share model folders with an A1111 install, rename extra_model_paths.yaml.example to extra_model_paths.yaml, edit it with your favorite editor, and change base_path to wherever A1111 is installed; ComfyUI will load it on startup.
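For reference, the extra_model_paths.yaml mentioned above is a plain YAML file. A minimal sketch of the A1111-sharing section looks roughly like this; the base_path and subfolder names are illustrative, so use the extra_model_paths.yaml.example that ships with ComfyUI as the authoritative template:

```yaml
# Rename this file to extra_model_paths.yaml and ComfyUI will load it on startup.
a111:
    base_path: /path/to/stable-diffusion-webui/   # change this to your A1111 install
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
        models/SwinIR
    embeddings: embeddings
    controlnet: models/ControlNet
```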
I've switched to ComfyUI from A1111 and I don't think I will be going back. The example pictures do load a workflow, but they carry no label or text indicating which version they were made for, which can be confusing. Base ComfyUI never connects to the internet unless you run the update script, and in the GitHub Q&A the author explains the motivation plainly: he made it because he wanted to learn how Stable Diffusion works, and the node graph encapsulates the difficulties and idiosyncrasies of Python programming by breaking the problem down into pieces. For people who would rather not start from a blank canvas, there are dozens of finished workflows from Sytan, Searge and the official ComfyUI examples (Area Composition, SDXL Turbo, and so on), plus a list of example workflows in the official repo. One person scripting against it noted they used as many native ComfyUI nodes, classes and functions as possible, but could not find a way to drive KSampler and Load Checkpoint directly without rewriting parts of it.

SDXL specifics: the base checkpoint can be used like any regular checkpoint in ComfyUI, and the only really important setting is the resolution - 1024x1024, or another resolution with the same total pixel count and a different aspect ratio; mixing in SD 1.5 components is what gives many people their errors. With a fixed seed, peripheral details stay very close to identical between images. In ComfyUI the LoRA loader node exposes two strengths, strength_model and strength_clip, in addition to whatever you put in the prompt. Other pointers: a guide inspired by 御月望未's tutorial on enhancing detail and color in illustrations using noise and texture; Kosinkadink's ComfyUI-AnimateDiff-Evolved update, which adds new functionality to the AnimateDiff Loader Advanced node; a style-transfer example going from a South Park drawing to an intended live-action result; and the general point that procedural generation means an endless variety of graphics for a minimal download size.

One common hires-fix problem: using the official example's latent upscale, the second pass produces a glitchy image (only the unmasked part of the original img2img image survives), but decoding out of latent space, upscaling the image as pixels, and encoding back into a latent for the second pass gives a clean result.
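One way to express that "decode, upscale in pixel space, re-encode" fix is as a small fragment of an API-format graph spliced between the two sampler passes. The node ids below ("1", "2", "3", "10") are placeholders for whatever ids your own graph uses for the checkpoint loader, the two prompt encoders and the first-pass sampler; the class names are the stock nodes, but verify the field names against a workflow you export yourself:

```python
# A sketch, not a complete workflow: assumes "10" is the first-pass KSampler
# and "1" the checkpoint loader (whose output 2 is the VAE) in your graph.
second_pass_fragment = {
    "20": {"class_type": "VAEDecode",        # leave latent space
           "inputs": {"samples": ["10", 0], "vae": ["1", 2]}},
    "21": {"class_type": "ImageScaleBy",     # upscale as pixels, not as a latent
           "inputs": {"image": ["20", 0], "upscale_method": "lanczos", "scale_by": 2.0}},
    "22": {"class_type": "VAEEncode",        # back into latent space for the second pass
           "inputs": {"pixels": ["21", 0], "vae": ["1", 2]}},
    "23": {"class_type": "KSampler",         # "2"/"3" stand for your positive/negative encoders
           "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                      "latent_image": ["22", 0], "seed": 0, "steps": 20, "cfg": 7.0,
                      "sampler_name": "euler", "scheduler": "normal",
                      "denoise": 0.5}},     # low denoise so the second pass only refines
}
```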
There are other examples as well (highly recommended - go through them), and the video shared above does a great job of explaining both the examples and the tool itself. ComfyUI is, at heart, a UI that lets you design and execute advanced Stable Diffusion pipelines through a graph/nodes/flowchart interface, and Flux.1 - a suite of generative image models from Black Forest Labs with exceptional text-to-image quality and language comprehension - is among the model families it supports. When you start the portable build there are two launcher files in the main folder, run_nvidia_gpu.bat and run_cpu.bat, and expect a fair amount of Python installing followed by a server restart whenever you add custom nodes.

A grab-bag of other points: Reddit strips the workflow from uploaded PNGs, so "Workflow Included" should mean more than a vague link to the generic ComfyUI workflow; Civitai has a ton of examples, including many ComfyUI workflows and SD 1.5 models you can download and explore; a common beginner question is why a CLIP Text Encode's height and width would be set larger than the source image; the T2I style model downloads into the controlnets folder and has to be moved to the style models folder; if you disabled all the custom nodes properly, none of them - including the manager - should be loaded; and yes, you can achieve the same results in A1111, but Comfy's killer feature is that the workflow is saved completely and can be shared with others.

Finally, for automation: the comfy_api_simplified package was recently updated and can be used to send images, run workflows and receive images from a running ComfyUI server - for example as a layer between a Telegram bot and ComfyUI, running different workflows from a user's text and image input.
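On driving a running ComfyUI server from Python (whether through comfy_api_simplified or by hand), the sketch below assumes a default local server on 127.0.0.1:8188 and a workflow exported with "Save (API Format)" from the ComfyUI menu; it only queues the job, it does not wait for the result:

```python
import json
import uuid
import urllib.request

SERVER = "http://127.0.0.1:8188"

def queue_workflow(api_workflow_path: str) -> str:
    """Queue an API-format workflow JSON on a running ComfyUI server.

    Returns the prompt_id the server assigns, which can later be used to look
    up results under /history/<prompt_id>.
    """
    with open(api_workflow_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)

    payload = json.dumps({
        "prompt": workflow,              # the node graph in API format
        "client_id": str(uuid.uuid4()),  # lets you match websocket progress messages later
    }).encode("utf-8")

    req = urllib.request.Request(f"{SERVER}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

if __name__ == "__main__":
    print("queued:", queue_workflow("my_workflow_api.json"))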
To create this workflow the author wrote a Python script to wire up all the nodes - nodes in ComfyUI represent specific Stable Diffusion functions, so a graph can be assembled programmatically just as well as by hand (see the sketch after this section). One shared example builds three characters, each with its own pose, outfit, features and expression; another uses masks or a regional prompter to define where two characters go and then describes each one with its own Portrait Master node. A related request: tutorials that cover custom nodes specifically are what many people are missing.

On the second pass in a text-to-image process: it inevitably invents new details, which is fine and often beneficial, since the miniature first pass tends to have issues caused by the imperfections of our models - and sometimes the second pass is exactly what fixes them. Reddit, unfortunately, makes it really hard to download the original PNG; everything gets converted to WebP, which destroys the embedded workflow. Other open questions and notes: how to use one or more ControlNets with the Efficient Loader and ControlNet Stacker nodes (ideally with a picture of the workflow); collections of 150+ workflow examples made with ComfyUI and Civitai models; saving model plus prompt examples in the UI; a Canny ControlNet example workflow you can try by dragging its image into ComfyUI and putting the input image under ComfyUI/input; the observation that applying the refiner can remove details you wanted (for example all the rain in a background); and custom nodes that end up so specific to one workflow that they are not good for general use.
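On the point above about scripting a workflow in Python rather than wiring it by hand: in API format a workflow is just a dict of numbered nodes, so it can be generated programmatically. The sketch below builds the standard text-to-image graph; the class_type and input names follow the default workflow, but diff it against a graph you export yourself with "Save (API Format)", and the checkpoint filename is a placeholder:

```python
def build_txt2img_graph(prompt: str, negative: str = "", seed: int = 0,
                        ckpt: str = "sd_xl_base_1.0.safetensors") -> dict:
    """Return a ComfyUI API-format graph: checkpoint -> prompts -> KSampler -> save."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode",               # positive prompt
              "inputs": {"text": prompt, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",               # negative prompt
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0], "seed": seed, "steps": 25, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "scripted"}},
    }
```

A dict like this can be queued with the /prompt call sketched earlier.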
With area composition it can be very difficult to get the position and the prompt right for each condition; looking at the posted example, Comfy uses two solid masks to define the areas and delimits them with composite masks, which is clever, and even with four regions plus a global condition they are simply combined two at a time. An improved expressions workflow replaces the attention-couple nodes with area composition ones, and its newer versions use two ControlNet inputs: a 9x9 grid of OpenPose faces and a single OpenPose face. A rough wildcard system shared in the thread picks a random gender for a character, picks four features for that character, puts the character in a random location, then compiles a prompt and sends it to CLIP encodes that tell the model where on the image to generate each part; ComfyUI-stable-wildcards can be installed through the Comfy manager. There is also an example of creating a CosXL model from a regular SDXL model with merging, and the ComfyUI manager will identify whatever is missing and download it for you.

Assorted notes: the proper way to use SDXL Turbo is the new SDTurboScheduler node, though it may also work with regular schedulers; see the high-res fix example, particularly the second-pass version; the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow; download and drop any image from the examples site into ComfyUI and it will load that image's entire workflow; if startup fails, try removing and re-adding a single node among the failing ones, because sometimes the probed nodes are ones your workflow does not even use; there is a tutorial on renting 1-8x GPUs and installing ComfyUI in the cloud along with the manager, custom nodes and models; a wished-for feature is mirrored nodes, where changing anything in one node updates its copies; and in the video-oriented nodes with a min_cfg setting, frames further away from the init frame get a gradually higher cfg, starting at the minimum on the first frame and rising toward the cfg set in the sampler by the last frame.

As for the recurring debate: "AI" never stole anything, any more than you steal from the people whose images influenced your own art - and while anyone can use a generative tool, getting it to actually replicate the picture in your head still takes a considerable amount of skill.
Hi-diffusion in ComfyUI with SD 1.5 models easily generated 2K images without any distortion, which is impressive. On newer versions of A1111 you can click the double-wrench icon on a LoRA card and configure a description (the URL you downloaded it from, for example), the trigger words and a default weight, as well as see some information about it. Honestly, much of the ComfyUI momentum started because SDXL was not working well in A1111 at first, and for many people on modest laptops ComfyUI is simply the way to go.

Workflow notes: one shared workflow has several upscale paths that go up to 4x, with a newer, more complex flow meant to add detail to a generated image; AnimateDiff workflows will often make use of a few helpful node packs; ComfyUI LayerDivider is a set of custom nodes for generating layered PSD files inside ComfyUI; the improved AnimateDiff integration (ComfyUI-AnimateDiff-Evolved) adds advanced sampling options dubbed Evolved Sampling that are usable outside of AnimateDiff - read the repo's README and wiki for how it works at its core; and Flux is a family of diffusion models by Black Forest Labs with its own official example pages (the Japanese note simply says to try the official ComfyUI examples linked below). For the style-transfer setup you first need to download the t2i-style model and the pytorch_model.bin through the manager. Since ESRGAN-style upscalers are model-based, people regularly ask for a workflow that gets a decent 2x or 4x upscale from a 512x768 SD 1.5 image.

A workflow is made of two basic building blocks, nodes and edges; an example of area composition with Anything-V3 plus a second pass with AbyssOrangeMix2_hard shows how far that gets you, and a newer version is much more coherent because it relies heavily on the IPAdapter source image, as you can see in the gallery. There is also an AnyNode demo - asking the node for "a cool Instagram-like classic sepia tone filter" and hooking it into an SDXL flow - and a remote-access report: ComfyUI loads fine over the LAN at 192.168.x.x:8188, but dragging example images onto that remote page does nothing; one guess is that the workflow is looking for the Control-LoRA models in a cached directory that only exists on the original machine.
Running this way will also be a lot slower than A1111, unfortunately. The best workflow examples are still the GitHub example pages; start with simple workflows, and expect the documentation to need some TLC, especially on the example front - IPAdapter with attention masks is exactly the kind of tutorial many people are looking for. In Comfy you start from an empty latent rather than an image, CLIPVision extracts the concepts from input images and those concepts, not the pixels, are what gets passed to the model, and for 32-bit-float merges you launch ComfyUI with --force-fp32.

Useful custom nodes and examples mentioned here: the Dynamic Prompts set (adieyal/comfyui-dynamicprompts) for wildcard-style prompting; Motion LoRAs with latent upscale; and the recent Generative Powers of Ten video, whose effect can be reproduced because the necessary nodes already exist in ComfyUI. Known annoyances: a bug where clicking a text box (the prompt, for example) turns it gray and refuses input; only subtle differences when changing steps or cfg (3, 4, 5) in the first-stage KSampler of the example workflow; results that improve once a negative embedding like badhands is added; and nobody being quite sure how ComfyUI's backward-compatibility mechanism treats old workflows.

For programmatic use, one approach that works: build the workflow you like in the UI first, then use the websocket example that ships inside ComfyUI and a little Python to pull the data or display the images.
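Here is a sketch in the spirit of ComfyUI's bundled websocket example. It assumes the websocket-client package, a default local server, and that client_id matches the one used when the prompt was queued; the message shape and the /history endpoint are the ones the bundled example relies on, so double-check against your server version:

```python
import json
import urllib.request
import websocket  # pip install websocket-client

SERVER = "127.0.0.1:8188"

def wait_for_prompt(prompt_id: str, client_id: str) -> dict:
    """Block until the given prompt finishes, then return its history entry."""
    ws = websocket.WebSocket()
    ws.connect(f"ws://{SERVER}/ws?clientId={client_id}")
    try:
        while True:
            message = ws.recv()
            if isinstance(message, bytes):
                continue  # binary frames carry preview images; skip them here
            msg = json.loads(message)
            # The server sends {"type": "executing", "data": {"node": None, ...}}
            # once it has finished working through the queued graph.
            if (msg.get("type") == "executing"
                    and msg["data"].get("node") is None
                    and msg["data"].get("prompt_id") == prompt_id):
                break
    finally:
        ws.close()

    with urllib.request.urlopen(f"http://{SERVER}/history/{prompt_id}") as resp:
        return json.loads(resp.read())[prompt_id]
```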
But standard A1111 inpaint works differently: ComfyUI is not supposed to reproduce A1111 behaviour, and the thing people usually mean is A1111's "Inpaint area" feature, which cuts out the masked rectangle, passes it through the sampler, and pastes it back. These things are not well documented, partly because of the frankly arcane way some creators provide examples and because many of the example images they post are badly compressed.

Some basics that keep coming up: unCLIP is a way of using images as concepts in your prompt in addition to text; if you look at the ComfyUI examples for area composition, they are just using Conditioning (Set Mask / Set Area) -> Conditioning Combine -> the positive input on the KSampler, plus a full render of the image with a prompt that describes the whole thing; if a box shows up red, the node is missing; some workflows require you to git clone the repository into your ComfyUI/custom_nodes folder and restart ComfyUI; and img2img works by loading an image, converting it to latent space with the VAE (the VAE Encode node has two inputs for exactly this), then sampling on it with a denoise lower than 1.

One practical report: the ControlNet input is just 16 fps footage of the portal scene rendered in Blender, and the ComfyUI workflow is the single ControlNet video example, modified to swap in the QR Code Monster ControlNet, the author's own input frames, and a different SD model and VAE. For Stable Cascade, Stage A goes under \models\vae\SD Cascade (stage_a.safetensors); there is also a shared SD 1.5 + SDXL refiner workflow on r/StableDiffusion. And a common frustration: downloading a workflow picture and dragging it into ComfyUI sometimes loads nothing because the metadata is incomplete or has been stripped.
Some people get around it by posting the PNG info in the comments, others use a third-party image host that preserves PNGInfo and link that, but most people just accept that Reddit mangles uploads and that it is too much effort to fix on the platform itself.

Other notes: with higher steps you do get better details, but for prototyping prompts there is not much utility in raising the step count; one-word prompts show even more variance between seeds; and one shared node group does image processing - a multitude of blends between image sources plus custom effects, driven from a central control panel. The official examples repo remains the best catalogue of what is achievable with ComfyUI, including the Flux pages (which also cover the t5xxl_fp16 and clip_l text encoders you need if you do not already have them) and a simple basic-latent-upscaling versus non-latent-upscaling comparison whose images you can load directly to get the full workflow.

There is also an example of using AnyNode in an image-to-image workflow: you describe the operation in plain language, and the node writes and runs the code for it.
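For context on that AnyNode example: a "sepia tone filter" request would produce some ordinary image transform along these lines. This is a hand-written sketch of such a filter (NumPy + Pillow), not AnyNode's actual output:

```python
import numpy as np
from PIL import Image

def sepia(img: Image.Image) -> Image.Image:
    """Apply a classic sepia tone by mixing the RGB channels with fixed weights."""
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32)
    # Standard sepia mixing matrix (rows produce the new R, G, B channels).
    m = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]], dtype=np.float32)
    toned = rgb @ m.T
    return Image.fromarray(np.clip(toned, 0, 255).astype(np.uint8))

# sepia(Image.open("example_output.png")).save("example_sepia.png")
```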
For Stable Cascade the files land as follows: Stage A under \models\vae\SD Cascade (stage_a.safetensors, about 73.7 MB), Stage B under \models\unet\SD Cascade (stage_b_bf16.safetensors, about 3.13 GB), and Stage C under \models\unet\SD Cascade as well; restart the ComfyUI server and refresh the web page after copying them in. Some custom nodes still cause trouble after that, and on modest hardware ComfyUI remains the most practical choice.

Remember that Reddit removes the ComfyUI metadata when you upload, so study shared workflows carefully from the original source. One technique worth studying: sample with model A for only the first ten steps, then synthesize another latent, inject noise, and continue for twenty steps with model B - a two-model pass that is entirely doable in ComfyUI. The ComfyUI Examples pages contain a ton of material and can feel overwhelming, but they are worth it; beyond that, try Civitai, join the community, and look at the outpainting workflow on the ComfyUI example site if outpainting is what you are after.
Here are some examples where two images - one of a mountain and one of a tree in front of a sunset - were used as prompt inputs alongside the text, the image-as-concept approach described above. A final note on upscaling: when portraits are run through the 4x_NMKD-Siax_200k upscaler, for example, the eyes can come out glitchy, blurry or deformed, even with negative prompts in place for eyes.