ComfyUI Reddit
To try to replicate Magnific, it's a good starting point using what was available 5-6 months ago. A lot of people are just discovering this technology and want to show off what they created.

It's possible that MoonDream is competitive if the user spends a lot of time crafting the perfect prompt, but if the prompt is simply "Caption the image" or "Describe the image", Florence2 wins.

Next, install RGThree's custom node pack from the Manager. Workflows are much more easily reproducible and versionable.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art.

Jul 6, 2024 · You can construct an image generation workflow by chaining different blocks (called nodes) together. You can then connect the same primitive node to five other nodes to change them all in one place instead of editing each node.

I'm starting to make my way towards ComfyUI from A1111. ComfyUI runs SDXL (and all other generations of model) the most efficiently.

Hi guys, has anyone managed to implement Krea.ai or Magnific AI in ComfyUI?

Assuming you had a KSampler named KSampler, you would do this: %KSampler.denoise%. Now you can manage custom nodes within the app.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

ComfyUI is much better suited for studio use than other GUIs available now. Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --normalvram

With ComfyUI you just download the portable zip file, unzip it, and get ComfyUI running instantly; even a kid can get ComfyUI installed.

Again, I would really appreciate any of your Comfy 101 materials, resources, and creators, as well as your advice. I downloaded the workflow picture and dragged it into ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

Belittling their efforts will get you banned.
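The %NodeName.widget% search-and-replace syntax mentioned above can be illustrated with a small sketch. This is a toy re-implementation for illustration only, not ComfyUI's actual code, and the node/widget values below are hypothetical:

```python
import re

def substitute(text: str, nodes: dict) -> str:
    """Replace %NodeName.widget% tokens with the widget's current value,
    mimicking the S&R-style tokens described above."""
    def repl(match):
        node, widget = match.group(1), match.group(2)
        return str(nodes[node][widget])
    return re.sub(r"%([^.%]+)\.([^%]+)%", repl, text)

# Hypothetical widget values for a node named "KSampler"
values = {"KSampler": {"denoise": 0.5, "steps": 20}}
print(substitute("out_%KSampler.denoise%_%KSampler.steps%", values))
# -> out_0.5_20
```

The point is just that the token names the node, then the widget, separated by a dot; anything outside the % markers passes through unchanged.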
Started with A1111, but now solely ComfyUI.

I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI.

If you ever find some way of using ComfyUI on your phone, please come back here and let me (us) know :-))) I've tried, and the interface just doesn't move, and the Queue Prompt widget is fixed.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

I've been "detailing" my images for months. This guy is your artist; he'll take care of all the drawing and painting and whatnot.

It seems that the path always looks to the root of ComfyUI, not relative to the custom_node folder "comfyui-popup_preview".

You can build an interactive, real-time dialogue game in ComfyUI with the theme of the Chinese mythological story "Journey to the West." You play as the newly born Monkey King, Sun Wukong.

Welcome to the Reddit home for ComfyUI, a graph/node style UI for Stable Diffusion. ComfyUI is meant for people who like node-based editors (and are rigorous enough not to get lost in their own architecture).

I ran some tests this morning. And above all, BE NICE.

Although ComfyUI and A1111 ultimately do the same thing, they are not targeting the same audience. The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.

On Linux with the latest ComfyUI I am getting 3.53 it/s for SDXL and approximately 4.55 it/s for SD 1.5 while creating a 896x1152 image via the Euler-A sampler.

GPT is responsible for scriptwriting, SDXL and Dall-E 3 for creating the illustrations, and MS-TTS for delivering the spoken dialogues in various voices.
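Since ComfyUI is described above as an API and backend as well as a GUI, workflows can also be queued programmatically over HTTP. A hedged sketch, assuming the default local address 127.0.0.1:8188 and ComfyUI's /prompt endpoint; the tiny workflow fragment and its node ID are made up for illustration (a real one comes from the "Save (API Format)" menu option):

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumed)

def build_prompt_request(workflow: dict, server: str = COMFY_URL) -> urllib.request.Request:
    """Wrap an API-format workflow dict in the JSON body the /prompt endpoint expects."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        server + "/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Hypothetical API-format fragment: node "3" stands in for a KSampler entry.
req = build_prompt_request({"3": {"class_type": "KSampler", "inputs": {"denoise": 1.0}}})

# Actually sending it requires a running ComfyUI server:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read())
```

This is one reason workflows are "reproducible and versionable": the whole graph is a JSON document you can diff, store in git, and replay.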
It took me hours to get an inpainting workflow I'm more or less happy with, where I feather the mask (feather nodes usually don't work how I want, so I use mask2image, blur the image, then image2mask) and use 'only masked area' where it also applies to the ControlNet (applying it to the ControlNet was probably the worst part).

Download & drop any image from the website into ComfyUI, and ComfyUI will load that image's entire workflow.

YouTube playback is very choppy if I use SD locally for anything serious.

With the extension "ComfyUI Manager" you can install the missing nodes almost automatically with the "install missing custom nodes" button.

The big current advantage of ComfyUI over Automatic1111 is that it appears to handle VRAM much better.

May 6, 2024 · Those detail LoRAs are 100% compatible with ComfyUI, and yes, that's the first, second, and third recommendation I would give.

If you don't have t5xxl_fp16.safetensors or clip_l.safetensors already in your ComfyUI/models/clip/ directory, you can find them on: this link.

I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively. Update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints to your comfy models SVD folder, and just delete the custom nodes ComfyUI-SVD.

On my 12GB 3060, A1111 can't generate a single SDXL 1024x1024 image without using RAM for VRAM at some point near the end of generation, even with --medvram set.

Basically, it doesn't open after downloading (v.22, the latest one available).

We learned that downloading other workflows and trying to run them often doesn't work because of missing custom nodes, unknown model files, etc.

My questions weren't so much that you should or shouldn't include it, BUT I am confused by the support/lack thereof for it via any method (core or custom node) and what seems like a format that is widely used (HF and Civitai).

Using ComfyUI was a better experience; the images took around 1:50 to 2:25 min at 1024x1024 / 1024x768, all with the refiner. It JUST WORKS! I love that.
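The drag-and-drop trick above works because ComfyUI embeds the workflow JSON in the generated PNG's text chunks, so when an image "doesn't load anything", the metadata really may be missing. A stdlib-only sketch for checking a file yourself; the chunk keyword "workflow" matches what ComfyUI is known to write, but treat the details as an assumption:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return the tEXt chunks of a PNG as a {keyword: value} dict."""
    if not data.startswith(PNG_SIG):
        raise ValueError("not a PNG file")
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        chunk = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = chunk.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# Build a minimal PNG in memory to demo the parser
# (a real file would be read with open(path, "rb").read()):
def _chunk(ctype: bytes, body: bytes) -> bytes:
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", zlib.crc32(ctype + body))

demo = PNG_SIG + _chunk(b"tEXt", b"workflow\x00{\"nodes\": []}") + _chunk(b"IEND", b"")
print(png_text_chunks(demo)["workflow"])  # -> {"nodes": []}
```

If no "workflow" keyword turns up in a generated image's chunks, ComfyUI has nothing to restore, which matches the "metadata is not complete" symptom described above.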
It's ComfyUI; with the latest version you just need to drop the picture from the linked website into ComfyUI and you'll get the setup.

So, as long as you don't expect ComfyUI not to break occasionally, sure, give it a go.

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones. The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

We ask that you please take a minute to read through the rules and check out the resources provided before creating a post, especially if you are new here.

Also, I don't know when it changed, but ComfyUI is not a conda environment anymore; it depends on a python_embeded package, and generating a venv from it results in no tkinter.

The biggest tip for Comfy: you can turn most node settings into an input by right-clicking and choosing "convert to input", then connecting a primitive node to that input.

A1111 is REALLY unstable compared to ComfyUI.

Different artists can do different things, so pick an artist that suits the image you want.

But one of the really cool things it has is a separate tab for a "Control Surface".

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

I use a cheap, expendable Chromebook (to access Google Colab) for my travelling ComfyUI needs (with a mouse).

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

I think, for me at least for now with my current laptop, using ComfyUI is the way to go. ComfyUI breaks down a workflow into rearrangeable elements so you can easily make your own.

Then comes the higher resolution by upscaling.
I've seen the web source code for Krea AI, and I've seen that they use SD 1.5 (+ ControlNet, PatchModel); I haven't managed to reproduce this process.

Hello! Looking to dive into AnimateDiff, and looking to learn from the mistakes of those that walked the path before me. Are people using…

Install ComfyUI. Install ComfyUI Manager.

While I primarily utilize PyTorch cross attention (SDP), I also tested xformers, to no avail.

Florence2 (large, not FT, in more_detailed_captioning mode) beats MoonDream v1 and v2 in out-of-the-box captioning.

Go to your FizzNodes folder ("D:\Comfy\ComfyUI\custom_nodes\ComfyUI_FizzNodes" for me) and run this, making sure to adapt the beginning to match where you put your ComfyUI folder: "D:\Comfy\python_embeded\python.exe -s -m pip install -r requirements.txt". It is actually written on the FizzNodes GitHub.

Basically, in Patcher, you can string plugins together in much the same way as ComfyUI. Room for improvement (or, inquiring about…)

Aug 2, 2024 · Flux is a family of diffusion models by Black Forest Labs.

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, etc. It seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives them serious potential… I wonder if Comfy and Invoke will somehow work together, or if things will stay fragmented between all the various…

I have an Nvidia GeForce GTX Titan with 12GB VRAM and 128GB normal RAM.

It all starts with the "load checkpoint" node.

Hi Reddit! In October, we launched https://comfyworkflows.com to make it easier for people to share and discover ComfyUI workflows.
Thanks!

I use an 8GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

I'm into it. I've been wondering the same since I saw a tutorial on using just the model upscaler vs. the Ultimate Upscaler.

I don't mind changing my images too much, because I think of the detailer as a step in the workflow.

And I run ComfyUI locally via Stability Matrix on my workstation in my home/office.

A somewhat decent inpainting workflow in ComfyUI can be a pain in the ass to make.

ComfyUI Manager issue: I am now just setting up ComfyUI, and I have issues (already, LOL) with opening the ComfyUI Manager from CivitAI.

For those of you familiar with FL Studio, and specifically with Patcher, you might know what I'm about to describe.

You can also easily upload & share your own ComfyUI workflows, so that others can build on top of them! :) Why I built this: I just started learning ComfyUI, and really like how it saves the workflow info within each image it generates.

Please keep posted images SFW.

Installation is complicated and annoying to set up; most people would have to watch YT tutorials just to get A1111 installed properly. A1111 is probably easier to start with: everything is siloed, easy to get results.

Simply add LoRAs into your workflow: https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v8&query=details

Hi Reddit! I just shipped some new custom nodes that let you easily use the new MagicAnimate model inside ComfyUI!

Magnific is a really clever workflow, to be honest; it is not that trivial to add detail and not change the image too much, as OP said.
I created this subreddit to separate discussions from Automatic1111 and Stable Diffusion discussions in general.

ComfyUI is also trivial to extend with custom nodes.

If you have multiple KSamplers in your workflow, you need to find the S&R name and use that for the node_name (see the link; it's in the right-click menu when you right-click a node). In %KSampler.denoise%, denoise is the name of the widget value as shown on the node itself.

From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale.

Here are some examples I did generate using ComfyUI + SDXL 1.0 with refiner.

I've used those loaders, but did not know that's what it is doing under the hood.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above. Thanks for explaining that! Totally makes sense.

Some commonly used blocks are loading a checkpoint model, entering a prompt, specifying a sampler, etc.

This pack includes a node called "power prompt".

For seven months now. Sure, my paintbrush never crashed after an update, but then ComfyUI doesn't get crimped in my bag, my LoRAs don't need cleaning, and a PNG is quite a bit cheaper than canvas.