Stable Diffusion upscaling: collected Reddit tips

I find this to be the quickest and simplest workflow: AnimateDiff + QRCodeMonster. This image is 1280x960. It's a bit of faffing around, but it gets the job done.

I set it up for batch mode and hit go, then it ran off about 30 images and just stopped. So I started it again and it ran off about 20 and stopped, then 15, then 9.

Using Stable Diffusion 1.5 with AUTOMATIC1111. Try a denoising strength of 0.25 if you want to change small stuff, or lower. Anyway, try the StableSR upscaling with Tiled VAE enabled.

Since you have "only masked" selected, Stable Diffusion will only work on the bounding rectangle containing the masked area, extended by "only masked padding", i.e. 32 pixels, and in the aspect ratio you selected, i.e. 1:1 (512 to 512). The original kitten is partially in that area, so inpainting is somewhat aware of it.

First previews until ~75% look great, but after that it shifts to these artifacts. Happens with almost every prompt involving people.

8192x8192 image saved as A2B.

To do upscaling you need to use one of the upscaling options.

First on NightCafe; created some great stuff there, and loved the built-in upscaler (it does a REALLY solid job, blowing stuff up to 8000x8000px). It also uses ESRGAN baked in.

Thanks a lot for the detailed explanation! Advice I had seen for a slower computer with less RAM was that when using the SD Upscale script on img2img, it was OK to remove all of your prompt except for style things like photorealistic, HD, 4K, masterpiece, etc. But maybe the person offering this advice was not well informed. (Well, yes, you're right: you should not remove parts of the prompt in img2img.)

However, remember that this is Stable Diffusion 1.5, so the resolution is limited to 1000. Generate your 2048x2048 image using the high-res fix, then send it to extras, then upscale to 8k using any of the available options.

Personally, I usually take 2 upscaled images and show/hide regions to get the desired result ("photobashing").

The denoising parameter is in the img2img tab. Set the initial image size to your resolution and use the same seed/settings. It worked for me in 50% of my prompts.

I see many people praising how cool their creations look after upscaling them and I feel sad. Uhh, the laws of physics say otherwise. Double check any of your upscale settings and sliders just in case.

I use Ultimate SD Upscaler with 768 tile width and height; just keep your denoise slider low.

This week I stumbled upon a better method for upscaling than the one I used to use, which I'm happy about! Just wondering if there are better methods out there; with this method I am able to get good quality pictures at 1536x1536.

Steps: 20, Sampler: Euler a, CFG scale: 23. Upscale 3x to 1408x2112.

If 1 doesn't work, try adding --medvram to the launch arguments (and make sure --xformers is already there).

The upscaler 4x-UltraSharp is not the best choice for upscaling photorealistic images. Use ESRGAN_4x instead.

For this example I used Wuffy, my generated cyberdog :D

The authors of SUPIR are working on an open-source video upscaling model though, so keep an eye out for that.

TBH, I don't use the SD upscaler. Nothing beats a real AI upscaler like Magnific. There are a couple of good ones that 1) don't oversharpen the edges and 2) don't smudge the details.

I have yet to find a solid way of upscaling images.

Tiled upscaling effectively lets you get the best of both worlds. After I make an image, I send it to img2img, and then I set this up. Usually something about 0.1-0.15 denoise strength adds plenty of minor details, smooth lines, etc., without fundamentally altering the image's content.
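Here is a minimal sketch of that low-denoise img2img pass using the diffusers library. The model ID, file names, and the plain Lanczos pre-resize are illustrative stand-ins, not anyone's exact workflow:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

img = Image.open("gen_512.png").convert("RGB")
# A plain 2x resize stands in for an ESRGAN-class upscaler here.
big = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

# Low strength (~0.1-0.15) adds minor detail without altering content.
out = pipe(prompt="highly detailed", image=big, strength=0.15).images[0]
out.save("upscaled.png")
```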
With each step, the time to generate the final image increases exponentially.

Since I'm creating videos for reels and my TikTok, the typical dimensions I use are about 1080x1920 pixels. The higher the output resolution, the better the quality of the animations. I put steps to 150 and turned the denoise down.

Tips for upscaling/inpainting? I am aware that many people here use A1111 for their SD-related stuff, but given the hardware I am running this on, I am limited to using the command line to generate images.

Some colours change and there are some minor modifications, but overall this is the best result I got. Problem solved.

The tiny faces are due to high denoising.

Do SD upscale with upscaler A using 5x5 (basically 512x512 tile size, 64 padding) [1]. Send to extras, and upscale (scale 4) with upscaler B. Note that there is a limit to resolution beyond which SD will struggle to compose the image properly (lots of cursed anatomy).

The original input image is 480x480 pixels. I upscaled it 4x, so each output image is 1920x1920 pixels (8x outputs got downsized). There are 87 output images.

The low-res images generated by Stable Diffusion for some models are honestly so bad that most people won't bat an eye at them.

You do only face: perfect. Only dog: also perfect. But try both at once and they miss a bit of quality.

My preferred tool is Invoke AI, which makes upscaling pretty simple.

As shown in the video, this site is meant for you to be able to visually compare the outputs of different upscaling models for yourself, so you can then better decide which upscaling model you want to use on your own generated images.

If you are patient, you may continue upscaling 2x with smaller denoising/bigger tile resolution to get another 2x.

Need to look at LDSR; I don't mind a long processing time if it gets good results.

Technical details regarding Stable Diffusion samplers, confirmed by Katherine: DDIM and PLMS are originally from the Latent Diffusion repo. DDIM was implemented by the CompVis group and was the default (slightly different update rule than the samplers below: eqn 15 in the DDIM paper is the update rule, vs. solving eqn 14's ODE directly).

This has probably been asked way too many times already.

I have a custom image resizer that ensures the input image matches the output dimensions. Final product: 1536x2304. In my process I use a high CFG and low denoise strength.

It does NOT matter what program created the image: if the image is 72 PPI and you print at 300 DPI, you will always get a much smaller physical print size than what your image editor tells you it will be.

1: Generate in as high a resolution as you can on the initial image; AnythingV3 does resolutions outside of square fairly decently. (Also install ToMe; search this sub/YouTube for an install tutorial. tl;dr: git clone the ToMe extension, activate the venv, git clone tomesd, cd, setup.py build.)

I send the output of AnimateDiff to UltimateSDUpscale with 2x ControlNet Tile and 4xUltraSharp. I'm using the mm_sd_v15_v2.ckpt motion model with Kosinkadink's AnimateDiff-Evolved.

When I first started, I was using text2img and upscaling within it to 1024x1024 + refiner, but then I saw others say it's better to generate images without the refiner and without upscaling (so leave it at 512x512) and then upscale 2x or however much in img2img. I'm trying the second method, but I notice that when I do so the pictures tend to…

Need help with this? I've tried adding more steps in the original renders (moving it to 50) and that's it.

Use TAESD (VAE optimisation) for prototyping before upscaling. Didn't play with settings yet, but it's kind of magic. I use different upscalers depending on the image.

Upscaling does exactly this: it segments the image into tiles. Let's say an image is broken into four tiles; in this case, it's a 2x2 square. Each of those tiles is going to get upscaled by the external upscaler, at a resolution increase it can do really well. Then that upscaled image is going to be rediffused img2img, guided by the prompt, possibly adding additional detail. After that, they generate seams and combine everything together.
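As a sketch of the tile-splitting step such scripts perform (the function name and the 512/64 defaults are illustrative, and it assumes the image is at least one tile in each dimension):

```python
from PIL import Image

def split_into_tiles(img, tile=512, overlap=64):
    """Return (box, crop) pairs covering the image with overlapping tiles."""
    step = tile - overlap
    xs = list(range(0, max(img.width - tile, 0) + 1, step))
    ys = list(range(0, max(img.height - tile, 0) + 1, step))
    # Always cover the right and bottom edges.
    if xs[-1] + tile < img.width:
        xs.append(img.width - tile)
    if ys[-1] + tile < img.height:
        ys.append(img.height - tile)
    return [((x, y, x + tile, y + tile), img.crop((x, y, x + tile, y + tile)))
            for y in ys for x in xs]
```

Each tile then goes through the external upscaler and the img2img rediffusion before the overlapping seams are blended back together.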
Push the resolutions up until it starts to make ugly bodies on top of bodies and other strange things, then back down a bit. For example, an extra head on top of a head, or an abnormally elongated torso.

After training an embedding, I can only get 1024x1024 pics to generate.

Upscaling with img2img without using an upscaling algorithm: example. Also, I'm using the Stability Matrix AIO front end; I am far from an expert at setting up and using Stable Diffusion.

For upscaling any image I use the RealisticVision model, unless it's anime; then I use the original model that was used for generation. Helaman's lsdirdat is good. I add much more detail to the prompt when I start the upscaling.

People who upscale a lot tend to use Topaz Gigapixel (around $100 I think). I prefer to use img2img in Stable Diffusion with a lower denoise, an inpainting model like RealisticVision20, and the 4x UltraSharp model to upscale. It's free, but it is kind of technical and you need a GPU, and it may modify the image a bit in the process.

I realise you could inpaint the area, but having to change your prompt to be more focused on what you're fixing seems worse than being able to just redo (retry) the upscale process on a region.

It seems like Stable Diffusion has memory leaks.

Unfortunately, all upscalers detect NSFW content and purposefully mangle it. (I only posted the best ones.)

Hi guys, I've been using SD since like May and used to use this method for upscaling. Add "head close-up" to the prompt, and with around 400 pixels for the face it will usually end up nearly perfect.

Hires.fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details.

Now I know why Van Gogh cut off his ear! I think I tried probably every upscaler in this universe with no luck.

Upscaler: 4x-UltraSharp (also try R-ESRGAN 4x+ Anime6B). Redraw type: linear or chess.

Although Topaz is very good at upscaling, it does not "invent" details; it tries to figure out what the pixels should look like, which means that when you have anomalies in the source image, they translate into the upscale.

Contrary to what is usually recommended, I'm experimenting with high CFG values for the final high-resolution upscaling, like 20 or even 30. I'll leave my prompt the same, or sometimes I'll change it to "highly detailed".

IMO, what you can do after the initial render is:
- Super-resolution your image by 2x (ESRGAN)
- Break that image into smaller pieces/chunks
- Apply SD on top of those pieces and stitch them back
- Reapply this process multiple times

Super powerful for adding detail, and upscaling without losing crisp lines, unlike just using an upscaler in the extras tab. In Automatic1111, I will do a 1.5-2x on image generation, then 2-4x in extras with R-ESRGAN 4x+ or R-ESRGAN 4x+ Anime6B. For the default resolution sliders above, select your resolution times 2.

Maybe you can try the openOutpaint extension (it also does inpainting) to add details, and then go back to A1111 for further upscaling.

I was getting sick of waiting: when all the diffusion steps are done, it takes forever just for the image to show up. TAESD (Tiny AutoEncoder for Stable Diffusion) is a neat optimisation of the VAE that sacrifices quality of small details for almost instant VAE decoding.
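A minimal sketch of swapping TAESD in with diffusers (the madebyollin/taesd checkpoint is the published TAESD release; the model ID and prompt are illustrative):

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Replace the full VAE with the tiny autoencoder: decoding becomes nearly
# instant at the cost of fine detail, good enough to judge composition.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
image = pipe("a photo of a cyberdog").images[0]
```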
The latent space offers a world of opportunity.

A1111 Upscaling. As of now I am only capable of creating 512x512 images, or 768x512 / 512x768, and so on.

The one major difference between this and Gigapixel is that it redraws each tile with the new prompt / img2img, so it will paint things that weren't there before. The 'old ways' and their limitations don't apply in this case to Stable Diffusion upscaling.

Gigapixel and Real-ESRGAN are in my current regular toolset. The code for Real-ESRGAN is free, and upscaling with it before getting Stable Diffusion to run on each tile turns out better, since less noise and sharper shapes = better results per tile. In my opinion, $100 is awesome value for the results it gives; plus it's not a subscription model: "buy once, own forever, with 1 year of updates included".

The processing time will clearly depend on the image resolution and the power of your computer. In my case, with an Nvidia RTX 2060 with 12 GB, the processing time to scale an image from 768x768 pixels to 16k was approximately 12 minutes.

The hlky SD development repo has RealESRGAN and Latent Diffusion upscalers built in, with quite a lot of functionality. I highly recommend it; you can push images directly from txt2img or img2img to upscale, GoBig, lots of stuff to play with.

High-res fix you use to prevent the deformities and artifacts when generating at a higher resolution than 512x512. Upscaling you use when you're happy with a generation and want to make it higher resolution. Technically hires-fix must be upscaling, but it's upscaling in latent space, if I'm well informed. I, for example, use highres fix if I want to create a base image.

Latent upscaler = any upscaler with "latent" in the name. Non-latent upscaler = any upscaler without "latent" in the name. Generally, non-latent upscalers have no lower limit for the denoising needed, whereas latent ones need at least 0.5 to get a decent result. Fractalization/twinning happened at lower denoising as upscaling increased.

I wonder if there are any workflows for ComfyUI that combine Ultimate SD Upscale + controlnet_tile + IP-Adapter.

I personally never saw cropping happening with LDSR.

After a fresh restart, I can get 2048x2048 to generate without issue.

The video example in the README actually looks really impressive!

Download this first and put it into the ComfyUI folder called custom_nodes. After that, restart ComfyUI, and you should see a new button on the left tab (the last one). Click that, then click "missing custom nodes", and just install the one. Once you have installed it, restart ComfyUI one more time and it should work.

I'm not patient enough for that :)

I have added some more words to the negative prompt when upscaling, because I find it useful for the revAnimated model and some others, as it tends to introduce too much saturation.

My upscaling technique with 1111: you generally don't want anything larger than 768x512; generate that, then upscale. There is a hell of a lot of depth to SD Upscaling, and you can get some real magic (and some real dogshit) out of it. Even better results with 0.30-0.35 denoising strength.

With the Dudley and Solid Snake ones I did some extra touch-ups with inpainting, but I still used the method in the video to get the initial images for them, and touched them up with inpaint after.

Crop the upscaled image directly in the center, using the dimensions of one original tile multiplied by whatever upscaling factor you used. This should leave you with a seamless upscaled tile.
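That center crop is simple arithmetic; here is a small illustrative helper (the names are mine, not from any particular script):

```python
from PIL import Image

def center_crop_tile(upscaled, tile_w, tile_h, factor):
    """Crop the upscaled image to one original tile's size times the factor."""
    w, h = tile_w * factor, tile_h * factor
    left = (upscaled.width - w) // 2
    top = (upscaled.height - h) // 2
    return upscaled.crop((left, top, left + w, top + h))
```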
So far, the focus has been on "realism".

Restart your computer.

txt2img, no Hires fix. For Automatic1111, you can set the tiles, overlap, etc. in Settings. The upscaling method is a separate option from the scheduler.

Some will do better than others.

I am using Stable Diffusion WebUI; here is my config. I am not experienced with Stable Diffusion or upscaling, so I might have made terrible choices.

If you insist on txt2img, try in this order: install the MultiDiffusion extension, then enable Tiled VAE. (Worked on my RTX 3060 12GB.)

Settings used: Sampling method: Euler a.

However, I'm having certain doubts regarding upscaling.

Hi, is there any way to upscale videos in Stable Diffusion? Or any extensions? I'm trying to upscale my Deforum videos, but the only way I've seen is to break the video down into images, batch upscale them, and then reassemble them again. Is there a better way to do that?

The best is high-res fix in Automatic1111 with "scale latents" on in the settings.

I changed the description in order to make it reflect your objection.

Unfortunately, Diffusion Bee doesn't have true upscaling. It just blows up the image without adding more detail. I find it strange, because the feature to upscale is there in the extras tab.

You can change the image size and add details (via the denoise parameter) to skin, texture, etc. Just enter a new width & height and a denoising value.

The high-res fix is for fixing the generation of high-res (>512) images.

Hi guys, these are my parameters when upscaling with 1111: Denoising strength: 0.3, Mask blur: 4, Model: mdjrny-v4.

I tried using SD upscale (inside img2img) but the image resolution remained the same.

4x NMKD Siax: also very good. Plain vanilla ESRGAN 4x: surprisingly good for textures.

Original prompt: used embeddings: BadDream [48d0]. img2img prompt: same thing, but Seed: 2602354140, Size: 1024x1536. It can be improved a lot more by tweaking CFG and denoising, when it comes to detail, contrast, and other attributes.

A full-body image 512 pixels high has hardly more than 50 pixels for the face, which is not nearly enough to make a non-monstrous face.

Upscaling with temporal stability / deflicker removal methods? Hi guys, I have a clip animated using SadTalker of a person's face moving. The clip generated was 256x256. The quality of the clip is very good; however, each frame…

Trained a new Stable Diffusion XL (SDXL) Base 1.0 DreamBooth model. Used my medium quality training images dataset. Took pictures myself with my phone, same clothing. The dataset has 15 images of me.

Problem upscaling realistic paintings with thick brushstrokes: I like to generate realistic paintings with textured canvas and thick brushstrokes. Sure, they adhere to the prompt, but they look like they were drawn by a middle schooler.

When I upscale in img2img with 4x-UltraSharp, ControlNet and/or Ultimate SD Upscale, I get bleached images and very noticeable seams.

High CFG for upscaling.

Or use the SD Upscale script.

I've been messing with this for the last few days and cannot for the life of me get the Detailer panel to work.

I downloaded the Analog Diffusion model, which is probably the most photorealistic model out there.

I don't think he wanted to advertise this yet; it's still a WIP.

Steps: depends on denoising. Actual Steps = steps * denoising. For Euler I like to try 70, which is 25/0.35.
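That steps arithmetic in code, as a hypothetical helper just to make the rounding explicit:

```python
def requested_steps(effective_steps: int, denoise: float) -> int:
    """img2img only runs steps * denoise of the requested steps, so invert."""
    return round(effective_steps / denoise)

print(requested_steps(25, 0.35))  # -> 71, i.e. set roughly 70 steps in the UI
```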
It seems that Upscayl only uses an upscaling model. Stable Diffusion needs some resolution to work with.

So it is not so automatic. However, as the author of the Tiled Diffusion extension, I believe that although its functions and image output performance are stronger, Ultimate SD Upscaler can serve as a simple substitute for it.

Ultimate SD Upscaler uses a diffusion process that depends on the SD model and the prompt, plus an upscaling model (R-ESRGAN); it can also be combined with ControlNet. This means that in the upscaling process, new details can be added to the image depending on the denoising strength.

Now, I personally use Tiled Diffusion + StableSR without much thinking.

The only major difference between Hires fix and upscaling through img2img (like through Ultimate SD Upscale) using an upscaling model (like Remacri) comes when choosing a latent upscaler.

Regarding Stable Diffusion upscaling (Deforum): Hi all! I've been running Deforum lately and it's quite incredible.

Keep the full positive and negative prompt; ControlNet tile_resample will take "care".

Generally speaking, Stable Diffusion is not efficient at rendering text in generated images, let alone restoring it in existing ones. It is only capable of dealing with the material its models have been trained on, and typed text is not commonly found there.

Decided to start running on a local machine just so I can experiment more.

Nope, this is even better than Topaz Video AI, which is already far ahead of open-source video upscaling.

Original 512x512 image into img2img at 2048x2048. Workflow: use the baseline (or generate it yourself) in img2img. Then, when you run the image through the upscaler, it just magically becomes a masterpiece.

The idea is simple; it's exactly the same principle as txt2imghd, but done manually: upscale the image with other software (ESRGAN, GigapixelAI, etc.), slice it into tiles of a size that Stable Diffusion can handle, pass each slice through img2img, and blend them all together.

It's funny how often adetailer thinks boobs are eyes too.

The first step is to get access to Stable Diffusion. If you don't already have it, then you have a few options for getting it. Option 1: you can demo Stable Diffusion for free on websites such as StableDiffusion.fr. Option 2: use a pre-made template of Stable Diffusion WebUI on a configurable online server.

The important thing is to use a very low denoise strength, like 0.12 or so, to only upscale while adding some detail. Sometimes I drop it to 0.1 even, when SD generates 'too much added detail'.

The best upscale to date for me. It says: Postprocess upscale by: 4, Postprocess upscaler: LDSR.

0.3 should work.

Also, the "prompt" I'm copying has "Hires upscaler: Latent (nearest-exact)", but I don't see it.

If I understand correctly how Ultimate SD Upscale + controlnet_tile works: they make an upscale, divide the upscaled image into tiles, and then run img2img over all the tiles.
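A naive sketch of that per-tile pass with diffusers, reusing the split_into_tiles() helper from the earlier sketch. This version has no ControlNet and no seam blending, so expect visible seams; file names and the prompt are illustrative:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

big = Image.open("upscaled_2x.png").convert("RGB")
result = big.copy()
for box, tile_img in split_into_tiles(big):  # helper from the earlier sketch
    redrawn = pipe(prompt="highly detailed", image=tile_img,
                   strength=0.3).images[0]
    # Naive paste: later tiles overwrite the overlap. Real scripts feather
    # the seams with a gradient mask instead.
    result.paste(redrawn, (box[0], box[1]))
result.save("tiled_redraw.png")
```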
Black boxes being added are a result of improper resolutions. In terms of downsampling on the A1111 repo: LDSR by default will only upscale to 4x, so if you leave it at the default setting of 2x upscale, it will always downsample by 1/2. There are also further options in the settings.

Models were designed for 512x512, and when the resolution is larger it causes duplicates.

A significant change is that I changed the UI.

Upscaling with ControlNet and Ultimate SD Upscale is just incredible! I've struggled with Hires.fix and other upscaling methods like the Loopback Scaler script and SD Upscale.

Personally, I only use adetailer to create my first image, then upscale with Ultimate SD Upscale at a denoise of 0.2 to 0.3. I don't upscale with adetailer on either.

For just the straightforward LDSR 4X upscale of a 512x512 image, my 3060 took only 1.1 seconds. It was such a short time that I wondered if I did something wrong, but I don't think I did.

The low one on the left is 474x711.

However, it's taking almost forever to process.

Use 0.1 denoising, a -1 seed, and neutral positive and negative prompts, where you only specify that you want a detailed image.

I also had this problem in the beginning. I uninstalled and reinstalled Forge, no impact. I redownloaded the ESRGAN models, still have the same problem. If I use the models already built into Automatic1111 and Forge, they work. Strange.

Automatic didn't want to implement automatic upscale. The reason was that it would encourage people to always upscale and upload upscaled images to the Internet, and those are not pure SD images.

But I'm having problems with upscaling, both hires fix and img2img. What you can also do is use Hi-res fix with ca. 2x upscale.

The upscalers used here are: UniversalUpscalerV2-Neutral (denoted as N) and UniversalUpscalerV2-Sharp (denoted as S).

Noticed that LDSR is no longer listed in the dropdown under Extras > Upscaler 1. Did something happen? Anyone know how to add it back? Looks like a bug. It sometimes reappears when you reload the UI. Same thing for the number of iterations setting.

The results are satisfying for me.

This usually requires the --medvram flag.

Copy it back into SD if you want.

This is an img2img method where I use Blip Model Loader from WAS to set the positive caption.

Actually, I started the upscaling process all over again for the robot picture I showed you because, for some reason, with the style of this image, 0.2 denoise was creating too much noise and colour variations that were perfectly fine on one image, but when blending everything together it was not giving a smooth feeling, just a kind of dirty image.

Then I pressed "Fetch updates" and "Update ComfyUI", the line got up as it should, and those two items disappeared.

Check out Remacri (gotta look around) or V4 Universal (I heard it's now an extension in the Automatic repo).

Your guide works for upscaling simple anime images, but it's gonna screw up photos or photoreal work, and it will likely mess with styles and add some nasty artifacting if you're running a 0.4-0.5 denoise.

Often I find 90% of the image upscales fantastically, but there's something not perfect that I'd like to redo multiple times.

Here is the image I wanted to upscale: a 768x512px image.

I have a lot of images (in the thousands to ten-thousands) that I want to upscale using R-ESRGAN 4x, but I'm worried that would take forever. Is there a faster way to upscale images using R-ESRGAN?
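A sketch of a batch loop for that. Here upscale() is a placeholder for whatever R-ESRGAN frontend you use (its CLI or a library binding); the speed win comes from keeping one long-running process, with the model loaded once, for the whole folder instead of relaunching per image:

```python
from pathlib import Path
from PIL import Image

def upscale(img: Image.Image) -> Image.Image:
    # Placeholder: swap in a real R-ESRGAN call here.
    return img.resize((img.width * 4, img.height * 4), Image.LANCZOS)

src, dst = Path("frames"), Path("frames_4x")
dst.mkdir(exist_ok=True)
for p in sorted(src.glob("*.png")):
    upscale(Image.open(p).convert("RGB")).save(dst / p.name)
```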
Either zooming in or generating at a higher resolution can improve the coherency of faces.

Drawbacks of Tiled Diffusion: I think that when you put too many things inside, it gives less attention to each of them.

This one is a treasure.

There are a lot of outputs per example on the multiple-models page, so I also included a favorites page.

Gigapixel has a 30-day trial version which you can use for your comparison.

You'll just have to do it like the rest of us: render natively at 4k.

It's amazing to see what 0xbitches have accomplished with this node pack; when I created the first node, I didn't think it would come this far while still using the diffusers backend.

I set the denoising to 0.75.

Personally, when I'm upscaling an image it's usually because I like it the way it already looks, and upscaling at a low denoise keeps it that way.

Alright, so now that creation has become much more available, I've started messing with Stable Diffusion.

I previously did a post in r/ArtificialInteligence, but that one is geared towards models for real-life photos with faces, so I'm asking here.

Bison worked pretty easily; I just followed the video instructions. In the workflow notes, you will find some recommendations as well as links to the model, LoRA, and upscalers.

4x NMKD Superscale: my current favorite.

I am lost on the fascination with upscaling scripts.

I would really like to run this on a folder of images output from Deforum, so I can combine them back together into a new high-res MP4.

I managed to get the entire upscaling process running on my RTX 3060 (12 GB) by slicing the input image into tiles, then upscaling each one of them, and finally merging them back in the exact same arrangement as the original image.
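A sketch of that slice/upscale/merge arrangement with non-overlapping tiles, reusing the placeholder upscale() from the previous sketch (assumed to return a 4x enlargement):

```python
from PIL import Image

def upscale_by_tiles(img: Image.Image, tile: int = 512, factor: int = 4) -> Image.Image:
    """Upscale one tile at a time to bound VRAM use, then reassemble."""
    out = Image.new("RGB", (img.width * factor, img.height * factor))
    for y in range(0, img.height, tile):
        for x in range(0, img.width, tile):
            box = (x, y, min(x + tile, img.width), min(y + tile, img.height))
            # Each upscaled tile lands at the scaled-up grid position.
            out.paste(upscale(img.crop(box)), (x * factor, y * factor))
    return out
```

Because the tiles do not overlap, this keeps memory use flat, but any upscaler that behaves differently near tile edges can leave faint grid lines; overlapping tiles with blending (as in the earlier sketches) avoid that at some extra cost.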