AI Prompt for Image Workflows (ComfyUI, ControlNet)
An inpainting and outpainting workflow that outpaints a portrait into a full environment using MLSD straight-line guidance on SD3.5 Large, with Topaz Gigapixel AI finishing.
More prompts for Image Workflows (ComfyUI, ControlNet).
A consistent-character workflow generating a fox in an enchanted forest of glowing mushrooms across poses using IP-Adapter FaceID Portrait and Scribble with Juggernaut XL v10.
A consistent-character workflow generating a fox in an enchanted forest of glowing mushrooms across poses using IP-Adapter Composition and Depth (MiDaS) with RealVisXL V4.0.
A consistent-character workflow generating a fox in an enchanted forest of glowing mushrooms across poses using IP-Adapter Composition and Shuffle with Pony Diffusion XL.
A consistent-character workflow generating an astronaut standing on a Mars cliff edge across poses using IP-Adapter Plus (style only) and Lineart anime with Animagine XL 3.1.
A consistent-character workflow generating an astronaut standing on a Mars cliff edge across poses using IP-Adapter FaceID Plus v2 and Canny edge with SDXL 1.0 base.
A node-graph-level ComfyUI workflow specification that removes the background cleanly with hair detail using Flux.1 [dev] with Tile upscaling control.
You are designing an inpainting / outpainting workflow. This is surgical image editing: change a region while leaving the rest untouched, OR extend an image beyond its borders.

## Brief

- **Goal:** outpaint a portrait into a full environment
- **Guidance:** MLSD straight lines
- **Base model:** SD3.5 Large
- **Final upscaler:** Topaz Gigapixel AI

## Inpaint vs Outpaint — Pick One Primary

- **Inpaint** = edit a region INSIDE the existing canvas. Use when: removing an object, replacing a face, changing wardrobe, cleaning up artifacts.
- **Outpaint** = extend the canvas OUTSIDE the original frame. Use when: converting vertical to horizontal, adding environment, zooming out.

## Workflow A — Inpainting

### Step 1. Mask creation

Two ways:

1. **Manual mask** — paint in the ComfyUI Load Image (as Mask) node or A1111 inpaint tab.
2. **Auto-mask** — use SAM (Segment Anything) with a text prompt: "face", "shirt", "sky".

Mask feathering: 5-10 px blur for smooth blending. Avoid hard edges — they create visible seams.

### Step 2. Model choice

- Use **inpaint-specific checkpoints** when possible: SDXL Inpainting, FluxInpaint. These are trained to respect mask boundaries.
- If using a non-inpaint SD3.5 Large, apply the `DifferentialDiffusion` node to improve boundary quality.

### Step 3. Prompt

Describe ONLY what should be IN the masked region, in context.

Good: "a blue silk scarf wrapped around her neck, matching the lighting of the scene"
Bad: "a blue silk scarf" (the model doesn't know to match lighting)

### Step 4. Sampling

- **Denoise:** 0.7-0.95 for major changes; 0.3-0.5 for subtle edits
- **Steps:** 30
- **CFG:** 5-7
- **Conditioning mask strength:** 1.0 (full mask respect)
- **Mask blend with original:** via SetLatentNoiseMask or VAEEncodeForInpaint

### Step 5. ControlNet for inpaint

Add MLSD straight lines to guide the inpaint — for example, a Depth ControlNet ensures new content respects the original scene depth.

### Step 6. Post-blend
Use `ImageCompositeMasked` to merge the inpainted region back into the original canvas with a feathered mask.

## Workflow B — Outpainting

### Step 1. Canvas extension

Pad the image on the sides you want to extend. Typical padding: 256-512 px each side. Use the `ImagePadForOutpaint` node with:

- left, top, right, bottom: [pixels]
- feathering: 40-80 px

### Step 2. Prompt for the extended area

Describe the full extended scene. Include:

- Subject from original (anchor)
- Environment on the new edges
- Lighting continuity from original

### Step 3. Mask the new padded areas

The padded area becomes the inpaint mask automatically via the padding node.

### Step 4. Sampling

- Denoise: 1.0 on new area, 0.0 on original (full repaint of new area, zero on kept area)
- CFG: 5-7
- Steps: 30-40
- MLSD straight lines: use Tile or Depth extracted from the padded image to guide continuity

### Step 5. Second pass for seams

Sometimes the seam between the original and outpainted area is visible. Run a second inpaint pass over a thin strip (30-50 px) along the seam with denoise 0.3-0.4 and Tile ControlNet to harmonize.

### Step 6. Final upscale

Use Topaz Gigapixel AI to upscale the completed outpaint.

## Checkpoint-Specific Notes

- **SDXL Inpainting model:** Purpose-built. Use when available.
- **Flux.1 [dev] + FluxFill:** Flux's native inpainting; excellent quality.
- **Juggernaut XL v10 / RealVisXL:** Works with VAEEncodeForInpaint; use the DifferentialDiffusion node.
- **SD 3.5 Large:** Emerging inpainting support.
- **Pony / Anime checkpoints:** Inpainting faces often drifts to the default Pony face. Use FaceDetailer after.

## Pitfalls

- **Hard mask edges cause seams.** Always feather.
- **Low denoise + big change** = ghost of the original bleeds through. Raise denoise.
- **High denoise + small change** = overshoot; use lower denoise + ControlNet.
- **Outpaint without continuity prompt** = discontinuous extension. Always describe the full scene, not just the new parts.
- **Upscaling a seamy outpaint** = upscales the seam. Fix the seam BEFORE upscaling.

## Output

Return:

### 1. WORKFLOW TYPE

State: inpainting OR outpainting, based on "outpaint a portrait into a full environment".

### 2. NODE GRAPH

Complete ComfyUI graph for the chosen workflow, with values filled in.

### 3. PROMPTS

Positive + negative, tuned for SD3.5 Large.

### 4. SETTINGS

All sampler, ControlNet, mask, and upscale values.

### 5. QA CHECKLIST

5 specific things the user should verify in the output.

### 6. ONE-LINE SUMMARY

"This workflow accomplishes 'outpaint a portrait into a full environment' by [approach]."

Generate the full deliverable.
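The 5-10 px mask feathering recommended in Workflow A, Step 1 is just a Gaussian blur of the binary mask. A minimal NumPy sketch for checking feathered masks outside ComfyUI; the `feather_mask` helper and its defaults are illustrative, not part of any ComfyUI API:

```python
import numpy as np

def feather_mask(mask: np.ndarray, radius_px: int = 8) -> np.ndarray:
    """Soften a binary inpaint mask with a separable Gaussian blur.

    mask: 2-D float array, 1.0 inside the region to repaint, 0.0 outside.
    radius_px: blur radius; the guide above suggests 5-10 px.
    """
    sigma = radius_px / 2.0
    half = max(1, int(3 * sigma))  # truncate the kernel at 3 sigma
    x = np.arange(-half, half + 1, dtype=np.float64)
    kernel = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()  # normalize so values stay in [0, 1]
    # Pad with edge values, blur rows then columns, then crop back.
    padded = np.pad(mask.astype(np.float64), half, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, padded)
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, rows)
    return out[half:-half, half:-half]
```

The result keeps the mask interior near 1.0, ramps the boundary smoothly to ~0.5, and avoids the hard edges that cause visible seams.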
Replace the bracketed placeholders with your own context before running the prompt:
- `[pixels]` — fill in your specific padding values, in pixels, for each side.
- `[approach]` — fill in a one-line description of your approach.

(`[dev]` in "Flux.1 [dev]" is part of the model name, not a placeholder.)
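The `[pixels]` values for Workflow B, Step 1 can be computed up front from the source image size. A minimal sketch, assuming a 16:9 target aspect ratio; the `outpaint_padding` helper, its name, and the 60 px feathering default (inside the guide's 40-80 px range) are illustrative choices, not a ComfyUI API:

```python
def outpaint_padding(width: int, height: int,
                     target_aspect: float = 16 / 9,
                     multiple: int = 64) -> dict:
    """Compute left/right/top/bottom padding for an outpaint canvas.

    Extends whichever dimension is needed to reach target_aspect,
    then rounds the extended dimension up to a multiple of `multiple`
    (latent diffusion models want sizes divisible by 64). Padding is
    split evenly between the two sides; only the extended dimension
    is rounded, the other is assumed already valid.
    """
    if width / height < target_aspect:
        # Too narrow: pad left and right.
        new_w = max(width, int(round(height * target_aspect)))
        new_w = -(-new_w // multiple) * multiple  # ceil to multiple
        extra = new_w - width
        return {"left": extra // 2, "right": extra - extra // 2,
                "top": 0, "bottom": 0, "feathering": 60}
    # Too wide (or already matching): pad top and bottom.
    new_h = max(height, int(round(width / target_aspect)))
    new_h = -(-new_h // multiple) * multiple
    extra = new_h - height
    return {"left": 0, "right": 0,
            "top": extra // 2, "bottom": extra - extra // 2,
            "feathering": 60}
```

For example, an 832x1216 portrait bound for a 16:9 scene gets 672 px of padding on each side, giving a 2176 px wide canvas that is divisible by 64.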