ChatGPT Prompt for Image Workflows (ComfyUI, ControlNet)
A step-by-step ControlNet recipe using IP2P instruct on Flux.1 [schnell] to remove backgrounds cleanly while preserving hair detail, with SUPIR finishing.
You are writing a ControlNet recipe — a focused, single-purpose workflow a user can run in ComfyUI or Automatic1111 / Forge WebUI.
## Recipe Brief
- **Goal:** remove the background cleanly while preserving hair detail
- **ControlNet type:** IP2P instruct
- **Base model:** Flux.1 [schnell]
- **Finishing upscaler:** SUPIR
## Why this ControlNet for this goal
Explain in 2-3 sentences WHY IP2P instruct is the right choice for removing the background cleanly while preserving hair detail:
- What information IP2P instruct preserves
- What it discards
- What it enables the model to do freely
## Step-by-Step Recipe
### Step 1 — Source prep
- Required input: [what the user needs to provide]
- Resolution: match Flux.1 [schnell]'s native resolution (1024x1024 for SDXL, 512x512 for SD1.5, ~1024 with variable aspect for Flux)
- Preprocessing: [convert to Canny / Depth / Pose map using the preprocessor node or ControlNet Aux]
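The resolution step above can be sketched as a small helper that scales an input so its long side matches the model's native size, then snaps both dimensions to the latent grid. The multiple of 64 and the 1024 target are assumptions based on the table above, not values prescribed by any specific node:

```python
def snap_resolution(width, height, target_long_side=1024, multiple=64):
    """Scale an image so its long side matches the model's native size,
    then round both dimensions to the nearest latent-grid multiple."""
    scale = target_long_side / max(width, height)
    w = max(multiple, round(width * scale / multiple) * multiple)
    h = max(multiple, round(height * scale / multiple) * multiple)
    return w, h

# e.g. a 3000x2000 photo prepped for an SDXL/Flux pipeline
print(snap_resolution(3000, 2000))  # (1024, 704)
```

Feed the snapped dimensions to your image-resize node before the preprocessor so the control map and the latent share the same grid.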
### Step 2 — Positive prompt construction
Write the prompt as if IP2P instruct did not exist, because the ControlNet handles structure. The prompt should describe:
- Subject identity and details
- Style (Flux.1 [schnell]-appropriate)
- Lighting and mood
- Technical quality words
Do NOT describe pose, layout, or perspective in the prompt — IP2P instruct is doing that.
### Step 3 — Negative prompt
Standard for Flux.1 [schnell]:
```
[checkpoint-family-specific negatives]
```
For SDXL derivatives (RealVisXL, Juggernaut, DreamShaper):
`deformed, low quality, blurry, watermark, text, jpeg artifacts, extra limbs, bad anatomy, long neck, low contrast`
For Flux: **do not use a negative prompt**. Flux does not support it meaningfully.
### Step 4 — ControlNet settings
- **Control strength:** [suggest 0.6 for loose guidance, 0.85 for strict, 1.0 for rigid]
- **Start step:** 0.0 (active from the beginning)
- **End step:** [0.75-1.0 depending on how much freedom the model needs in final steps]
- **Control mode:** "Balanced" for most; "ControlNet is more important" when structure must hold
### Step 5 — Sampling
- **Sampler:** DPM++ 2M Karras (or DPM++ 3M SDE for SDXL)
- **Steps:** 28-35
- **CFG:** 5-7 for SDXL; 3.5-4 for Flux; 6-8 for SD1.5
- **Seed:** fix once you find a good one — critical for consistency across batches
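As a convenience, the per-family sampling values above can be captured in one lookup with a fixed seed baked in. The family keys and the midpoint picks from each range are illustrative choices, not canonical names:

```python
# Midpoint sampling defaults per checkpoint family, taken from the
# ranges in Step 5. Key names and exact picks are illustrative.
SAMPLING_DEFAULTS = {
    "sdxl": {"sampler": "dpmpp_2m", "scheduler": "karras", "steps": 30, "cfg": 6.0},
    "flux": {"sampler": "dpmpp_2m", "scheduler": "karras", "steps": 30, "cfg": 3.5},
    "sd15": {"sampler": "dpmpp_2m", "scheduler": "karras", "steps": 30, "cfg": 7.0},
}

def sampling_for(family, seed):
    """Return a fixed-seed sampling config for one checkpoint family."""
    config = dict(SAMPLING_DEFAULTS[family])
    config["seed"] = seed  # fix the seed for batch-to-batch consistency
    return config
```

Reusing the same `seed` across a batch is what makes pose variations comparable, per the note above.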
### Step 6 — Upscale with SUPIR
- Tile size: 512 or 768
- Tile padding: 32
- Denoise on upscale: 0.2-0.35 (lower = more faithful, higher = more detail invention)
- Second ControlNet in upscale pass: Tile ControlNet at strength 0.5 to prevent detail drift
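To see how tile size and padding interact in the upscale pass, here is a rough tile-count calculation. The overlap arithmetic mirrors common tiled-upscale nodes, but it is an assumption about this pattern in general, not SUPIR's exact internals:

```python
import math

def tile_grid(width, height, tile=512, padding=32):
    """Number of tiles needed to cover a canvas when adjacent tiles
    overlap by `padding` pixels on each shared edge."""
    stride = tile - padding  # effective advance per tile
    cols = max(1, math.ceil((width - padding) / stride))
    rows = max(1, math.ceil((height - padding) / stride))
    return cols, rows

# a 1024x704 base image upscaled 2x -> 2048x1408 canvas
print(tile_grid(2048, 1408, tile=512, padding=32))  # (5, 3)
```

Raising padding (as the troubleshooting matrix suggests for seams) shrinks the stride, so expect a few more tiles and a slower pass.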
### Step 7 — Optional FaceDetailer pass
If the background-removal target includes a human face:
- FaceDetailer node with denoise 0.4-0.5
- BBOX detector: bbox/face_yolov8m.pt
- SAM detector: sam_vit_b for precise masking
## Example Settings JSON
```json
{
"checkpoint": "Flux.1 [schnell]",
"controlnet": {
"type": "IP2P instruct",
"strength": 0.8,
"start_percent": 0.0,
"end_percent": 0.85
},
"sampler": "dpmpp_2m_karras",
"steps": 30,
"cfg": 5.5,
"denoise": 1.0,
"upscaler": "SUPIR",
"upscale_factor": 2.0,
"upscale_denoise": 0.25
}
```
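A quick sanity check over the settings JSON can catch the most common misconfigurations from the troubleshooting matrix. The key names match the example above; the valid ranges are the ones suggested in Steps 4-6 (with 0.7 allowed as the `end_percent` floor, per the "stiff/unnatural" fix):

```python
import json

def validate_settings(raw):
    """Parse the settings JSON and flag values outside the recipe's
    suggested ranges. Returns a list of warnings (empty = looks fine)."""
    settings = json.loads(raw)
    warnings = []
    cn = settings.get("controlnet", {})
    if not 0.6 <= cn.get("strength", 0) <= 1.0:
        warnings.append("controlnet.strength outside 0.6-1.0")
    if not 0.7 <= cn.get("end_percent", 0) <= 1.0:
        warnings.append("controlnet.end_percent outside 0.7-1.0")
    if not 0.2 <= settings.get("upscale_denoise", 0) <= 0.35:
        warnings.append("upscale_denoise outside 0.2-0.35")
    return warnings
```

Run it on the JSON before queueing the workflow; an empty list means the values sit inside the ranges this recipe recommends.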
## Troubleshooting Matrix
| Symptom | Likely Cause | Fix |
| --- | --- | --- |
| Output ignores control | Strength too low | Raise to 0.9-1.0 |
| Output feels stiff/unnatural | Strength too high | Lower to 0.6-0.75, end_percent 0.7 |
| Composition correct but low detail | Denoise too low | Raise denoise, add HiRes fix |
| ControlNet model not loading | Wrong architecture (SD1.5 model on SDXL) | Download matching arch ControlNet |
| Face breaks after upscale | No FaceDetailer pass | Add Step 7 |
| Artifact seams in upscale | Tile padding too small | Raise padding to 48-64 |
## Output Format
Deliver all sections above in order, filled with concrete values. Close with a one-line summary of the expected output for the clean background removal with preserved hair detail.

Replace the bracketed placeholders with your own context before running the prompt:
- `[what the user needs to provide]` — the source image or asset the user must supply.
- `[checkpoint-family-specific negatives]` — the negative prompt terms for your checkpoint family.