I’ve been trying to figure out how people generate the same character in different poses while keeping the background consistent (like carousel-style outputs).
From what I gathered so far:
* They’re using **Z-Image** with **SDXL IP Adapter**
* Custom nodes like **WAS** + **CR Image Grid** are used for carousel layouts
* LoRAs seem to play a big role (and can mess with consistency depending on how they’re used)
**Questions:**
1. What’s the proper workflow for keeping the same character across…
r/comfyui
Wanted to share a free, open source script panel I made that connects After Effects to ComfyUI. The main use case: build a comp with text layers describing different shots, and the panel turns each one into an AI image stacked on the timeline.
* Reads your ComfyUI API workflow JSON
* Writes prompts, seed, steps, CFG, sampler, scheduler, and dimensions into the standard nodes (KSampler, CLIPTextEncode, RandomNoise, etc.)
* Leaves the rest of your workflow untouched, so LoRAs, image references, custom…
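The node-patching idea above can be sketched in a few lines. This is a hypothetical illustration, not the panel's actual code: in ComfyUI's API export format, a workflow is a JSON dict of node IDs mapping to `{"class_type": ..., "inputs": ...}`, so the panel only needs to find nodes by `class_type` and overwrite the inputs it manages.

```python
import json

def patch_workflow(workflow: dict, prompt: str, seed: int, steps: int,
                   cfg: float, sampler: str, scheduler: str) -> dict:
    """Return a copy of the workflow with generation settings written in.

    Only the managed inputs on standard nodes are touched; all other
    nodes and connections pass through unchanged.
    """
    patched = json.loads(json.dumps(workflow))  # deep copy via round-trip
    for node in patched.values():
        if node.get("class_type") == "KSampler":
            node["inputs"].update(seed=seed, steps=steps, cfg=cfg,
                                  sampler_name=sampler, scheduler=scheduler)
        elif node.get("class_type") == "CLIPTextEncode":
            # Simplification: a real workflow usually has separate positive
            # and negative prompt nodes that must be told apart.
            node["inputs"]["text"] = prompt
    return patched

# Minimal two-node workflow in API format:
wf = {
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 0, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal"}},
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
}
out = patch_workflow(wf, "wide shot of a city at dusk", seed=42, steps=30,
                     cfg=5.5, sampler="dpmpp_2m", scheduler="karras")
```

Working on a copy means the on-disk workflow JSON stays pristine, so each text layer can be rendered from the same template with different settings.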
r/comfyui
I spent way too many hours working on this, so I figured I'd share the result with folks.
[Ultimate Detailer Workflow](https://huggingface.co/joydriver/UltimateDetailerWorkflow)
- What does it do?
The main purpose of the workflow is to simplify and streamline the detailing of SD images with multiple characters: it lets the user perform accurate, automatic masking through a combination of natural-language (SAM3) and YOLO models. It also simplifies prompting by automatically concatenating…
r/comfyui