I've built up a library of 1,300+ workflows and couldn't find anything in it. So I built this.
**What it does:**
* Search your local workflows by describing what you want (*"generate video from an image"*, *"face swap with LoRA"*) — not just by filename
* Preview the node graph of any workflow without opening ComfyUI
* Search YouTube, CivitAI, GitHub and Reddit in real time to find new workflows — with download links where it can find them
* Filter search results by the custom node packages you already have installed
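The "describe what you want" search above could be approximated in a few lines. This is a minimal sketch, assuming simple keyword-overlap scoring against node types and titles in local workflow JSON files; the actual tool may use embeddings or something entirely different.

```python
import json
from pathlib import Path


def score(query: str, workflow: dict) -> int:
    """Count how many query words appear in the workflow's node types/titles."""
    text = " ".join(
        str(node.get("type", "")) + " " + str(node.get("title", ""))
        for node in workflow.get("nodes", [])
    ).lower()
    return sum(1 for word in query.lower().split() if word in text)


def search(query: str, folder: str, top_k: int = 5) -> list:
    """Rank local workflow JSON files by keyword overlap with the query."""
    results = []
    for path in Path(folder).glob("*.json"):
        try:
            wf = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue  # skip unreadable or non-workflow files
        results.append((score(query, wf), path.name))
    results.sort(reverse=True)
    return [name for s, name in results[:top_k] if s > 0]
```

Even this naive version beats filename search, since ComfyUI workflow JSON carries the node graph (and thus the workflow's actual capabilities) inside it.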
r/comfyui
After using a lot of AI image prompt libraries, I realized the problem wasn't a lack of prompts, it was a lack of structure. Everything was mixed together: subject, lighting, camera, style… all in one blob. Hard to read, harder to modify.
So I started breaking prompts into modular parts for personal use and eventually decided to make my own prompt library.
Check it out 👉 [https://promptdexter.com/](https://promptdexter.com/)
It's FREE + No Login Required
**Key features:**
1. ✨ **Modular Structure**: each prompt is split into labeled blocks (subject, lighting, camera, style) instead of one blob
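The modular idea described above is easy to picture in code. A minimal sketch, assuming the block names from the post (subject, lighting, camera, style); the site's actual schema is not specified:

```python
from dataclasses import dataclass


@dataclass
class PromptBlocks:
    """A prompt split into labeled, independently editable blocks."""
    subject: str = ""
    lighting: str = ""
    camera: str = ""
    style: str = ""

    def assemble(self) -> str:
        # Join only the non-empty blocks, in a fixed order.
        parts = [self.subject, self.lighting, self.camera, self.style]
        return ", ".join(p for p in parts if p)
```

Swapping the lighting or style then means editing one field, not hunting through a comma soup.

```python
p = PromptBlocks(
    subject="portrait of an astronaut",
    lighting="soft rim light",
    style="oil painting",
)
p.assemble()  # "portrait of an astronaut, soft rim light, oil painting"
```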
r/comfyui
I spent 3 hours debugging a workflow that wasn't broken.
Qwen models have an internal reasoning mode. Before they answer, they sometimes stop and think — silently. Zero output. Zero progress bar. You're just staring at a frozen node wondering if it crashed.
It didn't crash. It's reasoning. And there was absolutely no way to see it.
So I forked the Qwen plugin and built ThinkingLLM.
What it does:
Live token streaming — every word appears in the terminal as the model generates it. You can watch the model think in real time.
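The core of live streaming is small: print each token the moment it arrives and flush, so nothing sits in a stdout buffer looking frozen. A minimal sketch, assuming a token iterator from whatever backend you use; `stream_tokens` is a hypothetical name, not ThinkingLLM's actual API:

```python
import sys


def stream_tokens(token_iter, out=sys.stdout):
    """Write each token as it arrives, flushing so output appears immediately."""
    collected = []
    for token in token_iter:
        out.write(token)
        out.flush()  # without flush, buffered output makes the node look hung
        collected.append(token)
    return "".join(collected)
```

The flush is the whole trick: a reasoning model emitting tokens into a buffered stream produces exactly the "zero output, zero progress bar" symptom described above.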
r/comfyui