ComfyUI User Guide

A practical guide to generating images with ComfyUI on the Haiven server.

Getting Started

Access the UI

Open https://comfyui.haiven.local in your browser.

Default Workflow

ComfyUI loads with a basic text-to-image workflow. The default nodes are:

  1. Load Checkpoint - Select your model
  2. CLIP Text Encode (Prompt) - Positive prompt (what you want)
  3. CLIP Text Encode (Prompt) - Negative prompt (what to avoid)
  4. KSampler - Generation settings (steps, CFG, sampler)
  5. VAE Decode - Convert latent to image
  6. Save Image - Output to disk

Quick Start: Generate Your First Image

  1. In Load Checkpoint, select a model (e.g., sd_xl_base_1.0.safetensors)
  2. Enter your prompt in the positive CLIP Text Encode node
  3. Enter negatives (e.g., blurry, low quality, watermark) in the negative node
  4. Click Queue Prompt (or press Ctrl+Enter)
  5. Watch progress in the UI - image appears when done

Installing Models

Method 1: ComfyUI Manager

  1. Click the Manager button in the sidebar
  2. Go to Install Models
  3. Search for the model name
  4. Click Install → the model downloads directly to the server

Method 2: Manual Upload (SCP)

From your local machine:

# Checkpoints
scp -P 25636 ~/Downloads/model.safetensors \
    elijahryoung@10.0.0.42:/mnt/models/image/checkpoints/

# LoRAs
scp -P 25636 ~/Downloads/lora.safetensors \
    elijahryoung@10.0.0.42:/mnt/models/image/loras/

After upload, click the refresh button (🔄) next to model dropdowns.
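To confirm an upload landed where ComfyUI expects it, you can list the target directory over SSH (same host, port, and user as the scp examples above):

```shell
# List installed checkpoints with sizes; swap the path for loras, vae, etc.
ssh -p 25636 elijahryoung@10.0.0.42 ls -lh /mnt/models/image/checkpoints/
```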


Model Directories

ComfyUI uses symlinks to access models stored in /mnt/models/image/. The following model directories are available:

ComfyUI Path               Symlink Target                        Purpose
models/checkpoints         /mnt/models/image/checkpoints         Base models (SD 1.5, SDXL, etc.)
models/loras               /mnt/models/image/loras               LoRA adapters
models/vae                 /mnt/models/image/vae                 VAE models
models/controlnet          /mnt/models/image/controlnet          ControlNet models
models/clip                /mnt/models/image/clip                CLIP text encoders
models/clip_vision         /mnt/models/image/clip_vision         CLIP vision encoders (for image prompts)
models/embeddings          /mnt/models/image/embeddings          Textual inversions
models/unet                /mnt/models/image/unet                UNet models
models/diffusion_models    /mnt/models/image/diffusion_models    Diffusion models
models/text_encoders       /mnt/models/image/text_encoders       Text encoder models

If a new model type folder is needed (e.g., upscale_models):

# Create the symlink
sudo ln -sfn /mnt/models/image/<folder_name> /opt/comfyui/ComfyUI/models/<folder_name>

# Verify the symlink
ls -la /opt/comfyui/ComfyUI/models/<folder_name>
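The full set of symlinks from the table above can be (re)created in one pass. A sketch, assuming the standard install path /opt/comfyui/ComfyUI:

```shell
# Recreate every model symlink listed in the table above.
SRC=/mnt/models/image
DEST=/opt/comfyui/ComfyUI/models
for d in checkpoints loras vae controlnet clip clip_vision \
         embeddings unet diffusion_models text_encoders; do
  sudo ln -sfn "$SRC/$d" "$DEST/$d"   # -sfn: symbolic, overwrite, don't follow existing link
done
```

ln -sfn is idempotent here, so the loop is safe to re-run after adding new folders on /mnt/models/image/.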

Available Models

Checkpoints (as of 2025-11-29)

Model                               Type          Best For
sd_xl_base_1.0                      SDXL          General purpose, high quality
sd_xl_turbo_1.0_fp16                SDXL Turbo    Fast generation (4-8 steps)
juggernautXL_v8Rundiffusion         SDXL          Photorealistic
playground-v2.5-1024px-aesthetic    SDXL          Artistic, aesthetic
animaPencilXL_v500                  SDXL          Anime/illustration
realisticStockPhoto_v20             SDXL          Stock photo style
photon_v1                           SD 1.5        Fast, lightweight


Using LoRAs

  1. Add a Load LoRA node (right-click → Add Node → loaders)
  2. Connect it between Load Checkpoint and CLIP Text Encode
  3. Select your LoRA file
  4. Set strength_model and strength_clip (start with 0.7-1.0)
  5. Include trigger words in your prompt if the LoRA requires them

Keyboard Shortcuts

Key                  Action
Ctrl+Enter           Queue prompt
Ctrl+Shift+Enter     Queue prompt (front of queue)
Ctrl+Z               Undo
Ctrl+Y               Redo
Ctrl+S               Save workflow
Ctrl+O               Load workflow
Space + drag         Pan canvas
Ctrl+Scroll          Zoom
Backspace/Delete     Delete selected node
Ctrl+M               Mute/unmute node
Ctrl+B               Bypass node

Saving & Loading Workflows

Save Workflow

Press Ctrl+S (or the Save button) to save the current graph as a JSON file.

Load Workflow

Press Ctrl+O, or drag a saved workflow JSON (or a generated PNG) onto the canvas.

Share Workflows

Workflow data is embedded in generated PNG images. Share the image, and others can drag it into ComfyUI to get your exact setup.


Output Images

Generated images are saved to:
- Server path: /mnt/storage/generated-images/
- Organized by date: 2025-11-29/ComfyUI_00001_.png

To download images:
1. Right-click image in UI → Save Image
2. Or browse via SCP/SFTP to /mnt/storage/generated-images/
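Bulk downloads are easiest with scp, using the same host and port as the model-upload examples above:

```shell
# Pull one day's output folder to the current directory (-r = recursive)
scp -P 25636 -r elijahryoung@10.0.0.42:/mnt/storage/generated-images/2025-11-29/ .
```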


Installing Custom Nodes

Via ComfyUI Manager

  1. Click Manager → Install Custom Nodes
  2. Search for the node pack
  3. Click Install
  4. Restart ComfyUI: sudo systemctl restart comfyui

Manual Installation

cd /opt/comfyui/ComfyUI/custom_nodes
sudo -u comfyui git clone https://github.com/user/custom-node-repo.git
sudo systemctl restart comfyui
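Many node packs ship a requirements.txt; if ComfyUI logs import errors after the restart, install the pack's Python dependencies into ComfyUI's environment. A sketch - the venv path /opt/comfyui/venv is an assumption, so verify it on the server first:

```shell
cd /opt/comfyui/ComfyUI/custom_nodes/custom-node-repo
# NOTE: /opt/comfyui/venv is an assumed venv location - confirm before running
sudo -u comfyui /opt/comfyui/venv/bin/pip install -r requirements.txt
sudo systemctl restart comfyui
```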

Troubleshooting

Model not appearing

  1. Verify file location: ls /mnt/models/image/checkpoints/
  2. Click refresh (🔄) in the model dropdown
  3. Restart: sudo systemctl restart comfyui

Generation stuck / no output

  1. Check logs: journalctl -u comfyui -f
  2. Look for CUDA errors or OOM (out of memory)
  3. Try a smaller resolution or fewer steps
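The two checks above can be combined into one command that filters recent logs for the usual failure signatures:

```shell
# Show CUDA / out-of-memory errors from the last 10 minutes of the comfyui service
journalctl -u comfyui --since "10 min ago" --no-pager | grep -iE "cuda|out of memory"
```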

WebSocket disconnected

  1. Refresh the browser page
  2. If persistent, check: systemctl status comfyui

Out of VRAM

The RTX 4090 has 24GB VRAM. If you hit limits:
- Reduce resolution
- Use fewer steps
- Disable preview (saves ~1GB)
- Close other GPU applications

Check GPU usage

watch -n 1 nvidia-smi

Tips & Best Practices

  1. Start simple - Get a basic workflow working before adding complexity
  2. Save frequently - Use Ctrl+S to save workflow iterations
  3. Use seed - Set a specific seed to reproduce results
  4. Batch size 1 - Start with batch size 1, increase for variations
  5. Preview method - "auto" is the default; disabling previews speeds up generation
  6. Negative prompts matter - Always include quality negatives

Effective Negative Prompts

blurry, low quality, watermark, signature, text, logo,
bad anatomy, bad hands, missing fingers, extra fingers,
cropped, worst quality, jpeg artifacts

Example Workflows

Basic Text-to-Image (SDXL)

Load Checkpoint (sd_xl_base_1.0)
    ↓
CLIP Text Encode (positive) → "a serene mountain lake at sunset, photorealistic"
CLIP Text Encode (negative) → "blurry, low quality, watermark"
    ↓
KSampler (steps=25, cfg=7, sampler=euler_ancestral, scheduler=karras)
    ↓
VAE Decode
    ↓
Save Image

SDXL Turbo (Fast)

Load Checkpoint (sd_xl_turbo_1.0_fp16)
    ↓
CLIP Text Encode (positive) → "your prompt"
CLIP Text Encode (negative) → ""  # can be empty for turbo
    ↓
KSampler (steps=4, cfg=1, sampler=euler_ancestral)
    ↓
VAE Decode
    ↓
Preview Image
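Either workflow can also be queued without the browser through ComfyUI's HTTP API. A sketch, assuming you have exported the graph in API format (enable Dev mode options in Settings, then use Save (API Format)) to a local workflow_api.json:

```shell
# Queue a workflow headlessly; the /prompt endpoint expects {"prompt": <api-format graph>}
curl -s -X POST https://comfyui.haiven.local/prompt \
  -H "Content-Type: application/json" \
  -d "{\"prompt\": $(cat workflow_api.json)}"
```

The response includes a prompt_id; the finished image still lands in /mnt/storage/generated-images/ as usual.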

Last updated: 2025-12-13