HOW TO DOWNLOAD AND USE ON COMFYUI HELP PLEASE https://huggingface.co/SulphurAI/Sulphur-2-base
May 11, 5:44pm

To download and use the Sulphur-2-base model in ComfyUI, follow these steps:


1. Download the Model

  • Sulphur-2-base is available on Hugging Face: SulphurAI/Sulphur-2-base.
  • You can also find GGUF versions (quantized for lower VRAM usage) here: vantagewithai/Sulphur-2-Base-GGUF.
  • Download the model files (e.g., .safetensors or .gguf) to your local machine.
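The download can also be scripted. Below is a sketch using the `huggingface-cli` tool from the `huggingface_hub` pip package; the quant filename is an assumption, so check the repo's "Files and versions" tab for the real names first.

```shell
# Sketch assuming the huggingface_hub CLI is installed:
#   pip install "huggingface_hub[cli]"
# The filename below is an assumption; verify it on the repo's
# "Files and versions" tab before downloading.
REPO="vantagewithai/Sulphur-2-Base-GGUF"
FILE="Sulphur-2-Base-Q6_K.gguf"

huggingface-cli download "$REPO" "$FILE" --local-dir . \
  || echo "Download failed: grab the file manually from the Hugging Face page."
```

If you prefer, you can simply click the download icon next to the file in the web UI instead.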

2. Install ComfyUI

  • If you haven’t already, install ComfyUI using the easiest method for your system:
    • Windows/Nvidia GPU: Use ComfyUI-Easy-Install for a one-click setup.
    • Other setups: Follow the official ComfyUI GitHub instructions.

3. Place the Model Files

  • For GGUF models:

    • Place the downloaded .gguf file in:
      ComfyUI/models/gguf/
      Do not put it in unet/ or diffusion_models/.
  • For other model types:

    • Check the model page for specific folder instructions (e.g., checkpoints/, vae/).
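The file placement above can be sketched as a couple of shell commands. Both the install location (`COMFYUI_DIR`) and the downloaded filename are assumptions here; adjust them to your own setup.

```shell
# Assumed install location and filename; adjust both to your setup.
COMFYUI_DIR="${COMFYUI_DIR:-$HOME/ComfyUI}"
SRC="$HOME/Downloads/Sulphur-2-Base-Q6_K.gguf"

# GGUF files go in models/gguf/, not unet/ or diffusion_models/.
mkdir -p "$COMFYUI_DIR/models/gguf"
if [ -f "$SRC" ]; then
  mv "$SRC" "$COMFYUI_DIR/models/gguf/"
fi
```

On Windows, the equivalent is dragging the file into `ComfyUI\models\gguf\` in Explorer.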

4. Install Required Custom Nodes

  • Sulphur-2-base requires the ComfyUI_LTX2_SM custom node pack for video workflows.
    • Install it via ComfyUI’s Manager (Menu > Manager > Custom Nodes > Search for "LTX2_SM").

5. Load the Model in ComfyUI

  • Open ComfyUI and load a workflow compatible with Sulphur-2-base.
  • If you’re new to ComfyUI, watch a tutorial on how to load models and workflows.
  • For GGUF models, ensure you select the model from the gguf/ folder in the model dropdown.

6. Hardware & Optimization

  • VRAM Requirements: Sulphur-2-base is resource-intensive. For lower VRAM, use FP8 or GGUF versions.
  • Flags for ComfyUI:
    • Use --disable-pinned-memory --reserve-vram 2 in your ComfyUI launch command to reduce memory usage.
    • For very low RAM, add --cache-none (slower but uses less RAM).
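Putting the flags together, a launch command might look like the sketch below (on Windows portable builds you would add the flags to the `run_nvidia_gpu.bat` launcher instead).

```shell
# Memory-saving flags for the ComfyUI launch command.
ARGS="--disable-pinned-memory --reserve-vram 2"

# Uncomment for very low system RAM (slower, but uses less RAM):
# ARGS="$ARGS --cache-none"

# Run from your ComfyUI directory; the command is echoed here,
# drop the echo to actually launch.
echo "python main.py $ARGS"
```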

7. Run a Workflow

  • Use a pre-made workflow (e.g., from LTX-2.3 Workflows) or create your own.
  • For GGUF, ensure your workflow uses the correct custom nodes for LTX-2.3.

Troubleshooting

  • If the model doesn’t appear in the dropdown, double-check the file path.
  • If you get out-of-memory errors, increase your pagefile size or use a lower-precision model (e.g., Q6_K GGUF).

Need a specific workflow or more details? Let me know what part you’re stuck on!
