
Text to Animation with HY-Motion — Full ComfyUI Setup Guide

Complete tutorial for setting up HY-Motion in ComfyUI with Python 3.11 for FBX export. Includes VRAM requirements, model downloads, and step-by-step installation.

There's an amazing release from Tencent — HY-Motion (Hunyuan Motion). This model was trained on a massive dataset and does an incredible job at generating animations from text. You just type a description, and you get a full animation. It's genuinely impressive.

In this guide, I'll show you how to set up HY-Motion in ComfyUI with FBX export support — which requires a specific Python version. Let's get into it.

Plugin: https://github.com/jtydhr88/ComfyUI-HY-Motion1

Examples of motions generated with HY-Motion — from the official paper

Prerequisites: Install Miniconda

Before we start, you need Miniconda installed on your system. This guide isn't about installing Miniconda itself, but it's essential for what we're doing. If you don't have it yet, follow the official Miniconda installation guide.

Why Conda? HY-Motion's FBX export feature only works with Python 3.11. The latest portable ComfyUI comes with Python 3.13, which breaks FBX export. Conda lets us create a separate environment specifically for Python 3.11 without messing with your main Python installation.

If your system Python is already 3.11, you might be able to skip Conda. But if you've updated to 3.13 (like I have), Conda is the cleanest solution. When you don't need this environment anymore, you can simply delete it.
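
Not sure which version your system has? A quick check:

Check your Python version
# Print the version of the Python on your PATH
python --version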

Step 1: Create Python 3.11 Environment

Open PowerShell (required for Conda to work properly). I also recommend using Cursor or VS Code terminal — if you run into any issues, you can ask the AI agent to help debug.

Create and activate Conda environment
# Create a new environment with Python 3.11
conda create -n comfy311 python=3.11 -y

# Activate the environment
conda activate comfy311

Now you're in a fresh Python 3.11 environment. Everything we install from here will be isolated to this environment.
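
To confirm the environment is active, check that Python now reports 3.11:

Verify the environment
# Should print Python 3.11.x
python --version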


Step 2: Clone and Install ComfyUI

Navigate to a directory where you want to install this new ComfyUI instance. It can be anywhere — I chose my E: drive, but pick whatever works for you.

Clone ComfyUI
# Go to your preferred directory (change this to wherever you want)
cd E:\

# Clone ComfyUI into a new folder
git clone https://github.com/comfyanonymous/ComfyUI.git ComfyUI_py311

# Enter the ComfyUI folder
cd ComfyUI_py311

Important: Before installing ComfyUI's requirements, we need to install PyTorch with CUDA support. If you skip this step and just run pip install torch, you'll get the CPU version — which is significantly slower.

Don't skip the CUDA flag! Without the --index-url parameter, pip installs a CPU-only PyTorch. You want GPU acceleration.
Install PyTorch with CUDA, then ComfyUI requirements
# Upgrade pip first
python -m pip install --upgrade pip

# Install PyTorch with CUDA support (this is critical!)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# Now install ComfyUI requirements
pip install -r requirements.txt
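
Before moving on, it's worth confirming you got the CUDA build and that PyTorch can actually see your GPU:

Verify PyTorch sees your GPU
# Should print a version ending in +cu124 and "True"
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"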

At this point, you have a working ComfyUI installation. But we still need to add HY-Motion.


Step 3: Install HY-Motion Plugin

Now let's install the ComfyUI-HY-Motion1 plugin. We need to clone it into the custom_nodes folder.

Clone and install HY-Motion
# Navigate to custom_nodes folder
cd custom_nodes

# Clone the HY-Motion plugin
git clone https://github.com/jtydhr88/ComfyUI-HY-Motion1

# Enter the plugin folder
cd ComfyUI-HY-Motion1

# Install plugin dependencies (includes FBX SDK)
pip install -r requirements.txt

The requirements.txt already includes the FBX SDK (fbxsdkpy). If you want to double-check it's installed, you can run this command — if everything's fine, it will just say "requirement already satisfied":

Optional: verify FBX SDK installation
pip install fbxsdkpy --extra-index-url https://gitlab.inria.fr/api/v4/projects/18692/packages/pypi/simple
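
You can also sanity-check the import itself. This assumes fbxsdkpy exposes the standard fbx module, which is how Autodesk's FBX Python SDK is normally packaged:

Optional: test the fbx import
# An ImportError here means the FBX SDK isn't installed correctly
python -c "import fbx; print('FBX SDK OK')"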

Step 4: Download Model Weights

HY-Motion requires model weights that you need to download manually from Hugging Face. You'll need to create the correct folder structure inside your ComfyUI installation.

Here's how to set up the folder structure (a terminal sketch follows the diagram below):

  1. Go to your ComfyUI models folder
  2. Create a new folder called HY-Motion
  3. Inside that, create ckpts
  4. Inside ckpts, create tencent
  5. Download and place the HY-Motion-1.0 and/or HY-Motion-1.0-Lite folders there
Folder Structure
ComfyUI/
└── models/
    └── HY-Motion/           # Create this folder
        └── ckpts/           # Create this folder
            ├── tencent/     # Create this folder
            │   ├── HY-Motion-1.0/        # Download from HuggingFace
            │   │   ├── config.yml
            │   │   └── latest.ckpt
            │   └── HY-Motion-1.0-Lite/   # Download from HuggingFace
            │       ├── config.yml
            │       └── latest.ckpt
            └── GGUF/                     # Optional: only if using GGUF
                └── Qwen3-8B-Q4_K_M.gguf
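
If you prefer the terminal, here's a minimal sketch that creates the same structure and pulls the weights with huggingface-cli. The repo IDs tencent/HY-Motion-1.0 and tencent/HY-Motion-1.0-Lite are assumptions; check the plugin's README for the exact Hugging Face links:

Create the folders and download the weights
# From the ComfyUI root folder
cd E:\ComfyUI_py311

# Create the expected folder structure
mkdir models\HY-Motion\ckpts\tencent

# Install the Hugging Face CLI if you don't have it
pip install -U huggingface_hub

# Download the weights (repo IDs assumed; verify them on Hugging Face first)
huggingface-cli download tencent/HY-Motion-1.0 --local-dir models\HY-Motion\ckpts\tencent\HY-Motion-1.0
huggingface-cli download tencent/HY-Motion-1.0-Lite --local-dir models\HY-Motion\ckpts\tencent\HY-Motion-1.0-Lite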

If you want to use GGUF quantized models for the text encoder, you'll also need to download those separately.
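
For example, with huggingface-cli (Qwen/Qwen3-8B-GGUF as the repo ID is an assumption; confirm the exact source in the plugin's README):

Optional: download a GGUF text encoder
# Fetch just the Q4_K_M file into the GGUF folder (repo ID assumed)
huggingface-cli download Qwen/Qwen3-8B-GGUF Qwen3-8B-Q4_K_M.gguf --local-dir models\HY-Motion\ckpts\GGUF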


VRAM Requirements

Choose models based on your available VRAM. The motion model and text encoder run together, so add up the requirements:

Motion Models

  • HY-Motion-1.0: ~8GB+ VRAM
  • HY-Motion-1.0-Lite: ~4GB+ VRAM

Qwen3-8B Text Encoder

  • HuggingFace (no quant): ~16GB VRAM
  • HuggingFace int8: ~8GB VRAM
  • HuggingFace int4: ~4GB VRAM
  • GGUF Q4_K_M: ~5GB VRAM

My setup (12GB GPU): I use HY-Motion-1.0-Lite (~4GB) with int4 quantization for Qwen3 (~4GB). Total: ~8GB VRAM, leaving headroom for the generation.

Step 5: Run ComfyUI

Open a terminal in your ComfyUI root folder (make sure your Conda environment is still active) and start ComfyUI:

Start ComfyUI
# Make sure you're in the ComfyUI root folder
cd E:\ComfyUI_py311

# Activate the Python 3.11 environment if it isn't already
conda activate comfy311

# Run ComfyUI
python main.py
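
Once it starts, ComfyUI prints a local URL (http://127.0.0.1:8188 by default); open it in your browser.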

The test workflows are in the workflow folder inside the HY-Motion plugin directory. Load one of those to get started.

HY-Motion workflow in ComfyUI with animation preview

FBX Export & Performance

The FBX export works really well. On my 12GB GPU with the Lite model, I get about 40 seconds for a 12-second animation — that's very fast.

The output format is SMPLX animation, which you can import directly into Blender. If you want to use the animation with a different skeleton (like Mixamo for Unreal Engine), you'll need to retarget it.

Retargeting SMPLX animation to a Mixamo rig in Blender using Auto-Rig Pro

I tested retargeting with Auto-Rig Pro in Blender — mapped the SMPLX skeleton to a Mixamo skeleton, and it worked great. The whole process took about 3 minutes. Once you save the mapping, future retargets are almost instant. That's a topic for another tutorial though!

Conclusion

HY-Motion is an extremely powerful tool. The quality of animations from just text prompts is impressive, and the FBX export makes it actually usable in real production workflows.

Huge credit to the Tencent Hunyuan team for creating HY-Motion, and to jtydhr88 for building the ComfyUI integration that makes this accessible.

TL;DR: Install Miniconda → Create Python 3.11 environment → Clone ComfyUI → Install PyTorch with CUDA → Clone HY-Motion plugin → Download weights → Run and enjoy text-to-animation with FBX export!

Want to compare these tools yourself? Check out our 3D AI Arena.
