There's an amazing new release from Tencent: HY-Motion (Hunyuan Motion). The model was trained on a massive dataset and does an incredible job of generating animations from text. You just type a description and get a full animation. It's genuinely impressive.
In this guide, I'll show you how to set up HY-Motion in ComfyUI with FBX export support — which requires a specific Python version. Let's get into it.
Plugin: https://github.com/jtydhr88/ComfyUI-HY-Motion1

Prerequisites: Install Miniconda
Before we start, you need Miniconda installed on your system. This guide isn't about installing Miniconda itself, but it's essential for what we're doing. If you don't have it yet, follow the official Miniconda installation guide.
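That said, if you're on Windows and want the quick route, winget can usually handle it. This is just one option; the package id below is the one I'd expect, so verify it with the search first:
# Find and install Miniconda via winget (verify the package id on your system)
winget search miniconda
winget install -e --id Anaconda.Miniconda3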
Why Conda? HY-Motion's FBX export feature only works with Python 3.11. The latest portable ComfyUI comes with Python 3.13, which breaks FBX export. Conda lets us create a separate environment specifically for Python 3.11 without messing with your main Python installation.
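Not sure which Python your current portable ComfyUI ships with? You can ask its embedded interpreter directly. In current portable builds the folder is named python_embeded (note the spelling); adjust the path if yours differs:
# Check the Python version bundled with a portable ComfyUI install
cd path\to\ComfyUI_windows_portable
.\python_embeded\python.exe --version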
Step 1: Create Python 3.11 Environment
Open PowerShell (required for Conda to work properly). I also recommend using Cursor or VS Code terminal — if you run into any issues, you can ask the AI agent to help debug.
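One common snag: if conda activate throws an error in PowerShell, initialize Conda for PowerShell once and reopen the terminal:
# One-time setup so PowerShell recognizes "conda activate"
conda init powershell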
# Create a new environment with Python 3.11
conda create -n comfy311 python=3.11 -y
# Activate the environment
conda activate comfy311
Now you're in a fresh Python 3.11 environment. Everything we install from here will be isolated to this environment.
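Before going further, a quick sanity check that the right interpreter is active:
# Should print Python 3.11.x
python --version
# Should list the comfy311 environment's python first
where.exe python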
Step 2: Clone and Install ComfyUI
Navigate to a directory where you want to install this new ComfyUI instance. It can be anywhere — I chose my E: drive, but pick whatever works for you.
# Go to your preferred directory (change this to wherever you want)
cd E:\
# Clone ComfyUI into a new folder
git clone https://github.com/comfyanonymous/ComfyUI.git ComfyUI_py311
# Enter the ComfyUI folder
cd ComfyUI_py311
Important: Before installing ComfyUI's requirements, we need to install PyTorch with CUDA support. If you skip this step and just run pip install torch, you'll get the CPU version, which is significantly slower.
Without the --index-url parameter, pip installs a CPU-only PyTorch. You want GPU acceleration.
# Upgrade pip first
python -m pip install --upgrade pip
# Install PyTorch with CUDA support (this is critical!)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
# Now install ComfyUI requirements
pip install -r requirements.txt
At this point, you have a working ComfyUI installation. But we still need to add HY-Motion.
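Before moving on, it's worth confirming that PyTorch actually sees your GPU. This one-liner should print your torch version and True:
# Verify CUDA is available to PyTorch
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"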
Step 3: Install HY-Motion Plugin
Now let's install the ComfyUI-HY-Motion1 plugin. We need to clone it into the custom_nodes folder.
# Navigate to custom_nodes folder
cd custom_nodes
# Clone the HY-Motion plugin
git clone https://github.com/jtydhr88/ComfyUI-HY-Motion1
# Enter the plugin folder
cd ComfyUI-HY-Motion1
# Install plugin dependencies (includes FBX SDK)
pip install -r requirements.txt
The requirements.txt already includes the FBX SDK (fbxsdkpy).
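A quick import test tells you whether the SDK actually loads. As far as I know, fbxsdkpy exposes the Autodesk bindings under the module name fbx, but verify that against the package docs if this fails:
# If this prints without an ImportError, the FBX SDK is ready
python -c "import fbx; print('FBX SDK OK')"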
You can also double-check with pip itself; if everything's fine, it will just say "requirement already satisfied":
pip install fbxsdkpy --extra-index-url https://gitlab.inria.fr/api/v4/projects/18692/packages/pypi/simple
Step 4: Download Model Weights
HY-Motion requires model weights that you need to download manually from Hugging Face. You'll need to create the correct folder structure inside your ComfyUI installation.
Here's how to set up the folder structure:
- Go to your ComfyUI models folder
- Create a new folder called HY-Motion
- Inside that, create ckpts
- Inside ckpts, create tencent
- Download and place the HY-Motion-1.0 and/or HY-Motion-1.0-Lite folders there
ComfyUI/
└── models/
    └── HY-Motion/                       # Create this folder
        └── ckpts/                       # Create this folder
            ├── tencent/                 # Create this folder
            │   ├── HY-Motion-1.0/       # Download from HuggingFace
            │   │   ├── config.yml
            │   │   └── latest.ckpt
            │   └── HY-Motion-1.0-Lite/  # Download from HuggingFace
            │       ├── config.yml
            │       └── latest.ckpt
            └── GGUF/                    # Optional: only if using GGUF
                └── Qwen3-8B-Q4_K_M.gguf
If you want to use GGUF quantized models for the text encoder, you'll also need to download those separately.
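If you prefer doing all this from the terminal, here's a sketch. The folder creation is plain PowerShell; the Hugging Face repo id is my assumption, so check the plugin README or the model page for the exact one:
# Create the folder structure in one command (adjust the ComfyUI path to yours)
New-Item -ItemType Directory -Force -Path "E:\ComfyUI_py311\models\HY-Motion\ckpts\tencent"
# Optionally pull the weights with the Hugging Face CLI
# (repo id "tencent/HY-Motion-1.0" is an assumption; verify it on huggingface.co)
pip install -U huggingface_hub
huggingface-cli download tencent/HY-Motion-1.0 --local-dir "E:\ComfyUI_py311\models\HY-Motion\ckpts\tencent\HY-Motion-1.0"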
VRAM Requirements
Choose models based on your available VRAM. The motion model and text encoder run together, so add up the requirements:
Motion Models
- HY-Motion-1.0: ~8GB+ VRAM
- HY-Motion-1.0-Lite: ~4GB+ VRAM
Qwen3-8B Text Encoder
- HuggingFace (no quantization): ~16GB VRAM
- HuggingFace int8: ~8GB VRAM
- HuggingFace int4: ~4GB VRAM
- GGUF Q4_K_M: ~5GB VRAM
For example, HY-Motion-1.0-Lite (~4GB) plus the GGUF Q4_K_M encoder (~5GB) comes to roughly 9GB, which fits comfortably on a 12GB card.
Step 5: Run ComfyUI
Open a terminal in your ComfyUI root folder (make sure your Conda environment is still active) and start ComfyUI:
# Make sure you're in the ComfyUI root folder
cd E:\ComfyUI_py311
# Run ComfyUI
python main.py
The test workflows are in the workflow folder inside the HY-Motion plugin directory. Load one of those to get started.
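If you're tight on VRAM, ComfyUI's standard launch flags apply to this install too:
# Standard ComfyUI memory-saving flag
python main.py --lowvram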

FBX Export & Performance
The FBX export works really well. On my 12GB GPU with the Lite model, I get about 40 seconds for a 12-second animation — that's very fast.
The output format is SMPLX animation, which you can import directly into Blender. If you want to use the animation with a different skeleton (like Mixamo for Unreal Engine), you'll need to retarget it.
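If you end up batch-processing animations, you can even drive the Blender import from the command line. A minimal sketch, assuming Blender is on your PATH; the file path is just an example:
# Import an exported FBX into a fresh Blender session (example path)
blender --python-expr "import bpy; bpy.ops.import_scene.fbx(filepath=r'E:\exports\motion.fbx')"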

Conclusion
HY-Motion is an extremely powerful tool. The quality of animations from just text prompts is impressive, and the FBX export makes it actually usable in real production workflows.
Huge credit to the Tencent Hunyuan team for creating HY-Motion, and to jtydhr88 for building the ComfyUI integration that makes this accessible.
TL;DR: Install Miniconda → Create Python 3.11 environment → Clone ComfyUI → Install PyTorch with CUDA → Clone HY-Motion plugin → Download weights → Run and enjoy text-to-animation with FBX export!