The Ultimate Guide: Get LivePortrait (Your AI Talking Avatar) Working on RunPod

1) Choosing the Right RunPod Template

To successfully install LivePortrait, we need a specific environment with the correct versions of Python, CUDA, and development tools. Starting with the right template saves a huge amount of setup time and prevents common errors.

After reviewing LivePortrait’s requirements, we selected the following template as the best starting point:

Template Selected: runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04

Here is a breakdown of why this template is the ideal choice:

  • Matches Python 3.10: LivePortrait is specifically designed and tested to work with Python 3.10. This template comes with it preinstalled.
  • Includes CUDA 11.8: The LivePortrait documentation explicitly recommends CUDA 11.8. This template provides the correct drivers and libraries, which is crucial for compatibility.
  • Provides a -devel Environment: The -devel tag means it includes essential compilers and development libraries (like build-essential). This is not optional – these tools are required to build custom operations for LivePortrait, such as the X-Pose ops needed for the “Animals” mode.
  • Reduces Setup Time: By providing PyTorch and other common GPU tools out of the box, we can get started much faster. (Note: We will upgrade PyTorch in the next step, but having the base installation is helpful).
  • Ensures Stability: Using the exact versions (Py 3.10, CUDA 11.8) recommended by the LivePortrait developers significantly reduces the risk of version-mismatch errors.
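Once the pod boots, a quick sanity check confirms the template matches these expectations. This is a minimal sketch; the `nvcc` and `gcc` checks assume the -devel image above and only warn if a tool is absent:

```shell
# Verify the interpreter and toolchain the template is expected to ship.
PYVER=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
echo "Python: $PYVER"   # expect 3.10 on this template

# nvcc and gcc come with the -devel image; warn rather than fail if missing.
command -v nvcc >/dev/null && nvcc --version | grep release || echo "nvcc not on PATH"
command -v gcc  >/dev/null && gcc --version | head -n 1     || echo "gcc not found"
```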

2) Why upgrade PyTorch (we upgraded from 2.1.0 → 2.3.0)

We upgraded to PyTorch 2.3.0 (cu118) because:

  • LivePortrait tests & recommendations: the project README / docs reference PyTorch 2.3.0 for CUDA 11.8 and CUDA 12.1 in examples. Using the recommended PyTorch reduces runtime issues.
  • torch.compile / performance: LivePortrait offers --flag_do_torch_compile to gain 20–30% speedups. torch.compile and certain optimizations are more stable and performant on newer PyTorch 2.3.0 builds.
  • Bug fixes & compatibility: newer PyTorch has compatibility fixes for CUDA/cuDNN interactions and for compiled ops used by LivePortrait.
  • Minimal risk: the pod’s CUDA version (11.8) matches the cu118 wheel we install, so the upgrade is straightforward.

Install command used

pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118

3) Full step-by-step commands (what we ran)

Run these in your RunPod terminal after the pod starts:

(A) Optional: update system packages and install FFmpeg

FFmpeg is required for reading and writing videos:

sudo apt update -y
sudo apt install -y ffmpeg git wget build-essential
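You can confirm the install took effect with a one-liner that prints the FFmpeg version (or a notice if it is still missing):

```shell
# Print the FFmpeg version line, or a notice if the binary is absent.
command -v ffmpeg >/dev/null && ffmpeg -version | head -n 1 || echo "ffmpeg not installed"
```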

(B) Clone repo and create Python environment

(You can use conda or venv. The image may already include conda.)

git clone https://github.com/KwaiVGI/LivePortrait.git
cd LivePortrait

# If you have conda:
conda create -n liveportrait python=3.10 -y
conda activate liveportrait

# If you prefer venv:
# python -m venv liveportrait
# source liveportrait/bin/activate
# pip install -U pip setuptools

(C) (Optional) Upgrade PyTorch to recommended version (cu118)

We replaced preinstalled torch with the 2.3.0 cu118 wheel:

pip install -U pip
pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 --index-url https://download.pytorch.org/whl/cu118
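After the install finishes, a quick check confirms which build actually got picked up. This snippet prints a notice instead of failing if torch is not installed yet; on this setup, the CUDA build should read 11.8:

```shell
python3 - <<'PY'
# Report the installed torch build, or a notice if torch is absent.
import importlib.util

if importlib.util.find_spec("torch") is None:
    print("torch: not installed yet")
else:
    import torch
    print("torch:", torch.__version__,
          "| CUDA build:", torch.version.cuda,
          "| GPU visible:", torch.cuda.is_available())
PY
```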

(D) Install python dependencies for LivePortrait

pip install -r requirements.txt


(E) Installing the Hugging Face CLI & Downloading Pretrained Weights

After installing the core dependencies with:

pip install -r requirements.txt

you’ll notice that huggingface-cli isn’t included anywhere in either:

  • requirements.txt, or
  • requirements_base.txt.

The LivePortrait project doesn’t depend directly on Hugging Face Hub, but we need it to download the pretrained model weights from Hugging Face.


Step 1 — Try installing the CLI normally

Initially, we installed the Hugging Face Hub with the CLI add-on:

pip install -U "huggingface_hub[cli]"

Then verified:

which huggingface-cli

In some RunPod images (including runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel-ubuntu22.04), the binary wasn’t properly linked — so the command still returned:

bash: huggingface-cli: command not found
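Before reinstalling anything, it is worth checking PATH. A hedged guess (we did not confirm this for this specific image): pip sometimes places console scripts in a per-user directory such as ~/.local/bin, which container shells do not always include:

```shell
# Add the common per-user script directory to PATH and re-check.
export PATH="$HOME/.local/bin:$PATH"
command -v huggingface-cli >/dev/null \
  && echo "huggingface-cli found" \
  || echo "huggingface-cli still missing"
```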

Step 2 — Try a force reinstall

We tried reinstalling the CLI to force re-linking:

pip install -U "huggingface_hub[cli]" --force-reinstall

Again checked:

which huggingface-cli

This sometimes fixes the PATH issue.
However, in our case, it still didn’t create a working huggingface-cli command.
At this point, we used a temporary workaround:

python -m huggingface_hub download KwaiVGI/LivePortrait \
  --local-dir pretrained_weights \
  --exclude "*.git*" "README.md" "docs"

This method works even if the CLI binary isn’t available — it calls the same function directly through Python.


Step 3 — Final reliable solution (what worked)

To make it consistent and permanent, we reinstalled a specific stable version of huggingface_hub that includes the CLI binary correctly:

pip uninstall -y huggingface_hub
pip install "huggingface_hub[cli]==0.24.6"

Version 0.24.6 is stable, compatible with Transformers 4.38.0 (which LivePortrait uses), and installs the huggingface-cli entry point correctly.


Step 4 — Verify the CLI and download the weights

Once reinstalled, verify it’s working:

which huggingface-cli

Then run the download:

huggingface-cli download KwaiVGI/LivePortrait \
  --local-dir pretrained_weights \
  --exclude "*.git*" "README.md" "docs"

If Hugging Face is slow or restricted in your region, use the mirror:

export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download KwaiVGI/LivePortrait \
  --local-dir pretrained_weights \
  --exclude "*.git*" "README.md" "docs"

Step 5 — Confirm the download succeeded

After the command completes, check:

ls pretrained_weights

You should see folders like:

liveportrait/
liveportrait_animals/
insightface/

and model files:

G_ema.pth
net_recon.pth
parsing_model.pth
retinaface.pth

At this point, your pretrained weights are downloaded successfully and the model is ready to run.
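The same check can be scripted so a missing folder stands out at a glance (folder names taken from the listing above):

```shell
# Report OK/MISSING for each expected weights folder.
for d in liveportrait liveportrait_animals insightface; do
  if [ -d "pretrained_weights/$d" ]; then
    echo "OK       $d"
  else
    echo "MISSING  $d"
  fi
done
```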


Summary of what happened

Step  What we did                                      Result
1     Installed huggingface_hub[cli] normally          CLI missing from PATH
2     Tried --force-reinstall                          Still missing
3     Used the python -m huggingface_hub workaround    Worked temporarily
4     Pinned huggingface_hub[cli] to 0.24.6            CLI worked correctly
5     Downloaded the pretrained weights                Success

(F) Run a test inference

python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl

Expected terminal output includes:

Animated video: animations/s9--d5.mp4
Animated video with concat: animations/s9--d5_concat.mp4
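To double-check the render is a valid video rather than a zero-byte file, ffprobe (installed alongside FFmpeg in step (A)) can report its duration; the snippet degrades to a notice if the file or tool is missing:

```shell
# Print the clip duration in seconds, or a notice if prerequisites are missing.
if command -v ffprobe >/dev/null && [ -f animations/s9--d5.mp4 ]; then
  ffprobe -v error -show_entries format=duration \
    -of default=noprint_wrappers=1:nokey=1 animations/s9--d5.mp4
else
  echo "ffprobe or animations/s9--d5.mp4 not available yet"
fi
```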

(G) (Optional) Launch Gradio UI

python app.py         # humans mode
# or
python app_animals.py # animals mode (NVIDIA Linux only)

You’ll see a local URL in the console (and a public one if sharing is enabled). Open it in your browser to use the UI.


4) Single “one-shot” startup script (paste into RunPod start-command)

Paste this into the RunPod start command box (or run it in the pod shell). It installs the essentials, upgrades PyTorch to 2.3.0 (cu118), fixes the Hugging Face CLI, downloads the pretrained weights, runs a test inference, and prints the result paths.

#!/bin/bash
set -e

# --- 1. Update system & install essentials ---
echo "🔧 Updating system and installing FFmpeg + tools..."
apt update -y && apt install -y ffmpeg git wget build-essential

# --- 2. Optional: create environment if conda exists ---
if command -v conda >/dev/null 2>&1; then
  echo "🐍 Creating conda environment..."
  conda create -n liveportrait python=3.10 -y || true
  source "$(conda info --base)/etc/profile.d/conda.sh"
  conda activate liveportrait || true
fi

# --- 3. Upgrade pip & install PyTorch 2.3.0 (CUDA 11.8) ---
echo "🔥 Installing PyTorch 2.3.0 (cu118)..."
python -m pip install -U pip setuptools wheel
python -m pip install --no-cache-dir torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 \
  --index-url https://download.pytorch.org/whl/cu118

# --- 4. Clone LivePortrait repo & install dependencies ---
echo "📦 Cloning LivePortrait..."
git clone https://github.com/KwaiVGI/LivePortrait.git || true
cd LivePortrait
python -m pip install -r requirements.txt

# --- 5. Fix Hugging Face CLI installation ---
echo "🤗 Installing stable Hugging Face CLI (v0.24.6)..."
pip uninstall -y huggingface_hub || true
pip install "huggingface_hub[cli]==0.24.6"

# --- 6. Download pretrained weights ---
echo "⬇️  Downloading pretrained model weights..."
# Optional: uncomment if Hugging Face is slow or restricted in your region
# export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download KwaiVGI/LivePortrait \
  --local-dir pretrained_weights \
  --exclude "*.git*" "README.md" "docs"

# --- 7. Verify weights ---
echo "✅ Listing downloaded weights..."
ls -l pretrained_weights | head -n 30 || true

# --- 8. Test a sample inference ---
echo "🚀 Running sample inference..."
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl

# --- 9. Show result location ---
echo "🎥 Done! Generated animations are saved under:"
ls -l animations | head -n 20 || true
echo "To launch the web UI, run: python app.py  (or python app_animals.py)"

5) Notes & best practices

  • If you plan to run many inferences, use an A5000 / RTX 4090 on RunPod for best cost/performance. If doing batch long videos or training, consider A6000 / A100.
  • Keep pretrained_weights/ on persistent storage if your RunPod uses ephemeral disks; otherwise you may have to re-download the weights after every restart.
  • If you want automatic public access to Gradio UI, pass --share or use RunPod public endpoint mapping.
  • For Animals mode, ensure the X-Pose ops build successfully (this requires compilers and a suitable CUDA/cuDNN); the -devel image was chosen to support this.
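For the persistent-storage point, one approach is to park the weights on the volume and symlink them back into the repo. This is a sketch, assuming the volume is mounted at /workspace (the RunPod default); adjust PERSIST if yours differs:

```shell
PERSIST="${PERSIST:-/workspace}"
if mkdir -p "$PERSIST/pretrained_weights" 2>/dev/null; then
  # First run: move any already-downloaded weights onto the volume.
  if [ -d pretrained_weights ] && [ ! -L pretrained_weights ]; then
    mv pretrained_weights/* "$PERSIST/pretrained_weights/" 2>/dev/null || true
    rm -rf pretrained_weights
  fi
  # Keep the repo-relative path working while the data lives on the volume.
  ln -sfn "$PERSIST/pretrained_weights" pretrained_weights
  echo "pretrained_weights -> $PERSIST/pretrained_weights"
else
  echo "$PERSIST is not writable; keeping weights on the ephemeral disk"
fi
```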
