
[Bug] Studio setup messed up install of torchcodec on aarch64 #4446

@bjester

Description

  1. Did you update? This was fresh install
  2. Local -- DGX Spark (aarch64)
  3. Number GPUs used: 1
  4. Which notebook? None -- Unsloth Studio
  5. Which Unsloth version, TRL version, transformers version, PyTorch version? unsloth==2026.3.5, trl==0.24.0, transformers==5.3.0, and torch==2.10.0+cu130
  6. Which trainer?

What I ran

uv pip install unsloth --torch-backend=auto
unsloth studio setup

Result

╔══════════════════════════════════════╗
║     Unsloth Studio Setup Script      ║
╚══════════════════════════════════════╝
✅ Node v24.14.0 and npm 11.11.1 already meet requirements. Skipping nvm install.
✅ Node v24.14.0 | npm 11.11.1
✅ Frontend built to frontend/dist
finished finding best python
✅ Using python3 (3.12.13) — compatible (3.11.x – 3.13.x)
[=======-------------]  4/11  extra codecs                               uv failed, falling back to pip...
Using Python 3.12.13 environment at: /home/bjester/.unsloth/studio/.venv
  × No solution found when resolving dependencies:
  ╰─▶ Because all versions of torchcodec have no wheels with a matching platform tag (e.g., `manylinux_2_39_aarch64`) and you require torchcodec,
      we can conclude that your requirements are unsatisfiable.

      hint: Pre-releases are available for `torchcodec` in the requested range (e.g., 0.0.0.dev0), but pre-releases weren't enabled (try:
      `--prerelease=allow`)

      hint: Wheels are available for `torchcodec` (v0.10.0) on the following platforms: `manylinux_2_28_x86_64`, `macosx_11_0_arm64`, `win_amd64`

[====================] 11/11  finalizing                              
✅ Python dependencies installed

   Pre-installing transformers 5.x for newer model support...
✅ Transformers 5.x pre-installed to /home/bjester/.unsloth/studio/.venv_t5/

Building llama-server for GGUF inference...
   Building with CUDA support (nvcc: /usr/local/cuda/bin/nvcc)...
   GPU compute capabilities: 121 -- limiting build to detected archs
✅ llama-server built at /home/bjester/.unsloth/llama.cpp/build/bin/llama-server
✅ llama-quantize available for GGUF export

╔══════════════════════════════════════╗
║           Setup Complete!            ║
╠══════════════════════════════════════╣
║ Launch with:                         ║
║                                      ║
║ unsloth studio -H 0.0.0.0 -p 8000    ║
╚══════════════════════════════════════╝

I didn't think much of the torchcodec error since the whole setup process completed. Later, when I went to fine-tune a model, it failed because it couldn't import from torchcodec.decoders:

Training error: Custom format mapping failed: No module named 'torchcodec.decoders'
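For anyone hitting this, the breakage can be confirmed without starting a training run. A quick probe (a sketch; activate the Studio venv first so `python3` resolves to its interpreter):

```shell
# Probe the module that training needs. In my case the stub 0.0.0.dev0 wheel
# had no decoders module, so the import fails there.
python3 - <<'EOF'
try:
    import torchcodec.decoders  # noqa: F401
    print("torchcodec.decoders: OK")
except Exception as exc:
    print(f"torchcodec.decoders: BROKEN ({exc})")
EOF
```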

As this issue notes, ARM (aarch64) wheels for torchcodec aren't published on PyPI, but they are available on the PyTorch wheel index.
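Until the Studio setup script handles this itself, it could pick the index by architecture. A minimal sketch (the cu130 URL matches the CUDA 13.0 build from my setup log; adjust it for your CUDA version, and the variable name here is just illustrative):

```shell
# Choose an index that actually carries torchcodec wheels for this machine:
# the PyTorch index for aarch64, plain PyPI otherwise.
arch="$(uname -m)"
if [ "$arch" = "aarch64" ]; then
    TORCHCODEC_INDEX="https://download.pytorch.org/whl/cu130"
else
    TORCHCODEC_INDEX="https://pypi.org/simple"
fi
echo "installing torchcodec from $TORCHCODEC_INDEX"
# pip install torchcodec --index-url="$TORCHCODEC_INDEX"
```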

How I fixed it

First, I stopped Unsloth Studio. Then I activated the virtualenv it was using:

cd ~/.unsloth/studio
source .venv/bin/activate

I took a look at what was installed:

torchcodec==0.0.0.dev0

Given that the current release is 0.10, this 0.0.0.dev0 placeholder seemed wrong. Within the virtualenv, I installed it manually from the PyTorch index:

pip install torchcodec --index-url=https://download.pytorch.org/whl/cu130
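Before restarting, it's worth confirming a real release landed rather than the placeholder (a quick check using only the standard library; anything other than 0.0.0.dev0 suggests the wheel from the PyTorch index was picked up):

```shell
# Print the installed torchcodec version without importing the package itself
python3 - <<'EOF'
import importlib.metadata as md
try:
    print("torchcodec", md.version("torchcodec"))
except md.PackageNotFoundError:
    print("torchcodec not installed")
EOF
```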

I restarted Unsloth Studio and didn't hit the error again.

Metadata

Labels: Feature request pending on roadmap