
feat: add wan2.2_t2v model and quantization config#454

Open
Charles2530 wants to merge 11 commits into ModelTC:main from Charles2530:feat/wan2.2-t2v

Conversation


@Charles2530 Charles2530 commented Mar 10, 2026

Add the wan2.2_t2v model and quantization configuration, along with the corresponding config and script changes.

Charles2530 and others added 11 commits March 10, 2026 10:57
Add a small test script to load sharded safetensors from a Hugging Face repo/local dir and print parameter keys with shapes.

Made-with: Cursor
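The commit above adds a test script for listing parameter keys and shapes from safetensors files. A minimal stdlib-only sketch of the core parsing step is shown below; it relies only on the documented safetensors header layout (an 8-byte little-endian length followed by a JSON header), and the file name and tensor key here are made up for the demo. The real script would additionally iterate over the shards listed in a model's `*.safetensors.index.json`.

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file without any ML deps.

    The format stores an 8-byte little-endian u64 header length, then a
    JSON object mapping tensor names to dtype/shape/data_offsets.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # Drop the optional "__metadata__" entry; everything else is a tensor.
    return {k: v for k, v in header.items() if k != "__metadata__"}

def print_param_keys(path):
    for name, info in sorted(read_safetensors_header(path).items()):
        print(f"{name}: shape={info['shape']} dtype={info['dtype']}")

# Build a tiny one-tensor file so the sketch is self-checking.
# The tensor name is hypothetical, not taken from the actual model.
tensor_bytes = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
demo_header = {
    "blocks.0.attn.weight": {
        "dtype": "F32",
        "shape": [2, 2],
        "data_offsets": [0, len(tensor_bytes)],
    }
}
blob = json.dumps(demo_header).encode("utf-8")
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(blob)) + blob + tensor_bytes)

print_param_keys("demo.safetensors")
```

Using only the header avoids loading tensor data, so even multi-gigabyte shards can be inspected quickly.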
…sformer experts

Add support for skipping quantization on specified transformer blocks
(block_ids: [0, 40] → block 0 of transformer and transformer_2) to
improve quality of the two highest-impact blocks.

Changes:
- base_blockwise_quantization.py: add _get_ignored_block_ids_set and
  _is_ignored_block helpers; modify set_no_quant_layer to skip all
  linear layers when layer_names is empty; modify run to skip
  block_transform for ignored blocks so AWQ scales are not applied
- configs/…/awq_w_a_skip_first.yaml: new config with ignored_layers
  block_ids [0, 40] and separate save_path
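For orientation, a hypothetical sketch of the `ignored_layers` portion of the new config is below. Only `block_ids: [0, 40]` and the existence of a separate `save_path` come from the commit message; the surrounding key names and values are assumptions about the config schema, not the actual file contents.

```yaml
# Hypothetical sketch of awq_w_a_skip_first.yaml; exact schema is assumed.
quant:
  method: AWQ
  ignored_layers:
    block_ids: [0, 40]   # block 0 of transformer and transformer_2
save:
  save_path: ./wan2.2_t2v_awq_skip_first/   # separate from the default config
```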

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
