A PyTorch-native inference engine with cache, parallelism, quantization for Diffusion Transformers.
W4A4 (4-bit weight, 4-bit activation) and INT8 KV-cache quantization for Infinity VAR models. Optimized for high-fidelity generative AI deployment on edge GPUs (e.g., NVIDIA Jetson).
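To illustrate the kind of scheme an INT8 KV-cache relies on, here is a minimal sketch of symmetric per-tensor INT8 quantization in plain Python. This is an illustrative assumption about the general technique, not code from either repository; the function names are hypothetical.

```python
# Symmetric per-tensor INT8 quantization: floats are mapped to int8 codes
# sharing one scale factor, then dequantized back to approximate floats.

def quantize_int8(values):
    """Map floats to int8 codes with a shared symmetric scale."""
    amax = max(abs(v) for v in values) or 1.0
    scale = amax / 127.0  # 127 is the max magnitude representable in int8
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate float values from int8 codes."""
    return [c * scale for c in codes]

# Example: quantize a few mock KV-cache entries and round-trip them.
kv = [0.5, -1.25, 3.0, -3.0]
codes, scale = quantize_int8(kv)
approx = dequantize_int8(codes, scale)
```

The round-trip error is bounded by half the scale, which is why per-tensor INT8 works well for KV caches whose value ranges are moderate; W4A4 schemes typically need finer-grained (per-group) scales to stay accurate at 4 bits.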