feat(ascend): add 9 Ascend operator kernels #47
Draft
zhangyue207 wants to merge 56 commits into master
added 23 commits on April 18, 2026
- Add AclTensorCache for descriptor reuse across operator calls
- Rename ToAclDtype/IsIntegerDtype to toAclDtype/isIntegerDtype (camelCase)
- Extend WorkspacePool with multi-slot support and capture-mode assertion
- Optimize Gemm kernel with executor/scalar caching
- Add CacheKey hash support for operator instance caching
- Fix generate_wrappers.py argument ordering and format
- Rename skip_unsupported_dtypes fixture, add get_npu_stream utility
Add base classes: Cast, Cat, Linear, Matmul (replaces MatMul), Mul, PagedAttention, SiluAndMul. Rename AddRmsNorm params to match CANN convention (x1/x2/gamma/y_out/x_out). Remove verbose doc comments from FlashAttention, ReshapeAndCache, RotaryEmbedding base classes (implementation details belong in kernels).
Add ACLNN-based implementations for: Add, Cast, Cat, CausalSoftmax, FlashAttention, Linear, Matmul, Mul, RmsNorm, RotaryEmbedding, ReshapeAndCache (+ v2), Swiglu, SiluAndMul. All kernels use AclTensorCache for descriptor reuse and WorkspacePool for device memory management. Executor instances are cached with aclSetAclOpExecutorRepeatable for repeat dispatch.
Add alternative implementations with registries:
- AddRmsNorm: decomposed (0), fused aclnnAddRmsNorm (1), custom AscendC (2)
- RmsNorm: ACLNN (0), custom AscendC (1)
- RotaryEmbedding: ACLNN (0), ATB Rope (1)
- ReshapeAndCache: ACLNN (0), ScatterPaKvCache (1), ATB (2)
- Swiglu: decomposed (0), fused aclnnSwiGlu (1)
- SiluAndMul: fused aclnnSwiGlu (0), registry (1)
- PagedAttention: ATB (0)
Standalone AscendC kernel project with CMake build system. Includes op_host tiling, op_kernel device code, precision tests, and msprof benchmarks for both operators.
Add new tests: Cast, Cat, E2E Layer, FlashAttention, Linear, Matmul, Mul, PagedAttention, ReshapeAndCache, RotaryEmbedding, SiluAndMul. Update existing tests with NPU stream handling and Ascend-specific parametrization.
- C1: auto-format all C++ files with clang-format (25 files)
- C4: lowercase assert messages, remove trailing periods (10 messages)
- G4: backtick-fence identifiers in comments (causal_softmax)
- P5: add blank lines before return statements (generate_wrappers.py)
- C4: lowercase assert message starts (workspace_pool_, rms_norm, rotary_embedding)
- C4: remove trailing period from workspace_pool_ assert
- C9: add blank line between SlotKey struct members
- G4: backtick-fence identifiers in comments across 12 files
- G4: backtick-fence identifiers in assert messages (flash_attention, rotary_embedding)
- P1: remove duplicate `import re` in generate_wrappers.py
- P4: add blank lines around control flow in test_flash_attention.py
- C4: lowercase "rope" in ATB assert messages - G4: backtick-fence `VariantPack`, `rotaryCoeff`, `sparseMode`, `hostData` - G4: backtick-fence identifiers in Python test comments - P4: add blank line before `if` in test_rms_norm_precision.py
… loading
- Delete `test_rms_norm_precision.py` (duplicate of `tests/test_rms_norm.py`)
- Delete `run_rms_norm_precision_report.py` (another copy with hardcoded path)
- Unify `test_add_rms_norm.py` to use `import ascend_kernel` instead of ctypes manual loading
New operators and features:
- ApplyRotaryPosEmb: pre-gathered cos/sin operator with ATB backend
- TopkToppSampling: ATB-based fused sampling operator
- SiluAndMul: standalone operator backed by aclnnSwiGlu
- ATB PagedAttention: graph-safe decode attention

Enhancements:
- WorkspacePool: multi-slot support and capture-mode assertion
- Migrate temp buffers to WorkspacePool slots (Swiglu, CausalSoftmax, RmsNorm, AddRmsNorm)
- RotaryEmbedding: accept 2D [T, N*D] input, fix ATB cos/sin gathering
- ReshapeAndCache: handle int64 slot_mapping in ATB kernel
- Swiglu: add fused aclnnSwiGlu implementation (index=1)
- Parametrize rms_norm and reshape_and_cache tests by implementation_index
… data

The operator cache keys ignore data pointers (they compare only shape/dtype/device/strides). When RotaryEmbedding was cached from one test and reused by another with a different cos_sin_cache tensor (same shape, different random data), the IndexSelect gathered from the old tables, producing garbage output.

Track the cos_sin_cache data pointer and re-upload the expanded cos/sin tables when it changes (see the sketch below). In production this is a single pointer comparison per call (a no-op); the cos_sin_cache weight tensor has a stable address.

Fixes 6 rotary_embedding_2d test failures (head_size=64, fp16, both CANN and ATB paths) that only reproduced when test_apply_rotary_pos_emb ran first.
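A minimal sketch of that pointer check, with stand-in types and method names (not the actual kernel code):

```cpp
#include <cstddef>

// Stand-in for the framework Tensor handle (names below are assumptions).
struct Tensor {
    const void *ptr = nullptr;
    const void *data() const { return ptr; }
};

class RotaryEmbeddingKernel {
public:
    // Called at the top of operator(): the operator cache key compares only
    // shape/dtype/device/strides, so the kernel also tracks the weight's
    // data pointer and re-uploads the expanded tables when it changes.
    void ensureCosSinTables(const Tensor &cos_sin_cache) {
        const void *p = cos_sin_cache.data();
        if (p == cached_cos_sin_ptr_) {
            return;  // production steady state: a single pointer comparison
        }
        uploadExpandedCosSin(cos_sin_cache);  // re-expand cos/sin and copy to device
        cached_cos_sin_ptr_ = p;
    }

private:
    void uploadExpandedCosSin(const Tensor & /*cache*/) { /* device upload elided */ }
    const void *cached_cos_sin_ptr_ = nullptr;
};
```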
Replace the per-operator stale-cache workaround with an Operator::clear_cache() generation counter (sketched below); a pytest autouse fixture clears caches between test modules. Skip aclnnScatterPaKvCache (impl_index=1) on 910B hardware. Synced from feat/ascend-operators commits c68633f, 57f96bf.
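A sketch of the generation-counter idea; `clear_cache()` is the name used above, everything else is illustrative:

```cpp
#include <atomic>
#include <cstdint>

class Operator {
public:
    // Bumping the generation invalidates every cached entry at once;
    // the pytest autouse fixture calls this between test modules.
    static void clear_cache() { generation_.fetch_add(1, std::memory_order_relaxed); }

protected:
    // Cached executors/descriptors record the generation they were built
    // under; a lookup that sees an older generation rebuilds the entry.
    bool cacheIsCurrent() const {
        return cached_generation_ == generation_.load(std::memory_order_relaxed);
    }
    void markCacheCurrent() {
        cached_generation_ = generation_.load(std::memory_order_relaxed);
    }

private:
    static std::atomic<uint64_t> generation_;
    uint64_t cached_generation_ = 0;
};

std::atomic<uint64_t> Operator::generation_{1};
```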
ATB Rope with rotaryCoeff=2 supports bf16 on 910B. Remove the fp16-only skip guard — all 6 previously skipped bf16 test cases pass.
Extend PagedAttention base class and ATB kernel with optional seq_lens_host / block_table_host params that skip aclrtMemcpy D2H copies when caller provides CPU-pinned host tensors. Add unit tests for host-tensor PA and FA paged decode with CPU cu_seqlens_kv.
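A sketch of the host-tensor fast path this enables; only the `seq_lens` / `seq_lens_host` pairing comes from the change itself, the types and helpers are stand-ins:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Stand-in for the framework Tensor (assumption).
struct Tensor {
    std::size_t numel = 0;
    const int32_t *host_data = nullptr;  // valid only for CPU tensors
};

// Stand-in for the blocking aclrtMemcpy D2H copy (stub body for the sketch).
std::vector<int32_t> copyDeviceToHost(const Tensor &device_tensor) {
    return std::vector<int32_t>(device_tensor.numel, 0);
}

// If the caller passes a CPU-pinned host tensor, read it directly and skip
// the D2H copy; otherwise fall back to copying the device tensor.
inline std::vector<int32_t> resolveSeqLens(const Tensor &seq_lens,
                                           const std::optional<Tensor> &seq_lens_host) {
    if (seq_lens_host) {
        const Tensor &h = *seq_lens_host;
        return std::vector<int32_t>(h.host_data, h.host_data + h.numel);
    }
    return copyDeviceToHost(seq_lens);
}
```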
`aclDestroyAclOpExecutor` internally frees `aclTensor` descriptors it holds. Add `AclTensorCache::release()` and `destroy()` methods, guard all destructors with `isAclRuntimeAlive()`, and remove redundant `aclDestroyTensor` calls for executor-owned tensors. Verified: CANN reference-counts tensors, so destroy-tensor-then-destroy-executor order is safe.
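An illustration of the teardown guard, using stand-in handle types rather than the real CANN declarations; `isAclRuntimeAlive()` is the project helper referenced above:

```cpp
#include <vector>

// Stand-ins for the real CANN tensor handle and destroy call (assumptions).
struct AclTensorHandle {};
inline bool isAclRuntimeAlive() { return true; }          // stub; real check queries the runtime
inline void destroyTensorHandle(AclTensorHandle *) {}     // stub for the real destroy call

class AclTensorCache {
public:
    ~AclTensorCache() {
        // Skip cleanup entirely if the ACL runtime is already torn down
        // (e.g. during process exit); the destroy calls would be unsafe.
        if (!isAclRuntimeAlive()) {
            return;
        }
        // Descriptors handed to an executor are freed when the executor is
        // destroyed, so only descriptors the cache still owns are destroyed here.
        for (AclTensorHandle *t : owned_) {
            destroyTensorHandle(t);
        }
    }

private:
    std::vector<AclTensorHandle *> owned_;
};
```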
added 26 commits on April 18, 2026
…used-attention plan
…ng, ratio table first, decision matrix
…CT_BACKENDS=OFF

The docs/perf/*.md files documented the e2e optimization mission and are not part of the Ascend operator kernel scope this PR delivers. Removed from the tip; unchanged in intermediate history.

AUTO_DETECT_BACKENDS default flipped to OFF in pyproject.toml to avoid the openblas link failure in the ascend CI container (master's torch-backend auto-detect requires libgfortran symbols not present there). Build still enables the torch backend explicitly when requested.
Container-side openblas linker issue will be fixed separately; do not regress the master-level default in this PR.
torch wheels on aarch64 (including `torch==2.9.0+cpu` used in the ascend CI container) are auditwheel-repaired and bundle transitive dependencies (`libgfortran-<hash>.so`, `libopenblasp-<hash>.so`) into a sibling `torch.libs/` directory. `torch.utils.cpp_extension.library_paths()` returns only `torch/lib`, so the linker cannot resolve the bundled NEEDED entries and fails with `undefined reference to _gfortran_etime@GFORTRAN_8`. Add `torch.libs/` to both the build and install rpath, plus `-rpath-link` for link-time resolution without polluting our final NEEDED list.
…name + drop registry.h (SFINAE autodetect)
…m order (impl before stream)
…ailures

Docker 18.09 on Ascend CI hosts races on `--rm` cleanup: the inner process exits cleanly with rc=0, but the daemon SIGKILLs the container during teardown, surfacing exit code 137 to `run.py` even though the pytest stage succeeded. Parse the per-run junit XML when returncode==137 and downgrade to a warning if no failures/errors are reported.
The skip was based on an outdated diagnosis that ATB PagedAttention crashes during Setup on 910B + CANN 8.5.x. After the framework rebase onto master (which includes the pybind11 kw arg order fix), all 10 parametrizations pass on 910B4 with CANN 8.5.1. Keep the NPU-available and implementation-registered checks since they are cheap, structural prerequisites.
RotaryEmbedding impl=1 (ATB Rope) now plumbs both rotary styles:
- is_neox_style=true -> rotaryCoeff=2 (half split + cat)
- is_neox_style=false -> rotaryCoeff=head_size (interleave)

The cos/sin expand path also branches: the neox layout duplicates the half values front/back, while the interleave layout repeats each value pair-wise (see the sketch below).

The test skip is narrowed to impl=0 only, which still uses aclnnApplyRotaryPosEmbV2 (declares "interleave" but only implements "half"). The G (partial rotary) skip message is updated to reflect that neither aclnn nor ATB fused APIs support rotary_dim < head_size.
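A plain CPU reference of the two expansion layouts (illustrative code, not the kernel):

```cpp
#include <cstddef>
#include <vector>

// Expand one half-width cos (or sin) row of length D/2 to full width D.
//   neox ("half") layout:  [c0..c_{D/2-1}, c0..c_{D/2-1}]                (duplicate front/back)
//   interleave layout:     [c0, c0, c1, c1, ..., c_{D/2-1}, c_{D/2-1}]   (pair-wise repeat)
std::vector<float> expandRow(const std::vector<float> &half, bool is_neox_style) {
    const std::size_t h = half.size();
    std::vector<float> full(2 * h);
    for (std::size_t i = 0; i < h; ++i) {
        if (is_neox_style) {
            full[i] = half[i];
            full[i + h] = half[i];
        } else {
            full[2 * i] = half[i];
            full[2 * i + 1] = half[i];
        }
    }
    return full;
}
```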
Partial rotary (`rotary_dim < head_size`) is not expressible in the V2 (`aclnnApplyRotaryPosEmbV2`, impl=0) or ATB `RopeParam` (impl=1) APIs — both require `cos.D == sin.D == x.D`. `aclnnRopeWithSinCosCache` is the only Ascend fused API that accepts partial rotary natively; it also supports both neox and interleave styles via `isNeoxStyle` bool. `test_rotary_embedding_partial` now routes through impl=2, resolving the 4 G-case skips.
…t` exist

The rationale (CANN CPU-tensor contract + NPUGraph capturability) was only documented in the Ascend ATB kernel header. Surface it on the base class where the API contract lives, so any future backend implementor understands why the optional host tensors are part of the signature.
…sync `aclnnCast`

The ATB `ReshapeAndCacheParam` (impl=2) int64 path previously did `aclrtMemcpyAsync` D2H + CPU int64→int32 cast + `aclrtMemcpyAsync` H2D with an explicit `aclrtSynchronizeStream` in between. The sync blocks the stream and makes the int64 path NPUGraph-incompatible, which forced callers (vllm-infini) to pre-cast `slot_mapping` to int32 on the Python side (otherwise 36 redundant Cast launches per decoding step).

Route the int64 branch through a cached `aclnnCast` instead: src/dst tensor descriptors live in `AclTensorCache` slots, the executor is set repeatable, and the cast stays fully async on-stream (see the sketch below). The whole op now matches vLLM's native int64 `slot_mapping` convention without the sync penalty.
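A control-flow sketch of the new int64 path; the helper names stand in for the cached `aclnnCast` launch, the `WorkspacePool` slot, and the ATB kernel call:

```cpp
#include <cstddef>

// Stand-in tensor and helpers (assumptions, not the real framework API).
enum class DType { kInt32, kInt64 };
struct Tensor {
    DType dtype = DType::kInt32;
    std::size_t numel = 0;
};

Tensor workspaceSlotInt32(std::size_t numel, void *stream);                        // pooled scratch
void launchCachedCastInt64ToInt32(const Tensor &src, Tensor &dst, void *stream);   // async, on-stream
void launchAtbReshapeAndCache(const Tensor &slots_i32, void *stream);              // ATB kernel, int32 slots

// No D2H copy, no CPU cast, no stream sync: the whole path stays on-stream
// and is NPUGraph-capturable.
void reshapeAndCacheImpl2(const Tensor &slot_mapping, void *stream) {
    if (slot_mapping.dtype == DType::kInt64) {
        Tensor scratch = workspaceSlotInt32(slot_mapping.numel, stream);
        launchCachedCastInt64ToInt32(slot_mapping, scratch, stream);
        launchAtbReshapeAndCache(scratch, stream);
    } else {
        launchAtbReshapeAndCache(slot_mapping, stream);
    }
}
```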
…e-default)
Align with vLLM's `RotaryEmbedding.forward(positions, query, key)`
signature by letting callers omit the output buffers — the kernel then
writes back in place on `query` / `key`. This removes a signature
mismatch that forced vllm-infini to allocate and pass explicit out
tensors it doesn't need.
Base class signature:
`query_out` / `key_out` → `std::optional<Tensor>` with `std::nullopt`
default. Shape / stride members fall back to `query` / `key` when the
optional is empty.
All three Ascend impls resolve the optional to a concrete `Tensor` at
the top of `operator()` via `value_or(query)`:
- impl=0 (aclnn V2): skips the D2D memcpy in the inplace case
since `query.data() == q_out.data()`
- impl=1 (ATB RopeParam): same short-circuit on the D2D copy
- impl=2 (aclnnRopeWithSinCosCache): descriptors reuse `q_out` /
`k_out` pointers, so the kernel writes to
whichever tensor is resolved
Adds `test_rotary_embedding_inplace` covering both fp16 / bf16 on
impl=0 and impl=1. Tolerance is atol=5e-3 — matches the V2 ~4 ULP
fp16 accumulator error documented in `kernel.h`.
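A condensed sketch of the optional-output resolution; `value_or(query)` is from the change above, the `Tensor` stand-in and function shape are illustrative:

```cpp
#include <optional>

// Stand-in for the framework Tensor handle (cheap to copy).
struct Tensor {
    void *ptr = nullptr;
    void *data() const { return ptr; }
};

// operator() resolves the optionals to concrete tensors up front; when the
// caller omits them, the kernel writes back in place on query / key.
void rotaryEmbeddingCall(const Tensor &query, const Tensor &key,
                         const std::optional<Tensor> &query_out,
                         const std::optional<Tensor> &key_out) {
    Tensor q_out = query_out.value_or(query);
    Tensor k_out = key_out.value_or(key);

    // Inplace case: query.data() == q_out.data(), so impl=0 / impl=1 can
    // skip the D2D memcpy that would otherwise seed the output buffer.
    const bool inplace_q = (q_out.data() == query.data());
    const bool inplace_k = (k_out.data() == key.data());
    (void)inplace_q;
    (void)inplace_k;  // kernel launch elided in this sketch
}
```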
Keeps the native `window_left` / `window_right` pair as-is and adds an optional `std::optional<int64_t> sliding_window` parameter. When set, the base class normalizes it to the causal-sliding pair `(sliding_window - 1, 0)`; when both forms are supplied, the normalized values must agree. Callers can now use either entry point:

// Pair form (existing, unchanged):
flash_attention(..., window_left=255, window_right=0, ...)
// vLLM form:
flash_attention(..., sliding_window=256, ...)

The Ascend impl reads the resolved pair from the base-class members (`window_left_` / `window_right_`), so `sliding_window` is honored at both construction and call time. Also extends `generate_wrappers.py` to set `py::arg(...) = py::none()` defaults for all `std::optional<...>` parameters (previously only `std::optional<Tensor>`), so `sliding_window` is properly optional on the Python side. Adds `test_flash_attention_sliding_window_equivalence` asserting bit-exact equality between the two entry points (see the sketch below).
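A small sketch of the normalization and agreement check (function shape and the -1 "unlimited" sentinel are assumptions):

```cpp
#include <cstdint>
#include <optional>
#include <stdexcept>
#include <utility>

// Resolve (window_left, window_right) from either entry point. A set
// sliding_window is normalized to the causal pair (sliding_window - 1, 0);
// if the pair form is also supplied, the two must agree.
std::pair<int64_t, int64_t> resolveWindow(std::optional<int64_t> window_left,
                                          std::optional<int64_t> window_right,
                                          std::optional<int64_t> sliding_window) {
    int64_t left = window_left.value_or(-1);   // -1 used here as "unlimited"
    int64_t right = window_right.value_or(-1);
    if (sliding_window) {
        const int64_t norm_left = *sliding_window - 1;
        if (window_left && *window_left != norm_left) {
            throw std::invalid_argument("sliding_window disagrees with window_left");
        }
        if (window_right && *window_right != 0) {
            throw std::invalid_argument("sliding_window disagrees with window_right");
        }
        left = norm_left;
        right = 0;
    }
    return {left, right};
}
```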
Summary
Full Ascend operator set (18 operators + framework scaffolding), plus three
foundational bug fixes surfaced during a rebase onto latest master and four
rounds of API alignment with vLLM's conventions.
Compared to the original PR #47, this branch adds three things on top:
- a rebase onto latest master at 2ffbeb0 (which now includes the PR #60 `WorkspacePool` rename, the PR #62 fix making `cuda/caster.cuh` self-contained by including `data_type.h`, the PR #63 SFINAE autodetect, and so on)
- the foundational fixes (including the `py::arg` order fix in the bindings generator)
- fewer skipifs (of the original 1682) and six new test cases
Full suite: 3767 passed / 1664 skipped / 0 failed on Ascend 910B + CANN 8.5.1.
Ascend operators (18)
| Operator | Backend |
| --- | --- |
| Add | aclnnAdd |
| Mul | aclnnMul |
| Cast | aclnnCast |
| Cat | aclnnCat |
| Matmul | aclnnMatmul |
| Gemm | aclnnMm |
| Linear | aclnnMatmul + optional bias |
| RmsNorm | aclnnRmsNorm + custom AscendC |
| AddRmsNorm | decomposed / aclnnAddRmsNorm / custom AscendC |
| Swiglu | aclnnSilu + aclnnMul |
| SiluAndMul | aclnnSwiGlu |
| CausalSoftmax | aclnnSoftmax + mask |
| RotaryEmbedding | RopeParam / aclnnRopeWithSinCosCache |
| ApplyRotaryPosEmb | ATB |
| ReshapeAndCache | InplaceIndexCopy / custom / ATB |
| FlashAttention | aclnnFusedInferAttentionScoreV4 (prefill + paged decode) |
| PagedAttention | PagedAttentionParam (with CPU-pinned D2H-free entry) |
| TopkToppSampling | TopkToppSamplingParam |

Framework / generator / CI fixes
fix(scripts): py::arg order in bindings generator

Root cause: the pybind11 bindings emitted by `scripts/generate_wrappers.py` listed `py::arg` entries in a different order than the C++ lambda parameters. When callers used kwargs, `implementation_index` and `stream` were silently swapped: the stream integer went into the impl-index slot, and dispatch SIGABRTed.

Fix: emit `py::arg("implementation_index")` before `py::arg("stream")` so the kwarg names line up positionally with the C++ signature (see the sketch below).
fix(ci): treat exit 137 as success when pytest junit XML reports no failures

Symptom: the Docker 18.09 `chown` step occasionally receives a SIGKILL, so `.ci/run.py` exits with code 137 even though pytest itself completed cleanly.

Fix: read the `errors` / `failures` fields from `/workspace/results/test-results.xml` to determine true failure; don't treat a teardown race as a test failure.
fix(ascend): adopt PR #63/#60 master API

Bring Ascend code up to date with the latest master conventions:
- `WorkspacePool::GetInstance()` → `GetWorkspacePool()`
- `Pool::Get(stream, size)` → `Pool::Ensure(stream, size)`
- drop `registry.h` (use the `ActiveImplementations` / `Impl` SFINAE autodetect instead)
Skip coverage

Newly covered (previously skipped):
- `is_neox_style=false` (ATB `RopeParam` `rotaryCoeff=head_size` interleave mode)
- partial rotary (`rotary_dim < head_size`) via `aclnnRopeWithSinCosCache`

Only remaining skip: RotaryEmbedding `impl=0` (V2) with `is_neox_style=false`; V2 only plumbs `rotaryMode="half"`, out of scope for this PR.
vLLM API alignment (additive, no breaking change)
perf(reshape_and_cache): int64 slot_mapping async Cast

ATB `ReshapeAndCacheParam` requires an int32 `slot_mapping`. The previous implementation handled int64 (PyTorch / vLLM's native dtype) via D2H + CPU cast + H2D + `aclrtSynchronizeStream`, which stalled the stream and made the int64 path NPUGraph-incapturable.

Replaced with a cached `aclnnCast` async conversion on-stream. Performance matches the int32 pass-through and the whole op is now graph-capturable.
feat(rotary_embedding): optional query_out / key_out

vLLM's `RotaryEmbedding.forward(positions, query, key)` is inplace; InfiniOps previously required the caller to pass `query_out` / `key_out`. Both parameters are now `std::optional<Tensor>`: when omitted, the kernel writes back in place on `query` / `key` (vLLM semantics). All three impls (V2, ATB, SinCosCache) support this. `test_rotary_embedding_inplace` covers both dtypes × two impls.

feat(flash_attention): add sliding_window entry (additive)

The native `window_left` / `window_right` pair is kept as-is; added an optional `std::optional<int64_t> sliding_window`:
- `sliding_window` only → normalized to `(sliding_window - 1, 0)` causal sliding (vLLM convention)
- `generate_wrappers.py` extended: all `std::optional<...>` parameters now default to `= py::none()` (previously only `std::optional<Tensor>` had that default)
- `test_flash_attention_sliding_window_equivalence` asserts bit-exact equivalence between the two entry points
docs(paged_attention): host tensor contract

The `src/base/paged_attention.h` class comment now explains why `seq_lens_host` / `block_table_host` exist (CANN `qSeqLens` CPU-resident contract + ATB `hostData` + NPUGraph capture prerequisite), so future backend implementors understand the API contract.
Test status
Key delta:
Non-Ascend platforms: not exercised locally. This PR touches only
Ascend / base / scripts / ci; it does not alter the CUDA / Metax /
Cambricon / Moore / Iluvatar operator paths. CI covers those.
Review guidance
- The foundational fixes (pybind arg order / CI exit 137 / PR #63 API adoption) are operator-agnostic and broadly useful; splitting them out into small PRs could accelerate master convergence.
- For the kernels, reviewing per operator is recommended (each kernel file is self-contained).
- `feat/ascend-operators-bak-2026-04-18` is a pre-force-push backup ref (points at the original PR #47 tip 0d93135). Safe to delete once merged.
Test plan
- `python3 .ci/run.py --local` (full regression, Ascend 910B): 3767 passed
- `test_rotary_embedding_inplace` (fp16/bf16 × impl=0/1): 4 passed
- `test_flash_attention_sliding_window_equivalence` (pair vs sliding_window bit-exact): 2 passed
- `test_reshape_and_cache` (int32 + int64 paths): 32 passed
- `test_paged_attention`: 10 passed after 910B skip removal
- `clang-format` passes locally on all tracked `*.h` / `*.cc` / `*.cuh` / `*.mlu`