Pull requests: uxlfoundation/oneDNN
#4997: xe: ggemm: Relax the src quantization mask. Fix bia conversion bug.
Labels: component:tests (Codeowner: @oneapi-src/onednn-arch), platform:gpu-intel (Codeowner: @oneapi-src/onednn-gpu-intel)
Opened Apr 10, 2026 by umar456 (Contributor)
#4996: [rls-v3.12] benchdnn: inputs: graph: set fpmath mode for int4 mlp test
Labels: backport, component:examples, component:tests
Opened Apr 10, 2026 by TaoLv (Contributor)
#4995: [Backport] src: gpu: intel: binary: widen offset calc with off_t
Labels: backport, component:tests, platform:gpu-intel
Opened Apr 9, 2026 by h-sadia (Contributor)
#4994: [Backport] src: gpu: intel: binary: widen offset calc with off_t
Labels: backport, component:tests, platform:gpu-intel
Opened Apr 9, 2026 by h-sadia (Contributor), 2 tasks
#4993: Kealanba/nvf4 nvlp rls v312
Labels: backport, platform:gpu-intel
Opened Apr 9, 2026 by kealan-barbieri (Contributor)
#4992: [GPU] NVLP compiler issue workaround for pooling
Labels: platform:gpu-intel
Opened Apr 9, 2026 by skazakov1 (Contributor)
#4991: [GPU] Widen GPU offsets (part2)
Labels: component:tests, platform:gpu-intel
Opened Apr 9, 2026 by h-sadia (Contributor), 2 tasks
#4990: cpu: fix coverity major issues
Labels: platform:cpu-x64 (Intel64/AMD64 processors; Codeowner: @oneapi-src/onednn-cpu-x64)
Opened Apr 9, 2026 by tczeszun (Contributor)
#4987: aarch64: support for per_dim_0 scales and bf16 dst_dt in jit int8 matmul
Labels: component:common, component:tests, platform:cpu-aarch64 (Codeowner: @oneapi-src/onednn-cpu-aarch64)
Opened Apr 9, 2026 by michalowski-arm (Contributor), 2 tasks done
#4981: MFDNN-14690: Replace XE3P_35_10/11/UNKNOWN Core enum values with Xe3p
Labels: platform:gpu-intel, third_party
Opened Apr 8, 2026 by dyoussif (Contributor)
#4980: Kealanba/sdpa backport v312 pc
Labels: backport, platform:gpu-intel
Opened Apr 8, 2026 by kealan-barbieri (Contributor), 3 of 4 tasks
cpu, benchdnn: add reorder to/from grouped with different dts and use grouped in matmul ref
Labels: component:tests
cpu: aarch64: implement forward lnorm in SVE
Labels: component:common, platform:cpu-aarch64
#4973: graph: backend: dnnl: patterns: f32 to xf16 typecast before VS
Labels: component:graph-api (Codeowner: @oneapi-src/onednn-graph), component:tests
Opened Apr 8, 2026 by TaoLv (Contributor)
#4970: [GPU] xe: tensor: fix view_t::normalized_tlayout()
Labels: component:tests, platform:gpu-intel
Opened Apr 7, 2026 by echeresh (Contributor)
#4967: xe: sdpa, ggemm, gmlp: avoid copies
Labels: component:tests, platform:gpu-intel, third_party
Opened Apr 7, 2026 by atkassen (Contributor)
#4966: [rls-v3.12-pc] xe: gemm: jit: restrict bf16 quant stride override
Labels: backport, platform:gpu-intel
Opened Apr 7, 2026 by kealan-barbieri (Contributor)
#4962: benchdnn: inputs: graph: add cases for gated mlp with gelu activation
Labels: component:tests
Opened Apr 7, 2026 by TaoLv (Contributor)
#4961: graph: sdpa: support dropout seed/offset/prob in fused sdpa
Labels: component:graph-api, component:tests
Opened Apr 7, 2026 by TaoLv (Contributor)
#4960: [GPU][NVL-P] Use upconversion for unsupported scales on NVL-P
Labels: platform:gpu-intel
Opened Apr 6, 2026 by kealan-barbieri (Contributor), 4 tasks done
#4959: ze api: add support for persistent cache
Labels: component:api (Codeowner: @oneapi-src/onednn-arch), component:common, component:tests, platform:gpu-intel, third_party
Opened Apr 6, 2026 by dzarukin (Contributor)
#4958: [GPU] fixup matmul ref implementation
Labels: component:tests, platform:gpu-intel
Opened Apr 6, 2026 by dyoussif (Contributor)
#4957: benchdnn: cold cache improvement: attempt 2
Labels: component:graph-api, component:tests
Opened Apr 6, 2026 by dzarukin (Contributor)
#4956: [WIP][Do not review] common: memory: remove UB for tensor dimension checks (Draft)
Labels: component:common, component:tests
Opened Apr 6, 2026 by avmanerikar (Contributor)
Copy of Upconvert fp8 weights to xf16 in Matmul in case of xf16 activations
Labels: component:common, component:tests, platform:cpu-x64