Qualcomm AI Engine Direct - Minimal Inference Runtime Core Requirement #18434
Conversation
Helpful Links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18434
❌ 1 New Failure, 2 Unrelated Failures as of commit 3c9a652 with merge base c7f1d72.
NEW FAILURE: the following job has failed.
BROKEN TRUNK: the following jobs failed but were also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This PR needs a
@winskuo-quic can you please rebase?
Force-pushed from b52b847 to 0f916c6.
I have rebased. Thanks.
Force-pushed from 0f916c6 to 4883181.
@abhinaykukkadapu has imported this pull request. If you are a Meta employee, you can view this in D99394613.
```cpp
auto dump_tensor = executorch::extension::from_blob(
    QNN_TENSOR_VER_PTR(output_tensor)->clientBuf.data,
    sizes,
std::vector<executorch::aten::StridesType> stride_size(sizes.size(), 0);
```
@winskuo-quic do we want to init zeros for stride?
Updated to align with from_blob behavior. Thanks.
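For context on this thread: a zero-filled stride vector does not describe a contiguous buffer, whereas `from_blob`-style APIs conventionally derive row-major (contiguous) strides from the sizes when no strides are supplied. A minimal sketch of that derivation, under the assumption that ExecuTorch follows the same convention; the helper name `contiguous_strides` and the plain `int64_t` element type are illustrative here, not the actual ExecuTorch types:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: derive row-major (contiguous) strides from sizes,
// i.e. what a from_blob-style API assumes when strides are omitted.
// stride[last] = 1; each earlier stride is the product of all later sizes.
std::vector<int64_t> contiguous_strides(const std::vector<int64_t>& sizes) {
  std::vector<int64_t> strides(sizes.size(), 1);
  for (int i = static_cast<int>(sizes.size()) - 2; i >= 0; --i) {
    strides[i] = strides[i + 1] * sizes[i + 1];
  }
  return strides;
}
```

For example, sizes `{2, 3, 4}` yield strides `{12, 4, 1}`, which is why zero-initialized strides would disagree with the tensor `from_blob` builds over `clientBuf.data`.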
1. Removed `from_blob` tensor creation
2. Compile and linking option optimization
3. Function visibility optimization
4. Expose power config to user
Force-pushed from 4883181 to 3c9a652.
Summary
Test plan
Add `--direct_build_folder build-hexagon/` at the end of any of `TestQNNQuantizedUtils`, `TestQNNQuantizedModel`, `TestQNNFloatingPointModel`, `TestQNNFloatingPointOperator`.
Author: @haowhsu-quic, @shewu-quic, @winskuo-quic
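As a usage sketch of the test plan above: the script path and every flag other than `--direct_build_folder` are assumptions for illustration, not taken from this PR, so adjust them to your local setup.

```shell
# Hypothetical invocation of one of the listed suites with the new flag.
# The script path and --model value are illustrative assumptions.
python backends/qualcomm/tests/test_qnn_delegate.py TestQNNQuantizedModel \
  --model SM8650 \
  --direct_build_folder build-hexagon/
```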