[ET-VK] Lower reduce_peak_memory threshold from 500 MB to 10 MB#18891
During prepack, staging buffers accumulate in `buffers_to_clear_` until `flush()` is called. Previously, the `reduce_peak_memory` path (which calls `submit_and_wait` + `flush` to free staging buffers incrementally) only triggered when total constant data exceeded 500 MB. This meant models with moderate weight sizes (e.g. 42 MB) never benefited from incremental cleanup, causing all staging buffers to coexist in memory until the final flush.

Lowering the threshold to 10 MB enables incremental staging buffer cleanup for most models. On SceneX V9 FP16 (42 MB weights, Samsung S24 Adreno 750), this reduces the transient VMA peak during prepack from 89.6 MB to 57.3 MB (-36%) at a cost of ~15 ms additional load latency (+4.4%). Steady-state memory and inference performance are unaffected.

Authored with Claude.

Differential Revision: [D100332227](https://our.internmc.facebook.com/intern/diff/D100332227/)

ghstack-source-id: 365456277
Pull Request resolved: #18816
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #18816 by @SS-JIA
^ Please use the original PR as the source of truth for the PR details, comments, and reviews.
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/519/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/519/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/519/orig
Differential Revision: D100332227
@diff-train-skip-merge