Refactor CMake build for examples/models/llama (#18825)
GregoryComer wants to merge 1 commit into pytorch:main.
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/18825
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
❌ 16 New Failures, 2 Cancelled Jobs, 2 Pending, 3 Unrelated Failures as of commit 965a4a9 with merge base 74403e2.
NEW FAILURES — the following jobs have failed.
CANCELLED JOBS — the following jobs were cancelled; please retry.
BROKEN TRUNK — the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 9fe348c to 3f9c81d.
Force-pushed from 3f9c81d to 965a4a9.
Summary
Update the CMake build for examples/models/llama to be in-tree in the top-level build but excluded from `all`. This means that it can be built in a single step, but it will only be built when explicitly requested with `--target llama_main`.
This simplifies the build considerably (as evidenced by how many lines this PR removes) and prevents issues with iterative rebuilds or mismatched build options between `llama_main` and the core runtime.
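The in-tree, excluded-from-`all` pattern described above can be sketched in CMake roughly like this. This is an illustrative sketch, not the PR's literal diff; the source file name and linked runtime target are assumptions, while `llama_main` and the directory path come from the summary.

```cmake
# Top-level CMakeLists.txt: bring the example into the main build tree,
# but keep its targets out of the default `all` target. They are then
# only built when explicitly requested.
add_subdirectory(examples/models/llama EXCLUDE_FROM_ALL)

# examples/models/llama/CMakeLists.txt (sketch): the example target is
# defined inside the same tree as the core runtime, so it inherits the
# same build options and cannot drift out of sync with them.
add_executable(llama_main main.cpp)                    # source name assumed
target_link_libraries(llama_main PRIVATE executorch)   # runtime target name assumed
```

Because the targets share one build tree, an incremental build of `llama_main` also rebuilds any stale runtime dependencies in the same pass, which is what eliminates the iterative-rebuild and option-mismatch issues.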
Note that the top-level `make llama-cpu` command is unmodified, just simplified under the hood. For direct CMake builds, here's a before and after:

Before:
After:
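The original command snippets are not shown above, but the single-step flow the summary describes would look roughly like this (a sketch; the `cmake-out` build directory name is an assumption, not taken from the PR):

```shell
# Configure once at the repository root.
cmake -B cmake-out

# Build only on explicit request; llama_main is excluded from `all`,
# so a plain `cmake --build cmake-out` will not build it.
cmake --build cmake-out --target llama_main
```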