
fix: update cpu_get_num_math to common_cpu_get_num_math (llama.cpp rename)#603

Open
WayOfTheMap wants to merge 1 commit into withcatai:master from WayOfTheMap:fix/cpu-get-num-math-rename

Conversation

@WayOfTheMap

Summary

After npx node-llama-cpp source download --release latest --gpu metal, the addon fails to compile with:

addon/AddonContext.cpp:400:41: error: use of undeclared identifier 'cpu_get_num_math'
addon/AddonContext.cpp:748:75: error: use of undeclared identifier 'cpu_get_num_math'
addon/addon.cpp:54:42:        error: use of undeclared identifier 'cpu_get_num_math'

Upstream llama.cpp renamed cpu_get_num_math() to common_cpu_get_num_math() in common/common.h as part of the common_* namespace pass. The function signature is unchanged.

This PR updates the three call sites in the addon to use the new name.

Changes

  • llama/addon/AddonContext.cpp:400: cpu_get_num_math() → common_cpu_get_num_math()
  • llama/addon/AddonContext.cpp:748: cpu_get_num_math() → common_cpu_get_num_math()
  • llama/addon/addon.cpp:54: cpu_get_num_math() → common_cpu_get_num_math()
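
Each change is a pure identifier rename; the signature and return value are untouched. Illustrative patch shape (the surrounding expression here is a sketch, not the exact addon code):

```diff
-    contextParams.n_threads = cpu_get_num_math();
+    contextParams.n_threads = common_cpu_get_num_math();
```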

Related

This is one of three compile blockers when rebuilding the addon against current llama.cpp on macOS Sequoia (SDK 26.2). The other two — the common → llama-common library target rename and the deleted-copy-constructor error from std::atomic_bool loaded = false; — are addressed by #597 (@CreatiCoding). All three patches together produce a working build.

Verification

Applied this patch plus #597 against withcatai/node-llama-cpp@v3.18.1, ran npx node-llama-cpp source download --release latest --gpu metal (resolved to llama.cpp b9145), and verified end-to-end on macOS 15.x arm64 with Metal:

| Model | Load | Generate |
| --- | --- | --- |
| Llama 3.2 3B Instruct Q4_K_M | ✅ | ✅ |
| Qwen 2.5 7B Instruct Q4_K_M | ✅ | ✅ |
| Gemma 4 E4B Instruct Q4_K_M | ✅ | ✅ |

Gemma 4 specifically requires this rebuild path — the prebuilt llama.cpp b8390 that ships with v3.18.1 predates Gemma 4 architecture support, so loading a Gemma 4 GGUF without rebuilding returns unknown model architecture: 'gemma4'.

fix: update cpu_get_num_math to common_cpu_get_num_math (llama.cpp rename)

Upstream llama.cpp renamed cpu_get_num_math() to common_cpu_get_num_math()
in common/common.h as part of the common_* namespace pass. The function
signature is unchanged. This updates the three call sites in the addon to
use the new name, resolving the compile errors when rebuilding against
current llama.cpp.

- llama/addon/AddonContext.cpp:400
- llama/addon/AddonContext.cpp:748
- llama/addon/addon.cpp:54
@andreinknv

+1 — independently hit the same three undeclared-identifier errors today while rebuilding the addon against current llama.cpp on macOS arm64 (b9151). This patch fixes them cleanly; combined with #597 (the common → llama-common link target rename plus the atomic_bool loaded brace-init) the addon builds end-to-end. Verified locally with Qwen2.5-Coder Q4_K_M, jina-embeddings-v2-base-code, and bge-reranker-v2-m3 — all load and decode correctly on Metal.

Cleanly extractable from a larger working tree; great PR scope.

