
add spec decoding #198

Open
fyuan1316 wants to merge 3 commits into master from speculative-decoding

Conversation

fyuan1316 (Contributor) commented Apr 24, 2026

Summary by CodeRabbit

  • Documentation
    • Added comprehensive guide for enabling and operating speculative decoding in vLLM InferenceService, including configuration methods, step-by-step setup examples, verification procedures, and troubleshooting guidance.

coderabbitai (Bot) commented Apr 24, 2026

Warning

Rate limit exceeded

@fyuan1316 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 14 minutes and 20 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 14 minutes and 20 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e8db0a35-1106-423a-bb5d-e07722d14b49

📥 Commits

Reviewing files that changed from the base of the PR and between d283463 and 47e20c2.

📒 Files selected for processing (11)
  • .gitignore
  • docs/en/dify/install.mdx
  • docs/en/installation/ai-cluster.mdx
  • docs/en/kserve/install.mdx
  • docs/en/label_studio/install.mdx
  • docs/en/llama_stack/quickstart.mdx
  • docs/en/model_inference/inference_service/functions/inference_service.mdx
  • docs/en/model_inference/inference_service/how_to/vllm_expert_parallel.mdx
  • docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx
  • docs/en/model_inference/model_management/functions/model_repository.mdx
  • docs/en/trustyai/ai-guardrails.mdx

Walkthrough

A new documentation page is added describing how to enable and operate speculative decoding for vLLM-backed KServe InferenceServices. It covers the supported methods (N-gram and EAGLE-3), configuration, benchmarks, YAML examples, verification steps, and troubleshooting.

Changes

  • Cohort / File(s): vLLM Speculative Decoding Documentation (docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx)
  • Summary: New comprehensive guide covering speculative decoding setup, configuration methods, benchmark metrics, end-to-end YAML examples, verification procedures, and troubleshooting guidance for KServe InferenceService deployments.
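
For reviewers who want a feel for what the guide configures, here is a minimal sketch of such a manifest. It is not copied from the PR: the name, namespace, and storage URI are hypothetical, and the --speculative-config flag shape follows recent vLLM releases, so verify it against the versions the guide targets.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: my-model        # hypothetical example name
  namespace: demo       # hypothetical example namespace
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      storageUri: pvc://models/my-model   # hypothetical model location
      args:
        # N-gram speculative decoding: draft tokens come from prompt lookup,
        # so no separate draft model is required. For EAGLE-3, the method
        # and draft-model settings in the guide would differ.
        - --speculative-config
        - '{"method": "ngram", "num_speculative_tokens": 5, "prompt_lookup_max": 4}'
```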

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~18 minutes

Poem

🐰 A tale of decoding swift and fair,
Where speculation fills the air,
With N-gram paths and EAGLE's flight,
The docs now guide you through the night,
Fast predictions, metrics bright! 📊✨

🚥 Pre-merge checks: ✅ 5 passed

  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'add spec decoding' directly reflects the main change: adding documentation for speculative decoding functionality for vLLM in KServe InferenceService.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Commit unit tests in branch speculative-decoding

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

coderabbitai (Bot) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx (1)

338-338: Add explanation or concrete examples for the {{.Name}} and {{.Namespace}} template variables.

The --served-model-name {{.Name}} {{.Namespace}}/{{.Name}} pattern on lines 338 and 403 (and the same pattern across vllm_expert_parallel.mdx and create_inference_service_cli.mdx) relies on Alauda AI's runtime template engine to substitute these Go template placeholders at deploy time. Without inline documentation or a link explaining how the platform processes these variables, readers copying the manifest will not understand that {{.Name}} and {{.Namespace}} are placeholders, not literal model names.

Either:

  • (a) Add a brief note near the first example clarifying that {{.Name}} and {{.Namespace}} tokens are resolved by the Alauda AI runtime template engine at deployment, with a link to the runtime template documentation, or
  • (b) Show the snippet with concrete example values and document the templating variant separately.

This would match how other inference-service guides in the repository document templated fields.
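
To make option (b) concrete, here is a hedged sketch using the comment's own suggested values (my-model and my-namespace are hypothetical examples), pairing the concrete form with the templated variant:

```yaml
# Concrete form with hypothetical example values:
args:
  - --served-model-name
  - my-model
  - my-namespace/my-model
---
# Templated variant: {{.Name}} and {{.Namespace}} are Go template tokens
# resolved by Alauda AI's runtime template engine at deployment time,
# not literal model names.
args:
  - --served-model-name
  - "{{.Name}}"
  - "{{.Namespace}}/{{.Name}}"
```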

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx` at line 338: the doc uses the Go template placeholders {{.Name}} and {{.Namespace}} in the --served-model-name flag (the string pattern "--served-model-name {{.Name}} {{.Namespace}}/{{.Name}}") but doesn't explain that they are runtime template tokens. Update the first occurrence to either (a) add a brief inline note stating that {{.Name}} and {{.Namespace}} are resolved by Alauda AI's runtime template engine at deployment, with a link to the runtime template documentation, or (b) replace the example with concrete values (e.g., my-model my-namespace/my-model) and add a short separate note showing the templated variant using {{.Name}} and {{.Namespace}}. Update the other occurrences (the same pattern in vllm_expert_parallel.mdx and create_inference_service_cli.mdx) for consistency.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:

In `docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx`:
  • Line 241: Replace the incorrect version references claiming storageUris is available in "KServe 0.17" with the correct minimum version, "KServe 0.16". Update all occurrences of the string "KServe 0.17" (and variants such as "KServe ≥ 0.17" and "available from KServe 0.17") so that the four noted locations (the sentence containing "storageUris is a KServe field...", the phrase "KServe ≥ 0.17", and the two later mentions around lines ~606 and ~623) correctly reflect that storageUris was introduced in v0.16.0.
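
For orientation, a minimal sketch of how a storageUris stanza might look on KServe ≥ 0.16. The list-of-URIs shape and the URI values are assumptions for illustration; consult the KServe 0.16 API reference for the exact schema.

```yaml
spec:
  predictor:
    model:
      modelFormat:
        name: vLLM
      # storageUris (per the review comment, introduced in KServe v0.16.0)
      # is meant to pull multiple artifacts, e.g. a base model plus a
      # speculative-decoding draft model. The field shape below is an
      # assumption for illustration only.
      storageUris:
        - pvc://models/base-model     # hypothetical base-model URI
        - pvc://models/eagle3-draft   # hypothetical draft-model URI
```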

---

Nitpick comments: same as the single-comment prompt above (line 338, the {{.Name}} / {{.Namespace}} template placeholders).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6583efc4-62c2-4997-a959-b0018fb1fd6b

📥 Commits

Reviewing files that changed from the base of the PR and between d4c4a48 and d283463.

📒 Files selected for processing (1)
  • docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx

Outdated comment thread: docs/en/model_inference/inference_service/how_to/vllm_speculative_decoding.mdx
cloudflare-workers-and-pages (Bot) commented Apr 24, 2026

Deploying alauda-ai with Cloudflare Pages

Latest commit: 47e20c2
Status: ✅  Deploy successful!
Preview URL: https://1703b247.alauda-ai.pages.dev
Branch Preview URL: https://speculative-decoding.alauda-ai.pages.dev

View logs

