vLLM-based Eval framework #3531
Conversation
Codecov Report: ✅ All modified and coverable lines are covered by tests.
The implementation of the vLLM-based evaluation framework is a strong addition to MaxText, providing native support for custom benchmarks, lm-evaluation-harness, and evalchemy. The code is well-structured, but it needs critical updates to correctly support multi-host TPU environments where lead-rank coordination is essential.
🔍 General Feedback
- Rank Coordination: In multi-host TPU setups, client-side operations (warmup, generation, reporting) must be restricted to `jax.process_index() == 0` to avoid redundant work and failures on non-lead ranks (a brief sketch follows this list).
- Configurability: Key parameters like request timeouts should be made configurable via the CLI/config files rather than being hardcoded.
- Efficiency: Minor optimizations in NLTK data handling and FastAPI request processing would improve the overall robustness and performance of the evaluation tool.
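Below is a minimal sketch of the lead-rank gating suggested above. The helper names and overall structure are illustrative assumptions, not the PR's actual code; only the `jax.process_index()` check itself comes from the feedback.

```python
import jax


def is_lead_rank() -> bool:
  """True only on the host that should drive client-side eval work."""
  return jax.process_index() == 0


def run_client_side_eval(run_warmup, run_generation, report_results, requests):
  """Gate warmup/generation/reporting on the lead rank so multi-host runs
  do not issue duplicate requests or write duplicate reports."""
  if not is_lead_rank():
    return None  # non-lead ranks skip all client-side work
  run_warmup()
  results = run_generation(requests)
  report_results(results)
  return results
```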
RissyRan left a comment
Thanks for the PR! I’ve done a high-level pass, though I haven’t done a deep dive into the code just yet. Have you had a chance to run any benchmarks? I'm curious if you're seeing decent scores. Also, is multi-host benchmarking for large models within the scope of this PR?
Requires: `pip install evalchemy`

```bash
python -m maxtext.eval.runner.evalchemy_runner \
```
Have you tested whether this works at large scale, e.g. a workload on v5p-64?
Ran some tests on v5p-64, example results in b/508639301.
| `--hf_token` | HuggingFace token for gated models |
| `--num_samples` | Limit number of eval samples |
| `--hf_mode` | Force HF safetensors mode (disables MaxTextForCausalLM mode) |
| `--tensor_parallel_size` | vLLM tensor parallelism |
Is this the only sharding supported?
Added and tested EP and DP as well. There were some issues testing EP on vLLM with scanned RL checkpoints, but this PR unblocked the remaining ones.
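For readers following along, a minimal sketch of bringing up a vLLM engine with tensor parallelism; `tensor_parallel_size` is a standard vLLM argument, while the expert- and data-parallel wiring varies across vLLM versions and is deliberately left out of this sketch:

```python
from vllm import LLM


def build_tp_engine(model_path: str, tensor_parallel_size: int) -> LLM:
  """Bring up a vLLM engine sharded across `tensor_parallel_size` devices."""
  return LLM(model=model_path, tensor_parallel_size=tensor_parallel_size)


# Hypothetical usage, mirroring --tensor_parallel_size 4 from this thread:
# engine = build_tp_engine("meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=4)
```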
python -m maxtext.eval.runner.eval_runner ...

### Configuration (eval_runner)
Do you think it would be useful to put the common configs together, and then only add the new ones for eval_runner, lm_eval_runner, and evalchemy_runner?
I made a first pass at moving some common configs out, but there are still more that are shared between the runners; I will clean this up in a follow-up PR.
logger = logging.getLogger(__name__)

# Maps MaxText benchmark names to lm-eval task names.
_TASK_MAP: dict[str, str] = {
Wondering if we could run lm-eval tasks directly instead of adding an extra mapping layer here? That way any future benchmarks in lm-eval-harness could be used directly. Similar comment for the other frameworks, if applicable.
Agreed, updated.
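For context on what "run lm-eval tasks directly" can look like, here is a minimal sketch using lm-eval-harness's `simple_evaluate` entry point; the backend and model_args wiring shown is an assumption, not the runner's actual adapter:

```python
import lm_eval


def run_lm_eval(task_names: list[str], model_args: str, num_fewshot: int | None = None) -> dict:
  """Forward user-supplied task names straight to lm-eval-harness,
  with no MaxText-side mapping table in between."""
  output = lm_eval.simple_evaluate(
      model="hf",             # assumption: HF-style backend; the PR's runner may use its own adapter
      model_args=model_args,  # e.g. "pretrained=<model path>" for the HF backend
      tasks=task_names,       # any task known to lm-eval-harness works unchanged
      num_fewshot=num_fewshot,
  )
  return output["results"]
```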
force-pushed from cb0295b to 348c539
force-pushed from c2d7d04 to 03ce83a
def sample_requests(
    self,
    num_samples: int | None,
    tokenizer,
Should we also annotate the tokenizer type here? Is the base class `PreTrainedTokenizerBase` from transformers?
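If the base class is indeed transformers' `PreTrainedTokenizerBase` (an assumption the PR author would need to confirm), the annotation could look like this; the surrounding class is a hypothetical stand-in:

```python
from transformers import PreTrainedTokenizerBase


class BenchmarkDataset:  # hypothetical stand-in for the PR's dataset class
  def sample_requests(
      self,
      num_samples: int | None,
      tokenizer: PreTrainedTokenizerBase,  # covers both fast and slow HF tokenizers
  ) -> list:
    """Return sampled eval requests; only the signature matters for this sketch."""
    raise NotImplementedError
```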
@dipannita08 has this code been tested with an nnx checkpoint? @hengtaoguo recently added support for this and it is needed in order to support distilled checkpoints. Hengtao's PR: #3188
RissyRan left a comment
Thanks for the change!
Potentially we could do a run and see how it performs? Kurt's final presentation shows timings for common benchmarks using JetStream; it would be great to see if we can speed those up using this tool.
@@ -0,0 +1,5 @@
# MLPerf OpenOrca evaluation config.

benchmark: "mlperf_openorca"
Could you provide some context, e.g. an example of perf configs for inference?
This config is passed via `--config` to the eval runner. It sets the benchmark name, generation length, and sample count; all server, model, and parallelism params come from CLI args.
Example:
```bash
python -m maxtext.eval.runner.run \
  --runner eval \
  --config src/maxtext/eval/configs/mlperf.yml \
  --checkpoint_path $CHECKPOINT \
  --model_name llama3.1-8b \
  --hf_path meta-llama/Llama-3.1-8B-Instruct \
  --hf_token $HF_TOKEN \
  --base_output_directory gs://<bucket>/ \
  --run_name $RUN \
  --gcs_results_path gs://<bucket>/results/mlperf.json \
  --max_model_len 8192 \
  --tensor_parallel_size 4 \
  --expert_parallel_size 1 \
  --data_parallel_size 1 \
  --hbm_memory_utilization 0.7 \
  --max_num_batched_tokens 4096 \
  --max_num_seqs 256 \
  --num_samples 5000 \
  --max_tokens 1024 \
  --temperature 0.0 \
  --concurrency 64 \
  --log_level INFO
```
Only `--config`, `--hf_path`, `--base_output_directory`, `--run_name`, and `--max_model_len` are required; the others have defaults or can be inherited from the config file.
@@ -0,0 +1,8 @@
# Base evaluation configuration.
Shall we also include benchmark name or eval dataset in this base yml?
The base config covers server and generation parameters that are shared across all eval runs (temperature, concurrency, tensor_parallel_size, etc.). The benchmark name and dataset-specific settings (e.g. num_samples, max_tokens) live in task-specific configs like mlperf.yml, so we can reuse the base config for different benchmarks without modification.
For the harness-based runners (lm_eval, evalchemy), benchmark/task selection is handled by the `--tasks` CLI arg (e.g. `--tasks gsm8k mmlu gpqa`) rather than config files (examples are in the bug shared above).
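To illustrate the split described above, a small sketch of how a runner might layer a task config over the base config; the file names, keys, and flat-merge strategy are assumptions for illustration, not the PR's actual schema:

```python
import yaml


def load_eval_config(base_path: str, task_path: str) -> dict:
  """Layer benchmark-specific settings over shared serving/generation defaults."""
  with open(base_path, encoding="utf-8") as f:
    config = yaml.safe_load(f) or {}  # e.g. temperature, concurrency, tensor_parallel_size
  with open(task_path, encoding="utf-8") as f:
    config.update(yaml.safe_load(f) or {})  # e.g. benchmark, num_samples, max_tokens
  return config


# Hypothetical usage:
# cfg = load_eval_config("configs/base.yml", "configs/mlperf.yml")
```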
upload_results(output["local_path"], gcs_results_path)


def add_server_args(parser: argparse.ArgumentParser) -> None:
Will those configs be included in the base YAML config?
Yes, targeting this type of refactoring change in a follow-up PR so we can merge the implementation ASAP.
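Until that refactor lands, here is a rough sketch of what such a server-argument group might look like; the flag names are taken from the example invocation earlier in this thread, while the defaults and help strings are assumptions:

```python
import argparse


def add_server_args(parser: argparse.ArgumentParser) -> None:
  """Register vLLM server/parallelism flags on the eval CLI (illustrative only)."""
  group = parser.add_argument_group("server")
  group.add_argument("--tensor_parallel_size", type=int, default=1, help="vLLM tensor parallelism.")
  group.add_argument("--expert_parallel_size", type=int, default=1, help="vLLM expert parallelism.")
  group.add_argument("--data_parallel_size", type=int, default=1, help="vLLM data parallelism.")
  group.add_argument("--hbm_memory_utilization", type=float, default=0.7, help="Fraction of HBM vLLM may use.")
  group.add_argument("--max_num_batched_tokens", type=int, default=4096, help="Token budget per engine step.")
  group.add_argument("--max_num_seqs", type=int, default=256, help="Maximum concurrent sequences.")
```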
description="MaxText model evaluation runner.",
formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
parser.add_argument("--config", required=True, help="Path to eval config file.")
Will those configs be included in the base YAML config?
Yes, there are several common configs that can be moved into base. Targeting this type of refactoring change in a follow-up PR so we can merge the implementation ASAP.
Yes, the checkpoint loading in `model_creation_utils.py` defaults to NNX and falls back to Linen if it detects the `params.params` double-nesting. `MaxTextForCausalLM` in the adapter is an `nnx.Module`. Some example runs with Qwen3-30B-A3B NNX checkpoints are in b/508639301.
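A hypothetical illustration of the fallback described above (not the actual `model_creation_utils.py` logic), showing just the shape of a check on the restored parameter tree:

```python
def detect_checkpoint_format(restored_state: dict) -> str:
  """Guess the checkpoint flavor from the restored pytree: Linen-style
  checkpoints carry the params.params double-nesting, NNX-style ones do not."""
  params = restored_state.get("params")
  if isinstance(params, dict) and "params" in params:
    return "linen"
  return "nnx"


# Hypothetical usage:
# fmt = detect_checkpoint_format(restored_tree)
```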
Please see E2E results in the bug: b/508639301
Description
Implement an evaluation framework with a vLLM backend. Requirements, design, and further details: go/eval-framework-vllm
FIXES: b/508639301
Tests
Checklist
Before submitting this PR, please make sure (put X in square brackets):
`gemini-review` label.