Conversation
New elastickv-pebble-internals.json covering block cache hit rate, capacity, L0 pressure, compactions, memtables, FSM apply sync mode, and store write conflicts. Uses $datasource template variable and $node_id query variable driven by elastickv_pebble_l0_num_files.
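For reference, a query variable driven by a metric like this is typically defined in the dashboard JSON with Grafana's `label_values` templating function. A sketch of the shape (field values such as `includeAll`, `multi`, and `refresh` are assumptions, not taken from the actual commit):

```json
{
  "name": "node_id",
  "type": "query",
  "datasource": "$datasource",
  "query": "label_values(elastickv_pebble_l0_num_files, node_id)",
  "includeAll": true,
  "multi": true,
  "refresh": 2
}
```

`refresh: 2` re-resolves the variable on every time-range change, so new nodes show up without reloading the dashboard.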
Adds 'Unsupported Commands by Name' timeseries panel driven by the PR #594 elastickv_redis_unsupported_commands_total counter, and updates the 'Errors by Command' panel description so operators know the legacy 'unknown' bucket is broken down by real name in the new panel. The old panel is preserved to keep existing alerts wired up.
Merges the 9 panels from elastickv-redis-hotpath.json into a collapsed 'Hot Path (legacy PR #560)' row at the bottom of elastickv-redis-summary.json, preserving every panel verbatim, and retitles the summary to 'Elastickv Redis' now that it is the main Redis dashboard. Adds a $job template variable so the nested panels keep working. Deletes the standalone hotpath dashboard and updates the four source comments and the docs page that referenced it by path.
elastickv-cluster-summary was a pure DynamoDB-metric dashboard despite its title, so rename the file to elastickv-dynamodb.json, retitle it to 'Elastickv DynamoDB' with uid elastickv-dynamodb, and drop the misleading 'summary' tag. This also doubles as the dedicated DynamoDB dashboard requested in the refactor. Retitle elastickv-cluster-overview.json from 'Elastickv Cluster Overview' to 'Elastickv Cluster' now that it is the sole cluster landing page. No panel content changed. Update README to list the new file set and point each dashboard at its intended operator task.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 8ad63cdd8f
```json
{
  "datasource": "$datasource",
  "editorMode": "code",
  "expr": "sum by (command) (rate(elastickv_redis_unsupported_commands_total{job=\"elastickv\",node_id=~\"$node_id\"}[$__rate_interval]))",
```
Avoid querying unexported unsupported-command metric
This panel queries elastickv_redis_unsupported_commands_total, but that metric is not exported anywhere in the current tree (checked with repo search in monitoring/*.go), so the new chart will show no data in production until another change lands. Because the panel is presented as the breakdown for the unknown error bucket, shipping it before instrumentation is available creates a misleading observability gap for Redis incident triage.
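The "bounded metric" wording in PR #594's title suggests the counter's `command` label is kept to a fixed set of values. A stdlib-only sketch of that kind of bounding scheme, purely for reviewers' intuition (the allowlist contents and function names here are hypothetical, not the PR's actual code):

```go
package main

import "fmt"

// knownUnsupported is a hypothetical fixed allowlist: only these command
// names ever become a label value, so the metric's cardinality is bounded.
var knownUnsupported = map[string]bool{
	"SUBSCRIBE": true,
	"MULTI":     true,
	"WAIT":      true,
}

// unsupportedLabel maps an arbitrary client command to a bounded label:
// allowlisted names pass through, everything else collapses into "other".
func unsupportedLabel(cmd string) string {
	if knownUnsupported[cmd] {
		return cmd
	}
	return "other"
}

func main() {
	fmt.Println(unsupportedLabel("MULTI"))    // allowlisted, kept as-is
	fmt.Println(unsupportedLabel("FLUSHALL")) // not allowlisted, collapsed
}
```

The collapse into a catch-all is what keeps the `sum by (command)` panel query safe even if clients send arbitrary garbage command names.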
```json
{
  "datasource": "$datasource",
  "editorMode": "code",
  "expr": "elastickv_pebble_fsm_apply_sync_mode{job=\"elastickv\",node_id=~\"$node_id\",mode=\"nosync\"}",
```
Remove or defer panel for missing FSM sync-mode metric
The new Pebble internals dashboard depends on elastickv_pebble_fsm_apply_sync_mode, but this metric is not defined/exported in the repository’s monitoring code (again verified by searching monitoring/*.go), so this stat panel will always render empty for this commit. That leaves an always-dead panel in a core operator dashboard and can be mistaken for scrape/config failures rather than a not-yet-implemented metric.
Codex flagged that the "Unsupported Commands by Name" (Redis summary) and "FSM Apply Sync Mode" (Pebble internals) panels query metrics not yet on main, so during an incident operators could mistake the empty panels for a scrape failure. Append a sentence to each panel's description calling out the dependency and that the panel will populate automatically once PR #594 / PR #592 merge.
Thanks @codex for the P2 sequencing catch on the two forward-looking panels. Rather than pull them out (and then re-add once #594 / #592 merge), I annotated each panel description so on-call won't mistake an empty graph for a scrape failure:
Once the dependent PRs merge, the panels activate automatically; no further dashboard change needed. JSON validated. Commit: 43fe41b

/gemini review
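The annotation lands in each panel's `description` field. A sketch of the shape (the exact wording in commit 43fe41b may differ):

```json
{
  "description": "Breaks the legacy 'unknown' error bucket down by real command name. Depends on elastickv_redis_unsupported_commands_total from PR #594; expect this panel to be empty until #594 merges, after which it populates automatically."
}
```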
Codex Review: Didn't find any major issues. Another round soon, please!
Follow-up on branch

Layout

Not paired (and why): Grid: two 12-wide x 8-high panels per visual row. Queries use
Summary
- Add an Unsupported Commands by Name panel for the new counter (`elastickv_redis_unsupported_commands_total`) and annotate the legacy Errors by Command panel so operators know the `unknown` bucket is broken down by real name in the new panel.
- Rename `elastickv-cluster-summary.json` (which was already a pure DynamoDB dashboard) to `elastickv-dynamodb.json` / Elastickv DynamoDB with `uid=elastickv-dynamodb`, and retitle `elastickv-cluster-overview.json` to Elastickv Cluster now that it is the sole cluster landing page.

Before / After file list
- `monitoring/grafana/dashboards/elastickv-pebble-internals.json` (new)
- `elastickv-cluster-summary.json` renamed to `elastickv-dynamodb.json`
- `elastickv-cluster-overview.json` (title now `Elastickv Cluster`)
- `elastickv-redis-summary.json` (title `Elastickv Redis`; added Unsupported Commands panel and a collapsed Hot Path row)
- `monitoring/grafana/dashboards/elastickv-redis-hotpath.json` (deleted)
- `monitoring/grafana/dashboards/elastickv-raft-status.json`

What each dashboard now covers
- Elastickv Redis: includes the collapsed `Hot Path (legacy PR #560)` row with the 9 GET-fast-path panels preserved verbatim.

Dead-metric warnings
Two metrics referenced by panels in this PR are not yet present in `monitoring/*.go` on `main`:

- `elastickv_redis_unsupported_commands_total` (PR #594, "obs(redis): expose unsupported-command names via bounded metric"): the spec asked for a panel targeting this metric and this PR adds one; it will stay empty until #594 lands.
- `elastickv_pebble_fsm_apply_sync_mode` (PR #592, "perf(store): add ELASTICKV_FSM_SYNC_MODE for FSM apply fsync opt-out"): same situation; the panel is in place and will populate once #592 lands.

Both are noted here so reviewers can decide whether to sequence this PR after the upstream ones.
Test plan

- `python3 -c "import json; json.load(open('<file>'))"` passes on every modified dashboard file.
- Each modified dashboard keeps its required top-level keys: `schemaVersion`, `panels`, `time`, `title`, `uid`.
- `go build ./...` succeeds after updating the four source comments / doc page that referenced the deleted hotpath JSON by path.
- `go test ./...` passes, including `./monitoring/...`.
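The key-presence check above can be scripted rather than eyeballed. A minimal stdlib-only sketch (this is not code from the PR; a toy dashboard is inlined here, where real usage would `os.ReadFile` each dashboard JSON):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// requiredKeys are the top-level dashboard fields the test plan checks for.
var requiredKeys = []string{"schemaVersion", "panels", "time", "title", "uid"}

// missingKeys parses a dashboard document and reports which required
// top-level keys are absent.
func missingKeys(raw []byte) ([]string, error) {
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(raw, &doc); err != nil {
		return nil, err
	}
	var missing []string
	for _, k := range requiredKeys {
		if _, ok := doc[k]; !ok {
			missing = append(missing, k)
		}
	}
	return missing, nil
}

func main() {
	// Toy stand-in for a dashboard file; it deliberately omits "uid".
	dashboard := []byte(`{"schemaVersion": 39, "panels": [], "time": {}, "title": "Elastickv Redis"}`)
	missing, err := missingKeys(dashboard)
	if err != nil {
		panic(err)
	}
	fmt.Println(missing) // reports the absent "uid" key
}
```

Parsing into `map[string]json.RawMessage` avoids decoding panel bodies at all, so the check stays fast even on large dashboards.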