Check requires-mark coverage across CI runs #11293
officialasishkumar wants to merge 3 commits into pydata:main
Conversation
Hi @officialasishkumar, thanks for looking into this! Hmm, I was wondering whether the original issue could be explored using custom marks (if that's something that pytest supports). Have you looked into that, @officialasishkumar? Maybe that would be a way forward? Solely grepping through error messages seems quite error-prone/flaky, and I don't know if that's how we should be handling something as fundamental as determining whether our test suite passes.

Also, can you elaborate on how you tested this, @officialasishkumar? The test plan doesn't really make sense to me, and the job was failing with many errors.
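(For context, the custom-marks idea could be sketched roughly as below. This is a hypothetical illustration, not xarray's actual code: `collect_requires_marks` and the stand-in item/mark classes are invented names; in a real conftest.py the helper would receive genuine pytest `Item` objects.)

```python
# Hypothetical sketch: record which requires_* marks appear on collected
# test items, so a later CI step can compare coverage across jobs.
# All names here are illustrative, not xarray's actual API.


def collect_requires_marks(items):
    """Map each requires_* mark name to the test ids that carry it."""
    seen = {}
    for item in items:
        for mark in item.iter_markers():
            if mark.name.startswith("requires_"):
                seen.setdefault(mark.name, []).append(item.nodeid)
    return seen


# Minimal stand-ins for pytest's Item/Mark objects, for demonstration only.
class _Mark:
    def __init__(self, name):
        self.name = name


class _Item:
    def __init__(self, nodeid, marks):
        self.nodeid = nodeid
        self._marks = marks

    def iter_markers(self):
        return iter(self._marks)


items = [
    _Item("test_dask.py::test_chunk", [_Mark("requires_dask")]),
    _Item("test_core.py::test_mean", [_Mark("slow")]),
]
print(collect_requires_marks(items))  # {'requires_dask': ['test_dask.py::test_chunk']}
```

In a real setup the mapping would be written out per job (e.g. via a `pytest_collection_modifyitems` hook) and merged in a follow-up CI step.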
cc @keewis, since you were thinking along with me in the original issue.
I see that you've posted #11290, #11291, #11292, and this PR in very quick succession. Have you read through our AI policy? https://docs.xarray.dev/en/latest/contribute/ai-policy.html Can you confirm that all of these contributions are in line with it? This is a lot of PRs to go through, and I want to make sure that maintainers' time is being respected while reviewing them.
Hi @VeckoTheGecko, yes, I have gone through the AI policy.
Description
Add a CI safeguard that uploads pytest-reportlog output from each test-matrix job and checks that tests skipped via requires_* markers are exercised somewhere in CI, unless the dependency gate is explicitly allowlisted as unsupported in GitHub Actions. This also adds focused tests for the coverage checker script.
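(A minimal sketch of what such a checker might look like, assuming the pytest-reportlog JSONL records expose marker names via their keywords field; the function name `uncovered_gates` and the exact record shape are assumptions for illustration, not the PR's actual script:)

```python
import json


def uncovered_gates(reportlog_lines, allowlist=frozenset()):
    """Return requires_* gates that were skipped somewhere but never ran.

    `reportlog_lines` is the concatenated pytest-reportlog JSONL output from
    every test-matrix job; `allowlist` holds gates known to be unsupported
    in GitHub Actions.
    """
    ran, skipped = set(), set()
    for line in reportlog_lines:
        rec = json.loads(line)
        if rec.get("$report_type") != "TestReport":
            continue
        # Marker names show up among the test's keywords in the report.
        gates = {k for k in rec.get("keywords", {}) if k.startswith("requires_")}
        if rec.get("outcome") == "skipped":
            skipped |= gates
        elif rec.get("outcome") == "passed":
            ran |= gates
    return skipped - ran - allowlist


# Demo with two fake records, as if merged from different CI jobs.
records = [
    json.dumps({"$report_type": "TestReport", "outcome": "passed",
                "keywords": {"requires_dask": 1}}),
    json.dumps({"$report_type": "TestReport", "outcome": "skipped",
                "keywords": {"requires_pint": 1}}),
]
print(uncovered_gates(records))  # {'requires_pint'}
```

A gate is flagged only if no job anywhere ran it, which is what makes merging the reportlogs across the whole matrix (rather than checking each job in isolation) necessary.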
Checklist
- Closes "requires_* tests are run at least once" #11273
- whats-new.rst (N/A - internal CI change)
- api.rst (N/A - internal tooling change)

AI Disclosure
Tools: Codex
Test plan
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 python3 -m pytest -o addopts= /tmp/xarray-wt-11273/ci/test_check_requires_coverage.py -q