
[WIP] Increase pod-network-availability timeout to 15 minutes #31027

Open
Neha-dot-Yadav wants to merge 1 commit into openshift:main from Neha-dot-Yadav:monitor-increase-timeout

Conversation


@Neha-dot-Yadav Neha-dot-Yadav commented Apr 17, 2026

The pod-network-availability monitor tests are consistently failing on PowerVS infrastructure during the collection phase. Containers are stuck in ContainerCreating state, causing timeouts.

Failing tests:
  • [Monitor:pod-network-availability][Jira:"Network / ovn-kubernetes"] monitor test pod-network-availability collection
  • [Monitor:pod-network-availability][Jira:"Network / ovn-kubernetes"] monitor test pod-network-availability cleanup

Root Cause
Analysis of deployment timestamps from test runs shows that containers are taking approximately 12 minutes to start on PowerVS, exceeding the current 5-minute (300-second) timeout.

Evidence from deployment timestamps:

4.22 Test Run:

pod-network-to-host-network-disruption-poller:
  • Started: 2026-04-15T06:03:17Z
  • Finished: 2026-04-15T06:15:15Z
  • Duration: ~12 minutes

host-network-to-host-network-disruption-poller:
  • Started: 2026-04-15T06:03:17Z
  • Finished: 2026-04-15T06:15:15Z
  • Duration: ~12 minutes
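
(Sanity check: 06:15:15Z - 06:03:17Z = 11 min 58 s, so each poller needed just under 12 minutes, roughly 2.4x the 300-second limit.)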

Test run links:
  • 4.22: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-multiarch-main-nightly-4.22-ocp-e2e-ovn-powervs-capi-multi-p-p/2044264624701837312/
  • 4.21: https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/test-platform-results/logs/periodic-ci-openshift-multiarch-main-nightly-4.21-ocp-e2e-ovn-powervs-capi-multi-p-p/2043404000346247168/

Workaround:
This PR increases the service endpoint wait timeout from 5 minutes to 15 minutes to accommodate the slower container startup times observed on PowerVS infrastructure.
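
A minimal sketch of the net change, reconstructed from the CodeRabbit walkthrough and review diff below (the final revision also lengthens the poll interval from 1s to 5s):

-	err = wait.PollUntilContextTimeout(ctx, 1*time.Second, 300*time.Second, true, pna.serviceHasEndpoints)
+	err = wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, pna.serviceHasEndpoints)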

Related
Bug: https://redhat.atlassian.net/browse/OCPBUGS-83579
Similar fix: #29970 (OCPBUGS-58354)

Summary by CodeRabbit

  • Bug Fixes
    • Increased the verification timeout and reduced polling frequency for service endpoint checks in network monitoring tests, extending the wait to 15 minutes. This improves test reliability, reduces intermittent timeout failures during startup, and lowers churn from frequent polling.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress (Indicates that a PR should not merge because it is a work in progress.) label Apr 17, 2026
@openshift-ci-robot openshift-ci-robot added the jira/severity-moderate (Referenced Jira bug's severity is moderate for the branch this PR is targeting.) and jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type.) labels Apr 17, 2026
@openshift-ci-robot

@Neha-dot-Yadav: This pull request references Jira Issue OCPBUGS-83579, which is invalid:

  • expected the bug to target the "5.0.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.

Details

In response to this: (the PR description, quoted verbatim above)
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci-robot openshift-ci-robot added the jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.) label Apr 17, 2026
@coderabbitai

coderabbitai Bot commented Apr 17, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: e82a596c-b517-4581-afc0-b2a92e0a8352

📥 Commits

Reviewing files that changed from the base of the PR and between 9a07cca and 1913f76.

📒 Files selected for processing (1)
  • pkg/monitortests/network/disruptionpodnetwork/monitortest.go
✅ Files skipped from review due to trivial changes (1)
  • pkg/monitortests/network/disruptionpodnetwork/monitortest.go

Walkthrough

The polling interval in the pod network availability check was increased from 1s to 5s, and the maximum wait duration was extended from 300 seconds to 15 minutes in podNetworkAvalibility.PrepareCollection; error handling and overall control flow remain unchanged.

Changes

Timeout & Polling: pkg/monitortests/network/disruptionpodnetwork/monitortest.go
  Changed the polling frequency from 1 * time.Second to 5 * time.Second and extended the overall timeout from 300 * time.Second to 15 * time.Minute in podNetworkAvalibility.PrepareCollection; behavior on timeout is unchanged.
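
For context, a minimal self-contained sketch of the polling pattern being tuned here; serviceReady is a hypothetical stand-in for the real pna.serviceHasEndpoints condition:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx := context.Background()

	// Hypothetical stand-in for pna.serviceHasEndpoints: report done once
	// the target service has ready endpoints (the real check lists
	// EndpointSlices via the Kubernetes client).
	serviceReady := func(ctx context.Context) (bool, error) {
		return true, nil // placeholder result for this sketch
	}

	// Evaluate immediately (the trailing true), then every 5s, for up to 15m.
	if err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, serviceReady); err != nil {
		fmt.Printf("service endpoints not ready within 15m: %v\n", err)
	}
}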

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

🚥 Pre-merge checks | ✅ 10
✅ Passed checks (10 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly describes the main change: increasing the pod-network-availability timeout from 5 minutes to 15 minutes, which directly matches the file change increasing the wait timeout parameter.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.
  • Stable And Deterministic Test Names: ✅ Passed. The modified file does not contain any Ginkgo test declarations, so there are no test names to evaluate for dynamic content.
  • Test Structure And Quality: ✅ Passed. The PR modifies monitor test fixture code, not Ginkgo tests, so the Ginkgo-specific check does not apply.
  • Microshift Test Compatibility: ✅ Passed. The PR only modifies timeout configuration in an existing monitor test implementation and does not add new Ginkgo e2e tests.
  • Single Node Openshift (Sno) Test Compatibility: ✅ Passed. This PR does not add any new Ginkgo e2e tests; it only modifies a timeout value in an existing monitor test implementation.
  • Topology-Aware Scheduling Compatibility: ✅ Passed. The PR only modifies timeout and polling interval values in the PrepareCollection method and does not introduce deployment manifests, operators, or scheduling constraints.
  • Ote Binary Stdout Contract: ✅ Passed. The PR introduces only a single-line timeout value change in PrepareCollection, with no stdout writes or JSON communication contract violations.
  • Ipv6 And Disconnected Network Test Compatibility: ✅ Passed. This PR does not add new Ginkgo e2e tests; it is only a timeout adjustment in a monitor test framework component.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci Bot requested review from deads2k and p0lyn0mial April 17, 2026 06:07
@openshift-ci
Contributor

openshift-ci Bot commented Apr 17, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Neha-dot-Yadav
Once this PR has been reviewed and has the lgtm label, please assign sosiouxme for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
pkg/monitortests/network/disruptionpodnetwork/monitortest.go (1)

181-181: Timeout bump looks reasonable; consider a longer poll interval.

The 15-minute timeout comfortably covers the observed ~12-minute PowerVS container startup, consistent with neighboring timeouts in disruptionserviceloadbalancer/monitortest.go (20m/60m). One minor consideration: polling every 1 second for up to 15 minutes can issue ~900 EndpointSlices.List calls against the apiserver in the worst case. A slightly larger interval (e.g., 5s) would meaningfully reduce API load while barely affecting detection latency, matching patterns used elsewhere in the package (e.g., the 15s/20m poll in Cleanup at line 308).

♻️ Optional tweak
-	err = wait.PollUntilContextTimeout(ctx, 1*time.Second, 15*time.Minute, true, pna.serviceHasEndpoints)
+	err = wait.PollUntilContextTimeout(ctx, 5*time.Second, 15*time.Minute, true, pna.serviceHasEndpoints)
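
Worst-case arithmetic behind this suggestion: 15 minutes at a 1 s interval allows up to 15 × 60 = 900 condition evaluations (each an EndpointSlices.List call against the apiserver), while a 5 s interval caps it at 900 / 5 = 180, an 80% reduction in API calls for at most about 4 s of added detection latency.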
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pkg/monitortests/network/disruptionpodnetwork/monitortest.go` at line 181,
Increase the poll interval to reduce API server load: in the call to
wait.PollUntilContextTimeout that currently uses a 1*time.Second interval
(invoking pna.serviceHasEndpoints), change the interval to a larger value such
as 5*time.Second (or another small multiple) while keeping the 15*time.Minute
timeout; update the invocation of wait.PollUntilContextTimeout where
pna.serviceHasEndpoints is passed so it polls less frequently but retains the
same overall timeout.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro Plus

Run ID: dd9a08b3-3b64-4a4d-bf92-39f212fc6fbe

📥 Commits

Reviewing files that changed from the base of the PR and between d7ad0db and 9a07cca.

📒 Files selected for processing (1)
  • pkg/monitortests/network/disruptionpodnetwork/monitortest.go

@Neha-dot-Yadav Neha-dot-Yadav changed the title from "[WIP] OCPBUGS-83579: Increase pod-network-availability timeout to 15 minutes" to "[WIP] Increase pod-network-availability timeout to 15 minutes" Apr 17, 2026
@openshift-ci-robot openshift-ci-robot removed the jira/severity-moderate (Referenced Jira bug's severity is moderate for the branch this PR is targeting.), jira/valid-reference (Indicates that this PR references a valid Jira ticket of any type.), and jira/invalid-bug (Indicates that a referenced Jira bug is invalid for the branch this PR is targeting.) labels Apr 17, 2026
@openshift-ci-robot

@Neha-dot-Yadav: No Jira issue is referenced in the title of this pull request.
To reference a jira issue, add 'XYZ-NNN:' to the title of this pull request and request another refresh with /jira refresh.

Details

In response to this: (the PR description, quoted verbatim above)
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@Neha-dot-Yadav Neha-dot-Yadav force-pushed the monitor-increase-timeout branch from 9a07cca to 1913f76 on April 17, 2026 at 08:38
@openshift-merge-bot
Contributor

Scheduling required tests: the same 12 /test jobs listed above.

@openshift-ci
Contributor

openshift-ci Bot commented Apr 17, 2026

@Neha-dot-Yadav: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Failed tests (commit 1913f76, both required):
  • ci/prow/e2e-metal-ipi-ovn-ipv6: rerun with /test e2e-metal-ipi-ovn-ipv6
  • ci/prow/e2e-vsphere-ovn-upi: rerun with /test e2e-vsphere-ovn-upi

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress.
