Split TLS observed config tests into OCP and HyperShift suites#31046
gangwgr wants to merge 1 commit into openshift:main
Conversation
Pipeline controller notification — this repository is configured in automatic mode; optional jobs can be triggered by comment.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings, or use the following commands to manage reviews.
Walkthrough

Split TLS observed-config tests into separate OCP and HyperShift suites; add HyperShift-specific HostedCluster TLS profile tests; make HyperShift management-cluster init optional and narrow pre-skips; add TLS port-forward and wire-level verification helpers; mark CVO as controlPlane.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant TestRunner as Test Runner
    participant MgmtAPI as Management API (HostedCluster)
    participant HCP as HostedControlPlane
    participant Guest as Guest Cluster (Operators/Deployments)
    participant Pod as Target Pod/Service
    participant Local as Local TLS Checker
    TestRunner->>MgmtAPI: discover HostedCluster (optional)
    Note right of MgmtAPI: discovery may fail → mgmt steps skipped
    TestRunner->>MgmtAPI: patch HostedCluster APIServer TLS profile
    MgmtAPI-->>HCP: propagate HostedCluster change
    HCP->>HCP: reconcile & rollout control-plane components
    HCP->>Guest: surface control-plane updates to guest
    Guest->>Pod: rollout deployments, update ConfigMaps & ObservedConfig
    TestRunner->>Pod: port-forward service to local port
    Local->>Pod: perform TLS handshakes (TLS1.3 / TLS1.2 etc.)
    Local-->>TestRunner: report allowed/rejected TLS versions
    TestRunner->>MgmtAPI: restore original TLS profile (DeferCleanup)
    MgmtAPI-->>HCP: restore propagation
    HCP->>Guest: reconverge guest components
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Important: Pre-merge checks failed. Please resolve all errors before merging; addressing warnings is optional.

❌ Failed checks (3 warnings, 1 inconclusive)
✅ Passed checks (8 passed)
Scheduling required tests:

/testwith openshift/tls-scanner/main/periodic-tls-observed-config-hypershift openshift/release#77236
Force-pushed from cd230df to 9204fcc
Scheduling required tests:

/hold
Force-pushed from 9204fcc to 6a544d6
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/tls/tls_observed_config.go`:
- Around line 476-483: Reset HyperShift management state variables at the start
of each BeforeEach by explicitly setting mgmtOC = nil, hcpNamespace = "",
hostedClusterName = "", and hostedClusterNS = "" so stale values from prior Its
cannot be reused; when credential lookup fails where you currently log the error
(the branch around exutil.NewHypershiftManagementCLI("tls-mgmt") and the call to
discoverHostedCluster), ensure you leave those variables in their zero values
instead of leaving prior values, and in the success branch assign mgmtOC and
call discoverHostedCluster to populate hostedClusterName/hostedClusterNS; apply
the same reset/failure-zeroing pattern to the other similar blocks (the other
occurrences around NewHypershiftManagementCLI/discoverHostedCluster).
- Around line 948-951: The current code unconditionally skips when
apierrors.IsNotFound(err), which can hide real failures; change the branching so
the skip via g.Skip(fmt.Sprintf(..., t.operatorConfigGVR.Resource,
t.operatorConfigName)) only happens when this is the known HyperShift
control-plane case (e.g., check a HyperShift/hosted indicator), otherwise treat
the missing operator config as a test failure (fail the test or report an
error). Concretely, replace the single apierrors.IsNotFound(err) branch with a
conditional that checks apierrors.IsNotFound(err) &&
<HyperShift-control-plane-condition> to call g.Skip(...), and add an else branch
to call g.Fatalf or return an error when apierrors.IsNotFound(err) on
non-HyperShift clusters; locate this change around the apierrors.IsNotFound(err)
check referencing t.operatorConfigGVR.Resource and t.operatorConfigName.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 5bc60b4a-e6b7-4fb3-b128-4403c410ace1
📒 Files selected for processing (1)
test/extended/tls/tls_observed_config.go
```go
if err != nil {
	e2e.Logf("HyperShift cluster detected but management cluster credentials are not available: %v", err)
	e2e.Logf("ConfigMap restoration tests will still run; TLS profile change tests will be skipped")
} else {
	mgmtOC = exutil.NewHypershiftManagementCLI("tls-mgmt")
	hostedClusterName, hostedClusterNS = discoverHostedCluster(mgmtOC, hcpNamespace)
	e2e.Logf("HyperShift: HC=%s/%s, HCP NS=%s", hostedClusterNS, hostedClusterName, hcpNamespace)
}
```
Reset HyperShift management state at the start of each BeforeEach.
mgmtOC, hcpNamespace, hostedClusterName, and hostedClusterNS are describe-scoped and are only conditionally reassigned. If credential lookup fails in a later test, stale values from a prior It can still be used, which makes skip behavior and patch targets unreliable.
💡 Proposed fix
g.BeforeEach(func() {
+ mgmtOC = nil
+ hcpNamespace = ""
+ hostedClusterName = ""
+ hostedClusterNS = ""
+
isMicroShift, err := exutil.IsMicroShiftCluster(oc.AdminKubeClient())
o.Expect(err).NotTo(o.HaveOccurred())
if isMicroShift {
g.Skip("TLS observed-config tests are not applicable to MicroShift clusters")
}
@@
if isHyperShiftCluster {
_, hcpNamespace, err = exutil.GetHypershiftManagementClusterConfigAndNamespace()
if err != nil {
e2e.Logf("HyperShift cluster detected but management cluster credentials are not available: %v", err)
e2e.Logf("ConfigMap restoration tests will still run; TLS profile change tests will be skipped")
} else {
mgmtOC = exutil.NewHypershiftManagementCLI("tls-mgmt")
hostedClusterName, hostedClusterNS = discoverHostedCluster(mgmtOC, hcpNamespace)
e2e.Logf("HyperShift: HC=%s/%s, HCP NS=%s", hostedClusterNS, hostedClusterName, hcpNamespace)
}
}
    })

Also applies to: 521-523, 749-751
```go
if apierrors.IsNotFound(err) {
	g.Skip(fmt.Sprintf("Operator config %s/%s does not exist on this cluster (control-plane resource may not be available on HyperShift guest)",
		t.operatorConfigGVR.Resource, t.operatorConfigName))
}
```
Avoid unconditional skip on missing operator config.
At Line 948, IsNotFound now always skips. That can hide real regressions for non-HyperShift clusters (or guest-side targets that should exist). Restrict this skip to the known HyperShift control-plane case and fail otherwise.
💡 Proposed fix
- if apierrors.IsNotFound(err) {
- g.Skip(fmt.Sprintf("Operator config %s/%s does not exist on this cluster (control-plane resource may not be available on HyperShift guest)",
- t.operatorConfigGVR.Resource, t.operatorConfigName))
- }
+ if apierrors.IsNotFound(err) && isHyperShift && t.controlPlane {
+ g.Skip(fmt.Sprintf("Operator config %s/%s does not exist on HyperShift guest (control-plane resource is on management cluster)",
+		t.operatorConfigGVR.Resource, t.operatorConfigName))
+	}
Scheduling required tests:

/retest-required

Job Failure Risk Analysis for sha: 6a544d6

/retest-required
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: gangwgr. The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 497f6ed to 3545553
Actionable comments posted: 1
🧹 Nitpick comments (2)
test/extended/tls/hypershift/tls_observed_config_hypershift.go (1)
280-304: Avoid `time.Sleep` inside the poll function; it doubles the effective poll interval.

The `time.Sleep(10 * time.Second)` at line 287 runs on every poll iteration, effectively making the poll interval 20 seconds (10s poll interval + 10s sleep) and adding unnecessary delays. This also performs duplicate API calls. Consider using a longer poll interval instead, or tracking state across poll iterations.

♻️ Suggested approach

```diff
-	for _, indicator := range []string{"0/", "Pending", "Terminating", "Init"} {
-		if strings.Contains(out, indicator) {
-			e2e.Logf("  poll: %s pods still restarting (found %q)", appLabel, indicator)
-			return false, nil
-		}
-	}
-
-	time.Sleep(10 * time.Second)
-	out2, err := mgmtCLI.AsAdmin().Run("get").Args(
-		"pods", "-l", "app="+appLabel,
-		"--no-headers", "-n", hcpNS,
-	).Output()
-	if err != nil {
-		return false, nil
-	}
-	for _, indicator := range []string{"0/", "Pending", "Terminating", "Init"} {
-		if strings.Contains(out2, indicator) {
-			e2e.Logf("  poll: %s pods still not stable on recheck", appLabel)
-			return false, nil
-		}
-	}
+	for _, indicator := range []string{"0/", "Pending", "Terminating", "Init"} {
+		if strings.Contains(out, indicator) {
+			e2e.Logf("  poll: %s pods still restarting (found %q)", appLabel, indicator)
+			return false, nil
+		}
+	}
```

If stability confirmation is needed, consider using a separate sequential poll or tracking consecutive successful checks via closure state.
test/extended/util/tls_helpers.go (1)
37-37: Port collision risk with `rand.Intn` for ephemeral ports.

Using `rand.Intn` without seeding or checking port availability can cause sporadic failures if the same port is selected twice in quick succession or if the port is already in use. Consider using port 0 to let the OS assign an available ephemeral port, or add retry logic specifically for port conflicts.
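The reviewer's suggestion maps to a small stdlib-only helper: bind to port 0 and read back the OS-assigned port. This is a sketch (`getFreePort` is a hypothetical name, not the PR's code); closing the listener before use leaves a small race window, but it is far safer than picking a random port blindly.

```go
package main

import (
	"fmt"
	"net"
)

// getFreePort asks the OS for an available ephemeral port by binding
// to 127.0.0.1:0 and reading back the assigned address. The listener
// is closed before returning so the caller can bind the port itself.
func getFreePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	port, err := getFreePort()
	if err != nil {
		panic(err)
	}
	fmt.Printf("forwarding to local port %d\n", port)
}
```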
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/tls/hypershift/tls_observed_config_hypershift.go`:
- Around line 132-134: The indentation for the continuation line of the
exutil.ForwardPortAndExecute call is off; align the wrapped line that calls
exutil.CheckTLSConnection so it is indented to the same level as the for-loop
body (i.e., indent the line starting with func(localPort int) error { ... } and
the subsequent o.Expect(err).NotTo(...) so they appear as the for loop's block),
ensuring the call to exutil.ForwardPortAndExecute and the returned err check
remain visually grouped with the loop that uses t.ServiceName, t.Namespace and
t.ServicePort.
---
Nitpick comments:
In `@test/extended/tls/hypershift/tls_observed_config_hypershift.go`:
- Around line 280-304: The poll callback currently calls time.Sleep(10 *
time.Second), which doubles the effective poll interval and duplicates API
calls; remove that sleep and either increase the poll interval passed to the
polling function or implement a consecutive-success counter in the closure to
require N successive successful checks (e.g., track a counter in the closure and
increment on success / reset on failure) to confirm stability; use the existing
mgmtCLI.AsAdmin().Run("get").Args(...).Output() call and e2e.Logf for logging
and avoid an extra out2 API call per iteration by reusing the first result or
doing a separate sequential poll outside this closure if a definitive recheck is
required.
In `@test/extended/util/tls_helpers.go`:
- Line 37: The current selection of localPort via rand.Intn (localPort :=
rand.Intn(65534-1025) + 1025) risks port collisions; replace this with
OS-assigned ephemeral port allocation or a retry-with-availability-check.
Implement a helper (e.g., getFreePort) that net.Listen("tcp", "127.0.0.1:0") to
obtain the assigned port (from Addr().(*net.TCPAddr).Port) and close the
listener, and use that port instead of rand.Intn, or add retry logic that
attempts to bind the chosen localPort and retries on "address already in use"
errors before proceeding; update uses of localPort in tls_helpers.go
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml
Review profile: CHILL
Plan: Enterprise
Run ID: 6b6e979a-b331-47d6-9b79-77a4138b2bc9
📒 Files selected for processing (8)
- pkg/testsuites/standard_suites.go
- test/extended/include.go
- test/extended/tls/hypershift/OWNERS
- test/extended/tls/hypershift/tls_observed_config_hypershift.go
- test/extended/tls/ocp/OWNERS
- test/extended/tls/ocp/tls_observed_config_ocp.go
- test/extended/tls/tls_observed_config.go
- test/extended/util/tls_helpers.go
✅ Files skipped from review due to trivial changes (2)
- test/extended/include.go
- test/extended/tls/hypershift/OWNERS
Actionable comments posted: 1
🧹 Nitpick comments (4)
test/extended/tls/ocp/tls_observed_config_ocp.go (3)
22-27: Consider removing `defer g.GinkgoRecover()` from the Describe body.

`GinkgoRecover()` is typically used to recover panics in goroutines spawned by tests. Placing it at the Describe level with `defer` has no practical effect since the Describe body executes synchronously during suite setup; the actual test execution happens later in the `It` blocks.

♻️ Suggested removal

```diff
 var _ = g.Describe("[sig-api-machinery][Feature:TLSObservedConfig][Serial][Suite:openshift/tls-observed-config-ocp]", func() {
-	defer g.GinkgoRecover()
-
 	oc := exutil.NewCLI("tls-observed-config")
```
93-95: Same note about `defer g.GinkgoRecover()`.

As mentioned for the non-disruptive suite, this has no practical effect at the Describe level.
339-343: ConfigMap not-found handling differs from injection tests.

When a ConfigMap is not found, this test logs "SKIP" and continues, while `TestConfigMapTLSInjection` in the context snippets uses `o.Expect(err).NotTo(o.HaveOccurred())`. If the ConfigMap should exist after a Custom profile switch, silently skipping may mask issues. Consider whether this should be an assertion failure or whether the soft skip is intentional for targets where ConfigMaps might not exist.

test/extended/util/tls_helpers.go (1)
54-72: Potential blocking read in readiness loop.

`ReadPartialFrom(stdout, 1024)` uses a blocking `Read()` call. If `oc port-forward` hasn't written anything to stdout yet, this could block indefinitely (or until the context times out), making the 500ms sleep ineffective. The TCP dial fallback at lines 63-69 helps mitigate this, but only after the first blocking read returns. Consider using a non-blocking approach or reading in a separate goroutine with a select timeout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/extended/tls/ocp/tls_observed_config_ocp.go`:
- Around line 258-259: The context timeout for the TLS profile change uses
context.WithTimeout(ctx, 60*time.Minute) (creating configChangeCtx and
configChangeCancel) which matches the Ginkgo [Timeout:60m] and risks the
DeferCleanup not finishing; change the timeout to a smaller value (e.g.,
55*time.Minute or 50*time.Minute) so cleanup has headroom, keep the same
configChangeCancel defer and DeferCleanup usage around the TLS profile restore
(same symbols: configChangeCtx, configChangeCancel, context.WithTimeout,
DeferCleanup).
---
Nitpick comments:
In `@test/extended/tls/ocp/tls_observed_config_ocp.go`:
- Around line 22-27: Remove the unnecessary defer g.GinkgoRecover() call from
the g.Describe block: open the Describe wrapper (the g.Describe(...) anonymous
function) and delete the line with defer g.GinkgoRecover(); keep test setup
variables (oc := exutil.NewCLI and ctx := context.Background()) intact, and if
you need panic recovery for goroutines spawned inside It blocks, add
g.GinkgoRecover() calls inside those goroutines or inside specific
It/BeforeEach/AfterEach bodies instead.
- Around line 93-95: The deferred call defer g.GinkgoRecover() at the top-level
Describe block (the Describe invocation with label
"[sig-api-machinery][Feature:TLSObservedConfig][Serial][Disruptive][Suite:openshift/tls-observed-config-ocp]")
is ineffective and should be removed; locate the Describe block and delete the
defer g.GinkgoRecover() statement so recovery is handled appropriately at
test/It level or by Ginkgo's global setup instead.
- Around line 339-343: The current handling of a missing ConfigMap in the Get
call (oc.AdminKubeClient().CoreV1().ConfigMaps(...).Get) silently logs "SKIP"
and continues, which differs from TestConfigMapTLSInjection's strict
expectation; decide which behavior is correct and change the code accordingly:
if the ConfigMap must exist after a Custom profile switch, replace the soft-skip
block with a hard assertion using o.Expect(err).NotTo(o.HaveOccurred()) (or
equivalent test failure), otherwise add a clear comment explaining why skipping
is intentional and make the skip conditional (e.g., detect platforms where
ConfigMaps may be absent) so the test intent is explicit. Ensure you update the
code around the cm, err := ...Get(...) block and any related test helper
functions to reflect the chosen behavior.
In `@test/extended/util/tls_helpers.go`:
- Around line 54-72: The readiness loop currently calls ReadPartialFrom(stdout,
1024) which blocks and can delay the loop; modify the logic in the loop that
references ReadPartialFrom and stdout to perform a non-blocking read (for
example by launching a goroutine to read from stdout and sending results on a
channel, or by using a Reader with deadlines) and use select with a timeout
(matching the existing 500ms sleep or shorter) to fall back to the TCP dial test
on timeouts; ensure the new code still checks for the "Forwarding from"
substring and sets ready=true in the same places (the existing loop, the
ReadPartialFrom result handling, and the TCP dial fallback using
net.DialTimeout).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: f14b1aed-eaa8-4372-93ec-88edd9465407
📒 Files selected for processing (8)
- pkg/testsuites/standard_suites.go
- test/extended/include.go
- test/extended/tls/hypershift/OWNERS
- test/extended/tls/hypershift/tls_observed_config_hypershift.go
- test/extended/tls/ocp/OWNERS
- test/extended/tls/ocp/tls_observed_config_ocp.go
- test/extended/tls/tls_observed_config.go
- test/extended/util/tls_helpers.go
✅ Files skipped from review due to trivial changes (3)
- test/extended/include.go
- test/extended/tls/hypershift/OWNERS
- pkg/testsuites/standard_suites.go
🚧 Files skipped from review as they are similar to previous changes (1)
- test/extended/tls/hypershift/tls_observed_config_hypershift.go
Force-pushed from 3545553 to a2fc928
Scheduling required tests:
Refactor TLS observed config tests into separate standalone OCP and HyperShift test suites for independent ownership. Move shared types, data, and test implementations to the parent tls package, with platform-specific test registrations in dedicated subdirectories (tls/ocp and tls/hypershift), each with their own OWNERS files. Move generic helpers to test/extended/util for reuse.

Suite names updated:
- openshift/tls-observed-config-ocp (standalone OCP)
- openshift/tls-observed-config-hypershift (HyperShift)

Made-with: Cursor
Force-pushed from a2fc928 to 4aed097
Scheduling required tests:
@gangwgr: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.