[Volume 10] Spring Batch System Implementation #420
MINJOOOONG wants to merge 5 commits into Loopers-dev-lab:MINJOOOONG from
Conversation
📝 Walkthrough
Adds period-based (daily, weekly, monthly) product ranking query API endpoints, implements batch jobs that snapshot daily metrics and aggregate weekly/monthly rankings, and includes a snapshot pipeline that syncs Redis data to the database. Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Controller as RankingV1<br/>Controller
participant Facade as RankingFacade
participant Service as RankingService
participant Repository as RankingRepository<br/>(DB)
Client->>Controller: GET /api/v1/rankings?period=DAILY&date=2026-04-01&size=50
Controller->>Controller: parseRankPeriod(DAILY)<br/>clamp size to [1, 100]
Controller->>Facade: getRanking(DAILY, date, 50)
Facade->>Service: getRanking(DAILY, date, 50)
Service->>Service: switch on DAILY<br/>→ route to findDailyRanking
Service->>Repository: findDailyRanking(date, 50)
Repository->>Repository: Query product_metrics table<br/>Assign 1-based ranks
Repository-->>Service: List<RankingEntry>
Service-->>Facade: List<RankingEntry>
Facade->>Facade: map to List<RankingInfo>
Facade-->>Controller: List<RankingInfo>
Controller->>Controller: convert to RankingListResponse
Controller-->>Client: ApiResponse<RankingListResponse>
sequenceDiagram
participant Batch as Batch<br/>Scheduler
participant Redis as Redis
participant Tasklet as DailyMetrics<br/>Snapshot<br/>Tasklet
participant DB as product_metrics<br/>Table
Batch->>Tasklet: execute(requestDate)
Tasklet->>Tasklet: validate requestDate<br/>(non-null, not future)
Tasklet->>Redis: RankingSnapshot<br/>Repository.getAllScores(dateKey)
Redis-->>Tasklet: List<ProductScore>
Tasklet->>Tasklet: skip null productIds<br/>construct ProductMetrics
Tasklet->>DB: deleteByMetricDate(requestDate)
Tasklet->>DB: saveAll(batch of 1000)
Tasklet->>Tasklet: flush & clear EntityManager
DB-->>Tasklet: persisted
Tasklet-->>Batch: RepeatStatus.FINISHED
sequenceDiagram
participant Batch as Batch<br/>Scheduler
participant Reader as RankAggregation<br/>Reader (JDBC)
participant DB1 as product_metrics<br/>Table
participant Writer as Weekly/Monthly<br/>RankWriter
participant DB2 as mv_product_rank_*<br/>Tables
Batch->>Reader: create Reader for<br/>7/30-day window
Reader->>DB1: Query aggregated scores<br/>GROUP BY product_id
DB1-->>Reader: List<ProductScoreAggregation>
Reader-->>Batch: ItemReader stream
Batch->>Writer: write(chunk of 100)
Writer->>DB2: deleteByBaseDate(once)
Writer->>Writer: Assign 1-based rankings<br/>to chunk items
Writer->>DB2: saveAll(WeeklyProductRank)
DB2-->>Writer: persisted
Writer-->>Batch: completed
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
Operational considerations:
1. Batch job idempotency and date validation
2. Recovery strategy for Redis data loss
3. Batch data consistency
4. API response performance
5. Test coverage
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 10
🧹 Nitpick comments (16)
apps/commerce-batch/src/main/java/com/loopers/domain/productrank/BaseProductRank.java (2)
36-50: The guard validation logic is sound, but consider extracting per-field guard methods.
The validation rules match the column constraints, so integrity is not a concern. However, if new ranking entities are added later, the same pattern will likely be duplicated, so splitting the checks into guardProductId(), guardTotalScore(), etc. would make them easier to reuse when extending. Not required.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/domain/productrank/BaseProductRank.java` around lines 36 - 50, Split the single guard() into per-field guard methods to make validations reusable: create private methods guardProductId() to check productId != null, guardTotalScore() to check totalScore >= 0, guardRanking() to check ranking >= 1, and guardBaseDate() to check baseDate != null, then have guard() call these methods; reference BaseProductRank, guard(), productId, totalScore, ranking, and baseDate when implementing.
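The extraction the prompt describes can be sketched in plain Java. Field names and messages follow the review text; the exact shape of the real BaseProductRank is an assumption.

```java
import java.time.LocalDate;

// Sketch of the suggested per-field guard extraction. The fields mirror the
// ones named in the review (productId, totalScore, ranking, baseDate).
abstract class BaseProductRank {
    private final Long productId;
    private final double totalScore;
    private final int ranking;
    private final LocalDate baseDate;

    protected BaseProductRank(Long productId, double totalScore, int ranking, LocalDate baseDate) {
        this.productId = productId;
        this.totalScore = totalScore;
        this.ranking = ranking;
        this.baseDate = baseDate;
        guard();
    }

    // guard() now only composes the per-field checks, so a new ranking
    // entity can reuse any subset of them.
    private void guard() {
        guardProductId();
        guardTotalScore();
        guardRanking();
        guardBaseDate();
    }

    private void guardProductId() {
        if (productId == null) throw new IllegalArgumentException("productId is required");
    }

    private void guardTotalScore() {
        if (totalScore < 0) throw new IllegalArgumentException("totalScore must be >= 0");
    }

    private void guardRanking() {
        if (ranking < 1) throw new IllegalArgumentException("ranking must be >= 1");
    }

    private void guardBaseDate() {
        if (baseDate == null) throw new IllegalArgumentException("baseDate is required");
    }
}
```

The boundary values the later test comments discuss (totalScore == 0, ranking == 1) pass these guards.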
20-21: The ranking column name may collide with reserved words in some databases.
RANKING is not currently a reserved word in MySQL 8.x (unlike the related window functions RANK/DENSE_RANK), but it carries latent risk for future releases, for porting to other RDBMSs (e.g., Oracle), and for reporting-tool compatibility. For operational safety, either rename the column to something more neutral, or, if keeping the current approach, standardize an escaping policy (`ranking`) project-wide.
- @Column(name = "ranking", nullable = false)
+ @Column(name = "rank_position", nullable = false)
  private int ranking;
Is "RANKING" a reserved word in MySQL 8.4 or newer?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/domain/productrank/BaseProductRank.java` around lines 20 - 21, The field BaseProductRank.ranking uses the column name "ranking" which can clash with reserved identifiers in some RDBMS/reporting tools; update the mapping to a neutral name (e.g., change the `@Column` name to "product_ranking" and rename the Java field or keep field name but map to "product_ranking") and propagate this change to schema migrations, SQL queries, DTOs and any uses of BaseProductRank to ensure consistency; alternatively, if you must keep "ranking", apply a project-wide escape strategy for column names and document it.
apps/commerce-batch/src/test/java/com/loopers/domain/productmetrics/ProductMetricsUnitTest.java (1)
12-50: Recommend adding boundary-value tests.
The current tests only assert the exception type for the failure cases. From an operations/regression standpoint, additional assertions are needed for (1) whether the score = 0.0 boundary succeeds and (2) whether each failure case returns its intended message, so that guard-logic regressions are caught early when requirements change.
♻️ Suggested fix
+ @DisplayName("succeeds when score is 0.")
+ @Test
+ void createsWithZeroScore() {
+     var metrics = new ProductMetrics(1L, LocalDate.of(2026, 4, 12), 0.0);
+     assertThat(metrics.getScore()).isEqualTo(0.0);
+ }
  @DisplayName("fails when productId is null.")
  @Test
  void failsWithNullProductId() {
      assertThatThrownBy(() -> new ProductMetrics(null, LocalDate.of(2026, 4, 12), 100.0))
-         .isInstanceOf(IllegalArgumentException.class);
+         .isInstanceOf(IllegalArgumentException.class)
+         .hasMessageContaining("productId");
  }
As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/domain/productmetrics/ProductMetricsUnitTest.java` around lines 12 - 50, Add boundary and message assertions to ProductMetrics unit tests: add a new test (or extend createsWithValidValues) that constructs ProductMetrics with score = 0.0 and asserts creation succeeds and fields match; update the failure tests for ProductMetrics constructor (failsWithNullProductId, failsWithNullMetricDate, failsWithNegativeScore) to also assert the thrown IllegalArgumentException's message contains the expected keywords (e.g., "productId", "metricDate", "score" or "non-negative") so the tests validate both exception type and intent.
apps/commerce-batch/src/test/java/com/loopers/domain/productrank/MonthlyProductRankUnitTest.java (1)
13-62: Recommend addressing both the duplication with WeeklyProductRankUnitTest and the missing boundary values.
Two points from an operations/maintenance standpoint:
- The structure and assertions are nearly identical to WeeklyProductRankUnitTest. Since the shared validation logic lives in BaseProductRank.guard(), keep a single parameterized test of the guard rules at the BaseProductRank level and reduce the subtype unit tests to checking value mapping on construction; rule changes then touch only one place.
- The allowed-boundary success cases (totalScore == 0, ranking == 1) are missing, so the tests would still pass if the guard were mistakenly changed to <= 0. At minimum, add the boundary success cases.
♻️ Example boundary success case
+ @DisplayName("succeeds when totalScore is 0.")
+ @Test
+ void createsWithZeroTotalScore() {
+     var rank = new MonthlyProductRank(1L, 0.0, 1, LocalDate.of(2026, 4, 12));
+     assertThat(rank.getTotalScore()).isZero();
+ }
As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/domain/productrank/MonthlyProductRankUnitTest.java` around lines 13 - 62, The tests duplicate validation across WeeklyProductRankUnitTest and MonthlyProductRankUnitTest and miss boundary-success cases; extract and parametrize guard rule tests into a single BaseProductRank test suite that exercises BaseProductRank.guard() with negative, zero, and valid boundary inputs (include totalScore == 0 and ranking == 1 as success cases), then simplify MonthlyProductRankUnitTest and WeeklyProductRankUnitTest to only verify value mapping on construction (e.g., MonthlyProductRank and WeeklyProductRank constructors produce expected productId, totalScore, ranking, baseDate) while removing duplicated validation assertions; ensure references to MonthlyProductRank, WeeklyProductRank, and BaseProductRank.guard() are used to locate and change the tests.apps/commerce-batch/src/test/java/com/loopers/domain/productrank/WeeklyProductRankUnitTest.java (1)
41-53: Recommend strengthening the boundary-value tests.
For regression protection, exactly where BaseProductRank.guard()'s boundary conditions pass or fail should be pinned down explicitly. Currently only the totalScore < 0 and ranking < 1 failures are verified; the allowed boundaries totalScore == 0 and ranking == 1 have no success cases, so a future mis-change of the guard to <= 0 could slip past the tests. Recommend adding the boundary success cases.
♻️ Suggested patch
+ @DisplayName("succeeds when totalScore is 0.")
+ @Test
+ void createsWithZeroTotalScore() {
+     var rank = new WeeklyProductRank(1L, 0.0, 1, LocalDate.of(2026, 4, 12));
+     assertThat(rank.getTotalScore()).isZero();
+ }
+
+ @DisplayName("succeeds when ranking is 1.")
+ @Test
+ void createsWithRankingOne() {
+     var rank = new WeeklyProductRank(1L, 100.0, 1, LocalDate.of(2026, 4, 12));
+     assertThat(rank.getRanking()).isEqualTo(1);
+ }
As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/domain/productrank/WeeklyProductRankUnitTest.java` around lines 41 - 53, Add explicit boundary-success unit tests to ensure BaseProductRank.guard() accepts the allowed edges: add tests in WeeklyProductRankUnitTest that construct a WeeklyProductRank with totalScore == 0 (e.g., new WeeklyProductRank(1L, 0.0, 1, LocalDate.of(...))) and with ranking == 1 (and totalScore non-negative) and assert no exception is thrown; reference the WeeklyProductRank constructor and BaseProductRank.guard() so the tests will fail if the guard is changed to reject these boundary values.apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/MonthlyProductRankJpaRepository.java (1)
11-16: The bulk-delete implementation is appropriate.
@Modifying + a JPQL DELETE removes the entire target baseDate without row-by-row loading, so the bottleneck stays minimal even for large data sets. One operational note: this bulk delete bypasses the persistence context, so if the same entities are saved via saveAll within the same transaction, the first-level cache can become inconsistent. The current WeeklyRankWriter/MonthlyRankWriter create fresh entities per chunk, so there is no real risk today, but in case code is later added that reuses existing entities in the same transaction, consider adding a comment or the clearAutomatically/flushAutomatically options.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/MonthlyProductRankJpaRepository.java` around lines 11 - 16, The JPQL bulk delete in MonthlyProductRankJpaRepository.deleteByBaseDate bypasses the persistence context which can leave the first-level cache out of sync if the same transaction later reuses entity instances (e.g., future changes to WeeklyRankWriter/MonthlyRankWriter). To fix: either (a) add `@Modifying`(clearAutomatically = true, flushAutomatically = true) to the deleteByBaseDate method to flush/clear the persistence context after the bulk delete, or (b) add a concise code comment on MonthlyProductRankJpaRepository.deleteByBaseDate warning about persistence-context invalidation and advising callers to clear the EntityManager or avoid reusing managed entities in the same transaction; choose one of these and apply consistently.
apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1ApiSpec.java (1)
18-19: Consider exposing the enum values for period in the schema documentation.
period is currently accepted as a String, with the allowed values described only in the description text. To give Swagger UI a selection-style UX, consider adding @Parameter(schema = @Schema(allowableValues = {"DAILY","WEEKLY","MONTHLY"})). Not a functional issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1ApiSpec.java` around lines 18 - 19, the API method parameter `period` in RankingV1ApiSpec.java is declared only as a String, so Swagger UI does not render it as a selection; keep the String type as-is and add schema information to the parameter's `@Parameter` annotation (`@Schema`(allowableValues = {"DAILY","WEEKLY","MONTHLY"})) so the enum values are exposed in the schema (target identifier: the `period` parameter, file: RankingV1ApiSpec.java).
apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/RankAggregationReader.java (1)
12-19: Recommend adding a tiebreaker for deterministic ordering of tied scores.
With only ORDER BY total_score DESC, the order among products with equal scores depends on the DB/plan. On batch re-runs, different products could be selected at the TOP 100 boundary (ranks 100-101), or the ranking order could flip, so the user-visible result would differ per execution. For reproducibility and debuggability, adding a secondary sort key such as product_id is safer.
♻️ Suggested fix
- ORDER BY total_score DESC
+ ORDER BY total_score DESC, product_id ASC
  LIMIT 100
Additional test: when multiple products share the same total_score, verify that repeated runs always assign rankings in the same product_id order.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/RankAggregationReader.java` around lines 12 - 19, The SQL in AGGREGATION_SQL inside RankAggregationReader uses only "ORDER BY total_score DESC", which can produce non-deterministic ordering for tied scores; update the query to include a deterministic tiebreaker such as "product_id" (e.g., ORDER BY total_score DESC, pm.product_id ASC) so TOP 100 selection is stable across runs, and add a test that verifies repeated executions produce the same ordering for rows with identical total_score.
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingRepositoryImpl.java (1)
19-25: Recommend a secondary sort key and an index review for the DAILY query.
With only ORDER BY score DESC, ties can produce non-deterministic results at the LIMIT boundary, exposing a different ranking to API users on every call. Also, the unique constraint on product_metrics is ordered (product_id, metric_date), so a lookup by WHERE metric_date = ? alone cannot satisfy that index's leading column and may fall back to a full scan. For TOP N query performance and stability, review a secondary (metric_date, score DESC) index together with a tiebreaker.
♻️ Suggested fix
  SELECT product_id, score
  FROM product_metrics
  WHERE metric_date = ?
- ORDER BY score DESC
+ ORDER BY score DESC, product_id ASC
  LIMIT ?
Additional test: with multiple products sharing the same score, verify consecutive API calls respond in the same order. Add the index via a migration file (Flyway/Liquibase etc.) and confirm with EXPLAIN that a range scan is applied.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingRepositoryImpl.java` around lines 19 - 25, The DAILY_SQL query uses only "ORDER BY score DESC" which yields nondeterministic ties and may not use the existing (product_id, metric_date) index; change the query in DAILY_SQL to include a deterministic tiebreaker (e.g., "ORDER BY score DESC, product_id ASC") and add a supporting composite index for fast TOP N lookups (e.g., a migration that creates an index on (metric_date, score DESC, product_id) or (metric_date, score, product_id)); after applying the migration, verify with EXPLAIN that the planner uses an index range scan and add a test that repeatedly calls the API with multiple products having identical scores to ensure stable ordering.
apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/RankingSnapshotRedisRepository.java (1)
36-44: Consider defending against a null tuple.getScore().
TypedTuple#getScore() returns a Double, so it can in theory be null; in that case the unboxing to double on line 40 throws a NullPointerException and the entire snapshot batch fails. The current catch handles only NumberFormatException, so the NPE does not take the skip path. Operationally, if for any reason a score-less member ever lands in the Redis ZSET, that day's batch halts entirely, so adding a null check on score and skipping the entry is safer. That said, under the current single-entry-point policy via RankingScoreService the probability is low; whether to defend is left to team policy.
♻️ Suggested fix
  for (ZSetOperations.TypedTuple<String> tuple : tuples) {
      String value = tuple.getValue();
+     Double rawScore = tuple.getScore();
+     if (rawScore == null) {
+         log.warn("Redis score is null - key: {}, value: '{}' (skip)", key, value);
+         continue;
+     }
      try {
          Long productId = Long.valueOf(value);
-         double score = tuple.getScore();
-         result.add(new ProductScore(productId, score));
+         result.add(new ProductScore(productId, rawScore));
      } catch (NumberFormatException e) {
          log.warn("Failed to parse Redis value - key: {}, value: '{}', score: {} (skip)", key, value, tuple.getScore());
      }
  }
Additional test: with a null-score TypedTuple mixed in, verify only that entry is skipped and the rest are loaded normally.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/RankingSnapshotRedisRepository.java` around lines 36 - 44, In RankingSnapshotRedisRepository's loop over ZSetOperations.TypedTuple<String> (the block that currently calls tuple.getScore() and new ProductScore(productId, score)), guard against null scores by checking if tuple.getScore() == null before unboxing; if null, log a warning (similar to the NumberFormatException case) and continue/skip that tuple so you do not trigger a NullPointerException when creating the ProductScore(double). Update the try/catch to only handle NumberFormatException for Long.valueOf and avoid relying on NPE, and add a unit test that mixes a TypedTuple with a null score to verify only that entry is skipped and others are processed.
apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java (1)
94-101: Replace the generic RuntimeException with specific exceptions; it hinders root-cause tracing in operations.
Validation failures should be thrown as meaningful types such as IllegalArgumentException (missing parameter / future date). With the current code, upstream callers cannot distinguish anything beyond catch (RuntimeException), which is a disadvantage for Spring Batch ExitStatus cause analysis and alerting/monitoring branching. The guidelines also call for separating user-facing and log messages and preserving the cause, so introducing a hierarchical custom exception (e.g., InvalidBatchParameterException) for similar future cases is also recommended.
♻️ Example fix
  private void validateRequestDate() {
      if (requestDate == null) {
-         throw new RuntimeException("requestDate is required");
+         throw new IllegalArgumentException("requestDate is required");
      }
-     if (requestDate.isAfter(LocalDate.now())) {
-         throw new RuntimeException("requestDate must not be a future date: " + requestDate);
+     if (requestDate.isAfter(LocalDate.now(ZoneId.of("Asia/Seoul")))) {
+         throw new IllegalArgumentException("requestDate must not be a future date: " + requestDate);
      }
  }
Additional tests: unit tests should verify that the exception thrown in both the null and future-date cases is IllegalArgumentException. The existing E2E only checks ExitStatus.FAILED, so message/type-level assertions should be added in tasklet unit tests.
As per coding guidelines: "When handling exceptions, preserve the cause and separate user messages from log messages."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java` around lines 94 - 101, Replace the generic RuntimeException thrown from validateRequestDate() with a meaningful exception type (e.g., IllegalArgumentException or a new InvalidBatchParameterException) and preserve cause information; update validateRequestDate() to throw IllegalArgumentException for null requestDate and for future dates with clear user-facing messages while ensuring any wrapped lower-level cause is passed through if applicable, and add unit tests asserting the thrown exception type and message for the null and future-date cases; keep user-facing messages separate from log messages when you log the error elsewhere.
apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java (1)
264-311: The idempotency check does not also verify the monthly MV table.
rankAggregationJob runs the weekly and monthly steps sequentially, and idempotency must hold for both MVs. The current test verifies only weeklyRanks, so a regression in monthlyProductRankJpaRepository's replace-on-rerun behavior would go undetected.
♻️ Example fix
  List<WeeklyProductRank> weeklyRanks = weeklyProductRankJpaRepository.findAll();
+ List<MonthlyProductRank> monthlyRanks = monthlyProductRankJpaRepository.findAll();
  assertAll(
      () -> assertThat(weeklyRanks).hasSize(2),
+     () -> assertThat(monthlyRanks).hasSize(2),
      () -> assertThat(weeklyRanks)
          .filteredOn(r -> r.getRanking() == 1)
          .first()
          .satisfies(r -> {
              assertThat(r.getProductId()).isEqualTo(2L);
              assertThat(r.getTotalScore()).isEqualTo(200.0);
-         })
+         }),
+     () -> assertThat(monthlyRanks)
+         .filteredOn(r -> r.getRanking() == 1)
+         .first()
+         .satisfies(r -> assertThat(r.getProductId()).isEqualTo(2L))
  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java` around lines 264 - 311, Test only asserts weekly MV idempotency; you must also verify monthly MV is replaced on rerun by querying monthlyProductRankJpaRepository after launching jobParameters2 and adding assertions similar to weekly checks: assert exit status completed, fetch List<MonthlyProductRank> monthlyRanks = monthlyProductRankJpaRepository.findAll(), then assert expected size and that the top ranking entry reflects productId 2 and totalScore 200.0 (use the same run re-execution scenario in replacesExistingDataOnRerun to ensure both weekly and monthly steps are validated).
apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java (1)
183-214: Boundary coverage of the invalid-Redis-value case should be strengthened.
Currently only "abc" is tested. Adding at least one or two of the boundary values encountered in operations — empty string, whitespace, a negative numeric string, an out-of-range number like Long.MAX_VALUE + 1, null-like values — would demonstrate that RankingSnapshotRedisRepository's skip behavior is robust across inputs. Also, skips are only logged and never asserted; asserting that the StepExecution's writeCount equals 2 would improve regression detection.
♻️ Example fix
  redisTemplate.opsForZSet().add(redisKey, "1", 100.0);
  redisTemplate.opsForZSet().add(redisKey, "abc", 50.0);
+ redisTemplate.opsForZSet().add(redisKey, "", 10.0);
+ redisTemplate.opsForZSet().add(redisKey, "99999999999999999999", 20.0);
  redisTemplate.opsForZSet().add(redisKey, "3", 200.0);
  ...
  assertThat(jobExecution.getExitStatus().getExitCode())
      .isEqualTo(ExitStatus.COMPLETED.getExitCode());
+ assertThat(jobExecution.getStepExecutions().iterator().next().getWriteCount())
+     .isEqualTo(2);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java` around lines 183 - 214, The test skipsInvalidRedisValues should be extended to cover more boundary invalid Redis values and assert the step's write count: add extra invalid entries into the same redis key (e.g., empty string "", whitespace " ", a negative numeric string like "-1", and an out-of-range numeric string such as a value > Long.MAX_VALUE) so RankingSnapshotRedisRepository's parsing skip logic is exercised for multiple failure modes, then after launching the job assert both that productMetricsJpaRepository.findAll() has size 2 with productIds 1 and 3 and that the job/step execution's writeCount (from the StepExecution or JobExecution returned by jobLauncherTestUtils.launchJob) equals 2 to verify only valid records were written.
apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java (1)
52-60: Calls that depend on the @JobScope proxy should have their intent documented in a comment.
weeklyRankStep(null)/monthlyRankStep(null) work today because the @JobScope proxy defers the call until the actual JobParameters binding. However, if someone later removes @JobScope or changes the scope, the null is passed straight into validateRequestDate and an NPE/exception occurs at startup. Leave a short comment making the intent visible, or switch the step builders from @Value to a JobParametersExtractor/ExecutionContext lookup pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java` around lines 52 - 60, The calls weeklyRankStep(null) and monthlyRankStep(null) in rankAggregationJob() rely on `@JobScope` proxying to defer parameter binding and should be explicitly documented or made safer; either add a short inline comment above those calls stating that null is passed intentionally because weeklyRankStep(...) and monthlyRankStep(...) are `@JobScope` proxies whose real JobParameters are bound at runtime, or refactor the step bean implementations (the methods weeklyRankStep, monthlyRankStep and validateRequestDate usage) to fetch parameters via JobParametersExtractor/ExecutionContext or stepBuilderFactory patterns instead of passing null so a future removal of `@JobScope` won't cause a null to reach validateRequestDate.
apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingV1ApiE2ETest.java (2)
113-146: Null-body handling and MV-related edge cases are lacking.
- Operations: response.getBody().data().rankings() blurs the test with an NPE when getBody() is null (the failure is reported as an NPE instead of the real cause, e.g., a server 500 or serialization failure). Assert that getBody() is not null before the other assertions, or use AssertJ's extracting chain.
- Test coverage: WEEKLY/MONTHLY also need cases for the size parameter, a malformed date format, a future date, and a differently-cased period ("daily"). Coverage is currently DAILY-centric and misses much of the controller/parser boundary behavior.
♻️ Example fix
- () -> assertThat(response.getBody().data().rankings()).hasSize(3),
+ () -> assertThat(response.getBody()).isNotNull(),
+ () -> assertThat(response.getBody().data().rankings()).hasSize(3),
As per coding guidelines: "Unit tests should cover boundary values, failure cases, and exception flows."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingV1ApiE2ETest.java` around lines 113 - 146, The test currently dereferences response.getBody() (in RankingV1ApiE2ETest assertions like response.getBody().data().rankings()) which will produce an NPE that masks real errors; update the assertions to first assertThat(response.getBody()).isNotNull() or use AssertJ extracting chains (e.g., extracting("data").extracting("rankings")...) before checking size/contents, and ensure the exchange() call result is validated for non-OK status when getBody() is null; additionally add boundary tests in this class covering period= WEEKLY and MONTHLY with size limiting, invalid date formats, a future date, and a case-insensitive period value (e.g., "daily") to cover MV/parser edge cases.
84-92: TRUNCATE and DatabaseCleanUp.truncateAllTables() are called redundantly.
If databaseCleanUp.truncateAllTables() already empties all tables, the three manual TRUNCATEs and the SET FOREIGN_KEY_CHECKS toggles before it are both redundant and MySQL-specific. Consolidating cleanup responsibility in one place also lowers maintenance cost when moving to another RDBMS/dialect.
♻️ Example fix
  @AfterEach
  void tearDown() {
-     jdbcTemplate.execute("SET FOREIGN_KEY_CHECKS = 0");
-     jdbcTemplate.execute("TRUNCATE TABLE product_metrics");
-     jdbcTemplate.execute("TRUNCATE TABLE mv_product_rank_weekly");
-     jdbcTemplate.execute("TRUNCATE TABLE mv_product_rank_monthly");
-     jdbcTemplate.execute("SET FOREIGN_KEY_CHECKS = 1");
      databaseCleanUp.truncateAllTables();
  }
Additional test: verify with a unit test that DatabaseCleanUp actually truncates the newly added MV tables as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingV1ApiE2ETest.java` around lines 84 - 92, The tearDown method currently duplicates cleanup by manually toggling foreign key checks and truncating product_metrics, mv_product_rank_weekly and mv_product_rank_monthly via jdbcTemplate before calling databaseCleanUp.truncateAllTables(); remove the manual jdbcTemplate.execute(...) calls and rely solely on databaseCleanUp.truncateAllTables() inside tearDown (method name: tearDown, symbol: databaseCleanUp.truncateAllTables) and, if those MV tables are not covered, update DatabaseCleanUp to include them (class/method: DatabaseCleanUp.truncateAllTables) and add a unit test to assert DatabaseCleanUp actually truncates the new MV tables.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/main/java/com/loopers/domain/ranking/RankingService.java`:
- Around line 16-23: Remove the `@Transactional`(readOnly = true) annotation from
RankingService.getRanking and add `@Transactional`(readOnly = true) to the
corresponding Application layer method RankingFacade.getRanking; ensure the
domain class RankingService.getRanking remains plain (no transactional) and do
not add transactions to the infrastructure implementation RankingRepositoryImpl
(JdbcTemplate) so transaction boundaries are defined at the Application service
layer.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java`:
- Around line 45-51: parseRankPeriod currently uses period.toUpperCase() which
is locale-dependent and can fail (e.g., Turkish locale); change it to use
period.toUpperCase(Locale.ROOT) before calling RankPeriod.valueOf to ensure
deterministic ASCII uppercasing, keep the same IllegalArgumentException handling
and CoreException throw, and add a unit test that sets Locale.setDefault(new
Locale("tr")) and verifies parseRankPeriod (or the controller endpoint) maps
"daily" to RankPeriod.DAILY.
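The locale pitfall the prompt describes can be sketched without the controller. RankPeriodParser is a hypothetical stand-in for the controller's parseRankPeriod; the real code rethrows the IllegalArgumentException as a CoreException.

```java
import java.util.Locale;

enum RankPeriod { DAILY, WEEKLY, MONTHLY }

// toUpperCase(Locale.ROOT) avoids the Turkish dotless-i problem, where
// "daily".toUpperCase() under a tr default locale yields "DAİLY" and
// RankPeriod.valueOf then fails.
final class RankPeriodParser {
    static RankPeriod parse(String period) {
        try {
            return RankPeriod.valueOf(period.toUpperCase(Locale.ROOT));
        } catch (IllegalArgumentException e) {
            // The real controller would translate this into a CoreException.
            throw new IllegalArgumentException("unsupported period: " + period, e);
        }
    }
}
```

With this, parsing is deterministic regardless of the JVM's default locale.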
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java`:
- Around line 95-100: The validation uses LocalDate.now() which depends on the
JVM default timezone; update DailyMetricsSnapshotTasklet to use an injected
Clock (or at least an explicit ZoneId) so the "today" boundary is deterministic:
replace calls to LocalDate.now() with LocalDate.now(clock) and add a constructor
/ field for Clock (or pass ZoneId.of("Asia/Seoul") if you must) so production
wiring sets Clock.system(ZoneId.of("Asia/Seoul")) (and unit tests can supply
Clock.fixed(...) to exercise yesterday/today/tomorrow edges); update the
requestDate null/after check to use LocalDate.now(clock) and add tests for the
boundary cases.
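The Clock injection above can be sketched outside Spring Batch as follows. RequestDateValidator is a hypothetical name; the real tasklet would hold the Clock as a field wired to Clock.system(ZoneId.of("Asia/Seoul")) in production.

```java
import java.time.Clock;
import java.time.LocalDate;

// Deterministic "today" boundary: the caller decides which Clock defines now,
// so tests can pin time with Clock.fixed(...) instead of relying on the JVM
// default timezone.
final class RequestDateValidator {
    private final Clock clock;

    RequestDateValidator(Clock clock) {
        this.clock = clock;
    }

    void validate(LocalDate requestDate) {
        if (requestDate == null) {
            throw new IllegalArgumentException("requestDate is required");
        }
        if (requestDate.isAfter(LocalDate.now(clock))) {
            throw new IllegalArgumentException("requestDate must not be in the future: " + requestDate);
        }
    }
}
```

Unit tests can then exercise the yesterday/today/tomorrow edges with a fixed Clock.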
- Around line 76-92: Ensure transactional intent and batch insert behavior are
explicit: add `@Transactional`(readOnly = false) to
ProductMetricsJpaRepository.deleteByMetricDate() so the modifying delete is
clear; document in a comment inside saveInBatches(List<ProductMetrics>) why
entityManager.flush() and entityManager.clear() are called and how BATCH_SIZE
affects memory; if you need real JDBC batching change BaseEntity ID generation
away from GenerationType.IDENTITY to a sequence-based strategy (e.g.,
GenerationType.SEQUENCE or AUTO with sequence dialect) and enable
hibernate.jdbc.batch_size plus order_inserts/order_updates in config so calls
from saveInBatches actually use JDBC batch inserts.
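The batching settings named at the end of that prompt can be sketched as Spring Boot properties. The property names are standard Hibernate settings; whether this project configures them via properties or YAML is an assumption.

```properties
# Enable JDBC batching so saveAll() calls from saveInBatches issue batched INSERTs.
# Note: Hibernate silently disables insert batching for GenerationType.IDENTITY ids,
# which is why the prompt suggests moving to a sequence-based strategy first.
spring.jpa.properties.hibernate.jdbc.batch_size=1000
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
```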
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java`:
- Around line 110-117: Replace the ad-hoc validateRequestDate(LocalDate) in
RankAggregationJobConfig with a shared utility (e.g.,
BatchRequestDateValidator.validate(requestDate, ZoneId.of("Asia/Seoul"))) and
remove the duplicate method from DailyMetricsSnapshotTasklet; the validator
should throw IllegalArgumentException (preserving an optional cause when
applicable), compare requestDate against LocalDate.now(zone) to avoid JVM
default timezone issues, and produce separate user-facing exception messages vs.
log messages; add unit tests for null/future/today/past cases to assert
exception type and messages.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/WeeklyRankWriter.java`:
- Around line 31-52: getRankOffset currently returns 0 causing
duplicate/overlapping ranks across chunks; change to maintain a cumulative
counter in WeeklyRankWriter (e.g. a private int rankCounter field) that
getRankOffset returns, initialize it at step start (implement `@BeforeStep` or an
open method) and after weeklyProductRankRepository.saveAll(ranks) increment
rankCounter by ranks.size(); apply the same pattern to MonthlyRankWriter so each
chunk uses the running offset rather than always starting at 0.
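The running-offset idea can be sketched without the Spring Batch API. RankCounterSketch is a hypothetical stand-in; the real writer would reset rankCounter in @BeforeStep and advance it after weeklyProductRankRepository.saveAll(ranks).

```java
import java.util.List;

// Cumulative rank offset across chunks: each chunk continues numbering from
// where the previous chunk stopped, instead of every chunk starting at rank 1.
final class RankCounterSketch {
    private int rankCounter = 0; // reset at step start (@BeforeStep in the real writer)

    // Returns the 1-based rankings assigned to this chunk, in chunk order.
    int[] write(List<Double> chunkScores) {
        int[] rankings = new int[chunkScores.size()];
        for (int i = 0; i < chunkScores.size(); i++) {
            rankings[i] = rankCounter + i + 1; // offset by all previous chunks
        }
        rankCounter += chunkScores.size();     // advance after "saveAll"
        return rankings;
    }
}
```

With a chunk size of 100, the second chunk then starts at rank 101 rather than duplicating ranks 1-100.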
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productmetrics/ProductMetricsJpaRepository.java`:
- Around line 16-18: The bulk DELETE method
ProductMetricsJpaRepository.deleteByMetricDate needs JPA modifying options to
avoid stale entities and unflushed state issues; update the `@Modifying`
annotation on the deleteByMetricDate(`@Param`("metricDate") LocalDate metricDate)
method to include clearAutomatically = true (and consider adding
flushAutomatically = true) so the persistence context is synchronized after the
bulk operation, and add/adjust integration tests in DailyMetricsSnapshotTasklet
flow to verify saveAll/deleteByMetricDate/saveAll idempotency.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/WeeklyProductRankJpaRepository.java`:
- Around line 13-15: The bulk DELETE method deleteByBaseDate in
WeeklyProductRankJpaRepository should declare synchronization to the persistence
context to avoid stale entities and unique-constraint conflicts; update the
`@Modifying` on deleteByBaseDate to include clearAutomatically = true (and add
flushAutomatically = true if you expect pre-flush before delete) so the
EntityManager is flushed/cleared after the bulk operation; make the same change
in MonthlyProductRankJpaRepository and add an E2E test that runs the weekly
aggregate twice for the same baseDate to assert mv_product_rank_weekly contains
exactly the configured TOP N after consecutive runs.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java`:
- Around line 90-99: The test currently hard-codes the date via the fixed string dateKey ("20260412") and LocalDate.of(2026, 4, 12), which risks flakiness; make the test time-independent: generate the dateKey in yyyyMMdd format from LocalDate.now(ZoneId.of("Asia/Seoul")) inside the test (or, better, inject a fixed Clock and derive it from LocalDate.now(clock)), build the redisKey as KEY_PREFIX + dateKey, and pass the same computed date as requestDate to JobParametersBuilder so the test behaves reliably at month-end/year-end and across the KST midnight boundary; also extract the dateKey generation into a single place responsible for the yyyyMMdd format (a test helper or a variable in the same test method) for reuse, and make the assertions compare against the same LocalDate.now(ZoneId.of("Asia/Seoul")) baseline.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java`:
- Around line 55-60: The test uses a hard-coded BASE_DATE constant in
RankAggregationJobE2ETest which makes validateRequestDate fail when the real
clock moves; change the test to compute the base date relative to the current
date (e.g., LocalDate.now() or LocalDate.now(clock).minusDays/with(...) as
needed) or refactor production/test code to accept an injectable Clock and pass
a fixed Clock in tests; update references to BASE_DATE and any test setup that
relies on it so tests use the relative date or injected Clock, and add/adjust
edge-case tests for month/year boundaries to verify weekly/monthly range
calculations still behave correctly.
---
Nitpick comments:
In
`@apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingRepositoryImpl.java`:
- Around line 19-25: The DAILY_SQL query uses only "ORDER BY score DESC" which
yields nondeterministic ties and may not use the existing (product_id,
metric_date) index; change the query in DAILY_SQL to include a deterministic
tiebreaker (e.g., "ORDER BY score DESC, product_id ASC") and add a supporting
composite index for fast TOP N lookups (e.g., a migration that creates an index
on (metric_date, score DESC, product_id) or (metric_date, score, product_id));
after applying the migration, verify with EXPLAIN that the planner uses an index
range scan and add a test that repeatedly calls the API with multiple products
having identical scores to ensure stable ordering.
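The tiebreaker's effect is easy to verify outside SQL; the record and `topN` helper below are illustrative, mirroring "ORDER BY score DESC, product_id ASC" with a Java comparator:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Stream;

// Sketch of deterministic TOP-N selection: score descending, then productId
// ascending as the tiebreaker, so equal scores always order the same way.
public record Scored(long productId, double score) {
    public static List<Scored> topN(Stream<Scored> rows, int n) {
        return rows.sorted(Comparator.comparingDouble(Scored::score).reversed()
                        .thenComparingLong(Scored::productId))
                .limit(n)
                .toList();
    }
}
```

Without the `thenComparingLong` step, the relative order of tied rows would depend on encounter order, which is exactly the instability the review flags in the SQL.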
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1ApiSpec.java`:
- Around line 18-19: The `period` parameter of the API method in RankingV1ApiSpec.java is declared only as a String, so it does not render as a selectable enum in Swagger UI; keep the String type as-is, but add schema information to the parameter's `@Parameter` annotation (`@Schema`(allowableValues = {"DAILY","WEEKLY","MONTHLY"})) so the enum values are exposed in the schema (target identifier: the `period` parameter, file: RankingV1ApiSpec.java).
In
`@apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingV1ApiE2ETest.java`:
- Around line 113-146: The test currently dereferences response.getBody() (in
RankingV1ApiE2ETest assertions like response.getBody().data().rankings()) which
will produce an NPE that masks real errors; update the assertions to first
assertThat(response.getBody()).isNotNull() or use AssertJ extracting chains
(e.g., extracting("data").extracting("rankings")...) before checking
size/contents, and ensure the exchange() call result is validated for non-OK
status when getBody() is null; additionally add boundary tests in this class
covering period= WEEKLY and MONTHLY with size limiting, invalid date formats, a
future date, and a case-insensitive period value (e.g., "daily") to cover
MV/parser edge cases.
- Around line 84-92: The tearDown method currently duplicates cleanup by
manually toggling foreign key checks and truncating product_metrics,
mv_product_rank_weekly and mv_product_rank_monthly via jdbcTemplate before
calling databaseCleanUp.truncateAllTables(); remove the manual
jdbcTemplate.execute(...) calls and rely solely on
databaseCleanUp.truncateAllTables() inside tearDown (method name: tearDown,
symbol: databaseCleanUp.truncateAllTables) and, if those MV tables are not
covered, update DatabaseCleanUp to include them (class/method:
DatabaseCleanUp.truncateAllTables) and add a unit test to assert DatabaseCleanUp
actually truncates the new MV tables.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java`:
- Around line 94-101: Replace the generic RuntimeException thrown from
validateRequestDate() with a meaningful exception type (e.g.,
IllegalArgumentException or a new InvalidBatchParameterException) and preserve
cause information; update validateRequestDate() to throw
IllegalArgumentException for null requestDate and for future dates with clear
user-facing messages while ensuring any wrapped lower-level cause is passed
through if applicable, and add unit tests asserting the thrown exception type
and message for the null and future-date cases; keep user-facing messages
separate from log messages when you log the error elsewhere.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java`:
- Around line 52-60: The calls weeklyRankStep(null) and monthlyRankStep(null) in
rankAggregationJob() rely on `@JobScope` proxying to defer parameter binding and
should be explicitly documented or made safer; either add a short inline comment
above those calls stating that null is passed intentionally because
weeklyRankStep(...) and monthlyRankStep(...) are `@JobScope` proxies whose real
JobParameters are bound at runtime, or refactor the step bean implementations
(the methods weeklyRankStep, monthlyRankStep and validateRequestDate usage) to
fetch parameters via JobParametersExtractor/ExecutionContext or
stepBuilderFactory patterns instead of passing null so a future removal of
`@JobScope` won't cause a null to reach validateRequestDate.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/RankAggregationReader.java`:
- Around line 12-19: The SQL in AGGREGATION_SQL inside RankAggregationReader
uses only "ORDER BY total_score DESC", which can produce non-deterministic
ordering for tied scores; update the query to include a deterministic tiebreaker
such as "product_id" (e.g., ORDER BY total_score DESC, pm.product_id ASC) so TOP
100 selection is stable across runs, and add a test that verifies repeated
executions produce the same ordering for rows with identical total_score.
In
`@apps/commerce-batch/src/main/java/com/loopers/domain/productrank/BaseProductRank.java`:
- Around line 36-50: Split the single guard() into per-field guard methods to
make validations reusable: create private methods guardProductId() to check
productId != null, guardTotalScore() to check totalScore >= 0, guardRanking() to
check ranking >= 1, and guardBaseDate() to check baseDate != null, then have
guard() call these methods; reference BaseProductRank, guard(), productId,
totalScore, ranking, and baseDate when implementing.
- Around line 20-21: The field BaseProductRank.ranking uses the column name
"ranking" which can clash with reserved identifiers in some RDBMS/reporting
tools; update the mapping to a neutral name (e.g., change the `@Column` name to
"product_ranking" and rename the Java field or keep field name but map to
"product_ranking") and propagate this change to schema migrations, SQL queries,
DTOs and any uses of BaseProductRank to ensure consistency; alternatively, if
you must keep "ranking", apply a project-wide escape strategy for column names
and document it.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/MonthlyProductRankJpaRepository.java`:
- Around line 11-16: The JPQL bulk delete in
MonthlyProductRankJpaRepository.deleteByBaseDate bypasses the persistence
context which can leave the first-level cache out of sync if the same
transaction later reuses entity instances (e.g., future changes to
WeeklyRankWriter/MonthlyRankWriter). To fix: either (a) add
`@Modifying`(clearAutomatically = true, flushAutomatically = true) to the
deleteByBaseDate method to flush/clear the persistence context after the bulk
delete, or (b) add a concise code comment on
MonthlyProductRankJpaRepository.deleteByBaseDate warning about
persistence-context invalidation and advising callers to clear the EntityManager
or avoid reusing managed entities in the same transaction; choose one of these
and apply consistently.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/RankingSnapshotRedisRepository.java`:
- Around line 36-44: In RankingSnapshotRedisRepository's loop over
ZSetOperations.TypedTuple<String> (the block that currently calls
tuple.getScore() and new ProductScore(productId, score)), guard against null
scores by checking if tuple.getScore() == null before unboxing; if null, log a
warning (similar to the NumberFormatException case) and continue/skip that tuple
so you do not trigger a NullPointerException when creating the
ProductScore(double). Update the try/catch to only handle NumberFormatException
for Long.valueOf and avoid relying on NPE, and add a unit test that mixes a
TypedTuple with a null score to verify only that entry is skipped and others are
processed.
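The skip logic can be sketched without Redis; here a `Map.Entry<String, Double>` stands in for `ZSetOperations.TypedTuple<String>`, and the `ProductScore` record is a simplified stand-in for the real type:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of null-safe tuple handling: null scores and non-numeric members are
// each skipped individually instead of aborting the whole snapshot.
public record ProductScore(long productId, double score) {

    public static List<ProductScore> fromTuples(Map<String, Double> tuples) {
        List<ProductScore> scores = new ArrayList<>();
        for (Map.Entry<String, Double> tuple : tuples.entrySet()) {
            Double score = tuple.getValue();
            if (score == null) {
                // Guard before unboxing: a null score would otherwise throw
                // NullPointerException when constructing ProductScore(double).
                continue;
            }
            try {
                scores.add(new ProductScore(Long.valueOf(tuple.getKey()), score));
            } catch (NumberFormatException e) {
                // Non-numeric member: skip only this entry.
            }
        }
        return scores;
    }
}
```

The catch block stays narrow (NumberFormatException only), matching the review's point that the code should not rely on catching an NPE.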
In
`@apps/commerce-batch/src/test/java/com/loopers/domain/productmetrics/ProductMetricsUnitTest.java`:
- Around line 12-50: Add boundary and message assertions to ProductMetrics unit
tests: add a new test (or extend createsWithValidValues) that constructs
ProductMetrics with score = 0.0 and asserts creation succeeds and fields match;
update the failure tests for ProductMetrics constructor (failsWithNullProductId,
failsWithNullMetricDate, failsWithNegativeScore) to also assert the thrown
IllegalArgumentException's message contains the expected keywords (e.g.,
"productId", "metricDate", "score" or "non-negative") so the tests validate both
exception type and intent.
In
`@apps/commerce-batch/src/test/java/com/loopers/domain/productrank/MonthlyProductRankUnitTest.java`:
- Around line 13-62: The tests duplicate validation across
WeeklyProductRankUnitTest and MonthlyProductRankUnitTest and miss
boundary-success cases; extract and parametrize guard rule tests into a single
BaseProductRank test suite that exercises BaseProductRank.guard() with negative,
zero, and valid boundary inputs (include totalScore == 0 and ranking == 1 as
success cases), then simplify MonthlyProductRankUnitTest and
WeeklyProductRankUnitTest to only verify value mapping on construction (e.g.,
MonthlyProductRank and WeeklyProductRank constructors produce expected
productId, totalScore, ranking, baseDate) while removing duplicated validation
assertions; ensure references to MonthlyProductRank, WeeklyProductRank, and
BaseProductRank.guard() are used to locate and change the tests.
In
`@apps/commerce-batch/src/test/java/com/loopers/domain/productrank/WeeklyProductRankUnitTest.java`:
- Around line 41-53: Add explicit boundary-success unit tests to ensure
BaseProductRank.guard() accepts the allowed edges: add tests in
WeeklyProductRankUnitTest that construct a WeeklyProductRank with totalScore ==
0 (e.g., new WeeklyProductRank(1L, 0.0, 1, LocalDate.of(...))) and with ranking
== 1 (and totalScore non-negative) and assert no exception is thrown; reference
the WeeklyProductRank constructor and BaseProductRank.guard() so the tests will
fail if the guard is changed to reject these boundary values.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java`:
- Around line 183-214: The test skipsInvalidRedisValues should be extended to
cover more boundary invalid Redis values and assert the step's write count: add
extra invalid entries into the same redis key (e.g., empty string "", whitespace
" ", a negative numeric string like "-1", and an out-of-range numeric string
such as a value > Long.MAX_VALUE) so RankingSnapshotRedisRepository's parsing
skip logic is exercised for multiple failure modes, then after launching the job
assert both that productMetricsJpaRepository.findAll() has size 2 with
productIds 1 and 3 and that the job/step execution's writeCount (from the
StepExecution or JobExecution returned by jobLauncherTestUtils.launchJob) equals
2 to verify only valid records were written.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java`:
- Around line 264-311: Test only asserts weekly MV idempotency; you must also
verify monthly MV is replaced on rerun by querying
monthlyProductRankJpaRepository after launching jobParameters2 and adding
assertions similar to weekly checks: assert exit status completed, fetch
List<MonthlyProductRank> monthlyRanks =
monthlyProductRankJpaRepository.findAll(), then assert expected size and that
the top ranking entry reflects productId 2 and totalScore 200.0 (use the same
run re-execution scenario in replacesExistingDataOnRerun to ensure both weekly
and monthly steps are validated).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 941314f7-c511-4614-a021-26f8a4c6e80d
📒 Files selected for processing (37)
- apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingFacade.java
- apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingInfo.java
- apps/commerce-api/src/main/java/com/loopers/domain/ranking/RankPeriod.java
- apps/commerce-api/src/main/java/com/loopers/domain/ranking/RankingRepository.java
- apps/commerce-api/src/main/java/com/loopers/domain/ranking/RankingService.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1ApiSpec.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Dto.java
- apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingV1ApiE2ETest.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/DailyMetricsSnapshotJobConfig.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/MonthlyRankWriter.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/ProductScoreAggregation.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/RankAggregationReader.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/WeeklyRankWriter.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productmetrics/ProductMetrics.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productmetrics/ProductMetricsRepository.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productrank/BaseProductRank.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productrank/MonthlyProductRank.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productrank/MonthlyProductRankRepository.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productrank/WeeklyProductRank.java
- apps/commerce-batch/src/main/java/com/loopers/domain/productrank/WeeklyProductRankRepository.java
- apps/commerce-batch/src/main/java/com/loopers/domain/ranking/RankingSnapshotRepository.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productmetrics/ProductMetricsJpaRepository.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productmetrics/ProductMetricsRepositoryImpl.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/MonthlyProductRankJpaRepository.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/MonthlyProductRankRepositoryImpl.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/WeeklyProductRankJpaRepository.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/WeeklyProductRankRepositoryImpl.java
- apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/RankingSnapshotRedisRepository.java
- apps/commerce-batch/src/test/java/com/loopers/domain/productmetrics/ProductMetricsUnitTest.java
- apps/commerce-batch/src/test/java/com/loopers/domain/productrank/MonthlyProductRankUnitTest.java
- apps/commerce-batch/src/test/java/com/loopers/domain/productrank/WeeklyProductRankUnitTest.java
- apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java
- apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java
```java
@Transactional(readOnly = true)
public List<RankingRepository.RankingEntry> getRanking(RankPeriod period, LocalDate date, int size) {
    return switch (period) {
        case DAILY -> rankingRepository.findDailyRanking(date, size);
        case WEEKLY -> rankingRepository.findWeeklyRanking(date, size);
        case MONTHLY -> rankingRepository.findMonthlyRanking(date, size);
    };
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
fd -t f 'RankingRepositoryImpl.java' | xargs -I{} sh -c 'echo "=== {} ==="; cat "{}"'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2526

🏁 Script executed:

```shell
# Find ApplicationService or other service classes that use RankingService
rg -t java 'RankingService' --max-count=20 | head -30
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 458

🏁 Script executed:

```shell
# Check the RankingService class definition to understand its role
fd -t f 'RankingService.java' | xargs -I{} sh -c 'echo "=== {} ==="; head -40 "{}"'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 959

🏁 Script executed:

```shell
# Check RankingFacade to see if it has @Transactional
fd -t f 'RankingFacade.java' | xargs -I{} sh -c 'echo "=== {} ==="; cat "{}"'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 802
@Transactional should move to the Application layer (RankingFacade)
According to the architecture principle in PR#189, declaring @Transactional is the responsibility of the Application layer (ApplicationService). Since RankingService currently sits in the Domain layer, remove @Transactional(readOnly = true) from it and declare @Transactional(readOnly = true) on the Application-layer RankingFacade.getRanking() method instead. The Infrastructure-layer RankingRepositoryImpl is JdbcTemplate-based, so it correctly carries no @Transactional.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/domain/ranking/RankingService.java`
around lines 16 - 23, Remove the `@Transactional`(readOnly = true) annotation from
RankingService.getRanking and add `@Transactional`(readOnly = true) to the
corresponding Application layer method RankingFacade.getRanking; ensure the
domain class RankingService.getRanking remains plain (no transactional) and do
not add transactions to the infrastructure implementation RankingRepositoryImpl
(JdbcTemplate) so transaction boundaries are defined at the Application service
layer.
```java
private RankPeriod parseRankPeriod(String period) {
    try {
        return RankPeriod.valueOf(period.toUpperCase());
    } catch (IllegalArgumentException e) {
        throw new CoreException(ErrorType.BAD_REQUEST, "잘못된 period 값입니다: " + period);
    }
}
```
toUpperCase() is locale-dependent, which can cause parsing failures depending on the production environment.
In an environment where the JVM default locale is Turkish (tr), "daily".toUpperCase() becomes "DAİLY", so RankPeriod.valueOf always fails and the entire ranking API responds only with 400. Specify Locale.ROOT explicitly so the result does not sway with the locale settings of the container image or production environment.
♻️ Suggested fix

```diff
-    private RankPeriod parseRankPeriod(String period) {
-        try {
-            return RankPeriod.valueOf(period.toUpperCase());
-        } catch (IllegalArgumentException e) {
-            throw new CoreException(ErrorType.BAD_REQUEST, "잘못된 period 값입니다: " + period);
-        }
-    }
+    private RankPeriod parseRankPeriod(String period) {
+        try {
+            return RankPeriod.valueOf(period.toUpperCase(java.util.Locale.ROOT));
+        } catch (IllegalArgumentException | NullPointerException e) {
+            throw new CoreException(ErrorType.BAD_REQUEST, "잘못된 period 값입니다: " + period);
+        }
+    }
```

Additional test: add a unit test verifying that a period=daily request still maps to DAILY while Locale.setDefault(new Locale("tr")) is in effect.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java`
around lines 45 - 51, parseRankPeriod currently uses period.toUpperCase() which
is locale-dependent and can fail (e.g., Turkish locale); change it to use
period.toUpperCase(Locale.ROOT) before calling RankPeriod.valueOf to ensure
deterministic ASCII uppercasing, keep the same IllegalArgumentException handling
and CoreException throw, and add a unit test that sets Locale.setDefault(new
Locale("tr")) and verifies parseRankPeriod (or the controller endpoint) maps
"daily" to RankPeriod.DAILY.
```java
productMetricsRepository.deleteByMetricDate(requestDate);
saveInBatches(metricsList);

contribution.incrementWriteCount(metricsList.size());
log.info("product_metrics 적재 완료 - 적재: {} 건, skip: {} 건", metricsList.size(), skipCount);

return RepeatStatus.FINISHED;
}

private void saveInBatches(List<ProductMetrics> metricsList) {
    for (int i = 0; i < metricsList.size(); i += BATCH_SIZE) {
        int end = Math.min(i + BATCH_SIZE, metricsList.size());
        productMetricsRepository.saveAll(metricsList.subList(i, end));
        entityManager.flush();
        entityManager.clear();
    }
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Check whether any hibernate batch settings exist
rg -nP --type=yaml --type=properties '(hibernate\.jdbc\.batch_size|order_inserts|order_updates)' -C2
# Check the ProductMetrics ID generation strategy (IDENTITY disables batch inserts)
rg -nP --type=java -C3 '@GeneratedValue' -g '**/ProductMetrics.java'
# Check the transaction boundary along the saveInBatches path
ast-grep --pattern $'@Transactional
$$$
public $_ deleteByMetricDate($$$) { $$$ }'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 119

🏁 Script executed:

```shell
#!/bin/bash
# Check hibernate batch settings in application.yml
rg -nP --type=yaml 'hibernate\.jdbc\.batch_size|order_inserts|order_updates' -C2
# Locate the ProductMetrics file
find . -name 'ProductMetrics.java' -type f
# Check the total length of DailyMetricsSnapshotTasklet
wc -l ./apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 287

🏁 Script executed:

```shell
#!/bin/bash
# Check the ID generation strategy of ProductMetrics
rg -nP --type=java '@GeneratedValue' -C3 | grep -A3 'ProductMetrics'
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 67

🏁 Script executed:

```shell
#!/bin/bash
# Check @Transactional and the methods in DailyMetricsSnapshotTasklet
cat -n ./apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java | head -100
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 4710

🏁 Script executed:

```shell
#!/bin/bash
# Check the definition of deleteByMetricDate on productMetricsRepository
find . -name '*ProductMetricsRepository*' -type f
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 296

🏁 Script executed:

```shell
#!/bin/bash
# Check the ID generation strategy of ProductMetrics
cat -n ./apps/commerce-batch/src/main/java/com/loopers/domain/productmetrics/ProductMetrics.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1997

🏁 Script executed:

```shell
#!/bin/bash
# Check the ProductMetricsRepository interface
cat -n ./apps/commerce-batch/src/main/java/com/loopers/domain/productmetrics/ProductMetricsRepository.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 582

🏁 Script executed:

```shell
#!/bin/bash
# Check the ProductMetricsRepositoryImpl implementation
cat -n ./apps/commerce-batch/src/main/java/com/loopers/infrastructure/productmetrics/ProductMetricsRepositoryImpl.java
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 1488

🏁 Script executed:

```shell
#!/bin/bash
# Check DailyMetricsSnapshotJobConfig
find . -name 'DailyMetricsSnapshotJobConfig.java' -type f -exec cat -n {} \;
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2590

🏁 Script executed:

```shell
#!/bin/bash
# Find application.yml files and check hibernate settings
find . -name 'application*.yml' -type f | head -5
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 256

🏁 Script executed:

```shell
#!/bin/bash
# Check the ID generation strategy in BaseEntity
find . -name 'BaseEntity.java' -type f -exec cat -n {} \;
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2291

🏁 Script executed:

```shell
#!/bin/bash
# Check the hibernate settings in the batch application.yml
cat ./apps/commerce-batch/src/main/resources/application.yml
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 783

🏁 Script executed:

```shell
#!/bin/bash
# Check ProductMetricsJpaRepository
find . -name 'ProductMetricsJpaRepository.java' -type f -exec cat -n {} \;
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 934

🏁 Script executed:

```shell
#!/bin/bash
# Check the jpa.yml settings
find . -name 'jpa.yml' -type f -exec cat {} \;
```

Repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java
Length of output: 2231
Check the transactional safety of the batch save that follows the DELETE, and the batch-insert performance.
- Transaction boundary: the transaction is managed at the Step level, so if an exception occurs during saveInBatches after deleteByMetricDate, the whole Step rolls back and the DELETE alone can never be committed on its own. Still, add an explicit @Transactional(readOnly = false) alongside @Modifying on ProductMetricsJpaRepository.deleteByMetricDate() to make the query's intent clear.
- Batch inserts are disabled: BaseEntity uses GenerationType.IDENTITY, and the configuration contains no hibernate.jdbc.batch_size, order_inserts=true, or order_updates=true at all. The IDENTITY strategy itself disables Hibernate batching, so even the saveAll calls sliced by BATCH_SIZE = 1000 are not sent as JDBC batches. To get real batch performance, switch the ID strategy to GenerationType.SEQUENCE or GenerationType.AUTO (with a sequence dialect) and add the batch options to the configuration.
- Preventing memory leaks: flush() and clear() are called explicitly to empty the persistence context; document the purpose and its relationship to the batch size in a comment, and add a test that monitors whether actual memory usage matches expectations.
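The slicing logic behind saveInBatches can be exercised on its own; the sketch below replaces persistence with collecting the batches, so only the subList arithmetic is under test (`BatchSplitter` is a hypothetical helper, not code from the PR):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the subList batching used by saveInBatches. In the real
// tasklet each slice would be saved via the repository, followed by
// entityManager.flush()/clear() to cap memory at roughly one batch of entities.
public class BatchSplitter {
    public static List<List<Integer>> split(List<Integer> items, int batchSize) {
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // Math.min guards the final, possibly short, batch.
            int end = Math.min(i + batchSize, items.size());
            batches.add(items.subList(i, end));
        }
        return batches;
    }
}
```

Note that subList returns views over the original list, which is fine here because the slices are consumed immediately.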
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java`
around lines 76 - 92, Ensure transactional intent and batch insert behavior are
explicit: add `@Transactional`(readOnly = false) to
ProductMetricsJpaRepository.deleteByMetricDate() so the modifying delete is
clear; document in a comment inside saveInBatches(List<ProductMetrics>) why
entityManager.flush() and entityManager.clear() are called and how BATCH_SIZE
affects memory; if you need real JDBC batching change BaseEntity ID generation
away from GenerationType.IDENTITY to a sequence-based strategy (e.g.,
GenerationType.SEQUENCE or AUTO with sequence dialect) and enable
hibernate.jdbc.batch_size plus order_inserts/order_updates in config so calls
from saveInBatches actually use JDBC batch inserts.
```java
if (requestDate == null) {
    throw new RuntimeException("requestDate is required");
}
if (requestDate.isAfter(LocalDate.now())) {
    throw new RuntimeException("requestDate는 미래 날짜일 수 없습니다: " + requestDate);
}
```
LocalDate.now() depends on the JVM default timezone, so the validation can waver with the batch server environment.
If the batch container/server timezone is UTC while the business baseline is KST, a date near midnight can be judged as "future" when it is not, or vice versa. It is safer to pin the operational timezone explicitly so the check behaves deterministically.
♻️ Example fix

```diff
-        if (requestDate.isAfter(LocalDate.now())) {
+        if (requestDate.isAfter(LocalDate.now(ZoneId.of("Asia/Seoul")))) {
             throw new IllegalArgumentException("requestDate는 미래 날짜일 수 없습니다: " + requestDate);
         }
```

Additional test: making the system Clock injectable and verifying the boundary (today/tomorrow/yesterday) deterministically in unit tests is recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java`
around lines 95 - 100, The validation uses LocalDate.now() which depends on the
JVM default timezone; update DailyMetricsSnapshotTasklet to use an injected
Clock (or at least an explicit ZoneId) so the "today" boundary is deterministic:
replace calls to LocalDate.now() with LocalDate.now(clock) and add a constructor
/ field for Clock (or pass ZoneId.of("Asia/Seoul") if you must) so production
wiring sets Clock.system(ZoneId.of("Asia/Seoul")) (and unit tests can supply
Clock.fixed(...) to exercise yesterday/today/tomorrow edges); update the
requestDate null/after check to use LocalDate.now(clock) and add tests for the
boundary cases.
```java
private void validateRequestDate(LocalDate requestDate) {
    if (requestDate == null) {
        throw new RuntimeException("requestDate is required");
    }
    if (requestDate.isAfter(LocalDate.now())) {
        throw new RuntimeException("requestDate는 미래 날짜일 수 없습니다: " + requestDate);
    }
}
```
🛠️ Refactor suggestion | 🟠 Major
Replace the generic RuntimeException, specify a timezone, and clean up the validation logic duplicated with DailyMetricsSnapshotTasklet.
- Exception type: switch to a meaningful type such as IllegalArgumentException so callers can distinguish the cause of a batch failure. Per the guidelines, also consider preserving the cause and separating user-facing from log messages.
- Timezone: LocalDate.now() depends on the JVM default timezone, so when the batch container timezone differs from the business baseline, a date near midnight can be wrongly judged as future. Specify the operational timezone explicitly, e.g. ZoneId.of("Asia/Seoul").
- Duplication: the same validation is nearly identical to validateRequestDate in apps/commerce-batch/src/main/java/com/loopers/batch/job/dailymetrics/step/DailyMetricsSnapshotTasklet.java. Extract it into a shared utility (e.g. BatchRequestDateValidator) to stay DRY.
♻️ Example fix

```diff
-    private void validateRequestDate(LocalDate requestDate) {
-        if (requestDate == null) {
-            throw new RuntimeException("requestDate is required");
-        }
-        if (requestDate.isAfter(LocalDate.now())) {
-            throw new RuntimeException("requestDate는 미래 날짜일 수 없습니다: " + requestDate);
-        }
-    }
+    private void validateRequestDate(LocalDate requestDate) {
+        if (requestDate == null) {
+            throw new IllegalArgumentException("requestDate is required");
+        }
+        if (requestDate.isAfter(LocalDate.now(ZoneId.of("Asia/Seoul")))) {
+            throw new IllegalArgumentException("requestDate는 미래 날짜일 수 없습니다: " + requestDate);
+        }
+    }
```

Additional test: when extracting into a shared utility, unit-test all four cases (null/future/today/past) directly, asserting both the exception type and its message.
As per coding guidelines: "Preserve the cause when handling exceptions, and separate user-facing messages from log messages."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/RankAggregationJobConfig.java`
around lines 110 - 117, Replace the ad-hoc validateRequestDate(LocalDate) in
RankAggregationJobConfig with a shared utility (e.g.,
BatchRequestDateValidator.validate(requestDate, ZoneId.of("Asia/Seoul"))) and
remove the duplicate method from DailyMetricsSnapshotTasklet; the validator
should throw IllegalArgumentException (preserving an optional cause when
applicable), compare requestDate against LocalDate.now(zone) to avoid JVM
default timezone issues, and produce separate user-facing exception messages vs.
log messages; add unit tests for null/future/today/past cases to assert
exception type and messages.
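A minimal sketch of the shared validator proposed above (BatchRequestDateValidator is the reviewer's suggested name, not existing code; the Clock-accepting overload is an added assumption to make the "today" boundary unit-testable):

```java
import java.time.Clock;
import java.time.LocalDate;
import java.time.ZoneId;

// Hypothetical shared validator, as proposed in the review above.
public final class BatchRequestDateValidator {

    private BatchRequestDateValidator() {
    }

    // Convenience entry point: "today" is evaluated in the given business timezone.
    public static void validate(LocalDate requestDate, ZoneId zone) {
        validate(requestDate, Clock.system(zone));
    }

    // Clock-accepting overload so unit tests can pin time with Clock.fixed(...).
    public static void validate(LocalDate requestDate, Clock clock) {
        if (requestDate == null) {
            throw new IllegalArgumentException("requestDate is required");
        }
        if (requestDate.isAfter(LocalDate.now(clock))) {
            throw new IllegalArgumentException(
                    "requestDate must not be a future date: " + requestDate);
        }
    }
}
```

Both callers (RankAggregationJobConfig and DailyMetricsSnapshotTasklet) could then delegate to BatchRequestDateValidator.validate(requestDate, ZoneId.of("Asia/Seoul")).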
    int rankOffset = getRankOffset(chunk);

    for (int i = 0; i < chunk.size(); i++) {
        ProductScoreAggregation agg = chunk.getItems().get(i);
        ranks.add(new WeeklyProductRank(
            agg.productId(),
            agg.totalScore(),
            rankOffset + i + 1,
            baseDate
        ));
    }

    weeklyProductRankRepository.saveAll(ranks);
    log.info("weekly 랭킹 적재 - {} 건 (ranking {} ~ {})",
        ranks.size(), rankOffset + 1, rankOffset + ranks.size());
}

private int getRankOffset(Chunk<? extends ProductScoreAggregation> chunk) {
    // TOP 100, so with chunkSize >= 100 everything is handled in the first chunk
    // Safety guard in case chunkSize is ever smaller than 100
    return 0;
}
Returning a fixed 0 from getRankOffset is a latent source of ranking errors.
The current code relies on the Reader's LIMIT 100 combined with CHUNK_SIZE >= 100, assuming processing always finishes in the first chunk. If CHUNK_SIZE is later lowered, or LIMIT is changed for monitoring/tuning purposes, then from the second chunk onward i+1 restarts at 1, causing either a violation of the (baseDate, product_id) unique constraint or duplicated rankings — a serious data defect. Operationally, this structure invites regressions of the "changed one config line and the weekly/monthly ranking batch broke" kind. Maintaining a cumulative rank across chunks in a field counter is safer.
♻️ Suggested fix
private final WeeklyProductRankRepository weeklyProductRankRepository;
private final LocalDate baseDate;
private final AtomicBoolean deleted = new AtomicBoolean(false);
+ private final AtomicInteger rankCounter = new AtomicInteger(0);
@@
- List<WeeklyProductRank> ranks = new ArrayList<>();
- int rankOffset = getRankOffset(chunk);
-
- for (int i = 0; i < chunk.size(); i++) {
- ProductScoreAggregation agg = chunk.getItems().get(i);
- ranks.add(new WeeklyProductRank(
- agg.productId(),
- agg.totalScore(),
- rankOffset + i + 1,
- baseDate
- ));
- }
+ List<WeeklyProductRank> ranks = new ArrayList<>();
+ int startRank = rankCounter.get() + 1;
+ for (ProductScoreAggregation agg : chunk.getItems()) {
+ ranks.add(new WeeklyProductRank(
+ agg.productId(),
+ agg.totalScore(),
+ rankCounter.incrementAndGet(),
+ baseDate
+ ));
+ }
@@
- weeklyProductRankRepository.saveAll(ranks);
- log.info("weekly 랭킹 적재 - {} 건 (ranking {} ~ {})",
- ranks.size(), rankOffset + 1, rankOffset + ranks.size());
- }
-
- private int getRankOffset(Chunk<? extends ProductScoreAggregation> chunk) {
- // TOP 100이므로 chunkSize >= 100이면 항상 첫 번째 chunk에서 모두 처리됨
- // 만약 chunkSize가 100보다 작은 경우를 대비한 안전 장치
- return 0;
- }
+ weeklyProductRankRepository.saveAll(ranks);
+ log.info("weekly 랭킹 적재 - {} 건 (ranking {} ~ {})",
+ ranks.size(), startRank, rankCounter.get());
+ }

Additional test: lower CHUNK_SIZE to 50 and feed in 120 rows, then verify that the 100 saved rows carry unique rankings 1..100. The same structural issue exists in MonthlyRankWriter, so apply the identical fix there.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/rankaggregate/step/WeeklyRankWriter.java`
around lines 31 - 52, getRankOffset currently returns 0 causing
duplicate/overlapping ranks across chunks; change to maintain a cumulative
counter in WeeklyRankWriter (e.g. a private int rankCounter field) that
getRankOffset returns, initialize it at step start (implement `@BeforeStep` or an
open method) and after weeklyProductRankRepository.saveAll(ranks) increment
rankCounter by ranks.size(); apply the same pattern to MonthlyRankWriter so each
chunk uses the running offset rather than always starting at 0.
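The cumulative-counter idea from the fix above can be illustrated with a plain-Java sketch (no Spring Batch types; CumulativeRankWriter and writeChunk are illustrative names): the counter lives on the writer instance, so ranks keep increasing across chunks instead of restarting at 1 for every chunk.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java sketch of the cumulative-rank pattern proposed in the review.
class CumulativeRankWriter {
    private final AtomicInteger rankCounter = new AtomicInteger(0);

    // Each call stands in for one chunk's write(); items are assumed to
    // arrive already sorted by score descending, as the reader guarantees.
    List<Integer> writeChunk(List<Long> productIds) {
        List<Integer> ranks = new ArrayList<>();
        for (Long ignored : productIds) {
            // Rank is the running counter, not the index within this chunk.
            ranks.add(rankCounter.incrementAndGet());
        }
        return ranks;
    }
}
```

With CHUNK_SIZE 50 and 100 items, the second chunk continues at rank 51 instead of colliding with ranks 1..50 from the first chunk.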
@Modifying
@Query("DELETE FROM ProductMetrics pm WHERE pm.metricDate = :metricDate")
void deleteByMetricDate(@Param("metricDate") LocalDate metricDate);
The @Modifying bulk DELETE needs persistence-context synchronization options.
DailyMetricsSnapshotTasklet calls deleteByMetricDate(requestDate) and then saveAll(...) within the same transaction. With the current settings there are two operational risks:
- If entities were already loaded in the same transaction (e.g. via findByMetricDate), the bulk DELETE bypasses the persistence context, leaving stale entities behind; a later merge/flush can re-insert them or fail with EntityExistsException / a unique-constraint violation.
- If the persistence context holds not-yet-flushed changes for the same metric_date, the DELETE will not reflect them, and the batch can abort on a (product_id, metric_date) unique-constraint conflict.
For re-run idempotency, declare clearAutomatically = true (and, where appropriate, flushAutomatically = true as well).
🔧 Suggested fix
- @Modifying
+ @Modifying(clearAutomatically = true, flushAutomatically = true)
  @Query("DELETE FROM ProductMetrics pm WHERE pm.metricDate = :metricDate")
  void deleteByMetricDate(@Param("metricDate") LocalDate metricDate);

Additional test: add an integration test that, within a single transaction, calls saveAll → deleteByMetricDate → saveAll and verifies the final result matches the last saveAll.
As per coding guidelines: "Check for bottlenecks under large data volumes. Check for lazy-loading issues outside transactions and potential persistence-context pollution."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productmetrics/ProductMetricsJpaRepository.java`
around lines 16 - 18, The bulk DELETE method
ProductMetricsJpaRepository.deleteByMetricDate needs JPA modifying options to
avoid stale entities and unflushed state issues; update the `@Modifying`
annotation on the deleteByMetricDate(`@Param`("metricDate") LocalDate metricDate)
method to include clearAutomatically = true (and consider adding
flushAutomatically = true) so the persistence context is synchronized after the
bulk operation, and add/adjust integration tests in DailyMetricsSnapshotTasklet
flow to verify saveAll/deleteByMetricDate/saveAll idempotency.
@Modifying
@Query("DELETE FROM WeeklyProductRank r WHERE r.baseDate = :baseDate")
void deleteByBaseDate(@Param("baseDate") LocalDate baseDate);
The @Modifying bulk DELETE needs persistence-context synchronization options.
WeeklyRankWriter.write() calls deleteByBaseDate(baseDate) and then saveAll(ranks) within the same chunk loop and transaction, and subsequent chunks keep accumulating saves in the same transaction scope. This carries the following operational risks:
- If managed entities already exist in the transaction, the bulk DELETE bypasses the persistence context, leaving stale entities that can trigger (base_date, product_id) unique-constraint conflicts in subsequent saveAll calls.
- If transactions span chunks in this configuration, unflushed changes from a previous chunk are not reflected in the DELETE, and part of the aggregation result can be lost.
Declare clearAutomatically = true (with flushAutomatically = true if needed) to strengthen re-run idempotency.
🔧 Suggested fix
- @Modifying
+ @Modifying(clearAutomatically = true, flushAutomatically = true)
  @Query("DELETE FROM WeeklyProductRank r WHERE r.baseDate = :baseDate")
  void deleteByBaseDate(@Param("baseDate") LocalDate baseDate);

Additional test: add an E2E test that runs the weekly aggregation job twice in a row for the same baseDate and verifies the final row count in mv_product_rank_weekly matches exactly the TOP 100 (or configured value). The same fix is needed on the monthly side (MonthlyProductRankJpaRepository).
As per coding guidelines: "Check for lazy-loading issues outside transactions and potential persistence-context pollution."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
@Modifying(clearAutomatically = true, flushAutomatically = true)
@Query("DELETE FROM WeeklyProductRank r WHERE r.baseDate = :baseDate")
void deleteByBaseDate(@Param("baseDate") LocalDate baseDate);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/productrank/WeeklyProductRankJpaRepository.java`
around lines 13 - 15, The bulk DELETE method deleteByBaseDate in
WeeklyProductRankJpaRepository should declare synchronization to the persistence
context to avoid stale entities and unique-constraint conflicts; update the
`@Modifying` on deleteByBaseDate to include clearAutomatically = true (and add
flushAutomatically = true if you expect pre-flush before delete) so the
EntityManager is flushed/cleared after the bulk operation; make the same change
in MonthlyProductRankJpaRepository and add an E2E test that runs the weekly
aggregate twice for the same baseDate to assert mv_product_rank_weekly contains
exactly the configured TOP N after consecutive runs.
String dateKey = "20260412";
String redisKey = KEY_PREFIX + dateKey;
redisTemplate.opsForZSet().add(redisKey, "1", 150.5);
redisTemplate.opsForZSet().add(redisKey, "2", 300.0);
redisTemplate.opsForZSet().add(redisKey, "3", 75.2);

var jobParameters = new JobParametersBuilder()
    .addLocalDate("requestDate", LocalDate.of(2026, 4, 12))
    .addLong("run.id", 100L)
    .toJobParameters();
Hard-coded dates that depend on execution time carry flakiness risk.
"20260412" / LocalDate.of(2026, 4, 12) currently behaves as a past date, but over time the test keeps its meaning only as a fixed historical date; conversely, if the system clock is turned back, the isAfter(LocalDate.now()) validation rejects it and the test fails unintentionally. Making the test time-independent — a relative date based on LocalDate.now(ZoneId.of("Asia/Seoul")), or an injectable Clock — is better for operational and CI stability.
Additional tests: verify the yyyyMMdd key format is correct at month-end/year-end boundaries, and that behavior at the KST midnight boundary matches expectations.
As per coding guidelines: "Integration tests should check isolation level, flakiness potential, and test-data setup/teardown strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/dailymetrics/DailyMetricsSnapshotJobE2ETest.java`
around lines 90 - 99, the test hard-codes the date via the fixed string dateKey ("20260412") and LocalDate.of(2026, 4, 12), which creates flakiness risk, so make it time-independent: generate dateKey in yyyyMMdd format from LocalDate.now(ZoneId.of("Asia/Seoul")) (or, better, inject a fixed Clock and derive it via LocalDate.now(clock)), build redisKey as KEY_PREFIX + dateKey, and pass the same computed date as requestDate to JobParametersBuilder so the test behaves consistently at month-end/year-end and at the KST midnight boundary; also extract the dateKey generation into a single place that owns the yyyyMMdd format (a test helper or a variable in the same test method) for reuse, and align the assertions to compare against the LocalDate.now(ZoneId.of("Asia/Seoul")) baseline.
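A sketch of the time-independent date-key helper suggested above (the class and method names are illustrative, not part of the PR): injecting a Clock lets tests pin "today" instead of hard-coding a literal like "20260412", and keeps the yyyyMMdd format in one place.

```java
import java.time.Clock;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Illustrative helper owning the yyyyMMdd ranking-key format.
final class RankingDateKeys {
    private static final DateTimeFormatter KEY_FORMAT = DateTimeFormatter.ofPattern("yyyyMMdd");

    private RankingDateKeys() {
    }

    static String dateKey(LocalDate date) {
        return date.format(KEY_FORMAT);
    }

    // "Today" in whatever zone the Clock carries (e.g. Asia/Seoul in production,
    // Clock.fixed(...) in tests).
    static String todayKey(Clock clock) {
        return dateKey(LocalDate.now(clock));
    }
}
```

With Clock.fixed instants on either side of KST midnight, the year-end rollover behavior the review asks about can be asserted directly.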
private static final LocalDate BASE_DATE = LocalDate.of(2026, 4, 12);

@AfterEach
void tearDown() {
    databaseCleanUp.truncateAllTables();
}
A fixed BASE_DATE can become a source of flakiness as time passes.
With BASE_DATE = 2026-04-12, any run before that date makes validateRequestDate treat it as a future date and fails the entire test. Once it drifts far into the past, its meaning in CI logs also blurs, and regressions are harder to trace if external library behavior (calendar/ZoneId) changes. Operationally, an E2E test should produce the same result whenever it runs, so compute the base date relative to LocalDate.now(), or refactor so a Clock can be injected.
♻️ Suggested fix
- private static final LocalDate BASE_DATE = LocalDate.of(2026, 4, 12);
+ // Use "today" in the operational timezone as the base date so the test works whenever it runs.
+ private final LocalDate baseDate = LocalDate.now(ZoneId.of("Asia/Seoul")).minusDays(1);

Additional tests: add coverage for month/year boundaries (start of month, end of month, year-end → new year) to verify the weekly/monthly range calculations are correct.
As per coding guidelines: "Integration tests should check isolation level, flakiness potential, and test-data setup/teardown strategy."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/rankaggregate/RankAggregationJobE2ETest.java`
around lines 55 - 60, The test uses a hard-coded BASE_DATE constant in
RankAggregationJobE2ETest which makes validateRequestDate fail when the real
clock moves; change the test to compute the base date relative to the current
date (e.g., LocalDate.now() or LocalDate.now(clock).minusDays/with(...) as
needed) or refactor production/test code to accept an injectable Clock and pass
a fixed Clock in tests; update references to BASE_DATE and any test setup that
relies on it so tests use the relative date or injected Clock, and add/adjust
edge-case tests for month/year boundaries to verify weekly/monthly range
calculations still behave correctly.
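For the month/year boundary cases mentioned above, range helpers and their edge cases might look like the following sketch. The names and the Monday-based (ISO) week convention are assumptions — the PR's actual period logic is not shown in this diff.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

// Hypothetical period-range helpers for illustrating boundary tests.
final class RankPeriods {
    private RankPeriods() {
    }

    // Inclusive [start, end] of the ISO week (Mon..Sun) containing baseDate.
    static LocalDate[] weekRange(LocalDate baseDate) {
        LocalDate start = baseDate.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
        return new LocalDate[] { start, start.plusDays(6) };
    }

    // Inclusive [first day, last day] of the month containing baseDate.
    static LocalDate[] monthRange(LocalDate baseDate) {
        return new LocalDate[] {
                baseDate.with(TemporalAdjusters.firstDayOfMonth()),
                baseDate.with(TemporalAdjusters.lastDayOfMonth())
        };
    }
}
```

The interesting boundary is a base date like 2026-01-01, whose week starts in the previous year — exactly the kind of case the suggested edge tests should pin down.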
📌 Summary
This PR extends the Redis-based real-time ranking system to implement the full ranking data pipeline:
daily snapshots → weekly/monthly aggregation → read-optimized tables (MV) → period-based API queries.
The implementation on this branch consists of the following 5 commits:
- 7c9d5f4
- 0451247
- fcffa97
- 9abc3ce
- bfb798a

🧭 Why
The existing ranking structure worked well for Redis-based real-time lookups, but had limits for **time-based aggregation (weekly/monthly)**.
👉 Redis = current state
👉 What was needed = a structure that supports period-based aggregation
🏗️ Design Overview
Overall flow

flowchart TD
    A[User Activity] --> B[Redis Ranking]
    B --> C[Daily Snapshot Batch]
    C --> D[product_metrics]
    D --> E[Aggregation Batch]
    E --> F[Weekly Rank]
    E --> G[Monthly Rank]
    F --> H[Ranking API]
    G --> H
    D --> H

⚙️ Daily Snapshot Batch
Role
Converts Redis data into
👉 aggregation-ready base data (product_metrics).
Core logic
Key problem & solution
Problem:
Solution:
⚙️ Aggregation Batch
Role
👉 Generates weekly / monthly rankings
based on product_metrics
Flow

flowchart LR
    A[product_metrics] --> B[Period filter]
    B --> C[Group by product]
    C --> D[Sum scores]
    D --> E[Sort]
    E --> F[TOP 100]
    F --> G[Save to MV]

Design points
📦 MV tables
📡 API
GET /api/v1/rankings
Routing
🔧 Stability
🎯 Conclusion
👉 Work that builds "a structure in which rankings can be recomputed"
🚀 Results
✅ Checklist
Purpose of the change
Build a batch system supporting period-based (daily/weekly/monthly) aggregation and lookup on top of the Redis real-time ranking system, and implement a new API layer for it.
Key changes
Risks / caveats
Jobs must be triggered manually via the spring.batch.job.name parameter (dailyMetricsSnapshotJob or rankAggregationJob), but no automatic scheduling logic is implemented — confirm whether a separate scheduler (cron job, external orchestration) is needed.
Test / verification method