[volume-10] Spring Batch-based weekly/monthly ranking system implementation - 김윤선 #393
kimyam1008 wants to merge 8 commits into Loopers-dev-lab:kimyam1008 from
Conversation
📝 Walkthrough
Changes
Sequence Diagram
sequenceDiagram
actor User
participant Controller as RankingV1Controller
participant Facade as RankingFacade
participant Cache as RankingCacheStore
participant Repo as MvRankingRepository<br/>(weekly/monthly)
participant OldRepo as RankingRepository<br/>(daily)
participant Product as ProductRepository
participant Brand as BrandRepository
participant DB as Database
User->>Controller: GET /api/v1/rankings?period=weekly&date=...
Controller->>Facade: getRankings(period, date, page, size)
alt period = "weekly" or "monthly"
Facade->>Cache: getRankings(period, date, page, size)
alt Cache Hit
Cache-->>Facade: List<RankingItemInfo>
else Cache Miss
Facade->>Repo: findWeeklyRankings(aggregatedAt, page, size)
Repo->>DB: Query mv_product_rank_weekly
DB-->>Repo: List<MvProductRankWeekly>
Repo-->>Facade: List<MvProductRankWeekly>
Facade->>Product: findAllByIdIn(productIds)
Product->>DB: Query products
DB-->>Product: Product list
Product-->>Facade: Product list
Facade->>Brand: findAllByIdIn(brandIds)
Brand->>DB: Query brands
DB-->>Brand: Brand list
Brand-->>Facade: Brand list
Facade->>Facade: Build RankingItemInfo
Facade->>Cache: putRankings(period, date, page, size, result)
Cache->>DB: Store in Redis with period-based TTL
Cache-->>Facade: done
Facade-->>Controller: List<RankingItemInfo>
end
else period = "daily" or other
Facade->>Cache: getRankings(period, date, page, size)
alt Cache Hit
Cache-->>Facade: List<RankingItemInfo>
else Cache Miss
Facade->>OldRepo: findRankings(date, page, size)
OldRepo->>DB: Query rankings
DB-->>OldRepo: Ranking data
OldRepo-->>Facade: Ranking data
Facade->>Product: findAllByIdIn(productIds)
Product-->>Facade: Product list
Facade->>Brand: findAllByIdIn(brandIds)
Brand-->>Facade: Brand list
Facade->>Facade: Build RankingItemInfo
Facade->>Cache: putRankings(period, date, page, size, result)
Cache-->>Facade: done
Facade-->>Controller: List<RankingItemInfo>
end
end
Controller-->>User: JSON response
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
Key findings from an operations/failure/security perspective
1. Cache TTL strategy needs review — Problem: weekly/monthly rankings use a 7-day TTL, but the batch aggregates every day, so cache inconsistencies can occur at date boundaries. Suggested fix:
Additional test: verify whether stale cached data is returned after a batch run
2. Consistency guarantee for the saturation-function score calculation — Problem: Suggested fix:
Additional tests: verify score recalculation results when the configuration changes; confirm batch re-run idempotency
3. Metric data consistency — Problem: Suggested fix:
Additional test: verify the recorded metric date for events occurring around midnight
4. Handling composite unique constraint violations — Problem: Suggested fix:
Additional tests: verify concurrent upsert scenarios and recovery from constraint violations
5. Insufficient batch job monitoring — Problem: Suggested fix:
Additional test: verify batch stability when processing incomplete metric data
6. Missing pagination boundary validation — Problem: Suggested fix:
Additional test: boundary values (
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java (1)
98-102: ⚠️ Potential issue | 🟠 Major
The MV tables need to be cleaned up explicitly in the tearDown() method.
DatabaseCleanUp.truncateAllTables() only clears tables mapped with @Entity in the JPA metamodel, so the database views mv_product_rank_weekly and mv_product_rank_monthly are not included. If tearDown() does not clean these tables, data left over from a previous test can affect the next one and produce flaky tests. Fix it by cleaning up the MV tables explicitly, as follows:
Suggested fix
@AfterEach
void tearDown() {
    databaseCleanUp.truncateAllTables();
    redisCleanUp.truncateAll();
    jdbcTemplate.update("DELETE FROM mv_product_rank_weekly");
    jdbcTemplate.update("DELETE FROM mv_product_rank_monthly");
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java` around lines 98 - 102, tearDown() currently calls databaseCleanUp.truncateAllTables() and redisCleanUp.truncateAll() but does not clear the materialized view tables; modify the tearDown() method to explicitly delete rows from mv_product_rank_weekly and mv_product_rank_monthly by invoking jdbcTemplate.update("DELETE FROM mv_product_rank_weekly") and jdbcTemplate.update("DELETE FROM mv_product_rank_monthly") after the existing cleanup calls so those MV tables are cleared between tests.
🧹 Nitpick comments (10)
apps/commerce-streamer/src/test/java/com/loopers/application/metrics/MetricsEventServiceTest.java (1)
43-44: Verifying with any(LocalDate.class) alone will not catch date-aggregation regressions
The current verification only checks that the argument is a LocalDate, so the test still passes even when the date calculation is wrong (using the processing date, time-zone boundary offsets, etc.). To detect daily/weekly/monthly aggregation errors early in production, capture the actual date passed with an ArgumentCaptor<LocalDate> and compare it against a fixed expected value.
Example fix (one case)
+import org.mockito.ArgumentCaptor;
 ...
 metricsEventService.process("catalog.events.topic-v1", "event-4", "ORDER_PLACED", 10L, 3, null, EVENT_TIME);
 verify(eventHandledRepository).save(any(EventHandled.class));
-verify(productMetricsRepository).upsertSalesCount(eq(10L), any(LocalDate.class), eq(3));
+ArgumentCaptor<LocalDate> dateCaptor = ArgumentCaptor.forClass(LocalDate.class);
+verify(productMetricsRepository).upsertSalesCount(eq(10L), dateCaptor.capture(), eq(3));
+assertThat(dateCaptor.getValue()).isEqualTo(EVENT_TIME.atZone(ZoneId.of("Asia/Seoul")).toLocalDate());
An additional test should feed events at the UTC/KST date boundary to verify conversion accuracy.
As per coding guidelines, **/*Test*.java: check that unit tests cover boundary values, failure cases, and exception flows.
Also applies to: 55-56, 66-67, 77-78, 89-90, 101-102, 112-113
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-streamer/src/test/java/com/loopers/application/metrics/MetricsEventServiceTest.java` around lines 43 - 44, Replace the weak any(LocalDate.class) verifications in MetricsEventServiceTest with ArgumentCaptor<LocalDate> to capture the actual LocalDate passed to productMetricsRepository.upsertViewCount (and other similar calls) and assert it equals an expected fixed date (and add separate tests for UTC/KST boundary cases); specifically, add an ArgumentCaptor<LocalDate> instance, use verify(productMetricsRepository).upsertViewCount(eq(10L), dateCaptor.capture(), eq(1)), assertThat(dateCaptor.getValue()).isEqualTo(LocalDate.of(...)) (and repeat for the other upsertViewCount/upsertX calls referenced in the test), and add additional tests that simulate timestamps on UTC/KST boundaries to assert correct date conversion in the service.apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankWeekly.java (1)
10-40: Same structure as MvProductRankMonthly, so the same improvements apply.
Adding the @Immutable annotation is recommended, and the precision of the double-typed score field needs to be considered. Since the two entities are structurally identical, extracting a common abstract class is also an option, but the current structure is acceptable for the sake of keeping read-only MV entities simple.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankWeekly.java` around lines 10 - 40, Add the same improvements as applied to MvProductRankMonthly: mark the entity MvProductRankWeekly as read-only by annotating the class with `@Immutable`, and change the primitive double score field to a BigDecimal (or a wrapper with defined precision/scale via `@Column`) to avoid precision loss; optionally consider extracting a common abstract base for shared fields if you later want to deduplicate, but minimally update the MvProductRankWeekly class signature, imports, and the score field type and its `@Column` precision/scale metadata.apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankMonthly.java (2)
23-24: Floating-point precision issues should be considered when the score field uses a double.
The saturation-based score (x/(x+k)) involves frequent fractional arithmetic, and the double type can lose precision in floating-point operations. Current tests compare down to four decimal places (0.1462, 0.2917, etc.), so rounding errors can make them flaky.
Operational perspective:
- Tiny differences when comparing ranking scores can swap ranks
- Mismatches between batch aggregation results and API responses become harder to debug
Suggested fix:
Use BigDecimal, or if the current precision requirements are low, state that explicitly in documentation
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankMonthly.java` around lines 23 - 24, The score field in class MvProductRankMonthly uses primitive double which can introduce floating-point precision errors for saturation-based scores; change the field type to java.math.BigDecimal (update the field declaration, any getters/setters, constructors, equals/hashCode and usages in MvProductRankMonthly) and update the JPA mapping on the score column to specify precision and scale via `@Column`(precision = ..., scale = ...) to ensure consistent persistence; also update any calculations that assign to score to produce/scale BigDecimal values (or convert consistently using BigDecimal.valueOf and setScale) and update tests to compare BigDecimal values or use an appropriate comparator.
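A minimal sketch of the BigDecimal mapping suggested above; the precision/scale of (10, 4) is an assumption chosen to match the four-decimal-place comparisons in the tests, not a value taken from the PR:

```java
// Illustrative fragment only: class name, precision, and scale are assumptions.
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;
import java.math.BigDecimal;

@Entity
@Table(name = "mv_product_rank_monthly")
public class MvProductRankMonthlyExample {

    @Id
    private Long id;

    // DECIMAL(10, 4) keeps saturation scores stable to four decimal places across batch writes and API reads.
    @Column(name = "score", precision = 10, scale = 4, nullable = false)
    private BigDecimal score;
}
```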
10-40: Adding the @Immutable annotation to this read-only entity is recommended.
This entity backs an MV table that the batch loads and the API only reads. Adding Hibernate's @Immutable annotation skips dirty checking, improving read performance, and throws an exception on accidental modification attempts, preventing unintended changes.
♻️ Example of applying @Immutable
 import jakarta.persistence.*;
 import lombok.AccessLevel;
 import lombok.Getter;
 import lombok.NoArgsConstructor;
+import org.hibernate.annotations.Immutable;
 import java.time.LocalDate;

 @Entity
 @Table(name = "mv_product_rank_monthly")
+@Immutable
 @Getter
 @NoArgsConstructor(access = AccessLevel.PROTECTED)
 public class MvProductRankMonthly {
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankMonthly.java` around lines 10 - 40, Annotate the MvProductRankMonthly entity with Hibernate's `@Immutable`: add the annotation above the class declaration for MvProductRankMonthly and import org.hibernate.annotations.Immutable; ensure there are no setter methods or mutable mappings that would contradict immutability so the entity remains read-only for the MV table.apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java (2)
242-268: The weekly/monthly ranking tests are missing boundary and failure cases.
The current tests cover only the happy path; the following cases are missing:
- Empty MV table: confirm that an empty array is returned when period=weekly is requested but there is no MV data for that date
- Invalid period value: verify the expected behavior for period=invalid input
- Pagination boundary: behavior when the page value exceeds the data range
Operational perspective: the MV tables can be empty due to batch failures or delays, and the API behavior in that situation should be tested explicitly.
💚 Example additional test case
`@DisplayName`("GET /api/v1/rankings?period=weekly: MV 데이터가 없으면 빈 목록을 반환한다") `@Test` void getRankings_weeklyPeriod_noData_returnsEmpty() throws Exception { // MV 테이블에 데이터 없이 요청 mockMvc.perform(get("/api/v1/rankings") .param("period", "weekly") .param("date", TODAY)) .andExpect(status().isOk()) .andExpect(jsonPath("$.data.content").isArray()) .andExpect(jsonPath("$.data.content.length()").value(0)); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java` around lines 242 - 268, Add missing negative/boundary E2E tests for rankings: create new tests alongside getRankings_weeklyPeriod()—e.g., getRankings_weeklyPeriod_noData_returnsEmpty(), getRankings_invalidPeriod_returnsBadRequestOrHandled(), and getRankings_pagination_outOfRange_returnsEmptyOrLastPage()—that (1) ensure mv_product_rank_weekly is empty (truncate or delete via jdbcTemplate) and then assert an empty $.data.content for period=weekly, (2) call /api/v1/rankings with period=invalid and assert the expected error status/body per controller behavior, and (3) insert a small known set of MV rows and request a page number beyond the range to assert either an empty content array or correct paging response; use the same MockMvc flow as getRankings_weeklyPeriod() and reference product-ranking assertions consistently.
299-329: The cache tests lack verification of cache-key isolation per period and of the differentiated TTLs.
According to the PR's stated goals, daily should get a 5-minute TTL and weekly/monthly a 1-hour TTL. The current test only verifies that caching is applied at all; the following is missing:
- Cache-key isolation per period: whether daily/weekly/monthly are stored as separate cache entries for the same date
- Differentiated TTLs: whether the configured TTLs are actually applied (hard to verify directly in an integration test, but the configured values can be checked)
💚 Example cache-key isolation test
`@DisplayName`("period별로 캐시가 독립적으로 관리된다") `@Test` void getRankings_cacheIsolationByPeriod() throws Exception { // daily용 Redis 데이터 (setUp에서 이미 적재됨) // weekly용 MV 데이터 jdbcTemplate.update( "INSERT INTO mv_product_rank_weekly (product_id, score, ranking, view_count, like_count, sales_count, aggregated_at) VALUES (?, ?, ?, ?, ?, ?, ?)", productId2, 5.0, 1, 100, 10, 5, LocalDate.parse(TODAY, DateTimeFormatter.ofPattern("yyyyMMdd")) ); // daily 조회 → daily 캐시 저장 mockMvc.perform(get("/api/v1/rankings").param("period", "daily").param("date", TODAY)) .andExpect(jsonPath("$.data.content.length()").value(3)); // weekly 조회 → weekly 캐시 저장 (daily 캐시와 독립) mockMvc.perform(get("/api/v1/rankings").param("period", "weekly").param("date", TODAY)) .andExpect(jsonPath("$.data.content.length()").value(1)); // daily 재조회 → daily 캐시에서 반환 (weekly와 무관) mockMvc.perform(get("/api/v1/rankings").param("period", "daily").param("date", TODAY)) .andExpect(jsonPath("$.data.content.length()").value(3)); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java` around lines 299 - 329, The test only checks that weekly results are cached but doesn't verify cache-key isolation by period or that TTLs match the PR (daily=5min, weekly/monthly=1h); add a new test in RankingE2ETest (e.g., getRankings_cacheIsolationByPeriod) that uses mockMvc.perform(get("/api/v1/rankings").param("period", "<period>").param("date", TODAY)) with different periods (daily, weekly, monthly), seed MV table rows (mv_product_rank_weekly) and the existing Redis/daily seed to assert each period returns its own cached content (use productId1/productId2/TODAY to distinguish), and add a short test (or assertion) that verifies configured TTLs by reading the cache configuration or Redis key expiry (e.g., via redisTemplate.getExpire or the cache config constant used by your Ranking cache config) to ensure daily TTL == 5 minutes and weekly/monthly TTL == 1 hour.apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthlyJpaRepository.java (1)
16-18: Apply @Modifying(clearAutomatically = true) to both repositories.
When a batch job uses a delete-then-insert pattern, it is safer to explicitly refresh the persistence context after the bulk operation. The data is currently read through a JDBC reader, so the risk of immediate cache pollution is low, but this proactively prevents data inconsistencies caused by persistence-context interference on batch re-runs or future code changes.
♻️ Applies to
MvProductRankMonthlyJpaRepository.deleteByAggregatedAt()
MvProductRankWeeklyJpaRepository.deleteByAggregatedAt()
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("DELETE FROM MvProductRank*** m WHERE m.aggregatedAt = :aggregatedAt")
 void deleteByAggregatedAt(@Param("aggregatedAt") LocalDate aggregatedAt);
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthlyJpaRepository.java` around lines 16 - 18, Update the repository delete methods to clear the persistence context after the bulk delete by adding clearAutomatically = true to the `@Modifying` annotation on both MvProductRankMonthlyJpaRepository.deleteByAggregatedAt and MvProductRankWeeklyJpaRepository.deleteByAggregatedAt; specifically change `@Modifying` to `@Modifying`(clearAutomatically = true) so the EntityManager is refreshed after the bulk DELETE operation to avoid stale entities in the persistence context during batch delete-then-insert flows.apps/commerce-batch/src/main/java/com/loopers/domain/ranking/ProductMetricsAggregation.java (1)
8-15: It is safer not to hard-code the score parameters as constants.
Fixing the weights and SATURATION_K in code, as is done now, means every tuning pass based on production data requires a redeploy, and comparing batch results or rolling back quickly becomes difficult. Externalizing them via @ConfigurationProperties or JobParameters and keeping only default values in code is operationally safer. Please also add regression tests that confirm, for different parameter combinations, the score stays within the 0–1 range and increases monotonically as counts grow.
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/domain/ranking/ProductMetricsAggregation.java` around lines 8 - 15, The constants WEIGHT_VIEW, WEIGHT_LIKE, WEIGHT_SALES and SATURATION_K in ProductMetricsAggregation should be externalized so they are configurable at runtime; refactor ProductMetricsAggregation to read these values from a ConfigurationProperties bean (or JobParameters) with the current constants as defaults, and update places that reference the static fields to use the injected/configured values instead; add regression tests that feed varying counts and different weight/saturation combinations asserting the computed score stays within [0,1] and is monotonic with increasing counts to catch regressions and allow safe tuning without code redeploys.apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingCacheStore.java (1)
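A minimal sketch of that externalization, assuming a hypothetical RankingScoreProperties record and a ranking.score.* property prefix (neither is part of this PR); the defaults mirror the weights and k value mentioned in this review:

```java
// Hypothetical sketch: property prefix, record name, and defaults are assumptions for illustration.
import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "ranking.score")
public record RankingScoreProperties(
        double viewWeight,   // e.g. 0.1
        double likeWeight,   // e.g. 0.2
        double salesWeight,  // e.g. 0.7
        double saturationK   // e.g. 100.0
) {
    // Saturation score: each count contributes count / (count + k), weighted and summed.
    public double score(long viewCount, long likeCount, long salesCount) {
        return saturate(viewCount) * viewWeight
                + saturate(likeCount) * likeWeight
                + saturate(salesCount) * salesWeight;
    }

    private double saturate(long count) {
        double c = Math.max(count, 0);
        return c / (c + saturationK);
    }
}
```

A bean like this would still need to be registered (for example via @ConfigurationPropertiesScan or @EnableConfigurationProperties) and injected wherever the score is computed.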
47-52: It is safer to reject unsupported periods immediately instead of silently tolerating them.
Currently every value other than "daily" is cached under the weekly/monthly TTL and a separate key. In production, a typo in period or a missing new value would quietly accumulate bad cache entries whose cause is hard to trace. It is safer to pass an enum instead of a string, or at minimum validate explicitly against the allowed values and fail, and to add tests asserting that no Redis read/write happens for an unsupported period.
Example fix
 public Optional<List<RankingDto.RankingItemInfo>> getRankings(String period, String date, int page, int size) {
+    validatePeriod(period);
     try {
         String json = redisTemplate.opsForValue().get(cacheKey(period, date, page, size));
         if (json == null) return Optional.empty();
         return Optional.of(objectMapper.readValue(json, new TypeReference<>() {}));
     } catch (Exception e) {
@@
 public void putRankings(String period, String date, int page, int size, List<RankingDto.RankingItemInfo> rankings) {
+    validatePeriod(period);
     try {
         String json = objectMapper.writeValueAsString(rankings);
         redisTemplate.opsForValue().set(cacheKey(period, date, page, size), json, ttlFor(period));
     } catch (Exception e) {
         log.warn("랭킹 캐시 저장 실패 - period:{} date:{} page:{}", period, date, page, e);
     }
 }

+private void validatePeriod(String period) {
+    if (!"daily".equals(period) && !"weekly".equals(period) && !"monthly".equals(period)) {
+        throw new IllegalArgumentException("Unsupported ranking period: " + period);
+    }
+}
+
 private Duration ttlFor(String period) {
     return "daily".equals(period) ? DAILY_CACHE_TTL : WEEKLY_MONTHLY_CACHE_TTL;
 }
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingCacheStore.java` around lines 47 - 52, The ttlFor and cacheKey helpers currently accept any string and treat non-"daily" values as weekly/monthly; change them to validate the period explicitly (preferably switch to an enum like Period { DAILY, WEEKLY, MONTHLY } or validate against an explicit set {"daily","weekly","monthly"}) and throw a clear IllegalArgumentException (or similar) for unsupported values so bad input is rejected immediately; update ttlFor to only return DAILY_CACHE_TTL for "daily" and WEEKLY_MONTHLY_CACHE_TTL for the allowed weekly/monthly enum/strings, and update cacheKey to reject unsupported periods before constructing keys (referencing ttlFor, cacheKey, and RANKING_CACHE_KEY_PREFIX), and add tests asserting that unsupported periods throw and that no Redis read/write occurs for invalid periods.apps/commerce-batch/src/main/java/com/loopers/domain/ranking/MvProductRank.java (1)
40-49: It is safer to add invariant validation at construction time.
An object is currently created even when productId or aggregatedAt is null or rank <= 0, and the problem only surfaces at batch flush time, where it can escalate into a rollback of the whole chunk. It is safer to validate immediately in the constructor and add unit tests for the null and non-positive rank cases.
Example fix
+import java.util.Objects;
 ...
 protected MvProductRank(Long productId, double score, int rank, long viewCount, long likeCount, long salesCount, LocalDate aggregatedAt) {
-    this.productId = productId;
+    this.productId = Objects.requireNonNull(productId, "productId");
+    if (rank <= 0) {
+        throw new IllegalArgumentException("rank must be positive");
+    }
     this.score = score;
     this.rank = rank;
     this.viewCount = viewCount;
     this.likeCount = likeCount;
     this.salesCount = salesCount;
-    this.aggregatedAt = aggregatedAt;
+    this.aggregatedAt = Objects.requireNonNull(aggregatedAt, "aggregatedAt");
 }
As per coding guidelines
**/domain/**/*.java: keep entity/value object/DTO boundaries clear and check immutability and encapsulation.
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/main/java/com/loopers/domain/ranking/MvProductRank.java` around lines 40 - 49, The MvProductRank constructor currently allows invalid state (null productId or aggregatedAt, and rank <= 0); inside the protected MvProductRank(Long productId, double score, int rank, long viewCount, long likeCount, long salesCount, LocalDate aggregatedAt) constructor add immediate validation: require productId and aggregatedAt non-null and rank > 0, throwing clear exceptions (e.g., NullPointerException or IllegalArgumentException) with descriptive messages; update callers/tests as needed and add unit tests that instantiate MvProductRank with null productId, null aggregatedAt, and rank <= 0 to assert the constructor throws the expected exceptions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fa5612c3-8d33-42ef-a198-ff94a6062793
⛔ Files ignored due to path filters (3)
docs/design/02-sequence-diagrams.md is excluded by !**/*.md and included by **
docs/design/03-class-diagram.md is excluded by !**/*.md and included by **
docs/design/04-erd.md is excluded by !**/*.md and included by **
📒 Files selected for processing (29)
apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingFacade.java
apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankMonthly.java
apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvProductRankWeekly.java
apps/commerce-api/src/main/java/com/loopers/domain/ranking/mv/MvRankingRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/RankingCacheStore.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/mv/MvProductRankMonthlyJpaRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/mv/MvProductRankWeeklyJpaRepository.java
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/mv/MvRankingRepositoryAdapter.java
apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java
apps/commerce-api/src/test/java/com/loopers/application/ranking/RankingFacadeTest.java
apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingE2ETest.java
apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingAggregationJobConfig.java
apps/commerce-batch/src/main/java/com/loopers/domain/metrics/ProductMetrics.java
apps/commerce-batch/src/main/java/com/loopers/domain/product/Product.java
apps/commerce-batch/src/main/java/com/loopers/domain/ranking/MvProductRank.java
apps/commerce-batch/src/main/java/com/loopers/domain/ranking/MvProductRankMonthly.java
apps/commerce-batch/src/main/java/com/loopers/domain/ranking/MvProductRankWeekly.java
apps/commerce-batch/src/main/java/com/loopers/domain/ranking/ProductMetricsAggregation.java
apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthlyJpaRepository.java
apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyJpaRepository.java
apps/commerce-batch/src/test/java/com/loopers/domain/ranking/ProductMetricsAggregationTest.java
apps/commerce-batch/src/test/java/com/loopers/job/ranking/RankingAggregationJobE2ETest.java
apps/commerce-streamer/src/main/java/com/loopers/application/metrics/MetricsEventService.java
apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetrics.java
apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetricsRepository.java
apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsJpaRepository.java
apps/commerce-streamer/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsRepositoryAdapter.java
apps/commerce-streamer/src/test/java/com/loopers/application/metrics/MetricsEventServiceTest.java
apps/commerce-streamer/src/test/java/com/loopers/domain/metrics/ProductMetricsTest.java
 public List<RankingDto.RankingItemInfo> getRankings(String period, String date, int page, int size) {
     return rankingCacheStore.getRankings(period, date, page, size)
         .orElseGet(() -> {
-            List<RankingDto.RankingItemInfo> result = loadRankings(date, page, size);
+            List<RankingDto.RankingItemInfo> result = switch (period) {
+                case "weekly" -> loadMvRankings(date, page, size, true);
+                case "monthly" -> loadMvRankings(date, page, size, false);
+                default -> loadRankings(date, page, size);
+            };
             if (!result.isEmpty()) {
-                rankingCacheStore.putRankings(date, page, size, result);
+                rankingCacheStore.putRankings(period, date, page, size, result);
Do not silently fall back to daily for a disallowed period.
Currently every value other than "weekly" and "monthly" falls into the default branch and is served as the daily ranking, and the bad period is also written into the cache key as-is. In production this silently treats an invalid call as a success, hiding the contract violation and leaving cache pollution behind. Validate before the cache lookup so that only daily|weekly|monthly are accepted, and reject anything else explicitly with a CoreException. Please also add a test that a period=foo request fails with the standard error response and that the cache/store is never called. Based on learnings: In the loop-pack-be-l2-vol3-java project, enforce unified error handling by routing errors through CoreException to ApiControllerAdvice to ensure a consistent response format.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingFacade.java`
around lines 41 - 50, Reject invalid period values before any cache/store access
in RankingFacade.getRankings: validate that the incoming period string is one of
"daily","weekly","monthly" at the top of getRankings(), and if not throw a
CoreException so the error flows to ApiControllerAdvice; only after validation
perform rankingCacheStore.getRankings(...) and routing to
loadMvRankings/loadRankings as currently implemented; update any unit tests to
assert that a request with period="foo" results in a CoreException-driven API
error and that rankingCacheStore.put/get and backing loaders are not invoked for
that call.
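A rough sketch of the fail-fast validation described in this comment; the CoreException(ErrorType, message) constructor and ErrorType.BAD_REQUEST are assumed shapes (the project's real error API is not shown in this diff), and the routing below simply restates the switch already in the PR:

```java
// Sketch only: CoreException/ErrorType usage is an assumption; reuse the project's real error API.
private static final Set<String> ALLOWED_PERIODS = Set.of("daily", "weekly", "monthly");

public List<RankingDto.RankingItemInfo> getRankings(String period, String date, int page, int size) {
    if (!ALLOWED_PERIODS.contains(period)) {
        // Reject before any cache or repository access so ApiControllerAdvice returns the standard error body.
        throw new CoreException(ErrorType.BAD_REQUEST, "Unsupported ranking period: " + period);
    }
    return rankingCacheStore.getRankings(period, date, page, size)
            .orElseGet(() -> {
                List<RankingDto.RankingItemInfo> result = switch (period) {
                    case "weekly" -> loadMvRankings(date, page, size, true);
                    case "monthly" -> loadMvRankings(date, page, size, false);
                    default -> loadRankings(date, page, size);
                };
                if (!result.isEmpty()) {
                    rankingCacheStore.putRankings(period, date, page, size, result);
                }
                return result;
            });
}
```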
private List<RankingDto.RankingItemInfo> loadMvRankings(String date, int page, int size, boolean weekly) {
    LocalDate aggregatedAt = LocalDate.parse(date, DATE_FORMAT);
Letting the weekly/monthly date-parsing exception escape as-is breaks the response format.
If the DateTimeParseException thrown by LocalDate.parse leaks out here, the common error response format is not guaranteed, unlike the daily path. Lift the date validation/parsing into a shared resolver or convert it to a CoreException at this point. Please add tests that an invalid date format for period=weekly|monthly comes back with the standard error code and message. Based on learnings: In the loop-pack-be-l2-vol3-java project, enforce unified error handling by routing errors through CoreException to ApiControllerAdvice to ensure a consistent response format.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingFacade.java`
around lines 97 - 98, The method loadMvRankings currently calls
LocalDate.parse(date, DATE_FORMAT) which can throw DateTimeParseException and
bypass the unified error response; catch DateTimeParseException inside
loadMvRankings (or the calling path) and rethrow a CoreException with a clear
error code/message so ApiControllerAdvice produces the standard response, or
move the parsing into a shared resolver that validates/returns LocalDate and
throws CoreException on failure; update tests to cover invalid date inputs for
period=weekly and period=monthly to assert the standard error code/message is
returned.
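A small sketch of converting the parse failure into a CoreException at this point, assuming the existing DATE_FORMAT field and, as above, a CoreException(ErrorType, message) constructor:

```java
// Sketch only: ErrorType.BAD_REQUEST is an assumed error code; align it with what the daily path uses.
private LocalDate parseAggregatedAt(String date) {
    try {
        return LocalDate.parse(date, DATE_FORMAT);
    } catch (DateTimeParseException e) {
        // Surfaces through ApiControllerAdvice as the standard error response instead of an unhandled 500.
        throw new CoreException(ErrorType.BAD_REQUEST, "Invalid ranking date: " + date);
    }
}
```

This assumes java.time.format.DateTimeParseException is imported in the facade; loadMvRankings would then call parseAggregatedAt(date) instead of parsing inline.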
@GetMapping("/api/v1/rankings")
public ApiResponse<RankingV1Dto.RankingPageResponse> getRankings(
        @RequestParam(defaultValue = "daily") String period,
The missing validation of the period parameter can cause operational issues.
The period parameter is received as a string, and the switch in RankingFacade is case-sensitive ("weekly", "monthly"). If a user sends a differently cased value such as "Weekly" or "DAILY", it silently falls back to the daily ranking, which makes misbehavior hard to debug in production.
Suggested fixes:
- Normalize with period.toLowerCase(), or
- Validate in the domain layer and throw an explicit exception for invalid period values
Additional tests: E2E tests for case variants such as "Weekly" and "MONTHLY" are recommended
🔧 Example of applying normalization
`@GetMapping`("/api/v1/rankings")
public ApiResponse<RankingV1Dto.RankingPageResponse> getRankings(
`@RequestParam`(defaultValue = "daily") String period,
`@RequestParam`(required = false) String date,
`@RequestParam`(defaultValue = "20") int size,
`@RequestParam`(defaultValue = "0") int page
) {
String resolvedDate = (date != null) ? date : todayDate();
- List<RankingDto.RankingItemInfo> rankings = rankingFacade.getRankings(period, resolvedDate, page, size);
+ List<RankingDto.RankingItemInfo> rankings = rankingFacade.getRankings(period.toLowerCase(), resolvedDate, page, size);
    return ApiResponse.success(RankingV1Dto.RankingPageResponse.of(rankings, page, size));
}
🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java`
at line 26, The controller currently accepts a String period
(RankingV1Controller parameter period) but does not normalize or validate it,
causing the RankingFacade switch to be case-sensitive and silently fall back to
daily; update RankingV1Controller to normalize period (e.g., period =
period.toLowerCase(Locale.ROOT)) before passing to RankingFacade or validate
against allowed values and throw a clear exception for invalid input, and add
E2E tests for case variants like "Weekly" and "MONTHLY" to ensure correct
behavior.
// saturation 상수: ProductMetricsAggregation.SATURATION_K(=100)와 일치해야 한다.
// TODO: 실제 운영 데이터로 튜닝 필요. 현재는 임의값.
private static final double SATURATION_K = 100.0;

private static final String AGGREGATION_SQL = """
    SELECT pm.product_id,
           SUM(pm.view_count) AS view_count,
           SUM(pm.like_count) AS like_count,
           SUM(pm.sales_count) AS sales_count
    FROM product_metrics pm
    INNER JOIN products p ON pm.product_id = p.id AND p.deleted_at IS NULL
    WHERE pm.metric_date BETWEEN ? AND ?
    GROUP BY pm.product_id
    ORDER BY (
        GREATEST(SUM(pm.view_count), 0) / (GREATEST(SUM(pm.view_count), 0) + %1$s) * 0.1 +
        GREATEST(SUM(pm.like_count), 0) / (GREATEST(SUM(pm.like_count), 0) + %1$s) * 0.2 +
        GREATEST(SUM(pm.sales_count), 0) / (GREATEST(SUM(pm.sales_count), 0) + %1$s) * 0.7
    ) DESC
    LIMIT %2$s
    """.formatted(SATURATION_K, TOP_N);
🛠️ Refactor suggestion | 🟠 Major
The score policy constants are defined twice, once in SQL and once in Java.
Right now the SQL handles TOP N selection and ordering while ProductMetricsAggregation.calculateScore() computes the stored score value, each with its own constants and formula. When the saturation weights or k value are tuned in production and only one side is updated, the selected order and the stored scores immediately diverge. It is safer to extract the score policy into a single component so the SQL generation and the Java calculation share the same constants, and to add a contract test that, for a few representative datasets, the SQL ordering matches the in-memory score ordering.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingAggregationJobConfig.java`
around lines 40 - 59, The SATURATION_K constant is duplicated between
RankingAggregationJobConfig (SATURATION_K used in AGGREGATION_SQL) and
ProductMetricsAggregation.calculateScore(), causing drift when tuning; extract
the saturation/k and any related score weights into a single shared
configuration or constants holder used by both RankingAggregationJobConfig
(replace SATURATION_K and formatted SQL usage in AGGREGATION_SQL and TOP_N) and
ProductMetricsAggregation.calculateScore(), and add a contract test that builds
a few representative ProductMetricsAggregation datasets, runs the SQL query (via
the same config) and the in-memory calculateScore() and asserts the orderings
and top-N results match to catch future divergence.
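A hypothetical sketch of such a single holder; the class name and methods below are illustrations (not code from this PR), intended only to show one place supplying both the SQL ORDER BY expression and the in-memory score so the two cannot drift:

```java
// Hypothetical sketch: names and structure are assumptions, not code in this PR.
public final class RankingScorePolicy {
    public static final double WEIGHT_VIEW = 0.1;
    public static final double WEIGHT_LIKE = 0.2;
    public static final double WEIGHT_SALES = 0.7;
    public static final double SATURATION_K = 100.0;

    private RankingScorePolicy() {}

    // SQL fragment for ORDER BY, built from the same constants used in calculateScore().
    public static String orderByExpression() {
        return ("GREATEST(SUM(pm.view_count), 0) / (GREATEST(SUM(pm.view_count), 0) + %1$s) * %2$s + "
                + "GREATEST(SUM(pm.like_count), 0) / (GREATEST(SUM(pm.like_count), 0) + %1$s) * %3$s + "
                + "GREATEST(SUM(pm.sales_count), 0) / (GREATEST(SUM(pm.sales_count), 0) + %1$s) * %4$s")
                .formatted(SATURATION_K, WEIGHT_VIEW, WEIGHT_LIKE, WEIGHT_SALES);
    }

    // In-memory equivalent of the SQL expression above.
    public static double calculateScore(long viewCount, long likeCount, long salesCount) {
        return saturate(viewCount) * WEIGHT_VIEW
                + saturate(likeCount) * WEIGHT_LIKE
                + saturate(salesCount) * WEIGHT_SALES;
    }

    private static double saturate(long count) {
        double c = Math.max(count, 0);
        return c / (c + SATURATION_K);
    }
}
```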
ORDER BY (
    GREATEST(SUM(pm.view_count), 0) / (GREATEST(SUM(pm.view_count), 0) + %1$s) * 0.1 +
    GREATEST(SUM(pm.like_count), 0) / (GREATEST(SUM(pm.like_count), 0) + %1$s) * 0.2 +
    GREATEST(SUM(pm.sales_count), 0) / (GREATEST(SUM(pm.sales_count), 0) + %1$s) * 0.7
) DESC
LIMIT %2$s
Without a tie-breaking sort key, the ranking is non-deterministic.
Sorting only by the computed score means the DB can return tied products in arbitrary order. In production the ranking then shifts on every re-run, causing cache churn and rank swaps that are hard to explain to users. It is safer to add an explicit tie-breaking criterion and a test that inserts two or more products with the same score and asserts the ranking stays fixed across re-runs.
Example fix
ORDER BY (
GREATEST(SUM(pm.view_count), 0) / (GREATEST(SUM(pm.view_count), 0) + %1$s) * 0.1 +
GREATEST(SUM(pm.like_count), 0) / (GREATEST(SUM(pm.like_count), 0) + %1$s) * 0.2 +
GREATEST(SUM(pm.sales_count), 0) / (GREATEST(SUM(pm.sales_count), 0) + %1$s) * 0.7
- ) DESC
+ ) DESC,
+ pm.product_id ASC
 LIMIT %2$s
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
 ORDER BY (
     GREATEST(SUM(pm.view_count), 0) / (GREATEST(SUM(pm.view_count), 0) + %1$s) * 0.1 +
     GREATEST(SUM(pm.like_count), 0) / (GREATEST(SUM(pm.like_count), 0) + %1$s) * 0.2 +
     GREATEST(SUM(pm.sales_count), 0) / (GREATEST(SUM(pm.sales_count), 0) + %1$s) * 0.7
-) DESC
+) DESC,
+pm.product_id ASC
 LIMIT %2$s
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingAggregationJobConfig.java`
around lines 53 - 58, RankingAggregationJobConfig의 SQL ORDER BY가 계산된 score만 사용해
동점일 때 비결정적이므로, ORDER BY 절에 명시적 동점 해소 기준(예: pm.product_id ASC 또는 pm.updated_at
DESC 같은 안정적인 유니크/타임스탬프 컬럼)을 추가하여 정렬을 결정적으로 만드세요; 또한
RankingAggregationJobConfig에서 사용하는 쿼리 템플릿(ORDER BY (...) DESC LIMIT ...)을 수정하고,
동일 score를 가진 2개 이상의 상품을 넣어 재실행해도 순위가 고정되는 통합/단위 테스트를 추가해 검증하도록 하세요.
@Bean
@StepScope
public ItemWriter<MvProductRankWeekly> weeklyWriter(
        @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {

    return chunk -> {
        weeklyRepository.deleteByAggregatedAt(baseDate);
        weeklyRepository.saveAll(chunk.getItems());
    };
When the aggregation result is empty, the existing MV cleanup never runs.
Because deleteByAggregatedAt(baseDate) lives inside the writer, a re-run for the same baseDate that yields zero query results never invokes the writer at all. The weekly/monthly rows left over from the previous run then survive, and the API keeps serving stale rankings. It is safer to move the cleanup out of the chunk writer into a tasklet or listener that runs exactly once at step start, and to add an E2E test that re-running the same baseDate with empty input leaves both MV tables with zero rows.
Also applies to: 187-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/RankingAggregationJobConfig.java`
around lines 127 - 135, The current logic deletes existing MV rows inside the
chunk ItemWriter (weeklyWriter using
weeklyRepository.deleteByAggregatedAt(baseDate)), so if a re-run produces zero
items the writer never runs and stale rows remain; move the cleanup out of the
chunk writer into a step-start tasklet or a BeforeStep listener that calls
weeklyRepository.deleteByAggregatedAt(baseDate) (and the equivalent
monthlyRepository.deleteByAggregatedAt in the monthlyWriter case referenced
around the 187-195 region) so deletion always runs once at step start, and add
an E2E test that re-runs the job with the same baseDate but no input and asserts
both MV tables have 0 rows.
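A minimal sketch of moving the cleanup to a step-start listener so it runs exactly once even when the reader yields nothing; the bean and method names here are illustrative, the listener still has to be registered on the step (e.g., via .listener(...)), and the delete call would be removed from the writer:

```java
// Illustrative sketch: names are assumptions; wire this into the existing weekly step builder.
@Bean
@StepScope
public StepExecutionListener weeklyCleanupListener(
        @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {
    return new StepExecutionListener() {
        @Override
        public void beforeStep(StepExecution stepExecution) {
            // Runs once at step start, so stale rows are removed even when zero items are written.
            weeklyRepository.deleteByAggregatedAt(baseDate);
        }
    };
}
```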
@Modifying
@Query("DELETE FROM MvProductRankWeekly m WHERE m.aggregatedAt = :aggregatedAt")
void deleteByAggregatedAt(@Param("aggregatedAt") LocalDate aggregatedAt);
Calling deleteByAggregatedAt on every chunk leaves only the last chunk.
This method wipes all data for the base date, so when it is called inside each chunk, as in the RankingAggregationJobConfig writer, the rankings loaded by earlier chunks are deleted again when the next chunk starts. In production the batch can finish successfully yet silently lose part of the result. Move the delete so it runs only once at step start, or switch to loading into a staging table and swapping afterwards. Please also add E2E tests that confirm the full ranking survives when the input spans at least two chunks, and that re-execution is idempotent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyJpaRepository.java`
around lines 16 - 18, If deleteByAggregatedAt is called on every chunk, it deletes the results loaded by earlier chunks at the start of the next chunk, so stop calling deleteByAggregatedAt(@Param("aggregatedAt") LocalDate aggregatedAt) from inside the writer; concretely, remove the delete call from the writer in RankingAggregationJobConfig and instead run it once at step start (from a step-scoped tasklet or a StepExecutionListener), or switch to loading each chunk into a staging table and then swapping it into the target (the swap approach requires adding staging→target replacement logic). Also add E2E tests that verify, for inputs split into two or more chunks, that the full ranking remains and that re-execution is idempotent, to confirm the moved delete/swap logic is correct.
| @DisplayName("재실행 시 기존 데이터를 교체한다 (멱등성)") | ||
| @Test | ||
| void idempotentReExecution() throws Exception { | ||
| jobLauncherTestUtils.setJob(job); | ||
|
|
||
| insertMetrics(1L, BASE_DATE, 100, 50, 10); | ||
|
|
||
| var jobParameters1 = new JobParametersBuilder() | ||
| .addLocalDate("baseDate", BASE_DATE) | ||
| .addLong("run.id", RUN_ID.getAndIncrement()) | ||
| .toJobParameters(); | ||
| jobLauncherTestUtils.launchJob(jobParameters1); | ||
|
|
||
| assertThat(weeklyRepository.findByAggregatedAtOrderByRankAsc(BASE_DATE)).hasSize(1); | ||
|
|
||
| // 데이터 변경 후 재실행 | ||
| jdbcTemplate.execute("DELETE FROM product_metrics"); | ||
| insertMetrics(1L, BASE_DATE, 200, 100, 20); | ||
| insertMetrics(2L, BASE_DATE, 50, 10, 5); | ||
|
|
||
| var jobParameters2 = new JobParametersBuilder() | ||
| .addLocalDate("baseDate", BASE_DATE) | ||
| .addLong("run.id", RUN_ID.getAndIncrement()) | ||
| .toJobParameters(); | ||
| jobLauncherTestUtils.launchJob(jobParameters2); | ||
|
|
||
| var weeklyRanks = weeklyRepository.findByAggregatedAtOrderByRankAsc(BASE_DATE); | ||
| assertThat(weeklyRanks).hasSize(2); | ||
| } |
The idempotency test does not verify that existing data is actually replaced.
It currently only checks that the count is 2 after the second run, so it cannot catch whether the existing rows were really deleted and replaced with the new aggregated values. In production it could pass even with stale rankings left behind. It is safer to assert, after the second run, that product 1's score/viewCount/likeCount/salesCount now reflect the new input, and to add a case where re-running the same baseDate with empty input leaves both weekly and monthly with zero rows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-batch/src/test/java/com/loopers/job/ranking/RankingAggregationJobE2ETest.java`
around lines 136 - 164, The idempotency test idempotentReExecution currently
only checks row count; update it to verify that existing rows are actually
replaced by asserting that product 1's weekly record fields (score, viewCount,
likeCount, salesCount) match the new values inserted via insertMetrics after the
second launch (use weeklyRepository.findByAggregatedAtOrderByRankAsc(BASE_DATE)
to locate the record for product 1), and add an additional step that re-runs the
job with no product_metrics rows (delete or clear via jdbcTemplate.execute) for
the same baseDate and asserts that both weeklyRepository and monthlyRepository
return zero records for that baseDate to prove stale data is removed on empty
re-execution.
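A rough sketch of the stronger assertion, assuming the weekly entity exposes getters matching its fields and that insertMetrics writes the raw counts shown in the test above (both assumptions):

```java
// Sketch: getter names are assumed from the entity's fields; adjust to the real accessors.
var weeklyRanks = weeklyRepository.findByAggregatedAtOrderByRankAsc(BASE_DATE);
var product1 = weeklyRanks.stream()
        .filter(r -> r.getProductId().equals(1L))
        .findFirst()
        .orElseThrow();
// The second run inserted (200, 100, 20) for product 1, so the replaced row must carry those counts.
assertThat(product1.getViewCount()).isEqualTo(200);
assertThat(product1.getLikeCount()).isEqualTo(100);
assertThat(product1.getSalesCount()).isEqualTo(20);
```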
 LocalDate today = LocalDate.now(ZoneId.of("Asia/Seoul"));

 switch (eventType) {
-    case "PRODUCT_VIEWED" -> productMetricsRepository.upsertViewCount(productId, 1);
-    case "PRODUCT_LIKED" -> productMetricsRepository.upsertLikeCount(productId, 1);
-    case "PRODUCT_UNLIKED" -> productMetricsRepository.upsertLikeCount(productId, -1);
-    case "ORDER_PLACED" -> productMetricsRepository.upsertSalesCount(productId, quantity);
-    case "ORDER_CANCELLED" -> productMetricsRepository.upsertSalesCount(productId, -quantity);
-    case "PRODUCT_PRICE_CHANGED" -> productMetricsRepository.upsertPrice(productId, price, occurredAt);
+    case "PRODUCT_VIEWED" -> productMetricsRepository.upsertViewCount(productId, today, 1);
+    case "PRODUCT_LIKED" -> productMetricsRepository.upsertLikeCount(productId, today, 1);
+    case "PRODUCT_UNLIKED" -> productMetricsRepository.upsertLikeCount(productId, today, -1);
+    case "ORDER_PLACED" -> productMetricsRepository.upsertSalesCount(productId, today, quantity);
+    case "ORDER_CANCELLED" -> productMetricsRepository.upsertSalesCount(productId, today, -quantity);
+    case "PRODUCT_PRICE_CHANGED" -> productMetricsRepository.upsertPrice(productId, today, price, occurredAt);
Using the processing date instead of the event's occurrence date skews the daily aggregation
Using LocalDate.now(...) at line 36 means delayed or reprocessed events are recorded under the processing date rather than the actual occurrence date, gradually skewing the weekly and monthly rankings in production. metricDate should be derived from occurredAt (in the Seoul time zone). It is also safer to add a null guard for occurredAt to eliminate the NPE risk.
Suggested fix
- LocalDate today = LocalDate.now(ZoneId.of("Asia/Seoul"));
+ Instant effectiveOccurredAt = occurredAt != null ? occurredAt : Instant.now();
+ LocalDate metricDate = effectiveOccurredAt.atZone(ZoneId.of("Asia/Seoul")).toLocalDate();
switch (eventType) {
- case "PRODUCT_VIEWED" -> productMetricsRepository.upsertViewCount(productId, today, 1);
- case "PRODUCT_LIKED" -> productMetricsRepository.upsertLikeCount(productId, today, 1);
- case "PRODUCT_UNLIKED" -> productMetricsRepository.upsertLikeCount(productId, today, -1);
- case "ORDER_PLACED" -> productMetricsRepository.upsertSalesCount(productId, today, quantity);
- case "ORDER_CANCELLED" -> productMetricsRepository.upsertSalesCount(productId, today, -quantity);
- case "PRODUCT_PRICE_CHANGED" -> productMetricsRepository.upsertPrice(productId, today, price, occurredAt);
+ case "PRODUCT_VIEWED" -> productMetricsRepository.upsertViewCount(productId, metricDate, 1);
+ case "PRODUCT_LIKED" -> productMetricsRepository.upsertLikeCount(productId, metricDate, 1);
+ case "PRODUCT_UNLIKED" -> productMetricsRepository.upsertLikeCount(productId, metricDate, -1);
+ case "ORDER_PLACED" -> productMetricsRepository.upsertSalesCount(productId, metricDate, quantity);
+ case "ORDER_CANCELLED" -> productMetricsRepository.upsertSalesCount(productId, metricDate, -quantity);
+ case "PRODUCT_PRICE_CHANGED" -> productMetricsRepository.upsertPrice(productId, metricDate, price, effectiveOccurredAt);추가 테스트로는 (1) 처리 시각과 occurredAt 날짜가 다른 지연 이벤트, (2) UTC 자정 경계(예: 2026-03-26T15:30:00Z)에서 KST 날짜 변환 검증이 필요하다.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/application/metrics/MetricsEventService.java`
around lines 36 - 44, Compute the metric date from the event's occurredAt
timestamp (converted to Asia/Seoul) instead of using LocalDate.now; in
MetricsEventService replace the LocalDate.now(...) usage with a metricDate
derived from occurredAt (e.g., if occurredAt != null then
Instant->atZone(ZoneId.of("Asia/Seoul")).toLocalDate(), otherwise fall back to
Instant.now().atZone(...).toLocalDate() or skip processing) and use that
metricDate for all calls to
productMetricsRepository.upsertViewCount/upsertLikeCount/upsertSalesCount/upsertPrice;
add a null-check for occurredAt to prevent NPE and ensure consistent date-based
aggregation (also add tests for delayed events and UTC-to-KST date boundary
conversions).
public static ProductMetrics create(Long productId, LocalDate metricDate) {
    ProductMetrics metrics = new ProductMetrics();
    metrics.productId = productId;
    metrics.metricDate = metricDate;
도메인 팩토리에서 null 입력을 즉시 차단해야 한다
Line 45-48은 productId/metricDate null을 허용한다. 이 경우 런타임 후반(DB flush 시점)에 제약조건 예외가 발생해 소비자 재시도 루프에서 장애 분석이 어려워진다. 도메인 생성 시점에 불변식을 강제해 빠르게 실패시키는 것이 안전하다.
수정안
+import java.util.Objects;
...
public static ProductMetrics create(Long productId, LocalDate metricDate) {
ProductMetrics metrics = new ProductMetrics();
- metrics.productId = productId;
- metrics.metricDate = metricDate;
+ metrics.productId = Objects.requireNonNull(productId, "productId must not be null");
+ metrics.metricDate = Objects.requireNonNull(metricDate, "metricDate must not be null");
     metrics.viewCount = 0;

추가 테스트로 create(null, date)와 create(productId, null)에서 예외가 발생하는지 검증이 필요하다.
As per coding guidelines, **/domain/**/*.java: 엔티티/값 객체/DTO 경계를 명확히 하고, 불변성과 캡슐화를 점검한다.
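제안된 Objects.requireNonNull 검증이 적용되었다고 가정하면, 추가 테스트는 아래 형태로 작성할 수 있다(테스트 클래스/메서드명은 예시).

import static org.junit.jupiter.api.Assertions.assertThrows;

import java.time.LocalDate;
import org.junit.jupiter.api.Test;

class ProductMetricsCreateTest {

    @Test
    void productId가_null이면_예외가_발생한다() {
        // Objects.requireNonNull 기반 검증을 가정하므로 NullPointerException을 기대한다
        assertThrows(NullPointerException.class,
                () -> ProductMetrics.create(null, LocalDate.of(2026, 3, 26)));
    }

    @Test
    void metricDate가_null이면_예외가_발생한다() {
        assertThrows(NullPointerException.class,
                () -> ProductMetrics.create(1L, null));
    }
}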
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@apps/commerce-streamer/src/main/java/com/loopers/domain/metrics/ProductMetrics.java`
around lines 45 - 48, ProductMetrics.create currently allows null productId and
metricDate which defers failures to DB flush; update ProductMetrics.create to
validate inputs (e.g., use Objects.requireNonNull or explicit checks) and throw
a clear runtime exception (IllegalArgumentException/NullPointerException with
message) when productId or metricDate is null to fail fast; reference the
ProductMetrics.create factory and the productId/metricDate fields when adding
the checks, and add unit tests asserting create(null, someDate) and
create(someId, null) both throw the expected exception.
📌 Summary
`product_metrics`(일별 스냅샷)를 읽어 Materialized View 테이블에 TOP 100을 적재하고, 기존 랭킹 API에 `period` 파라미터를 추가해 일간/주간/월간을 통합 제공한다. `period=daily|weekly|monthly` 분기로 Redis/DB 데이터 소스를 선택 조회하고, 주간/월간에도 캐싱을 적용해 일관된 성능을 유지한다. 추가로 멘토 피드백을 반영해 선형 합산 → saturation 비선형 변환으로 롱테일 변별력을 개선했다.

🧭 Context & Decision
문제 정의
- 기존 일간 랭킹: Redis ZSET(`ranking:all:yyyyMMdd`)에 Kafka 이벤트 기반 ZINCRBY로 점수 누적. TTL 2일.
- `product_metrics` 테이블은 상품별 누적 카운트만 보유(일자 구분 없음). 일별 집계 기반 주간/월간 계산이 불가능.
- 기존 랭킹 API는 일간 조회만 제공(`GET /api/v1/rankings?date=...`).
- `baseDate` 파라미터 기반 Spring Batch Job 실행 가능
- `mv_product_rank_weekly`, `mv_product_rank_monthly` 테이블 멱등 적재 (DELETE + INSERT)
- `period`로 일간/주간/월간 분기 조회
선택지와 결정

(1) 데이터 소스 선택
- `product_metrics`에 일자 컬럼 추가(일별 스냅샷) ← 선택
(2) 주간/월간 기간 기준

- 달력 기준(월~일, 1일~말일) 대신 `baseDate` 기준 롤링 윈도우(주간 7일, 월간 30일) ← 선택
(3) Chunk vs Tasklet

- `ORDER BY score DESC LIMIT 100`을 적용해 DB가 집계/정렬/제한을 처리
- `JdbcCursorItemReader` → `ItemProcessor`(rank 부여) → `ItemWriter`(DELETE + saveAll)
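(3)의 Reader 구성을 코드로 풀면 대략 아래 형태가 된다. 컬럼명, ProductMetricsAggregation 생성자, queryArguments 바인딩 방식은 실제 구현과 다를 수 있는 가정이며, Spring Batch의 JdbcCursorItemReaderBuilder를 사용한 최소 스케치다.

import java.time.LocalDate;
import javax.sql.DataSource;
import org.springframework.batch.item.database.JdbcCursorItemReader;
import org.springframework.batch.item.database.builder.JdbcCursorItemReaderBuilder;

class RankingReaderSketch {

    // DB가 집계/정렬/제한(LIMIT 100)을 모두 수행하고, 애플리케이션은 결과만 커서로 읽는다.
    JdbcCursorItemReader<ProductMetricsAggregation> rankingReader(DataSource dataSource,
                                                                  LocalDate startDate,
                                                                  LocalDate baseDate) {
        String saturation = "0.1 * v/(v+100) + 0.2 * l/(l+100) + 0.7 * s/(s+100)";
        String sql = """
                SELECT product_id, v AS view_count, l AS like_count, s AS sales_count,
                       %s AS score
                  FROM (SELECT pm.product_id,
                               SUM(GREATEST(pm.view_count, 0))  AS v,
                               SUM(GREATEST(pm.like_count, 0))  AS l,
                               SUM(GREATEST(pm.sales_count, 0)) AS s
                          FROM product_metrics pm
                          JOIN products p ON p.id = pm.product_id AND p.deleted_at IS NULL
                         WHERE pm.metric_date BETWEEN ? AND ?
                         GROUP BY pm.product_id) agg
                 ORDER BY score DESC
                 LIMIT 100
                """.formatted(saturation);
        return new JdbcCursorItemReaderBuilder<ProductMetricsAggregation>()
                .name("rankingReader")
                .dataSource(dataSource)
                .sql(sql)
                .queryArguments(startDate, baseDate)
                .rowMapper((rs, rowNum) -> new ProductMetricsAggregation(
                        rs.getLong("product_id"),
                        rs.getLong("view_count"),
                        rs.getLong("like_count"),
                        rs.getLong("sales_count"),
                        rs.getDouble("score")))
                .build();
    }
}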
(4) 점수 공식 (Round 9 피드백 반영)

- 선형 합산(view×0.1 + like×0.2 + sales×0.7) 대신 `x/(x+k)` 기반 비선형 합산 ← 선택
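선택한 수식을 코드로 옮기면 아래와 같다(메서드명은 예시). 조회수가 1,000에서 10,000으로 10배가 되어도 view 항의 기여는 약 0.091에서 0.099로만 증가해, 이상치가 순위를 독점하기 어렵다.

// saturation 점수 계산 스케치: x/(x+k), k=100, 가중치 view 0.1 / like 0.2 / sales 0.7.
// 음수 누적(환불, 좋아요 취소)은 Math.max(x, 0)으로 0 처리한다.
static double calculateScore(long viewCount, long likeCount, long salesCount) {
    final double k = 100.0;
    double v = Math.max(viewCount, 0);
    double l = Math.max(likeCount, 0);
    double s = Math.max(salesCount, 0);
    return 0.1 * (v / (v + k))
         + 0.2 * (l / (l + k))
         + 0.7 * (s / (s + k));
}

// 예: calculateScore(1_000, 0, 0)  ≈ 0.0909
//     calculateScore(10_000, 0, 0) ≈ 0.0990  → 조회수 10배에도 점수 차는 미미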
(5) 캐싱 범위

- `RankingCacheStore` 확장 (cache key에 period 포함)
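캐시 키에 period를 포함하고 TTL을 차등 적용하는 의도를 코드로 나타내면 아래와 같다. 키 포맷과 클래스/메서드명은 실제 RankingCacheStore 구현과 다를 수 있는 가정이다.

import java.time.Duration;
import java.time.LocalDate;

final class RankingCacheKeySketch {

    static String cacheKey(String period, LocalDate date, int page, int size) {
        // 예: ranking:weekly:20260326:0:20 — 실제 키 포맷은 구현에 따라 다를 수 있다
        return "ranking:%s:%s:%d:%d".formatted(period, date.toString().replace("-", ""), page, size);
    }

    static Duration ttlOf(String period) {
        // daily는 자주 갱신되므로 5분, weekly/monthly는 배치 주기가 길어 1시간
        return "daily".equals(period) ? Duration.ofMinutes(5) : Duration.ofHours(1);
    }
}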
트레이드오프

- `GREATEST(x, 0)`으로 음수 방어 → 환불/좋아요 취소 누적이 과도해도 점수가 폭주하지 않음

추후 개선 여지
🏗️ Design Overview
변경 범위
- 대상 모듈: `commerce-api`, `commerce-batch`, `commerce-streamer`
- `commerce-batch`에 랭킹 집계 Job (`RankingAggregationJobConfig`, `ProductMetricsAggregation`, `MvProductRankWeekly/Monthly` 엔티티)
- `commerce-api`에 MV 조회용 도메인/인프라 (`MvRankingRepository` + Adapter, `MvProductRankWeekly/Monthly` 엔티티)
- `mv_product_rank_weekly`, `mv_product_rank_monthly` 테이블 추가
- `product_metrics` 스키마 변경: UNIQUE(product_id) → (product_id, metric_date)

주요 컴포넌트 책임
- `RankingAggregationJobConfig`: Job/Step/Reader/Processor/Writer 빈 구성. `baseDate` Job 파라미터로 주간(-6일)/월간(-29일) 기간 결정.
- `JdbcCursorItemReader`: `product_metrics`를 GROUP BY + saturation ORDER BY + LIMIT 100으로 읽어 `ProductMetricsAggregation`으로 매핑. `products` INNER JOIN으로 soft delete 상품 제외.
- `ItemProcessor`: `AtomicInteger` 카운터로 rank 부여, `MvProductRankWeekly/Monthly` 엔티티 생성.
- `ItemWriter`: 기준일 기존 데이터 `deleteByAggregatedAt` 후 `saveAll` (같은 트랜잭션 내에서 멱등 교체).
- `ProductMetricsAggregation.calculateScore()`: saturation 수식 도메인 로직. 음수 카운트는 0 처리.
- `RankingFacade`: period별 분기 (Redis/DB), 캐싱 레이어 공통 적용.
- `RankingCacheStore`: `period` 파라미터 추가, TTL 차등 (daily 5분 / 그외 1시간).
- `MetricsEventService`(streamer): 이벤트 수신 시 Asia/Seoul 오늘 날짜를 `metric_date`로 적재.
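ItemWriter의 멱등 교체를 단순화하면 아래 형태다. 리포지토리 타입명과 Spring Batch 5의 Chunk 기반 시그니처 사용은 가정이며, chunk 크기 100 = LIMIT 100이라 전체가 한 번의 write(한 트랜잭션)로 처리된다는 전제를 둔 스케치다.

import java.time.LocalDate;
import org.springframework.batch.item.Chunk;
import org.springframework.batch.item.ItemWriter;

// 기준일 데이터를 지우고 다시 넣어, 같은 baseDate로 재실행해도 결과가 동일하도록 하는 스케치
class MvWeeklyRankWriter implements ItemWriter<MvProductRankWeekly> {

    private final MvProductRankWeeklyRepository repository; // 타입명은 가정
    private final LocalDate aggregatedAt;

    MvWeeklyRankWriter(MvProductRankWeeklyRepository repository, LocalDate aggregatedAt) {
        this.repository = repository;
        this.aggregatedAt = aggregatedAt;
    }

    @Override
    public void write(Chunk<? extends MvProductRankWeekly> chunk) {
        // chunk 크기 100 = LIMIT 100이므로 DELETE와 INSERT가 같은 청크 트랜잭션 안에서 수행된다는 가정
        repository.deleteByAggregatedAt(aggregatedAt);
        repository.saveAll(chunk.getItems());
    }
}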
🔁 Flow Diagram

Main Flow — 배치 집계 & API 조회
sequenceDiagram
    autonumber
    participant Scheduler as 스케줄러 (매일 새벽)
    participant Job as RankingAggregationJob
    participant Reader as JdbcCursorItemReader
    participant Processor as ItemProcessor
    participant Writer as ItemWriter
    participant DB as MySQL (product_metrics, mv_*)

    Scheduler->>Job: baseDate 파라미터로 실행
    Job->>Reader: SQL 실행 (GROUP BY + saturation ORDER BY + LIMIT 100)
    Reader->>DB: SELECT ... INNER JOIN products ON deleted_at IS NULL
    DB-->>Reader: 집계 결과 (최대 100건)
    loop 청크 단위
        Reader-->>Processor: ProductMetricsAggregation
        Processor->>Processor: saturation score 계산 + rank 부여
        Processor-->>Writer: MvProductRank* 엔티티
    end
    Writer->>DB: DELETE FROM mv_* WHERE aggregated_at = baseDate
    Writer->>DB: INSERT INTO mv_* (...)
    Note over Writer,DB: Weekly Step → Monthly Step 순차 실행

    participant Client
    participant API as RankingController
    participant Facade as RankingFacade
    participant Cache as RankingCacheStore (Redis)

    Client->>API: GET /api/v1/rankings?period=weekly&date=...
    API->>Facade: getRankings(period, date, page, size)
    Facade->>Cache: get(period, date, page, size)
    alt 캐시 적중
        Cache-->>Facade: 캐시된 결과
    else 캐시 미스
        alt period = daily
            Facade->>DB: Redis ZSET 조회 + Product/Brand 조회
        else period = weekly/monthly
            Facade->>DB: mv_product_rank_* 조회 + Product/Brand 조회
        end
        Facade->>Cache: put(period, date, ..., result, TTL)
    end
    Facade-->>API: 랭킹 목록
    API-->>Client: 200 OK

예외/특이 흐름
- baseDate 파라미터 누락/형식 오류: `IllegalArgumentException` → Step FAILED → Job FAILED
- 같은 baseDate 재실행: DELETE + INSERT가 같은 트랜잭션이라 멱등
- soft delete 상품: `INNER JOIN ... AND deleted_at IS NULL`에서 제외
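baseDate 검증 자체는 단순해서, 예를 들어 아래처럼 파라미터를 확인하고 IllegalArgumentException을 던지는 형태를 가정할 수 있다(클래스/메서드명은 예시).

import java.time.LocalDate;
import java.time.format.DateTimeParseException;

final class BaseDateParameterGuard {

    // baseDate 파라미터가 없거나 형식이 잘못되면 IllegalArgumentException을 던져 Step/Job을 FAILED로 만든다
    static LocalDate requireBaseDate(String raw) {
        if (raw == null || raw.isBlank()) {
            throw new IllegalArgumentException("baseDate job parameter is required");
        }
        try {
            return LocalDate.parse(raw); // ISO-8601(yyyy-MM-dd) 형식 가정
        } catch (DateTimeParseException e) {
            throw new IllegalArgumentException("invalid baseDate: " + raw, e);
        }
    }
}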
주간/월간 랭킹 시스템 구현 (Spring Batch + Materialized View)

변경 목적
기존 일간 Redis ZSET 랭킹 위에 Spring Batch 기반 집계 기능을 추가하여 주간/월간 랭킹을 제공하고, 랭킹 API에 `period` 파라미터(daily|weekly|monthly)를 통해 일간/주간/월간을 통합 조회할 수 있도록 개선.
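period 분기의 뼈대를 스케치하면 아래와 같다. loadFromMv/loadFromRedisZset 같은 보조 메서드와 시그니처는 설명을 위한 가정이며, 실제 RankingFacade 구현과 다를 수 있다.

// period 분기 뼈대: daily는 Redis ZSET, weekly/monthly는 MV 테이블을 조회한다.
public List<RankingItemInfo> getRankings(String period, LocalDate date, int page, int size) {
    List<RankingItemInfo> cached = rankingCacheStore.getRankings(period, date, page, size);
    if (cached != null) {
        return cached;                                            // 캐시 적중
    }
    List<RankingItemInfo> result = switch (period) {
        case "weekly"  -> loadFromMv(mvRankingRepository.findWeeklyRankings(date, page, size));
        case "monthly" -> loadFromMv(mvRankingRepository.findMonthlyRankings(date, page, size));
        default        -> loadFromRedisZset(date, page, size);    // daily
    };
    rankingCacheStore.putRankings(period, date, page, size, result); // period별 TTL로 저장
    return result;
}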
핵심 변경점

- `product_metrics` 스키마를 일별 스냅샷으로 변경(복합 UNIQUE: product_id + metric_date).
- 랭킹 집계 배치 Job(`RankingAggregationJobConfig`) 추가: Chunk-Oriented 처리(크기 100), `baseDate` 파라미터로 롤링 기간 결정(주간 7일, 월간 30일), `JdbcCursorItemReader` → `ItemProcessor`(순위 부여) → `ItemWriter`(멱등 적재: DELETE 후 INSERT) 파이프라인 구성.
- 점수 공식을 saturation 함수 `x/(x+k)`(k=100, 가중치: view 0.1, like 0.2, sales 0.7)로 변경해 이상치 완화.
- `mv_product_rank_weekly`, `mv_product_rank_monthly` MV 테이블 및 관련 엔티티/레포지토리 추가.
- 캐시 KEY에 `period`를 포함하여 KEY 생성, TTL 차등 설정(daily 5분, weekly/monthly 1시간).
- 배치 집계 쿼리에서 soft delete 상품 제외(`WHERE p.deleted_at IS NULL`).

리스크/주의사항
- k=100은 현재 임의값이며 실제 운영 데이터 분포를 바탕으로 튜닝 필요(TODO 주석 있음).
- TOP_N=100으로 고정되어 있는데, 실무 요구사항에 따라 조정 가능성 검토 필요.
- `baseDate` 파라미터 필수 제공으로 누락 시 배치 실패 발생.

테스트/검증
- `deleted_at` 설정 상품이 배치 결과에서 제외되는지 확인.