Round 10 · #414
Conversation
- Add DailyMetricSnapshot record (domain carrier for per-day metric data)
- Implement RankingScore.calculateWithDecay() (applies decay rate 0.85^daysAgo)
- Add WEEKLY, MONTHLY to the RankingType enum (with Redis key prefix + TTL)
- Add RankingDateKey.defaultKey(RankingType) (moves Controller business logic into the domain)
- Add 3 time-decay tests (recency weighting, sum consistency, empty-snapshot edge case)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
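To make the commit's decay rule concrete, here is a minimal standalone sketch. Only the 0.85^daysAgo factor comes from the commit above; the Snapshot record, the per-day weights in daily(), and all names are hypothetical stand-ins, not the project's code.

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

// Hypothetical stand-in for DailyMetricSnapshot: one day's counts for one product.
record Snapshot(LocalDate date, long viewCount, long likesCount, long salesCount) {}

class DecaySketch {
    static final double DECAY_RATE = 0.85;

    // Assumed per-day weighting (illustrative only; the real weights are not in the commit).
    static double daily(long views, long likes, long sales) {
        return views * 0.1 + likes * 0.2 + sales * 0.6;
    }

    // Sum each day's score, discounted by 0.85 per day of age relative to baseDate.
    static double withDecay(List<Snapshot> snapshots, LocalDate baseDate) {
        double total = 0.0;
        for (Snapshot s : snapshots) {
            long daysAgo = ChronoUnit.DAYS.between(s.date(), baseDate);
            total += daily(s.viewCount(), s.likesCount(), s.salesCount())
                    * Math.pow(DECAY_RATE, daysAgo);
        }
        return total;
    }
}
```

A snapshot dated baseDate contributes its full daily score; a snapshot seven days old contributes only 0.85^7 of it, which is the "recency weighting" the tests above assert.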
- MvProductRankWeekly / MvProductRankMonthly entities (store counts only; no score/rank)
- Unique constraint on (productId, calculatedDate) + index on calculatedDate
- JPQL bulk DELETE (the derived deleteBy would do SELECT + row-by-row DELETE → avoids unique-constraint collisions)
- Add ProductMetricsDailyRepository.findByDateBetween() (for rolling-window queries)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Weekly/MonthlyRankingJobConfig — two-step structure:
- Step 1 (Tasklet): delete existing MV data via JPQL bulk DELETE (idempotent re-runs)
- Step 2 (Chunk, size=100):
  - Reader: load 7/30 days of ProductMetricsDaily → group by productId
  - Processor: sum counts (for the MV) and compute the time-decayed score (for Redis) in one pass
  - Writer: MV saveAll + Redis ZSET incrementScore (optimized to a single DB read)
10 E2E tests:
- Weekly: 6 (normal run, MV persistence, count summation ×3, re-run idempotency)
- Monthly: 3 (normal run, count summation ×2)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
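The Reader-side grouping described above (group the window's daily rows by productId, then sum counts per product) can be sketched with stand-in types; Daily and Aggregate here are hypothetical simplifications, not the real entities:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Stand-in for the JPA entity; only the fields the grouping needs.
record Daily(long productId, long viewCount, long likesCount, long salesCount) {}

// Per-product aggregate that the processor would sum counts over.
record Aggregate(long productId, List<Daily> dailyMetrics) {
    long totalViews() { return dailyMetrics.stream().mapToLong(Daily::viewCount).sum(); }
}

class Grouping {
    // Group the window's rows by productId, preserving first-seen order.
    static Map<Long, Aggregate> byProduct(List<Daily> window) {
        Map<Long, Aggregate> map = new LinkedHashMap<>();
        for (Daily d : window) {
            map.computeIfAbsent(d.productId(), id -> new Aggregate(id, new ArrayList<>()))
               .dailyMetrics().add(d);
        }
        return map;
    }
}
```

Each Aggregate then feeds both outputs of the processor: the summed counts go to the MV row, and the per-day rows go through the decay calculation for the Redis score.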
- RankingController: remove defaultDateKey() → delegate to RankingDateKey.defaultKey(type)
- RankingService: add @Transactional(readOnly = true) to getRankings (4-parameter overload)
- HOURLY/DAILY/WEEKLY/MONTHLY all take the same path (RankingService → Redis ZSET)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
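A sketch of what the delegated defaultKey(type) might look like. The key prefixes match those quoted later in this review; the yyyyMMdd date format and all names here are assumptions for illustration:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Sketch only: mirrors the RankingType prefixes quoted in the review, not real code.
enum RankingType {
    DAILY("ranking:daily:"), HOURLY("ranking:hourly:"),
    WEEKLY("ranking:weekly:"), MONTHLY("ranking:monthly:");

    final String prefix;
    RankingType(String prefix) { this.prefix = prefix; }
}

class RankingDateKeySketch {
    // Assumed date format; the actual format is not shown in this PR excerpt.
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    // The controller passes only the type; the domain owns the default-key decision.
    static String defaultKey(RankingType type, LocalDate date) {
        return type.prefix + FMT.format(date);
    }
}
```

Moving this into the domain keeps the controller free of date logic, which is exactly the delegation the commit describes.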
On Docker Engine 29.3.1 (minimum supported API v1.40), docker-java's default API v1.32 request gets an empty response → "Could not find a valid Docker environment" error. Fixed by setting api.version=1.44 in docker-java.properties. See testcontainers/testcontainers-java#11212
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
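The fix itself, as described above, is a one-line test fixture (shown for reference):

```properties
# Pin the Docker API version docker-java negotiates; see testcontainers/testcontainers-java#11212
api.version=1.44
```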
📝 Walkthrough
Adds weekly and monthly ranking features. Extends the RankingType enum with WEEKLY and MONTHLY, introduces score-calculation logic with a time-decay algorithm, and adds the corresponding batch job configurations, JPA entities, and repositories.
Sequence Diagram(s)
sequenceDiagram
participant Job as Spring Batch Job
participant Reader as Item Reader
participant Processor as Item Processor
participant Writer as Item Writer
participant DB as Database
participant Repo as Repository
Job->>Job: 1. Cleanup Step: delete existing MV data
Job->>Repo: deleteByCalculatedDate(baseDate)
Repo->>DB: DELETE FROM mv_product_rank_*
Job->>Job: 2. Start Aggregate Step
Job->>Reader: read daily metrics for the window range
Reader->>DB: SELECT ProductMetricsDaily<br/>(7-day or 30-day window)
DB-->>Reader: List<ProductMetricsDaily>
Reader-->>Processor: ProductWeeklyAggregate<br/>or ProductMonthlyAggregate
loop chunk processing per product
Processor->>Processor: 1. aggregate metrics<br/>(viewCount, likesCount, salesCount)
Processor->>Processor: 2. create DailyMetricSnapshot
Processor->>Processor: 3. calculateWithDecay()<br/>compute time-decayed score
Processor->>Processor: 4. create MV entity
Processor-->>Writer: RankingResult<br/>(MV entity + score)
end
Writer->>DB: BATCH INSERT MvProductRank*
Writer->>Repo: productRankingRepository<br/>.increaseScore(...)
Repo->>DB: UPDATE ProductRanking
DB-->>Writer: done
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Rationale: wide-ranging changes across the domain, infrastructure, and batch layers require a consistency review. The batch job configuration is organized as a Reader/Processor/Writer pipeline, and verifying the correctness of the time-decay algorithm is essential. The patterns are clear and test coverage exists, however, so this is not top-tier complexity.
Possibly related PRs
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (2 warnings)
Actionable comments posted: 12
🧹 Nitpick comments (11)
domain/src/main/java/com/loopers/domain/ranking/DailyMetricSnapshot.java (1)
5-11: The structure fits the intent.
An immutable record is the right choice for a domain value object; nothing to change there. However, a null date reaching the later decay calculation surfaces as an NPE. A defensive date != null check in the compact constructor would expose bugs in the upstream caller (the batch Processor) early.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/com/loopers/domain/ranking/DailyMetricSnapshot.java` around lines 5 - 11, add a compact constructor to the DailyMetricSnapshot record that defends against null dates by validating the date parameter (e.g., using Objects.requireNonNull(date, "date must not be null") or equivalent) so a clear NPE with context is thrown early; implement this inside the DailyMetricSnapshot declaration (compact constructor) to fail fast when date is null.

infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeekly.java (1)
21-50: Recommend making the entity equality/identifier strategy explicit.
The class appears to inherit @Id from BaseTimeEntity, but no equals/hashCode is defined here. Since this MV entity is repeatedly upserted under the same (productId, calculatedDate) key across the domain/infrastructure boundary, verify that identifier-based equality is enforced consistently at the BaseTimeEntity level. The fields are all private simple values, so there is no defensive-copy concern, and LocalDate is immutable, so the getters can stay as they are.
Additionally, the static factory of accepts negative counts. If an aggregation bug lets a negative value in, the resulting distortion of the Redis ZSET score does not self-heal, so add viewCount/likesCount/salesCount >= 0 validation to the factory to fail early.

Suggested fix:

```diff
 public static MvProductRankWeekly of(Long productId, long viewCount, long likesCount,
                                      long salesCount, LocalDate calculatedDate) {
+    if (productId == null || calculatedDate == null) {
+        throw new IllegalArgumentException("productId, calculatedDate must not be null");
+    }
+    if (viewCount < 0 || likesCount < 0 || salesCount < 0) {
+        throw new IllegalArgumentException("count must be non-negative");
+    }
     return new MvProductRankWeekly(productId, viewCount, likesCount, salesCount, calculatedDate);
 }
```

Suggested additional tests: add null/negative-input exception tests for the of factory to the domain tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeekly.java` around lines 21 - 50, implement identity and input validation: add equals() and hashCode() on MvProductRankWeekly to enforce entity identity by the composite key (productId, calculatedDate) so upsert semantics are consistent (if BaseTimeEntity does not already enforce id-based equality). Also add validation inside the static factory method MvProductRankWeekly.of(...) to reject null productId/calculatedDate and any negative viewCount, likesCount, or salesCount by throwing IllegalArgumentException with clear messages. Ensure you reference the class MvProductRankWeekly, the of(...) factory, and the fields productId, calculatedDate, viewCount, likesCount, salesCount when making these changes.

domain/src/main/java/com/loopers/domain/ranking/RankingDateKey.java (1)
19-36: today()/currentHour() depend on the JVM default timezone, which is an operational risk.
If the container/server JVM TZ is UTC while the service assumes KST, ranking keys near midnight resolve a day off, reproducing a "today's ranking is empty" incident. If the batch job's baseDate and the API-side defaultKey are computed in different TZs, key mismatches follow as well. Recommend standardizing on an explicit ZoneId (e.g., Asia/Seoul) or an injected Clock.

Suggested fix:

```diff
- public static String today() {
-     return of(LocalDate.now());
- }
+ private static final ZoneId ZONE = ZoneId.of("Asia/Seoul");
+
+ public static String today() {
+     return of(LocalDate.now(ZONE));
+ }
@@
- public static String currentHour() {
-     return ofHour(LocalDateTime.now());
- }
+ public static String currentHour() {
+     return ofHour(LocalDateTime.now(ZONE));
+ }
```

Suggested additional test: inject Clock.fixed(..., ZoneOffset.UTC) and verify that keys at the midnight boundary resolve consistently on a KST basis (e.g., 23:59 UTC is already 08:59 the next day in KST).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/com/loopers/domain/ranking/RankingDateKey.java` around lines 19 - 36, today() and currentHour() rely on the JVM default timezone, causing inconsistent keys across environments; change them to use an explicit ZoneId (e.g., ZoneId.of("Asia/Seoul")) or accept/inject a Clock so all date/time derivations are deterministic: update today(), ofHour(LocalDateTime)/currentHour() (and defaultKey(RankingType) callers) to use Zoned/Offset-aware APIs with the provided ZoneId/Clock and add tests using Clock.fixed(..., ZoneOffset.UTC) to verify KST-based boundaries produce consistent keys.

infrastructure/redis/src/testFixtures/resources/docker-java.properties (1)
1-1: Check the operational risk of hardcoding the Docker API version.
Pinning api.version=1.44 can break Testcontainers initialization in CI/local environments whose Docker engine does not support that API version. Operationally, leave a comment explaining why this fixture exists (working around a specific issue), and document the team's minimum supported Docker version in the README. Also recommend a CI pipeline step that logs the Docker version so incidents are easier to analyze.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrastructure/redis/src/testFixtures/resources/docker-java.properties` at line 1, the hardcoded Docker API version property (api.version=1.44) in the test fixture should be annotated with a comment that explains why this specific API version is required (e.g., to work around a specific Testcontainers/Docker issue), update the repo README to state the minimum Docker Engine version (or API version) required by the team, and add/advise a CI step that logs the Docker/engine version before tests run to aid troubleshooting; locate the api.version entry in the docker-java.properties fixture and add the explanatory comment, then update documentation and CI config accordingly.

domain/src/main/java/com/loopers/domain/ranking/RankingType.java (1)
5-8: Recommend documenting the TTL design rationale in a comment.
The WEEKLY TTL of 8 days and MONTHLY TTL of 32 days are one day longer than their windows (7/30 days), presumably slack so the previous key can still be served if a batch run fails, but the intent is not visible in the code. Spelling out "window + 1-day buffer" in a comment or named constant prevents a future maintainer from casually shrinking the TTL and causing a ranking gap (outage) during a failed-batch window. The int arithmetic has ample headroom against overflow.

Suggested fix:

```diff
- DAILY("ranking:daily:", 2 * 24 * 60 * 60),
- HOURLY("ranking:hourly:", 24 * 60 * 60),
- WEEKLY("ranking:weekly:", 8 * 24 * 60 * 60),
- MONTHLY("ranking:monthly:", 32 * 24 * 60 * 60);
+ // TTL = window + 1-day buffer (tolerates batch retries/failures)
+ DAILY("ranking:daily:", 2 * 24 * 60 * 60),
+ HOURLY("ranking:hourly:", 24 * 60 * 60),
+ WEEKLY("ranking:weekly:", 8 * 24 * 60 * 60),
+ MONTHLY("ranking:monthly:", 32 * 24 * 60 * 60);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/com/loopers/domain/ranking/RankingType.java` around lines 5 - 8, the TTL values in RankingType (DAILY, HOURLY, WEEKLY, MONTHLY) use windows with an extra day (e.g., WEEKLY = 8 days, MONTHLY = 32 days) but lack any explanatory note; update the enum RankingType to document the design (e.g., add a Javadoc or inline comment on the enum and/or introduce a named constant like BUFFER_DAYS = 1) stating these TTLs are "window size + 1 day buffer to allow reuse when batch jobs fail", and apply that comment to the WEEKLY and MONTHLY entries (and optionally all entries) so future maintainers understand the intentional buffer.

infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyRepository.java (1)
14-16: Explicitly clear the persistence context after the @Modifying bulk delete.
Analysis of an initial run confirmed that the cleanup step and the aggregate step run in separate batch transactions, and that the cleanup step is explicitly wrapped via .tasklet(cleanupTasklet, transactionManager). So the same-transaction scenario of inserting the same entity right after the delete does not occur in the current structure.
Still, the following improvement is recommended: add clearAutomatically = true explicitly. If reads or inserts of the same entity type follow the bulk delete, stale deleted-entity state left in the first-level cache can affect the persistence context. Since batch jobs are re-run frequently, clearing the persistence context explicitly improves operational stability.

Suggested fix:

```diff
- @Modifying
+ @Modifying(clearAutomatically = true)
  @Query("DELETE FROM MvProductRankWeekly m WHERE m.calculatedDate = :calculatedDate")
  void deleteByCalculatedDate(LocalDate calculatedDate);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyRepository.java` around lines 14 - 16, the `@Modifying` annotation on the repository bulk-delete method should explicitly clear the persistence context to avoid stale first-level cache issues; update the annotation on MvProductRankWeeklyRepository.deleteByCalculatedDate to `@Modifying`(clearAutomatically = true) so the EntityManager is cleared after the bulk DELETE operation, ensuring subsequent reads/inserts for the same entity type in later steps see a fresh state.

domain/src/main/java/com/loopers/domain/ranking/RankingScore.java (1)
39-47: No defense against inputs where daysAgo goes negative.
It cannot happen with the current batch reader (which queries only up to baseDate.minusDays(1)), but as a domain utility, if snapshot.date() > baseDate is passed in, Math.pow(0.85, negative) inflates the score exponentially, so a single bad record in production can distort the ranking. A future reader change or direct call risks a regression, so a defensive branch plus boundary tests are recommended.

♻️ Suggested defensive logic and test hardening:

```diff
 public static double calculateWithDecay(List<DailyMetricSnapshot> snapshots, LocalDate baseDate) {
     double total = 0.0;
     for (DailyMetricSnapshot snapshot : snapshots) {
         long daysAgo = ChronoUnit.DAYS.between(snapshot.date(), baseDate);
+        if (daysAgo < 0) {
+            // Future-dated snapshots make no sense for decay; treat them as today (daysAgo = 0).
+            daysAgo = 0;
+        }
         double dailyScore = calculateDaily(snapshot.viewCount(), snapshot.likesCount(), snapshot.salesCount());
         total += dailyScore * Math.pow(DECAY_RATE, daysAgo);
     }
     return total;
 }
```

Suggested additional tests:
- verify that when snapshot.date() == baseDate the result equals dailyScore
- verify that the snapshot.date() > baseDate case is clamped to a finite value

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@domain/src/main/java/com/loopers/domain/ranking/RankingScore.java` around lines 39 - 47, the calculateWithDecay method can produce exploding scores when snapshot.date() > baseDate because daysAgo becomes negative; modify calculateWithDecay in RankingScore to defend against negative daysAgo by either clamping daysAgo to 0 or skipping/folding future-dated snapshots before applying Math.pow(DECAY_RATE, daysAgo) (refer to calculateWithDecay and DECAY_RATE), and add unit tests to verify (1) snapshot.date() == baseDate yields dailyScore unchanged and (2) snapshot.date() > baseDate is handled to produce a finite, bounded contribution.

presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java (2)
99-123: Loading the whole window at once in the reader is a bottleneck/OOM risk at production scale.
findByDateBetween(startDate, endDate) loads 30 days of ProductMetricsDaily for every product into a single in-memory List. As product count × 30 days grows, heap pressure and GC pauses can also affect other jobs sharing the same JVM. The PR states this trade-off explicitly, so no immediate fix is required, but as follow-up recommend:
- Pull only DB-side aggregates (SUM with GROUP BY productId) down via pagination, and fetch the raw snapshots needed for the decay score per product, minimally.
- Stream with JdbcCursorItemReader/JpaCursorItemReader and aggregate on productId boundaries in an ItemStream.
- Cap reader-side memory in line with the chunk commit interval.
Additional test recommended: a load test at the scale of thousands of products × 30 days, or at least a profiling smoke test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java` around lines 99 - 123, The reader monthlyRankingReader currently loads all ProductMetricsDaily via productMetricsDailyRepository.findByDateBetween(startDate, endDate) into memory, risking OOM; change to a streaming/paginated approach: replace the single-list load with a cursor or paginated query (e.g., JpaCursorItemReader/JdbcCursorItemReader or repository pagination) that yields rows ordered by productId and date, then perform per-product accumulation into ProductMonthlyAggregate inside the reader/ItemStream as rows stream in (respecting WINDOW_DAYS), or alternatively query aggregated SUMs grouped by productId and page through results; ensure the iterator-based return keeps only one product's dailyMetrics in memory at a time and honor chunk commit-intervals to bound reader memory.
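The product-boundary accumulation suggested in the prompt above can be illustrated without Spring Batch. Row and BoundaryGroupingReader are hypothetical; a real reader would wrap a cursor or paging source ordered by productId:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Simplified row as it would arrive from a cursor ordered by productId (then date).
record Row(long productId, long viewCount) {}

// Emits one product's rows at a time: accumulate until the productId changes,
// so only a single product's daily metrics are ever held in memory.
class BoundaryGroupingReader implements Iterator<List<Row>> {
    private final Iterator<Row> source;
    private Row pending; // read-ahead: first row of the next group

    BoundaryGroupingReader(Iterator<Row> source) {
        this.source = source;
        this.pending = source.hasNext() ? source.next() : null;
    }

    @Override public boolean hasNext() { return pending != null; }

    @Override public List<Row> next() {
        List<Row> group = new ArrayList<>();
        long id = pending.productId();
        group.add(pending);
        pending = null;
        while (source.hasNext()) {
            Row r = source.next();
            if (r.productId() != id) { pending = r; break; } // boundary reached
            group.add(r);
        }
        return group;
    }
}
```

With this shape, chunk size bounds how many products are in flight, rather than the whole 30-day window.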
155-170: Redis calls inside the chunk transaction cause DB–Redis asymmetry on partial failure.
The writer calls saveAll and then loops over Redis incrementScore inside the Spring Batch chunk transaction. If some items fail the Redis call, the DB transaction rolls back and that chunk's MV rows disappear, but the increments that already succeeded hit an external system and are never compensated. On retry, the previously successful increments are applied again, inflating the ranking. This is a documented limitation in the PR, but operationally recommend:
- Set an explicit timeout and a retry cap on the Redis calls, and on failure record the chunk's productIds for exclusion from reprocessing (idempotency key).
- Or apply the chunk as a single batch via MULTI/pipelining so it succeeds or fails as a whole, and switch the reprocessing policy to a ZSET overwrite (ZADD).
- At minimum, add a test that explicitly verifies that Redis timeouts/connection failures propagate as Step failures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java` around lines 155 - 170, monthlyRankingWriter currently calls mvProductRankMonthlyRepository.saveAll inside the chunk transaction and then loops calling productRankingRepository.incrementScore (with RankingDateKey.of(baseDate)), which can cause DB/Redis asymmetry if some Redis calls fail; fix by making Redis updates resilient and atomic: either (A) add explicit timeouts and bounded retry logic inside productRankingRepository.incrementScore (or a new method incrementScoresWithRetries) and record failed productIds to an idempotency/retry-exclude store so retries don't double-count, or (B) implement a chunk-level batch Redis operation (e.g., productRankingRepository.incrementScoresBatch(dateKey, List<ProductScore>) that uses MULTI/pipeline with ZADD-overwrite semantics) and call that instead of per-item incrementScore so the whole chunk succeeds or fails together; also add a unit/integration test that simulates Redis timeout/connection failure to assert the Step fails (i.e., exception propagates) and verify the chosen retry/idempotency behavior.

presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java (2)
108-113: The computeIfAbsent return value is discarded and get is called again.
That costs two hash lookups and hurts readability; using the value computeIfAbsent returns makes a single lookup sufficient.

♻️ Simplified example:

```diff
- Map<Long, ProductWeeklyAggregate> aggregateMap = new LinkedHashMap<>();
- for (ProductMetricsDaily daily : dailyMetrics) {
-     aggregateMap.computeIfAbsent(daily.getProductId(),
-         id -> new ProductWeeklyAggregate(id, new ArrayList<>()));
-     aggregateMap.get(daily.getProductId()).dailyMetrics().add(daily);
- }
+ Map<Long, ProductWeeklyAggregate> aggregateMap = new LinkedHashMap<>();
+ for (ProductMetricsDaily daily : dailyMetrics) {
+     aggregateMap.computeIfAbsent(daily.getProductId(),
+             id -> new ProductWeeklyAggregate(id, new ArrayList<>()))
+         .dailyMetrics().add(daily);
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java` around lines 108 - 113, the loop calls computeIfAbsent, discards its return value, and then calls Map.get again, duplicating the hash lookup; in the for loop over the ProductMetricsDaily list that builds aggregateMap inside WeeklyRankingJobConfig, take the return value of aggregateMap.computeIfAbsent(daily.getProductId(), id -> new ProductWeeklyAggregate(id, new ArrayList<>())) into a variable (e.g., ProductWeeklyAggregate agg) and call agg.dailyMetrics().add(daily), cutting the hash lookups to one and improving readability.
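For illustration, the chained-call form of computeIfAbsent in isolation (generic demo types, unrelated to the project):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ComputeIfAbsentDemo {
    // One hash lookup per item: computeIfAbsent returns the existing or newly created list.
    static Map<Long, List<String>> group(List<String> items) {
        Map<Long, List<String>> map = new LinkedHashMap<>();
        for (String item : items) {
            long key = item.length(); // arbitrary grouping key for the demo
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(item);
        }
        return map;
    }
}
```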
42-95: The structural duplication between the weekly and monthly batches will only grow.
Only the window length (7/30), the MV entity/repository, and the RankingType differ; the Job/Step/Reader/Processor/Writer skeleton is identical. It works as-is, but future logging/retry/idempotency improvements will have to be applied to both files in lockstep, which risks drift.
Suggested fix (optional):
- Introduce an abstract RollingRankingJobTemplate that takes the window length, the MV repository, and the RankingType as type parameters/strategies, and reduce Weekly/Monthly to thin configurations that differ only in those parameters.
- At minimum, extract the reader's grouping logic and the writer's Redis-update logic into shared utilities.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java` around lines 42 - 95, WeeklyRankingJobConfig duplicates the same Job/Step/Reader/Processor/Writer skeleton that Monthly will use differing only by window length, MV repository and RankingType; extract that common structure into an abstract template (e.g., RollingRankingJobTemplate) and make WeeklyRankingJobConfig a thin configurator that injects the window length (WINDOW_DAYS), the MV repository (mvProductRankWeeklyRepository), and the RankingType. Refactor weeklyRankingAggregateStep, weeklyRankingCleanupStep, weeklyRankingReader/Processor/Writer logic into parameterizable template methods or strategy interfaces so shared behaviors (chunk config, listeners, retry/idem handling, reader grouping and writer Redis update) live in the template and only the specific parameters and repository beans are provided by WeeklyRankingJobConfig. Ensure existing bean names (JOB_NAME, CLEANUP_STEP_NAME, AGGREGATE_STEP_NAME) are preserved by the concrete configs and that weeklyRankingCleanupTasklet still delegates deleteByCalculatedDate(baseDate) to the injected MV repository.
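The template idea from the prompt above, sketched as plain Java. RollingRankingJobTemplate is the hypothetical name suggested in the review, and the window convention ([baseDate - windowDays, baseDate - 1]) follows the boundary cases described elsewhere in this review:

```java
import java.time.LocalDate;
import java.util.function.Consumer;

// Strategy-parameterized skeleton: weekly/monthly differ only in these knobs.
abstract class RollingRankingJobTemplate {
    abstract int windowDays();
    abstract String redisPrefix();
    abstract Consumer<LocalDate> cleanup(); // e.g., repo::deleteByCalculatedDate

    // Rolling window is [baseDate - windowDays, baseDate - 1]; baseDate itself is excluded.
    LocalDate windowStart(LocalDate baseDate) { return baseDate.minusDays(windowDays()); }
    LocalDate windowEnd(LocalDate baseDate) { return baseDate.minusDays(1); }
}

class WeeklyTemplate extends RollingRankingJobTemplate {
    int windowDays() { return 7; }
    String redisPrefix() { return "ranking:weekly:"; }
    Consumer<LocalDate> cleanup() { return d -> { /* delete MV rows for d */ }; }
}

class MonthlyTemplate extends RollingRankingJobTemplate {
    int windowDays() { return 30; }
    String redisPrefix() { return "ranking:monthly:"; }
    Consumer<LocalDate> cleanup() { return d -> { /* delete MV rows for d */ }; }
}
```

With the skeleton centralized, a retry or idempotency fix lands once in the template instead of in two parallel config classes.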
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@domain/src/test/java/com/loopers/domain/ranking/RankingScoreTest.java`:
- Around line 87-102: Change the equality assertion in the test
'주간_점수는_여러_날의_감쇠_점수를_합산한다' to use an offset comparison to avoid flaky
floating-point equality: replace the exact equals check on 'combined' vs 'score1
+ score2' with an assertion that they are close within a small tolerance (e.g.
isCloseTo(..., within(1e-9))). Keep the rest of the setup that calls
RankingScore.calculateWithDecay(List<DailyMetricSnapshot>, today) for 'day1' and
'day2' unchanged.
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDailyRepository.java`:
- Line 15: ProductMetricsDailyRepository's findByDateBetween method has no index on the
date column, so range queries cause full table scans; add `@Index`(name =
"idx_product_metrics_daily_date", columnList = "date") to the ProductMetricsDaily entity
or create the index in a DB migration. To prevent memory problems from large result
sets, refactor the current List<ProductMetricsDaily> return type to
Stream<ProductMetricsDaily> or a Pageable-based Spring Data paging API (e.g.,
Stream<ProductMetricsDaily> findByDateBetween(...) or Page<ProductMetricsDaily>
findByDateBetween(..., Pageable)), and add repository slice tests covering boundary
values (startDate == endDate, startDate > endDate) and the empty-result case.
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthly.java`:
- Around line 16-18: Update the JPA annotations so DB column names match the
physical naming strategy: in MvProductRankMonthly change the `@UniqueConstraint`
and `@Index` columnNames/columnList from "productId"/"calculatedDate" to the
actual snake_case names "product_id" and "calculated_date" (or alternatively add
`@Column`(name = "...") on the entity fields productId and calculatedDate to fix
the mapping); make the same fixes in ProductMetricsHourly (productId, hour ->
"product_id", "hour") and ProductMetricsDaily (productId, date -> "product_id",
"date") by aligning `@UniqueConstraint/`@Index or adding `@Column`(name=...) on
those fields; finally add an integration test that inserts duplicate keys for
each entity and asserts a DataIntegrityViolationException is thrown to validate
the unique constraints.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java`:
- Around line 77-83: The cleanup Tasklet (monthlyRankingCleanupTasklet) only
calls mvProductRankMonthlyRepository.deleteByCalculatedDate(baseDate) but does
not remove the Redis ZSET, causing duplicate score accumulation on re-run;
modify monthlyRankingCleanupTasklet to also delete the Redis key
RankingDateKey.of(baseDate) (use the existing Redis client/bean such as
StringRedisTemplate/RedisTemplate or RedisService) before returning, e.g. call
redisTemplate.delete(RankingDateKey.of(baseDate).getKey()) (ensure the Redis
bean is injected into the config), so the ZSET is removed together with the DB
rows and re-runs are idempotent.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java`:
- Around line 158-169: The writer in WeeklyRankingJobConfig mixes DB saveAll
(mvProductRankWeeklyRepository.saveAll) and Redis increments
(productRankingRepository.incrementScore), causing cross-system inconsistency on
partial failures; change the Redis update to an idempotent overwrite API (e.g.,
productRankingRepository.setScore) and move Redis writes out of the chunk
transaction so they occur only after DB commit — implement Redis application in
a separate Step that reads MV rows (using RankingDateKey.of(baseDate) and
WeeklyRankingResult.mvEntity mapping) or in a ChunkListener#afterChunk /
StepExecutionListener#afterStep callback; at minimum decouple failure handling
by adding retries and clear error logging (preserve cause) around Redis calls
instead of performing increments inside the transactional writer.
- Around line 97-103: The weeklyRankingReader currently assumes
jobParameters['baseDate'] is non-null and will throw an unclear
NullPointerException; add an explicit null check at the start of
weeklyRankingReader (e.g., Objects.requireNonNull(baseDate, "baseDate is
required (yyyy-MM-dd)")) and apply the same check to the corresponding
weeklyRankingProcessor/weeklyRankingWriter bean entry points referenced in this
class so they fail fast with a clear message; additionally register a
JobParametersValidator for the Job to validate presence/format of baseDate at
startup and add a unit/integration test asserting that missing or malformed
baseDate causes the job to fail with the explicit error message.
- Around line 69-83: The cleanup tasklet (weeklyRankingCleanupTasklet) currently
only deletes MV rows (mvProductRankWeeklyRepository.deleteByCalculatedDate) but
leaves the Redis ZSET, causing score accumulation on re-runs; update
weeklyRankingCleanupTasklet (and the Step weeklyRankingCleanupStep that
references it) to also remove the Redis ZSET for the given
baseDate/RankingType.WEEKLY before aggregation runs—compute the dateKey
(RankingDateKey.of(baseDate) with RankingType.WEEKLY) and call the Redis
deletion API (e.g., via productRankingRepository or redisTemplate.del / ZREM all
members) in the same tasklet so MV and Redis are cleared atomically for that
baseDate; alternatively, if you prefer the writer approach, add an idempotent
API on productRankingRepository that performs absolute ZADD (overwrite) instead
of incrementScore and switch the writer to use it, and add an E2E test asserting
two runs with the same baseDate produce identical MV rows and Redis ZSET
members/scores.
- Around line 99-123: weeklyRankingReader currently loads the entire WINDOW_DAYS
window via productMetricsDailyRepository.findByDateBetween into a LinkedHashMap
of ProductWeeklyAggregate, causing all per-day entities to stay in memory and
bypass CHUNK_SIZE; replace this by switching weeklyRankingReader to a
pageable/streaming reader (e.g., JpaPagingItemReader or RepositoryItemReader)
that queries ProductMetricsDaily ordered by productId and date and emits results
page-by-page, then wrap that reader with a grouping layer that accumulates
metrics only until productId boundary and returns a single
ProductWeeklyAggregate per product (flush/clear EntityManager between chunks);
additionally apply readOnly hints (setHint("org.hibernate.readOnly", true) or
`@Transactional`(readOnly=true)) to reduce 1st-level cache, or alternatively push
aggregation into the DB (SELECT productId, SUM(...), GROUP_CONCAT/ARRAY_AGG per
day) and stream those results instead; remove the LinkedHashMap and iterator
accumulation and ensure the new reader honors CHUNK_SIZE.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java`:
- Around line 79-100: Extend the existing test 월간_랭킹_배치가_30일치_카운트를_합산한다 to add
four boundary cases and a Redis ZSET verification: create ProductMetricsDaily
entries for baseDate.minusDays(1) and baseDate.minusDays(30) (should be
included), and for baseDate (minusDays(0)) and baseDate.minusDays(31) (should be
excluded); save them via productMetricsDailyRepository, call runJob(today) and
assert mvProductRankMonthlyRepository.findByCalculatedDate(today) sums only the
included records; additionally verify the batch writer's final effect by
checking the Redis ZSET score(s) produced by monthlyRankingWriter (or the Redis
key used by the job) to ensure decay/score logic and that re-running runJob does
not double-count.
- Around line 62-77: The test method 월간_랭킹_배치가_정상_실행된다() seeds a
ProductMetricsDaily via ProductMetricsDaily.init(...) and incrementViews(...)
but only asserts the job exit status from runJob(today), making the test weak;
update the test to also verify that the expected MV record(s) were created and
contain expected values (e.g. assert productMetricsMonthlyRepository/findBy...
contains one record for the product id created from ProductMetricsDaily and that
a core field like views or rank equals the expected value), or if the intent is
to test an empty-input run, rename the test to reflect the empty scenario and
remove the seeded ProductMetricsDaily; ensure assertions reference
ProductMetricsDaily, productMetricsDailyRepository, runJob(...) and the monthly
projection repository used by the job (e.g., productMetricsMonthlyRepository) so
the test fails on regressions.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java`:
- Around line 79-119: Update WeeklyRankingJobE2ETest to add boundary checks for
the weekly window: create/seed ProductMetricsDaily rows at baseDate.minusDays(7)
(should be included), baseDate (minusDays(0), should be excluded), and
baseDate.minusDays(8) (should be excluded), then run runJob(today) and assert
that the minusDays(7) record contributes to the aggregated MvProductRankWeekly
while the minusDays(0) and minusDays(8) records do not; place these assertions
in a new test method (or extend an existing one) to specifically validate
inclusion/exclusion at the window boundaries.
- Around line 163-180: The test 재실행시_기존_데이터를_삭제하고_새로_생성한다 currently only asserts
MV row count and must also assert Redis ZSET score idempotency: after the first
run capture the score via productRankingRepository.getScore(productId, dateKey,
RankingType.WEEKLY) (use the same productId from ProductMetricsDaily.init, e.g.,
1L, and the weekly date key derived from today), then run the job a second time
and assert the score is unchanged by calling
productRankingRepository.getScore(...) again; update the test method to call
runJob(today) once, read and save the score, call runJob(today) again, read the
score again, and assert equality to detect duplicate score accumulation.
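The duplicate-accumulation risk these prompts guard against can be demonstrated with a tiny in-memory stand-in for the ranking ZSET (FakeZSet is illustrative only, contrasting ZINCRBY-style increments with ZADD-style overwrites):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for one ranking ZSET: member -> score.
class FakeZSet {
    final Map<Long, Double> scores = new HashMap<>();

    // ZINCRBY-style write: re-running the batch writer adds the score again.
    void increment(long productId, double score) {
        scores.merge(productId, score, Double::sum);
    }

    // ZADD-style overwrite: re-running the batch writer is idempotent.
    void set(long productId, double score) {
        scores.put(productId, score);
    }
}
```

This is exactly the run-twice-and-compare shape the E2E test prompt asks for, minus the real job and Redis.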
---
Nitpick comments:
In `@domain/src/main/java/com/loopers/domain/ranking/DailyMetricSnapshot.java`:
- Around line 5-11: Add a compact constructor to the DailyMetricSnapshot record
that defends against null dates by validating the date parameter (e.g., using
Objects.requireNonNull(date, "date must not be null") or equivalent) so a clear
NPE with context is thrown early; implement this inside the DailyMetricSnapshot
declaration (compact constructor) to fail-fast when date is null.
In `@domain/src/main/java/com/loopers/domain/ranking/RankingDateKey.java`:
- Around line 19-36: today() and currentHour() rely on the JVM default timezone,
causing inconsistent keys across environments; change them to use an explicit
ZoneId (e.g., ZoneId.of("Asia/Seoul")) or accept/inject a Clock so all date/time
derivations are deterministic: update today(),
ofHour(LocalDateTime)/currentHour() (and defaultKey(RankingType) callers) to use
Zoned/Offset-aware APIs with the provided ZoneId/Clock and add tests using
Clock.fixed(..., ZoneOffset.UTC) to verify KST-based boundaries produce
consistent keys.
In `@domain/src/main/java/com/loopers/domain/ranking/RankingScore.java`:
- Around line 39-47: The calculateWithDecay method can produce exploding scores
when snapshot.date() > baseDate because daysAgo becomes negative; modify
calculateWithDecay in RankingScore to defend against negative daysAgo by either
clamping daysAgo to 0 or skipping/folding future-dated snapshots before applying
Math.pow(DECAY_RATE, daysAgo) (refer to calculateWithDecay and DECAY_RATE), and
add unit tests to verify (1) snapshot.date() == baseDate yields dailyScore
unchanged and (2) snapshot.date() > baseDate is handled to produce a finite,
bounded contribution.
In `@domain/src/main/java/com/loopers/domain/ranking/RankingType.java`:
- Around line 5-8: The TTL values in RankingType (DAILY, HOURLY, WEEKLY,
MONTHLY) use windows with an extra day (e.g., WEEKLY = 8 days, MONTHLY = 32
days) but lack any explanatory note; update the enum RankingType to document the
design (e.g., add a Javadoc or inline comment on the enum and/or introduce a
named constant like BUFFER_DAYS = 1) stating these TTLs are "window size + 1 day
buffer to allow reuse when batch jobs fail", and apply that comment to the
WEEKLY and MONTHLY entries (and optionally all entries) so future maintainers
understand the intentional buffer.
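A sketch of the self-documenting variant; the resulting 8-day and 32-day TTLs match the values quoted above, and splitting MONTHLY into a 30-day window plus a 2-day buffer is an assumption about how those totals decompose.

```java
import java.time.Duration;

// Sketch: TTL = window + explicit buffer, so the intent is visible in code.
enum RankingWindow {
    WEEKLY(7, 1),
    MONTHLY(30, 2);

    private final int windowDays;
    // Extra day(s) so a failed batch run can still serve yesterday's key.
    private final int bufferDays;

    RankingWindow(int windowDays, int bufferDays) {
        this.windowDays = windowDays;
        this.bufferDays = bufferDays;
    }

    Duration ttl() {
        return Duration.ofDays(windowDays + bufferDays);
    }
}
```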
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeekly.java`:
- Around line 21-50: Implement identity and input validation: add equals() and
hashCode() on MvProductRankWeekly to enforce entity identity by the composite
key (productId, calculatedDate) so upsert semantics are consistent (if
BaseTimeEntity does not already enforce id-based equality). Also add validation
inside the static factory method MvProductRankWeekly.of(...) to reject null
productId/calculatedDate and any negative viewCount, likesCount, or salesCount
by throwing IllegalArgumentException with clear messages. Ensure you reference
the class MvProductRankWeekly, the of(...) factory, and the fields productId,
calculatedDate, viewCount, likesCount, salesCount when making these changes.
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyRepository.java`:
- Around line 14-16: The `@Modifying` annotation on the repository bulk-delete
method should explicitly clear the persistence context to avoid stale
first-level cache issues; update the annotation on
MvProductRankWeeklyRepository.deleteByCalculatedDate to
`@Modifying`(clearAutomatically = true) so the EntityManager is cleared after the
bulk DELETE operation, ensuring subsequent reads/inserts for the same entity
type in later steps see a fresh state.
In `@infrastructure/redis/src/testFixtures/resources/docker-java.properties`:
- Line 1: The hardcoded Docker API version property (api.version=1.44) in the
test fixture should be annotated with a comment that explains why this specific
API version is required (e.g., to work around a specific Testcontainers/Docker
issue), update the repo README to state the minimum Docker Engine version (or
API version) required by the team, and add/advise a CI step that logs the
Docker/engine version before tests run to aid troubleshooting; locate the
api.version entry in the docker-java.properties fixture and add the explanatory
comment, then update documentation and CI config accordingly.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java`:
- Around line 99-123: The reader monthlyRankingReader currently loads all
ProductMetricsDaily via
productMetricsDailyRepository.findByDateBetween(startDate, endDate) into memory,
risking OOM; change to a streaming/paginated approach: replace the single-list
load with a cursor or paginated query (e.g.,
JpaCursorItemReader/JdbcCursorItemReader or repository pagination) that yields
rows ordered by productId and date, then perform per-product accumulation into
ProductMonthlyAggregate inside the reader/ItemStream as rows stream in
(respecting WINDOW_DAYS), or alternatively query aggregated SUMs grouped by
productId and page through results; ensure the iterator-based return keeps only
one product's dailyMetrics in memory at a time and honor chunk commit-intervals
to bound reader memory.
- Around line 155-170: monthlyRankingWriter currently calls
mvProductRankMonthlyRepository.saveAll inside the chunk transaction and then
loops calling productRankingRepository.incrementScore (with
RankingDateKey.of(baseDate)), which can cause DB/Redis asymmetry if some Redis
calls fail; fix by making Redis updates resilient and atomic: either (A) add
explicit timeouts and bounded retry logic inside
productRankingRepository.incrementScore (or a new method
incrementScoresWithRetries) and record failed productIds to an
idempotency/retry-exclude store so retries don’t double-count, or (B) implement
a chunk-level batch Redis operation (e.g.,
productRankingRepository.incrementScoresBatch(dateKey, List<ProductScore>) that
uses MULTI/pipeline with ZADD-overwrite semantics) and call that instead of
per-item incrementScore so the whole chunk succeeds or fails together; also add
a unit/integration test that simulates Redis timeout/connection failure to
assert the Step fails (i.e., exception propagates) and verify the chosen
retry/idempotency behavior.
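The bounded-memory reading idea above can be illustrated standalone: when rows arrive ordered by productId, a single pass folds each product's rows as they stream in, so only one running total is held at a time. The types here are simplified stand-ins, not the project's entities.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of ordered-stream grouping with bounded reader memory.
final class StreamingGrouper {
    record Row(long productId, long views) {}
    record Group(long productId, long totalViews) {}

    // Consumes the row iterator once; at most one product's running total
    // is in memory at any moment (the finished groups list is the output).
    static List<Group> groupOrdered(Iterator<Row> rows) {
        List<Group> out = new ArrayList<>();
        long currentId = -1;
        long running = 0;
        boolean open = false;
        while (rows.hasNext()) {
            Row r = rows.next();
            if (open && r.productId() != currentId) {
                out.add(new Group(currentId, running)); // flush completed product
                running = 0;
            }
            currentId = r.productId();
            running += r.views();
            open = true;
        }
        if (open) {
            out.add(new Group(currentId, running));
        }
        return out;
    }
}
```

In Spring Batch terms, the iterator would be backed by a cursor reader ordered by productId, and each flushed group becomes one item handed to the processor.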
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java`:
- Around line 108-113: The code calls computeIfAbsent, discards its return value, and then calls Map.get again, duplicating the hash lookup; in the aggregateMap processing loop inside WeeklyRankingJobConfig, use the value computeIfAbsent returns: in the for loop over the ProductMetricsDaily list, capture the result of aggregateMap.computeIfAbsent(daily.getProductId(), id -> new ProductWeeklyAggregate(id, new ArrayList<>())) in a variable (e.g. ProductWeeklyAggregate agg) and call agg.dailyMetrics().add(daily), which reduces the hash lookups to one and improves readability.

- Around line 42-95: WeeklyRankingJobConfig duplicates the same
Job/Step/Reader/Processor/Writer skeleton that Monthly will use differing only
by window length, MV repository and RankingType; extract that common structure
into an abstract template (e.g., RollingRankingJobTemplate) and make
WeeklyRankingJobConfig a thin configurator that injects the window length
(WINDOW_DAYS), the MV repository (mvProductRankWeeklyRepository), and the
RankingType. Refactor weeklyRankingAggregateStep, weeklyRankingCleanupStep,
weeklyRankingReader/Processor/Writer logic into parameterizable template methods
or strategy interfaces so shared behaviors (chunk config, listeners, retry/idem
handling, reader grouping and writer Redis update) live in the template and only
the specific parameters and repository beans are provided by
WeeklyRankingJobConfig. Ensure existing bean names (JOB_NAME, CLEANUP_STEP_NAME,
AGGREGATE_STEP_NAME) are preserved by the concrete configs and that
weeklyRankingCleanupTasklet still delegates deleteByCalculatedDate(baseDate) to
the injected MV repository.
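The single-lookup computeIfAbsent pattern recommended above (the line 108-113 finding) can be shown standalone; ProductWeeklyAggregate is modeled here as a plain record for the sketch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Standalone illustration of grouping rows with a single map lookup per row.
final class GroupingDemo {
    record Metric(long productId, long views) {}
    record Aggregate(long productId, List<Metric> dailyMetrics) {}

    static Map<Long, Aggregate> group(List<Metric> metrics) {
        Map<Long, Aggregate> byProduct = new HashMap<>();
        for (Metric m : metrics) {
            // computeIfAbsent returns the mapped value, so no second get() is needed.
            Aggregate agg = byProduct.computeIfAbsent(
                    m.productId(), id -> new Aggregate(id, new ArrayList<>()));
            agg.dailyMetrics().add(m);
        }
        return byProduct;
    }
}
```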
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: b824528c-c3d0-4666-8fe0-cbeddf87debd
📒 Files selected for processing (18)

- application/commerce-service/src/main/java/com/loopers/application/service/RankingService.java
- domain/src/main/java/com/loopers/domain/ranking/DailyMetricSnapshot.java
- domain/src/main/java/com/loopers/domain/ranking/RankingDateKey.java
- domain/src/main/java/com/loopers/domain/ranking/RankingScore.java
- domain/src/main/java/com/loopers/domain/ranking/RankingType.java
- domain/src/test/java/com/loopers/domain/ranking/RankingScoreTest.java
- infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDailyRepository.java
- infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthly.java
- infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthlyRepository.java
- infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeekly.java
- infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyRepository.java
- infrastructure/jpa/src/testFixtures/resources/docker-java.properties
- infrastructure/redis/src/testFixtures/resources/docker-java.properties
- presentation/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingController.java
- presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java
- presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java
- presentation/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java
- presentation/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java
```java
@Test
void 주간_점수는_여러_날의_감쇠_점수를_합산한다() {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);
    var day1 = new DailyMetricSnapshot(today.minusDays(1), 100, 10, 5);
    var day2 = new DailyMetricSnapshot(today.minusDays(2), 200, 20, 10);

    double score1 = RankingScore.calculateWithDecay(List.of(day1), today);
    double score2 = RankingScore.calculateWithDecay(List.of(day2), today);

    // when
    double combined = RankingScore.calculateWithDecay(List.of(day1, day2), today);

    // then
    assertThat(combined).isEqualTo(score1 + score2);
}
```
Exact floating-point equality here risks flakiness; switch to an offset comparison.

If calculateWithDecay internally combines Math.pow(0.85, daysAgo) with log10, combined and score1 + score2 may not match bit-for-bit depending on summation order and optimizations. Operationally, intermittent CI failures erode confidence in batch/domain regression detection. Recommend switching to isCloseTo(expected, within(1e-9)).

Suggested fix

```diff
-import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.assertThat;
+import static org.assertj.core.api.Assertions.within;
@@
-        assertThat(combined).isEqualTo(score1 + score2);
+        assertThat(combined).isCloseTo(score1 + score2, within(1e-9));
```

Additional test suggestions: add explicit tests for (1) whether a decay factor of 1.0 is applied when daysAgo == 0 (today itself), (2) the expected behavior when dates older than the window are mixed in (excluded vs. included), and (3) the policy when date is later than today, to surface batch boundary bugs early.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@domain/src/test/java/com/loopers/domain/ranking/RankingScoreTest.java` around
lines 87 - 102, Change the equality assertion in the test
'주간_점수는_여러_날의_감쇠_점수를_합산한다' to use an offset comparison to avoid flaky
floating-point equality: replace the exact equals check on 'combined' vs 'score1
+ score2' with an assertion that they are close within a small tolerance (e.g.
isCloseTo(..., within(1e-9))). Keep the rest of the setup that calls
RankingScore.calculateWithDecay(List<DailyMetricSnapshot>, today) for 'day1' and
'day2' unchanged.
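The tolerance-based comparison reduces to a plain absolute-difference check, shown here without AssertJ; the helper name is illustrative.

```java
// Plain-Java illustration of why exact equality on summed doubles is fragile
// and why a tolerance check is the safer assertion. Values are arbitrary.
final class ToleranceDemo {
    static boolean closeTo(double actual, double expected, double tolerance) {
        return Math.abs(actual - expected) <= tolerance;
    }
}
```

The classic 0.1 + 0.2 case already demonstrates the failure mode: the exact sum differs from 0.3 in the last bit, while the tolerance check passes.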
```java
List<ProductMetricsDaily> findByDate(LocalDate date);

List<ProductMetricsDaily> findByDateBetween(LocalDate startDate, LocalDate endDate);
```
🧩 Analysis chain

🏁 Scripts executed (repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java):

```shell
#!/bin/bash
rg -nP -C3 'findByDateBetween\s*\('

# Verify the exact startDate and endDate calculations in the callers
cat -n presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java | sed -n '100,115p'

# Check MonthlyRankingJobConfig for the same pattern
cat -n presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java | sed -n '100,115p'

# Find the ProductMetricsDaily entity definition to check for indexing
fd -t f 'ProductMetricsDaily\.java' -x cat -n {}

# Check if there are any index annotations or SQL @Index definitions
rg -n '@Index|@Column.*index' infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDaily.java

# Check for any streaming or pagination-based approaches in batch jobs
rg -n 'Stream|Page|Pageable' presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/

# Find WINDOW_DAYS constant values
rg -n 'WINDOW_DAYS\s*=' presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/

# Check for database migration scripts/schema definitions
fd -t f -e sql -e yml -e yaml | grep -i migration

# Search for any index definitions in migration or schema files
rg -n 'CREATE.*INDEX.*product_metrics_daily|product_metrics_daily.*INDEX' --type sql --type yaml

# Check the complete ProductMetricsDailyRepository interface
cat -n infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDailyRepository.java
```
Review findByDateBetween performance and indexing.

From an operational standpoint, two improvements are needed.

1. Add an index on the date column: the ProductMetricsDaily entity has no explicit index on date, so every weekly/monthly batch BETWEEN range query triggers a full table scan. Recommend `@Index(name = "idx_product_metrics_daily_date", columnList = "date")` or defining the index in a database migration.
2. Improve memory efficiency: loading 30 days of monthly data at once as a `List<ProductMetricsDaily>` can cause batch JVM heap spikes when the product count is large. Recommend considering a later move to `Stream<ProductMetricsDaily>` or pagination-based reads.

Additional test suggestion: include boundary values (startDate == endDate, startDate > endDate) and the empty-result case in a repository slice test.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDailyRepository.java`
at line 15, ProductMetricsDailyRepository's findByDateBetween method causes full table scans on range queries because the date column has no index; add `@Index`(name = "idx_product_metrics_daily_date", columnList = "date") to the ProductMetricsDaily entity or create the index in a DB migration. To prevent memory issues from large result sets, refactor the current List<ProductMetricsDaily> return type to Stream<ProductMetricsDaily> or a Spring Data Pageable-based paging API (e.g., Stream<ProductMetricsDaily> findByDateBetween(...) or Page<ProductMetricsDaily> findByDateBetween(..., Pageable)), and add boundary values (startDate == endDate, startDate > endDate) and the empty-result case to the repository slice tests for verification.
```java
@Table(name = "mv_product_rank_monthly",
        uniqueConstraints = @UniqueConstraint(columnNames = {"productId", "calculatedDate"}),
        indexes = @Index(name = "idx_monthly_calculated_date", columnList = "calculatedDate"))
```
🧩 Analysis chain

🏁 Scripts executed (repository: Loopers-dev-lab/loop-pack-be-l2-vol3-java):

```shell
#!/bin/bash
# 1) Check the mv_product_rank_monthly definition and column names in DB migration/schema files
fd -e sql -e xml -e yaml -e yml | xargs rg -nP -C2 'mv_product_rank_(weekly|monthly)' 2>/dev/null

# 2) Check the naming-strategy configuration (physical-strategy / implicit-strategy)
rg -nP 'physical-strategy|implicit-strategy|PhysicalNamingStrategy|ImplicitNamingStrategy' -g '*.yml' -g '*.yaml' -g '*.properties' -g '*.java'

# 3) Compare how uniqueConstraints/indexes column names are used in existing MV or similar entities
ast-grep --pattern '@Table(name = $_, uniqueConstraints = $$$)'
rg -nP '@UniqueConstraint\(columnNames' -g '*.java' -C1
rg -nP '@Index\([^)]*columnList' -g '*.java' -C1

# 4) Inspect the entity field definitions and @Column annotations
cat -n infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthly.java
cat -n infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeekly.java

# 5) Search migration files for the mv_product_rank_monthly table definition
find . -type f \( -name "*.sql" -o -name "*.xml" \) | xargs grep -l "mv_product_rank_monthly" 2>/dev/null | head -5
find . -type f -name "*.sql" | xargs grep -A 10 "CREATE.*mv_product_rank_monthly" 2>/dev/null

# 6) Locate migration/schema files and check whether ProductMetricsHourly/Daily follow the same pattern
find . -type f \( -name "*.sql" -o -name "*migration*" -o -name "*schema*" \) | head -20
find . -type d -name "*migration*" -o -name "*flyway*" -o -name "*liquibase*" 2>/dev/null | head -10
sed -n '15,40p' infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsHourly.java
sed -n '15,40p' infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDaily.java

# 7) Check constraint and index definitions under docs/sql
cat docs/sql/constraint.sql
cat docs/sql/index.sql
grep -A 5 -B 5 "mv_product_rank_monthly" docs/sql/constraint.sql docs/sql/index.sql 2>/dev/null

# 8) Check seed data and the full entity sources
cat docs/sql/seed.sql
cat infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsHourly.java
cat infrastructure/jpa/src/main/java/com/loopers/infrastructure/metrics/ProductMetricsDaily.java

# 9) Check the application config for naming strategy / ddl-auto settings
find . -type f \( -name "application*.yml" -o -name "application*.yaml" -o -name "application*.properties" \)
rg -nP 'physical-naming-strategy|implicit-naming-strategy|hibernate.*naming|ddl-auto' -g '*.yml' -g '*.yaml' -g '*.properties'
cat infrastructure/jpa/src/main/resources/jpa.yml
```
The @UniqueConstraint/indexes column names do not match the actual DB column names.

Spring Boot's default naming strategy (SpringPhysicalNamingStrategy) automatically converts camelCase fields to snake_case columns. This entity's productId and calculatedDate fields are created as product_id and calculated_date in the actual table, but the @UniqueConstraint and @Index annotations specify camelCase directly. This creates a mismatch between the JPA metadata and the real schema, so the unique constraint and index declarations never take effect.

The ProductMetricsHourly and ProductMetricsDaily entities follow the same pattern and need the same fix.

How to fix

```diff
 @Entity
 @Table(name = "mv_product_rank_monthly",
-        uniqueConstraints = @UniqueConstraint(columnNames = {"productId", "calculatedDate"}),
-        indexes = @Index(name = "idx_monthly_calculated_date", columnList = "calculatedDate"))
+        uniqueConstraints = @UniqueConstraint(columnNames = {"product_id", "calculated_date"}),
+        indexes = @Index(name = "idx_monthly_calculated_date", columnList = "calculated_date"))
```

Alternatively, pin the column names by adding @Column(name = "snake_case") to every field.

The same problem exists in ProductMetricsHourly (productId, hour) and ProductMetricsDaily (productId, date). Recommend adding an integration test that verifies a DataIntegrityViolationException is thrown when a duplicate key is inserted.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@infrastructure/jpa/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthly.java`
around lines 16 - 18, Update the JPA annotations so DB column names match the
physical naming strategy: in MvProductRankMonthly change the `@UniqueConstraint`
and `@Index` columnNames/columnList from "productId"/"calculatedDate" to the
actual snake_case names "product_id" and "calculated_date" (or alternatively add
`@Column(name = "...")` on the entity fields productId and calculatedDate to fix
the mapping); make the same fixes in ProductMetricsHourly (productId, hour ->
"product_id", "hour") and ProductMetricsDaily (productId, date -> "product_id",
"date") by aligning `@UniqueConstraint`/`@Index` or adding `@Column(name=...)` on
those fields; finally add an integration test that inserts duplicate keys for
each entity and asserts a DataIntegrityViolationException is thrown to validate
the unique constraints.
```java
public org.springframework.batch.core.step.tasklet.Tasklet monthlyRankingCleanupTasklet(
        @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {
    return (contribution, chunkContext) -> {
        mvProductRankMonthlyRepository.deleteByCalculatedDate(baseDate);
        return RepeatStatus.FINISHED;
    };
}
```
On re-run, the Redis ZSET is never cleaned up, so scores accumulate and distort the ranking.

The cleanup step deletes only the rows for the given baseDate in mv_product_rank_monthly and leaves the Redis ZSET (key RankingDateKey.of(baseDate)) untouched. If the batch is re-run for the same baseDate, the writer's incrementScore adds on top of the existing ZSET, distorting the top-ranked scores to a multiple of their real values. The PR description mentions this as a "limitation of the idempotent score update approach", but failure recovery and manual re-runs are unavoidable in operations, so at least one of the following is needed:

- In the cleanup step, also delete the RankingDateKey.of(baseDate) ZSET key with DEL.
- In the writer, use a setScore with ZADD (overwrite) semantics.
- Include an execution discriminator other than run.id in the job parameters and add a guard that blocks re-runs for the same baseDate.

🛠️ Example of deleting the Redis key in the cleanup step

```diff
-    public org.springframework.batch.core.step.tasklet.Tasklet monthlyRankingCleanupTasklet(
-            @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {
-        return (contribution, chunkContext) -> {
-            mvProductRankMonthlyRepository.deleteByCalculatedDate(baseDate);
-            return RepeatStatus.FINISHED;
-        };
-    }
+    public org.springframework.batch.core.step.tasklet.Tasklet monthlyRankingCleanupTasklet(
+            @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {
+        return (contribution, chunkContext) -> {
+            mvProductRankMonthlyRepository.deleteByCalculatedDate(baseDate);
+            productRankingRepository.deleteByDateKey(
+                    RankingDateKey.of(baseDate), RankingType.MONTHLY);
+            return RepeatStatus.FINISHED;
+        };
+    }
```

Additional test recommendation:

- Run the job twice with the same baseDate and verify the Redis ZSET scores equal the single-run result (the current E2E only verifies the MV rowCount).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java`
around lines 77 - 83, The cleanup Tasklet (monthlyRankingCleanupTasklet) only
calls mvProductRankMonthlyRepository.deleteByCalculatedDate(baseDate) but does
not remove the Redis ZSET, causing duplicate score accumulation on re-run;
modify monthlyRankingCleanupTasklet to also delete the Redis key
RankingDateKey.of(baseDate) (use the existing Redis client/bean such as
StringRedisTemplate/RedisTemplate or RedisService) before returning, e.g. call
redisTemplate.delete(RankingDateKey.of(baseDate).getKey()) (ensure the Redis
bean is injected into the config), so the ZSET is removed together with the DB
rows and re-runs are idempotent.
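Why overwrite semantics make the re-run safe can be demonstrated with a map-backed stand-in for a Redis ZSET; the two method names mirror the review's terminology, not an actual Redis client API.

```java
import java.util.HashMap;
import java.util.Map;

// Map-backed stand-in for a Redis ZSET, contrasting increment (ZINCRBY-style)
// with overwrite (ZADD-style) semantics under a batch re-run.
final class FakeZSet {
    private final Map<Long, Double> scores = new HashMap<>();

    void incrementScore(long productId, double score) {
        scores.merge(productId, score, Double::sum); // accumulates on re-run
    }

    void setScore(long productId, double score) {
        scores.put(productId, score);                // idempotent on re-run
    }

    double score(long productId) {
        return scores.getOrDefault(productId, 0.0);
    }
}
```

Since the batch already computes the full-window score as a single value per product, the overwrite variant loses nothing and makes "same baseDate re-run = same result" hold by construction.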
```java
public Step weeklyRankingCleanupStep() {
    return new StepBuilder(CLEANUP_STEP_NAME, jobRepository)
            .tasklet(weeklyRankingCleanupTasklet(null), transactionManager)
            .build();
}

@StepScope
@Bean
public org.springframework.batch.core.step.tasklet.Tasklet weeklyRankingCleanupTasklet(
        @Value("#{jobParameters['baseDate']}") LocalDate baseDate) {
    return (contribution, chunkContext) -> {
        mvProductRankWeeklyRepository.deleteByCalculatedDate(baseDate);
        return RepeatStatus.FINISHED;
    };
}
```
On re-run, Redis ZSET scores accumulate and distort the ranking.

The cleanup step deletes only the rows of the mv_product_rank_weekly table and leaves the Redis ZSET as is. The writer adds scores with incrementScore (ZINCRBY-family), so when the job is re-run or retried for the same baseDate, scores are added on top of the previous run and the result is polluted by a factor of 2x/3x. The MV is recomputed and stays consistent, but Redis does not self-heal.

Operationally, batch re-run scenarios (failure → re-run, manual backfill) are unavoidable, so "re-running the same baseDate = same result" must be guaranteed.

Fix (pick one):

- In the cleanup step, reset the ZSET for dateKey = RankingDateKey.of(baseDate) + RankingType.WEEKLY with DEL (or ZREM of all members) before the aggregate step runs. Bundle the MV delete and the Redis delete in the same tasklet.
- Or add an idempotent API to productRankingRepository so the writer records the final score with an absolute "set" operation (plain ZADD overwrite, no XX/NX options) instead of accumulation. The batch output is already the full-window sum, so accumulation is unnecessary.

Additional tests:

- Add an E2E test that runs the job twice with the same baseDate and verifies the Redis ZSET scores are identical to a single run (idempotency).
- Also assert that the MV row count matches the single-run result and that the Redis ZSET member count/score sum match expectations.

🛠️ Example of strengthening the cleanup step

```diff
 return (contribution, chunkContext) -> {
     mvProductRankWeeklyRepository.deleteByCalculatedDate(baseDate);
+    productRankingRepository.deleteRanking(
+            RankingDateKey.of(baseDate), RankingType.WEEKLY);
     return RepeatStatus.FINISHED;
 };
```

Also applies to: 158-169
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java`
around lines 69 - 83, The cleanup tasklet (weeklyRankingCleanupTasklet)
currently only deletes MV rows
(mvProductRankWeeklyRepository.deleteByCalculatedDate) but leaves the Redis
ZSET, causing score accumulation on re-runs; update weeklyRankingCleanupTasklet
(and the Step weeklyRankingCleanupStep that references it) to also remove the
Redis ZSET for the given baseDate/RankingType.WEEKLY before aggregation
runs—compute the dateKey (RankingDateKey.of(baseDate) with RankingType.WEEKLY)
and call the Redis deletion API (e.g., via productRankingRepository or
redisTemplate.del / ZREM all members) in the same tasklet so MV and Redis are
cleared atomically for that baseDate; alternatively, if you prefer the writer
approach, add an idempotent API on productRankingRepository that performs
absolute ZADD (overwrite) instead of incrementScore and switch the writer to use
it, and add an E2E test asserting two runs with the same baseDate produce
identical MV rows and Redis ZSET members/scores.
```java
return items -> {
    List<MvProductRankWeekly> mvEntities = items.getItems().stream()
            .map(WeeklyRankingResult::mvEntity)
            .toList();
    mvProductRankWeeklyRepository.saveAll(mvEntities);

    String dateKey = RankingDateKey.of(baseDate);
    for (WeeklyRankingResult result : items) {
        productRankingRepository.incrementScore(
                result.productId(), result.score(), dateKey, RankingType.WEEKLY);
    }
};
```
Mixing the DB commit and Redis increments inside one transaction breaks consistency on partial failure.

The writer runs saveAll and then incrementScore inside the chunk transaction. Redis is not covered by the transaction manager's rollback, so data diverges in these scenarios:

- An exception during incrementScore → the Redis increments already applied for earlier products remain, while the DB rolls back and the MV rows disappear. On retry, those earlier products receive the increment again and are double-counted.
- A failure just before the DB commit → Redis is partially incremented while the MV is rolled back.

Fix:

- Apply Redis updates with an idempotent setScore (ZADD overwrite) instead of the accumulating incrementScore. The final score for a given baseDate is a single batch-computed value, so accumulation is meaningless.
- Redesign the write order so Redis is updated only after the DB saveAll succeeds (= after the chunk transaction commits). For example, read the MV table in a separate step and push to Redis, or apply the update in ChunkListener#afterChunk / StepExecutionListener#afterStep.
- At minimum, separate out retry/compensation logging for Redis-side failures inside the writer (no sensitive data, preserve the cause).

Additional test:

- Mock a single Redis failure in the writer, then verify that after a re-run the MV count and Redis scores match the expected values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java`
around lines 158 - 169, The writer in WeeklyRankingJobConfig mixes DB saveAll
(mvProductRankWeeklyRepository.saveAll) and Redis increments
(productRankingRepository.incrementScore), causing cross-system inconsistency on
partial failures; change the Redis update to an idempotent overwrite API (e.g.,
productRankingRepository.setScore) and move Redis writes out of the chunk
transaction so they occur only after DB commit — implement Redis application in
a separate Step that reads MV rows (using RankingDateKey.of(baseDate) and
WeeklyRankingResult.mvEntity mapping) or in a ChunkListener#afterChunk /
StepExecutionListener#afterStep callback; at minimum decouple failure handling
by adding retries and clear error logging (preserve cause) around Redis calls
instead of performing increments inside the transactional writer.
```java
@Test
void 월간_랭킹_배치가_정상_실행된다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);

    ProductMetricsDaily daily = ProductMetricsDaily.init(1L, today.minusDays(10));
    incrementViews(daily, 100);
    productMetricsDailyRepository.save(daily);

    // when
    var jobExecution = runJob(today);

    // then
    assertThat(jobExecution.getExitStatus().getExitCode())
            .isEqualTo(ExitStatus.COMPLETED.getExitCode());
}
```
The given data in the "runs successfully" test is unrelated to the assertion, so the test's intent is weak.

It seeds 100 views from 10 days ago but only asserts ExitStatus.COMPLETED. The test would also pass on an empty window with no data, weakening regression detection. Recommend additionally verifying that one MV record was created plus at least one core field, or renaming the test to an empty-input scenario to make the intent explicit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java`
around lines 62 - 77, The test method 월간_랭킹_배치가_정상_실행된다() seeds a
ProductMetricsDaily via ProductMetricsDaily.init(...) and incrementViews(...)
but only asserts the job exit status from runJob(today), making the test weak;
update the test to also verify that the expected MV record(s) were created and
contain expected values (e.g. assert productMetricsMonthlyRepository/findBy...
contains one record for the product id created from ProductMetricsDaily and that
a core field like views or rank equals the expected value), or if the intent is
to test an empty-input run, rename the test to reflect the empty scenario and
remove the seeded ProductMetricsDaily; ensure assertions reference
ProductMetricsDaily, productMetricsDailyRepository, runJob(...) and the monthly
projection repository used by the job (e.g., productMetricsMonthlyRepository) so
the test fails on regressions.
```java
@Test
void 월간_랭킹_배치가_30일치_카운트를_합산한다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);

    ProductMetricsDaily day1 = ProductMetricsDaily.init(1L, today.minusDays(5));
    incrementViews(day1, 100);
    incrementSales(day1, 10);

    ProductMetricsDaily day2 = ProductMetricsDaily.init(1L, today.minusDays(25));
    incrementViews(day2, 200);
    incrementSales(day2, 20);

    productMetricsDailyRepository.saveAll(List.of(day1, day2));

    // when
    runJob(today);

    // then
    List<MvProductRankMonthly> results = mvProductRankMonthlyRepository.findByCalculatedDate(today);
    assertThat(results.get(0).getViewCount()).isEqualTo(300);
}
```
Window boundary values and out-of-window exclusion cases are missing, so regression risk is high.

The current test only covers two points, 5 and 25 days ago. The reader's window is startDate = baseDate.minusDays(30), endDate = baseDate.minusDays(1) via findByDateBetween (both ends inclusive), so the following boundaries can easily break on regression:

- Inclusive boundaries: minusDays(1), minusDays(30)
- Exclusive boundaries: minusDays(0) (= baseDate), minusDays(31)

Recommend adding tests that verify these four cases are summed/excluded as expected. Also, the batch's final output, the Redis ZSET score, is not verified at all (only the MV counts are); tests checking the decay score and accumulation on re-run are also needed to catch real regressions in MonthlyRankingJobConfig.monthlyRankingWriter.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java`
around lines 79 - 100, Extend the existing test 월간_랭킹_배치가_30일치_카운트를_합산한다 to add
four boundary cases and a Redis ZSET verification: create ProductMetricsDaily
entries for baseDate.minusDays(1) and baseDate.minusDays(30) (should be
included), and for baseDate (minusDays(0)) and baseDate.minusDays(31) (should be
excluded); save them via productMetricsDailyRepository, call runJob(today) and
assert mvProductRankMonthlyRepository.findByCalculatedDate(today) sums only the
included records; additionally verify the batch writer's final effect by
checking the Redis ZSET score(s) produced by monthlyRankingWriter (or the Redis
key used by the job) to ensure decay/score logic and that re-running runJob does
not double-count.
```java
@Test
void 주간_랭킹_배치가_MV_테이블에_집계_결과를_저장한다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);

    ProductMetricsDaily day1 = ProductMetricsDaily.init(1L, today.minusDays(1));
    incrementViews(day1, 100);

    ProductMetricsDaily day2 = ProductMetricsDaily.init(1L, today.minusDays(3));
    incrementViews(day2, 50);

    productMetricsDailyRepository.saveAll(List.of(day1, day2));

    // when
    runJob(today);

    // then
    List<MvProductRankWeekly> results = mvProductRankWeeklyRepository.findByCalculatedDate(today);
    assertThat(results).hasSize(1);
}

@Test
void 주간_랭킹_배치가_상품별_카운트를_합산한다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);

    ProductMetricsDaily day1 = ProductMetricsDaily.init(1L, today.minusDays(1));
    incrementViews(day1, 100);

    ProductMetricsDaily day2 = ProductMetricsDaily.init(1L, today.minusDays(3));
    incrementViews(day2, 50);

    productMetricsDailyRepository.saveAll(List.of(day1, day2));

    // when
    runJob(today);

    // then
    List<MvProductRankWeekly> results = mvProductRankWeeklyRepository.findByCalculatedDate(today);
    assertThat(results.get(0).getViewCount()).isEqualTo(150);
}
```
There is no weekly window-boundary test, so off-by-one regressions go undetected.

The reader's window is baseDate.minusDays(7) to baseDate.minusDays(1) (both ends inclusive). The current seeds only use minusDays(1)/minusDays(3), so the following regressions could slip through:

- `minusDays(7)` inclusion
- `minusDays(0)` (= baseDate) exclusion
- `minusDays(8)` exclusion

We recommend adding tests for these three boundary cases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java`
around lines 79 - 119, Update WeeklyRankingJobE2ETest to add boundary checks for
the weekly window: create/seed ProductMetricsDaily rows at baseDate.minusDays(7)
(should be included), baseDate (minusDays(0), should be excluded), and
baseDate.minusDays(8) (should be excluded), then run runJob(today) and assert
that the minusDays(7) record contributes to the aggregated MvProductRankWeekly
while the minusDays(0) and minusDays(8) records do not; place these assertions
in a new test method (or extend an existing one) to specifically validate
inclusion/exclusion at the window boundaries.
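The per-product count summation these weekly tests exercise can be reproduced with plain streams. A minimal sketch — `DailyMetric` is a hypothetical stand-in for `ProductMetricsDaily`, reduced to the two fields that matter here:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class WeeklyAggregationExample {
    // Hypothetical stand-in for ProductMetricsDaily (productId + view count only).
    public record DailyMetric(long productId, long viewCount) {}

    // Mirrors the Reader/Processor idea: group the window's rows by productId, sum counts.
    public static Map<Long, Long> sumViewsByProduct(List<DailyMetric> rows) {
        return rows.stream().collect(Collectors.groupingBy(
                DailyMetric::productId,
                Collectors.summingLong(DailyMetric::viewCount)));
    }

    public static void main(String[] args) {
        // Same seed as the tests above: 100 views + 50 views for product 1
        Map<Long, Long> sums = sumViewsByProduct(List.of(
                new DailyMetric(1L, 100), new DailyMetric(1L, 50)));
        System.out.println(sums.get(1L)); // 150
    }
}
```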
```java
@Test
void 재실행시_기존_데이터를_삭제하고_새로_생성한다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);

    ProductMetricsDaily daily = ProductMetricsDaily.init(1L, today.minusDays(1));
    incrementViews(daily, 100);
    productMetricsDailyRepository.save(daily);

    runJob(today);

    // when
    runJob(today);

    // then
    List<MvProductRankWeekly> results = mvProductRankWeeklyRepository.findByCalculatedDate(today);
    assertThat(results).hasSize(1);
}
```
The re-run check only verifies the MV row count, missing the key regression (Redis score accumulation).

The test currently asserts only that the MV has one row after a re-run. The real operational risk, however, is duplicate Redis ZSET score accumulation (the same structural issue raised for MonthlyRankingJobConfig). To catch idempotency regressions in the Writer/Cleanup early, it should also verify that after two runs, productRankingRepository.getScore(productId, dateKey, RankingType.WEEKLY) equals the single-run result.

♻️ Suggested strengthening

```diff
 // then
 List<MvProductRankWeekly> results = mvProductRankWeeklyRepository.findByCalculatedDate(today);
 assertThat(results).hasSize(1);
+Double scoreAfterTwoRuns =
+    productRankingRepository.getScore(1L, RankingDateKey.of(today), RankingType.WEEKLY);
+// must equal the single-run result (no accumulation)
+assertThat(scoreAfterTwoRuns).isEqualTo(expectedSingleRunScore);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
@Test
void 재실행시_기존_데이터를_삭제하고_새로_생성한다() throws Exception {
    // given
    LocalDate today = LocalDate.of(2026, 4, 16);
    ProductMetricsDaily daily = ProductMetricsDaily.init(1L, today.minusDays(1));
    incrementViews(daily, 100);
    productMetricsDailyRepository.save(daily);
    runJob(today);

    // when
    Double scoreAfterFirstRun =
        productRankingRepository.getScore(1L, RankingDateKey.of(today), RankingType.WEEKLY);
    runJob(today);
    Double scoreAfterSecondRun =
        productRankingRepository.getScore(1L, RankingDateKey.of(today), RankingType.WEEKLY);

    // then
    List<MvProductRankWeekly> results = mvProductRankWeeklyRepository.findByCalculatedDate(today);
    assertThat(results).hasSize(1);
    // Idempotency check: the Redis score must not accumulate across re-runs
    assertThat(scoreAfterSecondRun).isEqualTo(scoreAfterFirstRun);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@presentation/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java`
around lines 163 - 180, The test 재실행시_기존_데이터를_삭제하고_새로_생성한다 currently only
asserts MV row count and must also assert Redis ZSET score idempotency: after
the first run capture the score via productRankingRepository.getScore(productId,
dateKey, RankingType.WEEKLY) (use the same productId from
ProductMetricsDaily.init, e.g., 1L, and the weekly date key derived from today),
then run the job a second time and assert the score is unchanged by calling
productRankingRepository.getScore(...) again; update the test method to call
runJob(today) once, read and save the score, call runJob(today) again, read the
score again, and assert equality to detect duplicate score accumulation.
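Why the cleanup step matters on the Redis side as well: ZINCRBY-style increments accumulate across re-runs unless something resets the key first. A minimal in-memory stand-in (a `HashMap` playing the role of the ZSET — not the PR's actual repository):

```java
import java.util.HashMap;
import java.util.Map;

public class RerunIdempotencyExample {
    // In-memory stand-in for a Redis ZSET keyed by productId.
    public static final Map<Long, Double> zset = new HashMap<>();

    // ZINCRBY semantics: add to the existing score.
    public static void incrementScore(long productId, double score) {
        zset.merge(productId, score, Double::sum);
    }

    public static void runJob(boolean cleanupFirst) {
        if (cleanupFirst) zset.clear(); // analogous to deleting the ranking key before rewriting
        incrementScore(1L, 85.0);      // one product's decayed score for this run
    }

    public static void main(String[] args) {
        runJob(false);
        runJob(false);                    // naive re-run: score doubles
        System.out.println(zset.get(1L)); // 170.0
        runJob(true);                     // cleanup-first re-run: back to a single-run score
        System.out.println(zset.get(1L)); // 85.0
    }
}
```

This doubled score is exactly the accumulation the suggested assertion above would catch.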
📌 Summary
- The previous `@Scheduled`-based scheduler could not track execution history, retry, or run with parameters.
- All ranking lookups are unified through `RankingService` → Redis ZSET.
- 10 batch tests + 3 domain tests pass.

🧭 Context & Decision
1. Rolling window vs fixed period

2. Dual storage: MV + Redis

- Handling the MV `saveAll` and the Redis `incrementScore` together keeps the two stores consistent.
- Merged into a single step because the separate version ran the `findByDateBetween()` query twice.

3. Chunk processing — Reader grouping + Processor decay

- The Processor allows dynamic handling such as the decay computation (`calculateWithDecay`); a formula change touches only the Processor.
- The Reader loads the whole window into memory (`findByDateBetween`), so an extreme number of products risks OOM. Fine at the current scale (active products × 7/30 days), but a large service would need a cursor/paging Reader.

4. Delete → recreate + JPQL bulk DELETE

- The derived `deleteByCalculatedDate` internally runs a SELECT followed by per-entity DELETEs, which collides with same-transaction INSERTs on the unique constraint.
- In detail: the derived `deleteByCalculatedDate` first loads the entities with a SELECT and then deletes them one by one. When the Writer's `saveAll` issues INSERTs in the same TX, JPA flush ordering can execute the INSERT before the DELETE, producing a `Duplicate entry` unique-constraint violation. A JPQL bulk DELETE removes the rows immediately with a single SQL statement.

🏗️ Design Overview
Batch Job flow

```mermaid
sequenceDiagram
    autonumber
    participant Scheduler
    participant DailyJob as DailyRankingJob
    participant WeeklyJob as WeeklyRankingJob
    participant DB as MySQL
    participant Redis
    Note over Scheduler: daily at 03:00
    Scheduler->>DailyJob: run
    DailyJob->>DB: read yesterday's ProductMetricsDaily
    DailyJob->>Redis: ranking:daily:{date} ZADD
    DailyJob-->>Scheduler: COMPLETED
    Scheduler->>WeeklyJob: run (only if Daily succeeded)
    Note over WeeklyJob: Step 1 — Cleanup
    WeeklyJob->>DB: DELETE mv_product_rank_weekly WHERE calculated_date = today
    Note over WeeklyJob: Step 2 — Chunk
    WeeklyJob->>DB: SELECT product_metrics_daily WHERE date BETWEEN (today-7, today-1)
    Note over WeeklyJob: Reader: group by productId
    Note over WeeklyJob: Processor: sum counts + compute decay score
    WeeklyJob->>DB: INSERT mv_product_rank_weekly (counts)
    WeeklyJob->>Redis: ranking:weekly:{date} ZINCRBY (decay score)
    WeeklyJob-->>Scheduler: COMPLETED
```

MonthlyRankingJob follows the same structure (WINDOW_DAYS=30, RankingType.MONTHLY).
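The Processor step in the flow above produces two outputs in one pass: raw count sums for the MV and a decayed score for Redis, using the 0.85^daysAgo decay from the PR description. A standalone sketch — `DailyRow` and `RankingItem` are hypothetical stand-ins for the actual types:

```java
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.List;

public class ProcessorSketch {
    static final double DECAY_RATE = 0.85; // per-day decay rate from the PR description

    // Hypothetical stand-ins for ProductMetricsDaily and the Processor output.
    public record DailyRow(long productId, LocalDate date, long viewCount) {}
    public record RankingItem(long productId, long totalViews, double score) {}

    // One product's window of daily rows -> MV counts + decayed Redis score.
    public static RankingItem process(long productId, List<DailyRow> rows, LocalDate baseDate) {
        long totalViews = rows.stream().mapToLong(DailyRow::viewCount).sum();
        double score = rows.stream()
                .mapToDouble(r -> {
                    long daysAgo = ChronoUnit.DAYS.between(r.date(), baseDate);
                    return r.viewCount() * Math.pow(DECAY_RATE, daysAgo);
                })
                .sum();
        return new RankingItem(productId, totalViews, score);
    }

    public static void main(String[] args) {
        LocalDate base = LocalDate.of(2026, 4, 16);
        RankingItem item = process(1L, List.of(
                new DailyRow(1L, base.minusDays(1), 100),
                new DailyRow(1L, base.minusDays(3), 50)), base);
        System.out.println(item.totalViews());  // raw sum for the MV: 150
        System.out.println(item.score() < 150); // decayed score for Redis is smaller
    }
}
```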
Exception/failure flow

Scope of changes

MV table structure
- No `score`/`rank` columns — keeps the scoring formula flexible. The score lives in Redis, and rank is derived from ZSET ordering.
- Index on `calculated_date` — a given date's ranking is fetched with an `=` comparison. The B-Tree index on the DATE type behaves correctly (verified on MySQL).

📊 Tests

Test ↔ production mapping

- `주간_랭킹_배치가_정상_실행된다`
- `주간_랭킹_배치가_MV_테이블에_집계_결과를_저장한다`
- `주간_랭킹_배치가_상품별_카운트를_합산한다`
- `주간_랭킹_배치가_좋아요_카운트를_합산한다`
- `주간_랭킹_배치가_판매_카운트를_합산한다`
- `재실행시_기존_데이터를_삭제하고_새로_생성한다`
- `월간_랭킹_*` (3 tests)
- `주간_점수는_최근_날짜에_높은_가중치를_적용한다`
- `주간_점수는_여러_날의_감쇠_점수를_합산한다`
- `빈_스냅샷이면_점수는_0이다`

Round 09 issues acknowledged
- Missing `@Transactional` (the 4-parameter `RankingService` method)

💬 Review points

1. Is the dual MV + Redis structure the right choice?

We store only counts in the MV and keep the scores separately in Redis: if the weighting formula changes, the MV stays untouched and only the batch is re-run. Since two stores mean more to manage, I'd like opinions on whether this beats consolidating into a single store.

2. Store all active products in the MV vs only the TOP 100

To keep the MV a pure aggregation table, we dropped score and stored counts only, so TOP 100 selection happens in Redis rather than in the batch. Which is the better fit: storing every active product in the MV, or only the TOP 100?

3. Other signals worth folding into the ranking score

We currently apply only per-day time decay. While researching, I learned that sales velocity (favoring products that surged over the last few days) can also be factored in. What other ranking-adjustment signals are common in emotional/lifestyle e-commerce?
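For discussion point 3, one way a sales-velocity boost is sometimes expressed — purely illustrative, not from the PR — is the ratio of the recent window's average daily sales to the preceding window's, clamped so it never penalizes and never explodes:

```java
public class VelocityBoostExample {
    // Illustrative only: multiply the base ranking score by this boost factor.
    public static double velocityBoost(double recentAvgDailySales, double priorAvgDailySales) {
        if (priorAvgDailySales <= 0) return 1.0;    // no history: neutral
        double ratio = recentAvgDailySales / priorAvgDailySales;
        return Math.min(Math.max(ratio, 1.0), 3.0); // floor 1x, cap 3x
    }

    public static void main(String[] args) {
        System.out.println(velocityBoost(30, 10)); // surging product, capped at 3.0
        System.out.println(velocityBoost(12, 10)); // mild growth: 1.2
        System.out.println(velocityBoost(5, 10));  // declining: neutral 1.0
    }
}
```

The floor and cap are arbitrary tuning choices; the point is that a velocity term composes multiplicatively with the existing decayed score.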