[volume-10] Spring Batch-based weekly/monthly ranking system implementation #400
madirony wants to merge 8 commits into Loopers-dev-lab:madirony from
Conversation
Adds Materialized View entities to store the weekly/monthly ranking batch aggregation results. A composite PK (product_id + ranking_week/ranking_month) guarantees UPSERT idempotency.
The Reader SQL handles everything from GROUP BY through score computation to ROW_NUMBER() ranking assignment. The Writer UPSERTs into the MV tables via NamedParameterJdbcTemplate.batchUpdate. Structured as the Chunk-Oriented pattern (Reader → Writer) with no Processor.
MvRankingAppService reads the DB MVs, and the Facade branches the data source by period. The Controller gains a period parameter switch (daily/weekly/monthly). Infrastructure adds a JPA Repository and Pageable-based pagination.
Weekly: 3 cases (aggregation / out-of-range exclusion / idempotency). Monthly: 4 cases (aggregation / out-of-range exclusion / idempotency / sum accuracy). Each test isolates its Job with a unique targetDate + run.id.
Weekly/monthly happy-path and empty-list lookups plus input validation (page/size/date/period/hour), 10 cases in total. MV data is loaded directly, then the API response is verified.
- QueueSchedulerTest: missing RedissonClient mock → NPE fixed
- ConcurrencyTest: added an async wait for @Async AFTER_COMMIT
- CommerceBatchApplicationTest: deleted a meaningless test that had no Job name
The Reader is now responsible only for GROUP BY + SUM aggregation and score-ordered TOP 100 filtering. The Processor (RankingScoreProcessor) computes the score and assigns the ranking. The Writer keeps the existing batchUpdate UPSERT.
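The revised Reader/Processor split described above can be sketched in plain Java. This is a minimal sketch, not the PR's actual code: the record field names and class shape are assumptions, only the record names (AggregatedMetricRow, RankingScoreRow) and the weights come from the PR text.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the Processor responsibility: score computation + ranking assignment.
// Field names are assumptions; record names and weights come from the PR description.
public class RankingScoreProcessor {
    public record AggregatedMetricRow(long productId, long views, long likes, long orderAmount) {}
    public record RankingScoreRow(long productId, double score, int ranking) {}

    // Rows arrive from the Reader already ordered by score DESC, so a simple
    // counter yields the 1-based ranking.
    private final AtomicInteger rank = new AtomicInteger(0);

    public RankingScoreRow process(AggregatedMetricRow row) {
        // Weights from the PR: views x 0.1 + likes x 0.2 + log1p(amount) x 0.7
        double score = row.views() * 0.1
                + row.likes() * 0.2
                + Math.log1p(row.orderAmount()) * 0.7;
        return new RankingScoreRow(row.productId(), score, rank.incrementAndGet());
    }
}
```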
📝 Walkthrough
Adds weekly and monthly top-ranking lookup. A batch job aggregates metrics and stores them in MV tables, and the API layer serves them per period.
Changes
Sequence Diagram(s)
sequenceDiagram
participant Client
participant Controller as RankingController
participant Facade as RankingFacade
participant MvService as MvRankingAppService
participant Repo as ProductRankMvRepository
participant ProductService as ProductAppService
Client->>Controller: GET /api/v1/rankings?period=weekly&date=...
activate Controller
Controller->>Controller: toYearWeek(date)
Controller->>Facade: getWeeklyTopRankings(yearWeek, page, size)
deactivate Controller
activate Facade
Facade->>MvService: getWeeklyRankings(yearWeek, page, size)
deactivate Facade
activate MvService
MvService->>Repo: findWeeklyRankings(yearWeek, page, size)
activate Repo
Repo-->>MvService: List<MvProductRankWeekly>
deactivate Repo
MvService->>MvService: convert each row to a RankingEntry
MvService-->>Facade: List<RankingEntry>
deactivate MvService
activate Facade
Facade->>Facade: enrichWithProductInfo(entries)
activate Facade
Facade->>ProductService: getByIds(productIds)
activate ProductService
ProductService-->>Facade: Map<productId, Product>
deactivate ProductService
Facade->>Facade: enrich entries with Product info to create RankingInfo
Facade-->>Facade: List<RankingInfo>
deactivate Facade
Facade-->>Controller: List<RankingInfo>
deactivate Facade
activate Controller
Controller-->>Client: ApiResponse<RankingListResponse>
deactivate Controller
sequenceDiagram
participant Job as WeeklyRankingJob
participant Reader as JdbcCursorItemReader
participant Processor as RankingScoreProcessor
participant Writer as ItemWriter
participant DB as Database
participant MvTable as mv_product_rank_weekly
Job->>Job: compute week range from targetDate parameter
loop chunk (size=100)
Job->>Reader: read()
activate Reader
Reader->>DB: query product_daily_metrics<br/>(week range, aggregated)
DB-->>Reader: list of AggregatedMetricRow
Reader-->>Job: AggregatedMetricRow
deactivate Reader
Job->>Processor: process(AggregatedMetricRow)
activate Processor
Processor->>Processor: compute score<br/>(views×0.1 + likes×0.2 + log(amount)×0.7)
Processor->>Processor: assign ranking number (from 1)
Processor-->>Job: RankingScoreRow
deactivate Processor
Job->>Writer: write(List<RankingScoreRow>)
activate Writer
Writer->>MvTable: INSERT ... ON DUPLICATE KEY UPDATE<br/>(keyed by yearWeek)
MvTable-->>Writer: applied
Writer-->>Job: success
deactivate Writer
end
Job-->>Job: job complete
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 min
Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
🧹 Nitpick comments (6)
apps/commerce-api/src/test/java/com/loopers/infrastructure/queue/QueueSchedulerTest.java (1)
45-46: Without verifying the distributed-lock calls, regressions can slip through. Operationally, if the tests miss a lock acquire/release regression, a multi-instance deployment can process the queue twice and issue duplicate tokens.
Suggested fix: explicitly verify the lock.lock()/lock.unlock() calls in each scenario, and also verify the path where unlock() is still called when an exception is thrown.
As an additional test, cover the case where queueService.peekBatch(...) throws and unlock() is called exactly once. Suggested diff:

```diff
 @@ void process_emptyQueue() {
     given(queueService.peekBatch(14)).willReturn(List.of());

     queueScheduler.process();

+    verify(lock).lock();
+    verify(lock).unlock();
     verify(queueService, never()).remove(org.mockito.ArgumentMatchers.any());
     verify(tokenService, never()).issue(org.mockito.ArgumentMatchers.any());
 }
+
+@Test
+@DisplayName("releases the lock even when processing throws.")
+void process_unlockOnException() {
+    given(queueService.peekBatch(14)).willThrow(new RuntimeException("boom"));
+
+    org.junit.jupiter.api.Assertions.assertThrows(RuntimeException.class, () -> queueScheduler.process());
+
+    verify(lock).lock();
+    verify(lock).unlock();
+}
```

As per coding guidelines
**/*Test*.java: verify that unit tests cover boundary values, failure cases, and exception flows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/infrastructure/queue/QueueSchedulerTest.java` around lines 45 - 46, In QueueSchedulerTest, add explicit verifications that the distributed lock is acquired and released by asserting calls to redissonClient.getLock(...), lock.lock() and lock.unlock() (use Mockito.verify on the mock Lock) for each scenario; also add a test where queueService.peekBatch(...) throws an exception and verify that lock.unlock() is still invoked exactly once to cover error/exception paths and prevent regression of lock behavior.
apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/ProductRankMvRepositoryImpl.java (1)
39-49: deleteAll can issue one DELETE query per entity.
deleteAll(Iterable) executes an individual DELETE for each entity by default. With the table currently capped at TOP 100 this is not a serious operational problem, but it can become a bottleneck as the data grows. Improvement: implement a bulk delete with a JPQL/native query.
♻️ Bulk delete query example
Add the following methods to the JPA repositories:
```java
// MvProductRankWeeklyJpaRepository
@Modifying
@Query("DELETE FROM MvProductRankWeekly m WHERE m.yearWeek = :yearWeek")
void deleteByYearWeek(@Param("yearWeek") String yearWeek);

// MvProductRankMonthlyJpaRepository
@Modifying
@Query("DELETE FROM MvProductRankMonthly m WHERE m.yearMonth = :yearMonth")
void deleteByYearMonth(@Param("yearMonth") String yearMonth);
```
Then call them directly from the Impl:
```diff
 @Override
 public void deleteWeeklyByYearWeek(String yearWeek) {
-    List<MvProductRankWeekly> existing = weeklyJpaRepository.findByYearWeekOrderByRankingAsc(yearWeek);
-    weeklyJpaRepository.deleteAll(existing);
+    weeklyJpaRepository.deleteByYearWeek(yearWeek);
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/ProductRankMvRepositoryImpl.java` around lines 39 - 49, deleteAll(existing) in ProductRankMvRepositoryImpl.deleteWeeklyByYearWeek and deleteMonthlyByYearMonth causes N delete statements (one per entity); replace this with repository-level bulk deletes by adding `@Modifying` JPQL methods in MvProductRankWeeklyJpaRepository and MvProductRankMonthlyJpaRepository (e.g., deleteByYearWeek(String yearWeek) and deleteByYearMonth(String yearMonth)) and invoke those methods from ProductRankMvRepositoryImpl (replace calls to weeklyJpaRepository.deleteAll(existing) and monthlyJpaRepository.deleteAll(existing) with weeklyJpaRepository.deleteByYearWeek(yearWeek) and monthlyJpaRepository.deleteByYearMonth(yearMonth)); ensure the `@Modifying` methods execute within a transactional context.
apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingController.java (1)
93-99: parseDate and validateDate contain duplicated logic. Both parse the yyyyMMdd format with identical exception handling. Refactoring validateDate to delegate to parseDate removes the duplication.
♻️ Refactoring example
```diff
 private String validateDate(String date) {
     String rankingDate = date != null ? date : LocalDate.now().format(DATE_FORMAT);
-    try {
-        LocalDate.parse(rankingDate, DATE_FORMAT);
-        return rankingDate;
-    } catch (Exception e) {
-        throw new CoreException(ErrorType.BAD_REQUEST, "date must be in yyyyMMdd format.");
-    }
+    parseDate(rankingDate); // format validation
+    return rankingDate;
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingController.java` around lines 93 - 99, The parseDate and validateDate methods duplicate yyyyMMdd parsing and identical exception handling; refactor validateDate to call parseDate (using DATE_FORMAT) and remove the duplicated parse/try-catch logic so parsing/exception creation is centralized in parseDate (throwing CoreException with ErrorType.BAD_REQUEST and the existing message), keeping DATE_FORMAT as the single source of truth and updating any callers to rely on parseDate for validation/parsing.
apps/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java (1)
132-160: The idempotency scenario misses a production refresh failure. In production, re-aggregating the same week must remove products that dropped out of the previous TOP 100, but the current test only re-runs the same single row, so stale rows can go undetected.
Suggested fix: insert two products for the first run, change the data before the second run so one product drops out of the weekly TOP 100, re-run, and verify the dropped product is gone from mv_product_rank_weekly.
As an additional test, verify that in a "run1: A/B exist → run2: only A valid" scenario only A remains.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java` around lines 132 - 160, Update the weeklyRankingJob_idempotent test to verify removal of stale rows by making the first run ingest two products (call insertMetrics for product A and product B), launch the job with jobLauncherTestUtils using JobParametersBuilder as before, then mutate the source data before the second run so one product falls out of weekly TOP100 (e.g., update or delete metrics for product B via insertMetrics/DB helper), launch the job again with a new run.id, and assert the job completed and that jdbcTemplate.queryForList("SELECT * FROM mv_product_rank_weekly WHERE ranking_week = '2026-W02'") returns only the remaining product A and contains no record for product B; keep references to insertMetrics, weeklyRankingJob_idempotent, jobLauncherTestUtils, mv_product_rank_weekly and JobParametersBuilder when locating and changing the test.
apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingApiE2ETest.java (1)
164-171: The success cases only assert counts, so wrong response content could regress silently. What matters in production is which product is returned at which rank, but the current assertions center on hasSize(2), so a wrong-product or wrong-order regression would still pass.
Suggested fix: explicitly assert rankings[0].productId, rankings[0].ranking, and the ordering/identifiers across rankings[*].
As an additional test, deliberately insert data whose storage order differs from the ranking values and verify the API returns entries ordered by ranking.
As per coding guidelines
**/*Test*.java: check integration tests for isolation level, flakiness risk, and test-data setup/cleanup strategy.
Also applies to: 218-225
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingApiE2ETest.java` around lines 164 - 171, The current assertions in RankingApiE2ETest only check the size of the "rankings" list and can miss regressions in which products or ranking values are incorrect; update the assertAll block(s) that reference response.getBody(), Map body, and List rankings (in RankingApiE2ETest, including the similar block around lines 218-225) to assert specific fields: verify rankings[0].productId and rankings[0].ranking (and other entries as needed), assert the list is sorted by ranking (e.g., ranking values are in ascending/descending order), and assert expected productIds appear in the expected positions; add an extra test that inserts items in a different storage order with differing ranking values and asserts the API returns them ordered by the ranking criteria to prevent order-related regressions.
apps/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java (1)
130-158: The monthly idempotency test needs to verify cleanup of dropped products. When the same month is re-aggregated in production, leftover rows for products excluded from the new ranking would pollute API results; the current test only checks for duplicate creation.
Suggested fix: after the first run, change the data so some products fall outside the monthly TOP 100, re-run, and verify those rows were removed from mv_product_rank_monthly.
As an additional test, verifying the "run1: A/B stored → run2: only A valid" outcome reliably prevents this production regression.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java` around lines 130 - 158, The current idempotency test monthlyRankingJob_idempotent only checks duplicate insertion; update it to also verify that products dropped from the monthly TOP ranking are removed from mv_product_rank_monthly on re-run: prepare metrics for at least two product IDs via insertMetrics (e.g., 1L and 2L), run the job once with jobLauncherTestUtils.launchJob using JobParametersBuilder, then modify metrics so one product falls out of TOP100 (update/delete metrics for that product or reduce its counts via insertMetrics), run the job again, and assert the exit status is COMPLETED and that mv_product_rank_monthly contains only the remaining valid product (and does not contain the dropped product ID) to ensure cleanup on upsert.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/commerce-api/src/test/java/com/loopers/application/concurrency/ConcurrencyTest.java`:
- Around line 281-282: Replace the brittle Thread.sleep(2000) wait with a
bounded polling/assertion that repeatedly checks the observed likeCount until it
reaches the expected value (or until a timeout) instead of sleeping a fixed 2s;
locate the sleep in ConcurrencyTest (related to LikeCountEventListener/@Async
AFTER_COMMIT handling) and implement either Awaitility-style await().until(...)
or a small-loop that polls the likeCount getter/endpoint every 50–200ms and
fails after a configurable timeout (e.g. 5–10s), keeping an additional test case
that asserts timeout behavior for delayed event processing.
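The bounded-polling replacement suggested for ConcurrencyTest can be sketched without Awaitility. This is a hypothetical helper, not the project's code; the real test would point the probe at the likeCount repository or endpoint.

```java
// Poll a supplier until it returns the expected value or the timeout elapses,
// instead of a fixed Thread.sleep(2000). Throws AssertionError on timeout.
public class BoundedPoll {
    public interface LongProbe { long sample(); }

    public static void awaitValue(LongProbe probe, long expected,
                                  long pollMillis, long timeoutMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (probe.sample() == expected) return; // success: stop waiting early
            Thread.sleep(pollMillis);               // short poll, not one long sleep
        }
        throw new AssertionError("value did not reach " + expected + " within " + timeoutMillis + "ms");
    }
}
```

Compared to a fixed sleep, this finishes as soon as the @Async AFTER_COMMIT listener has run, and only fails after the full timeout.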
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java`:
- Around line 95-97: The ORDER BY clause in MonthlyRankingJobConfig currently
sorts only by the computed score, causing non-deterministic ordering for ties;
update the SQL string in MonthlyRankingJobConfig (the multi-line SQL that
contains "ORDER BY (SUM(view_count) * 0.1 + SUM(like_count) * 0.2 + LOG(1 +
SUM(order_amount)) * 0.7) DESC LIMIT 100") to append a stable secondary key such
as ", product_id ASC" after the score expression, and add/extend a test that
inserts two products with identical computed scores and re-runs the ranking to
assert their relative order remains consistent across runs.
- Around line 125-152: The current upsert logic in the lambda returned by
MonthlyRankingJobConfig leaves old rows for the same ranking_month intact;
change the write step to first delete existing rows for the target yearMonth and
then insert the new batch inside the same transactional boundary so stale
products are removed atomically. Specifically, before building/executing the
INSERT (sql) and jdbcTemplate.batchUpdate call, execute a DELETE FROM
mv_product_rank_monthly WHERE ranking_month = :yearMonth (using
jdbcTemplate.update or NamedParameterJdbcTemplate with the same yearMonth param)
and ensure both operations run in one transaction. Also add the suggested test
case (initial run saves 2 products, second run saves 1) to assert the MV ends up
with only the current TOP entries for that yearMonth.
In
`@apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java`:
- Around line 96-97: The ORDER BY clause used in WeeklyRankingJobConfig
currently sorts only by the computed score, which causes nondeterministic
ordering for tied scores; update the SQL in WeeklyRankingJobConfig to add a
deterministic tie-breaker such as appending ", product_id ASC" (i.e., ORDER BY
(SUM(view_count) * 0.1 + SUM(like_count) * 0.2 + LOG(1 + SUM(order_amount)) *
0.7) DESC, product_id ASC) so results remain stable across runs, then run
tie-case tests against the ranking query to verify repeated executions produce
identical ordering.
- Around line 128-155: The writer currently only UPSERTs rows which can leave
stale entries for the same ranking_week; modify the ItemWriter returned in
WeeklyRankingJobConfig so it first deletes existing rows for the target yearWeek
(e.g., run a jdbcTemplate.update DELETE FROM mv_product_rank_weekly WHERE
ranking_week = :yearWeek) before performing the batch insert/upsert (or switch
to a delete-then-batch-insert full-refresh), ensure the delete and insert run in
the same transaction/context, and add a test that runs the job twice with
different input sets (A/B then A) asserting that entries from the first run (B)
are removed from mv_product_rank_weekly for that yearWeek.
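The delete-then-insert refresh the two comments above describe can be illustrated with a plain-Java stand-in. The class is hypothetical (no Spring or DB); the table and key names mirror the review text.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory stand-in for mv_product_rank_weekly keyed by ranking_week.
// refreshWeek first drops the week's rows (the DELETE), then stores the new
// batch (the INSERT), so products that fell out of the TOP list cannot survive
// a re-run as stale rows.
public class MvWeeklyRefresh {
    public record Row(long productId, int ranking) {}

    private final Map<String, List<Row>> table = new HashMap<>();

    public void refreshWeek(String yearWeek, List<Row> newRows) {
        table.remove(yearWeek);                    // DELETE ... WHERE ranking_week = :yearWeek
        table.put(yearWeek, List.copyOf(newRows)); // batch INSERT the fresh entries
    }

    public List<Row> findWeek(String yearWeek) {
        return table.getOrDefault(yearWeek, List.of());
    }
}
```

In the real job both statements would run inside the same chunk transaction, so a failure mid-refresh leaves the previous week's data intact rather than half-deleted.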
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 16da1b8b-8682-4eba-bc11-cec395384434
📒 Files selected for processing (22)
- apps/commerce-api/src/main/java/com/loopers/application/ranking/MvRankingAppService.java
- apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingFacade.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/MvProductRankMonthlyJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/MvProductRankWeeklyJpaRepository.java
- apps/commerce-api/src/main/java/com/loopers/infrastructure/ranking/ProductRankMvRepositoryImpl.java
- apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingController.java
- apps/commerce-api/src/test/java/com/loopers/application/concurrency/ConcurrencyTest.java
- apps/commerce-api/src/test/java/com/loopers/infrastructure/queue/QueueSchedulerTest.java
- apps/commerce-api/src/test/java/com/loopers/interfaces/api/RankingApiE2ETest.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/MonthlyRankingJobConfig.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/WeeklyRankingJobConfig.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/AggregatedMetricRow.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingScoreProcessor.java
- apps/commerce-batch/src/main/java/com/loopers/batch/job/ranking/step/RankingScoreRow.java
- apps/commerce-batch/src/test/java/com/loopers/CommerceBatchApplicationTest.java
- apps/commerce-batch/src/test/java/com/loopers/job/ranking/MonthlyRankingJobE2ETest.java
- apps/commerce-batch/src/test/java/com/loopers/job/ranking/WeeklyRankingJobE2ETest.java
- modules/jpa/src/main/java/com/loopers/domain/ranking/MvProductRankMonthly.java
- modules/jpa/src/main/java/com/loopers/domain/ranking/MvProductRankMonthlyId.java
- modules/jpa/src/main/java/com/loopers/domain/ranking/MvProductRankWeekly.java
- modules/jpa/src/main/java/com/loopers/domain/ranking/MvProductRankWeeklyId.java
- modules/jpa/src/main/java/com/loopers/domain/ranking/ProductRankMvRepository.java
💤 Files with no reviewable changes (1)
- apps/commerce-batch/src/test/java/com/loopers/CommerceBatchApplicationTest.java
- ConcurrencyTest: Thread.sleep(2000) → polling loop (100ms interval, 10s timeout)
- Weekly/MonthlyRankingJobConfig: added product_id ASC tie-breaker to ORDER BY
- Weekly/MonthlyRankingJobConfig: UPSERT → DELETE+INSERT to atomically remove stale rows
- Tests added: tie-break ordering verification, removal of dropped products on re-run
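The tie-breaker in that change can be illustrated with a plain-Java comparator stand-in for ORDER BY score DESC, product_id ASC. The class and fields are hypothetical; only the sort keys come from the commit.

```java
import java.util.Comparator;
import java.util.List;

// Equal scores fall back to product_id ascending, so repeated runs produce
// identical ordering instead of an arbitrary, DB-dependent one.
public class TieBreakDemo {
    public record Row(long productId, double score) {}

    public static List<Row> rank(List<Row> rows) {
        return rows.stream()
                .sorted(Comparator.comparingDouble(Row::score).reversed() // score DESC
                        .thenComparingLong(Row::productId))               // product_id ASC
                .toList();
    }
}
```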
📌 Summary
Aggregates the daily data in product_daily_metrics into weekly/monthly batches and loads TOP 100 rankings into Materialized View (MV) tables. The existing Ranking API is extended with a period parameter that serves daily (Redis ZSET), weekly (DB MV), and monthly (DB MV) separately.
🧭 Context & Decision
Problem definition
- product_daily_metrics → 5-minute scheduler → currently served in real time via Redis ZSET
- With the non-linear log1p(orderAmount) formula, a cancellation cannot be reversed incrementally
ADR-1: Batch vs real-time — why weekly/monthly are batch
- Real-time (rejected): with the non-linear formula (log1p), cancellations/edits cannot be incrementally reversed. Removing a single order from 7–30 days of accumulated score requires the original amount, but the ZSET holds only the summed score. Real-time aggregation over product_daily_metrics was the alternative considered.
- Batch (chosen): recomputes from the source (product_daily_metrics) from scratch, so cancellations/edits are reflected automatically. Reads are simple SELECTs on the MV tables. This connects naturally to the DB-as-SSOT structure (V9 ADR-3).
ADR-2: Chunk-Oriented vs Tasklet
Separation of responsibilities:
- Reader: aggregates per-period SUMs from product_daily_metrics, then filters the TOP 100 candidates with a score-based ORDER BY DESC LIMIT 100 → emits AggregatedMetricRow
- Processor (RankingScoreProcessor): computes the weighted score (view×0.1 + like×0.2 + log1p(amount)×0.7) and assigns ranking via AtomicInteger → emits RankingScoreRow
- Writer: INSERTs into the MV table via NamedParameterJdbcTemplate.batchUpdate() (stale rows removed atomically)
Why the score formula exists in both the Reader's ORDER BY and the Processor: the Reader's ORDER BY is there to filter exactly the TOP 100, while the Processor computes the precise score values to store. The Reader decides "who gets picked"; the Processor decides "what the picked rows get".
ADR-3: MV table design — composite PK + DELETE+INSERT idempotency
- Composite PK (product_id + ranking_week/ranking_month)
- Re-runnable with the same parameters via RunIdIncrementer
Column naming: the initial design used year_month, but it collided with a MySQL reserved word (YEAR_MONTH is an INTERVAL keyword) and broke Hibernate DDL generation with a syntax error. Rather than escaping with backticks, the columns were renamed to ranking_month/ranking_week to avoid the problem at the root.
ADR-4: API data-source branching — Facade pattern
Daily was already served by RankingAppService (Redis) and weekly/monthly by MvRankingAppService (DB); one AppService knowing both would be too much responsibility. RankingFacade calls the appropriate AppService per period and aggregates the product information, which fits the Facade's original role (orchestration).
🏗️ Design Overview
Scope of changes
- Modules touched: commerce-api, commerce-batch, modules/jpa
- modules/jpa: MvProductRankWeekly, MvProductRankMonthly (entities + composite PKs), ProductRankMvRepository (interface)
- commerce-batch: WeeklyRankingJobConfig, MonthlyRankingJobConfig (Job + Step + Reader + Processor + Writer), AggregatedMetricRow, RankingScoreRow (records), RankingScoreProcessor
- commerce-api: MvRankingAppService, ProductRankMvRepositoryImpl, MvProductRankWeeklyJpaRepository, MvProductRankMonthlyJpaRepository
- RankingController: period parameter added, switch branching (daily/weekly/monthly)
- RankingFacade: MvRankingAppService injected, getWeeklyTopRankings()/getMonthlyTopRankings() added
- CommerceBatchApplicationTest: contextLoads with no Job name — meaningless test removed
- QueueSchedulerTest: missing RedissonClient mock → NPE fixed
- ConcurrencyTest: like-count @Async AFTER_COMMIT wait — Thread.sleep(2000) → polling loop (100ms interval, 10s timeout)
Key component responsibilities
- WeeklyRankingJobConfig: weekly job → mv_product_rank_weekly, DELETE+INSERT
- MonthlyRankingJobConfig: monthly job → mv_product_rank_monthly, DELETE+INSERT
- AggregatedMetricRow / RankingScoreRow: records passed between steps; RankingScoreProcessor computes score and ranking
- MvProductRankWeekly/Monthly: entities with a create() factory
- ProductRankMvRepository / ProductRankMvRepositoryImpl: repository interface / implementation
- MvRankingAppService: converts MV rows to RankingEntry
- RankingFacade: orchestration; RankingController: period parameter switch (daily/weekly/monthly)
🔁 Flow Diagram
Batch aggregation flow

graph LR
    subgraph "Spring Batch (commerce-batch)"
        A[JobLauncher<br/>targetDate parameter] --> B[weeklyRankingJob<br/>or monthlyRankingJob]
        B --> C[Step]
        C --> D[Reader<br/>JdbcCursorItemReader]
        D --> E[Processor<br/>RankingScoreProcessor]
        E --> F[Writer<br/>DELETE + batchInsert]
    end
    D -->|AggregatedMetricRow| E
    E -->|RankingScoreRow| F
    subgraph "Reader: aggregation + TOP 100 filtering"
        D --> G["GROUP BY product_id<br/>SUM(view/like/amount)<br/>ORDER BY score DESC LIMIT 100"]
    end
    subgraph "Processor: score + ranking"
        E --> I["score = view×0.1 + like×0.2<br/>+ log1p(amount)×0.7<br/>ranking = AtomicInteger"]
    end
    F --> H[(mv_product_rank_weekly<br/>or mv_product_rank_monthly)]
    style H fill:#4CAF50,color:#fff

API lookup branching flow
graph TD
    A["GET /api/v1/rankings<br/>?period=weekly&date=20260408"] --> B[RankingController]
    B --> C{period?}
    C -->|daily| D[RankingFacade.getTopRankings<br/>→ RankingAppService<br/>→ Redis ZSET]
    C -->|weekly| E[RankingFacade.getWeeklyTopRankings<br/>→ MvRankingAppService<br/>→ DB mv_product_rank_weekly]
    C -->|monthly| F[RankingFacade.getMonthlyTopRankings<br/>→ MvRankingAppService<br/>→ DB mv_product_rank_monthly]
    D --> G[enrichWithProductInfo]
    E --> G
    F --> G
    G --> H["ApiResponse<RankingListResponse>"]
    style D fill:#FF9800,color:#fff
    style E fill:#4CAF50,color:#fff
    style F fill:#4CAF50,color:#fff

Score computation (single responsibility — Reader SQL)
Why the score formula exists in both the Reader and the Processor: the Reader's ORDER BY is a filter to pick exactly the TOP 100, while the Processor's computation produces the precise score value to store. The Reader decides "who gets picked"; the Processor decides "what the picked rows get".
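The non-invertibility that motivated the batch approach (a cancelled order cannot be subtracted back out of an accumulated ZSET score) comes down to log1p not distributing over sums. The amounts below are illustrative, not from the PR:

```java
// log1p(a + b) != log1p(a) + log1p(b): once amounts are folded into one
// summed score, the contribution of a single order cannot be recovered
// without knowing the original amounts.
public class Log1pDemo {
    public static void main(String[] args) {
        double combined = Math.log1p(10_000 + 5_000);              // score over the total
        double separate = Math.log1p(10_000) + Math.log1p(5_000);  // per-order contributions
        System.out.println(combined == separate); // false
    }
}
```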
The weights are the same as V9, except the order_amount weight moves from 0.6 to 0.7 (V9's real-time version allocated 0.1 to carry-over; batch has no carry-over, so that share is reallocated to orders, keeping the total at 1.0).
🧪 Tests
Batch tests (commerce-batch)
- WeeklyRankingJobE2ETest
- MonthlyRankingJobE2ETest
API tests (commerce-api)
- RankingApiE2ETest
- period=yearly → 400 BAD_REQUEST
- 2026-04-09 (hyphenated format) → 400 BAD_REQUEST
All tests green. 15 ArchUnit rules pass.
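The period branching those API tests exercise can be sketched as a standalone dispatcher. The service calls are replaced with string markers so the routing itself is runnable; the real controller delegates to RankingFacade and returns 400 via CoreException.

```java
// daily stays on Redis; weekly/monthly read the DB MV tables; anything else
// (e.g. the period=yearly case tested above) is rejected as a bad request.
public class PeriodRouter {
    public static String route(String period) {
        return switch (period) {
            case "daily"   -> "redis:zset";
            case "weekly"  -> "db:mv_product_rank_weekly";
            case "monthly" -> "db:mv_product_rank_monthly";
            default -> throw new IllegalArgumentException("period must be daily|weekly|monthly");
        };
    }
}
```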
📎 Conscious trade-offs
- Added a product_id ASC tie-breaker
- ranking_week/ranking_month column names: resolved the reserved-word (YEAR_MONTH) collision by renaming instead of backtick escaping
- DB-level LIMIT/OFFSET instead of skip/limit
📦 Dependency changes
None. Spring Batch is already included in the commerce-batch module. The MV entities were added to modules/jpa (within the existing dependency scope).
🔗 V9 → V10 connection points
- product_daily_metrics (DB SSOT)
- Score formula: view×0.1 + like×0.2 + log1p(amount)×weight
- RankingFacade + enrichWithProductInfo()
- RankingEntry record