[volume-10] Spring Batch ranking aggregation #407
Conversation
- Add DDL for mv_product_rank_weekly and mv_product_rank_monthly tables - Add DDL for product_metrics_daily table - Implement ProductRankWeeklyJpaEntity and ProductRankMonthlyJpaEntity - Add repository interfaces and implementations Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add Spring Batch dependency and configuration (batch.yml) - Common classes: RankingJobConstants, RankingMetricsAggregation - WeeklyRankingJobConfig: weekly ranking aggregation batch Job - MonthlyRankingJobConfig: monthly ranking aggregation batch Job - RankingJobScheduler: production scheduler (cron-based) - BatchSchedulerProperties: scheduler settings Applies the Chunk-Oriented Processing pattern: - Reader: JdbcCursorItemReader (GROUP BY aggregation) - Processor: assigns rank_number + converts to entity - Writer: DELETE + INSERT (guarantees idempotency) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- BatchAdminV1Controller: triggers batch Jobs via REST API - POST /api-admin/v1/batch/weekly-ranking - POST /api-admin/v1/batch/monthly-ranking - BatchAdminV1Dto: JobExecution response DTO - New ErrorType values: BATCH_INVALID_DATE_FORMAT, BATCH_JOB_FAILED - Add HTTP request example file Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- RankingPeriod enum: DAILY, WEEKLY, MONTHLY - PeriodRankingResult: per-period ranking result DTO - RankingQueryService: add getPeriodRankings() method - DAILY: Redis ZSET lookup (existing logic) - WEEKLY: reads mv_product_rank_weekly - MONTHLY: reads mv_product_rank_monthly - RankingV1Controller: add period parameter (default: DAILY) - Response now includes viewCount, likeCount, orderCount, periodStart, periodEnd - Update HTTP request examples Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Batch Job tests: - WeeklyRankingJobTest: verifies weekly aggregation, date boundaries, and idempotency - MonthlyRankingJobTest: verifies monthly aggregation and handling of leap years, common years, and 30-day months - RankingJobConstantsTest: unit tests for constants and date-calculation utilities - BatchAdminV1ControllerTest: admin API integration tests Ranking API tests: - RankingV1PeriodApiTest: per-period ranking query tests Integration tests: - RankingPipelineIntegrationTest: end-to-end verification of the full pipeline - generate test data → run batch → query API → confirm idempotency Test data: - ranking_test_data.sql: script generating 200 products x 30 days of test data Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Records five key design decisions: 1. Batch processing style: Chunk-Oriented chosen (vs Tasklet) 2. MV refresh strategy: DELETE + INSERT chosen (vs UPSERT) 3. Daily data source: keep Redis (vs unifying on MV) 4. API branching strategy: switch branching chosen (vs Strategy pattern) 5. Batch Jobs: separate Jobs chosen (vs a single combined Job) For each decision, the alternatives, trade-offs, rationale, and future improvement points are recorded Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
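Decision 2 above (DELETE + INSERT rather than UPSERT) is what makes the jobs safely re-runnable: refreshing a period replaces that period's rows wholesale, so a second run with the same input leaves the table unchanged. A minimal sketch of the idea, with an in-memory map standing in for the MV table and the repository (names are illustrative, not the PR's actual classes):

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

// Toy model of decision 2: DELETE + INSERT per period keeps the job
// idempotent — re-running for the same period replaces rows, never duplicates.
public class MvRefreshSketch {
    // period start -> (productId -> rank); stands in for mv_product_rank_weekly
    static final Map<LocalDate, Map<Long, Integer>> table = new HashMap<>();

    static void refresh(LocalDate periodStart, Map<Long, Integer> ranks) {
        table.remove(periodStart);                    // DELETE ... WHERE period_start_date = ?
        table.put(periodStart, new HashMap<>(ranks)); // INSERT the freshly aggregated rows
    }

    public static void main(String[] args) {
        LocalDate week = LocalDate.of(2025, 4, 14);
        refresh(week, Map.of(100L, 1, 200L, 2));
        refresh(week, Map.of(100L, 1, 200L, 2)); // re-run with the same input
        System.out.println(table.get(week).size()); // prints 2, not 4
    }
}
```

With UPSERT, rows for products that dropped out of the ranking between runs would linger; DELETE + INSERT avoids that class of bug at the cost of a brief window where the period's rows are absent unless both statements share one transaction.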
📝 Walkthrough
This PR introduces Spring Batch to add a batch pipeline that aggregates weekly and monthly product rankings. The API is extended so that, alongside the Redis-based daily ranking, the weekly/monthly rankings stored in the database can also be queried.
Sequence Diagram(s)
sequenceDiagram
participant Scheduler as Batch Scheduler<br/>(RankingJobScheduler)
participant JobLauncher as JobLauncher
participant Reader as JDBC Cursor Reader
participant Processor as ItemProcessor<br/>(rank assign)
participant Writer as ItemWriter
participant Repository as JPA Repository
participant DB as Database
Scheduler->>JobLauncher: launchJob(targetDate)
JobLauncher->>Reader: read()
Reader->>DB: SELECT aggregated metrics<br/>FROM product_metrics_daily<br/>WHERE date IN [period]<br/>ORDER BY score DESC
DB-->>Reader: RankingMetricsAggregation[]
Reader-->>Processor: RankingMetricsAggregation
Processor->>Processor: assign rankNumber++<br/>map to JpaEntity<br/>with period boundaries
Processor-->>Writer: ProductRank[Weekly|Monthly]JpaEntity
Writer->>Repository: deleteByPeriodStartDate(period)<br/>(idempotency)
Repository->>DB: DELETE FROM mv_product_rank_*<br/>WHERE period_start_date = ?
Writer->>Repository: saveAll(entities)
Repository->>DB: INSERT INTO mv_product_rank_*<br/>VALUES (rank, metrics, period)
DB-->>Repository: saved entities
Scheduler->>Scheduler: log job completion
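The Processor box in the diagram (sequential rank_number assignment over score-ordered input) can be sketched as follows; Aggregation and RankedRow are simplified stand-ins, not the PR's RankingMetricsAggregation or JPA entity types:

```java
import java.math.BigDecimal;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Items arrive from the reader already ordered by score DESC;
// a stateful counter hands out rank numbers 1, 2, 3, ...
public class RankAssignSketch {
    record Aggregation(long productId, BigDecimal score) {}
    record RankedRow(long productId, int rankNumber, BigDecimal score) {}

    static List<RankedRow> assignRanks(List<Aggregation> orderedByScoreDesc) {
        AtomicInteger rank = new AtomicInteger(0); // per-run state
        return orderedByScoreDesc.stream()
                .map(a -> new RankedRow(a.productId(), rank.incrementAndGet(), a.score()))
                .toList();
    }

    public static void main(String[] args) {
        var rows = assignRanks(List.of(
                new Aggregation(200L, new BigDecimal("9.5")),
                new Aggregation(100L, new BigDecimal("7.0"))));
        System.out.println(rows); // product 200 gets rank 1, product 100 gets rank 2
    }
}
```

Because the counter is stateful, the real ItemProcessor must be scoped so that each job execution starts from a fresh counter; a shared singleton would carry rank state across runs, which connects to review point 8 below on making boundaries explicit.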
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Key review points from an operations perspective
1. Profile separation for batch schedule management
2. Guaranteeing batch idempotency
3. JDBC schema initialization settings
4. Data consistency: Redis vs DB — daily rankings are served in real time from a Redis ZSET, while weekly/monthly rankings come from materialized views in the DB. The synchronization point between the two sources is not clearly defined.
5. Performance of long-running batch jobs — in the weekly/monthly batches, the JDBC cursor reader reads every matching metric, the processor assigns ranks sequentially, and the writer performs a bulk INSERT.
6. Date parameter validation in the admin API
Additional code-quality review items
7. Traceability during batch execution
8. Explicit transaction boundaries
This PR delivers extensive batch infrastructure and integration tests, so the feature is largely complete; reviewing and addressing the operational items above would further improve production stability.
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 16
🧹 Nitpick comments (31)
http/batch-admin-api.http (1)
6-16: Parameterizing the HTTP request host is safer for operations. Why this matters (operations): when the host is hardcoded as on lines 6, 11, and 15, switching between local and staging requires repeated manual edits, which increases the risk of calling the wrong target.
Suggested fix: declare an @host variable at the top of the file and reuse it in every request URL.
Additional testing: verify manually that the three requests behave identically across HTTP Client environment files (e.g., local/staging) when only @host changes.
Example change
+@host = http://localhost:8080
+
 ### Run weekly ranking aggregation Job
-POST http://localhost:8080/api-admin/v1/batch/weekly-ranking?targetDate=20250414
+POST {{host}}/api-admin/v1/batch/weekly-ranking?targetDate=20250414
 Content-Type: application/json

 ### Run monthly ranking aggregation Job
-POST http://localhost:8080/api-admin/v1/batch/monthly-ranking?targetDate=20250401
+POST {{host}}/api-admin/v1/batch/monthly-ranking?targetDate=20250401
 Content-Type: application/json

 ### Invalid date format test (expect 400 error)
-POST http://localhost:8080/api-admin/v1/batch/weekly-ranking?targetDate=2025-04-14
+POST {{host}}/api-admin/v1/batch/weekly-ranking?targetDate=2025-04-14
 Content-Type: application/json
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@http/batch-admin-api.http` around lines 6 - 16, Introduce a reusable host variable (e.g., declare `@host` at the top) and replace the hardcoded "http://localhost:8080" in all POST requests for /api-admin/v1/batch/weekly-ranking and /api-admin/v1/batch/monthly-ranking with that `@host` variable so the three requests use @{host}/api-admin/v1/batch/weekly-ranking?targetDate=... and @{host}/api-admin/v1/batch/monthly-ranking?targetDate=...; update any environment-specific HTTP client configs (local/staging) to only change `@host` and manually verify all three requests still behave as expected.
apps/commerce-api/src/test/java/com/loopers/batch/job/common/RankingJobConstantsTest.java (1)
25-52: Missing failure/exception-path tests may let batch input regressions slip through. Currently only valid inputs are verified, so a parse failure from a malformed date string is not pinned down in advance. In production, a single bad parameter can cascade into repeated scheduled-batch failures, so the exception flow should be tested explicitly.
- Suggested fix: add failure cases via assertThatThrownBy for a malformed format (2025-04-14) and an impossible date (20251301).
- Additional testing: also asserting that DateTimeParseException is thrown and that the message contains key phrases (format error / invalid month) improves regression detection.
Test reinforcement example
 @Nested
 @DisplayName("DATE_FORMATTER")
 class DateFormatterTest {
@@
     void formatDate() {
@@
         assertThat(formatted).isEqualTo("20250414");
     }
+
+    @Test
+    @DisplayName("Parsing fails for a malformed format string")
+    void parseDateFormat_invalidPattern() {
+        assertThatThrownBy(() -> LocalDate.parse("2025-04-14", RankingJobConstants.DATE_FORMATTER))
+            .isInstanceOf(java.time.format.DateTimeParseException.class);
+    }
+
+    @Test
+    @DisplayName("Parsing fails for an invalid date")
+    void parseDateFormat_invalidDate() {
+        assertThatThrownBy(() -> LocalDate.parse("20251301", RankingJobConstants.DATE_FORMATTER))
+            .isInstanceOf(java.time.format.DateTimeParseException.class);
+    }
 }
As per coding guidelines
**/*Test*.java: check that unit tests cover boundary values, failure cases, and exception flows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/batch/job/common/RankingJobConstantsTest.java` around lines 25 - 52, Add negative-case unit tests for RankingJobConstants.DATE_FORMATTER: create tests (e.g., parseDateFormat_InvalidFormat and parseDateFormat_InvalidDate) that use assertThatThrownBy to verify parsing invalid strings like "2025-04-14" and impossible dates like "20251301" throw DateTimeParseException, and assert the exception message contains relevant keywords (format/invalid month) to ensure regression detection in RankingJobConstants.DATE_FORMATTER; reference the existing tests parseDateFormat and formatDate to place these new tests in the same test class.
scripts/migration/V003__create_product_metrics_daily_table.sql (1)
20-23: Without a constraint preventing negative metrics, ranking integrity can break. If the count/score columns go negative due to an application bug or a manual load mistake, the weekly/monthly aggregates are distorted. Once distorted data accumulates in production, recovery costs grow.
- Suggested fix: fix the lower bound of view_count, like_count, order_count, and score at 0 with CHECK constraints.
- Additional testing: include in the migration verification tests a check that an INSERT with negative values fails.
DDL reinforcement example
  UNIQUE INDEX uk_daily_product_date (product_id, metric_date),
+ CONSTRAINT chk_daily_counts_non_negative
+     CHECK (view_count >= 0 AND like_count >= 0 AND order_count >= 0),
+ CONSTRAINT chk_daily_score_non_negative
+     CHECK (score >= 0)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/migration/V003__create_product_metrics_daily_table.sql` around lines 20 - 23, Add NOT NULL CHECK constraints to prevent negative metrics in the product_metrics_daily table: update the CREATE TABLE/ALTER TABLE logic handling the columns view_count, like_count, order_count, and score so each has a CHECK (column_name >= 0) constraint (or a single table-level CHECK combining them) to enforce a 0 lower bound; ensure the migration script that defines these columns (referenced by the column names view_count, like_count, order_count, score in V003__create_product_metrics_daily_table.sql) includes these checks and add a simple migration test that attempts to INSERT negative values and expects failure.
scripts/migration/V002__create_product_rank_mv_tables.sql (1)
50-53: A composite index is needed to reduce the sort cost of per-product history queries. With only a single-column index on product_id, ORDER BY period_start_date DESC can incur an extra sort. As period data grows, this can lead to query latency and DB CPU spikes.
- Suggested fix: for both the weekly and monthly tables, replace or supplement with a composite index on (product_id, period_start_date DESC).
- Additional testing: confirm via EXPLAIN that Using filesort disappears for the actual query.
DDL reinforcement example
- INDEX idx_weekly_product (product_id),
+ INDEX idx_weekly_product_period (product_id, period_start_date DESC),
@@
- INDEX idx_monthly_product (product_id),
+ INDEX idx_monthly_product_period (product_id, period_start_date DESC),
Also applies to: 80-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/migration/V002__create_product_rank_mv_tables.sql` around lines 50 - 53, Replace the single-column index idx_weekly_product on mv_product_rank_weekly with a composite index on (product_id, period_start_date DESC) (or add that composite alongside it) to avoid filesort for the query SELECT * FROM mv_product_rank_weekly WHERE product_id = ? ORDER BY period_start_date DESC; do the same for the monthly view (mv_product_rank_monthly / its idx_monthly_product equivalent). Locate the index definitions (idx_weekly_product and the monthly index) and update/create composite indexes keyed by product_id then period_start_date DESC, and afterwards verify with EXPLAIN that the query no longer shows "Using filesort".
apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankWeeklyMapper.java (1)
12-37: Add null guards on mapper inputs to improve failure diagnosability. Currently, when domain/entity is null, the NPE surfaces late at an internal call site, making root-cause analysis from production logs difficult.
Suggested fix: fail fast at method entry with Objects.requireNonNull.
Additional testing: unit tests verifying the immediate exception and message when calling toJpaEntity(null) and toDomain(null) are recommended.
Proposed patch
+import java.util.Objects;
+
 public class ProductRankWeeklyMapper {
     public ProductRankWeeklyJpaEntity toJpaEntity(ProductRankWeekly domain) {
+        Objects.requireNonNull(domain, "domain must not be null");
         return new ProductRankWeeklyJpaEntity(
             domain.getProductId(),
             domain.getRankNumber(),
@@
     }

     public ProductRankWeekly toDomain(ProductRankWeeklyJpaEntity entity) {
+        Objects.requireNonNull(entity, "entity must not be null");
         return ProductRankWeekly.reconstitute(
             entity.getId(),
             entity.getProductId(),
As per coding guidelines
**/*.java: null handling and defensive coding must be reviewed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankWeeklyMapper.java` around lines 12 - 37, Add null guards at the start of ProductRankWeeklyMapper.toJpaEntity and toDomain by calling Objects.requireNonNull on the incoming ProductRankWeekly (domain) and ProductRankWeeklyJpaEntity (entity) respectively so failures surface immediately with clear messages; update the methods to throw with a descriptive message (e.g., "domain must not be null" / "entity must not be null") and add unit tests that assert toJpaEntity(null) and toDomain(null) throw the expected NullPointerException and message.
http/ranking-api.http (1)
7-16: Parameterize the example dates to keep the docs executable. Fixed date examples drift from the verification data over time, causing doc-based API smoke checks to fail frequently.
Suggested fix: declare a @sampleDate variable and reuse it across all requests; that configuration is easier to maintain.
Additional testing: run the DAILY/WEEKLY/MONTHLY examples in the docs once each against the same variable and check the response schemas.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@http/ranking-api.http` around lines 7 - 16, Replace the hard-coded date query param in all GET /api/v1/rankings examples with a single variable (e.g., declare `@sampleDate`) and reuse it across every request so the docs remain executable; update the four GET lines that currently use "20250407" to use the variable instead, and then execute the three period variants (period=DAILY, WEEKLY, MONTHLY) plus a pagination example (page=2) against the same `@sampleDate` to verify responses conform to the expected ranking response schema.
apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankMonthlyMapper.java (1)
12-37: Reduce weekly/monthly mapper duplication to prevent field drift. The monthly mapper repeats nearly identical mapping code to the weekly mapper, so a future field addition risks being applied to only one side and producing inconsistent production data.
Suggested fix: introduce a shared mapping convention (a shared utility/abstraction, or mapper contract tests) to force the two mappers' fields to stay in sync.
Additional testing: add parameterized round-trip tests (domain → entity → domain) for both the weekly and monthly entities to detect mapping omissions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankMonthlyMapper.java` around lines 12 - 37, ProductRankMonthlyMapper currently duplicates field mapping logic that mirrors the weekly mapper; extract the common mapping into a shared helper or abstract base (e.g., ProductRankMapperBase or ProductRankMapperUtil) that exposes methods to build a ProductRankMonthlyJpaEntity and to reconstitute a ProductRankMonthly from a JPA entity, then have ProductRankMonthlyMapper.toJpaEntity and toDomain delegate to those shared methods; additionally add parameterized round-trip tests (domain -> JPA entity -> domain) for both ProductRankMonthlyMapper and the weekly mapper to assert all fields are preserved (use ProductRankMonthly.reconstitute and the JPA entity constructor/fields in the assertions) so future field additions must be updated in one place.
apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java (1)
52-80: Do not duplicate period-boundary calculation in the controller. The periodStart/periodEnd computed here re-implement the same rules as the weekly/monthly query logic in RankingQueryService. Even a small divergence between the two makes the actual query data and the response metadata disagree, confusing frontend display, cache keys, and incident analysis. Lift this calculation into RankingPeriod or a shared service helper as a single source of truth, and add tests for each of DAILY/WEEKLY/MONTHLY verifying that the response metadata matches the actual query range.
As per coding guidelines: "The Controller focuses on request validation (Bean Validation) and response assembly; business logic moves to the Service."
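One possible shape for lifting the boundary math onto the enum, assuming Monday–Sunday weeks and calendar months as used elsewhere in this PR; the method names are suggestions, not the existing RankingPeriod API:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

// Each period computes its own boundaries, so controller and service
// share a single implementation instead of re-deriving the same rules.
public class RankingPeriodSketch {
    enum RankingPeriod {
        DAILY {
            public LocalDate start(LocalDate d) { return d; }
            public LocalDate end(LocalDate d)   { return d; }
        },
        WEEKLY {
            public LocalDate start(LocalDate d) { return d.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY)); }
            public LocalDate end(LocalDate d)   { return d.with(TemporalAdjusters.nextOrSame(DayOfWeek.SUNDAY)); }
        },
        MONTHLY {
            public LocalDate start(LocalDate d) { return d.withDayOfMonth(1); }
            public LocalDate end(LocalDate d)   { return d.with(TemporalAdjusters.lastDayOfMonth()); }
        };

        public abstract LocalDate start(LocalDate targetDate);
        public abstract LocalDate end(LocalDate targetDate);
    }

    public static void main(String[] args) {
        LocalDate wed = LocalDate.of(2025, 4, 16); // a Wednesday
        System.out.println(RankingPeriod.WEEKLY.start(wed)); // 2025-04-14 (Monday)
        System.out.println(RankingPeriod.WEEKLY.end(wed));   // 2025-04-20 (Sunday)
    }
}
```

The controller would then only call period.start(targetDate)/period.end(targetDate) for response metadata, and the batch/query code would call the same methods, eliminating the drift risk described above.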
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/interfaces/api/ranking/RankingV1Controller.java` around lines 52 - 80, The controller currently duplicates period boundary logic in calculatePeriodStart and calculatePeriodEnd; move this logic into a single canonical place (e.g., add methods to RankingPeriod or a shared helper used by RankingQueryService) and have RankingV1Controller call that shared implementation instead of its own methods; update/remove calculatePeriodStart/calculatePeriodEnd in RankingV1Controller, update RankingQueryService to use the same new method (or delegate to RankingPeriod), and add tests verifying that the response metadata (periodStart/periodEnd) exactly matches the query service's actual lookup ranges for DAILY/WEEKLY/MONTHLY.
apps/commerce-api/src/main/java/com/loopers/batch/scheduler/RankingJobScheduler.java (1)
74-85: Do not just log batch failures and return. Currently the method exits normally even when aggregation fails, so operations can easily miss the failure signal. Even when the weekly/monthly ranking is not refreshed, subsequent queries simply keep returning stale data, so a failure-detection path beyond logging is needed. Rethrow the exception after logging, or at minimum record a failure metric/event and attach alerting and reprocessing flows.
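The log-then-rethrow-with-metric pattern can be sketched as follows; JobRunner and the AtomicLong counter are illustrative stand-ins for Spring Batch's JobLauncher and a real metrics counter (e.g. Micrometer), not the PR's actual classes:

```java
import java.util.concurrent.atomic.AtomicLong;

// Log, count, then rethrow: the failure reaches the scheduler/monitoring
// layer instead of vanishing after a log line.
public class FailurePropagationSketch {
    static final AtomicLong batchFailures = new AtomicLong();

    interface JobRunner { void run(String targetDate) throws Exception; }

    static void launchJob(JobRunner runner, String targetDate) {
        try {
            runner.run(targetDate);
        } catch (Exception e) {
            batchFailures.incrementAndGet(); // failure metric for alerting
            System.err.println("ranking job failed for " + targetDate + ": " + e.getMessage());
            throw new IllegalStateException("ranking job failed", e); // propagate, don't swallow
        }
    }

    public static void main(String[] args) {
        try {
            launchJob(d -> { throw new Exception("aggregation error"); }, "20250414");
        } catch (IllegalStateException expected) {
            System.out.println("failures recorded: " + batchFailures.get()); // failures recorded: 1
        }
    }
}
```

With `@Scheduled` methods the rethrown exception still ends up in the scheduler's error handler rather than a caller, so the metric/event is the part monitoring can actually alert on; the rethrow mainly keeps the failure visible to any error-handling infrastructure wrapping the scheduler.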
It is also worth adding a test that, when JobLauncher throws, the failure propagates externally or the failure metric increases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/batch/scheduler/RankingJobScheduler.java` around lines 74 - 85, The launchJob method currently swallows exceptions after logging; change it so failures are not just logged: inside RankingJobScheduler.launchJob, when jobLauncher.run(job, params) throws, either rethrow the exception after logging or record a failure metric/event (e.g., increment a Counter via your metrics registry) before rethrowing so the error propagates to callers/monitoring; update the catch block that references job.getName() and exception e to (1) emit a failure metric or event and (2) rethrow the exception (or wrap in a RuntimeException) instead of returning silently; add a unit/integration test that simulates jobLauncher.run throwing and verifies that an exception is propagated and the failure metric/event was incremented.
apps/commerce-api/src/test/java/com/loopers/batch/job/weeklyranking/WeeklyRankingJobTest.java (2)
46-47: Unused EntityManager injection
EntityManager is injected but never used in the test code. Unnecessary dependencies should be removed.
Proposed fix
-@Autowired
-private EntityManager entityManager;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/batch/job/weeklyranking/WeeklyRankingJobTest.java` around lines 46 - 47, the injected EntityManager field (entityManager) in WeeklyRankingJobTest is not used by any test, so remove it: delete the private EntityManager entityManager; declaration and, if the related import (javax.persistence.EntityManager) remains, clean it up as well to drop the unnecessary dependency.
64-105: Test coverage improvement suggestions. The current tests cover the happy path well, but adding tests for exception scenarios that can occur in production is recommended:
- Malformed targetDate: verify that a DateTimeParseException is raised when targetDate is not in yyyyMMdd format
- Duplicate metrics for the same product and date: verify aggregation behavior when the same (product_id, metric_date) combination is inserted twice into product_metrics_daily
- Null or negative score: verify whether the batch fails on data-integrity problems
Without these tests, diagnosing a batch failure caused by bad data in production can be difficult.
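The malformed-date cases are worth pinning down precisely, because java.time's default SMART resolver rejects some bad inputs but silently adjusts others. A small stand-alone sketch, assuming the job parses with a plain yyyyMMdd formatter (as RankingJobConstants.DATE_FORMATTER presumably does):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

// Which bad targetDate values actually fail to parse under SMART resolution?
public class TargetDateSketch {
    static final DateTimeFormatter F = DateTimeFormatter.ofPattern("yyyyMMdd");

    static boolean failsToParse(String raw) {
        try {
            LocalDate.parse(raw, F);
            return false;
        } catch (DateTimeParseException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(failsToParse("2025-04-14")); // true: wrong format
        System.out.println(failsToParse("20251301"));   // true: month 13 is out of range
        // But an impossible day-of-month is quietly clamped, not rejected:
        System.out.println(LocalDate.parse("20250431", F)); // 2025-04-30
    }
}
```

So a test suite that only checks the dash-separated format would miss the day-overflow case; asserting on inputs like "20250431" (or switching the formatter to ResolverStyle.STRICT with a uuuu-based pattern) pins the intended behavior explicitly.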
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/batch/job/weeklyranking/WeeklyRankingJobTest.java` around lines 64 - 105, Add three focused tests to WeeklyRankingJobTest to cover edge cases: (1) a test (e.g., invalidTargetDateFormatThrows) that supplies a malformed "targetDate" via JobParameters to jobLauncherTestUtils.launchJob(params) and asserts a DateTimeParseException is thrown from the batch entry point that parses the date; (2) a test (e.g., duplicateDailyMetricsAggregated) that uses the existing insertDailyMetrics helper (or a low-level insert helper) to insert duplicate rows with the same (product_id, metric_date) for the same product, run the job, and assert weeklyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(...) shows correct aggregated totals (no double-counting or consistent dedup behavior); and (3) a test (e.g., nullOrNegativeScoreHandling) that inserts a daily metric with null or negative totalScore, runs the job, and asserts the batch either fails cleanly (expecting a specific exception) or skips/normalizes the record according to intended behavior (assert by checking jobExecution.getStatus() and weeklyRankRepository results). Reference test helpers insertDailyMetrics, jobLauncherTestUtils.launchJob, and weeklyRankRepository / ProductRankWeeklyJpaEntity when adding these tests.
apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankMonthlyJpaRepository.java (1)
35-37: @Modifying needs clearAutomatically = true
Following the same pattern as ProductRankWeeklyJpaRepository, the writer in MonthlyRankingJobConfig runs deleteByPeriodStartDate followed by saveAll within the same transaction. The clearAutomatically = true option is needed to keep the persistence context consistent.
Proposed fix
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("DELETE FROM ProductRankMonthlyJpaEntity e WHERE e.periodStartDate = :periodStartDate")
 void deleteByPeriodStartDate(@Param("periodStartDate") LocalDate periodStartDate);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankMonthlyJpaRepository.java` around lines 35 - 37, add the clearAutomatically = true option to the @Modifying annotation on ProductRankMonthlyJpaRepository.deleteByPeriodStartDate: to keep the persistence context consistent when MonthlyRankingJobConfig's writer calls deleteByPeriodStartDate and then saveAll in the same transaction, set clearAutomatically = true on the deleteByPeriodStartDate method (@Modifying, @Query(...)) so it behaves in the same pattern as the Weekly repository.
apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankWeeklyJpaRepository.java (1)
35-37: @Modifying needs clearAutomatically = true
The writer in WeeklyRankingJobConfig calls deleteByPeriodStartDate and then saveAll within the same transaction. Without clearAutomatically = true, deleted entities linger in the persistence context and can cause unexpected behavior.
Proposed fix
-@Modifying
+@Modifying(clearAutomatically = true)
 @Query("DELETE FROM ProductRankWeeklyJpaEntity e WHERE e.periodStartDate = :periodStartDate")
 void deleteByPeriodStartDate(@Param("periodStartDate") LocalDate periodStartDate);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankWeeklyJpaRepository.java` around lines 35 - 37, The `@Modifying` annotation on ProductRankWeeklyJpaRepository.deleteByPeriodStartDate needs clearAutomatically = true to avoid stale entities in the persistence context when WeeklyRankingJobConfig's writer calls deleteByPeriodStartDate and then saveAll in the same transaction; update the `@Modifying` on the deleteByPeriodStartDate method to include clearAutomatically = true so the persistence context is cleared after the bulk delete.
apps/commerce-api/src/main/java/com/loopers/batch/job/weeklyranking/WeeklyRankingJobConfig.java (1)
77-79: Duplicated date-parsing logic
The targetDate parsing and weekStart/weekEnd calculation are repeated in three places: weeklyMetricsReader, weeklyRankingProcessor, and weeklyRankingWriter. Parsing errors are hard to debug, and any change to the date calculation must be applied in all three places.
Extracting a shared utility method, or parsing once at the Step level and storing the result in the ExecutionContext for reuse, is recommended.
Proposed fix - extract a shared method
private record WeekRange(LocalDate weekStart, LocalDate weekEnd) {
    static WeekRange from(String targetDate) {
        LocalDate target = LocalDate.parse(targetDate, RankingJobConstants.DATE_FORMATTER);
        LocalDate weekStart = target.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
        LocalDate weekEnd = target.with(TemporalAdjusters.nextOrSame(DayOfWeek.SUNDAY));
        return new WeekRange(weekStart, weekEnd);
    }
}
Also applies to: 98-100, 124-125
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/batch/job/weeklyranking/WeeklyRankingJobConfig.java` around lines 77 - 79, The date parsing and weekStart/weekEnd calculation is duplicated across weeklyMetricsReader, weeklyRankingProcessor, and weeklyRankingWriter; extract this into a single reusable utility (e.g., a private record WeekRange with a static from(String targetDate) that uses RankingJobConstants.DATE_FORMATTER and TemporalAdjusters) or compute once at Step startup and store in the Step ExecutionContext, then replace the duplicated LocalDate.parse(...) and .with(...) logic in the three methods to use WeekRange.from(targetDate) or read the precomputed weekStart/weekEnd from the ExecutionContext so parsing and formatter usage is centralized and parsing errors are handled in one place.
apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingQueryService.java (1)
5-8: Application layer references Infrastructure JPA repositories directly. The application service directly injects ProductRankWeeklyJpaRepository and ProductRankMonthlyJpaRepository. Accessing them through domain repository interfaces, as the existing ProductRepository does, better fits the layer-separation principle.
Injecting and using the ProductRankWeeklyRepository and ProductRankMonthlyRepository domain interfaces is recommended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingQueryService.java` around lines 5 - 8, RankingQueryService currently depends directly on infrastructure JPA types (ProductRankWeeklyJpaRepository, ProductRankMonthlyJpaRepository and related JpaEntity imports); change it to depend on the domain repository interfaces (ProductRankWeeklyRepository, ProductRankMonthlyRepository) instead, remove the JPA imports from RankingQueryService, update the constructor/field injection to accept ProductRankWeeklyRepository and ProductRankMonthlyRepository, and ensure any code that relied on JpaEntity types uses the domain model or a mapper/adapter provided by the repository implementations; if an adapter layer is missing, create repository implementations that adapt JpaEntity to the domain DTOs so RankingQueryService only references domain interfaces and models.
apps/commerce-api/src/main/java/com/loopers/batch/domain/ProductRankMonthly.java (1)
51-73: create() uses LocalDateTime.now() - hard to test
The create() method calls LocalDateTime.now() directly to set createdAt, which makes the time dependency hard to control in unit tests.
Injecting a Clock, or accepting createdAt as a parameter, would let tests pin a specific time. If verifying consistency between the batch execution time and the createdAt value is ever needed, the current structure makes that impossible.
Proposed fix
 public static ProductRankMonthly create(
     Long productId,
     int rankNumber,
     BigDecimal totalScore,
     long totalViewCount,
     long totalLikeCount,
     long totalOrderCount,
     LocalDate periodStartDate,
-    LocalDate periodEndDate
+    LocalDate periodEndDate,
+    LocalDateTime createdAt
 ) {
     return new ProductRankMonthly(
         null, productId, rankNumber, totalScore,
         totalViewCount, totalLikeCount, totalOrderCount,
         periodStartDate, periodEndDate,
-        LocalDateTime.now()
+        createdAt
     );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/batch/domain/ProductRankMonthly.java` around lines 51 - 73, The create() factory currently sets createdAt via LocalDateTime.now(), making tests time-dependent; update ProductRankMonthly.create(...) to accept either a LocalDateTime createdAt parameter or a Clock (e.g., add an overload create(..., LocalDateTime createdAt) or create(..., Clock clock)) and use that value to set the createdAt field instead of calling LocalDateTime.now(); modify the ProductRankMonthly constructor invocations and any callers to pass a deterministic createdAt (or a Clock) so tests can control time while preserving the existing behavior with a convenience overload that delegates to LocalDateTime.now() when not provided.
apps/commerce-api/src/main/java/com/loopers/batch/domain/ProductRankWeekly.java (1)
11-95: Structural duplication with ProductRankMonthly
ProductRankWeekly and ProductRankMonthly have identical fields, constructors, factory methods, and getters. Any future field addition or logic change must be applied to both classes, which is a maintenance burden.
Consolidating into a common abstract class or a generic-based class could be considered, though at the current scale it is optional.
The testability issue with create() calling LocalDateTime.now() applies here exactly as in the ProductRankMonthly comment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/main/java/com/loopers/batch/domain/ProductRankWeekly.java` around lines 11 - 95, ProductRankWeekly duplicates ProductRankMonthly (fields, constructor, create/reconstitute) and uses LocalDateTime.now() which hampers testing; refactor by extracting a shared base class or generic (e.g., ProductRankBase or ProductRankPeriod<T>) that holds id, productId, rankNumber, totalScore, totalViewCount, totalLikeCount, totalOrderCount, periodStartDate, periodEndDate, createdAt and common constructors/reconstitute logic, then have ProductRankWeekly and ProductRankMonthly extend it; also make the create(...) factory on ProductRankWeekly use an injectable time source (pass a java.time.Clock or Supplier<LocalDateTime> into create or into the base factory) instead of calling LocalDateTime.now() directly so tests can supply a fixed clock.
apps/commerce-api/src/test/java/com/loopers/batch/job/monthlyranking/MonthlyRankingJobTest.java (2)
45-54: Missing @AfterEach cleanup can cause test-isolation problems.
Cleaning up only in @BeforeEach means residual data from a failed test can affect other tests. Especially when used with @SpringBatchTest, Job metadata must also be considered.
♻️ Suggested @AfterEach addition
+import org.junit.jupiter.api.AfterEach;

 @BeforeEach
 void setUp() {
     jobLauncherTestUtils.setJob(monthlyRankingJob);
     cleanUpTestData();
 }

+@AfterEach
+void tearDown() {
+    cleanUpTestData();
+}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/batch/job/monthlyranking/MonthlyRankingJobTest.java` around lines 45 - 54, The test class MonthlyRankingJobTest currently only cleans data in setUp() via cleanUpTestData(), which can leave residual state if a test fails; add an `@AfterEach` teardown that calls cleanUpTestData() (and also resets Job metadata if needed via jobLauncherTestUtils.getJobExecution()/remove or similar cleanup) to ensure test isolation; locate setUp(), cleanUpTestData(), and add a new method annotated with `@AfterEach` that invokes cleanUpTestData() and clears any Batch job metadata/state related to monthlyRankingJob/jobLauncherTestUtils.
209-245: The idempotency test actually exercises a "data update" scenario; the name and the behavior differ.
The rerunUpdatesExistingData test adds new data after the first run and re-runs, verifying that the result is updated. That is closer to "incremental data reflection" than to "idempotency." A true idempotency test must verify the same output for the same input.
Adding a separate "re-running with the same data yields the same result" test is recommended.
🧪 Example of a true idempotency test
@Test
@DisplayName("Re-running with the same data produces the same result")
void rerunWithSameDataProducesSameResult() throws Exception {
    // Arrange
    insertDailyMetrics(100L, LocalDate.of(2025, 1, 10), 10, 5, 2, BigDecimal.valueOf(3.0));
    JobParameters params1 = new JobParametersBuilder()
        .addString("targetDate", "20250110")
        .addLong("runId", 1L)
        .toJobParameters();

    // Act - first run
    jobLauncherTestUtils.launchJob(params1);
    var firstRunRankings = monthlyRankRepository
        .findByPeriodStartDateOrderByRankNumberAsc(LocalDate.of(2025, 1, 1));

    // Act - second run (same data)
    JobParameters params2 = new JobParametersBuilder()
        .addString("targetDate", "20250110")
        .addLong("runId", 2L)
        .toJobParameters();
    jobLauncherTestUtils.launchJob(params2);
    var secondRunRankings = monthlyRankRepository
        .findByPeriodStartDateOrderByRankNumberAsc(LocalDate.of(2025, 1, 1));

    // Assert - results are identical
    assertThat(secondRunRankings).hasSize(firstRunRankings.size());
    assertThat(secondRunRankings.get(0).getTotalScore())
        .isEqualByComparingTo(firstRunRankings.get(0).getTotalScore());
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/commerce-api/src/test/java/com/loopers/batch/job/monthlyranking/MonthlyRankingJobTest.java` around lines 209 - 245, The current test rerunUpdatesExistingData actually verifies incremental data reflection, not true idempotency; add a new test (e.g., rerunWithSameDataProducesSameResult) that inserts the same daily metrics, runs the job twice with identical JobParameters (change only runId), and asserts that monthlyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(LocalDate.of(2025,1,1)) yields identical results (same size and equal totalScore/rank for each entry), and consider renaming rerunUpdatesExistingData to indicate "incremental update" to avoid confusion.
apps/commerce-api/src/test/java/com/loopers/interfaces/api/batch/BatchAdminV1ControllerTest.java (4)
33-42: There is no @AfterEach cleanup, so data can linger when a test fails.

Cleaning up only in @BeforeEach means that if a test fails midway, its data remains until the next test runs. This can cause unexpected state during parallel test execution or debugging.

♻️ Suggested @AfterEach addition

```diff
+import org.junit.jupiter.api.AfterEach;

 @BeforeEach
 void setUp() {
     cleanUpTestData();
 }

+@AfterEach
+void tearDown() {
+    cleanUpTestData();
+}
```
76-81: The missing-parameter test does not verify the error response body.

Only status().isBadRequest() is checked; meta.result and meta.errorCode are not verified. It needs to be confirmed that Spring's default MissingServletRequestParameterException handling follows the project's standard ApiControllerAdvice response format.

🧪 Add error response format verification

```diff
 @Test
 @DisplayName("targetDate 파라미터가 없으면 400 에러를 반환한다")
 void runWeeklyRankingJob_MissingParameter() throws Exception {
     mockMvc.perform(post("/api-admin/v1/batch/weekly-ranking"))
-        .andExpect(status().isBadRequest());
+        .andExpect(status().isBadRequest())
+        .andExpect(jsonPath("$.meta.result", is("FAIL")));
 }
```
48-64: In the weekly batch success test, the data setup date does not obviously match the batch target week.

insertDailyMetrics inserts data for 2025-01-13, while the batch runs with targetDate=20250115. The week containing 2025-01-15 (Wednesday) spans Monday, January 13 through Sunday, January 19, so the data is in fact included. However, the test intent is not explicit and may confuse maintainers. Making the intent clear with a comment, or by aligning the dates, is recommended.
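To make the boundary explicit, the week window implied by targetDate=20250115 can be computed with the same TemporalAdjusters idiom the batch code uses; the class name WeekBoundaryDemo is an illustrative sketch, not part of the PR:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.temporal.TemporalAdjusters;

public class WeekBoundaryDemo {
    // Same adjuster the production code uses to find the Monday of the target week
    static LocalDate weekStart(LocalDate target) {
        return target.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
    }

    public static void main(String[] args) {
        LocalDate target = LocalDate.of(2025, 1, 15); // the test's targetDate (a Wednesday)
        LocalDate start = weekStart(target);
        LocalDate end = start.plusDays(6);

        System.out.println(start); // 2025-01-13 (Monday) - the fixture's metric date
        System.out.println(end);   // 2025-01-19 (Sunday)
    }
}
```

A one-line comment with this computed window in the test would document why 2025-01-13 belongs to the aggregated week.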
84-115: The monthly batch tests are missing the missing-parameter case.

The weekly batch tests (WeeklyRankingJob) include runWeeklyRankingJob_MissingParameter, but the monthly batch tests (MonthlyRankingJob) lack the equivalent case. Adding it is recommended for consistent test coverage.

🧪 Missing test case to add

```java
@Test
@DisplayName("targetDate 파라미터가 없으면 400 에러를 반환한다")
void runMonthlyRankingJob_MissingParameter() throws Exception {
    mockMvc.perform(post("/api-admin/v1/batch/monthly-ranking"))
            .andExpect(status().isBadRequest());
}
```
apps/commerce-api/src/test/java/com/loopers/batch/RankingPipelineIntegrationTest.java (3)
357-387: The idempotency test does not verify the "same input, same output" scenario. It currently only checks the row count after a rerun. The essence of idempotency is identical results for identical input: rerun without changing the source data between the first and second runs, and verify that rank and score values are unchanged.

🧪 Example of strengthening the idempotency check

```diff
 // Assert - no duplicated data (same count)
 int countAfterSecondRun = weeklyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(weekStart).size();
 assertThat(countAfterSecondRun).isEqualTo(countAfterFirstRun);
+
+// Assert - ranking data is identical
+var rankingsAfterSecondRun = weeklyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(weekStart);
+assertThat(rankingsAfterSecondRun.get(0).getProductId()).isEqualTo(testProducts.get(0).getId());
+assertThat(rankingsAfterSecondRun.get(0).getTotalScore()).isEqualByComparingTo(/* expected score */);
```
91-101: Calling cleanUp() in both @BeforeEach and @AfterEach is intentional but inefficient.

In tearDown(), calling databaseCleanUp.truncateAllTables() after cleanUp() truncates tables whose rows were already deleted. If truncateAllTables() covers the MV tables, the DELETE statements in cleanUp() are unnecessary.

♻️ Suggested tearDown cleanup

```diff
 @AfterEach
 void tearDown() {
-    cleanUp();
+    cleanUpRedisKeys();
     databaseCleanUp.truncateAllTables();
 }
+
+private void cleanUpRedisKeys() {
+    Set<String> keys = redisTemplate.keys("ranking:*");
+    if (keys != null && !keys.isEmpty()) {
+        redisTemplate.delete(keys);
+    }
+}
```
203-240: The weekly batch test should verify consistency between the data setup window and the aggregation period.

TARGET_DATE = "20250414" aggregates the week of Monday, April 14 through Sunday, April 20, but setupTestData() generates data for April 1-30. The test passes, yet it never explicitly verifies that the weekly aggregate reflects only the seven days of that week (the 14th-20th). If the batch logic changes in production, this test may fail to detect an incorrect aggregation.

Adding assertions that aggregate values such as totalViewCount cover exactly seven days is recommended.

🧪 Example of verifying weekly aggregation accuracy

```diff
 // 1위: Product 1
 assertThat(weeklyRankings.get(0).getProductId()).isEqualTo(testProducts.get(0).getId());
 assertThat(weeklyRankings.get(0).getRankNumber()).isEqualTo(1);
+// Verify 7 days of aggregation (daily viewCount=120 * 7 days = 840)
+assertThat(weeklyRankings.get(0).getTotalViewCount()).isEqualTo(120 * 7);
```
apps/commerce-api/src/test/java/com/loopers/interfaces/api/ranking/RankingV1PeriodApiTest.java (3)
103-122: The default-period test uses LocalDate.now(), which can be flaky across the midnight boundary. Depending on when the test runs, it may query a date for which no Redis data exists. Because this test only verifies the default period value, it passes regardless of data, but it can become a problem as the tests grow.
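One deterministic alternative is deriving "today" from an injectable java.time.Clock (or simply hard-coding a fixture date); the names FixedClockDemo and todayParam below are illustrative, not from the PR:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class FixedClockDemo {
    // LocalDate.now(clock) with a fixed clock is deterministic and never
    // crosses a midnight boundary mid-test, unlike LocalDate.now()
    static String todayParam(Clock clock) {
        return LocalDate.now(clock).format(DateTimeFormatter.ofPattern("yyyyMMdd"));
    }

    public static void main(String[] args) {
        Clock fixed = Clock.fixed(Instant.parse("2025-01-15T12:00:00Z"), ZoneId.of("UTC"));
        System.out.println(todayParam(fixed)); // 20250115 - stable on every run
    }
}
```

In the test, the fixed date can then be used both to seed Redis and to build the request URL, so the two can never drift apart.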
274-296: The insertWeeklyRanking and insertMonthlyRanking helper methods are nearly identical. The two methods share the same structure and differ only in table name. Extracting a parameterized test utility would reduce the duplication. It is not a big problem at the current scale, but any future column addition must be made in both places.
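The delegation the review suggests can be sketched as a single SQL builder keyed by table name; the helper name insertRankingSql and the column names below are assumptions inferred from the entity getters seen elsewhere in this PR, not the actual test code:

```java
public class RankingInsertSqlDemo {
    // Hypothetical shared helper: both insertWeeklyRanking and insertMonthlyRanking
    // could delegate here, passing only the target table name.
    static String insertRankingSql(String table) {
        return "INSERT INTO " + table
                + " (product_id, rank_number, period_start_date, period_end_date,"
                + " total_score, total_view_count, total_like_count, total_order_count)"
                + " VALUES (?, ?, ?, ?, ?, ?, ?, ?)";
    }

    public static void main(String[] args) {
        // Only the table name varies between the two former helpers
        System.out.println(insertRankingSql("mv_product_rank_weekly"));
        System.out.println(insertRankingSql("mv_product_rank_monthly"));
    }
}
```

With this shape, a future column addition touches one string instead of two copy-pasted INSERT statements.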
99-272: Tests for error cases (invalid period value, invalid date format) are missing. For API robustness, the following cases are recommended:

- an invalid period value such as period=INVALID
- an invalid date format such as date=2025-01-15
- invalid page values such as page=0 or page=-1

🧪 Example error-case tests

```java
@Test
@DisplayName("잘못된 period 값은 400 에러를 반환한다")
void invalidPeriodReturnsError() {
    ResponseEntity<ApiResponse<?>> response = testRestTemplate.exchange(
            "/api/v1/rankings?date=20250115&period=INVALID",
            HttpMethod.GET,
            null,
            new ParameterizedTypeReference<>() {}
    );
    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
}

@Test
@DisplayName("잘못된 date 형식은 400 에러를 반환한다")
void invalidDateFormatReturnsError() {
    ResponseEntity<ApiResponse<?>> response = testRestTemplate.exchange(
            "/api/v1/rankings?date=2025-01-15&period=WEEKLY",
            HttpMethod.GET,
            null,
            new ParameterizedTypeReference<>() {}
    );
    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.BAD_REQUEST);
}
```
apps/commerce-api/src/main/java/com/loopers/batch/job/monthlyranking/MonthlyRankingJobConfig.java (1)
117-131: The processor's AtomicInteger state persists across chunks, so rank calculation works correctly.

Because of @StepScope, the processor is instantiated only once per Step execution, so rank continuity across chunks is guaranteed. However, switching to parallel processing would introduce concurrency problems, so documenting the single-threaded assumption in a comment would help maintenance.
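The rank-continuity property the review describes can be illustrated in isolation; ChunkRankDemo below is a plain-Java sketch of a single counter shared across sequential chunks, standing in for the @StepScope processor (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ChunkRankDemo {
    // One counter per "step execution", shared across all chunks - this only
    // yields a continuous 1..N ranking if chunks are processed single-threaded,
    // which is exactly the assumption the review asks to document.
    static List<Integer> assignRanks(List<List<String>> chunks) {
        AtomicInteger rankCounter = new AtomicInteger(0);
        List<Integer> ranks = new ArrayList<>();
        for (List<String> chunk : chunks) {
            for (String item : chunk) {
                ranks.add(rankCounter.incrementAndGet());
            }
        }
        return ranks;
    }

    public static void main(String[] args) {
        // Two chunks processed sequentially, like a chunk-oriented step
        List<Integer> ranks = assignRanks(List.of(List.of("A", "B", "C"), List.of("D", "E")));
        System.out.println(ranks); // [1, 2, 3, 4, 5] - continuous across the chunk boundary
    }
}
```

With parallel chunk processing, the increment order would interleave and ranks would no longer match the reader's sort order, which is why the single-threaded assumption matters.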
apps/commerce-api/src/main/java/com/loopers/infrastructure/persistence/jpa/rank/ProductRankMonthlyRepositoryImpl.java (1)
47-52: findByProductIdOrderByPeriodStartDateDesc is currently unused, and if it is ever exposed through an API, paging support is mandatory.

The method is not called from the service layer today. However, if a product accumulates years of monthly ranking history, loading it all without paging becomes a memory burden. Choose one of the following:

- Remove the unused method: if there is no usage plan, delete it from the interface and the implementation.
- Add paging support: if API exposure is planned, refactor it to accept a Pageable parameter and return Page<ProductRankMonthly>.
apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingQueryService.java (lines 287-331)

```java
private List<PeriodRankingResult> getWeeklyPeriodRankings(LocalDate date, int size, int offset) {
    LocalDate weekStart = date.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));

    List<ProductRankWeeklyJpaEntity> entities = weeklyRankRepository
            .findByPeriodStartDateOrderByRankNumberAsc(weekStart);

    if (entities.isEmpty()) {
        return List.of();
    }

    // Apply paging
    int endIndex = Math.min(offset + size, entities.size());
    if (offset >= entities.size()) {
        return List.of();
    }
    List<ProductRankWeeklyJpaEntity> pagedEntities = entities.subList(offset, endIndex);

    // Load product details
    List<Long> productIds = pagedEntities.stream()
            .map(ProductRankWeeklyJpaEntity::getProductId)
            .toList();

    Map<Long, Product> productMap = productRepository.findAllByIds(productIds).stream()
            .collect(Collectors.toMap(Product::getId, Function.identity()));

    return pagedEntities.stream()
            .map(entity -> {
                Product product = productMap.get(entity.getProductId());
                return PeriodRankingResult.ofPeriod(
                        entity.getRankNumber(),
                        entity.getProductId(),
                        product != null ? product.getName() : null,
                        product != null ? product.getPrice().amount() : null,
                        product != null ? product.getImageUrl() : null,
                        entity.getTotalScore(),
                        entity.getTotalViewCount(),
                        entity.getTotalLikeCount(),
                        entity.getTotalOrderCount(),
                        RankingPeriod.WEEKLY,
                        entity.getPeriodStartDate(),
                        entity.getPeriodEndDate()
                );
            })
            .filter(r -> r.productName() != null) // filter out deleted products
            .toList();
}
```
Paging with subList after loading all data - performance and correctness issues

- Performance: findByPeriodStartDateOrderByRankNumberAsc loads every ranking row for the week (up to 100) into memory and then pages with subList. The impact is small under the current TOP 100 cap, but it can become a problem at scale.
- Correctness: the .filter(r -> r.productName() != null) on line 330 runs after paging, so when deleted products are present the response can contain fewer items than the requested size.

Operationally, if a client requests size=10 but receives only 8 items, the paging UI can break. Filter deleted products before paging, or filter at the DB level with a JOIN.

Suggested fix - move filtering before paging
private List<PeriodRankingResult> getWeeklyPeriodRankings(LocalDate date, int size, int offset) {
LocalDate weekStart = date.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
List<ProductRankWeeklyJpaEntity> entities = weeklyRankRepository
.findByPeriodStartDateOrderByRankNumberAsc(weekStart);
if (entities.isEmpty()) {
return List.of();
}
- // Apply paging
- int endIndex = Math.min(offset + size, entities.size());
- if (offset >= entities.size()) {
- return List.of();
- }
- List<ProductRankWeeklyJpaEntity> pagedEntities = entities.subList(offset, endIndex);
-
// Load product details
- List<Long> productIds = pagedEntities.stream()
+ List<Long> productIds = entities.stream()
.map(ProductRankWeeklyJpaEntity::getProductId)
.toList();
Map<Long, Product> productMap = productRepository.findAllByIds(productIds).stream()
.collect(Collectors.toMap(Product::getId, Function.identity()));
- return pagedEntities.stream()
+ // Filter deleted products, then apply paging
+ List<PeriodRankingResult> allResults = entities.stream()
.map(entity -> {
Product product = productMap.get(entity.getProductId());
+ if (product == null) {
+ return null;
+ }
return PeriodRankingResult.ofPeriod(
entity.getRankNumber(),
entity.getProductId(),
- product != null ? product.getName() : null,
- product != null ? product.getPrice().amount() : null,
- product != null ? product.getImageUrl() : null,
+ product.getName(),
+ product.getPrice().amount(),
+ product.getImageUrl(),
entity.getTotalScore(),
entity.getTotalViewCount(),
entity.getTotalLikeCount(),
entity.getTotalOrderCount(),
RankingPeriod.WEEKLY,
entity.getPeriodStartDate(),
entity.getPeriodEndDate()
);
})
- .filter(r -> r.productName() != null) // filter deleted products
+ .filter(Objects::nonNull)
.toList();
+
+ // Apply paging
+ int endIndex = Math.min(offset + size, allResults.size());
+ if (offset >= allResults.size()) {
+ return List.of();
+ }
+ return allResults.subList(offset, endIndex);
}
apps/commerce-api/src/main/java/com/loopers/application/ranking/RankingQueryService.java (lines 381-389)

```java
private long getWeeklyTotalCount(LocalDate date) {
    LocalDate weekStart = date.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
    return weeklyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(weekStart).size();
}

private long getMonthlyTotalCount(LocalDate date) {
    LocalDate monthStart = YearMonth.from(date).atDay(1);
    return monthlyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(monthStart).size();
}
```
🛠️ Refactor suggestion | 🟠 Major
totalCount queries load the full result set - wasted resources

getWeeklyTotalCount and getMonthlyTotalCount load the entire entity list and then call size(). Loading every column into memory just to count rows is inefficient.

Adding a countByPeriodStartDate(LocalDate) method to the repositories so a COUNT query runs at the DB level is recommended.

Suggested fix
private long getWeeklyTotalCount(LocalDate date) {
LocalDate weekStart = date.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
- return weeklyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(weekStart).size();
+ return weeklyRankRepository.countByPeriodStartDate(weekStart);
}
private long getMonthlyTotalCount(LocalDate date) {
LocalDate monthStart = YearMonth.from(date).atDay(1);
- return monthlyRankRepository.findByPeriodStartDateOrderByRankNumberAsc(monthStart).size();
+ return monthlyRankRepository.countByPeriodStartDate(monthStart);
}🤖 Prompt for AI Agents
apps/commerce-api/src/main/java/com/loopers/batch/job/common/RankingJobConstants.java (lines 29-30)

```sql
ORDER BY total_score DESC
LIMIT {limit}
```
Without a tie-breaker for equal scores, the ranking can shift between runs.

Why it matters (operationally): lines 29-30 sort by total_score alone, so the order of tied products depends on the DB execution plan; even re-aggregating identical data can produce different ranks.

Fix: add a secondary sort key (e.g., product_id ASC) to make the result deterministic.

Additional test: insert two or more products with identical total_score, run the aggregation Job twice, and verify in an integration test that the rank order is exactly the same across both runs.

Suggested fix
- ORDER BY total_score DESC
+ ORDER BY total_score DESC, product_id ASC
  LIMIT {limit}
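The effect of the tie-break can be sketched outside SQL with an equivalent comparator; Row below is an illustrative stand-in for the aggregated ranking record:

```java
import java.util.Comparator;
import java.util.List;

public class TieBreakDemo {
    record Row(long productId, int totalScore) {}

    // Same ordering as the fixed SQL: total_score DESC, then product_id ASC
    static List<Row> rank(List<Row> rows) {
        return rows.stream()
                .sorted(Comparator.comparingInt(Row::totalScore).reversed()
                        .thenComparingLong(Row::productId))
                .toList();
    }

    public static void main(String[] args) {
        // Two products tied on score; input order differs between "runs"
        List<Row> run1 = rank(List.of(new Row(7, 90), new Row(3, 90), new Row(5, 80)));
        List<Row> run2 = rank(List.of(new Row(3, 90), new Row(7, 90), new Row(5, 80)));

        System.out.println(run1.equals(run2));       // true - order is deterministic
        System.out.println(run1.get(0).productId()); // 3 - the lower id wins the tie
    }
}
```

Without the thenComparingLong tie-break, the two "runs" could legally return the tied rows in either order, which is exactly the instability the review describes at the SQL level.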
apps/commerce-api/src/main/java/com/loopers/batch/job/monthlyranking/MonthlyRankingJobConfig.java

```java
return items -> {
    if (!items.getItems().isEmpty()) {
        log.info("Deleting existing monthly ranking data: monthStart={}", monthStart);
        monthlyRankRepository.deleteByPeriodStartDate(monthStart);
    }

    log.info("Saving {} monthly ranking records for monthStart={}", items.size(), monthStart);
    monthlyRankRepository.saveAll(items.getItems());
};
```
The ItemWriter's DELETE logic runs on every chunk, risking data loss.

ItemWriter.write() is invoked once per chunk. The current implementation is not guarded, so DELETE runs on every chunk: rows saved by the first chunk are deleted when the next chunk is processed. For example, after the first chunk (100 records) is deleted-then-inserted, processing the second chunk (100 records) runs DELETE again and wipes the first chunk's data.

In production, whenever the data exceeds the chunk size, only the last chunk's data survives - a serious data loss. WeeklyRankingJobConfig uses the same pattern and is exposed to the same risk.

The current tests only use small data sets (3-5 rows), so they cannot detect this bug.

🐛 Suggested fix: run the DELETE only once
+import org.springframework.batch.core.StepExecution;
+import org.springframework.batch.core.StepExecutionListener;
+import java.util.concurrent.atomic.AtomicBoolean;
`@Bean`
`@StepScope`
public ItemWriter<ProductRankMonthlyJpaEntity> monthlyRankingWriter(
`@Value`("#{jobParameters['targetDate']}") String targetDate
) {
LocalDate target = LocalDate.parse(targetDate, RankingJobConstants.DATE_FORMATTER);
YearMonth yearMonth = YearMonth.from(target);
LocalDate monthStart = yearMonth.atDay(1);
+ AtomicBoolean deleted = new AtomicBoolean(false);
return items -> {
- if (!items.getItems().isEmpty()) {
+ if (!items.getItems().isEmpty() && deleted.compareAndSet(false, true)) {
log.info("Deleting existing monthly ranking data: monthStart={}", monthStart);
monthlyRankRepository.deleteByPeriodStartDate(monthStart);
}
log.info("Saving {} monthly ranking records for monthStart={}", items.size(), monthStart);
monthlyRankRepository.saveAll(items.getItems());
};
}또는 @BeforeStep을 사용하는 방식:
```java
@Bean
@StepScope
public ItemWriter<ProductRankMonthlyJpaEntity> monthlyRankingWriter(
        @Value("#{jobParameters['targetDate']}") String targetDate
) {
    LocalDate target = LocalDate.parse(targetDate, RankingJobConstants.DATE_FORMATTER);
    YearMonth yearMonth = YearMonth.from(target);
    LocalDate monthStart = yearMonth.atDay(1);

    return new ItemWriter<>() {
        @BeforeStep
        public void beforeStep(StepExecution stepExecution) {
            // Runs exactly once, before any chunk is written
            log.info("Deleting existing monthly ranking data: monthStart={}", monthStart);
            monthlyRankRepository.deleteByPeriodStartDate(monthStart);
        }

        @Override
        public void write(Chunk<? extends ProductRankMonthlyJpaEntity> items) {
            log.info("Saving {} monthly ranking records for monthStart={}", items.size(), monthStart);
            monthlyRankRepository.saveAll(items.getItems());
        }
    };
}
```

In addition, a test like the following should verify the multi-chunk scenario:
```java
@Test
@DisplayName("Data spread across multiple chunks must be preserved without loss")
void multiChunkProcessing() throws Exception {
    // Insert more rows than CHUNK_SIZE (100)
    for (int i = 1; i <= 250; i++) {
        insertDailyMetrics(
                (long) i,
                LocalDate.of(2025, 1, 15),
                10 + i, 5, 2,
                BigDecimal.valueOf(3.0 + i * 0.1)
        );
    }

    JobParameters params = new JobParametersBuilder()
            .addString("targetDate", "20250115")
            .addLong("runId", System.currentTimeMillis())
            .toJobParameters();

    JobExecution jobExecution = jobLauncherTestUtils.launchJob(params);

    assertThat(jobExecution.getStatus()).isEqualTo(BatchStatus.COMPLETED);

    List<ProductRankMonthlyJpaEntity> rankings = monthlyRankRepository
            .findByPeriodStartDateOrderByRankNumberAsc(LocalDate.of(2025, 1, 1));

    // All TOP_N (100) rankings must be retained
    assertThat(rankings).hasSize(100);
}
```

The same fix should be applied to WeeklyRankingJobConfig, which uses the identical pattern.
```java
public record WeeklyMetricsAggregation(
        Long productId,
        long totalViewCount,
        long totalLikeCount,
        long totalOrderCount,
        BigDecimal totalScore
) {
```
🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Purpose: check whether WeeklyMetricsAggregation is actually used, and find duplicate paths.
# Expected results:
#   1) If there are no references to WeeklyMetricsAggregation, it is a removal candidate.
#   2) If references are mixed with RankingMetricsAggregation, they should be unified.
rg -n --type=java '\bWeeklyMetricsAggregation\b'
rg -n --type=java '\bRankingMetricsAggregation\b'
```
Remove the unused WeeklyMetricsAggregation

WeeklyMetricsAggregation is only defined; it is never imported or used anywhere in the codebase. The actual weekly batch job (WeeklyRankingJobConfig) uses RankingMetricsAggregation from the common package. Unused code increases maintenance burden and confuses developers, so it should be removed.
```java
@Test
@Order(7)
@DisplayName("7. Daily ranking API lookup (Redis-based)")
void getDailyRankingsApi() {
    // Arrange - insert daily data into Redis
    String today = LocalDate.now().format(DATE_FORMATTER);
    String key = "ranking:all:" + today;

    redisTemplate.opsForZSet().add(key, String.valueOf(testProducts.get(0).getId()), 100.0);
    redisTemplate.opsForZSet().add(key, String.valueOf(testProducts.get(1).getId()), 80.0);
    redisTemplate.opsForZSet().add(key, String.valueOf(testProducts.get(2).getId()), 60.0);

    // Act - call the daily ranking API
    ResponseEntity<ApiResponse<RankingV1Dto.PeriodRankingPageResponse>> response = testRestTemplate.exchange(
            "/api/v1/rankings?date=" + today + "&period=DAILY&size=10&page=1",
            HttpMethod.GET,
            null,
            new ParameterizedTypeReference<>() {}
    );

    // Assert
    assertThat(response.getStatusCode()).isEqualTo(HttpStatus.OK);
    assertThat(response.getBody()).isNotNull();

    var data = response.getBody().data();
    assertThat(data.period().name()).isEqualTo("DAILY");
    assertThat(data.periodStart()).isEqualTo(today);
    assertThat(data.periodEnd()).isEqualTo(today);
    assertThat(data.rankings()).hasSize(3);

    // Daily rankings have null viewCount/likeCount/orderCount
    assertThat(data.rankings().get(0).viewCount()).isNull();
    assertThat(data.rankings().get(0).likeCount()).isNull();
    assertThat(data.rankings().get(0).orderCount()).isNull();
}
```
The daily ranking test can be flaky because it uses LocalDate.now().

If the test runs around midnight, the date used when building the Redis key can differ from the date at the time of the API call. The other tests use a fixed date (TARGET_DATE), so a fixed date is recommended here for consistency.
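As an alternative sketch (assuming the code under test could take an injected `java.time.Clock`, which the current project may not support), pinning the clock also removes the midnight race without hardcoding a date string at every call site:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

// Hypothetical helper: derive "today" from an injectable Clock so the Redis
// key and the API request always agree, even across a midnight boundary.
public class FixedClockExample {
    static final DateTimeFormatter DATE_FORMATTER = DateTimeFormatter.ofPattern("yyyyMMdd");

    static String todayKey(Clock clock) {
        String today = LocalDate.now(clock).format(DATE_FORMATTER);
        return "ranking:all:" + today;
    }

    public static void main(String[] args) {
        // In production: Clock.systemDefaultZone(); in tests: a fixed instant.
        Clock fixed = Clock.fixed(Instant.parse("2025-04-14T00:00:00Z"), ZoneId.of("UTC"));
        System.out.println(todayKey(fixed)); // ranking:all:20250414
    }
}
```

The hardcoded-date fix below is simpler and needs no production change; the Clock approach pays off if more time-dependent tests accumulate.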
🛡️ Suggested fix using a fixed date

```diff
 @Test
 @Order(7)
 @DisplayName("7. Daily ranking API lookup (Redis-based)")
 void getDailyRankingsApi() {
     // Arrange - insert daily data into Redis
-    String today = LocalDate.now().format(DATE_FORMATTER);
+    String today = "20250414"; // fixed date
     String key = "ranking:all:" + today;
```
```sql
rank_number       INT            NOT NULL COMMENT 'Rank within the week (1~100)',
total_score       DECIMAL(15, 4) NOT NULL DEFAULT 0 COMMENT 'Weekly aggregated score',
total_view_count  BIGINT         NOT NULL DEFAULT 0 COMMENT 'Weekly view count total',
total_like_count  BIGINT         NOT NULL DEFAULT 0 COMMENT 'Weekly like count total',
total_order_count BIGINT         NOT NULL DEFAULT 0 COMMENT 'Weekly order count total',
period_start_date DATE           NOT NULL COMMENT 'Aggregation start date (Monday)',
period_end_date   DATE           NOT NULL COMMENT 'Aggregation end date (Sunday)',
```
Rank range and aggregate-value integrity should be enforced at the DB level

If a batch bug or a manual load mistake stores a rank_number outside the 1~100 range, or a negative aggregate value, API responses lose credibility. In production, exposing a wrong rank immediately becomes an incident.

- Suggested fix: add `CHECK (rank_number BETWEEN 1 AND 100)` and non-negativity constraints on the aggregate columns to both the weekly and monthly tables.
- Additional tests: the migration verification should confirm that inserts with out-of-range ranks (0, 101) and negative aggregate values fail.

DDL example

```diff
     UNIQUE INDEX uk_weekly_period_product (period_start_date, product_id),
+    CONSTRAINT chk_weekly_rank_range CHECK (rank_number BETWEEN 1 AND 100),
+    CONSTRAINT chk_weekly_totals_non_negative
+        CHECK (total_score >= 0 AND total_view_count >= 0 AND total_like_count >= 0 AND total_order_count >= 0)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
@@
     UNIQUE INDEX uk_monthly_period_product (period_start_date, product_id),
+    CONSTRAINT chk_monthly_rank_range CHECK (rank_number BETWEEN 1 AND 100),
+    CONSTRAINT chk_monthly_totals_non_negative
+        CHECK (total_score >= 0 AND total_view_count >= 0 AND total_like_count >= 0 AND total_order_count >= 0)
 ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
```

Also applies to: 66-73
```sql
INDEX idx_daily_metric_date (metric_date),

-- Per-product daily metric history lookups
INDEX idx_daily_product (product_id),

-- Prevent duplicate rows for the same product on the same date
UNIQUE INDEX uk_daily_product_date (product_id, metric_date)
```
A composite date+product index is needed for the aggregation batch query

With the current indexes, a range scan on metric_date followed by grouping on product_id can incur significant temporary-sort and extra-scan cost. As data accumulates, batch times grow and scheduled runs may slip.

- Suggested fix: add `INDEX (metric_date, product_id)` to match the aggregation query's access pattern.
- Additional tests: compare the weekly/monthly aggregation SQL execution plans with EXPLAIN and confirm reduced `rows` and the removal of `using temporary` / `filesort`.

DDL example

```diff
     INDEX idx_daily_metric_date (metric_date),
+    INDEX idx_daily_metric_date_product (metric_date, product_id),
```
```sql
-- Product IDs 1~200, daily metrics
-- ============================================================
```
The product range in the comment does not match what is actually loaded

The comment claims product IDs 1~200, but the script only loads 1~50. If test operators assume the wrong data volume, performance numbers and result interpretation get skewed.

- Suggested fix: either correct the comment to 1~50 or make the loaded range match the comment.
- Additional tests: after loading, add a verification query checking that `COUNT(DISTINCT product_id)` matches the expected value (50 or 200).
```sql
INSERT INTO product_metrics_daily (product_id, metric_date, view_count, like_count, order_count, score, created_at, updated_at)
SELECT 1, DATE_ADD('2025-04-01', INTERVAL n DAY),
       100 + FLOOR(RAND() * 50),   -- view: 100~150
       50 + FLOOR(RAND() * 20),    -- like: 50~70
       20 + FLOOR(RAND() * 10),    -- order: 20~30
       25.0 + RAND() * 5,          -- score: 25~30
       NOW(), NOW()
FROM (
    SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
    UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9
    UNION SELECT 10 UNION SELECT 11 UNION SELECT 12 UNION SELECT 13 UNION SELECT 14
    UNION SELECT 15 UNION SELECT 16 UNION SELECT 17 UNION SELECT 18 UNION SELECT 19
    UNION SELECT 20 UNION SELECT 21 UNION SELECT 22 UNION SELECT 23 UNION SELECT 24
    UNION SELECT 25 UNION SELECT 26 UNION SELECT 27 UNION SELECT 28 UNION SELECT 29
) AS days
WHERE DATE_ADD('2025-04-01', INTERVAL n DAY) <= '2025-04-30';

-- Product 2: expected rank 2
INSERT INTO product_metrics_daily (product_id, metric_date, view_count, like_count, order_count, score, created_at, updated_at)
SELECT 2, DATE_ADD('2025-04-01', INTERVAL n DAY),
       80 + FLOOR(RAND() * 40),    -- view: 80~120
       40 + FLOOR(RAND() * 15),    -- like: 40~55
       15 + FLOOR(RAND() * 8),     -- order: 15~23
       20.0 + RAND() * 4,          -- score: 20~24
       NOW(), NOW()
FROM (
    SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
    UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9
    UNION SELECT 10 UNION SELECT 11 UNION SELECT 12 UNION SELECT 13 UNION SELECT 14
    UNION SELECT 15 UNION SELECT 16 UNION SELECT 17 UNION SELECT 18 UNION SELECT 19
    UNION SELECT 20 UNION SELECT 21 UNION SELECT 22 UNION SELECT 23 UNION SELECT 24
    UNION SELECT 25 UNION SELECT 26 UNION SELECT 27 UNION SELECT 28 UNION SELECT 29
) AS days
WHERE DATE_ADD('2025-04-01', INTERVAL n DAY) <= '2025-04-30';

-- Product 3: expected rank 3
INSERT INTO product_metrics_daily (product_id, metric_date, view_count, like_count, order_count, score, created_at, updated_at)
SELECT 3, DATE_ADD('2025-04-01', INTERVAL n DAY),
       60 + FLOOR(RAND() * 30),    -- view: 60~90
       30 + FLOOR(RAND() * 10),    -- like: 30~40
       10 + FLOOR(RAND() * 5),     -- order: 10~15
       15.0 + RAND() * 3,          -- score: 15~18
       NOW(), NOW()
FROM (
    SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
    UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9
    UNION SELECT 10 UNION SELECT 11 UNION SELECT 12 UNION SELECT 13 UNION SELECT 14
    UNION SELECT 15 UNION SELECT 16 UNION SELECT 17 UNION SELECT 18 UNION SELECT 19
    UNION SELECT 20 UNION SELECT 21 UNION SELECT 22 UNION SELECT 23 UNION SELECT 24
    UNION SELECT 25 UNION SELECT 26 UNION SELECT 27 UNION SELECT 28 UNION SELECT 29
) AS days
WHERE DATE_ADD('2025-04-01', INTERVAL n DAY) <= '2025-04-30';

-- Products 4~50: mid-ranked products (random scores)
INSERT INTO product_metrics_daily (product_id, metric_date, view_count, like_count, order_count, score, created_at, updated_at)
SELECT product_id, DATE_ADD('2025-04-01', INTERVAL n DAY),
       20 + FLOOR(RAND() * 50),    -- view: 20~70
       10 + FLOOR(RAND() * 20),    -- like: 10~30
       2 + FLOOR(RAND() * 8),      -- order: 2~10
       5.0 + RAND() * 8,           -- score: 5~13
       NOW(), NOW()
FROM (
    SELECT 4 AS product_id UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8
    UNION SELECT 9 UNION SELECT 10 UNION SELECT 11 UNION SELECT 12 UNION SELECT 13
    UNION SELECT 14 UNION SELECT 15 UNION SELECT 16 UNION SELECT 17 UNION SELECT 18
    UNION SELECT 19 UNION SELECT 20 UNION SELECT 21 UNION SELECT 22 UNION SELECT 23
    UNION SELECT 24 UNION SELECT 25 UNION SELECT 26 UNION SELECT 27 UNION SELECT 28
    UNION SELECT 29 UNION SELECT 30 UNION SELECT 31 UNION SELECT 32 UNION SELECT 33
    UNION SELECT 34 UNION SELECT 35 UNION SELECT 36 UNION SELECT 37 UNION SELECT 38
    UNION SELECT 39 UNION SELECT 40 UNION SELECT 41 UNION SELECT 42 UNION SELECT 43
    UNION SELECT 44 UNION SELECT 45 UNION SELECT 46 UNION SELECT 47 UNION SELECT 48
    UNION SELECT 49 UNION SELECT 50
) AS products
CROSS JOIN (
    SELECT 0 AS n UNION SELECT 1 UNION SELECT 2 UNION SELECT 3 UNION SELECT 4
    UNION SELECT 5 UNION SELECT 6 UNION SELECT 7 UNION SELECT 8 UNION SELECT 9
    UNION SELECT 10 UNION SELECT 11 UNION SELECT 12 UNION SELECT 13 UNION SELECT 14
    UNION SELECT 15 UNION SELECT 16 UNION SELECT 17 UNION SELECT 18 UNION SELECT 19
    UNION SELECT 20 UNION SELECT 21 UNION SELECT 22 UNION SELECT 23 UNION SELECT 24
    UNION SELECT 25 UNION SELECT 26 UNION SELECT 27 UNION SELECT 28 UNION SELECT 29
) AS days
WHERE DATE_ADD('2025-04-01', INTERVAL n DAY) <= '2025-04-30';
```
Non-deterministic RAND() hurts test reproducibility

If values change every time the script is re-run on the same branch, reproducing failures and regression analysis become difficult, and CI instability slows incident response considerably.

- Suggested fix: pin the values with seeded randomness (`RAND(fixed expression)`) or a deterministic formula (`MOD`) so every run generates identical data.
- Additional tests: run the script twice in a row and verify that `SUM(score)` and `SUM(view_count)` are identical.

Deterministic data generation example (pattern)

```diff
-       100 + FLOOR(RAND() * 50),   -- view: 100~150
-       50 + FLOOR(RAND() * 20),    -- like: 50~70
-       20 + FLOOR(RAND() * 10),    -- order: 20~30
-       25.0 + RAND() * 5,          -- score: 25~30
+       100 + FLOOR(RAND(1000 + n) * 50),  -- view: 100~150 (deterministic)
+       50 + FLOOR(RAND(2000 + n) * 20),   -- like: 50~70 (deterministic)
+       20 + FLOOR(RAND(3000 + n) * 10),   -- order: 20~30 (deterministic)
+       25.0 + RAND(4000 + n) * 5,         -- score: 25~30 (deterministic)
```

🧰 Tools
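The seeded idea can also be mirrored in plain Java (a hypothetical helper with invented names, not part of the project): deriving every metric purely from `(productId, dayOffset)` guarantees identical data on every run, which is what makes `SUM` comparisons across runs a valid check.

```java
// Deterministic metric generator: no RNG state, so two "runs" over the same
// inputs always produce the same totals.
public class DeterministicMetrics {

    // view: 100~149, fixed for a given (productId, day)
    static long viewCount(long productId, int day) {
        return 100 + Math.floorMod(productId * 31 + day * 7, 50);
    }

    // like: 50~69, fixed for a given (productId, day)
    static long likeCount(long productId, int day) {
        return 50 + Math.floorMod(productId * 13 + day * 3, 20);
    }

    public static void main(String[] args) {
        long run1 = 0, run2 = 0;
        for (int day = 0; day < 30; day++) {
            run1 += viewCount(1L, day);
            run2 += viewCount(1L, day); // second "run" over identical inputs
        }
        System.out.println(run1 == run2); // true - reruns match exactly
    }
}
```

The multiplier constants here are arbitrary; any pure function of the inputs works, as long as it spreads values across the intended range.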
🪛 SQLFluff (4.1.0)

[error] AM02: 'UNION [DISTINCT|ALL]' is preferred over just 'UNION'. (reported repeatedly for every bare UNION in the day/product number tables)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/test-data/ranking_test_data.sql` (around lines 23 - 104), the INSERTs into product_metrics_daily use nondeterministic RAND() calls (in the three per-product SELECTs and in the products CROSS JOIN block), which prevents reproducible test data. Update each SELECT to use a deterministic generator (e.g., RAND(fixed_seed + n + product_id), or replace RAND() with pure arithmetic based on n and product_id using MOD/linear formulas) for view_count, like_count, order_count, and score, so that the same inputs (aliases days.n and products.product_id) always produce the same outputs. Apply this change in the three SELECT blocks that insert for product_id 1, 2, 3 and in the block using products/CROSS JOIN, then validate by running the script twice and comparing SUM(score) and SUM(view_count).
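The pure-arithmetic variant of that suggestion could look like the following plain-Java sketch. The coefficients, value ranges, and score weights below are illustrative assumptions, not values taken from the original SQL script; the point is only that each metric is a pure function of (dayIndex, productId), so repeated runs produce identical rows:

```java
// Deterministic stand-in for RAND(): every metric is derived from
// (dayIndex, productId) via linear/MOD arithmetic, so re-running the
// generator yields identical data. All coefficients are illustrative.
public final class DeterministicMetrics {
    private DeterministicMetrics() {}

    public static long viewCount(int dayIndex, long productId) {
        return 50 + (dayIndex * 31L + productId * 17L) % 951; // 50..1000
    }

    public static long likeCount(int dayIndex, long productId) {
        return (dayIndex * 13L + productId * 7L) % 101;       // 0..100
    }

    public static long orderCount(int dayIndex, long productId) {
        return (dayIndex * 5L + productId * 3L) % 21;         // 0..20
    }

    // Assumed weighting; substitute whatever formula the real script uses.
    public static double score(int dayIndex, long productId) {
        return viewCount(dayIndex, productId)
                + 2.0 * likeCount(dayIndex, productId)
                + 5.0 * orderCount(dayIndex, productId);
    }
}
```

The equivalent SQL expression (e.g. `50 + MOD(days.n * 31 + products.product_id * 17, 951)` for view_count) can replace each RAND() call in place; running the script twice then yields identical SUM(score) and SUM(view_count).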
📌 Summary
🧭 Context & Decision
Problem Definition
Options and Decisions
1. Batch processing model
2. MV refresh strategy
3. Daily ranking data source
4. API branching strategy
5. Batch Job layout (separate vs. combined)
🏗️ Design Overview
Scope of Changes
- batch/job/weeklyranking/, batch/job/monthlyranking/ - batch Jobs
- batch/scheduler/ - operational scheduler
- infrastructure/persistence/jpa/rank/ - MV entities/repositories
- interfaces/api/batch/ - Admin API
- V002__create_product_rank_mv_tables.sql, V003__create_product_metrics_daily_table.sql
Key Component Responsibilities
- WeeklyRankingJobConfig: defines the weekly ranking aggregation Job (Monday-Sunday range)
- MonthlyRankingJobConfig: defines the monthly ranking aggregation Job (first through last day of the month)
- RankingJobConstants: shared constants, SQL templates, date formatters
- RankingMetricsAggregation: aggregation result DTO + RowMapper
- BatchAdminV1Controller: REST API for triggering Job execution
- RankingJobScheduler: cron-based automatic execution (enabled only in production)
- RankingQueryService: period-based ranking lookup (DAILY/WEEKLY/MONTHLY branching)
🔁 Flow Diagram
Batch Job Execution Flow
sequenceDiagram
    autonumber
    participant Admin as Admin Client
    participant API as BatchAdminV1Controller
    participant Launcher as JobLauncher
    participant Job as WeeklyRankingJob
    participant Reader as JdbcCursorItemReader
    participant Processor as ItemProcessor
    participant Writer as ItemWriter
    participant DB as MySQL
    Admin->>API: POST /api-admin/v1/batch/weekly-ranking?targetDate=20250414
    API->>API: validateTargetDateFormat()
    API->>Launcher: run(job, params)
    Launcher->>Job: execute()
    loop Chunk Processing (100 items per chunk)
        Job->>Reader: read()
        Reader->>DB: SELECT SUM() GROUP BY product_id ORDER BY score DESC LIMIT 100
        DB-->>Reader: RankingMetricsAggregation[]
        Reader-->>Job: chunk
        Job->>Processor: process(chunk)
        Processor->>Processor: assign rank_number
        Processor-->>Job: Entity[]
        Job->>Writer: write(entities)
        Writer->>DB: DELETE FROM mv_product_rank_weekly WHERE period_start_date = ?
        Writer->>DB: INSERT INTO mv_product_rank_weekly
        DB-->>Writer: success
    end
    Job-->>Launcher: JobExecution(COMPLETED)
    Launcher-->>API: execution
    API-->>Admin: { status: COMPLETED, executionId: 1 }
Period Ranking API Query Flow
sequenceDiagram
    autonumber
    participant Client
    participant Controller as RankingV1Controller
    participant Service as RankingQueryService
    participant Redis
    participant WeeklyRepo as ProductRankWeeklyJpaRepository
    participant MonthlyRepo as ProductRankMonthlyJpaRepository
    participant ProductRepo as ProductRepository
    Client->>Controller: GET /api/v1/rankings?date=20250414&period=WEEKLY
    alt period = DAILY
        Controller->>Service: getPeriodRankings(date, DAILY)
        Service->>Redis: ZREVRANGE ranking:all:20250414
        Redis-->>Service: [productId, score][]
    else period = WEEKLY
        Controller->>Service: getPeriodRankings(date, WEEKLY)
        Service->>Service: calculateWeekStart(20250414) → 20250414 (Monday)
        Service->>WeeklyRepo: findByPeriodStartDateOrderByRankNumberAsc(2025-04-14)
        WeeklyRepo-->>Service: ProductRankWeeklyJpaEntity[]
    else period = MONTHLY
        Controller->>Service: getPeriodRankings(date, MONTHLY)
        Service->>Service: calculateMonthStart(20250414) → 20250401
        Service->>MonthlyRepo: findByPeriodStartDateOrderByRankNumberAsc(2025-04-01)
        MonthlyRepo-->>Service: ProductRankMonthlyJpaEntity[]
    end
    Service->>ProductRepo: findAllByIds(productIds)
    ProductRepo-->>Service: Product[]
    Service->>Service: merge ranking + product info
    Service-->>Controller: PeriodRankingResult[]
    Controller-->>Client: { rankings: [...], period: WEEKLY, periodStart: 20250414, periodEnd: 20250420 }
✅ Tests
All 47 test cases PASS
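The calculateWeekStart/calculateMonthStart steps in the flow above, together with the switch-based period branching chosen in decision 4, can be sketched with java.time as follows. The names here are illustrative assumptions; the real helpers live in RankingQueryService/RankingJobConstants and may differ in detail:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.TemporalAdjusters;

// Sketch of period-start resolution for the DAILY/WEEKLY/MONTHLY branches.
public final class PeriodStartSketch {
    public enum RankingPeriod { DAILY, WEEKLY, MONTHLY }

    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMdd");

    public static LocalDate periodStart(String yyyymmdd, RankingPeriod period) {
        LocalDate date = LocalDate.parse(yyyymmdd, FMT);
        return switch (period) {
            case DAILY -> date; // Redis ZSET key is derived from the date itself
            case WEEKLY -> date.with(TemporalAdjusters.previousOrSame(DayOfWeek.MONDAY));
            case MONTHLY -> date.withDayOfMonth(1);
        };
    }
}
```

For 20250414 (a Monday), WEEKLY resolves to 2025-04-14 and MONTHLY to 2025-04-01, matching the values in the diagram above.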
Purpose of Change
To support marketing and long-term trend-analysis requirements, this adds weekly/monthly ranking aggregation and provides a period-based query API alongside the existing daily Redis-based ranking.
Key Changes
Weekly/monthly ranking aggregation is implemented as Spring Batch chunk-oriented Jobs. To support this, the mv_product_rank_weekly and mv_product_rank_monthly materialized tables (V002 DDL) and the product_metrics_daily daily-metrics table (V003 DDL) are newly created. RankingQueryService and RankingV1Controller are extended to support the RankingPeriod enum (DAILY/WEEKLY/MONTHLY), adding the PeriodRankingResult response type. A BatchAdminV1Controller admin API endpoint and a RankingJobScheduler enable automated execution. The batch Jobs guarantee idempotency via a DELETE + INSERT pattern.
Risks / Caveats
Existing daily-ranking clients remain compatible: when the period parameter is omitted, it defaults to DAILY. However, the response DTO changes from RankingPageResponse to PeriodRankingPageResponse and gains score/viewCount/likeCount/orderCount fields, so it should be confirmed whether consuming client code needs updates. The memory usage of the JDBC cursor reader and the performance impact of chunked processing (TOP 100 limit) also need verification under large data volumes.
Testing / Verification
47 test cases verify batch meta-table creation, aggregation logic and date-boundary handling, idempotency, and API response format. In particular, RankingPipelineIntegrationTest performs E2E verification by triggering the weekly/monthly batches and then validating the API responses. Profile-specific scheduler activation in batch.yml (local/test/dev: disabled, qa/prd: enabled) supports per-environment operation.
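The DELETE + INSERT idempotency guarantee discussed above can be illustrated with an in-memory model. The real Writer issues the equivalent SQL against mv_product_rank_weekly/mv_product_rank_monthly via JDBC; this sketch collapses the job's per-run delete and per-chunk inserts into a single call for brevity:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory model of the DELETE + INSERT refresh strategy: all rows for the
// target period are removed before the fresh rows are inserted, so re-running
// the job for the same period always converges to the same final state.
public class MvRefreshModel {
    public record RankRow(int rankNumber, long productId, double score) {}

    private final Map<LocalDate, List<RankRow>> table = new HashMap<>();

    public void refresh(LocalDate periodStart, List<RankRow> rows) {
        table.remove(periodStart);                     // DELETE ... WHERE period_start_date = ?
        table.put(periodStart, new ArrayList<>(rows)); // INSERT the new ranking
    }

    public List<RankRow> rowsFor(LocalDate periodStart) {
        return table.getOrDefault(periodStart, List.of());
    }
}
```

Calling refresh twice with the same input leaves rowsFor(periodStart) unchanged, which is the property the WeeklyRankingJobTest/MonthlyRankingJobTest idempotency cases assert against the real tables.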