
feat: add ColumnMappedJdbcStateProvider + Flyway migrations for typed…#3

Merged
fupelaqu merged 8 commits into main from feature/columnMappedJdbcStateProvider
Apr 10, 2026

Conversation

@fupelaqu
Contributor

… column storage

Introduces a new opt-in trait for mapping entity state fields to individual typed table columns managed by Flyway migrations, enabling indexed lookups and type-safe Slick queries. JdbcStateProvider remains unchanged (backward-compatible).

  • ColumnMappedJdbcStateProvider[T]: typed CRUD, searchDocuments, destroy (hard-delete)
  • FlywayMigration trait: per-entity migration folders with isolated metadata tables
  • SlickDatabase.dataSource: HikariCP DataSource accessor for Flyway
  • ExternalPersistenceProvider.destroy(): new base trait method for GDPR hard-delete
  • Flyway 11.8.1 dependency, Java 17 target for Scala 2.13
  • H2 integration tests with full CRUD + indexed lookup coverage

Closed Issue #2
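As a rough illustration of the core idea in this description (the names below are hypothetical and not taken from the PR's actual API), a column-mapped provider exposes each entity field as its own typed column rather than persisting the whole state as one JSON blob, which is what makes indexed lookups and type-safe queries possible:

```scala
// Hypothetical sketch of column-mapped storage; none of these names
// are the PR's real trait members, they only illustrate the concept.
final case class TestEntity(uuid: String, name: String, age: Int)

// One typed extractor per column, instead of a single serialized blob.
val columns: Map[String, TestEntity => Any] = Map(
  "uuid" -> (_.uuid),
  "name" -> (_.name),
  "age"  -> (_.age)
)

// Project an entity onto its column values, as a row-like map.
def asRow(e: TestEntity): Map[String, Any] =
  columns.map { case (col, get) => col -> get(e) }
```

A database index on, say, the `name` column then serves lookups that a JSON-blob provider could only answer by scanning and deserializing every row.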


Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@codacy-production

codacy-production bot commented Apr 10, 2026

Not up to standards ⛔

🔴 Issues: 3 minor

Alerts: ⚠ 3 issues (threshold: ≤ 0 issues of at least minor severity)

Results: 3 new issues

| Category | Results |
| --- | --- |
| BestPractice | 1 minor |
| CodeStyle | 2 minor |

View in Codacy

🟢 Metrics: 172 complexity · 2 duplication

| Metric | Results |
| --- | --- |
| Complexity | 172 |
| Duplication | 2 |

View in Codacy

TIP: This summary will be updated as you push new changes.

@codecov

codecov bot commented Apr 10, 2026

Welcome to Codecov 🎉

Once you merge this PR into your default branch, you're all set! Codecov will compare coverage reports and display results in all future pull requests.

Thanks for integrating Codecov - We've got you covered ☂️

fupelaqu requested a review from Copilot April 10, 2026 15:53
fupelaqu merged commit a02e531 into main Apr 10, 2026
4 of 7 checks passed

Copilot AI left a comment


Pull request overview

Adds an opt-in, Flyway-managed, column-mapped JDBC persistence provider to support typed columns, indexed lookups, and Slick-based querying, while keeping the existing JSON-blob JdbcStateProvider behavior intact. Also updates build/CI to target Java 17 and introduces H2-backed integration tests for the new provider.

Changes:

  • Introduce ColumnMappedJdbcStateProvider[T] with typed Slick table mapping, raw SQL search, and hard-delete support.
  • Add Flyway migration support (FlywayMigration), datasource exposure (SlickDatabase.dataSource), and per-entity migration conventions.
  • Add H2 testkit coverage + update build/CI (Java 17, cross-compile, Codecov env wiring) and apply broad formatting cleanups.
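The per-entity Flyway conventions mentioned above (one migration folder and one isolated metadata table per entity) reduce to two string derivations, which the quoted `FlywayMigration` code later in this review confirms. The helper names here are illustrative, not the trait's actual members:

```scala
// Sketch of the per-entity migration conventions: migrationFolder drives
// both the classpath location of the SQL scripts and the name of the
// isolated Flyway metadata table, so each entity keeps its own history.
def migrationLocation(migrationFolder: String): String =
  s"db/migration/$migrationFolder"

def historyTable(migrationFolder: String): String =
  s"flyway_schema_history_$migrationFolder"
```

For the test entity in this PR, the migration scripts would live under `db/migration/test_entity` and the history would be tracked in `flyway_schema_history_test_entity`, independent of any other entity's migrations.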

Reviewed changes

Copilot reviewed 18 out of 23 changed files in this pull request and generated 5 comments.

| File | Description |
| --- | --- |
| session/testkit/src/main/scala/app/softnetwork/session/scalatest/RefreshableSessionTestKit.scala | Formatting-only change. |
| server/testkit/src/main/scala/akka/http/scaladsl/testkit/PersistenceScalatestRouteTest.scala | Formatting-only refactor (line breaks/alignment). |
| project/Versions.scala | Adds Flyway version constant. |
| jdbc/testkit/src/test/scala/app/softnetwork/persistence/jdbc/query/TestEntityProvider.scala | Test provider implementation for column-mapped persistence. |
| jdbc/testkit/src/test/scala/app/softnetwork/persistence/jdbc/query/TestEntity.scala | Adds test entity model. |
| jdbc/testkit/src/test/scala/app/softnetwork/persistence/jdbc/query/ColumnMappedJdbcStateProviderSpec.scala | Adds CRUD/search/soft+hard delete tests for the new provider on H2. |
| jdbc/testkit/src/test/resources/db/migration/test_entity/V1__create_test_entity.sql | Adds Flyway migration to create the test table + index. |
| jdbc/src/main/scala/app/softnetwork/persistence/jdbc/schema/FlywayMigration.scala | Adds Flyway migration runner with per-entity history table. |
| jdbc/src/main/scala/app/softnetwork/persistence/jdbc/query/JdbcStateProvider.scala | Aligns destroy with new base trait method via override. |
| jdbc/src/main/scala/app/softnetwork/persistence/jdbc/query/ColumnMappedJdbcStateProvider.scala | New typed-column provider trait + init/migrate/search logic. |
| jdbc/src/main/scala/app/softnetwork/persistence/jdbc/db/SlickDatabase.scala | Exposes underlying DataSource for Flyway (HikariCP). |
| jdbc/src/main/resources/db/migration/README.md | Documents migration folder conventions and metadata table naming. |
| jdbc/build.sbt | Adds Flyway dependencies (core + DB plugins). |
| core/testkit/src/main/scala/app/softnetwork/persistence/scalatest/PersistenceTestKit.scala | Formatting-only change. |
| core/testkit/src/main/scala/app/softnetwork/persistence/person/query/PersonToJsonProcessorStream.scala | Formatting-only change. |
| core/src/main/scala/app/softnetwork/persistence/query/JsonProvider.scala | Formatting-only change. |
| core/src/main/scala/app/softnetwork/persistence/query/ExternalPersistenceProvider.scala | Adds base destroy(uuid) hard-delete method. |
| core/src/main/scala/app/softnetwork/persistence/package.scala | Formatting-only change. |
| core/src/main/scala/app/softnetwork/persistence/model/package.scala | Minor formatting change to $ helper signature. |
| common/testkit/src/main/scala/app/softnetwork/concurrent/scalatest/CompletionTestKit.scala | Formatting-only change. |
| build.sbt | Switches to Java 17, adds --add-opens, forks tests, version bump to SNAPSHOT. |
| .github/workflows/release.yml | Updates CI to JDK 17, cross-compile, adds Codecov token env. |
| .github/workflows/build.yml | Updates CI to JDK 17, cross-compile, adds Codecov token env. |
Comments suppressed due to low confidence (2)

.github/workflows/build.yml:53

  • The Codecov `files:` list includes paths that don't exist in this repo (e.g., `core/teskit/...` and `jdbc/teskit/...`, likely meant to be `core/testkit` and `jdbc/testkit`). As-is, Codecov may fail to find coverage reports or under-report coverage. Consider correcting these paths (and deduplicating the repeated `kv/...` entry).

```yaml
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: common/target/scala-2.12/coverage-report/cobertura.xml,common/testkit/target/scala-2.12/coverage-report/cobertura.xml,core/target/scala-2.12/coverage-report/cobertura.xml,core/teskit/target/scala-2.12/coverage-report/cobertura.xml,jdbc/target/scala-2.12/coverage-report/cobertura.xml,jdbc/teskit/target/scala-2.12/coverage-report/cobertura.xml,counter/target/scala-2.12/coverage-report/cobertura.xml,kv/target/scala-2.12/coverage-report/cobertura.xml,kv/target/scala-2.12/coverage-report/cobertura.xml,session/testkit/target/scala-2.12/coverage-report/cobertura.xml
    flags: unittests
```

.github/workflows/release.yml:54

  • The Codecov `files:` list contains paths that don't exist (`core/teskit/...`, `jdbc/teskit/...`), which likely prevents those reports from being uploaded. These look like typos for `core/testkit` and `jdbc/testkit` (and `kv/...` is listed twice). Consider fixing the paths to avoid missing coverage data in releases.

```yaml
  with:
    files: common/target/scala-2.12/coverage-report/cobertura.xml,common/testkit/target/scala-2.12/coverage-report/cobertura.xml,core/target/scala-2.12/coverage-report/cobertura.xml,core/teskit/target/scala-2.12/coverage-report/cobertura.xml,jdbc/target/scala-2.12/coverage-report/cobertura.xml,jdbc/teskit/target/scala-2.12/coverage-report/cobertura.xml,counter/target/scala-2.12/coverage-report/cobertura.xml,kv/target/scala-2.12/coverage-report/cobertura.xml,kv/target/scala-2.12/coverage-report/cobertura.xml,session/testkit/target/scala-2.12/coverage-report/cobertura.xml
    flags: unittests
```


Comment on lines +159 to +184
```scala
override final def upsertDocument(uuid: String, data: String): Boolean = {
  implicit val manifest: Manifest[T] = manifestWrapper.wrapped
  loadDocument(uuid) match {
    case Some(existing) =>
      var state = serialization.read[Map[String, Any]](
        serialization.write(existing)
      )
      val updatedState = serialization.read[Map[String, Any]](data)
      for ((key, value) <- updatedState) {
        if (!excludedFields.contains(key)) {
          state = state + (key -> value)
        }
      }
      Try(
        serialization.read[T](serialization.write(state))(formats, manifest)
      ) match {
        case Success(updated) => updateDocument(updated)
        case Failure(e) =>
          log.error(s"Failed to upsert $uuid with data $data", e)
          false
      }
    case None =>
      Try(
        serialization.read[T](data)(formats, manifestWrapper.wrapped)
      ) match {
        case Success(entity) => createDocument(entity)
```

Copilot AI Apr 10, 2026


In the case None branch, upsertDocument(uuid, data) deserializes an entity from data and then calls createDocument(entity), which will insert using entity.uuid rather than the uuid parameter. If data doesn’t include uuid (or includes a different one), this can upsert the wrong row and violates the method contract. Consider forcing/validating the uuid field in the parsed payload before deserializing T (and similarly ensuring the update path can’t change uuid).

Suggested change

```scala
private def validatePayloadUuid(
    uuid: String,
    state: Map[String, Any]
): Try[Unit] = {
  state.get("uuid") match {
    case Some(value) if value != uuid =>
      Failure(
        new IllegalArgumentException(
          s"Payload uuid $value does not match requested uuid $uuid"
        )
      )
    case _ => Success(())
  }
}

private def normalizedUpsertState(
    uuid: String,
    data: String
): Try[Map[String, Any]] = {
  for {
    state <- Try(serialization.read[Map[String, Any]](data))
    _ <- validatePayloadUuid(uuid, state)
  } yield state + ("uuid" -> uuid)
}

override final def upsertDocument(uuid: String, data: String): Boolean = {
  implicit val manifest: Manifest[T] = manifestWrapper.wrapped
  loadDocument(uuid) match {
    case Some(existing) =>
      val existingState = serialization.read[Map[String, Any]](
        serialization.write(existing)
      )
      normalizedUpsertState(uuid, data) match {
        case Success(updatedState) =>
          var state = existingState
          for ((key, value) <- updatedState) {
            if (!excludedFields.contains(key) && key != "uuid") {
              state = state + (key -> value)
            }
          }
          state = state + ("uuid" -> uuid)
          Try(
            serialization.read[T](serialization.write(state))(formats, manifest)
          ) match {
            case Success(updated) => updateDocument(updated)
            case Failure(e) =>
              log.error(s"Failed to upsert $uuid with data $data", e)
              false
          }
        case Failure(e) =>
          log.error(s"Failed to upsert $uuid with data $data", e)
          false
      }
    case None =>
      normalizedUpsertState(uuid, data) match {
        case Success(state) =>
          Try(
            serialization.read[T](serialization.write(state))(formats, manifest)
          ) match {
            case Success(entity) => createDocument(entity)
            case Failure(e) =>
              log.error(s"Failed to create $uuid from data $data", e)
              false
          }
```

Comment on lines +228 to +231
```scala
 *
 * Note: this method is not part of the `ExternalPersistenceProvider` interface and must be
 * called on the concrete provider type. For pipeline-driven hard deletes, downcast the provider
 * or use a dedicated command handler.
```

Copilot AI Apr 10, 2026


The Scaladoc says destroy is not part of ExternalPersistenceProvider, but this PR adds destroy(uuid: String) to ExternalPersistenceProvider. Please update/remove this note so callers aren’t misled about the API.

Suggested change

```diff
- *
- * Note: this method is not part of the `ExternalPersistenceProvider` interface and must be
- * called on the concrete provider type. For pipeline-driven hard deletes, downcast the provider
- * or use a dedicated command handler.
```

Comment on lines +289 to +293
```scala
/** Validates that a string is a safe SQL identifier (letters, digits, underscores, dots). */
private def validateIdentifier(id: String): Unit =
  require(
    id.matches("[a-zA-Z_][a-zA-Z0-9_.]*"),
    s"Invalid SQL identifier: '$id'"
```

Copilot AI Apr 10, 2026


validateIdentifier allows dots ([a-zA-Z0-9_.]). Since this is used to validate dataset before executing CREATE SCHEMA IF NOT EXISTS <dataset>, values like foo.bar will pass validation but are not valid schema identifiers in common RDBMSs and will make schema creation fail. Tighten the allowed characters for schema names (no dots) and consider also validating tableName/migrationFolder since they’re interpolated into SQL/Flyway config.
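A tightened validator along the lines of this comment might look as follows. This is a sketch, not the PR's code: the regex choice and the idea of reusing the same rule for `migrationFolder` are assumptions; the one fixed point is that schema names must not contain dots:

```scala
import scala.util.Try

// Sketch of a stricter check per the review comment: schema names (and,
// under the same assumption, migration folder names) allow only letters,
// digits, and underscores, so no dots or path separators slip into
// CREATE SCHEMA statements or Flyway configuration.
def validateSchemaIdentifier(id: String): Unit =
  require(
    id.matches("[a-zA-Z_][a-zA-Z0-9_]*"),
    s"Invalid SQL identifier: '$id'"
  )
```

With this rule, `"foo.bar"` is rejected up front instead of producing an invalid `CREATE SCHEMA IF NOT EXISTS foo.bar` statement at runtime.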

Comment on lines +48 to +52
```scala
  .configure()
  .dataSource(dataSource)
  .locations(s"db/migration/$migrationFolder")
  .table(s"flyway_schema_history_$migrationFolder")
  .baselineOnMigrate(baselineOnMigrate)
```

Copilot AI Apr 10, 2026


migrationFolder is interpolated directly into Flyway locations(...) and the history table(...) name. Since implementors can override it, an unexpected value (path separators or SQL-special chars) can break migrations or create invalid identifiers. Consider validating/sanitizing migrationFolder (e.g., restrict to [a-zA-Z0-9_]+) before using it here.

```yaml
- name: Setup sbt launcher
  uses: sbt/setup-sbt@v1
- name: Cross Compile
  run: SBT_OPTS="-Xss4M -Xms1g -Xmx4g -Dfile.encoding=UTF-8" sbt + compile
```

Copilot AI Apr 10, 2026


The cross-compile step uses sbt + compile, which sbt may interpret as an invalid standalone + command. For cross-building, use sbt +compile or quote the command as sbt '+ compile' (as in release.yml) to ensure sbt parses it correctly.

Suggested change

```diff
-      run: SBT_OPTS="-Xss4M -Xms1g -Xmx4g -Dfile.encoding=UTF-8" sbt + compile
+      run: SBT_OPTS="-Xss4M -Xms1g -Xmx4g -Dfile.encoding=UTF-8" sbt +compile
```

