Example distage project.
Features distage from the Izumi project for dependency injection, BIO typeclasses for bifunctor tagless final, distage-testkit for testing, ZIO Environment for composing test fixtures, and distage-framework-docker for setting up test containers.
There are three variants of the example project:
- bifunctor-tagless – Main example. It's written in bifunctor tagless final style with BIO typeclasses, uses ZIO as a runtime and ZIO Environment for composing test fixtures.
- monofunctor-tagless – Written in monofunctor tagless final style with Cats Effect typeclasses, and can run using both Cats IO and ZIO runtimes.
- monomorphic-cats – A simpler example written without tagless final, uses Cats IO directly everywhere.
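The difference between the variants comes down to the effect type the services abstract over. As a rough sketch (the names below are illustrative, not the project's actual code), a service in the bifunctor tagless final style is parameterized by a two-parameter effect type `F[+_, +_]`, so the error channel is tracked in the types; instantiating `F` with `Either` gives a minimal runnable approximation:

```scala
// Illustrative sketch of the bifunctor tagless-final style: the service is
// abstract in a bifunctor effect F[+_, +_] with a typed error channel.
// Names are hypothetical; the real project uses ZIO and BIO typeclasses.
final case class UserId(id: String)
final case class QueryFailure(reason: String)

trait Ladder[F[+_, +_]] {
  def submitScore(user: UserId, score: Long): F[QueryFailure, Unit]
  def getRank(user: UserId): F[QueryFailure, Option[Int]]
}

// A toy in-memory instance with F = Either, standing in for `repo:dummy`
final class DummyLadder extends Ladder[Either] {
  private var scores = Map.empty[UserId, Long]

  def submitScore(user: UserId, score: Long): Either[QueryFailure, Unit] =
    Right { scores = scores.updated(user, score) }

  def getRank(user: UserId): Either[QueryFailure, Option[Int]] = Right {
    val ranked = scores.toSeq.sortBy { case (_, s) => -s }.map(_._1)
    val position = ranked.indexOf(user)
    if (position >= 0) Some(position + 1) else None
  }
}
```

The same interface can be instantiated with ZIO's `IO` in production, which is the point of the style: tests and in-memory dummies share one abstraction with the real services.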
To launch the tests that require Postgres, make sure a Docker daemon is running in the background.
Use `sbt test` to launch the tests.
You can launch the application with one of the following commands:

```shell
# With docker daemon running
./launcher -u scene:managed :leaderboard

# Alternatively, with in-memory storage
./launcher -u repo:dummy :leaderboard
```
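The `-u` flags choose distage activation axes: `repo:dummy` vs `repo:prod` selects which implementation of each repository gets wired into the object graph. As a plain-Scala analogue (not the distage API; the names here are illustrative), the choice amounts to binding one interface to different implementations depending on an axis value:

```scala
// Plain-Scala analogue of the `repo` activation axis; the real wiring is
// done by distage modules, and these names are hypothetical.
trait ProfileRepo {
  def store(id: String, name: String): Unit
  def get(id: String): Option[String]
}

// `repo:dummy`: in-memory storage, no external services needed
final class DummyProfileRepo extends ProfileRepo {
  private var data = Map.empty[String, String]
  def store(id: String, name: String): Unit = data += (id -> name)
  def get(id: String): Option[String] = data.get(id)
}

// `repo:prod` would bind a Postgres-backed implementation instead
def chooseRepo(axis: String): ProfileRepo = axis match {
  case "dummy" => new DummyProfileRepo
  case other   => sys.error(s"axis value $other needs real infrastructure")
}
```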
Afterwards, you can call the HTTP endpoints:

```shell
curl -X POST http://localhost:8080/ladder/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4/100
curl -X POST http://localhost:8080/profile/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4 -d '{"name": "Kai", "description": "S C A L A"}'

# check the leaderboard
curl -X GET http://localhost:8080/ladder

# the user profile now shows the rank in the ladder along with profile data
curl -X GET http://localhost:8080/profile/50753a00-5e2e-4a2f-94b0-e6721b0a3cc4
```

A flake.nix is included. With Nix flakes enabled, you can drop into a shell that has the exact JDK, sbt, Scala 3 and Node versions used by this project:
```shell
nix develop   # one-off shell
direnv allow  # automatic, uses the bundled .envrc
```

distage-example cross-builds to Scala.js, so the same LadderApi/ProfileApi http4s routes also run entirely in the browser via an in-process LocalDispatcher configured with Repo -> Dummy: no network, no postgres, no docker. The demo UI in bifunctor-tagless/jvm/src/main/resources/webapp/ toggles each call between production (real HTTP) and simulation (the in-page Scala.js build), auto-selecting simulation when no production server answers.
Try the live deployment via the badge above (published to GitHub Pages on
every push to develop), or run it locally:
```shell
./launch-sim
```

Then open http://localhost:8080/. To enable Pages on your fork: Settings → Pages → Build and deployment → Source: GitHub Actions.
./launch-sim is just a wrapper. To run the steps yourself:
```shell
sbt copySimJs                         # build + copy the Scala.js bundle
./launcher -u repo:dummy :leaderboard # start the server
```

The webpage can also be opened directly from disk via file://: the UI auto-detects the backend at http://localhost:8080 and, when no backend is reachable, falls back to the in-page simulation, so the static page works on its own.
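The fallback logic can be sketched in a few lines of plain Scala (illustrative only; in the real project the browser UI probes the backend over HTTP and routes through the http4s-backed LocalDispatcher): probe the production backend, and select the in-page simulation when the probe fails.

```scala
// Hedged sketch of the production-vs-simulation toggle; names are
// hypothetical, not the project's actual classes.
trait SimDispatcher { def call(path: String): String }

final class HttpBackend(baseUrl: String) extends SimDispatcher {
  def call(path: String): String = sys.error(s"would issue HTTP GET $baseUrl$path")
}

final class InPageSimulation(routes: Map[String, String]) extends SimDispatcher {
  def call(path: String): String = routes.getOrElse(path, "404")
}

// Auto-select: use production when the probe succeeds, otherwise simulate
def select(probe: () => Boolean, prod: SimDispatcher, sim: SimDispatcher): SimDispatcher =
  if (probe()) prod else sim
```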
If the `./launcher` command fails for you with a cryptic stack trace, there's most likely an issue with your Docker setup. First of all, check that you have the docker and containerd daemons running. If you're using something other than Ubuntu, refer to the relevant installation page for your distribution:

```shell
sudo systemctl status docker
sudo systemctl status containerd
```
Both of them should show Active: active (running) status. If your problem isn't gone yet, most likely your user isn't in the docker group. Here you can find a tutorial on how to add it. Don't forget to log out of your session or restart your virtual machine before proceeding. If you still have problems, don't hesitate to open an issue.
- Functional Scala 2019 – Hyperpragmatic Pure FP testing with distage-testkit
- ScalaWAW Warsaw Meetup – Livecoding this project
- Source Talks – Pragmatic Pure FP approach to application design and testing with distage
Use sbt to build a native Linux binary with GraalVM Native Image under Docker:

```shell
sbt bifunctor-taglessJVM/GraalVMNativeImage/packageBin
```

If you want to build the app using a local native-image executable (e.g. on a Mac), comment out the `graalVMNativeImageGraalVersion` key in `build.sbt` first.
To test the native app with dummy repositories run:
```shell
./bifunctor-tagless/jvm/target/graalvm-native-image/bifunctor-tagless -u scene:managed -u repo:dummy :leaderboard
```

To test the native app with production repositories in Docker run:

```shell
./bifunctor-tagless/jvm/target/graalvm-native-image/bifunctor-tagless -u scene:managed -u repo:prod :leaderboard
```

Notes:
- Currently, the application builds with GraalVM 22.3. Check other GraalVM images here.
- JNA libraries are just regular Java resources. Currently the Native Image config is generated for x86-64 Linux; you'll have to re-generate or manually edit it to run on other operating systems or architectures.
- The following bugs may still manifest, but it seems like they aren't blockers anymore:
- The `-Djna.debug_load=true` key added to the native app command line might help to debug JNA-related issues.
See Native Image docs for details.
Add the following to the Java command line to run the assisted configuration agent:

```shell
-agentlib:native-image-agent=access-filter-file=./ni-filter.json,config-output-dir=./src/main/resources/META-INF/native-image/auto-wip
```
Notes:
- The codepaths in `docker-java` are different for the cold state (when no containers are running) and the hot state. It seems like we've managed to build an exhaustive ruleset for `docker-java`, so it's excluded in `ni-filter.json`. If something is wrong and you need to regenerate the rules for `docker-java`, run the agent twice, in both the hot and the cold state.
- Only `PluginConfig.const` works reliably under Native Image, so ClassGraph analysis is disabled in `ni-filter.json`. You can't make dynamic plugin resolution work under Native Image.
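The last point can be illustrated with a plain-Scala analogue (not the real distage API; names are made up): a static plugin list is ordinary data that Native Image's closed-world compilation can see at build time, whereas classpath scanning relies on runtime reflection that is unavailable in the native binary.

```scala
// Illustrative analogue of PluginConfig.const vs ClassGraph scanning.
sealed trait PluginModule
case object LadderPlugin  extends PluginModule
case object ProfilePlugin extends PluginModule

// Static, reflection-free plugin list: works under Native Image
val constPlugins: Seq[PluginModule] = Seq(LadderPlugin, ProfilePlugin)

// Dynamic discovery would need runtime classpath reflection, which a
// Native Image binary cannot perform.
def scanClasspath(pkg: String): Seq[PluginModule] =
  sys.error(s"classpath scanning of $pkg is unavailable under Native Image")
```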