Add transparent ReadWriteMany (RWX) volume support via NFS#194

Draft
sjmiller609 wants to merge 2 commits into main from hypeship/rwx-nfs-volumes-v2
Conversation

@sjmiller609
Collaborator

Summary

Adds support for attaching volumes read-write to multiple instances simultaneously (ReadWriteMany / RWX). When a second rw attachment is requested, the volume is transparently loop-mounted on the host and exported via NFS. New instances mount the volume over the network instead of as a block device, enabling shared read-write access without ext4 corruption.

Design principles:

  • NFS is an internal implementation detail — no public API changes, no NFS fields exposed
  • Access mode is per-attachment (the existing readonly field), not per-volume
  • Concurrent attachment rules: single rw = block device, multiple rw = transparently backed by NFS

Key changes:

  • lib/volumes/nfs.go — New nfsManager handling loop mount, /etc/exports management, and exportfs lifecycle
  • lib/volumes/manager.go — AttachVolume transparently upgrades to NFS on concurrent rw attachments; DetachVolume tears down NFS when the last NFS consumer detaches; new GetVolumeNFSInfo method
  • lib/volumes/types.go / storage.go — NFSInfo struct and NFS field on Attachment for persistence
  • lib/system/init/volumes.go — Guest init gains "nfs" mount mode (NFSv4, hard mount with retrans)
  • lib/instances/configdisk.go — NFS-served volumes get mode=nfs with host/export instead of a device path; no device slot consumed
  • lib/instances/create.go — NFS volumes excluded from hypervisor disk list
  • lib/vmconfig/config.go — VolumeMount gains NFSHost and NFSExport fields
  • lib/paths/paths.go — New VolumeNFSMount path helper
  • cmd/api/main.go — Derives gateway IP from network config and sets NFS host on volume manager at startup
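
The guest-side "nfs" mount mode can be sketched as a small helper that builds the mount(2) arguments. This is a minimal illustration, not the PR's actual init code: the PR only states NFSv4 with a hard mount and a retrans setting, so the specific option values here (retrans=3, addr=) are assumptions.

```go
package main

import "fmt"

// nfsMountArgs builds the source, fstype, and option string a guest init
// could pass to mount(2) for an NFS-served volume. Option values are
// illustrative assumptions.
func nfsMountArgs(host, export string) (source, fstype, opts string) {
	source = host + ":" + export
	fstype = "nfs4"
	// "hard" makes I/O retry indefinitely instead of failing with EIO,
	// which is the safer default for a read-write data volume; retrans
	// bounds retransmissions per major timeout.
	opts = fmt.Sprintf("hard,retrans=3,addr=%s", host)
	return source, fstype, opts
}

func main() {
	fmt.Println(nfsMountArgs("10.0.0.1", "/volumes/vol-abc"))
}
```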

Known limitation: When a volume transitions from block-device to NFS (second rw attach), the first instance continues using its block device. Data written by the first instance won't be visible to NFS consumers until that instance detaches and the block device is released. This is acceptable for the initial implementation — a future enhancement could live-migrate the first attachment to NFS.
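
The concurrent-attachment rule above (single rw = block device, multiple rw = NFS) can be sketched as a small decision helper. The type and function names are illustrative, not the PR's actual code:

```go
package main

import "fmt"

// Attachment is a hypothetical simplification of an attachment record.
type Attachment struct {
	InstanceID string
	ReadOnly   bool
}

// attachMode decides how a new attachment is served: read-only attachments
// and the first rw attachment get a block device; any rw attachment that
// arrives while another rw attachment exists is served over NFS.
func attachMode(existing []Attachment, readOnly bool) string {
	if readOnly {
		return "block-ro"
	}
	for _, a := range existing {
		if !a.ReadOnly {
			// A concurrent rw attachment already exists, so this one
			// must go through the NFS export.
			return "nfs"
		}
	}
	return "block-rw"
}

func main() {
	fmt.Println(attachMode(nil, false))                             // first rw
	fmt.Println(attachMode([]Attachment{{ReadOnly: false}}, false)) // second rw
}
```

Note this also illustrates the known limitation: the helper only changes the mode of the *new* attachment, so the first rw attachment keeps its block device until it detaches.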

Test plan

  • All 28 existing + new tests pass (go test ./lib/volumes/ -v)
  • New tests cover: RWX rejection without NFS host, NFSInfo nil when not served, NFS metadata persistence across reload, NFS teardown on last consumer detach, NFS kept alive with remaining consumers
  • go vet clean on modified packages
  • Manual validation: attach volume rw to two instances, verify NFS mount in second guest
  • Verify NFS teardown after all consumers detach (loop device released, export removed)
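
The teardown rule the tests exercise (release the export and loop device only when the last NFS consumer detaches) reduces to a reference check like the following; the types are hypothetical simplifications:

```go
package main

import "fmt"

// Attachment is a hypothetical simplification of an attachment record.
type Attachment struct {
	InstanceID string
	ViaNFS     bool
}

// shouldTearDownNFS reports whether detaching instanceID would leave the
// volume with no NFS consumers, which is when the export is removed and
// the loop device released.
func shouldTearDownNFS(attachments []Attachment, instanceID string) bool {
	for _, a := range attachments {
		if a.ViaNFS && a.InstanceID != instanceID {
			return false // another NFS consumer remains; keep serving
		}
	}
	return true
}

func main() {
	atts := []Attachment{{"i-1", true}, {"i-2", true}}
	fmt.Println(shouldTearDownNFS(atts, "i-1"))     // false: i-2 still uses NFS
	fmt.Println(shouldTearDownNFS(atts[:1], "i-1")) // true: last consumer
}
```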

sjmiller609 and others added 2 commits April 9, 2026 19:35
Volumes can now be attached read-write to multiple instances
simultaneously. When a second rw attachment is requested, the volume
is automatically loop-mounted on the host and exported via NFS.
Subsequent instances mount the volume over NFS instead of as a block
device, enabling shared read-write access without filesystem corruption.

Key changes:
- New nfsManager handles loop mount, /etc/exports, and exportfs lifecycle
- AttachVolume transparently upgrades to NFS on concurrent rw attachments
- DetachVolume tears down NFS when the last NFS consumer detaches
- Guest init gains an "nfs" mount mode (NFSv4, hard mount)
- Config disk skips device slot allocation for NFS-served volumes
- Instance creation excludes NFS volumes from the hypervisor disk list
- NFS host IP derived from network bridge gateway at startup

NFS is purely an internal implementation detail — no public API changes.
The access mode is determined per-attachment by the existing readonly
field, not by a per-volume property.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
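
The host-side lifecycle this commit describes (loop mount, then export) can be sketched as the command sequence an nfsManager might run. The export options (rw,no_root_squash) and the client spec are assumptions; the real implementation manages /etc/exports and may differ:

```go
package main

import "fmt"

// exportCommands returns the host commands that loop-mount a volume image
// and export its mountpoint over NFS. Illustrative only.
func exportCommands(image, mountpoint, client string) [][]string {
	return [][]string{
		// Attach the image via a loop device and mount its filesystem.
		{"mount", "-o", "loop", image, mountpoint},
		// Add the export at runtime; `exportfs -u client:path` undoes it
		// on teardown.
		{"exportfs", "-o", "rw,no_root_squash", client + ":" + mountpoint},
	}
}

func main() {
	for _, cmd := range exportCommands("/var/lib/vols/v1.img", "/exports/v1", "10.0.0.0/24") {
		fmt.Println(cmd)
	}
}
```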
Add an explicit AccessMode enum (ReadWriteOnce, ReadOnlyMany,
ReadWriteMany) to the attach volume request. This replaces the
implicit rw+rw=NFS behavior with an explicit opt-in model:

- ReadWriteOnce: exclusive rw via block device (default, no change)
- ReadOnlyMany: read-only, multiple instances (maps from readonly=true)
- ReadWriteMany: shared rw via NFS (only mode that triggers NFS)

The existing readonly field is deprecated but still works with
identical semantics. Neither legacy path triggers NFS. When both
fields are set, access_mode takes precedence.

Validation rules enforce that different access modes cannot be mixed
on the same volume (e.g., RWO + RWX = conflict).

Changes: OpenAPI spec (AccessMode enum, deprecated readonly),
regenerated oapi.go, domain types, volume manager attach logic,
storage persistence, and 9 new tests (36 total pass).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
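
The enum, precedence, and conflict rules described in this commit can be sketched as follows; the Go identifiers are assumptions, not the PR's real names:

```go
package main

import (
	"errors"
	"fmt"
)

// AccessMode mirrors the enum described in the commit message.
type AccessMode string

const (
	ReadWriteOnce AccessMode = "ReadWriteOnce" // exclusive rw block device (default)
	ReadOnlyMany  AccessMode = "ReadOnlyMany"  // read-only, multiple instances
	ReadWriteMany AccessMode = "ReadWriteMany" // shared rw via NFS
)

var errModeConflict = errors.New("volume already attached with a different access mode")

// resolveMode applies the documented precedence: access_mode wins when both
// it and the deprecated readonly field are set; otherwise readonly=true maps
// to ReadOnlyMany and readonly=false to ReadWriteOnce. Neither legacy path
// yields ReadWriteMany, so legacy requests never trigger NFS.
func resolveMode(accessMode *AccessMode, readonly bool) AccessMode {
	if accessMode != nil {
		return *accessMode
	}
	if readonly {
		return ReadOnlyMany
	}
	return ReadWriteOnce
}

// checkConflict enforces that all attachments on one volume share a mode.
func checkConflict(existing []AccessMode, requested AccessMode) error {
	for _, m := range existing {
		if m != requested {
			return errModeConflict
		}
	}
	return nil
}

func main() {
	rwx := ReadWriteMany
	fmt.Println(resolveMode(&rwx, true)) // access_mode takes precedence over readonly
	fmt.Println(checkConflict([]AccessMode{ReadWriteOnce}, ReadWriteMany) != nil)
}
```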
@github-actions

github-actions bot commented Apr 9, 2026

✱ Stainless preview builds

This PR will update the hypeman SDKs with the following commit message.

feat: Add transparent ReadWriteMany (RWX) volume support via NFS

Edit this comment to update it. It will appear in the SDK's changelogs.

hypeman-typescript studio · code · diff

Your SDK build had at least one "note" diagnostic, but this did not represent a regression.
generate ✅ · build ✅ · lint ✅ · test ✅

npm install https://pkg.stainless.com/s/hypeman-typescript/25a35862af2766cacd56ce2858699ac10a1c4a0e/dist.tar.gz
hypeman-openapi studio · code · diff

Your SDK build had at least one "note" diagnostic, but this did not represent a regression.
generate ✅

hypeman-go studio · code · diff

Your SDK build had at least one "note" diagnostic, but this did not represent a regression.
generate ✅ · build ✅ · lint ✅ · test ✅

go get github.com/stainless-sdks/hypeman-go@7f7fbf2f1c989d30380a47ed8e5662979bca788f

This comment is auto-generated by GitHub Actions and is automatically kept up to date as you push.
If you push custom code to the preview branch, re-run this workflow to update the comment.
Last updated: 2026-04-09 19:57:50 UTC
