This repository is Go-first and provides a production migration engine + API/UI services for VMware to CloudStack migrations.
The goal of this project is to provide near-live / warm migration from VMware to CloudStack with minimal cutover downtime.
How warm migration is achieved:
- A base snapshot is taken and copied to QCOW2 on CloudStack primary storage.
- Incremental delta rounds are continuously synced using VMware CBT (`QueryChangedDiskAreas`).
- At cutover (`Finalize` / `Finalize Now` / `finalize_at`), the source VM is shut down, a short settle delay is applied, one final delta sync is performed, then import is completed.
This design keeps source VM downtime mostly to the final sync + import boundary, not the full disk copy duration.
- Disk copy path: VMware VDDK -> direct QCOW2 writes (no RAW intermediate).
- Delta path: CBT ranges -> direct QCOW2 updates.
- Conversion path: optional `virt-v2v-in-place` after final sync. Single-disk guests use boot-disk-only conversion; multi-disk guests are inspected and can switch to `libvirtxml` mode when the OS spans multiple disks.
- State machine + resume: per-VM runtime state under `/var/lib/vm-migrator/<vm>_<moref>/`.
- Control actions: `Finalize` and `Finalize Now` markers, plus CLI/API/UI triggers.
- Linux host
- VMware VDDK installed (must include `include/vixDiskLib.h` and `lib64/libvixDiskLib.so*`)
  - Official download: Broadcom VDDK
- Root/sudo access (required for service install and storage preflight checks)
- CloudStack API access
- vCenter credentials
- CloudStack primary storage support in this release: NFS and Shared Mountpoint
- For NFS pools, the migration host must have network-level access and permission to mount the storage backend.
- For Shared Mountpoint pools, the CloudStack path must already exist on the migration host and point to the correct shared storage.
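For Shared Mountpoint pools, you can sanity-check the path manually before starting a job. The sketch below mirrors the checks described in this document (exists, is a directory, is writable, write+delete works); `check_smp_path` is a hypothetical helper, not part of the tool.

```shell
#!/usr/bin/env bash
# Illustrative preflight for a Shared Mountpoint destination path,
# mirroring the documented checks. Not part of the repo's scripts.
check_smp_path() {
  local path="$1"
  [ -e "$path" ] || { echo "missing: $path"; return 1; }
  [ -d "$path" ] || { echo "not a directory: $path"; return 1; }
  [ -w "$path" ] || { echo "not writable: $path"; return 1; }
  local probe="$path/.v2c_preflight.$$"
  echo test > "$probe" || { echo "write failed: $path"; return 1; }
  rm -f "$probe" || { echo "delete failed: $path"; return 1; }
  echo "ok: $path"
}

# Demo against a temporary directory; point this at the real
# CloudStack Shared Mountpoint path on a migration host.
demo_dir=$(mktemp -d)
check_smp_path "$demo_dir"
rmdir "$demo_dir"
```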
The bootstrap script installs required OS packages (Go, qemu tools, virt-v2v, guestfs, and optional node/npm for UI).
This project does not redistribute VDDK. Users must obtain VDDK directly from Broadcom and accept Broadcom licensing terms separately.
For Windows conversions, virt-v2v-in-place expects virtio drivers to be available through VIRTIO_WIN, which this project resolves from virt.virtio_iso, /usr/share/virtio-win/virtio-win.iso, or /usr/share/virtio-win. Bootstrap prepares this on EL hosts by adding the virtio-win Fedora repo and installing virtio-win, and on Ubuntu hosts by converting the upstream virtio-win.noarch.rpm with alien and extracting srvany helpers into /usr/share/virt-tools.
Required network access for the current implementation:
- Migration host -> vCenter: `443/TCP`
  - Used for VMware SDK operations such as inventory, snapshots, CBT queries, shutdown, and CBT enablement.
- Migration host -> ESXi hosts serving the source VM disks: `902/TCP` and `443/TCP`
  - VDDK data access uses VMware NFC/VDDK paths and in practice typically requires access to the backing ESXi host, not only vCenter.
- Migration host -> CloudStack management API: port depends on configured endpoint
  - Common values:
    - `80/TCP` for `http://<host>/client/api`
    - `8080/TCP` for `http://<host>:8080/client/api`
    - `443/TCP` for `https://<host>/client/api`
- Migration host -> CloudStack primary storage (NFS): NFS access to the selected export
  - Ubuntu engine-managed mounts default to NFSv3 over TCP and may require `2049/TCP`, `111/TCP`, and server-side mount/lock service ports.
  - EL-family hosts often use NFSv4-style mounts by default and typically require at least `2049/TCP`.
  - If you override mount options with `V2C_NFS_MOUNT_OPTS`, open the ports needed by those options.
- Browser/admin workstation -> migration host:
  - `5173/TCP` for the UI
  - `8000/TCP` for the API if accessed directly
Not required by this tool:
- CloudStack management server -> VMware direct connectivity is not required by this code path.
- `qemu-nbd` does not open a TCP listener; it uses a local Unix socket only.
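A quick way to pre-check the required network paths above from the migration host is bash's built-in `/dev/tcp` redirection. The hostnames below are placeholders; substitute your own vCenter, ESXi, CloudStack, and NFS endpoints.

```shell
#!/usr/bin/env bash
# TCP reachability probe using bash's /dev/tcp. Hosts are placeholders.
probe() {
  local host="$1" port="$2"
  if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo "open   $host:$port"
  else
    echo "closed $host:$port"
  fi
}

probe vcenter.example.com 443       # VMware SDK operations
probe esxi01.example.com 902        # VDDK/NFC data path
probe cloudstack.example.com 8080   # CloudStack management API
probe nfs-server.example.com 2049   # primary storage (NFS)
```

Run this once per ESXi host that backs a source VM, since VDDK reads go to the backing host rather than vCenter.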
- Clone and enter the repository:

      git clone https://github.com/prashanthr2/vmware-to-cloudstack.git
      cd vmware-to-cloudstack

- Bootstrap dependencies, build, and install services:
  Before running bootstrap, make sure one of these is already present on the host:
  - Extracted VDDK directory (for `--vddk-dir`), for example `/opt/vmware-vddk/vmware-vix-disklib-distrib`
  - VDDK tarball file (for `--vddk-tar`)
      chmod +x ./scripts/bootstrap.sh
      sudo ./scripts/bootstrap.sh --vddk-dir /opt/vmware-vddk/vmware-vix-disklib-distrib --install-service --with-ui

  If you have a VDDK tarball instead of an extracted directory:

      chmod +x ./scripts/bootstrap.sh
      sudo ./scripts/bootstrap.sh --vddk-tar /tmp/VMware-vix-disklib-8.0.2-xxxxxxx.x86_64.tar.gz --install-service --with-ui

- Configure engine and UI endpoint:
      sudo vi /etc/v2c-engine/config.yaml
      sudo vi /etc/v2c-ui/.env.local

  In `/etc/v2c-ui/.env.local`, set:

      VITE_API_BASE=http://<migration-host-ip>:8000

  Use the IP/hostname of the host where `v2c-engine serve` is running (not 127.0.0.1 unless the browser runs on that same host).
- Start services:

      sudo systemctl enable --now v2c-engine v2c-ui
      systemctl status v2c-engine v2c-ui

- Access the UI:
  - URL: `http://<migration-host-ip>:5173`
  - API health check: `curl -s http://<migration-host-ip>:8000/health`
- Use CLI (optional/advanced):

      # check migration status for one or more specs
      /usr/local/bin/v2c-engine status --spec ./examples/spec.run.single-vm.single-disk.single-nic.yaml --config /etc/v2c-engine/config.yaml

      # request finalize (normal)
      /usr/local/bin/v2c-engine finalize --spec ./examples/spec.run.single-vm.single-disk.single-nic.yaml --vm Centos7 --config /etc/v2c-engine/config.yaml

      # request finalize-now (immediate delta wait interrupt)
      /usr/local/bin/v2c-engine finalize --spec ./examples/spec.run.single-vm.single-disk.single-nic.yaml --vm Centos7 --now --config /etc/v2c-engine/config.yaml

Note: use `--start-services` in bootstrap only when `/etc/v2c-engine/config.yaml` and `/etc/v2c-ui/.env.local` are already valid.
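Before running bootstrap, you can sanity-check the extracted VDDK tree against the two prerequisite artifacts (`include/vixDiskLib.h` and `lib64/libvixDiskLib.so*`). `vddk_ok` is a hypothetical helper for pre-bootstrap checks, not part of `scripts/bootstrap.sh`; the demo runs against a synthetic directory.

```shell
#!/usr/bin/env bash
# Verify an extracted VDDK directory contains the artifacts the
# prerequisites call out. Illustrative helper only.
vddk_ok() {
  local root="$1"
  [ -f "$root/include/vixDiskLib.h" ] || { echo "missing include/vixDiskLib.h"; return 1; }
  ls "$root"/lib64/libvixDiskLib.so* >/dev/null 2>&1 || { echo "missing lib64/libvixDiskLib.so*"; return 1; }
  echo "vddk layout ok: $root"
}

# Demo against a synthetic layout; on a real host pass the extracted
# VDDK root, e.g. /opt/vmware-vddk/vmware-vix-disklib-distrib.
demo=$(mktemp -d)
mkdir -p "$demo/include" "$demo/lib64"
touch "$demo/include/vixDiskLib.h" "$demo/lib64/libvixDiskLib.so.8"
vddk_ok "$demo"
```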
Use scripts/bootstrap.sh to install dependencies, build the engine, and install services.
    chmod +x ./scripts/bootstrap.sh
    sudo ./scripts/bootstrap.sh --vddk-dir /opt/vmware-vddk/vmware-vix-disklib-distrib --install-service --with-ui

If you have a VDDK tarball:

    sudo ./scripts/bootstrap.sh --vddk-tar /tmp/VMware-vix-disklib-*.tar.gz --install-service --with-ui

Supported bootstrap options:

- `--vddk-dir <path>`
- `--vddk-tar <path>`
- `--config <path>`
- `--bin-path <path>`
- `--listen <addr>` (API service listen, default `:8000`)
- `--ui-listen <addr>` (UI service listen, default `0.0.0.0:5173`)
- `--install-service` (installs `v2c-engine` and, with `--with-ui`, `v2c-ui`)
- `--with-ui` (installs frontend dependencies and UI service unit)
- `--start-services` (optional immediate start after setup; only use when config files are already valid)
- `--skip-build`
Recommended bootstrap flow:
- Install/build/services without auto-start.
- Edit config files.
- Enable/start services.
    sudo ./scripts/bootstrap.sh --vddk-dir /opt/vmware-vddk/vmware-vix-disklib-distrib --install-service --with-ui
    sudo vi /etc/v2c-engine/config.yaml
    sudo vi /etc/v2c-ui/.env.local
    sudo systemctl enable --now v2c-engine v2c-ui

Use `--start-services` only if `/etc/v2c-engine/config.yaml` and `/etc/v2c-ui/.env.local` are already populated with real values (not placeholders).
Bootstrap installs service units by default without auto-start (unless --start-services is passed).
Configure first, then enable/start:
    sudo vi /etc/v2c-engine/config.yaml
    sudo vi /etc/v2c-ui/.env.local
    sudo systemctl enable --now v2c-engine v2c-ui
    systemctl status v2c-engine v2c-ui
    journalctl -u v2c-engine -f

Installed paths:
- Engine binary: `/usr/local/bin/v2c-engine`
- Engine config: `/etc/v2c-engine/config.yaml`
- Optional manual build env helper: `/etc/v2c-engine/build.env` (not auto-sourced)
- UI env config: `/etc/v2c-ui/.env.local`
- Runtime state/log root: `/var/lib/vm-migrator`
Environment note:
- Bootstrap does not set a global `LD_LIBRARY_PATH` in `/etc/profile.d`.
- This avoids breaking host tools like `journalctl`/`dnf` with VDDK libraries.
Config notes:
- `run`/`serve` use vCenter credentials from the `vcenter` block in config (`VC_PASSWORD` env fallback).
- `migration.vddk_path` is required for `run` (path to extracted VDDK root, for example `/opt/vmware-vddk/vmware-vix-disklib-distrib`).
- You do not need a second vCenter credential block under `vddk`.
- CloudStack endpoint input is flexible:
  - `cloudstack-mgmt.example.com`
  - `cloudstack-mgmt.example.com:8080`
  - `http://cloudstack-mgmt.example.com:8080/client/api`
  - `https://cloudstack.example.com`
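The endpoint flexibility above can be thought of as a normalization step. This sketch is not the engine's actual code; it only illustrates how the accepted inputs could all resolve to a full `/client/api` URL (defaulting to `http://` when no scheme is given, which is an assumption of this sketch).

```shell
#!/usr/bin/env bash
# Illustrative normalization of accepted CloudStack endpoint forms.
# The real engine logic may differ in detail.
normalize_cs_endpoint() {
  local ep="$1"
  case "$ep" in
    http://*|https://*) : ;;      # scheme already present
    *) ep="http://$ep" ;;         # sketch assumption: default to http
  esac
  case "$ep" in
    */client/api) : ;;            # full path already present
    */) ep="${ep}client/api" ;;
    *) ep="$ep/client/api" ;;
  esac
  echo "$ep"
}

normalize_cs_endpoint cloudstack-mgmt.example.com:8080
# prints: http://cloudstack-mgmt.example.com:8080/client/api
normalize_cs_endpoint https://cloudstack.example.com
# prints: https://cloudstack.example.com/client/api
```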
Sample references:
- Engine config template with all fields: examples/config.full.example.yaml
- UI env template: frontend/.env.example
- `run` is the primary user command.
- Internal base copy and delta sync are handled automatically inside `run`.
- Base copy and delta write directly into QCOW2 (no RAW intermediate).
- Delta sync uses VMware CBT (`QueryChangedDiskAreas` path).
- Conversion (`virt-v2v-in-place`) runs in the `converting` stage after final sync (when enabled).
- Stateful/resumable workflow persists state and logs per VM under `/var/lib/vm-migrator/<vm>_<moref>/`.
- Finalize is supported via:
  - marker file (`FINALIZE`) internally
  - immediate marker file (`FINALIZE_NOW`) internally
  - CLI command `v2c-engine finalize` for operators
  - API/UI finalize action
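The markers are plain files in the per-VM state directory, so the mechanism can be sketched as below. Prefer the CLI/API/UI triggers in practice; the helper names here are illustrative, and only the marker filenames and the state-root path come from this document.

```shell
#!/usr/bin/env bash
# Sketch of the internal finalize-marker mechanism. Operators should
# normally use `v2c-engine finalize` or the API/UI instead.
STATE_ROOT="${STATE_ROOT:-/var/lib/vm-migrator}"

request_finalize() {
  local vm_dir="$1"               # e.g. Centos7_vm-3312
  touch "$STATE_ROOT/$vm_dir/FINALIZE"
}

request_finalize_now() {
  local vm_dir="$1"
  touch "$STATE_ROOT/$vm_dir/FINALIZE_NOW"
}
```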
Storage behavior:
- NFS pools:
  - Destination path pattern: `/mnt/<storageid>/<vm>_<vmMoref>_disk<unit>.qcow2`
  - Engine ensures `/mnt/<storageid>` exists and is mounted before copy.
  - If not mounted, engine attempts an NFS mount using CloudStack storage pool details (`listStoragePools`).
- Shared Mountpoint pools:
  - Engine uses the CloudStack path directly as the destination root (no mount/unmount operations).
  - Mandatory preflight validates that the path exists, is a directory, is writable, and that write+delete works; the free-space check is best-effort.
- On Ubuntu hosts, engine-managed NFS mounts use explicit `vers=3` options to avoid QCOW2 flush I/O issues seen with some NFSv4 environments.
  - Optional override: `V2C_NFS_MOUNT_OPTS="<mount-options>"`
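The NFS mount behavior above amounts to a mount-if-absent step. This sketch shows the idea; `ensure_pool_mounted` is a hypothetical helper, the mount source is a placeholder (the engine derives it from `listStoragePools`), and real mounts require root.

```shell
#!/usr/bin/env bash
# Sketch of what the engine automates for NFS pools: ensure
# /mnt/<storageid> is mounted before any QCOW2 write.
is_mounted() {
  # Check /proc/mounts for an exact mount-target match.
  awk -v t="$1" '$2 == t { found=1 } END { exit !found }' /proc/mounts
}

ensure_pool_mounted() {
  local storageid="$1" server_export="$2"   # e.g. nfs01:/export/primary
  local target="/mnt/$storageid"
  mkdir -p "$target"
  if is_mounted "$target"; then
    echo "already mounted: $target"
  else
    # Ubuntu engine-managed mounts pin NFSv3; override via V2C_NFS_MOUNT_OPTS.
    mount -t nfs -o "${V2C_NFS_MOUNT_OPTS:-vers=3,proto=tcp}" "$server_export" "$target"
  fi
}
```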
At a high level, each VM migration follows this model:
- The engine connects to vCenter, finds the VM, verifies disks/NICs, and ensures CBT is enabled.
- It creates a base snapshot and copies each VMware disk directly into QCOW2 on the selected CloudStack primary storage mount.
- It enters delta mode and repeatedly uses VMware CBT to fetch only changed blocks since the previous snapshot.
- When finalize is requested, or when the `finalize_at` time is reached, the engine shuts down the source VM according to policy, waits a short settle delay, and performs one final delta sync.
- If enabled, it runs `virt-v2v-in-place` using either boot-disk-only mode or temporary `libvirtxml` mode, depending on the detected guest disk layout.
- It imports the root disk into CloudStack, then imports and attaches data disks, and finally attaches additional NICs.
Important behavior:
- Base and delta both write directly into QCOW2.
- Delta sync preserves QCOW2 metadata by writing through the qemu block path.
- The workflow is resumable through `state.json`.
- `status` reports the current stage, next stage, and whether finalize has already been requested.
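For ad-hoc inspection, the persisted stage can be pulled out of `state.json` with standard tools. The `"stage"` field name is an assumption made for this illustration; treat `state.json` as internal and prefer `v2c-engine status --json` for anything scripted.

```shell
#!/usr/bin/env bash
# Peek at the persisted stage for a VM. Field name "stage" is an
# assumption; the demo uses a synthetic state file.
current_stage() {
  local state_file="$1"
  sed -n 's/.*"stage"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' "$state_file" | head -n1
}

# Demo against a synthetic state file:
tmp=$(mktemp)
printf '{"stage": "delta", "finalize_requested": false}\n' > "$tmp"
current_stage "$tmp"   # prints: delta
rm -f "$tmp"
```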
The main migration strategies are controlled by the `migration:` block in the VM spec.
This is the default behavior when `delta_interval` is set.
Parameters:
- `delta_interval`
  - Required for normal continuous sync behavior.
  - Unit: seconds.
  - Controls how often the engine performs a delta round during the pre-cutover phase.
Behavior:
- Base copy completes first.
- The engine waits `delta_interval` seconds before the first delta round.
- It then keeps running delta rounds every `delta_interval` seconds until finalize is triggered.
- Finalize can be triggered manually from CLI/API/UI.
Example:
    migration:
      delta_interval: 300

This is used when you want the tool to keep syncing until a planned cutover time.
Parameters:
- `finalize_at`
  - Optional.
  - Accepts ISO-like timestamps such as:
    - `2026-03-12T23:30:00+00:00`
    - `2026-03-12T23:30:00`
    - `2026-03-12T23:30`
- `finalize_delta_interval`
  - Optional.
  - Unit: seconds.
  - Default: `30`
  - Used when the engine is inside the finalize window and wants tighter sync frequency before cutover.
- `finalize_window`
  - Optional.
  - Unit: seconds.
  - Default: `600`
  - If the current time is within `finalize_window` seconds of `finalize_at`, the engine reduces the sleep interval to `finalize_delta_interval`.
- `finalize_settle_seconds`
  - Optional.
  - Unit: seconds.
  - Delay after source VM shutdown/power-off and before the final snapshot is taken.
  - Default if omitted/`0`:
    - Windows guests: `30`
    - Linux/other guests: `15`
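The settle-delay defaults above can be expressed as a small selection function. This is a sketch of the documented rule, not the engine's code; `settle_seconds` is a hypothetical name.

```shell
#!/usr/bin/env bash
# Settle-delay selection per the documented defaults: an explicit
# non-zero value wins; otherwise 30s for Windows, 15s for other guests.
settle_seconds() {
  local configured="$1" guest_os="$2"
  if [ -n "$configured" ] && [ "$configured" != "0" ]; then
    echo "$configured"
  elif [ "$guest_os" = "windows" ]; then
    echo 30
  else
    echo 15
  fi
}

settle_seconds "" windows   # prints: 30
settle_seconds 0 linux      # prints: 15
settle_seconds 45 windows   # prints: 45
```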
Behavior:
- The engine still does normal delta rounds after base copy.
- Before the `finalize_at` time, it uses:
  - `delta_interval` normally
  - `finalize_delta_interval` once the engine is inside the finalize window
- Once the current time passes `finalize_at`, the engine treats that as a finalize request.
- It then powers off the source VM according to `shutdown_mode`, waits `finalize_settle_seconds`, performs `final_sync`, and continues import.
Example:
    migration:
      delta_interval: 300
      finalize_at: "2026-03-12T23:30:00+00:00"
      finalize_delta_interval: 30
      finalize_settle_seconds: 30
      finalize_window: 600

You can trigger finalize explicitly even if `finalize_at` is not set.
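The interval selection described above reduces to a simple comparison against the finalize window. This sketch uses epoch seconds and a hypothetical `effective_interval` helper to show the documented rule; it is not the engine's code.

```shell
#!/usr/bin/env bash
# Documented rule: use delta_interval normally, switch to
# finalize_delta_interval once now is within finalize_window seconds
# of finalize_at. All times are epoch seconds here.
effective_interval() {
  local now="$1" finalize_at="$2"
  local delta_interval="$3" finalize_delta_interval="$4" finalize_window="$5"
  if [ $(( finalize_at - now )) -le "$finalize_window" ]; then
    echo "$finalize_delta_interval"
  else
    echo "$delta_interval"
  fi
}

# 20 minutes out with a 600s window -> normal interval:
effective_interval 1000 2200 300 30 600   # prints: 300
# 5 minutes out -> tighter interval:
effective_interval 1000 1300 300 30 600   # prints: 30
```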
Supported methods:
- CLI:

      ./v2c-engine finalize --spec ./spec.run.multi.example.yaml --vm Centos7 --config ./config.yaml
      ./v2c-engine finalize --spec ./spec.run.multi.example.yaml --vm Centos7 --now --config ./config.yaml

- API:
  - `POST /migration/finalize/{vm}`
  - `POST /migration/finalize/{vm}?now=true`
- UI:
  - `Finalize` and `Finalize Now` actions from the Progress tab
Behavior:
- `Finalize` creates a finalize request and the workflow picks it up in the delta loop.
- `Finalize Now` requests immediate cutover from the delta loop wait:
  - it interrupts the delta sleep and moves to `final_sync` as soon as possible.
  - if currently in `base_copy`, base copy still completes first, then the workflow moves directly into the finalization path.
- During finalization, the engine waits a short settle delay after shutdown before taking the final snapshot.
- Both requests are idempotent.
- If the VM is already complete, finalize calls return success with completion status.
If a VM migration job fails, you can retry it from UI or API without regenerating all settings.
Supported methods:
- API: `POST /migration/retry/{vm}`
  - Optional query: `?spec_file=/absolute/path/to/spec.yaml` to force a specific spec.
  - If `spec_file` is not provided, the server retries using the latest resolved spec for that VM.
- UI: `Retry` action from the Progress tab (enabled only for failed jobs).
Behavior:
- Retry creates a new job ID and keeps previous failed job history.
- Retry is blocked (`409`) if a job for that VM is already `queued` or `running`.
    +----------------------+
    | v2c-engine run       |
    +----------+-----------+
               |
               v
    +----------------------+
    | init                 |
    | - find VM            |
    | - ensure CBT         |
    | - create base snap   |
    +----------+-----------+
               |
               v
    +----------------------+
    | base_copy            |
    | - VDDK reads         |
    | - write QCOW2        |
    | - per-disk parallel  |
    +----------+-----------+
               |
               v
    +----------------------+
    | delta loop           |
    | - wait delta_interval|
    | - create delta snap  |
    | - QueryChanged...    |
    | - apply CBT blocks   |
    +----------+-----------+
               |
    +----------+-------------------+
    |                              |
    | finalize requested?          | no
    | finalize_at reached?         +------> back to delta loop
    |
    v
    +---------------------------+
    | final_sync                |
    | - shutdown source VM      |
    | - create final snapshot   |
    | - apply last CBT changes  |
    +-------------+-------------+
                  |
                  v
    +---------------------------+
    | converting                |
    | - virt-v2v-in-place       |
    |   (if enabled)            |
    +-------------+-------------+
                  |
                  v
    +---------------------------+
    | import_root_disk          |
    | - importVm                |
    | - attach extra NICs       |
    +-------------+-------------+
                  |
                  v
    +---------------------------+
    | import_data_disk          |
    | - importVolume            |
    | - attachVolume            |
    +-------------+-------------+
                  |
                  v
    +---------------------------+
    | done                      |
    +---------------------------+
    # Run one or more VM migrations
    ./v2c-engine run --spec ./spec.run.example.yaml --config ./config.yaml
    ./v2c-engine run --spec ./spec.run.example.yaml --spec ./another-vm.yaml --config ./config.yaml
    ./v2c-engine run --spec ./spec.run.multi.example.yaml --parallel-vms 3 --config ./config.yaml

    # Check status (includes current stage, next stage, finalize_requested, finalize_now_requested)
    ./v2c-engine status --spec ./spec.run.multi.example.yaml --config ./config.yaml
    ./v2c-engine status --spec ./spec.run.multi.example.yaml --vm Centos7 --json --config ./config.yaml

    # Request finalize for selected VM(s) from a batch spec
    ./v2c-engine finalize --spec ./spec.run.multi.example.yaml --vm Centos7 --config ./config.yaml
    ./v2c-engine finalize --spec ./spec.run.multi.example.yaml --vm Centos7 --now --config ./config.yaml
    ./v2c-engine finalize --spec ./spec.run.multi.example.yaml --vm Centos7 --vm-id vm-3312 --config ./config.yaml

    # API service
    ./v2c-engine serve --config ./config.yaml --listen :8000

`finalize` is idempotent:
- If finalize was already requested, the command returns success and reports it.
- If finalize-now was already requested (`--now`), the command returns success and reports it.
- If the VM is already done, the command returns success and reports completion.
Stage order:
`init` -> `base_copy` -> `delta`/`final_sync` -> `converting` -> `import_root_disk` -> `import_data_disk` -> `done`
Highlights:
- Snapshot quiesce policy: `auto` tries quiesced snapshots when VMware Tools are healthy, else falls back to non-quiesced.
- CBT is auto-enabled if not already enabled.
- Parallel VM and parallel disk support.
- CloudStack import of root + data disks; data disk attach handled in workflow.
- Additional NIC mappings are attached after import VM creation.
The UI runs as a service (`v2c-ui`) and talks to `v2c-engine serve`.
API endpoints:
- `GET /vmware/vms`
- `GET /cloudstack/{zones|clusters|storage|networks|diskofferings|serviceofferings}`
- `POST /migration/spec`
- `POST /migration/start`
- `POST /migration/retry/{vm}`
  - Optional query: `?spec_file=...`
- `GET /migration/status/{vm}`
- `GET /migration/jobs`
- `POST /migration/finalize/{vm}`
  - Optional query: `?now=true` for immediate finalize request
- `GET /migration/logs/{vm}`
- `GET /health`
Status payload includes:
- `stage`
- `next_stage`
- `finalize_requested`
- `finalize_now_requested`
- `overall_progress`
- `transfer_speed_mbps`
- `disk_progress`
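The status payload is easy to consume from scripts. The sample payload below is synthetic (field names match the list above, values are invented), and `json_field` is an illustrative POSIX-tools extractor; with `jq` available you would simply use `jq -r .stage`.

```shell
#!/usr/bin/env bash
# Extract fields from a GET /migration/status/{vm} response without jq.
# Sample payload is synthetic; field names match the documented list.
json_field() {
  printf '%s\n' "$1" \
    | sed -n "s/.*\"$2\"[[:space:]]*:[[:space:]]*\"\{0,1\}\([^,\"}]*\)\"\{0,1\}.*/\1/p" \
    | head -n1
}

payload='{"stage":"delta","next_stage":"final_sync","overall_progress":42.5}'
json_field "$payload" stage              # prints: delta
json_field "$payload" overall_progress   # prints: 42.5
```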
See `examples/README.md`.
Known limitations and platform-specific workarounds are tracked in docs/KNOWN_ISSUES.md.
Common templates:
- examples/config.full.example.yaml
- examples/spec.run.single-vm.single-disk.single-nic.yaml
- examples/spec.run.single-vm.multi-disk.multi-nic.yaml
- examples/spec.run.single-vm.defaults-only.yaml
- examples/spec.run.multi-vm.single-disk.single-nic.yaml
- examples/spec.run.multi-vm.multi-disk.multi-nic.yaml
- examples/spec.run.multi-vm.defaults-only.yaml
    go build -o v2c-engine ./cmd/v2c-engine

If VDDK is in a non-default path:

    export CGO_CFLAGS="-I/opt/vmware-vddk/include"
    export CGO_LDFLAGS="-L/opt/vmware-vddk/lib64 -lvixDiskLib -ldl -lpthread"

To uninstall:

    chmod +x ./scripts/uninstall.sh
    sudo ./scripts/uninstall.sh --purge-state

`uninstall.sh` removes service/files/config artifacts created by bootstrap.
It does not auto-remove OS packages.
To print the bootstrap package list for manual review/removal:
./scripts/uninstall.sh --list-packagesbase-copy and delta-sync are hidden by default.
To enable direct expert usage:
export V2C_ENABLE_EXPERT_COMMANDS=1