fs_lvm offline migration does not work
Description
Offline migration of a VM whose system drive is on an fs_lvm datastore does not work: the VM goes to the FAILED state after migration because the LVM volume is not activated on the target node.
Activating the volume should probably be done by the tm/fs_lvm/premigrate script, but that script seems to be called only for live migration.
To Reproduce
Try an offline migration of such a VM.
Expected behavior
The VM migrates successfully.
Details
- Affected Component: Storage
- Hypervisor: KVM
- Version: 5.6.0
Additional context
To solve this problem, the volume needs to be activated after migration and before resuming the VM:
lvchange -ay /dev/vg-one-137/lv-one-6446-0
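As a sketch of that fix, a hook run on the target node after migration could derive the device path and activate it. The helper names below are hypothetical, and the `vg-one-<DS_ID>/lv-one-<VM_ID>-<DISK_ID>` naming is an assumption inferred from the example path above:

```shell
#!/bin/sh
# Hypothetical sketch, not the actual OpenNebula driver code.
# Builds the LV device path from datastore, VM and disk IDs,
# following the naming seen in the example above.
build_lv_path() {
    ds_id=$1; vm_id=$2; disk_id=$3
    echo "/dev/vg-one-${ds_id}/lv-one-${vm_id}-${disk_id}"
}

# In a real postmigrate-style hook this would run on the destination
# host, before the VM is resumed.
activate_lv() {
    lv_path=$(build_lv_path "$1" "$2" "$3")
    lvchange -ay "$lv_path"
}
```

For the IDs from the issue, `build_lv_path 137 6446 0` yields `/dev/vg-one-137/lv-one-6446-0`, the same device the manual workaround activates.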
Progress Status
- Branch created
- Code committed to development branch
- Testing - QA
- Documentation
- Release notes - resolved issues, compatibility, known issues
- Code committed to upstream release/hotfix branches
- Documentation committed to upstream release/hotfix branches
Issue Analytics
- Created 5 years ago
- Comments: 21 (20 by maintainers)
Top GitHub Comments
The hard-coded TYPE=FILE for the volatile disks raises other issues with recent libvirt. At least on CentOS 7, libvirt blocks the live migration because it assumes that disks of type "file" are not on a shared filesystem. But that is only the definition: the real device is not a file, yet libvirt doesn't know that. As a workaround I've created an alternate deploy script that uses a trivial helper to alter the domain XML, replacing the file definition with a block device.
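The helper mentioned in that comment is not shown in the issue; a minimal sketch of the idea, assuming the domain XML uses single-quoted attributes, could rewrite the disk definitions like this (a real helper would restrict the change to the volatile disks only, e.g. with an XML-aware tool):

```shell
#!/bin/sh
# Hypothetical sketch of the workaround: read domain XML on stdin and
# rewrite type='file' disks as type='block' devices, so recent libvirt
# does not reject the live migration as "non-shared storage".
fix_disk_type() {
    sed -e "s/<disk type='file'/<disk type='block'/g" \
        -e "s/<source file=/<source dev=/g"
}
```

Note this text substitution is deliberately naive: it touches every file-backed disk in the XML, which is fine as an illustration but not as a production fix.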
IMO there should be a DISK_TYPE option in TM_MAD_CONF instead of the hard-coded type, just like there is for the IMAGE datastore.

Thanks, this confirms the problem is <TYPE>FILE</TYPE>. We'll revisit the scripts and consider when the behavior should be extended to accommodate this. Thanks.
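For context, the proposed option would sit in the fs_lvm driver's TM_MAD_CONF block in oned.conf. The fragment below is illustrative only: DISK_TYPE is the attribute being proposed (it did not exist for TM_MAD_CONF at the time of the issue), and the other attributes are typical examples, not a verbatim copy of the shipped configuration:

```
TM_MAD_CONF = [
    NAME         = "fs_lvm",
    LN_TARGET    = "SYSTEM",
    CLONE_TARGET = "SYSTEM",
    SHARED       = "YES",
    DISK_TYPE    = "BLOCK"   # proposed option, mirroring the IMAGE datastore attribute
]
```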
Re-planning this for 5.8.2 until we have a clear picture of what is needed in terms of actions.