NUMA aware placement for KVM
Description

This issue is to implement placement constraints on NUMA nodes. A VM can be pinned to a node based on:
- PCI affinity
- Memory affinity
- Manual node assignment
Use case

Improve VM performance by keeping memory and/or I/O accesses within the same NUMA node.
Interface Changes

New attributes to set NUMA placement preferences:
NUMA = [
  NODE_POLICY = <PCI | MEMORY | AUTO | MANUAL>
  ALLOCATION_POLICY = <DEDICATED | NODE>
  NUMA_NODE_ID = <>
  HUGE_PAGE_SIZE_MB = <>
]
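As an illustration, a VM template pinning the VM to NUMA node 1 with 2 MB hugepages might look like the fragment below. The concrete values (and the use of quoted, comma-separated attributes, which follows the usual OpenNebula template syntax) are assumptions for the example, not taken from the issue:

```
NUMA = [
  NODE_POLICY = "MANUAL",
  ALLOCATION_POLICY = "DEDICATED",
  NUMA_NODE_ID = "1",
  HUGE_PAGE_SIZE_MB = "2"
]
```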
NUMA node selection (NODE_POLICY attribute):
- PCI affinity: place the VM in the same NUMA node as its PCI passthrough devices. If multiple devices are present, the VM will be placed in the node of the first one.
- MEMORY guided: place the VM in the first NUMA node with enough free memory.
- MANUAL: place the VM in the node given by NUMA_NODE_ID.
- AUTO: as assigned by numad (default). This implies:
<numatune>
<memory mode='strict' placement='auto'/>
</numatune>
...
<vcpu placement='auto'>2</vcpu>
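The selection rules above can be sketched as a small function. This is a minimal illustration of the described logic only; the function name and the node/device data structures are assumptions, not OpenNebula's actual scheduler code:

```python
# Sketch of the NODE_POLICY selection rules described above.
# Node and device data are plain dicts here; in a real host they would
# come from monitoring probes (e.g. /sys/bus/pci/devices/*/numa_node).

def select_numa_node(policy, nodes, pci_devices=None, vm_memory_mb=0,
                     manual_node=None):
    """Return the NUMA node id for a VM, or None to defer to numad (AUTO).

    nodes: dict node_id -> {"free_memory_mb": int}
    pci_devices: list of {"numa_node": int} for passthrough devices
    """
    if policy == "PCI":
        # Place the VM in the node of the *first* passthrough device.
        if pci_devices:
            return pci_devices[0]["numa_node"]
        raise ValueError("PCI policy requires at least one passthrough device")
    if policy == "MEMORY":
        # First node with enough free memory.
        for node_id, info in sorted(nodes.items()):
            if info["free_memory_mb"] >= vm_memory_mb:
                return node_id
        raise ValueError("no NUMA node with enough free memory")
    if policy == "MANUAL":
        return manual_node
    # AUTO: no node chosen here; placement='auto' is emitted in the XML.
    return None
```

For example, with two nodes where only node 1 has 8 GB free, a 4 GB VM under the MEMORY policy lands on node 1, while the PCI policy follows the first passthrough device regardless of memory.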
vCPU and memory will be assigned according to ALLOCATION_POLICY as follows:
- NODE: OpenNebula will pin the vCPUs and memory to the resources of the selected NUMA node (default):
<vcpu placement='static' cpuset='0-13,28-41'>2</vcpu>
...
<numatune>
<memory mode='strict' nodeset='0'/>
</numatune>
Other VMs can be assigned to the same NUMA cores in the node.
- DEDICATED: same as NODE, but each vCPU is pinned to a specific core of the NUMA node. OpenNebula will also mark the node as used, so it will not be assigned to any other VM:
<cputune>
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='2'/>
</cputune>
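Emitting the <cputune> fragment for the DEDICATED policy is straightforward string generation. The helper below is a hypothetical sketch, not OpenNebula's actual KVM driver code:

```python
# Build the libvirt <cputune> fragment that pins each vCPU to one
# physical core, as in the DEDICATED example above.

def cputune_xml(vcpu_to_pcpu):
    """vcpu_to_pcpu: list of physical core ids, indexed by vCPU id."""
    pins = "\n".join(
        f"  <vcpupin vcpu='{vcpu}' cpuset='{pcpu}'/>"
        for vcpu, pcpu in enumerate(vcpu_to_pcpu)
    )
    return f"<cputune>\n{pins}\n</cputune>"
```

Calling `cputune_xml([0, 2])` reproduces the two-vCPU example shown above (vCPU 0 on core 0, vCPU 1 on core 2).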
Additional Context

N/A
Progress Status
- Branch created
- Code committed to development branch
- Testing - QA
- Documentation
- Release notes - resolved issues, compatibility, known issues
- Code committed to upstream release/hotfix branches
- Documentation committed to upstream release/hotfix branches
Issue Analytics

- Created 4 years ago
- Comments: 16 (16 by maintainers)
Top GitHub Comments
Hi, I’m currently working on #3673. Not only CPU_MODEL, but all configuration options from vmm_exec_kvm.conf will be possible to specify in the cluster or in the host. The priority of the values will be VM > Host > Cluster > config file.
However, the RAW configuration option now works slightly differently: the RAW from the config file is appended to the VM's RAW. This behavior must be preserved, so the logic for RAW will be: use the VM RAW and append the first RAW option found in the Host, Cluster, or config file.
Hi, thanks for the reply. I think my request is more related to #3664, because what I am doing in the raw XML is de facto pinning to a NUMA node and enabling hugepages.