Hello Stefan,
Yes, now that you mention it, it seems that "Generate from host NUMA configuration" in 15.04 simply puts everything in node 0. At least, that's what I'm seeing. While that's a little better than spanning the entire guest across both nodes as a default, leaving an entire second node available for sunshine and rainbows is not the behavior I want.
Manual pinning seems to be the only way to go. Unfortunately, that puts a heavy strain on managing those resources -- I have numerous scrap papers lying all over my office with CPU and memory counts under node columns, with arrows pointing left and right. It's comical to think about, but ...
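For anyone following along, the scrap-paper bookkeeping ends up expressed per guest in the domain XML. A minimal sketch, assuming a hypothetical 4-vCPU guest and a host where NUMA node 0 owns CPUs 0-7 (check "virsh capabilities" for the real layout):

```xml
<!-- Hypothetical guest: 4 vCPUs and memory pinned to host NUMA node 0.
     Host CPU numbers 0-3 are an assumption; verify with "virsh capabilities". -->
<vcpu placement='static' cpuset='0-3'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
```

The numatune element is what keeps the guest's memory on the same node as its pinned vCPUs; without it the vCPUs can end up on node 0 while allocations drift to node 1.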
It's very surprising that libvirt and Ubuntu are not able to recognize the available NUMA resources when starting guests and automatically place each one in whichever node best fits its requirements.
I understand your comment about manual pinning being broken. From what I can tell, though, it is working fine here, so essentially the LTS release seems covered.
Now that you mention vcpu and cpuset, I'm very curious to try running VMs with a topology entirely different from what libvirt knows about: manually exposing physical cores as "cores" and their HT siblings as "threads" (hoping that in the future this becomes a built-in feature). I don't really know much about this area -- maybe it already exists, or maybe I've just lost my marbles.
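This can actually be approximated today by combining a guest topology with sibling-aware pinning. A sketch, assuming a hypothetical host where 0/8 and 1/9 are HT sibling pairs (verify via /sys/devices/system/cpu/cpu0/topology/thread_siblings_list before copying the numbers):

```xml
<!-- Guest sees 1 socket x 2 cores x 2 threads; each guest "core" maps to a
     real physical core plus its HT sibling. Host CPU numbers are assumptions. -->
<cpu>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>  <!-- guest core 0, thread 0 -> physical CPU 0 -->
  <vcpupin vcpu='1' cpuset='8'/>  <!-- guest core 0, thread 1 -> its HT sibling -->
  <vcpupin vcpu='2' cpuset='1'/>  <!-- guest core 1, thread 0 -->
  <vcpupin vcpu='3' cpuset='9'/>  <!-- guest core 1, thread 1 -->
</cputune>
```

With this, the guest's scheduler knows which of its CPUs are HT siblings and can make the same placement decisions the host would.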
In all of the tuning aspects of libvirt, I am always concerned with the "quality" of an HT sibling compared to a full physical core. There are a few things here. First, I do not want to waste a potentially usable thread by disabling HT. Second, under heavy load I would prefer that a guest with 2 cores be placed on a physical core together with its HT sibling, with a second physical core and its sibling made available the same way.
As I type this email, I have a database server that is happily pinned to only HT cores right now. I can't imagine that is detrimental to its function, but I would prefer some sort of policy to ensure each guest is operating on at least one legitimate physical core, and not entirely on the core leftovers.
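Lacking such a policy in libvirt itself, it could be checked from outside. A rough sketch (the sibling pairs below are hypothetical -- on a real host they would be parsed from /sys/devices/system/cpu/cpu*/topology/thread_siblings_list) of a helper that flags guests pinned entirely to leftovers:

```python
def covers_full_core(pinned, siblings):
    """Return True if the pinned CPU set contains every hardware thread
    of at least one physical core.

    pinned   -- set of host CPU numbers the guest is pinned to
    siblings -- list of sets, each holding the thread siblings of one
                physical core
    """
    # set <= set is Python's subset test: is some whole core inside 'pinned'?
    return any(core <= pinned for core in siblings)

# Hypothetical host: physical cores are the HT pairs (0,8), (1,9), (2,10), (3,11)
siblings = [{0, 8}, {1, 9}, {2, 10}, {3, 11}]

print(covers_full_core({8, 9}, siblings))  # only HT leftovers -> False
print(covers_full_core({1, 9}, siblings))  # one complete core -> True
```

A cron job could run something like this against each guest's vcpupin settings and complain about any domain that fails the check.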
Maybe I am thinking too far down the road here :-). You enlighten me greatly, Stefan; your wisdom is always appreciated.
Thanks again.
~Laz