In nova.conf you have
[pci]
passthrough_whitelist = {"address":"*:03:00.*","physical_network":null}
Note the address is not relevant here, just the "physical_network": null.
I'm assuming the address matches a PF in this case and that it is in switchdev mode.
It would be in switchdev mode if its VFs were intended to be used for hardware-offloaded OVS.
A more common example might be
[pci]
passthrough_whitelist = {"devname":"<pf netdev name>","physical_network":null}
While we discourage the use of devname in the whitelist, many people
use it as an easy way to whitelist the NIC and, if it's a PF, any of its child VFs.
As a user, I create a port with vnic-type=direct-physical on a vxlan or geneve network.
i.e. openstack port create --vnic-type direct-physical --network my-tunneled-network my-pf
NOTE: (Neutron will choose the first segmentation type driver from its list of enabled tenant segmentation types, which is typically a tunnel network, to preserve the vlan/flat networks for the admin to use as provider networks. The network type is available to admin users via an openstack network show but not to normal users, so they don't know what type of network they will get.)
Then the user boots a vm with this port,
i.e. openstack server create --port my-pf my-server
At this point nova will schedule the vm to a host that has a free PF.
Nova will consider any PF that is whitelisted, and that has not had a vm claim one of its VFs,
as an available PF. This is a feature many rely on so they do not need to decide up front whether a host NIC should be used for PFs or for VFs; e.g. the NIC can be consumed as a PF if none of its VFs is already hosting an instance, or it can be used for VFs, without needing to update the nova config and restart the agent.
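To make that availability rule concrete, here is a minimal sketch of the accounting described above; it is an illustration with made-up names (Pf, child_vfs, pf_is_available), not nova's actual pci tracker code.

from dataclasses import dataclass, field

@dataclass
class Pf:
    address: str
    child_vfs: list = field(default_factory=list)

def pf_is_available(pf: Pf, allocated: set) -> bool:
    # a whitelisted PF is schedulable as a whole device only while the
    # PF itself and every one of its child VFs remain unallocated
    if pf.address in allocated:
        return False
    return not any(vf in allocated for vf in pf.child_vfs)

pf = Pf("0000:03:00.0", child_vfs=["0000:03:00.2", "0000:03:00.3"])
assert pf_is_available(pf, set())
assert not pf_is_available(pf, {"0000:03:00.2"})  # a VF claim consumes the PF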
If the PF is in switchdev mode (e.g. because you intended its VFs to be used via ovs) then
it will pass this check in the ovn driver:
capabilities = ovn_utils.get_port_capabilities(port)
if (vnic_type in ovn_const.EXTERNAL_PORT_TYPES and
        ovn_const.PORT_CAP_SWITCHDEV not in capabilities):
    LOG.debug("Refusing to bind port due to unsupported vnic_type: %s "
              "with no switchdev capability", vnic_type)
    return
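For context, the switchdev capability is carried in the port's binding profile, which nova populates for switchdev-capable devices. The following is a simplified sketch of that lookup based on the check above, with an assumed profile layout; it is not the literal neutron helper.

def get_port_capabilities(port):
    # read the capabilities recorded in the port's binding profile
    return port.get('binding:profile', {}).get('capabilities', [])

port = {'binding:vnic_type': 'direct-physical',
        'binding:profile': {'capabilities': ['switchdev']}}
assert 'switchdev' in get_port_capabilities(port)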
So vnic_type direct-physical will be in the list of external port types, and because
the PF was configured for hardware-offloaded ovs it will have the switchdev capability.
At this point nova will use that bound port to create the vm. This happens in two parts.
It would be my hope that the first step of spawning the vm on the host, plugging the interface into the network backend, would fail, but it will not. In this case that is done by os-vif.
How this works is that nova first looks at the vif-type set by the port's ml2 mechanism driver,
which will be ovs.
Since this is bound by ovn, it will not take the second branch, since hybrid_plug=false for ovn:
elif vif.is_hybrid_plug_enabled():
    obj = _get_vif_instance(
        vif,
        objects.vif.VIFBridge,
        port_profile=profile,
        plugin="ovs",
        vif_name=vif_name,
        bridge_name=_get_hybrid_bridge_name(vif))
So it will take the else:
else:
    obj = _get_vif_instance(
        vif,
        objects.vif.VIFOpenVSwitch,
        port_profile=profile,
        plugin="ovs",
        vif_name=vif_name)
    if vif["network"]["bridge"] is not None:
        obj.bridge_name = vif["network"]["bridge"]
This constructs a standard VIFOpenVSwitch object, which corresponds to the normal ovs backend.
This will cause os-vif to simply create a normal ovs port, which will succeed.
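As a rough illustration (not the real os-vif ovs plugin code, which goes through os-vif's ovsdb api), the effective result of plugging a VIFOpenVSwitch is just adding a plain port to the integration bridge, which succeeds regardless of the claimed PF:

import subprocess

def plug_vif_openvswitch(vif_name: str, bridge: str = 'br-int') -> None:
    # equivalent of what the plug ends up doing to the host bridge
    subprocess.check_call(
        ['ovs-vsctl', '--may-exist', 'add-port', bridge, vif_name])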
The next step of spawning the vm is generating the xml, which for vnic_type direct-physical should be the hostdev element.
As it's a PF we cannot use the interface element, so we cannot enforce a mac address or any other isolation. Since we are providing the entire PF, the guest could change that anyway.
Now I think we are actually saved here because of how the
_get_config_os_vif function works,
and the vm will boot without the sriov interface attached even though it has claimed it.
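Here is an illustrative reduction of that dispatch (an assumed shape, not nova's literal code): the generated guest xml is chosen by the python type of the os-vif object, so a VIFOpenVSwitch yields a normal ovs interface element and the claimed PF never reaches the domain xml.

class VIFOpenVSwitch: ...
class VIFHostDevice: ...

def get_config(vif):
    # the real function dispatches on isinstance checks over os-vif types
    if isinstance(vif, VIFHostDevice):
        return "<hostdev .../>"    # would pass the whole device through
    if isinstance(vif, VIFOpenVSwitch):
        return "<interface .../>"  # what this port actually gets

print(get_config(VIFOpenVSwitch()))  # -> <interface .../>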
If ovn or any other ml2 driver accepted a tunneled network (vxlan, gre, geneve) and returned vif_type hw_veb, however, we would generate the hostdev xml, and that would be a security hole.
Since this is a pluggable part of neutron, and since neutron planned to support external interfaces for standard sriov ports which would use hw_veb, this is still a concern, but maybe not a security bug. This is still very, very broken behavior.
Just to clarify the possible security issue, I'll try and give an exact scenario.
Neutron deployed with ml2/ovn after support for external ports is added with vnic-type direct-physical:
https://github.com/openstack/neutron/commit/269734184645e7e168a7a6de9352ef79aae8b6f4
In nova.conf you have
[pci]
passthrough_whitelist = {"address":"*:03:00.*","physical_network":null}
Note the address is not relevant here, just the "physical_network": null.
I'm assuming the address matches a PF in this case and that it is in switchdev mode.
It would be in switchdev mode if its VFs were intended to be used for hardware-offloaded OVS.
A more common example might be
[pci]
passthrough_whitelist = {"devname":"<pf netdev name>","physical_network":null}
While we discourage the use of devname in the whitelist, many people
use it as an easy way to whitelist the NIC and, if it's a PF, any of its child VFs.
As a user, I create a port with vnic-type=direct-physical on a vxlan or geneve network.
i.e. openstack port create --vnic-type direct-physical --network my-tunneled-network my-pf
Note many users will not know if the private tenant networks they create are backed by a tunnel or not: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-network-detail#create-network
Per the api-ref, the segmentation type is optional and, if not set, is chosen by neutron.
NOTE: (Neutron will choose the first segmentation type driver from its list of enabled tenant segmentation types, which is typically a tunnel network, to preserve the vlan/flat networks for the admin to use as provider networks. The network type is available to admin users via an openstack network show but not to normal users, so they don't know what type of network they will get.)
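To see that difference in practice, an admin can read the provider attributes with openstacksdk, while the same field is hidden from normal users. A small sketch; the cloud name 'admin' and the network name are assumptions:

import openstack

conn = openstack.connect(cloud='admin')  # admin credentials assumed
net = conn.network.find_network('my-tunneled-network')
print(net.provider_network_type)  # e.g. 'geneve'; not returned for non-admins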
Then the user boots a vm with this port,
i.e. openstack server create --port my-pf my-server
At this point nova will schedule the vm to a host that has a free PF.
Nova will consider any PF that is whitelisted, and that has not had a vm claim one of its VFs,
as an available PF. This is a feature many rely on so they do not need to decide up front whether a host NIC should be used for PFs or for VFs; e.g. the NIC can be consumed as a PF if none of its VFs is already hosting an instance, or it can be used for VFs, without needing to update the nova config and restart the agent.
If the PF is in switchdev mode (e.g. because you intended its VFs to be used via ovs) then
it will pass this check in the ovn driver:
https://github.com/openstack/neutron/blob/a3dc80b509d72c8d1a3ea007cb657a9e217ba66a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py#L861-L865
capabilities = ovn_utils.get_port_capabilities(port)
if (vnic_type in ovn_const.EXTERNAL_PORT_TYPES and
        ovn_const.PORT_CAP_SWITCHDEV not in capabilities):
    LOG.debug("Refusing to bind port due to unsupported vnic_type: %s "
              "with no switchdev capability", vnic_type)
    return
ovn_const.EXTERNAL_PORT_TYPES is defined here: https://github.com/openstack/neutron/blob/a3dc80b509d72c8d1a3ea007cb657a9e217ba66a/neutron/common/ovn/constants.py#L286-L289
EXTERNAL_PORT_TYPES = (portbindings.VNIC_DIRECT,
                       portbindings.VNIC_DIRECT_PHYSICAL,
                       portbindings.VNIC_MACVTAP)
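To make the guard concrete, here is a tiny reproduction of it with the constant string values inlined (an illustration; the values match the portbindings and ovn constants as far as I know):

VNIC_DIRECT_PHYSICAL = 'direct-physical'  # portbindings.VNIC_DIRECT_PHYSICAL
PORT_CAP_SWITCHDEV = 'switchdev'          # ovn_const.PORT_CAP_SWITCHDEV
EXTERNAL_PORT_TYPES = ('direct', 'direct-physical', 'macvtap')

def refuses_to_bind(vnic_type, capabilities):
    return (vnic_type in EXTERNAL_PORT_TYPES
            and PORT_CAP_SWITCHDEV not in capabilities)

# a PF set up for hardware-offloaded ovs carries 'switchdev', so binding proceeds
assert not refuses_to_bind(VNIC_DIRECT_PHYSICAL, ['switchdev'])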
So vnic_type direct-physical will be in the list of external port types, and because
the PF was configured for hardware-offloaded ovs it will have the switchdev capability.
At this point nova will use that bound port to create the vm. This happens in two parts.
It would be my hope that the first step of spawning the vm on the host, plugging the interface into the network backend, would fail, but it will not. In this case that is done by os-vif.
How this works is that nova first looks at the vif-type set by the port's ml2 mechanism driver,
which will be ovs.
https://github.com/openstack/nova/blob/56ac9b22cfa71daaf7a452fda63bd27811a358c4/nova/network/os_vif_util.py#L523-L524
So nova_to_osvif_vif, which converts the port into an os-vif class, will call _nova_to_osvif_vif_ovs:
https://github.com/openstack/nova/blob/56ac9b22cfa71daaf7a452fda63bd27811a358c4/nova/network/os_vif_util.py#L328-L358
Since the vnic-type is direct-physical it will not take the first branch:

if vnic_type == model.VNIC_TYPE_DIRECT:
    obj = _get_vnic_direct_vif_instance(
        vif,
        port_profile=_get_ovs_representor_port_profile(vif),
        plugin="ovs")
    _set_representor_datapath_offload_settings(vif, obj)
Since this is bound by ovn, it will not take the second branch, since hybrid_plug=false for ovn:

elif vif.is_hybrid_plug_enabled():
    obj = _get_vif_instance(
        vif,
        objects.vif.VIFBridge,
        port_profile=profile,
        plugin="ovs",
        vif_name=vif_name,
        bridge_name=_get_hybrid_bridge_name(vif))

So it will take the else:

else:
    obj = _get_vif_instance(
        vif,
        objects.vif.VIFOpenVSwitch,
        port_profile=profile,
        plugin="ovs",
        vif_name=vif_name)
    if vif["network"]["bridge"] is not None:
        obj.bridge_name = vif["network"]["bridge"]
This constructs a standard VIFOpenVSwitch object, which corresponds to the normal ovs backend.
This will cause os-vif to simply create a normal ovs port, which will succeed.
The next step of spawning the vm is generating the xml, which for vnic_type direct-physical should be the hostdev element:
https://libvirt.org/formatdomain.html#host-device-assignment
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source writeFiltering='no'>
    <address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
  </source>
</hostdev>
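As a side note, nova builds such an element with its libvirt config objects; a hedged sketch follows (the class exists in nova.virt.libvirt.config, but the exact call site and attribute handling are simplified here):

from nova.virt.libvirt import config as vconfig

dev = vconfig.LibvirtConfigGuestHostdevPCI()
dev.domain, dev.bus, dev.slot, dev.function = '0x0000', '0x06', '0x02', '0x0'
print(dev.to_xml())  # emits a <hostdev mode='subsystem' type='pci'> element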
As it's a PF we cannot use the interface element, so we cannot enforce a mac address or any other isolation. Since we are providing the entire PF, the guest could change that anyway.
Now I think we are actually saved here because of how the
_get_config_os_vif function works:
https://github.com/openstack/nova/blob/56ac9b22cfa71daaf7a452fda63bd27811a358c4/nova/virt/libvirt/vif.py#L493-L534
Because the os-vif object is VIFOpenVSwitch we will take this elif:
https://github.com/openstack/nova/blob/56ac9b22cfa71daaf7a452fda63bd27811a358c4/nova/virt/libvirt/vif.py#L516-L517
And we will create a normal ovs interface that will conveniently match the one we plugged earlier instead:
https://github.com/openstack/nova/blob/56ac9b22cfa71daaf7a452fda63bd27811a358c4/nova/virt/libvirt/vif.py#L446-L450
And the vm will boot without the sriov interface attached even though it has claimed it.
If ovn or any other ml2 driver accepted a tunneled network (vxlan, gre, geneve) and returned vif_type hw_veb, however, we would generate the hostdev xml above, and that would be a security hole.
Since this is a pluggable part of neutron, and since neutron planned to support external interfaces for standard sriov ports which would use hw_veb, this is still a concern, but maybe not a security bug. This is still very, very broken behavior.