Infrastructure Adventures

12/08/2010

I/O Virtualization Overview: CNA, SR-IOV, VN-Tag and VEPA

Filed under: Network, Virtualization — Joe Keegan @ 3:58 PM

I’m trying to wrap my mind around the I/O virtualization starting to show up in virtualization and converged infrastructure deployments. There’s a whole host of new acronyms and technologies like CNA, SR-IOV, VN-Tag and VEPA. What are these things and how do they relate? Let’s find out.

CNA

A CNA is a Converged Network Adapter, and this one is pretty straightforward to understand. It’s a PCI card that is both a NIC and an FCoE HBA. Basically it does both run-of-the-mill Ethernet networking and FCoE SAN.

The CNA shows up in the OS as two different cards, a NIC and an HBA. To be able to use the FCoE side of things you’ll need to connect the CNA to an FCoE device, such as an FCoE switch.

Generally a trunk is delivered to the CNA from the switch with an FCoE VLAN and a data VLAN (or more than one data VLAN in the case of virtualization), and the FCoE frames are set to CoS 3.
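A rough way to picture the split is a classifier on the converged link: FCoE frames (EtherType 0x8906, typically marked CoS 3 on the FCoE VLAN) are handed to the HBA side of the card, and everything else goes to the NIC side. Below is a minimal Python sketch of that idea; the VLAN numbers are made up purely for illustration.

```python
# Minimal sketch of how a CNA steers traffic arriving on a converged trunk.
# The VLAN numbers are made up for illustration; 0x8906 is the FCoE EtherType.

ETHERTYPE_FCOE = 0x8906

FCOE_VLAN = 100          # hypothetical FCoE VLAN carried on the trunk
DATA_VLANS = {10, 20}    # hypothetical data VLANs (more than one for virtualization)

def classify(frame):
    """Decide whether a frame belongs to the HBA (FCoE) or NIC side of the CNA."""
    if frame["ethertype"] == ETHERTYPE_FCOE and frame["vlan"] == FCOE_VLAN:
        return "hba"     # FCoE SAN traffic, normally marked CoS 3 by the switch
    if frame["vlan"] in DATA_VLANS:
        return "nic"     # run-of-the-mill Ethernet traffic
    return "drop"        # VLAN not carried on this trunk

print(classify({"ethertype": ETHERTYPE_FCOE, "vlan": 100, "cos": 3}))  # -> hba
print(classify({"ethertype": 0x0800, "vlan": 10, "cos": 0}))           # -> nic
```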

SR-IOV

Single Root I/O Virtualization (SR-IOV) is a standards-based way of virtualizing PCI cards, meaning a single PCI card can show up as many “virtual” PCI cards. For example, if you have a NIC that supports SR-IOV, that NIC can show up as multiple “virtualized” NICs (not to be confused with a vNIC).

With SR-IOV each card has a Physical Function (PF), which is the interface to the physical card, and one or more Virtual Functions (VFs), which are the virtualized instances of the card. The PF looks and acts like a normal PCI card, with the additional ability to configure VFs. A VF also looks and acts like a normal PCI card, but with limited ability to make configuration changes.

The intention is for the hypervisor to access the PF, configure the VFs, and then have each VM access a VF directly. For this to work the hypervisor/OS must support SR-IOV, for example via VMware’s VMDirectPath Dynamic feature.
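To make the PF/VF split a bit more concrete, here is a toy Python sketch of the relationship: the PF owns the card-wide configuration and spawns the VFs, while a VF carries the data path for its VM but can’t reconfigure itself. The class and method names are purely illustrative and not any real driver API.

```python
# Toy model of the SR-IOV split between a Physical Function (PF) and
# Virtual Functions (VFs). Names are illustrative, not a real driver API.

class VirtualFunction:
    def __init__(self, vf_index, mac):
        self.vf_index = vf_index
        self.mac = mac

    def send(self, frame):
        # The VM drives the VF directly for I/O, largely bypassing the hypervisor.
        print(f"VF {self.vf_index} ({self.mac}) transmitting {len(frame)} bytes")

    def set_mac(self, mac):
        # A VF looks like a normal NIC but has limited ability to reconfigure itself.
        raise PermissionError("configuration changes must go through the PF")

class PhysicalFunction:
    def __init__(self):
        self.vfs = []

    def create_vf(self, mac):
        # Only the PF (owned by the hypervisor) creates and configures VFs.
        vf = VirtualFunction(len(self.vfs), mac)
        self.vfs.append(vf)
        return vf

# The hypervisor configures the card via the PF, then hands a VF to a VM.
pf = PhysicalFunction()
vf0 = pf.create_vf("02:00:00:00:00:01")
vf0.send(b"\x00" * 64)
```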

SR-IOV is still quite new and there is a lot of development in this area. Not everything is fully baked, and one area that still needs to be standardized is how inter-VM switching is done. This is complicated by the fact that each VM uses a VF directly and, for the most part, bypasses the hypervisor.

One of the ways inter-VM switching can be accomplished is by using an external switch. There are two main proposals right now working their way toward becoming standards, or possibly a single standard: Cisco’s VN-Tag (802.1Qbh) and HP’s VEPA (802.1Qbg).

VN-Tag (802.1Qbh)

Basically, the frames from each VM are tagged with an identifier, called a VN-Tag, and the switch has a virtual interface (VIF, seen as a vEth in Cisco products) mapped to each identifier/VN-Tag. From a switching point of view, the switch treats virtual and physical interfaces the same.

When the switch receives a frame with a VN-Tag it determines which VIF the frame maps to and treats the frame as having been received on that interface. The frame is then switched as normal to either a physical interface or another virtual interface.

So in a deployment using SR-IOV, the traffic from each VF would be VN-Tagged, which ties the VF to a specific VIF (vEth). For example, a VM might use a VF configured with VN-Tag 10, and traffic with VN-Tag 10 would belong to VIF 1/0/1 on the switch.
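One way to picture the switch side is a lookup table from the identifier carried in the VN-Tag to a virtual interface; once the frame has been mapped to its VIF it is switched like any other frame. A rough Python sketch, with made-up tag values and interface names:

```python
# Rough sketch of VN-Tag handling on the switch's ingress side.
# The tag values and interface names below are made up for illustration.

VIF_TABLE = {
    10: "vEth1/0/1",   # VN-Tag 10 -> virtual interface for one VM's VF
    20: "vEth1/0/2",   # VN-Tag 20 -> virtual interface for another VM's VF
}

def receive_tagged_frame(vn_tag, frame):
    """Map the VN-Tag to a VIF and treat the frame as received on that interface."""
    vif = VIF_TABLE.get(vn_tag)
    if vif is None:
        return None    # no virtual interface configured for this tag
    # From here on the switch treats the frame as if it arrived on `vif`,
    # and it is switched normally to a physical or another virtual interface.
    print(f"frame received on {vif}, switched as usual")
    return vif

receive_tagged_frame(10, b"\x00" * 64)   # lands on vEth1/0/1
```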

Obviously, to use VN-Tags both the host (i.e. the NIC, hypervisor, what have you) and the switch need to understand VN-Tags. Cisco uses VN-Tags in several products, including the Nexus 1000V, the Nexus 2000 Fabric Extenders and the UCS.

VEPA (802.1Qbg)

Virtual Ethernet Port Aggregator (VEPA) also enables inter-VM switching via an external switch. The basics of VEPA are a bit simpler than VN-Tag, since the frames from the VMs are not tagged or modified in any way.

With VEPA, all the frames from the VMs are forwarded out to the switch. If two VMs on the same server are communicating, the frames from one VM are forwarded out the physical server’s NIC to the switch, and the switch forwards them right back to the server to be delivered to the destination VM.

For VEPA to work, the behavior of the switch has to be changed to allow this hairpinning, since today a switch will not forward a frame out of the interface it was received on.
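This modified behavior is sometimes described as a “reflective relay” or hairpin mode: the adjacent switch is allowed to forward a frame back out the same port it arrived on when both VMs sit behind that port. A small Python sketch of the forwarding decision, with made-up MAC addresses and port names:

```python
# Sketch of the VEPA "hairpin" (reflective relay) forwarding decision.
# The MAC addresses and port names below are made up for illustration.

MAC_TABLE = {
    "02:00:00:00:00:01": "eth1/1",   # VM A, behind the server's uplink
    "02:00:00:00:00:02": "eth1/1",   # VM B, behind the same uplink
    "02:00:00:00:00:03": "eth1/2",   # some other host
}

def forward(ingress_port, dst_mac, reflective_relay=True):
    egress_port = MAC_TABLE.get(dst_mac)
    if egress_port is None:
        return "flood"
    if egress_port == ingress_port and not reflective_relay:
        # Standard behavior: never send a frame back out the port it came in on.
        return "drop"
    # With VEPA the switch hairpins the frame back down the same port so it
    # can be delivered to the destination VM on the server.
    return egress_port

print(forward("eth1/1", "02:00:00:00:00:02"))                          # -> eth1/1 (hairpin)
print(forward("eth1/1", "02:00:00:00:00:02", reflective_relay=False))  # -> drop
```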

Final Notes

As you can imagine, there are lots of nuances around the implementation of both VN-Tag and VEPA, which I hope to cover in detail in future posts. But this at least helped me sort out what these four terms and technologies are and how they relate. I hope it can help you too.

In the future I’ll look at the network in the Cisco UCS and HP Matrix to see how these technologies are put to work.


2 Comments »

  1. Well, VEPA sounds a bit simpler than Qbh/VN-Tag because it does not provide the same functionality.

    For multiple independent virtual interfaces to be treated differently, as Qbh does, you need Multichannel VEPA (Qbc-Qbg). This also uses tagging, utilizing Q-in-Q mechanisms to tunnel the different interfaces, but without allowing for cascading of virtual connectors, with some multicast inefficiencies, and requiring new hardware.

    That picture puts both complexities into a different perspective…

    Good work still!!

    Comment by Pablo Carlier — 03/07/2011 @ 11:57 AM

    • Thanks for the info and for stopping by. It’s been a while since I wrote this and it seems like it’s time for an update.

      Comment by Joe Keegan — 03/08/2011 @ 12:37 PM

