Fun with QinQ tunnels – Part 1

QinQ (802.1Q-in-802.1Q) tunnels extend a VLAN across a network or the internet. The usual arrangement is a standard VLAN in your network connecting, at both ends, to a QinQ tunnel in the service provider's network.

This allows multiple VLANs in your network to be encapsulated within a single provider VLAN across the demarc boundaries, and back into your network at another site.
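Under the hood, the provider's tag is simply stacked in front of the customer's tag. Sketching a frame from customer VLAN 4 as it crosses the VLAN 10 tunnel (outer tag first; by default the Catalysts used here mark both tags with the 0x8100 ethertype):

Dst MAC | Src MAC | 0x8100 / VLAN 10 | 0x8100 / VLAN 4 | payload | FCS

The provider switches only ever look at the outer tag; the inner tag, and everything else inside the frame, just rides along.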

There are a couple of prerequisites for setting up QinQ tunnels. The MTU must be increased to accommodate the larger frames: each 802.1Q tag adds 4 bytes, so a full-size 1500-byte frame grows to 1504 bytes once the outer tag is added (and note that on these platforms the system mtu commands only take effect after a reload). You also need a switch that supports QinQ; for this I used a 3560 and a 3750, both running Advanced IP Services. The Inside switches in the first diagram are 3550s; in the final diagram we used the same 3750.

The basic diagram looks like this:

QinQ tunnels
A standard trunk port on the Inside switch (e0/1) connects to the QinQ tunnel port on the provider switch (e0/1). The two provider switches are joined by a standard trunk carrying VLAN 10 (e0/10 – e0/10), and the QinQ tunnel port on the far provider switch (e0/1) connects to a standard trunk port (e0/1) on the other Inside switch. The dotted line shows how the switches, and the end user, see the link.

The configuration of the switches would be as follows:

Inside switch (left hand side)

vlan 4
  name Client_VLAN
vlan 5
  name Server_VLAN
vlan 6
  name Other_VLAN
int e0/1
  description **** Link to QinQ ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 4,5,6
  switchport mode trunk
int e0/4
  switchport access vlan 4
int e0/5
  switchport access vlan 5
int e0/6
  switchport access vlan 6
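At this point a quick sanity check on the Inside switch never hurts; these are standard IOS show commands, and assume the port numbering above:

show vlan brief
show interfaces trunk

The first should list VLANs 4–6 with their access ports; the second should show e0/1 trunking VLANs 4,5,6.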

Provider Switch (left hand side)

system mtu 1998
system mtu jumbo 9000
vlan 10
  name QinQ_VLAN
int e0/1
  description **** QinQ VLAN ****
  switchport access vlan 10
  switchport trunk encapsulation dot1q
  switchport mode dot1q-tunnel
  no keepalive
  l2protocol-tunnel cdp
  l2protocol-tunnel stp
int e0/10
  description **** Provider to Provider link ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 10
  switchport mode trunk
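On the provider switch you can confirm that the tunnel port and the protocol tunnelling took effect (standard IOS commands; exact output varies by platform):

show dot1q-tunnel
show l2protocol-tunnel
show system mtu

show system mtu reports the values actually in effect, which is worth checking since the configured values don't apply until the switch has been reloaded.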

Provider Switch (right hand side)

system mtu 1998
system mtu jumbo 9000
vlan 10
  name QinQ_VLAN
int e0/1
  description **** QinQ VLAN ****
  switchport access vlan 10
  switchport trunk encapsulation dot1q
  switchport mode dot1q-tunnel
  no keepalive
  l2protocol-tunnel cdp
  l2protocol-tunnel stp
int e0/10
  description **** Provider to Provider link ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 10
  switchport mode trunk

Inside switch (right hand side)

vlan 4
  name Client_VLAN
vlan 5
  name Server_VLAN
vlan 6
  name Other_VLAN
int e0/1
  description **** Link to QinQ ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 4,5,6
  switchport mode trunk
int e0/4
  switchport access vlan 4
int e0/5
  switchport access vlan 5
int e0/6
  switchport access vlan 6

Now if you attach a laptop to the same port on each side and assign each an IP address in the same subnet (say 10.1.1.10/24 and 10.1.1.11/24), they should be able to ping each other.
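Because the provider ports tunnel CDP (the l2protocol-tunnel cdp lines above), the two Inside switches should also see each other as if they were directly connected, which makes a handy end-to-end check from either Inside switch:

show cdp neighbors

If the far-side Inside switch shows up as a neighbour on e0/1, the tunnel is passing traffic.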

The above is an in-an-ideal-world scenario. Really you just want to be able to configure standard trunk links on your equipment and have the service provider take care of all the configuration of the QinQ tunnels. But sometimes what you get is slightly different. And what we got was this:

qinq-presented

Now our options were to either (A) purchase a new switch so we could replicate the layout in the first picture, or (B) find a way of having the QinQ settings and the trunk settings on the same switch. Option A would cost quite a bit of money, but is option B possible? Can a QinQ tunnel exist on the same switch as the trunk? The danger is that it won't work because of spanning-tree features like Loop Guard and BPDU Guard. But it's worth a shot, right?

It turns out that it can, and all it takes is one little ethernet cable, connected from port e0/1 to port e0/2. Port e0/10 is then used to link up to the provider switch at the other site.

The settings for the switches on the left-hand side remain the same, but what we have done is loop a cable from one port back into another port.

qinq-final

So now all of the config for both the right-hand side switches goes into the one switch (we used a 3750):

system mtu 1998
system mtu jumbo 9000
vlan 10
  name QinQ_VLAN
vlan 4
  name Client_VLAN
vlan 5
  name Server_VLAN
vlan 6
  name Other_VLAN
int e0/1
  description **** Link to QinQ ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 4,5,6
  switchport mode trunk
int e0/2
  description **** QinQ VLAN ****
  switchport access vlan 10
  switchport trunk encapsulation dot1q
  switchport mode dot1q-tunnel
  no keepalive
  l2protocol-tunnel cdp
  l2protocol-tunnel stp
int e0/4
  switchport access vlan 4
int e0/5
  switchport access vlan 5
int e0/6
  switchport access vlan 6
int e0/10
  description **** Provider to Provider link ****
  switchport trunk encapsulation dot1q
  switchport trunk allowed vlan 10
  switchport mode trunk

Traffic effectively comes in on e0/1, crosses the loopback cable into e0/2 where the outer VLAN 10 tag is added, and traverses to the other side via e0/10. Again we tested with ping and all was good.
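On the combined 3750 you can see both personalities at once (standard IOS commands, port numbering as above):

show interfaces trunk
show dot1q-tunnel

e0/1 and e0/10 should appear as trunks (carrying VLANs 4,5,6 and VLAN 10 respectively), and e0/2 as the dot1q-tunnel port.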

So it turns out that you can have the QinQ tunnel and the VLAN trunk living on the same switch.

Part two answers the question “Can we route different subnets across a QinQ link?”