I recently attended
a CWNA course taught by none other than Devin Akin, wireless guru and
co-founder of CWNP. During the course I was reminded about how attenuation can
become your best friend when building high density Wi-Fi networks.
During my time working for a WISP years ago, we had a particular site that started to run into problems. This site had a six-sector Motorola Canopy setup running in the unlicensed 5 GHz ISM band at the top of a hospital building. The site provided 360 degrees of coverage using six 60-degree sector APs, and it had worked well for quite some time. These APs used a proprietary TDMA radio technology and were GPS-synced to allow for efficient channel reuse. However, the APs could still hear non-Canopy, non-GPS-synced 5 GHz devices from any direction. One day the cluster of APs started to pick up significant interference from competing WISP deployments in the area using the same 5 GHz band. Signal-to-noise ratio dropped, and so did CPE performance.

Our solution was to take each sector AP off the tripod at the center of the six-story building and mount each of the 60-degree sectors (orange cylinders in the picture) below the top edge of the outer building walls. Here's a simple illustration:
This new setup let the building itself attenuate the other 5 GHz interference (blue cylinders in the picture). After a spectrum analysis was performed on each AP, we verified that interference had dropped significantly thanks to the building attenuation. SNR rose, and CPE performance improved.
The Motorola Canopy hardware (now Cambium Networks) doesn't use 802.11 protocols, but it operates in the same unlicensed frequency band and follows the same principles of RF propagation. In high-density Wi-Fi deployments, attenuation can become your best friend, just like the hospital building became ours once we relocated the sector APs. Obstacles such as walls, and the thickness and number of walls RF propagates through, can help reduce co-channel interference between access points AND clients that are reusing the same channel space in high-density Wi-Fi deployments.
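To put rough numbers on the idea, here's a quick sketch combining free-space path loss with a fixed per-wall loss. The 20 dBm transmit power, 60 m distance, and 5 dB-per-wall figure are illustrative assumptions; real wall losses vary widely by material and thickness.

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB, using the d-in-km / f-in-MHz form."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

def interference_rssi(tx_dbm, distance_m, freq_mhz, walls=0, wall_loss_db=5.0):
    """Received interference level after path loss plus per-wall attenuation."""
    return tx_dbm - fspl_db(distance_m, freq_mhz) - walls * wall_loss_db

# A hypothetical interferer 60 m away on channel 36 (5180 MHz) at 20 dBm EIRP:
clear = interference_rssi(20, 60, 5180)               # line of sight
shielded = interference_rssi(20, 60, 5180, walls=3)   # three walls in the path
print(f"{clear:.1f} dBm vs {shielded:.1f} dBm")  # every wall buys ~5 dB of SNR
```

Drop the interference floor by 15 dB and a client on the same channel has that much more SNR headroom before it has to defer or retransmit.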
An easy way we discussed to identify CCI/CCC during the CWNA course is to fire up your favorite wireless tool, like WiFi Explorer Pro. Grab a laptop with a radio of similar spec to your AP's (e.g., if your AP is 3x3, use a 3x3 client) and stand right underneath your AP. Here's what a scan in my house looks like right next to my AP, reading a -16 on channel 36.
Identify how many other APs your laptop can hear on the same channel as the AP you're standing by. In the example above, you can see that I can hear another AP using a primary channel of 36 at a -81. This AP, along with other nearby clients, could potentially cause co-channel contention. What you see may not be exactly what the AP hears, as every radio has variations in receive sensitivity, but it will help identify possible contention or interference.
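If your scanner can export its results, the same check is easy to script. A minimal sketch; the scan data and field layout here are made up for illustration, not WiFi Explorer Pro's actual export format, and the -85 dBm cutoff is an arbitrary threshold:

```python
# Each tuple: (ssid, primary_channel, rssi_dbm) from a hypothetical scan export
scan = [
    ("MyAP",       36, -16),
    ("Neighbor-1", 36, -81),
    ("Neighbor-2", 44, -70),
    ("Neighbor-3", 36, -92),
]

my_ssid, my_channel = "MyAP", 36

# Co-channel neighbors loud enough to contend: clients defer to any
# transmission they can decode, so even a weak -81 signal matters
contenders = [
    (ssid, rssi)
    for ssid, chan, rssi in scan
    if chan == my_channel and ssid != my_ssid and rssi > -85
]
print(contenders)  # [('Neighbor-1', -81)]
```

Anything that shows up in that list is a candidate for a channel change, a power reduction, or, as in the hospital story above, some deliberate attenuation.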
We should no longer build Wi-Fi for maximum distance in enterprise environments like we did years and years ago; we should now build for capacity and efficiency. So make sure you take advantage of those walls and other building obstacles when designing your next high-capacity Wi-Fi network where it makes sense.
Take a look at some of the following references to familiarize yourself with co-channel interference/contention:
I came across a scenario where a user had two data centers in different locations connecting back to the same ISP via BGP. Each data center advertised its own unique /24. However, the user also wanted each site to advertise the other DC's /24, but only as an inactive backup for failover. Since the user was connecting back to the same provider AS, I decided to test using the BGP MED (Multi Exit Discriminator) attribute to control which /24 the provider would prefer. The route with the lowest MED value is preferred.
We're using Extreme Networks Summit series switches, so I tested the configuration on EXOS 22.6 in my EXOS virtual lab. I applied a lower MED value to the /24 I wanted to prioritize at each primary site and a higher MED value to the backup /24 advertised from the opposite site.
On Summit series switches, you start with a policy file that matches the network address used in the BGP network statement, then apply a MED value to that match. EXOS uses vi for creating and editing these policy files. Here are the commands:
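As a rough sketch of what such a policy file might look like: the prefix, policy name, and neighbor address below are made up, and the exact command forms are from memory, so double-check them against the EXOS documentation for your release.

```
# edit policy dc1-med.pol          <- opens the policy file in vi on the switch
entry set-med {
    if match any {
        nlri 203.0.113.0/24;       # the /24 from the BGP network statement
    } then {
        med set 100;               # lower value wins at the provider end
        permit;
    }
}
```

The backup /24 at the opposite site would get an identical policy with a higher value, such as med set 200. The policy is then applied outbound toward the provider neighbor, along the lines of "configure bgp neighbor 192.0.2.1 route-policy out dc1-med".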
In my lab, I used two different AS numbers representing each DC, both connected back to the same provider AS. Therefore I also had to use the "enable bgp always-compare-med" command on the simulated provider's EXOS virtual switch, as MED values are not compared between routes received from different autonomous systems by default.
Of course, your provider has to be willing to accept MED. If not, you could also try prepending your AS number to the AS_PATH. This is another way to make a route less preferred. However, this method is not always supported, as some providers ignore duplicate ASes in the AS_PATH. The change is simple: just replace med set 100; with as-path "2020"; in the policy file. This example prepends AS 2020 and should be applied to the BGP network statement that serves as the backup route at the opposite DC location.
As a systems engineer for Extreme Networks, I like to get as much hands-on lab gear as I can within a reasonable budget. I have quite a large lab setup at home, as you can see.
One of my goals was to build something a bit more portable yet powerful enough to run ESXi with a few VMs. I also like things that don't draw much power sitting idle. My test lab configurations usually consist of different virtual network operating systems such as Extreme Networks EXOS, as well as Extreme virtual Wi-Fi controllers, Extreme Control VMs, and a host of other VMs. I usually don't generate large amounts of traffic or massive compute load in my lab; if I need more computing power, I move to my physical switches and higher-end Sandy Bridge-based Xeon servers.
The x86-Based ODROID H2
After a bit of hunting, I found a nice and small Supermicro rig, the SYS-E200. You can check out an awesome portable rig built using a few SYS-E200 units at tinkertry.com here. One of the downsides is that the SYS-E200 build can be quite expensive per unit, so I started to look for a smaller x86 system-on-chip (SoC) solution. SoCs tend to be cheaper and draw less power, but the downside is they aren't typically very powerful. I then came across the x86 SoC ODROID-H2 system by Hardkernel, which sports a Gemini Lake Intel CPU with VT-x virtualization support. This board looked like a perfect small, portable solution for running ESXi. I quickly placed a pre-order, as the board started at just $112. Here are the full specifications:
Intel quad-core J4105 processor (14 nm) with 4 MiB cache, up to 2.5 GHz (single thread) or 2.3 GHz (multi-thread)
This tiny x86 lab box has some impressive specifications. I didn’t need lots of CPU power, but with dual SATA ports, 32GB max RAM, M.2 support, VT-x, and 2 Gigabit interfaces I couldn’t pass this board up. I’m glad I preordered because the units sold out pretty quick. Here are the total build costs so far:
The power switch isn’t required as the board does have a small power and reset button that’s accessible through a small opening in the ODROID case, but I thought it would function a bit better with a larger power button. You could even run with NVMe or eMMC storage only and go with the smaller Type 2 case.
Once I received the board, I inserted the first 4GB RAM module (Patriot), and to my dismay, I couldn't get the unit to POST. Nothing I tried worked. I quickly posted to the ODROID forum and noticed other people were having issues with various RAM modules. I ordered a second RAM stick that was on the officially supported list, and the board finally POSTed into the BIOS. I would have gone with a larger-capacity module but didn't want to spend the extra cash in case the board was DOA. One thing is for sure: this unit is picky about RAM, so make sure you order from the official Hardkernel supported list. From reading the rest of the forums, a BIOS update is in the works to correct some of the RAM compatibility issues.
ODROID H2 and ESXi
The next task was to get ESXi installed. I did some preliminary research on the NICs, which are Realtek RTL8111G units. I quickly found that newer versions of ESXi don't have these drivers baked in, so I looked at how to add the drivers to an ESXi image. I followed a how-to article from sysadminstories.com. I started with the ESXi 6.5.0 Update 2 offline bundle, as I couldn't find a 6.7 offline bundle for the free edition of ESXi.
Once you have a USB ESXi image ready, plug it in, set the BIOS boot order to USB, and install ESXi. I decided to install a SATA SSD in my ODROID, as I had an extra 64GB drive lying around, which saved on the build cost. I also had to manually modify precheck.py during the ESXi installation, since the system detected less than 4GB of RAM (my other RAM module wasn't working). Here's another article, from simon-simonnaes.blogspot.com, that shows the steps.
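The gist of the precheck.py tweak is lowering the installer's minimum-memory constant. The line below is what the check looked like in the builds I've seen written up; the exact constant name and value can differ between ESXi releases, so treat this as a sketch:

```python
SIZE_MiB = 1024 * 1024  # defined earlier in precheck.py

# Original check: the installer refuses to run with less than roughly 4 GB
MEM_MIN_SIZE = (4 * 1024 - 32) * SIZE_MiB

# Lowered threshold so a machine reporting ~2 GB passes the pre-flight check
MEM_MIN_SIZE = (2 * 1024 - 32) * SIZE_MiB
```

ESXi will still be cramped at that size, but it's enough to finish the install until a working, larger RAM module arrives.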
I then loaded up one of my favorite virtual network operating systems, the Extreme Networks virtual_EXOS ISO from the Extreme Networks GitHub page. If you get the following error during the EXOS installation, "mount: mounting /dev/hdc on /mnt/a failed: No such device or address", make sure your VM's CD-ROM drive is on IDE controller 1 and set to Master in VMware. I assigned the first Realtek NIC (on vSwitch0) to port 2 of EXOS and the second Realtek NIC (on a new vSwitch) to port 3. I'm now running a two-port virtual_EXOS system that bridges traffic across both ODROID-H2 NICs.
With three H2s, I could also try to build an HCI demo lab, possibly running Nutanix CE. A FreeNAS build would also be a cool project, though you'd have to check driver support. As of the end of December 2018, the ODROID H2 is still on back order. Overall, this was a pretty fun build.
Since my last post, quite a few life changes have occurred. I recently accepted a position with Extreme Networks as a Senior Systems Engineer and moved from Northwest Indiana to North Carolina. How did that happen after almost eight years in my previous role and fresh into an interim position? Well, my wife and I had been thinking about relocating for quite some time. We had visited numerous warmer states over the last couple of years and ended up revisiting Raleigh, North Carolina quite a few times. The only thing stopping us from making the jump was a job, of course. However, I didn't want to apply for just any IT job, so I spent quite some time searching and applying to specific positions. One of those happened to be with Extreme Networks. At Purdue Northwest, we were a legacy Enterasys customer that transitioned to Extreme Networks after its acquisition of Enterasys. I'd become very familiar with Extreme Networks products and had showcased how we used them at PNW numerous times. I loved working with Extreme products and thought it would be even better working for Extreme Networks.
I'm now two months into my new position and am enjoying it very much. We're settling into the area and looking for a local church home. The weirdest transition is that the kids are now on a year-round school schedule, but we're getting used to it. I work from home and am in our main office once a week. My home lab is growing fast, and getting to meet new and potential Extreme Networks customers is fun. Purdue Northwest was great; however, I felt very comfortable leaving the University in the hands of some great folks. I know my previous team will do a fantastic job.
Now I'm focused more than ever on my passion for networking. I'm learning a great deal and working with some of the brightest minds in the networking community. I'm sure my future posts will dive back down the technical track, but I'll make sure to share the culture side of things as well. I'd like to thank God first and foremost, along with my wife and family for all the support. I'd also like to thank everyone else who helped me get to where I am.
If you make your way into the world of networking, you’re bound to come across a decision path on how you should handle network expansion. Should your default method always be to extend or stretch your layer 2 bridge domain? The root of the answer can be found when discussing the why. Let’s take a look at some of the use cases I’ve come across within enterprise network environments:
Device requirement: Device "A" needs to communicate with device "B," and those two devices are "required" to live on the same layer 2 broadcast domain. I haven't come across any new devices or applications that fall into that category, and it's 2018. However, some enterprise organizations may still have legacy devices, or poorly engineered devices/applications with no foreseeable updates, that do.
Customer demand: A customer you service in area "A" needs network services expanded to area "B," and they want their equipment to stay on the same subnet. Cough, cough, point-of-sale systems. I believe modern POS systems can talk via IP across different subnets, but this use case still comes up.
Data center disaster recovery: Or should I say "specific" DR models, because not all DC DR needs to be built around a hard layer 2 extension requirement. Short-sighted applications will list layer 2 extension as a requirement: someone insists that a VM pinned to a specific IP move from region "A" to region "B" with the IP staying the same. What!?! Let's think of better ways to do this: DNS, automated IP provisioning? Still, it remains a possible use case.
Ease of use: If you're uncomfortable with routing protocols, it may seem easier to span a VLAN across the core of the network. Less IP provisioning, fewer ACLs, potentially fewer firewall rules, and less management of those dreaded IP routing protocols. However, this is something we control, so it's OK to take time to research and learn which routing protocol would work best for your environment. Don't let a lack of information drive your operation.
I can confirm that extending hundreds of VLANs through your core, along with multiple instances of STP and a sprinkle of HSRP, is NOT scalable. You will run into issues at some point. Others would say, "but my superior wants things done yesterday." That's another topic, maybe worth a future post, but hang in there.
You're getting the point. There are better ways to accomplish the listed use cases, but I understand that sometimes you may not be able to work with vendor X, customer Y, or technician Z to remove the necessity of layer 2 extension. Maybe your options are limited, but you're a rockstar network admin/engineer, so can we design around the "end-user requirements"? If you must, you have quite a few options to extend layer 2 through the use of overlays. There will be some added complexity, but overlays may be worth considering instead of spanning layer 2 segments across the core.
OK, so what's this overlay stuff? Say you designed your network with proper layer 2 segmentation along with a layer 3 routing protocol. Everything is working great: your layer 2 fault domains are isolated behind routed boundaries, you don't have STP running across your core, and you're taking advantage of multipath layer 3 routing. All is wonderful in the world. You then get a "hard" requirement, maybe one listed above, to extend layer 2. Do you go back and span a VLAN through your core? No, overlays to the rescue! Overlays have been around for quite some time; think GRE, pseudowires, etc. Some of the latest overlays you may have heard of are VXLAN and EVPN. Basically, you encapsulate traffic from one segment of your network and forward it across the existing layer 3 network. The traffic is de-encapsulated at an endpoint, and voila, you've extended layer 2 across your layer 3 network. I know, easier said than done. There are plenty of resources out there on how to set up overlay protocols, so I won't go into the details.
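To make the encapsulation idea concrete, here's a small sketch of the VXLAN framing from RFC 7348: an 8-byte header carrying a 24-bit VNI, prepended to the inner Ethernet frame and carried inside UDP (destination port 4789) across the layer 3 core. The frame bytes below are placeholders, not a real capture:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags, reserved bytes, 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08  # 'I' bit set: the VNI field is valid
    return struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)

def parse_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header, as the far-end VTEP would."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", header)
    assert flags & 0x08, "I flag must be set"
    return int.from_bytes(vni_bytes, "big")

frame = b"..."                        # placeholder for an inner Ethernet frame
packet = vxlan_header(100) + frame    # rides inside UDP/4789 across the L3 core
```

The VNI plays the role the VLAN ID played before, except it's 24 bits wide and the core only ever sees routed UDP packets.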
Now let's say you want to build your network from the ground up with extension services in mind. This would give you a robust layer 2 transport natively built into your network. Extreme Networks has something called Fabric Connect, technology acquired through its Avaya networking acquisition. Fabric Connect is designed around Shortest Path Bridging MAC (SPBM) as the forwarding plane and IS-IS as the control plane. You forward traffic not by IP routes but by an I-SID, or Service Instance Identifier. You can create a layer 2 virtual service network (VSN) that's more "circuit" based. The core of your network becomes a Fabric Connect mesh, and from an operational perspective, you configure services at the edge. You no longer have to restrict devices to only certain parts of your network. Extreme Networks' claim is that you get something like MPLS (however different) without the complexity.
Fabric Connect gets me thinking about the Locator/ID Separation Protocol (LISP), which focuses on separating a device's location (think IP address) from its identity (think IP address again). Once location and identity are separated, you have two namespaces: in LISP, the endpoint identifier (EID) and the routing locator (RLOC). What you then build is a mapping architecture, similar to DNS mapping a name to an IP, for determining forwarding. In fact, Cisco Campus Fabric uses LISP and VXLAN to create another overlay solution that allows client mobility across a network.
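The two-namespace idea can be boiled down to a lookup table. This toy sketch uses a plain dict where a real deployment uses a map-server, and all the addresses are illustrative:

```python
# A toy LISP-style map: endpoint identifier (EID) -> current locator (RLOC).
mapping = {
    "10.1.1.5": "192.0.2.1",    # EID currently behind the DC-A border router
    "10.1.1.6": "198.51.100.1", # same EID subnet, different location
}

def rloc_for(eid: str) -> str:
    """Resolve where to tunnel traffic for an EID, analogous to a DNS lookup."""
    return mapping[eid]

# When 10.1.1.5 moves to DC-B, only its mapping entry changes; the EID
# (and every session bound to that IP) stays exactly the same:
mapping["10.1.1.5"] = "198.51.100.1"
print(rloc_for("10.1.1.5"))  # 198.51.100.1
```

That last step is the whole appeal: mobility becomes a map update instead of a stretched VLAN.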
The next time you have the opportunity to design or redesign a network, take time to study the why before you implement the how. And most importantly have fun!