Here’s a video post about an unusual type of building construction that few will probably encounter, so I thought it might be interesting. It also shows the perils of making assumptions about building construction when working on Wi-Fi designs.
Recently my friend and former colleague 😢 Mac_wifi called on others in the community to publish their CWNE application essays as a resource, benchmark and encouragement to others seeking to achieve the CWNE certification. I think this is a good idea. Nobody had asked me to write an essay in over 20 years and I wasn’t quite sure where to start. Hopefully a few of the current crop of CWNEs sharing their submissions will help guide others.
Here’s a link to that blog post
I agree with the CWNP’s approach to their expert level. If you’re used to sitting multiple-choice exams through Pearson VUE for your IT certs, the need for an essay might seem a bit strange. Indeed many certification streams award the expert level automatically once enough exams have been passed. There’s nothing wrong with that approach, but CWNP chose to take a different route for a good reason.
To become a CWNE, in addition to passing the exams, you need at least five years’ experience working in enterprise wireless networking, people prepared to endorse you, and three essays to demonstrate that knowledge and experience.
What that means is a CWNE knows their stuff, has walked the walk and … well you know how the rest of that goes. It means a CWNE has experience that goes beyond the theoretical knowledge one can gather by studying for certification.
Some advice I’d offer anyone aiming for CWNE certification is to gather real world examples of solutions delivered and problems solved for your essays. In the rushing around of daily life it’s easy to fix a problem, move on to the next thing and then forget about the details. It’s also worth remembering your essays don’t need to be a technical tour de force of excellence, proving your capabilities as a Wi-Fi superhuman. We all know the real world is full of compromises. Name them and explain them. To my mind, an essay that knowingly describes a flawed implementation speaks more of experience than a textbook “perfect” network design, for example.
So those essays…
Something that happened in November 2020, but slipped past me until today, is that the FCC opened up some of the UNII-4 band for unlicensed usage, which includes Wi-Fi.
It isn’t much – only 45MHz – but because of where it sits, just beyond UNII-3, it’s really useful extra bandwidth.
UNII-3 is where channels 149-165 live, with the band providing 5 x 20MHz channels, 2 x 40MHz and therefore 1 x 80MHz. Opening up just part of the UNII-4 band nicely rounds things off at the top of the existing 5GHz Wi-Fi spectrum, allowing another 3 x 20MHz channels (169, 173 & 177), 2 x 40MHz, 1 x 80MHz and therefore making a new 160MHz channel possible.
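To make the channel arithmetic concrete, here’s a quick sketch. It works on channel numbers only (a bonded channel’s number is the average of its 20MHz members); the function name is mine, not from any standard.

```python
# Sketch: how adding UNII-4 channels 169/173/177 rounds out the top of 5GHz.

def bonded(chans, width_in_20s):
    """Group consecutive 20MHz channels into wider bonded channels.

    A bonded channel's number is the average of its members
    (e.g. 149 + 153 -> 40MHz channel 151).
    """
    groups = [chans[i:i + width_in_20s]
              for i in range(0, len(chans) - width_in_20s + 1, width_in_20s)]
    return [sum(g) // len(g) for g in groups]

unii3 = [149, 153, 157, 161, 165]
unii3_plus_4 = unii3 + [169, 173, 177]

print(bonded(unii3, 2))         # [151, 159] - channel 165 is left over
print(bonded(unii3_plus_4, 2))  # [151, 159, 167, 175]
print(bonded(unii3_plus_4, 4))  # [155, 171] - a second 80MHz channel
print(bonded(unii3_plus_4, 8))  # [163] - the new 160MHz channel
```

With only UNII-3, channel 165 can’t bond with anything; the three extra channels turn it into part of a second 40/80MHz block and a full 160MHz channel.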
This is, of course, excellent news. We love more channel capacity especially as it’s likely many chipsets will already support these channels so if we’re lucky extra capacity can be made available with a firmware upgrade.
To bring reality crashing in, it’s worth noting this is the FCC, so US only… I don’t believe there’s been any word on what European and UK regulators are going to do. Hopefully they’ll follow suit, but we’ll have to wait and see.
Also, whilst extending bands is a great way to add capacity with existing equipment it raises the spectre of compatibility problems. The lovely enterprise Wi-Fi network you manage may well support these channels but if client devices do not then you can’t use them.
Recently I’ve been on a bit of a certification binge, gathering ACCP, ACMP, and ACSP from Aruba, CCSA from Check Point and I was very excited to achieve the expert level certification from CWNP. I am now CWNE #420 (yes I know….)
For a long time I considered a lot of IT certifications to be… a bit thin, maybe not worth the paper they’re written on…. that sort of thing. I thought I had good reasons for this but now I think I misunderstood what that certification meant. So what do I think now?
Anyone in the UK is probably familiar with the MOT: the annual roadworthiness test that cars over three years old are subjected to. A well serviced car should always sail through, but just because your car passes the MOT doesn’t mean it’s actually roadworthy, or even safe.
What you’re getting is a snapshot of a moment in time when everything appeared to be ok. It doesn’t mean you can just ignore everything for another year. If the brakes don’t seem right, you’ll get ’em checked.
Before I fall too far down this analogy, what’s my point?
Most IT certifications are awarded after an exam that tests knowledge and understanding as best as possible. What’s very hard to test is an individual’s ability to apply that knowledge…. experience.
I have met people who were well qualified, with all the appropriate IT certifications who, despite this, just couldn’t fix the problem, design the thing, make it work. Just like the MOT, having the certification alone doesn’t guarantee anything.
Certification plus experience is a great combination. Quite a few certification paths suggest you should have a few years’ experience under your belt before you start. Someone who has been working in a field for a few years and has achieved the relevant certification shows much more than a newbie who’s crammed for the exam.
Certification is a great way to learn a product. You might have been working with Aruba ClearPass for several years, as I have, but maybe you’ve never touched OnGuard. To pass the exam you need to learn about the whole product, even the bits you don’t use day to day.
Certification without experience still has value, but less value imho. Some people are just good at exams. They can cram. They can learn the correct answers. They can remember enough, for long enough, to get through. That doesn’t prove they can take that knowledge and apply it appropriately, but it might just give the edge if they’re up against someone with neither experience nor certs. At least it shows some drive and interest.
Just as the MOT certificate hopefully means the car is actually roadworthy, IT certification should tell you someone understands the principles and concepts of the product. If it turns out they don’t, you’ll probably discover that pretty quickly.
For many years I was looking after too many things to really get to grips with everything. I became very good at working out how to make something work, doing that, and then forgetting about it.
Personally I’ve found studying for certification exams to be a great way to focus the mind. To dig into products that previously I might have skimmed over, things I’ve never understood but could make work I now have a much deeper knowledge of. That’s important when they stop working…. or you need to justify to someone why you should be doing this thing and being paid well for it.
That’s now what I consider the IT certs I’ve achieved to be all about. They’re a framework for learning something focused on a technology or specific product, and then validating that learning.
The further down a track you find interesting, the more rewarding it is to get the achievements.
As everyone who’s been granted CWNE status will tell you, it represents a lot of work – not just learning to pass exams but actively working in the field and gaining experience.
Quite apart from anything else the gentle ego massage of passing an exam doesn’t get old.
How a factory default Aruba Universal AP discovers just what it’s supposed to be in the world is a source of some confusion, not least because the process changed (I believe around AOS 8.4). For a thorough explanation of how things used to work see here.
Put simply, all current UAPs (shipped with AOS 8.6 or 8.7 at the time of writing) take a cloud-first approach, which flips on its head what Aruba APs always used to do.
It’s perhaps worth explaining what a Universal AP actually is. Aruba UAPs can operate in Instant or Campus mode. A campus AP (CAP) talks to a Mobility Controller. An Instant AP (IAP) can be standalone or operate as part of an IAP cluster with one AP taking the role of Virtual Controller. It used to be that each model of AP had separate CAP and IAP versions. An IAP could be converted to CAP mode, but a CAP could only ever be a CAP. There was some weird geopolitical reason for this that eventually went away, hence the more sensible Universal AP became the norm.
So what happens?
A factory default UAP follows this logic:
– Attempt to contact Aruba Activate
– Connect to AirWave if a DHCP option/DNS entry exists
– Join a virtual controller on the local subnet
– Listen for a virtual controller (layer 2)
– Look for a mobility controller (layer 2 ADP/DHCP options/DNS)
– Broadcast the setmeup SSID
– Reboot after 15 minutes
Of course if the AP receives config at any of these steps, it does what it’s told and doesn’t proceed through the discovery logic.
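The list above is essentially a first-match-wins loop, which can be sketched like this (step and function names are my own illustration, not Aruba’s implementation):

```python
# Minimal sketch of the factory-default UAP discovery order described above.

DISCOVERY_STEPS = [
    "contact_activate",               # cloud first: Activate rules win
    "contact_airwave",                # DHCP option / DNS entry
    "join_virtual_controller",        # VC on the local subnet
    "listen_for_virtual_controller",  # layer 2
    "find_mobility_controller",       # ADP / DHCP options / DNS - near the bottom now
    "broadcast_setmeup_ssid",
]

def discover(check):
    """Try each step in order; stop at the first one that yields config.

    `check` is a callable simulating one discovery attempt: it returns a
    config dict on success or None on failure.
    """
    for step in DISCOVERY_STEPS:
        config = check(step)
        if config:
            return step, config
    return "reboot", None  # nothing answered within ~15 minutes

# Example: only mobility controller discovery succeeds.
step, cfg = discover(
    lambda s: {"controller": "10.1.1.10"} if s == "find_mobility_controller" else None
)
print(step)  # find_mobility_controller
```

The key behavioural point is visible in the ordering: an answer from Activate (or a stray VC on the subnet) short-circuits everything below it.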
What’s changed in this list from older software versions (pre-8.4) is that the cloud wins. If your UAP can connect to Activate, and there’s a rule there telling it what to do, that’s what it does. If that rule is incorrect you could see some unexpected behaviour.
The other significant change is that looking for a mobility controller is now pretty much bottom of the list. You might be used to an AP looking for a mobility controller as the first thing it does, which indeed is what used to happen, but no longer. So if you run a controller environment and manage to end up with a VC running on an AP subnet, you’ll find other new APs form an IAP cluster with it and nothing appears on the mobility controller. Once this happens it’s possible to convert all the APs to campus mode, pointing them at a mobility controller, so this isn’t too painful.
This cloud first approach makes a lot of sense and when it all works smoothly Activate rules can make ZTP provisioning of a new UAP very smooth.
In many deployments it isn’t unusual to have no internet access for APs, especially campus APs. In this scenario, to avoid odd behaviour, make sure your AP subnet is configured with the correct DHCP options or DNS entry for the mobility controller.
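As an illustration, controller discovery via DHCP might look something like this in ISC dhcpd. The IP address is invented and option formats vary between vendors and AOS versions, so treat this as a sketch and check Aruba’s current documentation before deploying:

```
# Hypothetical dhcpd.conf fragment for an AP subnet with no internet access.
# Matches the vendor class the AP sends and returns the controller IP.
class "ArubaAP" {
    match if option vendor-class-identifier = "ArubaAP";
    # Option 43: mobility controller IP for matching APs (address invented)
    option vendor-encapsulated-options "10.1.1.10";
}
```

The DNS alternative is an A record for the controller discovery name (typically aruba-master) in the domain the APs receive via DHCP.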
If you find a new UAP doesn’t appear to be working, e.g. it hasn’t appeared on your mobility controllers, there’s every possibility that something else along the discovery logic has taken precedence.
Finally it’s worth noting you can manually set controller options from the apboot prompt by connecting to the AP console and interrupting the boot sequence. Any static configuration is acted on first. This approach doesn’t scale of course, but it’s useful for testing and for building a small lab environment alongside an active system.
You’ve just upgraded the network with the latest Wi-Fi 6 APs – these promise to be faster, with lower latency and all round better for everyone and everything… great! But…. there are rumblings.
During your testing you found a number of the corporate laptops used an Intel Wi-Fi NIC whose driver had never been updated… these hit a well known bug that causes the driver to ignore Wi-Fi 6 enabled BSSIDs. No problem: because you did the testing, the issue was found and a new driver was deployed.
Despite all your efforts, a number of helpdesk calls have come in from users complaining they can’t connect to the network any more. Some of them can’t even see the network…. Hmmm.
Turns out these machines haven’t had the new driver deployed by Group Policy because they’re not part of the domain. They’re BYOD, they have the same ancient driver and they won’t play ball with the 802.11ax network you’ve just deployed.
That’s not all. The old network didn’t have any of the roaming enhancements enabled and, with all the change, it seemed the perfect opportunity to enable them: 802.11k/v/r all switched on.
Some of the misbehaving laptops can connect to the network, sometimes, but things are really unreliable. These also have an older Intel 7260 Wi-Fi chipset but updating the driver doesn’t help.
You’ve been struck by another Intel bug, where the presence of the 802.11k Quiet information element upsets things and they break. This time it’s a hardware problem.
So do you switch off 802.11ax and 802.11k on any SSIDs used for BYOD or do you say “tough, your old stuff might not work any more”?
That, of course, is a policy matter.
When I encountered both these issues in a recent deployment, the decision was to take the path of least resistance and disable the functions. This means the network can’t benefit from the performance and capacity gains offered by Wi-Fi 6.
Not having any control over BYOD clients means they may end up dictating terms for the network. That might be fine, if it fits the policy, but in this scenario it was done because it was easiest, it made the problem go away. If that decision isn’t revisited later the network will always be operating below its potential.
If you know any enterprise networking you’ll have come across AAA – Authentication, Authorization and Accounting – the cornerstone of network security, ensuring the client can be authenticated so not just anyone can connect. Often that first A is as far as it goes.
So what of Authorization?
This is what provides a way of making your Wi-Fi more efficient. If you have corporate devices, BYOD and IoT, and they currently have three separate SSIDs (not uncommon), you can put all three onto the same SSID, reducing management traffic, and use the Authorization part of AAA to determine what network access each client should have.
This might use Active Directory group membership to determine which VLAN a user gets dropped into, or what ACLs are applied. Role Based Access Control (RBAC) is the term used for this, and in many systems both of these are part of it.
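The mapping itself is conceptually simple. Here’s a hedged sketch of the Authorization step; the group names, role names, VLAN IDs and ACL names are all invented examples, not from any real deployment:

```python
# Sketch: AD group membership -> network role (VLAN + ACL), i.e. the
# Authorization in AAA. All names and numbers below are illustrative.

ROLE_MAP = {
    "CN=Staff-Corp":  {"role": "corporate", "vlan": 10, "acl": "allow-all"},
    "CN=Staff-BYOD":  {"role": "byod",      "vlan": 20, "acl": "internet-only"},
    "CN=IoT-Devices": {"role": "iot",       "vlan": 30, "acl": "iot-servers-only"},
}

def authorize(ad_groups):
    """Return the first matching role, or a restrictive default."""
    for group in ad_groups:
        if group in ROLE_MAP:
            return ROLE_MAP[group]
    # Unknown clients get the most restrictive treatment.
    return {"role": "guest", "vlan": 99, "acl": "captive-portal"}

print(authorize(["CN=Staff-BYOD"]))  # byod role, VLAN 20, internet-only ACL
```

In a real RADIUS server this lookup is a policy rule returning attributes (VLAN, filter-id, vendor role) in the Access-Accept, but the logic is the same: one SSID, many outcomes.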
Accounting is where you see what the user actually did. In practice this usually takes the form of when they connected to the network, how long for and how much data they transferred.
Here’s how it comes together in an example recent proof of concept for a customer:
Multiple departments are in a building and it’s necessary to provide security, keeping traffic from each dept separate. For regulatory purposes it’s necessary to assign network services costs to departments, but this has to be based on real world information such as bandwidth use. Finally “we’re all one company so we don’t want to set up separate networks”.
Most networks have pretty much everything in place to do this, it’s just a question of whether all the dots are joined.
A RADIUS server (Aruba ClearPass in this case, but something like Cisco ISE could be used) is already used for 802.1X authentication. Users from different AD groups can be assigned different roles, placing them in their department’s VLAN or simply applying ACLs specifying the access the client should have. The APs or controller are configured to return RADIUS accounting, which allows the administrator to accurately determine the data traffic used for each connection.
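The cost-assignment part then reduces to summing accounting records per department. A hedged sketch, using the standard RADIUS accounting attribute names (the records themselves, and the department field derived from the user’s role, are invented):

```python
# Sketch: charging departments by real bandwidth use from RADIUS accounting.
# Acct-Input-Octets / Acct-Output-Octets are standard RADIUS attributes;
# everything else here is an invented example.

from collections import defaultdict

def usage_by_department(records):
    """Total octets (in + out) per department from accounting records."""
    totals = defaultdict(int)
    for r in records:
        totals[r["department"]] += r["Acct-Input-Octets"] + r["Acct-Output-Octets"]
    return dict(totals)

records = [
    {"department": "Finance", "Acct-Input-Octets": 1_000_000, "Acct-Output-Octets": 250_000},
    {"department": "HR",      "Acct-Input-Octets": 400_000,   "Acct-Output-Octets": 100_000},
    {"department": "Finance", "Acct-Input-Octets": 2_000_000, "Acct-Output-Octets": 500_000},
]
print(usage_by_department(records))  # {'Finance': 3750000, 'HR': 500000}
```

In practice the records would come from the RADIUS accounting log or database rather than a hard-coded list, but the aggregation is exactly this.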
Everything needed to do this has been around for quite a long time, but an awful lot of networks out there still have one SSID per VLAN, an SSID for each client type, an SSID for each day of the week. There are much better ways to do this.
At the risk of sounding ‘arty-farty’ and woowoo, it’s my opinion that to be human is to be creative, and any time we dismiss what we, or someone else, does as “just technical” some essential human quality is being denied. Let me try and make some sense of that.
If you spend any time reading the tweeted thoughts of louder members of the Wi-Fi expert community, you’ll realise they don’t all get on. Sometimes this is the result of personality clashes or political differences, but often there’s strong disagreement about how one should implement Wi-Fi. So if two highly experienced engineers disagree and sling the mud over how Wi-Fi works, what chance do the rest of us mortals have of understanding it? Is the real problem we’ve fallen for the lie that working in an engineering role is not creative?
The Aruba AP-387 was launched a little while ago now, I first saw it demonstrated at Aruba Atmosphere 2018 in Croatia. It’s an AP designed for point to point links with 802.11ac 5GHz and 802.11ad 60GHz radios. The aggregate RF throughput is in the region of 2.5Gbps which means it can maintain duplex gigabit wired speeds and testing has shown this to be reliable up to the specified 400 metres. Should conditions cause a deterioration of the 60GHz link, the 5GHz link will continue to provide connectivity, albeit with lower throughput.
Installation is relatively straightforward. The APs don’t need super accurate aiming, as the 802.11ad spec includes electronic alignment of the antenna. This can also cope with wind motion, though the more stable and well aimed the units, the better.
This install was between a sports facility building and a remote cabin beside the cycle track. The cabin had been without connectivity since construction as, for reasons nobody can explain, no data ducting was installed alongside the power feed. Attempts to drag fibre through the power duct failed and costs for ducting and fibre installation were priced at around £25k.
The AP-387 is what Aruba refer to as a unified AP so a pair can be standalone and managed as Instant APs or they can work as part of an Aruba controller environment – as was the case here. The link uses Aruba’s mesh configuration with one mesh portal and one mesh point.
This link was configured to use UNII-3 Band C channels on the 5GHz radio as the institution had an Ofcom license for outdoor use. (Note these channels are now available for license-free use indoors only at low power, as well as outdoors at high power with a license…. not that that gets confusing at all.)
The initial setup on the bench was very straightforward. The installation was handed over to experienced cabling contractors with no specific wireless expertise or specialist equipment.
And it just worked.
The AP behaves as a bridge by default, passing all tagged VLANs across the link. This network uses the same management VLAN for APs and switches, so the only deviation from standard edge switch config at the remote end was to untag this management VLAN on the uplink port.
The link length was approx 190 metres and I kept an eye on it during some quite mixed weather using a regular automated speed test. No performance drop was observed during heavy rain or fog.
This was a great result. The cost, including installation, was a little over 10% of the cabling price estimate.
Two points to note. The mounting brackets really require pole mounting as there is no horizontal adjustment available. Once in operation there’s very little information available about the state of the 60GHz link.
A little while ago I read some fairly barbed comments from someone about the pointlessness and futility of using an Ekahau Sidekick for wireless surveys.
The argument went something along the lines of: because the Sidekick’s Wi-Fi interfaces and antennas are not the same as the devices actually using the network, the reported results are meaningless. The only way to survey realistically is to use the device the network is designed for.
These ideas weren’t really presented as part of a discussion, more a proclamation that anyone carrying out surveys using a Sidekick is producing misleading results. It’s quite the claim but at first glance the logic is hard to argue with, so does this position have any merit?
My immediate reaction, based mainly on my own experience, was “not really”.
It’s true a network that looks good to the Sidekick can be problematic for a client like an iPhone 5, and this is entirely down to the high quality of the Sidekick antennas, especially relative to the design-compromised antenna found in a smartphone.
When analysing survey results in Ekahau an offset can be added to compensate for this. Working on a university campus I’ve always used -10dB as this fits with the previously mentioned iPhone – the most common client.
What’s more, because Wi-Fi chipsets are not calibrated there can be significant variation between devices of the same type. Three iPhone 6 handsets will likely give you three different received signal levels.
So how do you know whether the client you’re using to carry out a representative test of the network is good, average or a poorly performing example? You can take multiple devices and take an average, or take the worst performing example and use that… but you still don’t know whether there’s another one that’s even worse.
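One pragmatic way to turn those comparisons into a number is to measure several real clients against the Sidekick at the same spot and derive the offset from the differences. A sketch (the dBm figures are invented for illustration; this is my own approach, not an Ekahau feature):

```python
# Sketch: deriving a survey offset from Sidekick vs client measurements
# taken at the same location. All readings below are invented examples.

def survey_offset(sidekick_dbm, client_readings_dbm, worst_case=True):
    """Offset (dB) to apply to Sidekick data so it reflects real clients.

    worst_case=True uses the poorest client (most negative difference);
    otherwise the average difference across the sampled devices.
    """
    diffs = [client - sidekick_dbm for client in client_readings_dbm]
    return min(diffs) if worst_case else sum(diffs) / len(diffs)

# Sidekick sees -55dBm; three handsets of the same model see -63, -65, -67.
print(survey_offset(-55, [-63, -65, -67]))                    # -12 (worst case)
print(survey_offset(-55, [-63, -65, -67], worst_case=False))  # -10.0 (average)
```

Designing to the worst sampled device is conservative, but as the text says, it still can’t guarantee there isn’t an even worse example out in the field.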
In other words, how do you apply any rigour to your surveying if nothing is accurate and devices vary? But it gets worse.
Take a look at this post by the brilliant WifiNigel. Nigel has demonstrated (with a nice little rig and measurements and everything) just how much the orientation of a device changes the received signal strength.
What Nigel’s work demonstrates is just how important it is to get your device offset right. If the network is designed for a VoIP client, it’s important to test that device while it’s just off vertical, gripped in a hand and held against a, presumably, human ear… not sitting horizontally on a desk at hip level…
Whilst the Sidekick is not calibrated to an accuracy any RF lab would find acceptable, it is tuned to a reference point, so ought to be more reliable than the network clients.
It is key to know what a realistic device offset is, and as far as possible that needs to be based on how devices are actually used, not on a device sitting on a desk in a different orientation.