Wi-Fi Capacity… Just what do you need?

Prompted by Peter Mackenzie’s excellent talk (of course it was) at WLPC 2022, titled “It is Impossible to Calculate Wi-Fi Capacity”, I wanted to share some real-world experience. I’ll also link to the presentation at the bottom of this page – you should watch it if you haven’t already.

In this talk Peter explores what we mean by capacity planning and amusingly pokes fun at the results of blindly following certain assumptions.

There’s also a look at some fascinating data from Juniper Mist showing real-world throughput of all Mist APs within a particular time frame. It’s a huge data set and provides compelling evidence to back up what many of us have long known: you don’t need the capacity you think you do, and the devices/bandwidth-per-person calculations are usually garbage.

I have walked into a university library building, full to bursting with students working towards their Easter exams. Every desk is full, with beanbags all over the floor in the larger rooms to provide the physical capacity for everyone who needs to be in there. Everyone is on the Wi-Fi (ok, not everyone, but most people) with a laptop and a smartphone or tablet, and plenty of them are streaming music.

I can’t remember the overall numbers, but there were over 50 clients on most APs. In those circumstances the average throughput on an AP would climb to something like 5Mb/s.
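
To put that in perspective, here’s a quick back-of-envelope sketch of the average per-client demand, using the rough figures above rather than precise measurements:

```python
# Rough back-of-envelope: average per-client demand on a busy library AP.
# The 50 clients and 5Mb/s figures are the approximate ones quoted above.
clients_per_ap = 50          # associated clients on a busy AP
ap_throughput_mbps = 5       # average throughput seen on that AP, Mb/s

avg_per_client_mbps = ap_throughput_mbps / clients_per_ap
print(f"Average demand per client: {avg_per_client_mbps:.2f} Mb/s")  # ~0.10 Mb/s
```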

Similarly, with a collection of accommodation buildings of about 800 rooms – lots of gaming, Netflix and generally high-bandwidth stuff going on – the uplinks from the distribution router would almost never trouble a gigabit.
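
The same sort of rough arithmetic applies here, assuming the ~800 rooms and the gigabit uplink figure above:

```python
# Rough back-of-envelope: average per-room demand across the accommodation buildings.
# 800 rooms and a ~1Gb/s uplink ceiling are the approximate figures quoted above.
rooms = 800
uplink_mbps = 1000           # a 1Gb/s uplink that was rarely troubled

avg_per_room_mbps = uplink_mbps / rooms
print(f"Average demand per room at full uplink: {avg_per_room_mbps:.2f} Mb/s")  # ~1.25 Mb/s
```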

These are averages, of course, which is how we tend to look at enterprise networks for capacity planning. We’re interested in trends and, on the distribution side, in sitting up and taking notice if we get near 70% link utilization – perhaps lower in many cases.
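
As a sketch of what that trend-watching looks like in practice, here’s a minimal example that turns two interface byte-counter samples into a utilization figure and flags the 70% mark. The counter values and the 10Gb/s link speed are made-up numbers for illustration:

```python
# Minimal sketch: turn two interface byte-counter samples into average link
# utilization over a polling interval and flag the 70% watch threshold.
# All values here are made up for illustration.
link_speed_bps = 10_000_000_000      # 10Gb/s link
sample_interval_s = 300              # 5-minute polling interval

bytes_t0 = 1_250_000_000_000         # counter at first sample (hypothetical)
bytes_t1 = 1_520_000_000_000         # counter at second sample (hypothetical)

bits_transferred = (bytes_t1 - bytes_t0) * 8
utilization = bits_transferred / (link_speed_bps * sample_interval_s)

print(f"Average utilization over interval: {utilization:.1%}")
if utilization >= 0.70:
    print("Time to sit up and take notice – start planning the upgrade.")
```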

In fact, the wired network is where this gets really interesting. This particular campus at one time had a 1Gb link between two very busy routing switches that spent a lot of its time saturated, which had a huge impact on network performance. The link was doubled up to 2Gb with LACP and the problem went away.

Of course, this was quite a while ago; links were later upgraded to 10Gb and then 40Gb. But another interesting place to look is the off-site link. As with any campus that has its own DCs, some of the network traffic is to and from local resources, but the vast majority of the Wi-Fi traffic was to and from the Internet, and the traffic graph on the Internet connection always mirrored the one on the Wi-Fi controllers.

At busy times, with 20,000 users on the Wi-Fi across over 2,500 APs, we would see maybe 4-6Gb/s of traffic.
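
Run the same rough arithmetic on those figures (taking the middle of the 4-6Gb/s range) and the averages are striking:

```python
# Rough back-of-envelope: campus-wide averages at a busy period.
# 20,000 users, 2,500+ APs and ~5Gb/s (middle of the 4-6Gb/s range) are the figures above.
users = 20_000
aps = 2_500
busy_period_mbps = 5_000     # ~5Gb/s of traffic

print(f"Average per user: {busy_period_mbps / users:.2f} Mb/s")   # ~0.25 Mb/s
print(f"Average per AP:   {busy_period_mbps / aps:.2f} Mb/s")     # ~2 Mb/s
```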

You will always have examples where users need high-performance Wi-Fi and are genuinely moving a lot of data. However, the vast majority of users are simply not doing this. Consequently, I could walk into the library, associate with an elderly AP that already had 58 clients on it, and happily get 50Mb/s on an internet speed test.

I’ve shared my thoughts before about capacity considerations, which Peter also touches on in his talk. Suffice to say, I think Peter is absolutely right in what he says here. With exceptions, such as applications like VoIP with predictable bandwidth requirements, we have a tendency to significantly overestimate the bandwidth requirements of our networks, and the assumptions on which those estimates are based are often misleading at best.
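
To illustrate how far the usual devices/bandwidth-per-person approach can drift from what actually happens, here’s a sketch comparing a typical planning calculation with the observed campus numbers above. The per-device planning figures are just common assumptions I’ve picked for illustration, not ones from Peter’s talk:

```python
# Sketch: a typical "users x devices x bandwidth per device" planning estimate
# versus the observed campus averages quoted earlier. The planning assumptions
# (2 devices per person, 5Mb/s per device) are illustrative only.
users = 20_000
devices_per_user = 2
planned_mbps_per_device = 5

planned_total_gbps = users * devices_per_user * planned_mbps_per_device / 1000
observed_total_gbps = 5      # middle of the 4-6Gb/s actually seen at busy times

print(f"Planned capacity: {planned_total_gbps:.0f} Gb/s")   # 200 Gb/s
print(f"Observed traffic: {observed_total_gbps:.0f} Gb/s")  # ~5 Gb/s
print(f"Over-estimate:    {planned_total_gbps / observed_total_gbps:.0f}x")
```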