Bigger is better and numbers sell, but what do you need?
With the ratification of Wi-Fi 7, which of course promises ever more speed, I'm reminded how important requirements are in network design. The bandwidth requirements of Wi-Fi clients are, in my experience, hugely over-estimated, when they're thought about at all. How often have we seen people jump onto an Ookla speed test and demand to see multiple hundreds of Mb/s of throughput before they'll consider the Wi-Fi good?
I'm not saying don't aim high: we should do everything we can to design networks that offer clients the highest MCS (Modulation and Coding Scheme) and avoid bottlenecks on the wire from the AP to the off-site links. But, as anyone who's built enterprise Wi-Fi networks knows, you can't provide the highest speeds to all of the clients all of the time. So what do you actually need?
I've recently been working in a manufacturing environment where the Wi-Fi clients are a mix of printers, scanners, and admin laptops. The biggest talkers, moving the most traffic, average around 3 Mb/s.
Of course that average masks traffic peaks, and we want to deliver a good user experience for opening large files and the like, but, truth be told, a lot of that traffic is Teams calls. What matters in a network like this is reliability: consistency of latency and roaming performance is far more important than big speed-test numbers.
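To make that concrete, here's a back-of-envelope capacity sketch. The 3 Mb/s average comes from the observations above; the client count, peak factor, and the 50% airtime-efficiency rule of thumb are illustrative assumptions, not measurements from the site:

```python
# Back-of-envelope Wi-Fi cell capacity check.
AVG_CLIENT_MBPS = 3    # observed average for the busiest clients
ACTIVE_CLIENTS = 30    # assumption: clients contending in one AP cell
PEAK_FACTOR = 5        # assumption: how far one client bursts above average

# 802.11ax, 40 MHz, 1 spatial stream, MCS 11 tops out around 286 Mb/s PHY.
# Real-world throughput is far lower; ~50% is a common rule of thumb.
PHY_RATE_MBPS = 286
effective_mbps = PHY_RATE_MBPS * 0.5

demand = ACTIVE_CLIENTS * AVG_CLIENT_MBPS      # steady-state cell load
peak_demand = AVG_CLIENT_MBPS * PEAK_FACTOR    # one client bursting

print(f"Cell demand: {demand} Mb/s vs ~{effective_mbps:.0f} Mb/s usable")
print(f"Headroom left for a burst: {effective_mbps - demand:.0f} Mb/s")
```

Even with every one of those hypothetical clients averaging the observed 3 Mb/s, a single 40 MHz Wi-Fi 6 cell has room to spare, which is why the design effort goes into density and roaming rather than raw speed.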
So this network design uses a fairly dense deployment of Wi-Fi 6 APs with 40 MHz channels on 5 GHz. These connect to modern gigabit switches with 20Gb of uplink capacity to a high-performance collapsed core, behind a 100-500Mb site link.
We're using 10Gb links from the edge switches and a hugely capable collapsed core switch because that's the norm for this class of enterprise equipment, not because we need the throughput.
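A rough sketch shows how little the Wi-Fi load registers against that wired capacity. The AP count here is an illustrative assumption; the link speeds are the ones from the design above:

```python
# Rough check of wired capacity against worst-case Wi-Fi offered load.
AP_COUNT = 40              # assumption: dense deployment across the site
EFFECTIVE_AP_MBPS = 143    # ~50% of a 286 Mb/s 40 MHz MCS 11 PHY rate
UPLINK_MBPS = 20_000       # 20Gb of uplink from an edge switch
SITE_LINK_MBPS = 500       # top end of the 100-500Mb site link

worst_case = AP_COUNT * EFFECTIVE_AP_MBPS   # every AP saturated at once
print(f"Worst-case Wi-Fi load: {worst_case} Mb/s")
print(f"Edge uplink utilisation: {worst_case / UPLINK_MBPS:.0%}")
print(f"Site link oversubscription: {worst_case / SITE_LINK_MBPS:.1f}x")
```

Under these assumptions the LAN never comes close to saturating, and the real constraint is the site link, which is exactly why per-client speed-test numbers tell you so little about this network.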
Nobody using this network complains when their speed test only shows 40Mb/s. They will complain when Teams calls drop because roaming takes too long.