On-Prem is cheaper than cloud…

I came across this tweet that got a few people talking:

For a long time the rhetoric I heard at my previous employer was “The cloud is just someone else’s computer”… which is true, of course, but intrinsic to that comment is the idea that if you have computers of your own, why would you need to use someone else’s?

That changed at some stage, as fashion does: where once it was clearly far more sensible to keep using our own ‘on-prem’ DCs, these suddenly became hugely expensive and the TCO of cloud looked a lot better.

So is on-prem cheaper than cloud? Well… yes and no… like so many things, it depends.

Most often you can spin the figures to suit the case you want to make. For example the cloud spend is all coming from the IT budget whereas the responsibility for something like aircon maintenance for the DC building might sit with the estates department. Costs can be moved around, included or excluded depending on the outcome you want – or maybe how honest you are…

Personally I think the true TCO of on-prem DC stuff ought to accurately reflect how much it costs the organisation rather than how much of it sits in one budget, but that’s just me and my simplistic view of how accounting ought to work.

Also, what’s the space worth, and has its cost been written off? A dedicated, fully owned DC building on a university campus has a very different value, and therefore potential cost, to DC space within a city-centre building. Basically: what else could that space be used for, and is it potentially worth much more filled with people rather than computers?

The size of an organisation makes a huge difference. One developer with a good idea can use AWS or Azure and throw together a level of infrastructure that would need at least tens of thousands to achieve with tin.

However just as the organisations with ‘legacy’ on-prem DCs are looking at their maintenance & replacement budgets with a heavy heart, those startups that grew as cloud native might well be looking at their monthly cloud fees and sighing just as heavily.

I’m personally a big fan of the hybrid approach. Some workloads work brilliantly in the cloud, others less so…

What that tweet pushes back against is the idea cloud is cheaper because it just is. That patently isn’t true and many have been burned by just how high their CSP bills are.

AOS-Switch (2930) failing to download ClearPass CA certificate

tl;dr – Check the clocks, check the well-known URL on ClearPass is reachable, and check you’ve allowed HTTP access to ClearPass from the switch management subnet.

Another in my series of simple issues that have caught me out, yet don’t seem to have any google hits.

When you implement downloadable user roles from ClearPass with an Aruba switch the switch uses HTTPS to fetch the role passed in the RADIUS attribute.

There are a few things you need in place to make this all work but the overall config isn’t in scope for this post. The key thing I want to focus on, that caught me out recently, is how the switch validates the ClearPass HTTPS certificate.

With AOS-CX switches (e.g. 6300) the certificate can simply be pasted into the config using the following commands:
crypto pki ta-profile <name>
<paste your cert here>

You don’t need the full trust chain either: if your HTTPS cert was issued by an intermediate CA you only need to provide that cert, though it doesn’t hurt to add the root CA as well.

With AOS-Switch OS based hardware (e.g. 2930f) you can’t paste the cert in, your CLI option is uploading it via TFTP.

Fortunately there’s a much easier way of doing this – an AOS-Switch will automatically download the CA cert from ClearPass using a well-known URL – specifically this one:

You have to tell the switch your RADIUS server is ClearPass by adding “clearpass” to the host entry – but I did say I wasn’t going to get into the config.
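For reference, the host entry looks something like this. Treat it as a sketch: exact syntax varies across AOS-Switch releases, and the IP address and shared secret are placeholders.

```
radius-server host 10.10.10.5 key "shared-secret"
radius-server host 10.10.10.5 clearpass
```

The `clearpass` keyword is what tells the switch the server is ClearPass and triggers the automatic CA certificate download.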

Recently I had a site where this didn’t work. The switch helpfully logged:

CADownload: ST1-CMDR: Failed to download the certificate from <my clearpass FQDN> server

This leads to:

dca: ST1-CMDR: macAuth client <MAC> on port 1/8 assigned to initial role as downloading failed for user role


ST1-CMDR: Failed to apply user role <rolename> to macAuth client <MAC> on port 1/8: user role is invalid

So what was wrong? In this case it was super simple. The route to ClearPass was via a firewall that wasn’t allowing HTTP access.

Other things to check are clocks – on both the switch and ClearPass – so always use NTP if you can. Some ClearPass versions have also shipped with bugs that break the well-known URL, so it’s worth checking the URL is working. There can also be some confusion between RSA and ECC certificates, both of which ClearPass now supports: the switch will use RSA.

Wi-Fi Capacity… Just what do you need?

Prompted by Peter Mackenzie’s excellent talk (of course it was) at WLPC 2022 titled “It is Impossible to Calculate Wi-Fi Capacity” I wanted to share some real world experience. I’ll also link to the presentation at the bottom of this page – you should watch it if you haven’t already.

In this talk Peter explores what we mean by capacity planning and amusingly pokes fun at the results of blindly following certain assumptions.

There’s also a look at some fascinating data from Juniper Mist showing real world throughput of all Mist APs within a particular time frame. It’s a huge data set and provides compelling evidence to back up what many of us have long known, namely: you don’t need the capacity you think you do, and the devices/bandwidth per person calculations are usually garbage.

I have walked into a university library building, full to bursting with students working towards their Easter exams. Every desk is full, beanbags all over the floor in larger rooms to provide the physical capacity for everyone who needs to be in there. Everyone’s on the Wi-Fi (ok, not everyone, but most people) with a laptop and smartphone/tablet. Tunes being streamed by many.

I can’t remember the overall numbers, but there were over 50 clients on most APs. In these circumstances the average throughput on APs would climb to something like 5Mbps.

Similarly, in a collection of accommodation buildings with about 800 rooms – lots of gaming, Netflix and generally high-bandwidth stuff going on – the uplinks from the distribution router would almost never trouble a gigabit.

These are averages of course, which is how we tend to look at enterprise networks for capacity planning. We’re interested in trends and, on the distribution side, making sure we’re sitting up and taking notice if we get near 70% link utilization, perhaps lower in many cases.

In fact the wired network is where this gets really interesting. This particular campus at one time had a 1Gb link between two very busy routing switches that spent a lot of its time saturated. This had a huge impact on network performance. It was doubled up (LACP) to a 2Gb link and the problem went away.

Of course this was quite a while ago. Links were upgraded to 10Gb and then 40Gb, but another interesting place to look is the off-site link. As with any campus that has its own DCs, some of the network traffic is to and from local resources, but the vast majority of Wi-Fi traffic was to and from the internet. The traffic graph on the internet connection always mirrored the Wi-Fi controllers’.

At busy times, with 20,000 users on the Wi-Fi across over 2,500 APs, we would see maybe 4-6Gbps of traffic.
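The numbers above make the point starkly. A quick back-of-envelope sketch of the per-user average:

```python
# Per-user average from the peak figures quoted above:
# 20,000 Wi-Fi users sharing roughly 4-6Gbps of internet traffic.
def per_user_mbps(total_gbps: float, users: int) -> float:
    """Average throughput per user in Mbps."""
    return total_gbps * 1000 / users

print(per_user_mbps(4, 20_000))  # 0.2 Mbps per user at the low end
print(per_user_mbps(6, 20_000))  # 0.3 Mbps per user at the high end
```

A fraction of a megabit per user on average, on a busy campus network – a long way from the tens of megabits per device that naive capacity calculations tend to assume.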

You will always have examples where users need high performance Wi-Fi and are genuinely moving a lot of data. However the vast majority of users are simply not doing this. Consequently I could walk into the library, associate with an elderly AP that already had 58 clients associated, and happily get 50Mbps on an internet speed test.

I’ve shared my thoughts before about capacity considerations, which Peter also touches on in his talk. Suffice to say I think Peter is absolutely right in what he says here. With exceptions, such as applications like VoIP with predictable bandwidth requirements, we have a tendency to significantly over-estimate the bandwidth requirements of our networks and the assumptions on which those assessments are made will often be misleading at best.

ClearPass Guest Sponsor Lookup

Guest user self-registration is one of my favourite things. It allows users to create their own account without invoking a helpdesk ticket. Sponsorship means an account has to be approved before it becomes active.

Typically e-mail is used to reach the sponsor and tends to be specified as a hidden field in the form, a drop down menu or an LDAP search.

I recently configured this for a customer who wanted to search their on-site Active Directory for the sponsor, specifically users within a group whose only members were other groups – a nested group. Nested groups are very common in organisations, but I struggled to find clear documentation on how to make this work for this particular use case.

First add the sponsor lookup field to your self-registration form

To do this, open the form and add a field in the appropriate place. Then, in the form field editor, select sponsor_lookup. You probably want this to be a required field.

You also need to add the LDAP server to ClearPass Guest.

From Administration > Operator Logins > Servers select “Create new LDAP server”.

If this is an AD server it will be using LDAP v3; ClearPass Guest automatically uses this version of the protocol when Active Directory is selected as the server type.

Enter the server URL in the format ldap://<servername>/dc=<domain>,dc=<suffix>

The Bind DN and Bind Username will likely be the same <user>@<domain>

At this point, you should be able to perform lookups or searches against the directory. In my case I needed to restrict the search to a distribution group, including its nested members. This uses the LDAP matching-rule-in-chain OID 1.2.840.113556.1.4.1941.

So, as per the screenshot above, choose a custom LDAP filter. Here’s the outline of the filter I used:

    # Match users in any of these groups

    # Match users by any of these criteria

As the comment suggests, you can add more groups to the search. For a primary group that users are members of, the format is (memberOf=CN=Wireless,CN=Users,DC=clearpass,DC=aruba,DC=com)
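For illustration, a nested-group filter of this kind combines the matching-rule-in-chain OID with the search criteria. The group DN, attributes and search placeholder below are all assumptions to adapt for your own directory:

```
(&
    (objectClass=user)
    (|
        # Match users in any of these groups
        (memberOf:1.2.840.113556.1.4.1941:=CN=Sponsors,OU=Groups,DC=example,DC=com)
    )
    (|
        # Match users by any of these criteria
        (sAMAccountName=*@search*)
        (displayName=*@search*)
    )
)
```

The `:1.2.840.113556.1.4.1941:` matching rule tells Active Directory to walk the membership chain, so users in groups nested inside the named group will match too.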

This query worked a treat and meant when searching guests wouldn’t be presented with accounts for admin users or meeting rooms.

Reflecting on 2021

Another flippin’ year goes by with curtailed social interactions, travel and that raised baseline of anxiety but aside from all that, here’s what I’m thinking.

It’s been another tricky year for lots of businesses. Network assets get sweated for a bit longer because everyone’s a little uncertain, so you don’t spend money you don’t need to. For me, that meant the year began with the threat of redundancy. More on that later…

Wi-Fi continues not to be taken quite as seriously as it should be by most enterprises. As Keith Parsons commented in his London Wi-Fi Design Day talk, Wi-Fi doesn’t punish you for doing it wrong. That’s not strictly true – a really badly designed network can fall apart entirely under load, something I’ve seen a few times – but his point stands: you can throw a load of APs into a space with very little thought and… it works.

An interesting challenge I’ve encountered has been organisations ignoring the design, doing what they’ve always done, and then complaining they have issues. I’ve also encountered an organisation stating “Wi-Fi is not business critical” when clearly it is because as soon as there are issues it becomes the highest priority. In many circumstances Wi-Fi is the edge everyone uses. Not only is Wi-Fi as business critical as the wired network, in many cases it’s more so. Of course the wired edge supports the wireless edge, we need it all to work.

And on that subject, we continue to see a separation between wired and wireless in many network teams. In my mind this doesn’t make sense a lot of the time and I consider the wireless and wired edge to be the same domain. Plus we all know the wires are easier.

Wi-Fi 6E has arrived and will totally solve all our problems… It’s something I’ve yet to play with but, lacking either an AP or a client, that will have to wait. Wi-Fi 6 (802.11ax) has arguably failed so far to deliver the huge efficiency increase promised, I think because legacy clients persist, and will do for many years to come, but also because the scheduling critical to OFDMA is probably not yet where it needs to be.

It was also really disappointing to see Ofcom release such a small amount of 6GHz bandwidth here in the UK.

This has been another year of firsts for me: working with high-profile clients, learning new technologies (gaining Check Point certifications), and embracing project management (I’m a certified PRINCE2 Practitioner).

It’s also particularly struck me how much my so-called “career” has been held back by my ethics and by what I could refer to as loyalty but, if I’m honest, is probably comfort. I don’t have a complaint about this – I am who I am – this is just an observation.

I’ve been approached for, and turned down, extremely well-paid roles at organisations whose core purpose is something I cannot support. I’m happy with this. If I’m not working to make the world better, I at least need to feel I’m not actively making it worse.

This is no judgement on anyone who does work in areas I disagree with btw. We all have to make our own decisions about what matters in life.

The loyalty/comfort is a trickier area. I declined a really interesting opportunity that meant moving overseas. Ultimately that was probably the right decision but, on reflection, it was taken too quickly. I work an interesting role for a great company, long may that continue, but when the feet start itching or an opportunity comes my way, I need to be ready to embrace the uncertainty. Starting the year with redundancies, and losing a great member of the team, served as a reminder that loyalty in employment is transactional.

Wi-Fi Design in Higher Ed

I was recently invited to speak at the London Wi-Fi Design Day hosted by Ekahau and Open Reality. It was a fantastic day, great to catch up with people in person and some excellent talks (if I say so myself).

You can watch all the talks, from this and previous years, on this playlist. Talks from London 2021 start at number 41 as the list is ordered today.

My talk, some thoughts on Wi-Fi Design in Higher Education can be watched below.

I was particularly challenged by Peter Mackenzie’s talk on troubleshooting and the idea we all tend to jump to answers before we’ve asked enough questions. Hugely helpful and highly recommended.

ClearPass Galleria logo sizing

The ClearPass Galleria template makes it easy to have a pretty, responsive captive portal that adapts to any screen format. It’s mostly a case of chucking some images in, setting the colours, and then it looks great.

Some of the defaults are a bit odd though, especially the logo size. Recently I built a captive portal for a customer and their logo was king but in Galleria it was far too small.

Fortunately Aruba have made it easy to override CSS in the template. From Administration \ Plugin Manager, choose Configuration on the Galleria plugin in use.

Add your CSS overrides in the HTML HEAD section. This can be used to override any of the CSS in the template. For the logo you want to play around with this:

.nonav-logo img {
    max-width: 100%;
    max-height: 220px;
}
Charging EVs

Another post that’s nothing to do with Wi-Fi.

CCS rapid connector (left) – Type 2 connector (right)
Photo – Paul Sladen

Zap-Map is a popular service in the UK for locating EV chargers and, hopefully, getting some indication as to their status. Recently I’ve noticed a lot of chargers marked as faulty with comments like “only charges at 10kW instead of 43kW” or “only supplying 7kW instead of 50kW”. These almost always mean the same thing – the user doesn’t know what they’re doing. The result is a charger gets flagged as faulty when there’s nothing wrong, everything is working just as it should.

I don’t think it’s entirely fair to blame the user here. Yes, they should have read the manual for their car, but this stuff can be complicated, and the dealer supplying their car probably hasn’t explained it because they don’t understand it themselves (there are good car dealers out there selling EVs but I’ve yet to meet one).

In summary, for those already bored: every time someone has reported that a DC rapid charger is bad because it’s only delivering 7kW, they’ve just used the wrong connector – Type 2 instead of CCS, most likely. If a charge point says it’s 22kW, you won’t get that unless your car’s onboard charger is capable of taking it (Read The Flippin’ Manual). When you connect your new car, capable of CCS rapid charging at 100kW, to a 125kW charger, you won’t actually get 100kW for much of the time and you might not see that high a rate at all. All this is completely normal.

The first key point: public charge points are either AC or DC with many rapid chargers offering both. If you’re plugging in using a type 2 connector then you’ll be charging with AC. Most AC charge points require you to use your own cable but some, mostly older, rapid chargers have a tethered type 2 AC cable, sometimes labelled 43kW.

Fast Charging

The rate at which your car will charge on AC is dictated by the capability of the car’s onboard charger and how much power the charge point signals the car can take.

AC charge points are not chargers at all, they’re basically a fancy switch that supplies mains power to the charger built into the car so it’s important to know what type of onboard charger your car has. It’s most likely to be 7kW (single phase), 11kW (three phase), or 22kW (three phase) if it’s a Renault Zoe or some models of Tesla.

Note: Three phase chargers require a three phase cable. There have been instances where dealers have provided the wrong cable for cars that have a three phase charger as an option. If your 11kW capable car is plugged into a 22kW charge point but only charges at 7kW, check the cable is correct.

Charge points themselves are either single phase in which case they can supply about 7kW or three phase and able to supply up to 22kW, occasionally as much as 43kW, and you can use any of these with any car.

For example, my Kia has a 7kW onboard charger. If I connect it to a 7kW PodPoint at the local Lidl I’ll get about 7kW (usually 6.6kW). If I connect it to the 43kW AC connector of a BP Pulse rapid charger I’ll get about 7kW. So what do I get from a 22kW point? Yep, I’ll get 7kW. If I connect my Zoe, with its 22kW capability, into the same 43kW rapid, I’d expect to get 22kW.
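The AC rule of thumb in those examples boils down to one line: the effective rate is the lower of what the charge point can supply and what the car’s onboard charger can accept. A minimal sketch:

```python
# Effective AC charging rate: the bottleneck is whichever is smaller, the
# car's onboard charger or the charge point's supply.
def ac_charge_rate_kw(onboard_kw: float, charge_point_kw: float) -> float:
    return min(onboard_kw, charge_point_kw)

print(ac_charge_rate_kw(7, 43))   # Kia (7kW onboard) on a 43kW rapid AC -> 7
print(ac_charge_rate_kw(22, 43))  # Zoe (22kW onboard) on the same rapid -> 22
print(ac_charge_rate_kw(7, 22))   # Kia on a 22kW point -> still 7
```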

Rapid charging

eVolt rapid charger with AC, CCS and CHAdeMO

Rapid chargers really are chargers and supply DC directly to the battery in your car, with the charge rate controlled by the car’s battery management system (BMS). These use a different type of connector, either CCS (most new cars) or CHAdeMO (Nissan Leaf, old model Kia Soul EV), and these cables are always tethered, permanently connected to the charger. Earlier I mentioned that some rapid chargers have an AC cable; this can cause confusion as, of course, it will fit your CCS car… but you’re then using AC and not DC, which is where most of the errant Zap-Map complaints come from.

The vast majority of rapid chargers are nominally rated at 50kW with 125, 150 and even 350kW chargers being installed.

Just as with AC, how fast your car actually charges depends on the capabilities of the rapid charger and the car. The big difference is that many cars limit the charge rate depending on the battery’s state of charge and, sometimes, the temperature so how fast it can go will vary.

To give an example again, my Kia can charge at up to 77kW over CCS. In practice it can only take this much power when the battery is below 40%, and the charge rate soon tails off. Fastned have really good information about this for many cars. Their graph shows that although my car can take up to 77kW it steps down a few times, and once the battery is over 75% it drops to below 40kW, then below 30kW, before tailing off to a trickle as the car approaches 100%.

This is why you should never take your car to 100% on a rapid charger… it takes ages and ties up the charger, stopping others from using it. Charging on a rapid is typically also more expensive, so you’re wasting time as well as money… and annoying anyone else waiting.

I typically see about 43kW reported by my car, which climbs to up towards 50kW right before it sharply steps down at just over 75%, exactly in line with the Fastned graph.

What these graphs demonstrate is there’s a tactic to time efficient rapid charging. If you’re making a long journey, plan charging stops when the battery is going to be getting low. Charging from 20-60% is generally faster than going from 40-80%.

There are folks who say all this is too complicated, and maybe it is. Most EV owners will just plug in at home and never worry about it, but it’s worth understanding at least some of this so you don’t make a fool of yourself on Zap-Map comments. It isn’t that many years ago we had different types of petrol and two stroke oil to contend with, not to mention distributor points and the fact a car would never start on a damp morning. By comparison knowing a couple of headline numbers is hardly a major barrier to EV adoption.

What I’ve said above is true of most cars. There are, of course, exceptions. Firstly Tesla have their own charging network which in some cases delivers rapid charging over type2 connectors. I have no experience of Tesla’s chargers. Some early models of Renault Zoe with the quick charge option can AC charge at 43kW. I believe they’re the only car that can do this and even Renault dropped it. Whilst all models of Renault Zoe can charge at up to 22kW AC only the very latest cars have CCS capability and even then it was an option until mid 2021 so there are plenty of Zoes around that cannot DC rapid charge. There’s no risk of confusion with this as the CCS plug won’t fit.

Cutting red tape

Another of my occasional non Wi-Fi related posts.

There’s often talk in political circles about “cutting red tape” and it’s almost always accepted as a good thing. After all, red tape is that irritating thing that stops us getting stuff done… and it’s red, which is bad.

But I wonder if you’ve ever stopped and questioned what “cutting red tape” actually means. What is it you want to do, what or who is stopping you from doing it, and why?

A really good example from the world of corporate IT is change management. I don’t know anyone who likes change management. It’s stupid red tape that slows everything down, stalls projects, stops you from simply getting stuff done. It’s the very definition of red tape that we could happily do away with. Yet pretty much every corporate has some form of change control… So if this red tape is so bad, why do so many organisations embrace it?

Back when I ran my own IT department looking after the servers, desktops, and everything in between I could just do stuff. Occasionally when I did stuff everything broke and it was on me to fix it. I was answerable to other people, but they didn’t really understand what I did and generally they were just happy I’d made everything work, and wasn’t I clever.

Later, early on in a different role, I broke something but this time there were questions… managers wanted to know who had approved the change I’d made, the director wanted a report from the change manager about what lessons had been learned and why our roll back plan hadn’t worked (it didn’t exist). It all got very uncomfortable because, unbeknownst to me, there was some quite important red tape I’d just moved out of the way and crawled under that existed to make people like me think through changes more carefully.

We have red tape across our society and, yes, some of it is not helpful. Just as change management can be an unhelpful barrier, a pointless box-ticking exercise and a process that does little more than provide a handy scapegoat on which to dump the blame, red tape in the public sector can absorb time, effort and money, delivering very little.

However when it’s done right, it’s fantastic regulation that empowers people to do their jobs, supports individuals and teams through difficult decisions and prevents those who might take dangerous shortcuts for their own financial gain from doing so, or at least holds them accountable.

When someone talks about “cutting red tape” it’s important to understand what they mean. Is it removing unhelpful bureaucracy, or is it removing important protections?

For example in the UK you can’t just do whatever you want with land you own. Planning permission is required for a great many things from modest house extensions up to large scale projects. Planning decides whether you can do it and then building regs serve to make sure it’s done properly. Again, nobody likes planning permission but allowing anyone to build anything without controls would be a complete disaster.

I also wonder who benefits financially from that particular red tape being removed.

Recently it’s been found that many of the UK’s water companies are illegally emptying raw sewage into watercourses, polluting our rivers, the sea around our coastline and our beaches. There are plenty of rules against this sort of thing, but the consequence for breaking them is a fine which, so the accusation goes, is simply absorbed into the cost of doing business. Sometimes that troublesome regulatory red tape is performing a really important task, and getting rid of it leaves the way open for unscrupulous operators to do really damaging things. Perhaps, sometimes, rather than less red tape, we need a bit more.

More RadSec fun

A previous post waffled about setting up RadSec between an Aruba AP and ClearPass running in Azure. Having deployed this in a slightly different Azure environment, I’ve learned some things worth sharing.

The first thing is that ClearPass handles RadSec using RadSec Proxy. This receives the RadSec connection and proxies the RADIUS traffic to the ClearPass RADIUS server. One casualty of this approach is that, at the time of writing, Policy Manager sees these incoming connections as coming from localhost.

There are circumstances where it’s useful to know which ClearPass cluster member has dealt with a request. For example if I don’t have a cluster VIP, and that’s not supported in Azure, it’s possible to build HA for a guest portal if you know which server handled the initial MAC-auth. With RadSec you don’t, so this can’t be done.

My Azure lab was very simple, just a ClearPass VM running in an Azure tenant with no Network Security Groups configured. In the production environment I’ve worked with, ClearPass was placed in an NSG protected by Azure Firewall, which performed source NAT. As a result, incoming RadSec connections all have a source address in the firewall’s private range, rather than the true source.

The RadSec tab of the Network Device config in ClearPass lets you set an override source IP. Previously I put the NAT public IP of my client in here; in production I’ve used the internal /26 network range of the Azure firewall, as the config box accepts a range.

I’ve then used the option to validate the certificate, using the CN or SAN and entered the CN name of the cert issued to that client.

The final point worth noting is that neither ClearPass nor the Aruba AP sends keepalives down the TLS tunnel, and the Azure firewall drops idle connections after four minutes. This is not configurable, so you will see lots of up and down messages in the event viewer. When I was initially testing in my lab this was an indication of a problem; in production it was just normal behaviour, which caused some confusion.

In a busy network with lots of authentication traffic you’ll probably see the tunnel stay active for much longer. I’m not entirely convinced all is well, but it’s working.