An impressive attempt to summarise Wi-Fi which is a very deep topic. However I think the executive summary already missed the most critical thing about Wi-Fi:
only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
It's a shared medium and it's not even half duplex, unlike the dedicated full duplex you would typically get with an ethernet cable to a switch port.
The fact that Wi-Fi achieves what it does with this limitation, and how it co-ordinates the dance of multiple unknown clients using the same medium - and in the presence of other RF technologies to boot - is indeed an incredible technology story, but this Achilles heel is the single most defining thing about Wi-Fi performance.
> only 1 transmitter at a time per channel - across all WLANs, yours and your neighbours, with no deterministic way to avoid collisions.
That's not correct. You and your neighbor can use the same channel at the same time. On your network, the transmissions of the other network will appear as noise. As long as the other devices are far enough away, however, your devices will still be able to make out their own signal.
This is a common misconception: you and your neighbour can configure the same channel, but you cannot successfully transmit at the same time on the same channel within range of each other. Nor can you and your own AP successfully transmit at the same time on the same channel.
When you and your neighbour _appear_ to be transmitting at the same time, each adapter is actually spending most of its time waiting for a clear medium and for various backoff timers to expire before attempting to transmit.
"Appear as noise" is not defined for Wi-Fi adapters. There is only "I received a frame addressed to me and acknowledged it" or "I sent a frame and either did or didn't get an acknowledgement back from the receiver". Receivers do not know why they didn't receive a frame, or, if they received a corrupted frame, why it was corrupted. They just wait for a retransmit. Senders ordinarily wait a certain time to receive an acknowledgement, and if they don't, they start the transmit wait cycle again. But they often then reduce the data rate to increase the odds of a successful transmission.
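To make that concrete, the sender side can be sketched as a loop. This is a toy Python sketch of DCF-style behaviour; the contention-window values are the usual defaults, but the slot timing, DIFS/SIFS waits, and rate adaptation are all simplified away, so treat it as an illustration rather than the standard's state machine:

```python
import random

# Toy sketch of an 802.11 DCF-style sender: wait for an idle medium,
# count down a random backoff, transmit, and widen the contention
# window on every missed acknowledgement.

CW_MIN, CW_MAX, RETRY_LIMIT = 15, 1023, 7

def send_frame(channel_clear, transmit) -> bool:
    """Attempt one frame with binary exponential backoff; True on ACK."""
    cw = CW_MIN
    for _ in range(RETRY_LIMIT):
        backoff = random.randint(0, cw)   # pick a random slot count
        while backoff > 0:
            if channel_clear():           # count down only while idle
                backoff -= 1
        if transmit():                    # True means an ACK came back
            return True
        cw = min(2 * cw + 1, CW_MAX)      # no ACK: widen the window
    return False                          # give up; upper layers retry
```

The key point is visible in the loop: every deferred or failed attempt just feeds back into more waiting, and the adapter never learns why a frame was lost.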
I'm glossing over some complexity here, because there's a sender and receiver to consider, and each has a different view of the RF environment, but the point is always correct when all transmitters and receivers (let's say 2 APs, each with 1 client) are in audible range of each other. And this is most of the time. Note that "audible range" (where the signal is such that the medium is deemed busy by the adapter) is much larger than the "usable range" (where data can be transmitted at reasonable speeds). So transmitters create interference in a much larger area than they actually operate in.
That means your neighbour transmitting at 6Mbps to his AP will indeed degrade the performance of your client who wants to transmit at 600Mbps because your client has to wait ~100 times longer for a clear medium.
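Back-of-the-envelope numbers for that ~100x claim (payload airtime only; preambles, ACKs and contention would add overhead on top):

```python
# Rough airtime comparison for one 1500-byte frame at two PHY rates.
# Ignores preamble, ACKs, and contention overhead -- payload airtime only.

FRAME_BITS = 1500 * 8  # a typical MTU-sized frame

def airtime_us(rate_mbps: float) -> float:
    """Microseconds the medium is busy carrying the payload at a given rate."""
    return FRAME_BITS / rate_mbps  # bits / (bits per microsecond)

slow = airtime_us(6)     # a struggling client far from its AP
fast = airtime_us(600)   # a modern client with a clean link

print(f"6 Mbps:   {slow:.0f} us per frame")   # 2000 us
print(f"600 Mbps: {fast:.0f} us per frame")   # 20 us
print(f"ratio:    {slow / fast:.0f}x")        # the ~100x factor
```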
The multi access story is improving, though.
OFDMA, introduced with Wi-Fi 6 and extended with MRU in Wi-Fi 7/802.11be: https://blogs.cisco.com/networking/wi-fi-7-mru-ofdma-turning...
Yes, and before that MU-MIMO was also an improvement to the problem. Still only 1 transmitter at a time, but multiple receivers.
Well, the newer WiFi standards on 6 GHz support a lot more channels. Not a perfect workaround by any means, but it does significantly reduce congestion.
Yeah, the 6 GHz band doesn't have DFS requirements, which remove a lot of usable channels on 5 GHz. Unfortunately it'll be a while until most devices support 6 GHz.
Yes, that helps quite a lot in practice, because in most places there's limited "frequency-domain" capacity (i.e. free channels) but plenty of "time-domain" capacity (i.e. free air-time). So even if you are sharing a channel with 4 other APs and their users, everybody may subjectively feel the network is fast. When chopping up the time domain into microseconds there's just a lot of idle time available, even if clients are pulling down files at 600Mbps.
But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded. It's like a linear hack to an exponential problem. It seems to work at first, but under very high load conditions performance still degrades ever faster until it falls off a cliff. Then there's all sorts of complex dynamic behaviour like the hidden node problem to add to this, but it all boils down to needing air-time and SNR.
> But at a fundamental level, the channel space (~60 across all bands best case) is extremely limited but the potential growth in transmitters is unbounded.
You're overlooking the spatial dimension: https://en.wikipedia.org/wiki/Spatial_multiplexing
I'd like to understand why the WiFi spec developed so slowly from G to N and finally to AC, but now it seems like a new version is released every other year, yet many of the features/extensions are poorly implemented or have nearly zero real-world improvement.
Speaking just on timelines (rather than actual underlying innovations or improvements): 802.11 was in 1997, the next in 1999, G in 2003, then a 6 year gap to N in 2009, a 4 year gap to AC in 2013, an 8 year gap to wifi 6 in 2021, wifi 7 in 2024 (though apparently buyer beware), and wifi 8 expected (according to the article) in 2028. Doesn't seem too rapid? The 8 year gap is the odd one out.
I think part of it is that without a regular, practiced process for bumping standards, gaps between revisions can grow quite large and stagnation can set in; any significant improvements then take longer to come to fruition than they would under regular revisions that are only modest most of the time. Looking at a few other things that come to mind: USB had an 8 year gap between 2 and 3 as well; PCIe had a 7 year gap between 3 and 4 (and while there was only a 3 year gap between the PCIe 5 and 6 specifications, it still took 3 more years, until 2025, for the first PCIe 6 devices, and I still can't buy a consumer-level PCIe 6 motherboard, but that's a separate mess); C++ had an 8 year gap between C++03 and C++11; Java had a 5 year gap between 6 and 7 (and another 3 years after 7 to get to Java 8). All of these things now have more rapid cycles.
I would agree with that. G to N was perhaps the most critical move in Wi-Fi because it included MIMO. You can think of this as unwanted signal echoes and reflections being switched from a liability to a benefit. Heck, I _still_ run WiFi-4 networks and they perform very well. WiFi-5 was an incremental upgrade, with many experimental features that were barely used in practice.
802.11 is in general a vast swag of cool tricks, and when enough ideas are thrown at a wall, many do end up sticking, but for the most part the benefits are cumulative. MIMO being one major exception.
I'm not a hardware guy, but my guess would be that evolution of radio transceiver tech in the cell space drives improvements downstream in wifi. Better transceivers can pull quality signals from what was noise generations past. It's not magic of course, but the speed transceivers can run over copper cable goes up similarly: 1Gbps was a fast cable a while ago, and now we're doing hundreds of gigabits commonly.
Another thing is that features like beamforming and higher QAM, let's say, are going to matter more in ideal scenarios where APs are in their sweet spot relative to clients, and you get to take advantage of high SNRs. Is that going to help when someone buys a Netgear Wifi 7 AP only to flip it upside down behind the couch in their apartment in an environment where 2.4 and even 5 ghz are basically gone from all their neighbors' use? Still, faster data rates mean clients get on and off the air quicker overall, saving airspace and battery if applicable. So, I think there's mainstream and highly specialized features rolling out simultaneously.
Does any of it have to do with the spectrum becoming available? After 2.4GHz and 5GHz, I have no idea what else the latest/future gens of WiFi are using. As some tech like 2G is no longer in operation, that spectrum was opened up. There are other frequencies that have become available where operating the older equipment that used to operate there is a big no-no now. There was a frequency range used by old wireless microphone systems that are banned at locations.
Just taking a swing at it, but I don't play that sport so probably a big whiff
In regulatory regions where it is usable, 802.11ax defined operation on some 6 GHz channels. Wi-Fi 6E is the certification that extended that to roughly the entire 6 GHz band, for ~1 GHz of contiguous RF bandwidth in that area alone.
The "old" cellular bands aren't generally open, at least in the States. We tend to use them for newer licensed stuff in cellular-land instead of the old licensed stuff we used to do. (Old modulation techniques die out and get replaced, but licensed RF bandwidth is still licensed RF bandwidth.)
> Wi-Fi signal strength decreases at an exponential rate as you move further away from a router.
This is surprising to me. I'd have guessed it decreases quadratically (i.e. due to the inverse square law), not exponentially.
The paragraph below seems to contain an explanation, but I don't really understand it (namely because I don't know what that percentage "Coverage" column actually means, or what we mean with "the total distance at each QAM step").
So that table is using distance as a proxy for signal to noise ratio. SNR is what really matters.
Each data rate in the standard uses a different encoding technique. "Faster" encoding techniques cram more data into a given transmission interval but require a higher signal to noise ratio to be received without error. Since SNR declines with distance you can have a rough idea at what distance from a transmitter you will be able to receive at what data rate.
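That distance → SNR → modulation chain can be sketched numerically. The free-space path loss formula is standard, but the SNR thresholds, transmit power, and noise floor below are illustrative placeholders rather than values from 802.11:

```python
import math

# Free-space path loss (distance in metres, frequency in MHz) and a
# toy SNR -> modulation lookup. Thresholds are illustrative only.

def fspl_db(d_m: float, f_mhz: float) -> float:
    """Free-space path loss in dB for metres/MHz."""
    return 20 * math.log10(d_m) + 20 * math.log10(f_mhz) - 27.55

SNR_TIERS = [          # (min SNR dB, label) -- rough, not from the spec
    (35, "1024-QAM (top rates)"),
    (25, "256-QAM"),
    (18, "64-QAM"),
    (9,  "16-QAM"),
    (4,  "QPSK"),
    (0,  "BPSK (lowest rates)"),
]

def modulation_for(snr_db: float) -> str:
    for thresh, label in SNR_TIERS:
        if snr_db >= thresh:
            return label
    return "out of range"

TX_DBM, NOISE_DBM = 20, -90   # assumed link budget, free space only
for d in (1, 5, 20, 50):
    snr = TX_DBM - fspl_db(d, 5500) - NOISE_DBM
    print(f"{d:3d} m: SNR ~{snr:5.1f} dB -> {modulation_for(snr)}")
```

Real walls, antennas, and multipath move these numbers around a lot, but the shape of the curve (denser modulation demanding ever-higher SNR) is the point.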
However, people and vendors focus far too much on maximum throughput. I've seen data showing that even in the best conditions, clients spend about 1% of their time transmitting or receiving at the highest data rates, because they are dynamically adjusting the data rate based on the perceived SNR.
Individual clients' peak throughput also works against _aggregate_ throughput when talking about wireless networks with multiple users. If you have 100 clients, do you want one to be able to dominate the others or everyone get a more or less equal share? These peak speeds assume configurations that I would never deploy in practice, because they favour individual users and cripple aggregate throughput - things like 160 MHz wide channels.
But the sticker speed is what sells.
There are a lot of people who are the only ones using their Wi-Fi, so they probably don't care about the performance for anyone else
But this is the point: what your neighbours are doing greatly affects the performance of your network.
If you have a good connection and are successfully able to transmit packets to your AP at 600Mbps, and your neighbour has a poor connection and is transmitting at 6Mbps to his AP at that moment, you literally have to wait ~100 times as long for a free medium before you can attempt to transmit. And that's for every single frame. Then you have to hope his client is well-behaved enough not to transmit while you are transmitting. Otherwise you end up having to wait again and retransmit anyway.
You might not notice this with only 2 clients. It might be the difference between an 80MBps and a 50MBps download, for example. But it decays exponentially with the number of clients.
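This decay is often called the 802.11 performance anomaly: per-frame fairness gives every station roughly the same frame rate, so aggregate throughput collapses toward the harmonic mean of the stations' PHY rates. A minimal sketch of that result:

```python
# Classic 802.11 "performance anomaly": with equal-sized frames and
# per-frame fairness, one slow station drags everyone down, because
# aggregate throughput is the harmonic mean of the per-station rates.

def aggregate_mbps(rates_mbps: list[float]) -> float:
    """Aggregate throughput when all stations send equal-sized frames."""
    n = len(rates_mbps)
    return n / sum(1.0 / r for r in rates_mbps)

print(aggregate_mbps([600, 600]))  # two fast clients: 600 Mbps total
print(aggregate_mbps([600, 6]))    # one slow client drags it under 12
```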
https://en.wikipedia.org/wiki/Power_law
Because the variable is the base, not the exponent.
I know what "exponentially" means, I know what "quadratically" means (and how it's not exponentially), and I know the inverse square law. Hence my question why the article claims "signal strength" decreases exponentially, when the raw power received by an antenna definitely decreases quadratically, not exponentially. That's just physics. But there might be some convoluted thing about stepping down symbol rate which affects throughput (which I guess could be colloquially called "signal strength" if I squint really hard) that I don't understand here.
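For the raw physics here: under the inverse square law, each doubling of distance quarters the received power, which is a constant 6 dB step per doubling, i.e. quadratic decay, not exponential. A quick numeric check (free space only, with an assumed 100 mW reference power at 1 m):

```python
import math

# Inverse-square falloff: doubling distance quarters the received power.
# In dB that's a fixed -6 dB step per doubling -- quadratic decay, which
# looks dramatic on a dB plot but is not exponential.

def rx_power_mw(d_m: float, p0_mw: float = 100.0) -> float:
    """Received power under free-space inverse-square falloff (1 m reference)."""
    return p0_mw / d_m**2

for d in (1, 2, 4, 8, 16):
    p = rx_power_mw(d)
    print(f"{d:2d} m: {p:8.4f} mW  ({10 * math.log10(p):6.1f} dBm)")
```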
yeah, it's pretty common to refer to x^2 as exponential colloquially since there's A. an exponent B. a single term for all values (vs. quadratic, cubic, quartic...)
But you're technically correct!
I'm actually not sure that they don't actually mean exponentially. There's something about not only increasing the distance, but potentially also the modulation (and thus the symbol rate) stepping down, which maybe in total causes the decline to be ~exponential? But it's not clear to me at all. That's why I ask, I have a hard time parsing it.
But then again, the sentence uses the term "signal strength", not "throughput", so that would suggest quadratically. But I guess "signal strength" could be meant colloquially and mean more than just the raw signal power received by the antenna, here.
It's all very fuzzy to me, as it stands.
Do you also think that f(x) = x^1 is exponential? How about f(x) = x^0?
Kind of irrelevant, because you could also ask "Do you also think that f(x) = x^1 is polynomial? How about f(x) = x^0?" The distinction was clearly between polynomial (specifically quadratic) and exponential, leaving those trivial cases out.
Today I set up a NWA210BE (Zyxel) to replace a unifi 6+ AP; I bought it second hand and my key metrics were: 4x4 MIMO, available used/discounted, current gen, fully functional standalone mode.
The 4x4 makes all the difference. Sitting in my car, the 6+ would fight with my 4G for internet and cause maps to be super slow; now I'm off the property before it's unusable.
I had intended to put APs in multiple rooms, but there doesn't seem like much point now.
Interesting...
I have a Netgear WAX218, one of the last cheap business-class APs I could find that don't require a cloud service to manage. WAY better than the pro-sumer wifi routers I was running before in access point mode. I'll have to look into Zyxel offerings a bit more when I'm ready to replace my Netgear.
Anyone know of a similarly excellent resource for understanding wired networking? CAT specifications, how to pick high quality switches/routers etc.?
Beej's guide will help you with understanding networking overall; I don't think it would help you choose switches/routers specifically.
Nice detailed article!
Finding it increasingly difficult to avoid bottlenecks though. Even with wifi 7 I still get 1.3 Gbps on my Mac and 0.5 Gbps on my iPhone. More than enough realistically, but upstream internet is 1.7 Gbps, so it's a tiny bit unfortunate.
Think I'm just going to wire the place with 10 gig fiber
>The speed advantages that Access Points have over mesh systems will become much more obvious with Wi-Fi 7.
From what I've read mesh devices generally can detect when they've got wired backhaul so they can stay in mesh mode for the clean handovers while not relying on it for actually moving data
Due to boring circumstances outside of my control, I have to use WiFi for the most part, so I've got quite some experience with making it run optimally (or rather, as optimally as I managed to, not as optimally as I would like it to).
And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps. And still be on channels with little interference. (DFS helps if you're not near radar; radar detection intentionally kicks you off those channels and drops your connection entirely.) And even then you might have to mess about a lot with positioning, because of reflections and generally multipath propagation.
I'd say it's not worth the headache. I would love to lay down Ethernet cable, even if it was just cabling only suitable for 1 Gbps (for which there's no good reason to, might as well do 10 Gbps).
But yeah, any mesh system worth its salt figures out the topology and absolutely favors wired links over WiFi for the backhaul. Anything else wouldn't make any sense at all; there is basically no situation where you'd prefer an RF channel over a wire, unless the wire is maybe made of wet string.
> And yeah, you pretty much already have to have a visible line of sight to get anything even close to 1 Gbps
If one considers that the higher speeds in 802.11ac and 802.11be require 256QAM modulation or better, this is completely expected (assuming the 5 GHz band of course, which doesn't go through material very well at all). If you've seen a live eyeball chart of a 256QAM or 1024QAM constellation on test equipment for clear-air microwave link purposes, and seen how quickly it can degrade or get fuzzy if there's anything in the way of the link, it becomes more readily apparent. MCS levels 8 and onwards here:
https://en.wikipedia.org/wiki/Wi-Fi_7
"Clean" eyeball example of 256QAM: https://www.everythingrf.com/community/what-is-256-qam-modul...
examples of "fuzzy QAM" in 16QAM; the same principle applies to denser QAM:
https://www.researchgate.net/figure/Typical-eye-diagram-Symb...
Have you thought about using powerline devices? I've successfully used them in places where running my own cable wasn't a possibility, and WiFi wasn't cutting it.
https://www.hp.com/us-en/shop/tech-takes/what-is-a-powerline...
My house is built out of reinforced concrete, so wireless signals reach almost nowhere. I got Ethernet put into the living room and bedroom and put in 2.5 Gbps USB ethernet dongles on powered hubs, so when I plug into my phone/laptop to charge they get wired ethernet automatically.
how many spatial streams are you using (2x2, 3x3, etc) and are you using an 80 or 160 MHz channel?
If you have a set of full capability 802.11be clients you'll see the best performance with a 3x3 AP and 160 MHz channels.
Good to see the subjective adjectives in the RF world are here too. Except they're not the same ordering, as EH is before UH for WiFi but after in RF