802.eleventy what? Why Wi-Fi kind of sucks (arstechnica.co.uk)
159 points by adunk on May 26, 2017 | 52 comments



WiFi doesn't have an easy life either:

- It needs to work on unlicensed spectrum, which means it has to play well with all manner of devices contending for that spectrum (e.g. other WiFi devices, Bluetooth, IEEE 802.15.4). In practice this means it cannot do much beyond CSMA/CA (i.e. the 'listen before talk' thing). CSMA/CA is a terrible contention mechanism for high-density scenarios, and before long much of the airtime is eaten by collisions (see the rough sketch after this list). LTE does not have this problem: it works on licensed spectrum, so an LTE base station can simply divide the time/frequency blocks and allocate them to the various contending devices as it pleases (it owns the spectrum), making almost optimal use of the spectrum available to it. 802.11ax will improve on this a bit (e.g. it will have OFDMA, which reduces the collision domain, and it will let the AP do some coordination via 'trigger' frames).

- WiFi has a lot of baggage: IEEE 802.11ax will be backward compatible with tens of billions of devices, going all the way back to IEEE 802.11b, which came out in 1999.

- Customers don't like spending all that much money on WiFi. This cost pressure means we don't have as many people looking into WiFi as we should (people writing drivers, people debugging problems, radio engineers, investment in test equipment).

- MIMO (introduced in 802.11n), downstream MU-MIMO (introduced in 802.11ac), and upstream MU-MIMO (to be introduced in 802.11ax) are all technically impressive, but also very hard to implement well. (We are now starting to see the benefits, though, particularly with 802.11ac Wave 2 devices.)
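
To make the density point concrete, here is a toy slotted simulation of binary exponential backoff (not the real 802.11 DCF timing; the contention-window values are just the usual 15/1023 defaults). It only shows one thing: the share of transmission slots wasted on collisions grows quickly with the number of saturated stations.

  import random

  CW_MIN, CW_MAX = 15, 1023  # typical 802.11 contention window bounds

  def collision_fraction(n_stations, n_slots=100_000):
      cw = [CW_MIN] * n_stations
      backoff = [random.randint(0, CW_MIN) for _ in range(n_stations)]
      busy = collisions = 0
      for _ in range(n_slots):
          ready = [i for i in range(n_stations) if backoff[i] == 0]
          if ready:                        # someone transmits in this slot
              busy += 1
              if len(ready) > 1:           # more than one sender: collision
                  collisions += 1
              for i in ready:              # winner resets its CW, losers double it
                  cw[i] = CW_MIN if len(ready) == 1 else min(2 * cw[i] + 1, CW_MAX)
                  backoff[i] = random.randint(0, cw[i])
          else:                            # idle slot: everyone counts down
              for i in range(n_stations):
                  backoff[i] -= 1
      return collisions / busy

  for n in (2, 5, 10, 25, 50):
      print(f"{n:3d} stations: ~{100 * collision_fraction(n):.0f}% of transmission slots end in a collision")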

Anyway, I have high hopes for WiFi: well over a billion WiFi chips are sold every year, and it is getting better all the time.


Yeah. If your neighbor has one 802.11b device and your router sees it on its channel, then it has to get out of its way, dragging your whole network into the pits.

If you detect anything older than 802.11n, change channels or start knocking on doors.


That is not true: all newer (than 802.11b) frames are encapsulated in an 802.11b frame, precisely to stay compatible with it. At IEEE there are always discussions about dropping compatibility with 802.11b, but usually someone reminds the group that in practice it is not a real problem.

What is a problem is that the preamble time is wasted, but it is not hard to imagine that even this preamble could be reused by newer amendments after some homework.


It's extremely annoying, as some devices /only/ support 802.11b. They're quite legacy at this point, but some people still use older Nintendo handhelds that infamously only work on 802.11b with /very/ 'compatibility-minded' options.

IMO the entire 2.4 GHz public spectrum block should be treated as legacy support, while far more spectrum should go to limited-range use. (Also, building walls should have filter meshes that absorb frequencies not meant for mobile-to-cell-tower or GPS use.)


In theory that is not necessary: you can transmit information even if the spectrum is completely saturated, by using Wi-Fi backscatter as in: http://passivewifi.cs.washington.edu/

Your neighbour's 802.11b device will never see that another Wi-Fi network is piggybacking on it and going 100 times faster. There is not even a need to use the cumbersome CSMA/CA.

Admittedly it is not an existing amendment, but if you are interested I would be happy to help someone present a few ideas at IEEE 802.11.


> If you detect anything older than 802.11n, change channels or start knocking on doors.

How do you expect the "knocking on doors" part to go?

"Excuse me, something using an old wifi standard in your home is stomping on my speeds. Could you either turn it off or replace it?"


I suspect that a minority of responses will be "Get orf moi laaaand!"


Well, that's not really true.

But if you did want to force someone to upgrade, you could always jam them... pretty easy to selectively jam a radio. Or crack their WEP (most 11b)... So many bad things.


All the way back to the original 802.11 DSSS in 1997 (only 1 and 2 Mbps).


On your first bullet point you left out the most ubiquitous 2.4 GHz device of all: the microwave oven. Consumers don't even realize that microwaving popcorn can ruin their Netflix streaming (depending on the oven's age and shielding).


I dropped out of the 802.11 business a few years ago, but it seems I am still relevant; after all, the article is about 802.11ac, not about 802.11ax. On the technical side I agree with most of what the author says, but it is only a part of the larger picture.

- First, I did my own tests of 802.11ac in 2014, and the manufacturers were correct in their claims at that time. You have to understand that the best speed is achieved in ideal radio conditions, and you are simply never in ideal conditions; most of the time you are far from the ideal case.

- Second, 802.11 sucks, but not in raw speed: the MAC layer of most chip implementations is often ultra-simplified, and the outcome is that it is difficult to get authenticated. This is strange, as the Wi-Fi chip is most often a little computer and the MAC is implemented in software.

- Third, there are unreasonable economic expectations by users as well as the article's author: Wait you want gigabit speeds, ultra-reliability in challenging radio conditions, and that at a tenth of the cost of a 3G mobile radio?

- Fourth, your phone has a harder time coping with that throughput than the Wi-Fi chip does. Android, and Linux in general, have many internal buffers because there are layers in charge of different features. The usable throughput is the raw radio throughput divided by the number of buffers. There are research OSes which pass pointers around instead of copying between buffers, but Linux and Windows copy.


> there are unreasonable economic expectations by users as well as the article's author: Wait you want gigabit speeds, ultra-reliability in challenging radio conditions, and that at a tenth of the cost of a 3G mobile radio?

Honestly, the big reason mobile radios are more reliable is that they're allowed to transmit at something like 20x the power of devices in the ISM band. That's not a cost thing; it's a regulatory issue stemming from (some rather outdated) concerns about interference in the shared environment.

I mean, it's true that wifi is sort of a mess. Marketing of the standards has gotten way ahead of real-world hardware capability (frankly MIMO as it stands is basically voodoo snake oil that provides no consumer benefit whatsoever). But it's no less complicated than all the junk going on in LTE either. Everything's a mess.

Wifi continues to have the extraordinary advantages of being privately deployable, performant, pervasive, and standard (i.e. ethernet-framed, so everything Just Works the way everyone expects). It's not going anywhere.


> frankly MIMO as it stands is basically voodoo snake oil that provides no consumer benefit whatsoever.

MIMO works fine for me. The jump from 802.11g to 802.11n with two or more spatial streams was a significant bump in speed. And yes, I mean speed as in TCP, not PHY rate.


> Honestly the big reason mobile radios are more reliable is that they're allowed to transmit at like 20x the power of devices in the ISM band.

Yes and no. You can always go faster with more TX power, but if your neighbor can do the same then you get interference.

I can see 20 networks around my house. If everyone 20x'd their power, there would probably be thousands competing on a few narrow bands.


> frankly MIMO as it stands is basically voodoo snake oil that provides no consumer benefit whatsoever

Curious why you say this. In the last few years, I've seen a significant increase in PHY data rate when using multiple spatial streams.
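
For what it's worth, the gain from extra streams falls straight out of the PHY rate formula: data subcarriers x bits per subcarrier x coding rate / symbol time, multiplied by the number of spatial streams. A back-of-the-envelope sketch with standard 802.11ac numbers (short guard interval, 256-QAM rate 5/6, i.e. VHT MCS 9); real TCP throughput is of course well below these:

  DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}  # per channel width (MHz)
  SYMBOL_US_SHORT_GI = 3.6                                  # 3.2 us symbol + 0.4 us guard

  def vht_phy_rate_mbps(streams, width_mhz, bits_per_sym=8, coding=5/6):
      bits_per_ofdm_symbol = DATA_SUBCARRIERS[width_mhz] * bits_per_sym * coding
      return streams * bits_per_ofdm_symbol / SYMBOL_US_SHORT_GI  # Mbit/s

  for streams in (1, 2, 3, 4):
      print(streams, "stream(s), 80 MHz:", round(vht_phy_rate_mbps(streams, 80)), "Mbps")
  # -> 433, 867, 1300, 1733 Mbps: the familiar numbers on router boxes

Each extra stream adds the full per-stream rate, which is why a 2x2 client on a good link roughly doubles its PHY rate over a 1x1 one.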


I bought the snake oil in the form of a high-end Ubiquiti WAP. Waiting for my company to issue me a new touch-bar MBP so I can test it out.


Yeah, it's not snake oil. There are only a couple reasons it wouldn't work: you're in an unusual and unfortunate indoor environment with insufficient reflectors, or the modem designers were incompetent. Barring those things, you'll get a faster connection.


It is working great!

The snake-oil comment was about the MIMO functionality, which I can only really test via my iPhone 7 at the moment as a 2x2 MIMO client against the 4x4 MIMO AP (UAP-AC-HD). Once I get the MacBook Pro I'll have a 3x3 MIMO client.

I bought it primarily because of this animation explaining the benefits of MIMO: https://unifi-hd.ubnt.com/

The marketing worked on me!


The Ubiquiti gear is, in general, great. It's high-end consumer gear with enterprise-y features at mid-range consumer pricing.

There's no magic bullet, and buying a bigger more expensive AP is, on its own, likely to make no real difference to any wifi benchmarks you might throw at it.

However, what you do get is the ability to easily place several APs in your house, know they will work together seamlessly, as you would expect from enterprise-y gear, and end up with a more reliable and consistent wifi network.

(The article goes into detail why raw speed won't be gained by buying the bigger AP, so I won't repeat it here...)


I completely agree.


  Third, there are unreasonable economic expectations by
  users as well as the article's author: Wait you want
  gigabit speeds, ultra-reliability in challenging radio
  conditions, and that at a tenth of the cost of a 3G
  mobile radio?
How are customers supposed to know what "economically reasonable" expectations are if suppliers are flat out making claims they can't deliver?

Imagine you're planning a trip to City X: you look online, and four-star hotels with brands you've heard of quote $75 a night. If you turn up and it's not a four-star hotel, is it your fault because you had "unreasonable economic expectations"? Should you somehow have known that the going rate for a four-star hotel room was $500 a night, when half a dozen reputable suppliers were offering the same thing at a fraction of the cost? Seems to me it's not your fault - you've been a victim of fraud.


Don't think I've ever seen packaging that advertised that a router works well in a crowded environment.


At least on the outside of the packaging, I don't think the vast majority say anything about spectrum over-allocation.

Of course, many issues like that could be solved if APs shipped with their transmit power set low rather than blasted to the FCC limits. But I don't think we'll see that change any time soon.


Unknown unknowns. Does the average consumer even understand that contention is an issue?

All the average person sees is the number of bars in their wifi connection UI, which AFAIK doesn't take contention into account, only signal strength.


WiFi bars should be color-coded for contention: green = the RF is all mine, red = get the neighbors to turn down their transmit power.
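
A toy sketch of that idea, with made-up thresholds; a real implementation could pull channel utilization from something like the QBSS Load element that many APs already advertise in their beacons:

  def wifi_indicator(rssi_dbm, channel_busy):
      # rssi_dbm: received signal strength; channel_busy: 0.0-1.0 fraction of airtime in use
      bars = 1 + sum(rssi_dbm > t for t in (-80, -70, -60))  # 1..4 bars from signal alone
      if channel_busy < 0.2:
          color = "green"    # the RF is (mostly) all yours
      elif channel_busy < 0.5:
          color = "yellow"
      else:
          color = "red"      # heavy contention: time to talk to the neighbours
      return bars, color

  print(wifi_indicator(-55, 0.1))  # (4, 'green')
  print(wifi_indicator(-55, 0.7))  # (4, 'red') -- strong signal, but congested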


And the hotel didn't advertise that they weren't in the ghetto.


> The usable throughput is the raw radio throughput divided by the number of buffers.

This is not true. Buffers can under some circumstances increase latency, or cause a TCP connection to take longer to get to its steady-state speed, but they don't hurt steady-state throughput unless something else is very wrong.
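
A quick back-of-the-envelope sketch of the distinction (the rates and queue depths are made-up numbers): chaining buffered stages adds worst-case queueing delay, but the steady-state throughput is set by the slowest stage, not divided by the number of buffers.

  def pipeline(stage_rates_pps, queue_depth_pkts):
      throughput = min(stage_rates_pps)                    # the bottleneck sets the rate
      # a full queue in front of each stage adds depth/rate of queueing delay
      latency_s = sum(queue_depth_pkts / r for r in stage_rates_pps)
      return throughput, latency_s

  for n_stages in (1, 3, 6):
      rates = [100_000] * n_stages                         # every stage does 100k pkts/s
      tput, lat = pipeline(rates, queue_depth_pkts=1000)
      print(f"{n_stages} buffered stages: {tput:,} pkts/s, worst-case delay ~{lat*1000:.0f} ms")
  # throughput stays at 100,000 pkts/s regardless of stage count; only the delay grows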


You are talking about yet another issue, which is real and related to the way TCP detects congestion, but it is not specific to Wi-Fi. There are also interoperability problems between TCP/IP and the Wi-Fi MAC layer, some of which can trigger the congestion algorithms, but as the aggregated MPDUs (A-MPDUs) get larger and larger this should not be a problem nowadays. Indeed it is a bit strange to have A-MPDUs that are 64 KB large and IP packets that are 1.5 KB large.

What I discussed is the way the buffers sit in series inside the OS; this has been known for a long time. A modern research operating system's throughput can be 5 to 9 times that of Linux: https://www.usenix.org/node/186147


> Wait you want gigabit speeds, ultra-reliability in challenging radio conditions, and that at a tenth of the cost of a 3G mobile radio?

I would be happy with any kind of reliability for a mobile chipset in reasonable radio conditions. Too often, I have to turn off WiFi to get basic connectivity happening. 4G/LTE connections are way better. It would be nice if Apple detected a bad WiFi connection and fell back to cell, but it seems they haven't figured out that trick yet.


> It would be nice if Apple detected a bad WiFi connection and fell back to cell, but it seems they haven't figured out that trick yet.

Apple introduced this very feature in iOS 9 https://support.apple.com/en-us/HT205296

I've never really had the time or interest to investigate how well it's implemented, admittedly. It caused some consternation among those with crap data plans at the time, who were understandably upset when the iOS 9 update enabled the feature and led to unexpected bills.


I had that problem in my home. If I wanted a fast and reliable connection, I had to turn off Wi-Fi and fall back to LTE.

I recently swapped out the piece-of-junk router from Comcast with a decent router, and it made a tremendous difference. Doesn't solve the issue when using other people's Wi-Fi networks though.


Have you tried using the Wi-Fi Assist feature?

https://support.apple.com/en-us/HT205296


The problem isn't the fallback to cellular but the function it uses to detect whether the WiFi is actually working. It works if there's a gross network failure but fails to detect networks which have packet loss, captive portal pages which cannot be loaded, etc. With the exception of captive portal pages, this isn't specific to the WiFi implementation either – the cellular stack is just as broken.

As far as I can tell, this comes down to two problems: the most obvious technical challenge is the difficulty of detecting soft failures rather than hard failures. What they need to implement is a hard timeout which resets the connection state if the remote end fails to respond correctly within a set interval. I encounter this regularly commuting on the subway or taking underground tunnels between buildings; toggling airplane mode is the only way to get it to accept that the base station (WiFi or cellular) it was talking to 1500 feet back is never going to start responding.
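
A minimal sketch of that hard-timeout idea, assuming a placeholder probe host and a hypothetical reset_link() hook (the real thing would live in the OS network stack, not an app):

  import socket, time

  def link_is_alive(host="captive.apple.com", port=80, timeout=2.0):
      try:
          with socket.create_connection((host, port), timeout=timeout):
              return True
      except OSError:
          return False

  def watchdog(reset_link, deadline_s=15, probe_interval_s=3):
      last_ok = time.monotonic()
      while True:
          if link_is_alive():
              last_ok = time.monotonic()
          elif time.monotonic() - last_ok > deadline_s:
              reset_link()             # e.g. drop the association and rescan
              last_ok = time.monotonic()
          time.sleep(probe_interval_s)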

The social problem appears to be that nobody considers this a keynote demo feature and so it hasn't advanced beyond iOS 1.0 despite years of bug reports.


Even an app which you could program yourself would be excellent. It seems the system places equal faith in the reliability of all wifi networks, in terms of sticking with them. I'd like to give my home wifi a good reputation, but any new wifi should start with a low reputation. And for anything with a captive portal, you really want your phone to check every few minutes, via 3G, whether the connection is being filtered.
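
The filtering check itself is straightforward; a rough sketch along the lines of what Android's connectivity check does (fetch a URL whose clean response is a bare 204, and treat anything else, such as a redirect to a login page or injected HTML, as a captive/filtered connection):

  import urllib.request

  CHECK_URL = "http://connectivitycheck.gstatic.com/generate_204"  # or any endpoint you trust

  def connection_is_clean(url=CHECK_URL, timeout=3.0):
      try:
          with urllib.request.urlopen(url, timeout=timeout) as resp:
              # a clean connection returns an empty 204; a captive portal
              # typically rewrites this into a 200 login page or a redirect
              return resp.status == 204
      except OSError:
          return False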


I was unaware, but went and checked to see if it was enabled already. It was. So it doesn't work.


I really want to use Plume... but I don't want my WiFi network to depend on "the cloud". I don't see any reason they should need to upload data about my private internal network to their servers. It means I can't trust them. If they ever went out of business, got acquired, or otherwise shut the service down I'd be left with a bunch of bricks too.

Does anything else do the same kind of mesh dynamic frequency allocation, but without requiring any kind of cloud service?



I used to work on a 4x4 MU-MIMO chip. The way we tested was to connect coax cables from one board to another. You could get about 800 MB/s out of the link that way. At home, I measured 100 MB/s between two boxes in the same room and maybe 15 MB/s to a laptop 3 rooms away. The only way to get something better is to cheat on TX power levels. No amount of signal processing can defeat attenuation through a couple of walls.


What a weird article. My MBP gets 600 Mbps in my living room where my AirPort Extreme is, and around 300 Mbps in the bedroom above it. Both are like four years old. Newer APs are getting to the point where the GigE uplink port is the bottleneck, and consumer 10g hardware is non-existent. What exactly "sucks" about it?


I feel like people who write these articles live in houses with two desktop PCs, four laptops, six smartphones, and 15 IoT devices all connected all the time. That sort of setup really does cause problems.

The rest of us live in a world with a couple of computers and a couple of phones and maybe a game console or TV connected and everything is fine.


It could also just be a densely settled area. When I lived in the Mission Hill neighborhood of Boston, I purchased a brand-new top-of-the-line router for my parents in the suburbs, but did all the configuration at my apartment. When testing in my apartment, 2.4 GHz couldn't move more than 5 Mbps. At my parents' house, it had no trouble pushing 120+ Mbps. From my apartment, something like 40 2.4 GHz wireless networks were visible.


If wifi sucks then how do you describe Bluetooth?


Bluetooth (and BLE) optimizes for a different set of constraints: low cost, low power. It performs terribly if you have thousands of devices in a small room (basically any conference or convention).


Or if you make the mistake of changing all of your light bulbs to Bluetooth LED light bulbs. Have to fight to turn on the damn lights all the time.


My Apple Watch battery drains in hours when I forget to place it in Airplane mode when I am at a convention. Soooo many Bluetooth devices in one place makes it very unhappy.


I have never had a Bluetooth headset that doesn't sometimes cut out when my phone is in my pocket.


Really really sucks


I'm curious about "AX" coming up. Uses the 2.4/5 GHz bands, just more channels apparently.

"AD" i think is 60ghz, which doesn't penetrate walls too well in comparison.

Who knows, maybe we'll all use special wifi wall paint in the future, just to make the entire inside of the house or business an antenna.


I'm curious: what does mmWave mean for WiFi? How, ideally for consumers, should we regulate it? And could we achieve, maybe at the lower end of mmWave, an unlicensed alternative that competes well with LTE in urban spaces?


Millimetre wave Wi-Fi (60 GHz) does not penetrate walls and in practice needs line of sight and beam-forming antennas. The scenarios it is good for are things like docking stations, or connecting mostly stationary devices within line-of-sight to each other in the same room. Definitely not suitable for anything like LTE scenarios.


So how does one set up an LTE network in their home? :)


If you're handy with software defined radios: https://github.com/Microsoft/OTP4LTE-U



