Latency is the name of the game if you’re gaming. Copper will always give you the fastest ping times compared to the fastest wifi you can buy.
The wifi latency on generic 5 GHz routers is like 5 ms, if not less.
Not even 5 ms. I have a properly set up Wi-Fi at home and you’ll feel no difference in gaming. Wi-Fi only adds like 1-2 ms latency at most.
Unless you have no choice - a good WiFi will not add noticeable latency.
Myself, I’m playing over 5 GHz wifi. I’d say I don’t feel much difference, but I prefer cable any time!
Is the notebook or desktop wifi NIC and antenna important or only the router? Because when I had a shitty laptop a few years back the latency sucked ass, both at home and at my university (where I hope they had good network components but idk)
With wifi, everything is important, even the number of people connected on your channel… not the number of wifi networks on the channel, but the total number of nodes using the same channel. The AP hardware factors in, your wifi card (client) factors in, even drivers and other things can factor in. So can the band (2.4/5/6 GHz), non-wifi traffic, spurious emissions from harmonics of other frequencies, even electrical noise from gadgets and other devices nearby. You can even factor in distance to the AP and cosmic background noise.
On top of that, it’s half duplex, so only one node can successfully transmit at a time. So it interferes with itself.
It’s a complete mess of unknowns and unknowable things, unless you have a very good spectrum analyser to look into it.
IMO, this is what makes WiFi so terrible. There are simply too many factors that can be slowing you down, most of which you can’t see and aren’t obvious.
WiFi 5 latency on a decent router (not the shit your ISP gives you for free) is only 0.6ms. Yes, that’s less than 1ms.
I just tested ping between my weak computers, one of which supports only 100 Mbit ethernet; they’re connected in series via a cheap $2 dumb switch and an ISP-provided router. I got a 0.187 ms average. The same setup, but with one device on 802.11ac, got 8.16 ms with a standard deviation of 11.9, a maximum of 67 ms and a minimum of 1.44 ms.
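For anyone who wants to reproduce that comparison, here’s a minimal sketch of how you’d summarize the samples; the numbers below are made-up stand-ins for the per-packet times you’d get out of something like `ping -c 20 <host>` on each link:

```python
from statistics import mean, stdev

# RTT samples in ms; hypothetical stand-ins for real ping output
wired_ms = [0.18, 0.19, 0.19, 0.18, 0.20]
wifi_ms  = [1.44, 2.8, 5.2, 8.7, 21.0, 67.0]

for name, samples in [("wired", wired_ms), ("wifi", wifi_ms)]:
    print(f"{name}: avg {mean(samples):.2f} ms, stdev {stdev(samples):.2f} ms, "
          f"min {min(samples)} ms, max {max(samples)} ms")
```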
You have a very shitty WiFi over there. I haven’t seen anything over 1ms ever.
I just don’t live on the moon, neighbours have WIFI too.
And?
The bands are crowded, so there’s a lot of interference.
Where? There’s not much interference even in Soviet blocks. What are you talking about?
Right. Like even in the shittiest scenario that’s not a major difference. There’s stuff like interference and the speeds are lower, sure, but 1 gigabit is plenty for non-enterprise situations.
There’s no interference unless you live in a Soviet block.
Maybe…
Your latency on your network might be 0.6 ms, but for most practical use-cases it will be orders of magnitude more. Partly due to the interference and the half-duplex nature of wifi, but also because of CSMA/CA (carrier-sense multiple access with collision avoidance), the algorithm that listens before transmitting to ensure the channel is clear, and waits while it’s busy until it’s clear before transmitting. The actual transit time for each frame is very short; getting to the point where you can actually transmit is the main challenge for wifi.
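To make the listen-before-talk cost concrete, here’s a toy Python model (my own simplification, not real 802.11: fixed contention window, no exponential backoff, no DIFS/SIFS or ACK timing). It still shows the trend: the more stations contending, the longer each frame sits waiting for the channel:

```python
import random
from statistics import mean

def slots_until_success(n_stations, cw=16, collision_penalty=50):
    """Slot times one frame waits before it gets a clean transmission."""
    waited = 0
    while True:
        # every contender draws a random backoff counter
        backoffs = [random.randrange(cw) for _ in range(n_stations)]
        lowest = min(backoffs)
        waited += lowest                 # everyone counts down while the air is idle
        if backoffs.count(lowest) == 1:
            return waited                # a single winner: the frame goes out
        waited += collision_penalty      # a tie means a collision; a frame time is wasted

for n in (1, 2, 5, 10, 20):
    avg = mean(slots_until_success(n) for _ in range(5000))
    print(f"{n:2d} contending stations -> ~{avg:.1f} slot times of waiting")
```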
Serialization time for a 1500-byte frame on gigabit Ethernet is approximately 12 µs, i.e. 0.012 ms (1500 bytes × 8 bits ÷ 1 Gbit/s). So the argument is kind of squashed here. Given that you have a dedicated channel to the switch (no carrier-sense, collision-avoidance or collision-detection algorithm needed with full-duplex ethernet), the frame can be sent immediately, so the total transit time from a computer connected by ethernet to a router or switch is orders of magnitude lower.
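The arithmetic, if you want to play with other frame sizes or link rates:

```python
def serialization_us(frame_bytes, link_bps):
    """Time for a NIC to clock a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / link_bps * 1e6

for rate, name in [(100e6, "100 Mbit/s"), (1e9, "1 Gbit/s"), (10e9, "10 Gbit/s")]:
    print(f"1500-byte frame at {name}: {serialization_us(1500, rate):.1f} µs")
# 1 Gbit/s -> 12.0 µs, the figure above
```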
Here’s the thing - it won’t in real life.
This comment does not make sense to me
Your experience varies massively depending on your RF environment. In my suburban neighborhood, I’m getting a stable 3.4ms to my router. The same hardware when I was in a dense urban environment was around 11ms. I’ve never looked at retry counters, but if I had to guess, I’m getting close to zero right now, but was getting considerably higher in a dense area.
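If you do want to look, on Linux something like this dumps the per-station retry and failure counters (assuming the `iw` tool is installed and your interface is called wlan0; adjust for your setup):

```python
import subprocess

# `iw dev <iface> station dump` prints per-peer stats, including retries
out = subprocess.run(["iw", "dev", "wlan0", "station", "dump"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "tx retries" in line or "tx failed" in line:
        print(line.strip())
```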
How would you get away with an entire 5G BTS without the frequency-regulating agency hunting your ass?
I meant to say 5ghz
Wireless has a lower minimum latency than wired, that’s why trading houses set up relay towers from Chicago to NYC, in order to achieve the lowest possible latency for their trades between the two markets.
Wired gives better stability, due to almost zero interference noise. The primary cause of sucky WiFi speeds/stability is having too many other people’s routers nearby.
No shit?
I mean copper runs at 2/3 the speed of light.
Wireless is pretty much the speed of light.
I thought they used dedicated fiber for their links.
Ehhh… not quite. There’s evidence that copper runs closer to the speed of light (aka c) than fiber does. Light through glass runs at around 2/3 c, making fiber the slowest option.
Wireless technically runs as fast as light; through the atmosphere it’s a tiny bit slower than c, but as close as we can get.
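Back-of-the-envelope numbers for the Chicago-NYC example, assuming a roughly 1,150 km straight-line path (an assumption; real routes, especially buried fiber, are longer, which widens the gap further):

```python
C_KM_PER_MS = 299.792  # speed of light in vacuum, km per millisecond

distance_km = 1150     # assumed straight-line Chicago-NYC distance
for medium, velocity_factor in [("microwave through air", 1.00),
                                ("copper (~0.7 c)", 0.70),
                                ("fiber (~0.67 c)", 0.67)]:
    one_way_ms = distance_km / (C_KM_PER_MS * velocity_factor)
    print(f"{medium:22s} ~{one_way_ms:.2f} ms one way")
```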
There’s also a long-running argument among physicists and electrical-engineering YouTubers about the speed of electricity through a wire, and I don’t fully understand the conclusions; they were articulated quite well by the YouTubers, it just didn’t stick in my brain. The premise is how fast a lightbulb would illuminate if it had one light-second of pure copper (or superconducting) wire between the power source and the bulb, with little to no resistance. It’s interesting but nuanced and complex.
Wifi, being EM waves (same as light), should run the fastest, copper ethernet close behind, and fiber dragging its heels at 2/3 c. However, in practical applications wifi has more to overcome since it’s a shared medium. Copper and fiber have a dedicated medium, so they have no competition in signaling; wifi needs to contend with everything from other wifi networks, spurious emissions from other frequencies, even background cosmic radiation, as well as itself (half duplex). Because of all that, you generally end up with wifi in last place: it has so many protections and checks that it delays itself to ensure its transmission will be received intact. The frames are generally larger and take longer to get started, so all the additional (mostly artificial) slowdowns make it slower.

However, if you use a pair of highly directional antennas on different but otherwise equivalent frequencies for send/receive, cut out a lot of the other factors by designing the system well, then disable most of the protections because they’re not needed by design, it will be faster, at least in terms of latency, than fiber or copper in almost every case.
Since designing a multi-access system that doesn’t need wifi’s protections is borderline impossible, this is limited to very controlled point-to-point systems where both ends are tightly constrained.
So the argument “wifi has a lower minimum latency” is correct, but irrelevant in 99.99% of use-cases. Copper is easier and cheaper than fiber and actually propagates signals faster, but it’s only viable for short runs, up to 100 m in most cases. Fiber, while “slow” at 2/3 c, is better over longer distances since there’s less line loss per foot of glass.
This is a very deep topic and I’m no physicist, but I’ve been endlessly fascinated by this issue for a very long time. The information here is the result of my research over many years. I still consider fiber to be the gold standard of data communication, ethernet to be next-best and overall best for relatively short connections, and wireless to be dead last due to all the challenges it faces that are not easily overcome.
WiFi 5 latency is only two times higher than copper (0.6 ms vs 0.3 ms). WiFi 6 has the same or even lower latency. WiFi 7 is even better. If latency is your game, copper is a poor choice, unless you have spare money for an industrial 100 Gbps setup. Which you don’t.
Please speak in standards, not marketing language: replace “WiFi” plus a number with 802.11 and the letter suffix (WiFi 5 = 802.11ac, WiFi 6 = 802.11ax, WiFi 7 = 802.11be).
One packet drop on TCP creates huge latency at the application level, because the whole stream stalls behind the lost segment (head-of-line blocking) until it’s retransmitted. And not many games use UDP for their transport.
Citation Needed
I have never heard of a latency-sensitive game that doesn’t use UDP for inner loop communication. Sure they use TCP for login and server browser, but the actual communication for gameplay almost always uses UDP.
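The usual shape of that gameplay channel, as a minimal sketch (the port and packet layout here are invented for illustration): each state update carries a sequence number, and the receiver just ignores anything stale, so one lost datagram never stalls the stream the way a lost TCP segment does:

```python
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7777))  # port made up for the example

latest_seq = -1
while True:
    data, addr = sock.recvfrom(1500)
    # hypothetical wire format: 4-byte sequence number + player x, y floats
    seq, x, y = struct.unpack("!Iff", data[:12])
    if seq <= latest_seq:
        continue              # stale or duplicate update: skip it, nothing blocks
    latest_seq = seq
    print(f"player at ({x:.1f}, {y:.1f})")  # apply newest state here
```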
Let’s see… Terraria, Factorio, Minecraft.
Minecraft and Terraria use both TCP and UDP, presumably in the way I described (TCP for initial connection, asset download, etc. and UDP for world state sync). Factorio uses UDP exclusively, and implements reliable transport where needed in software.
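For the curious, “reliable transport in software over UDP” usually looks something like this sketch (my own illustration, not Factorio’s actual protocol): sequence-number each message, hold it until the peer acks it, and retransmit on a timer:

```python
import socket
import struct
import time

class ReliableUdpSender:
    """Toy reliability layer: sequence numbers, acks, timeout retransmit."""

    def __init__(self, sock: socket.socket, peer, timeout: float = 0.2):
        self.sock, self.peer, self.timeout = sock, peer, timeout
        self.next_seq = 0
        self.unacked = {}  # seq -> (payload, time last sent)

    def send(self, payload: bytes) -> None:
        pkt = struct.pack("!I", self.next_seq) + payload
        self.sock.sendto(pkt, self.peer)
        self.unacked[self.next_seq] = (payload, time.monotonic())
        self.next_seq += 1

    def on_ack(self, seq: int) -> None:
        self.unacked.pop(seq, None)  # peer confirmed receipt; stop tracking

    def pump_retransmits(self) -> None:
        # call this regularly from the game loop
        now = time.monotonic()
        for seq, (payload, sent_at) in list(self.unacked.items()):
            if now - sent_at > self.timeout:
                self.sock.sendto(struct.pack("!I", seq) + payload, self.peer)
                self.unacked[seq] = (payload, now)
```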
Unless it’s changed in the past year which I doubt, Minecraft exclusively uses TCP for client/server communication. I’ve been modding the game for years and am pretty familiar with the protocol. I think it’s actually one of the few which don’t use UDP to some capacity.
The original PC Java client uses TCP; every other client, including the C++ PC version, uses UDP.
Ah okay, didn’t know it does things differently, since I’ve never touched it. Makes me wonder why they used UDP there but still haven’t used it in the Java protocol.
Oops, Factorio moved to UDP.
I can’t find any UDP implementation or even a UDP protocol description for Terraria, while there are implementations of the Terraria protocol that use TCP, and documentation for them. Basically no evidence of UDP and a lot of evidence of TCP for gameplay.
Minecraft uses only TCP. Sources: wiki.vg, myself, myself and a friend of mine, and myself again (no link for now, but two Minecraft proxy server implementations).