
This is due to advertising standards: ISPs are required to advertise an "average speed", although how that is actually calculated is nebulous.

A&A, who don't advertise, can just say what the link speeds actually are on the product pages.

Other ISPs could do this too, but having one figure in the advert and another on the product pages would cause confusion, and they might get in trouble if they linked to the product pages from the adverts.


Couldn't they just list link speed and average speed (however that is measured, before or after protocol overhead for example) as two separate lines on the product page?

.uk being the TLD while GB is the ISO 3166-1 alpha-2 code is a quirk of history that comes from .uk being on the internet very early.

I once wrote something that did, as an internal tool.

It was basically an MPLS traceroute tool that used LOC records on RFC1918 loopbacks to plot pretty maps (well, the lines were way too straight on long range links, but ...).
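
The LOC half is the easy part. A rough sketch of it with dnspython (the router names here are hypothetical, and the real tool discovered hops by walking the LSP rather than using a fixed list):

    # Look up DNS LOC records for a list of router loopbacks and collect
    # coordinates to plot. Uses dnspython; hostnames are made up.
    import dns.resolver

    def loc_for(hostname):
        """Return (latitude, longitude) from a host's LOC record, or None."""
        try:
            answer = dns.resolver.resolve(hostname, "LOC")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return None
        rdata = answer[0]
        # dnspython exposes the record's coordinates as float degrees.
        return (rdata.float_latitude, rdata.float_longitude)

    hops = ["lo0.router1.example.net", "lo0.router2.example.net"]
    points = [p for p in (loc_for(h) for h in hops) if p]
    print(points)  # feed these to your favourite map-plotting library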

It was used by marketing and basically nobody else, but it existed!


I've always thought we could put a bit of general purpose TCAM into general purpose computers instead of just routers and switches, and see what people can do with it.

I know (T)CAMs are used inside CPUs, but I am more thinking of the kind of research being done with TCAMs in SSD-like products, so maybe we will get there some day.
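
If it helps make the idea concrete, TCAM semantics are easy to model in software: every entry is a value plus a care-mask, and a lookup returns the highest-priority entry whose cared-about bits match. Hardware evaluates all rows in parallel in one cycle; this toy sketch just scans in order:

    class Tcam:
        def __init__(self):
            # (value, care_mask, payload) tuples, in priority order.
            self.entries = []

        def insert(self, value, care_mask, payload):
            self.entries.append((value, care_mask, payload))

        def lookup(self, key):
            for value, care_mask, payload in self.entries:
                # Mask bits set to 0 are "don't care".
                if (key & care_mask) == (value & care_mask):
                    return payload
            return None

    # Longest-prefix match falls out naturally: most specific entry first.
    tcam = Tcam()
    tcam.insert(0x0A010000, 0xFFFF0000, "via eth1")  # 10.1.0.0/16
    tcam.insert(0x0A000000, 0xFF000000, "via eth0")  # 10.0.0.0/8
    print(tcam.lookup(0x0A010203))  # 10.1.2.3 -> via eth1
    print(tcam.lookup(0x0A020304))  # 10.2.3.4 -> via eth0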


TCAM still uses two bits of binary storage per cell internally; it just ignores one of the four possible values.
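
Concretely, one common scheme encodes each ternary cell in two SRAM bits, leaving one of the four combinations unused (or treated as "never match"); a sketch:

    # One common two-bit encoding of a ternary cell; exact encodings
    # vary between designs.
    CELL_ENCODING = {
        (0, 1): "match 0",
        (1, 0): "match 1",
        (1, 1): "don't care (X)",
        (0, 0): "unused / never match",
    }
    for bits, meaning in CELL_ENCODING.items():
        print(bits, "->", meaning)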


There’s a lot of tech in signaling that doesn’t end up on CPUs and I’ve often wondered why.

Some of it is ending up in power circuitry.


Yes, but they run a bunch of other useful services for internet plumbers too.

This would just be another "general good of the internet" service.

RIPE, for example, run:

Atlas: A cooperative service for internet reachability and measurements. - https://www.ripe.net/analyse/internet-measurements/ripe-atla...

DNSMON: Monitors the root, TLDs and other key internet domains. Does so from many locations, so as to test for anycast issues. - https://dnsmon.ripe.net/

RIPEstat / BGPlay: Tools to debug and examine internet routing and reachability issues (see the example query below). - https://stat.ripe.net/ - https://stat.ripe.net/bgplay

They also volunteer resources to help run other things, like https://www.as112.net/ , which (among other things) sinks all the PTR lookups for RFC1918 space that leak to the internet.
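
RIPEstat in particular is scriptable through a public data API. A sketch of querying it (endpoint and field names are from memory, so check the RIPEstat docs):

    # Query RIPEstat's data API for the routing status of a resource
    # (an ASN, prefix, or IP address). Uses only the standard library.
    import json
    import urllib.request

    def ripestat(endpoint, resource):
        url = f"https://stat.ripe.net/data/{endpoint}/data.json?resource={resource}"
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    status = ripestat("routing-status", "193.0.0.0/21")  # a RIPE NCC prefix
    print(json.dumps(status["data"], indent=2))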


The idea of an IX, or IX peering LAN, is simple in concept. It is a LAN (a flat, layer-2 network) into which multiple ISPs can plug routers.

Like your home LAN might have 192.168.0.1 = router, 192.168.0.2 = laptop, 192.168.0.3 = phone etc, a peering LAN will have things like 195.66.224.21 = HurricaneElectric, 195.66.224.22 = NTLI, 195.66.224.31 = Akamai, 195.66.224.48 = Arelion etc ...

So instead of all these ISPs that want to exchange traffic having to assign ports and run cables to each other in a full mesh (which would quickly get out of control), everyone connects to the "big switch in the middle" with that peering LAN on it, and uses that.
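
To put numbers on "quickly out of control": a full mesh of n networks needs n*(n-1)/2 cross-connects, while an exchange needs one port per member.

    # Full-mesh cross-connects vs IXP ports as membership grows.
    for n in (5, 50, 500):
        full_mesh = n * (n - 1) // 2
        print(f"{n:>3} members: {full_mesh:>6} mesh cross-connects vs {n} IXP ports")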

Back in the day, that might have been an actual single big switch, or a stack of switches. Now IXP infrastructures are much more complex, but the presentation to the end user is usually still a cable (or bundle of cables) that goes into something that looks to them like a "big switch".

There is a LOT more to know about this space (peering vs transit, PNIs, L3 internet exchanges, what Google are doing by withdrawing from IXPs), but I wanted to write a comment that didn't turn into an essay.


You should read https://datatracker.ietf.org/doc/html/rfc1627 for a path not travelled.

Not everyone thought this was a good idea, and I still maintain the alternative path would have led to a better internet than the one we have today.


As the authors themselves note, RFC 1597 was merely formalizing already-widespread common practice. If the private ranges had not been standardized, people would still have created private networks, just using random squatted blocks. I cannot see that being a better outcome.


The optimist in me wants to claim that not assigning any range for local networks would have led to us running out of IPv4 addresses in the late 90s, leading to the rapid adoption of IPv6, along with some minor benefits (merging two private networks would be trivial, and far fewer NATs in the world would mean better IP-based security and P2P connectivity).

The realist in me expects that everyone would have just used one of the ~13 /8 blocks assigned to the DoD.


The realist in me thinks that we'd probably have had earlier adoption of V6 but the net good from that is nil compared to the headaches.

V6 is only good when V4 is exhausted, so it's tautological to call it a benefit of earlier exhaustion of V4, or am I missing something? I'm probably missing something.


I'm guessing the reason they think it would have been better is that right now the headaches come from us being in a weird limbo state, where we're kinda out of IPv4 addresses but not really at the point where everything supports IPv6 out of necessity. If the "kinda" were more definitive, there would have been enough of a forcing factor that everyone made sure to support IPv6, and the headaches would have been figured out by now.


CGNAT is playing a big role. More and more people across the planet are sharing an IPv4 address with dozens or even hundreds of other customers of their ISPs.


Agreed.

Also, fun fact, the Google IPv6 tracker says we're about to reach 50%. Time to throw a party!


> 50%

As global average: some countries are above 50% already. (Mobile devices are probably a big part of that.)


Cloudflare Radar has separate mobile vs desktop IPv6 adoption stats. Globally, mobiles are at 45% IPv6 and desktops at 37%. In the US, mobiles are at 60% vs desktops at 46%.

https://radar.cloudflare.com/explorer?dataSet=http&groupBy=i...


Yup! Least bad thing about smartphones, lol.


Can you please elaborate? How would such a minute change lead to "a better internet"?


I'm not the OP or author, but the argument against private network addresses is that such addresses break the Internet in some fundamental ways. Before I elaborate on the argument, I want to say that I have mixed feelings on the topic myself.

Let's start with a simple assertion: Every computer on the Internet has an Internet address.

If it has an Internet Address, it should be able to send packets to any computer on the Internet, and any other computer on the Internet should be able to send packets to it.

Private networks break this assumption. Now we have machines which can send packets out, but can't receive packets, not without either making firewall rule exceptions or else doing other firewall tricks to try to make it work. Even then, about 10-25% of the time, it doesn't work.

But it goes beyond firewall rules... with IP addresses tied to devices, every ISP would be giving every customer, commercial and residential alike, a block of addresses.

We'd also have seen fast adoption of IPv6 when IPv4 ran out. Instead we seem to be stuck in perpetual limbo.

On team anti-private networking addresses:

- Worse service from ISPs
- IPv4 still in use past when it should have been replaced
- Complex workarounds for getting through firewalls

I'm sure we all know the benefits of private networks, so I don't need to reiterate them.


> But it goes beyond firewall rules

Honestly though... does it, all that much? Even in a world where NAT didn't exist and we all switched to IPv6, we'd still all be behind firewalls, as everyone on an IPv6 home network is today. Port forwarding would just be replaced by firewall exemptions.

Like on a philosophical level, I do wish we had a world where the end-to-end principle still held and all that, but I'm not actually sure what difference it would make, practically speaking. "Every device is reachable" didn't die because of IPv4 exhaustion or NAT, it died because of security, in reality most people don't actually want their devices to be reachable (by anyone).


> I'm sure we all know the benefits of private networks, so I don't need to reiterate it

That is I think the key. Private networks have sufficient benefit that most places will need one.

The computers and devices on our private network will fall into 3 groups: (1) those that should only communicate within our private network, (2) those that sometimes need to initiate communication with something outside our network but should otherwise have no outside contact, and (3) those that need to respond to communication initiated from something outside our network.

We could run our private network on something other than IP, but then dealing with cases #2 and #3 is likely going to be at least as complicated as the current private IP range approach.

We could use IP but not have private ranges. If we have actual assigned addresses that work from the outside for each device we are then going to have to do something at the router/firewall to keep unwanted outside traffic from reaching the #1 and #2 types of devices.

If we use IP but do not have assigned addresses for each device and did not have the private ranges I'd expect most places would just use someone else's assigned addresses, and use router/firewall rules to block them off from the outside. Most places can probably find someone else's IP range that they are sure contains nothing they will ever need to reach so should be safe to use (e.g., North Korea's ranges would probably work for most US companies). That covers #1, but for #2 and #3 we are going to need NAT.

I think nearly everyone would go for IP over using something other than IP. Nobody misses the days when the printer you wanted to buy only spoke AppleTalk and you were using DECnet.

At some point, when we are in the world where IP is what we have on both the internet and our private networks but we do not have IP ranges reserved for private networks, someone will notice that this would be a lot simpler if we did have such ranges. Routers can then default to blocking those ranges and using NAT to allow outgoing connections. Upstream routers can drop those ranges so even if we misconfigure ours it won't cause problems outside. Home routers can default to one of the private ranges so non-tech people trying to set up a simple home network don't have to deal with all this.
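
And that is essentially what RFC 1918 gave us. Python's stdlib knows the three reserved ranges, which makes the default-drop check a one-liner; a sketch:

    # The three RFC 1918 ranges, and the kind of default-drop check a
    # border router applies to them.
    import ipaddress

    RFC1918 = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def should_drop_at_border(addr):
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in RFC1918)

    print(should_drop_at_border("192.168.1.10"))  # True: private, never routable
    print(should_drop_at_border("8.8.8.8"))       # False: globally routable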

If for some reason IANA didn't step in and assign such ranges, my guess is that ISPs would. They would take some range within their allocation, configure their routers to drop traffic using those addresses, and tell customers to use those on their private networks.


> every ISP would be giving every customer a block of addresses, both commercial and residential customers.

or, more likely, you would still receive only a handful of addresses and would have needed to be far more considerate about what you connected to your network, restricting the use of IP significantly. Stuff like IPX and AppleNet would probably then have been more popular. The situation might have been more like what we had with POTS phones: residential houses generally had only one phone number for the whole house, and you just had to share the line between all the family members.


They worked around this with IPv6 by the fact that SLAAC exists and some devices insist on always using it. Your ISP has to give you at least 64 bits of address space or else some phones won't work on your network. And even if they only give you the bare minimum of 64 bits, you can subdivide it further without SLAAC if you know what you're doing.

Furthermore, the use of privacy addresses obfuscates how many devices you have.
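
For the curious, classic SLAAC derives the interface ID from the MAC (EUI-64: split the MAC, insert ff:fe, flip the universal/local bit), which is exactly what privacy addresses (RFC 4941) randomise away. A sketch:

    # Derive a classic SLAAC (EUI-64) address from a prefix and a MAC.
    # Modern OSes usually prefer random privacy addresses instead.
    import ipaddress

    def slaac_address(prefix, mac):
        octets = bytearray(int(b, 16) for b in mac.split(":"))
        octets[0] ^= 0x02  # flip the universal/local bit
        eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
        return ipaddress.ip_network(prefix)[int.from_bytes(eui64, "big")]

    print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
    # -> 2001:db8::211:22ff:fe33:4455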


The phone company would have been happy to sell you more phone lines. I knew people who had some.

But you're right that as dumb as it is, it's likely that ISPs would have charged per "device" (ie per IP address).

Before 1983 in the US, you could only rent a phone, not own one (at least not officially), and the phone company would charge a rental fee based on how many phones you had rented from them. Then, when people could buy their own phones, they still charged you per phone that you had connected! You could lie, but officially you paid per phone.

Like I said, I have mixed feelings about NATs, but you're right that the companies would have taken advantage of customers.


Interestingly, IPv4 is also why we have the "great" ecosystem of IoT devices needing to talk to the cloud: making your phone able to talk to your thermostat directly is too damn complicated...


> Every computer on the Internet has an Internet address

By every computer, did you include every MCU that can run a TCP/IP stack?


Not the GP, but yes.


The root servers themselves generally don't support AXFR, but if you want to AXFR the root zone, you can do so from lax.xfr.dns.icann.org or iad.xfr.dns.icann.org.
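
With dnspython that's a few lines (dns.query.xfr wants an IP address, so resolve the server name first):

    # Pull the root zone over AXFR from ICANN's open transfer service.
    # Equivalent to: dig @lax.xfr.dns.icann.org . AXFR
    import dns.query
    import dns.resolver
    import dns.zone

    server_ip = dns.resolver.resolve("lax.xfr.dns.icann.org", "A")[0].address
    root = dns.zone.from_xfr(dns.query.xfr(server_ip, "."))
    print(f"{len(root.nodes)} names in the root zone")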


There is work coming at the IETF to help with this.

- Draft: DELEG (a new way of doing delegations, replacing the NS/DS records).

- A draft to follow: Using the extensible mechanisms of DELEG to allow you to specify alternative transports for those nameservers (eg: DoH/DoT/DoQ).

This would allow a recursive server to make encrypted connections to everything it talks to (that has those DELEG records and supports encrypted transports) as part of resolution.

Of course, traffic analysis still exists. If you are talking to the nameservers of bigtittygothgirls.com, and the only domains served by those name servers are bigtittygothgirls ...


I live in Scotland, have two former racing greyhounds, and I'm very grateful for a local farmer who has a dog run / playpark with an honesty box we can drop something in to help with upkeep when we give our two a nice run.


Why doesn't the council have community parks?


We don't have leash laws like in the US, so it is common for dogs to just be allowed to run around in most community parks.

The problem with sighthounds is that they will lock on to squirrels, rabbits or other things and, running at 40mph, will be out of sight and lost VERY quickly.

So we don't let ours off lead except in controlled places (like this one).


