I’m curious about something so I’m going to throw this thought experiment out here. For some background, I run a pure IPv6 network and dove into v6 ignoring any v4 baggage, so this is more of a devil’s advocate question than anything I genuinely believe.
On to the question: why should I run a /64 subnet and waste all those addresses, as opposed to running a /96 or even a /112?
- It breaks SLAAC and Android
Let’s assume I don’t care for whatever reason and I’m content with DHCP; maybe Android actually supports DHCP in this alternate universe
- It breaks RFC 3306, a.k.a. unicast-prefix-based multicast groups (sketch at the end of this post)
No applications I care about are impacted by this breakage
- It violates the purity of the spec
I don’t care
What advantages does running a /64 provide over smaller subnets? Especially subnets like a /96, where the address count still far exceeds usage, so filling a subnet remains just as impossible.
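To make the RFC 3306 bullet concrete, here’s a minimal Python sketch of how a unicast-prefix-based multicast group gets built (the prefix and group ID are made up for illustration). The format only has room for 64 prefix bits, which is exactly why anything longer than a /64 breaks it:

```python
import ipaddress

def rfc3306_group(unicast_prefix: str, group_id: int, scope: int = 0xE) -> ipaddress.IPv6Address:
    """Build a unicast-prefix-based multicast address per RFC 3306.

    Layout: ff | flags(0x3) | scope | reserved(0) | plen | 64-bit prefix | 32-bit group ID
    """
    net = ipaddress.IPv6Network(unicast_prefix)
    if net.prefixlen > 64:
        # Only 64 bits of prefix fit in the format -- this is the breakage.
        raise ValueError("RFC 3306 cannot encode prefixes longer than /64")
    addr = (0xFF << 120) | (0x3 << 116) | (scope << 112)  # ff3e:... for global scope
    addr |= net.prefixlen << 96                           # plen field
    addr |= (int(net.network_address) >> 64) << 32        # the subnet prefix itself
    addr |= group_id & 0xFFFFFFFF                         # your group ID
    return ipaddress.IPv6Address(addr)

print(rfc3306_group("2001:db8:beef:cafe::/64", 0x1234))
# ff3e:40:2001:db8:beef:cafe:0:1234
```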
Nibble boundaries and MAC-48. Way, way back in the day, like the 90s, the plan was that your IP would be matched to your MAC address, so no matter which network you were connected to, your last 48 bits would be the same.
But then they needed to start using control bits for various reasons, so a proposal was made to increase the interface identifier to 60 bits, but 60 is a bit of an awkward nibble boundary, so they decided to expand it to 64.
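For anyone who hasn’t seen it, the MAC-to-interface-ID mapping that survived into SLAAC is modified EUI-64: split the 48-bit MAC, wedge ff:fe in the middle, and flip the universal/local bit. A quick Python sketch with a made-up MAC:

```python
def mac_to_eui64_iid(mac: str) -> str:
    """Derive a modified EUI-64 interface ID (the low 64 bits) from a MAC-48."""
    b = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    eui = bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]  # flip U/L bit, insert ff:fe
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

print(mac_to_eui64_iid("00:1a:2b:3c:4d:5e"))  # 21a:2bff:fe3c:4d5e
```

That 48-to-64 expansion is where the fixed 64-bit host half of the address comes from.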
Ultimately, as the network admin, you can run whatever network size you want. The preset prefix sizes are recommended, not mandatory.
Now for the controversial bit.
You are showing some v4 baggage there by trying to conserve address space; it is not needed.
There are so many addresses available that the currently allocated public block will not be exhausted for over 100 years, and we still have 5 more blocks to use.
I’m not looking for this type of answer. I’m aware of why v6 was designed with /64 subnets…I’m also aware we don’t need to conserve addresses; both of these reasons are why I prefixed my question with the devil’s advocate bit. I understand all of this…then I proceed to describe why MAC-based, or more generally SLAAC, addressing doesn’t matter to me because we have DHCP and DHCP works great, so who needs SLAAC? You cannot convince me to use SLAAC; SLAAC is not important to me or my hypothetical use cases.
…also yes, I’m showing v4 baggage…because again…devil’s advocate…this is a thought experiment, not a genuine question. In this, I just think that a /64 is dumb…a /96 is much nicer because it’s still plenty big while not being quite so excessive. Keep in mind, IRL I’m a firm believer in /64 everywhere…I don’t carry v4 baggage…hypothetical me from this question does, and it’s not going away, because 4.3B addresses is still PLENTY when you don’t care about the purity of v6 design.
Standardizing /64 everywhere is great when you want to immediately figure out which part is the “network number” and which part is the “host number”.
The standardization also helps in conserving route table space as the routers don’t have to care about the last 64 bits of IPv6 addresses, because you are routing /64 networks around, not the hosts. (I believe that’s why people do the “reserve /64, assign /127” thing for P2P links.)
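If it helps to see it, the “reserve /64, assign /127” pattern (RFC 6164 is the usual citation) looks like this with Python’s ipaddress module; the prefix here is just documentation space:

```python
import ipaddress

link = ipaddress.IPv6Network("2001:db8:ffff:1::/64")  # reserve the whole /64 for the link
p2p = next(link.subnets(new_prefix=127))              # but configure only a /127 from it
print(p2p, p2p[0], p2p[1])
# 2001:db8:ffff:1::/127 2001:db8:ffff:1:: 2001:db8:ffff:1::1
```

You keep the tidy one-/64-per-link addressing plan, while the configured /127 leaves no room for neighbor cache exhaustion attacks on the link itself.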
Does it conserve route table space? I get what you’re getting at, but if I have 10 subnets it doesn’t really matter from a route table perspective whether they’re /96 or /64. What matters is subnet aggregation, and I’m not sure the size matters for that?
I think someone will still have to route the /96 networks eventually? Aggregation is only helpful for routers located in the upper part of the network hierarchy.
There’s also the problem of service providers “racing to the bottom” if /64 is not standardized, for example some ISPs may choose to delegate /96 instead, or /116, or /120, … you get it. We still have ISPs assigning people /128 in spite of /64 being standardized everywhere.
Yeah, but what I’m getting at is that an upper router routing /96s shouldn’t be impacted. 10 /96s are basically indistinguishable from 10 /64s in terms of memory consumed. If I’m only using 10 subnets, it shouldn’t matter what the size of those subnets is as long as the count stays the same. It’s when you start deaggregating blocks into smaller chunks and consuming more of them than you would otherwise that you start eating table space. I can’t think of a situation where someone would consume more /96s than /64s, given they both hold a basically infinite number of addresses.
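To put that in concrete terms (made-up prefixes, with Python’s ipaddress doing the collapsing): sixteen /64s under a common /60 are one table entry upstream, and sixteen /96s under a /92 aggregate exactly the same way, so it really is the route count that matters, not the prefix length:

```python
import ipaddress

routes64 = list(ipaddress.IPv6Network("2001:db8::/60").subnets(new_prefix=64))
print(len(routes64), list(ipaddress.collapse_addresses(routes64)))
# 16 [IPv6Network('2001:db8::/60')] -- one aggregate upstream

routes96 = list(ipaddress.IPv6Network("2001:db8:0:1::/92").subnets(new_prefix=96))
print(len(routes96), list(ipaddress.collapse_addresses(routes96)))
# 16 [IPv6Network('2001:db8:0:1::/92')] -- same single entry
```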
…you know…that’s a really good point. Honestly, this whole thought started because I saw someone adamantly defend not wanting to use an entire /64 and being annoyed Android didn’t have DHCP, and it got me thinking: if someone genuinely didn’t care about the design goals of v6, are there good reasons to stick to them if DHCP works everywhere? I care about the elegance…but not everyone does. I’ve never seen ISPs assign a /128, although I have heard about it. I have seen single /64 assignments, though, which is only marginally better…but if you stop caring about clean /64 subnets, then it becomes manageable without having to resort to an NDP proxy.
I personally have mixed feelings on Google’s decision with DHCP. On the one hand I understand the frustration as it’s not their place to dictate your network architecture…on the other hand I think it’s admirable because it might be the one thing keeping that part of the v6 design goals alive when some wish it weren’t.
You are right. Although I dislike Google in general, the fact that Android supports only SLAAC is most likely the dominant reason why residential ISPs delegate a /64 at all.
🤔 I hope you’re wrong, but I also doubt you are. I know a lot of people have been making a fuss about Android and DHCP; I do hope Google will stick to their guns on this. I feel like whether they do or not will have a massive impact on the direction v6 goes with subnet sizes in the future, mostly in business environments, which largely haven’t deployed v6 yet.
Weren’t people talking about this from a service provider perspective? Aren’t they talking about carrier routers trying to table huge portions of the Internet?
Even if that’s the case, it doesn’t really change anything. I was asking more from an end user perspective, as I’m hoping we never end up at a point where providers start doing this; however, even if they do, it doesn’t actually change anything in their routing table. Let’s say providers start giving everyone a /80 instead of a larger block: if they have 50 customers, 50 /80s are no worse than 50 /56s. The only time deaggregation is a problem is when the total number of routes increases, but that’s not going to be caused by this, since the point of the argument is that if you don’t use /64s everywhere, then almost any sized block becomes big enough for any sized organization. I really don’t understand why some people hate using a /64 everywhere; it’s not wasteful, it’s the design goal. But that’s why this post exists, to try to understand the technical downsides, and unfortunately so far I’m wishing there were more than “Android stops working” and “your network looks uglier.”
Yeah I don’t get it either.
I take more issue with how v6 is going to work with SMBs (hint: the other post). I am hoping that when my ISP stops denying the existence of v6, maybe they’ll do reasonable allocation or PD.
All ISPs should do PD unless you’ve got some very special setup and they give you something that must be manually configured. Honestly, too many ISPs still lack IPv6 and it’s baffling. I have a friend with Verizon FiOS, and after years of not having it he finally got it earlier this year, I think…only to have it taken away a little while ago. Like, what?
This is outside the scope of your question and old, but I keep seeing something like this as a justification for not caring about conservation the way v4’s limitations forced us to. I have questions, though.
v4 has been around for ~50 years?
Does everyone believe that we’ll replace the protocol before then, so there’s no need to worry about a repeat taking place?
Does anyone think it’s weird that we are repeating previously unforeseen requirements? IBM said there’d never be a need for more than so many KBs of RAM, etc.
Do we think the rate of IP-enabled devices will increase, decrease, or stay the same as we progress through that upcoming 100 years?
I suppose we’ll just migrate to a new protocol version when we finally get that colony on Mars, expand all over the universe and run out of the infinite IP space 😉
We may replace it by then, but when people quote the 100-year thing like I have, we sometimes forget to mention that it refers to the currently allocated 2000::/3 block.
We have several other blocks reserved for future use, each of which will take several hundred years to use up.
If we find a more efficient way of using address space then we can use those methods for the other /3 blocks.
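For anyone who wants the raw numbers behind the “100 years” hand-waving, a quick back-of-the-envelope (this just counts allocations; the time estimate itself is the usual rough guess, not something you can compute):

```python
# The 128-bit space splits into 8 /3 blocks; 2000::/3 is the one
# currently used for global unicast, and several others sit in reserve.
slash3_blocks = 2 ** 3
sites_per_slash3 = 2 ** (48 - 3)  # /48 end-site allocations inside a single /3
print(slash3_blocks, f"{sites_per_slash3:.2e}")  # 8 3.52e+13
```

Roughly 35 trillion /48-sized sites per /3 block, which is why nobody is in a hurry about the reserved ones.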