sekh60's Hacker News comments

I'm curious did you have an OEM license or a retail license? OEM licenses die with the mobo.

OEM licenses are for the computer, not the motherboard. The online activation historically hasn't worked if you change motherboard, but the phone line folks would always activate it for you if you explained that it was the same computer with a different motherboard.

TIL. I always heard that their licensing people tended to uphold the motherboard line.

i bought a builder license from newegg in 2017. unfortunately i was not diligent about saving the product key. this was actually the third time i had been in this scenario after changing hardware. no idea why it wouldn’t work this time around.

How is it not a routing rule with ipv6? Firewalls and routers typically support dynamic prefixes (even VyOS, pfSense, and OPNsense do).

How do I tell my phone that I want to send traffic to server A via isp1 and server B via isp2

On your router?

edit Less flippantly, what are you wanting to base the routing rule on? What's your ipv4 routing rule?
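To make that concrete: if the rule is destination-based, it can live on the router rather than the phone. A minimal Linux policy-routing sketch, assuming documentation prefixes and hypothetical interface/gateway names (none of these values come from the thread):

```shell
# Two routing tables, one default route per uplink (gateways hypothetical)
ip -6 route add default via fe80::1 dev wan1 table 101
ip -6 route add default via fe80::2 dev wan2 table 102

# Destination-based rules: server A's prefix via isp1, server B's via isp2
ip -6 rule add to 2001:db8:a::/48 table 101
ip -6 rule add to 2001:db8:b::/48 table 102
```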

DSCP is allowed in ipv6.

https://www.juniper.net/documentation/us/en/software/junos/c...


Without nat, my understanding is the right way in v6 is to issue addresses from every network, then advertise to each end device which address and gateway it should use for which routes, and hope every client implements RFC 4191 in the right way.

There's a few options I'm aware of.

The "proper" way would be to get your own ASN and use BGP to route the traffic.

If you're wanting to use a secondary WAN link as a backup for when the other goes down you could have the backup link's LAN have a lower priority. (So I guess hope everything implements RFC 4191 like you said).

You can use NAT66/NPTv6 if you want (though it's icky I guess).

How are you doing it currently?
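For the backup-link case, an RFC 4191 sketch with radvd (interface name and prefix are placeholders): the backup router advertises its default route with low preference so clients only fall back to it.

```
# /etc/radvd.conf on the backup router
interface lan0 {
    AdvSendAdvert on;
    AdvDefaultPreference low;   # RFC 4191: clients prefer the primary router
    prefix 2001:db8:2::/64 {
    };
};
```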


NAT66 is a thing.

The amount of ignorance in these ipv6 posts is astounding (seems to be one every two months). It isn't hard at all. I'm just a homelabber and I have a dual-stack setup for WAN access (an HE tunnel is set up on the router since Bell [my isp] still doesn't give ipv6 addresses/prefixes to non-mobile users), but my OpenStack and Ceph clusters are all ipv6 only, and it's easy peasy. Plus subnetting is a heck of a lot less annoying than with ipv4, not that that was difficult either.

“it’s easy peasy” says guy who demonstrably already knows and has time to learn a bunch of shit 99.9% of people don’t have the background or inclination to.

People like you talking about IPv6 have the same vibe as someone bewildered by the fact that 99.9% of people can’t explain even the most basic equation of differential or integral calculus. That bewilderment is ignorance.


These people apparently had the time and inclination to learn a bunch of shit about IPv4, though.

"Easy" is meant in that context. The people acting like the IPv4 version is easy.

So your second paragraph doesn't fit the situation at all.


"The shit about IPv4" was easy to learn and well documented and supported.

"The shit about IPv6" is a mess of approaches that even the biggest fanboys can't agree on and are even less available on equipment used by people in prod.

IPv6 has failed to achieve wide adoption in three decades. Calling it "easy" is outright denying reality and shows the obliviousness of the people pushing it, who fail to realize where the issues are.


Could you share a list of IPv6 issues that IPv4 does not exhibit? Something that becomes materially harder with IPv6? E.g., "IPv6 addresses are long and unwieldy, hard to write down or remember". What else?

Traffic shaping in v6 is harder than v4. At least it was for me, because NDP messages were going into the shaping queue but then getting lost, since the queue only had a 128-bit address field, and 128 bits isn't actually enough for link-local addresses (you also need the interface index to disambiguate them). When the traffic shaping allowed traffic immediately, the NDP traffic would be sent, but if it needed to be queued, the adapter index would get lost (or something) and the packets disappeared. So I'd get little bursts of v6 until NDP entries timed out, and small queues meant a long time before it would work again.

Not an issue in ipv4, because ARP isn't IPv4, so IP traffic shaping ignores it automatically.
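A sketch of the usual mitigation on Linux, assuming an HTB shaper (device name, rates, and class IDs are made up for illustration): classify ICMPv6, which NDP rides on, into a class that is never starved, so neighbor discovery never waits behind shaped traffic.

```shell
# Root HTB shaper; unclassified traffic goes to the shaped class 1:20
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 100mbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 20mbit ceil 100mbit

# ICMPv6 (IPv6 next header 58) carries NDP; steer it to the unshaped class
tc filter add dev eth0 parent 1: protocol ipv6 prio 1 \
    u32 match ip6 protocol 58 0xff flowid 1:10
```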


Software support is a big one. I ran pfSense. It did not support changing IPv6 prefixes. It still barely does. So something as simple as having reliable IPv6 connectivity and firewall rules with pfSense was impossible just a few years ago for me.

Android doesn't support DHCPv6, so I can't tell it my preferred NTP server, and Android silently ignores your local DNS server if it is advertised with an IPv4 address and the Android device got an IPv6 address.

Without DHCPv6, dynamic DNS is required for all servers. Even a 56-bit prefix is too much to remember, especially when it changes every week. So then you need to install and configure a dynamic DNS client on every server in your network.
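For the dynamic DNS part, one hedged sketch using RFC 2136 updates (the interface, zone, hostname, and key path are all placeholders, and it assumes a DNS server that accepts signed updates):

```shell
# Grab this host's current global IPv6 address (interface name hypothetical)
ADDR=$(ip -6 addr show dev eth0 scope global \
    | awk '/inet6/ {split($2, a, "/"); print a[1]; exit}')

# Replace the AAAA record via an authenticated RFC 2136 dynamic update
nsupdate -k /etc/ddns/host1.key <<EOF
server ns1.example.com
zone example.com
update delete host1.example.com AAAA
update add host1.example.com 300 AAAA ${ADDR}
send
EOF
```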


"I already know enough to be productive, can the rest of the world please freeze and stop changing?"

This is not even that unreasonable. Sadly, the number of IP devices in the world by now far exceeds the IPv4 address space, and other folks want to do something about that. They hope the world won't freeze but would sort of progress.


Network engineering is a profession requiring specific education. At a high level it’s not different from calculus. You learn certain things and then you learn how to apply them in real-life situations.

It’s not hard for people who get an appropriate education and put some effort into it. Your lack of education is not my ignorance.


Dude.

The difficulty of setting IPv6 up at your house vs. the needs of a multi-homed, geographically diverse enterprise couldn't be more dissimilar.

I'd lay off the judgment a bit.


I'd gladly listen about the difficulties of setting up enterprise networks! No irony; listening to experts is always enlightening.

BTW a homelab often tries to imitate more complex setups, in order to be a learning experience. Can these difficulties be modelled there?


the company where i work has deployments across the world with a few hundred thousand hardware hosts (in datacenters), vms and containers, plus deployments in a few clouds. also a bunch of random hardware from a multitude of vendors. multiple lines linking datacenters and clouds, and some lines to more specific service providers that we use.

all of it ipv4 based. ipv6 maybe in the distant future, somewhere on the edge, in case our clients demand it.

inside our network - probably not going to happen


I find this completely fine. I don't see much (if any) upside in migrating a large existing network to anything new at all, as long as the currently deployed IPv4 is an adequate solution inside it (and it obviously is).

Public-interfacing parts can (and should) support IPv6, but I don't see much trouble exposing your public HTTP servers (and maybe mail servers) using IPv6, because most likely your hosting / cloud providers do 99.9% of it already, out of the box (unless it's AWS, haha), and the rare remaining cases, like, I don't know, a custom VPN gateway, are not such a big deal to handle.


vast majority of our stuff is self hosted. http servers in a way are the least important way for our clients to work with us.

the amount of work to support ipv6 on the edge would be very big, and none of our clients have asked for it as far as i know.

the only time we discussed it, it's when we were getting fedramp certification. because of this https://www.gsa.gov/directives-library/internet-protocol-ver...


I ran the network team at an organization with hundreds of thousands of hardware hosts in tens-of-megawatts data centers, millions of VMs and containers, links between data centers, and links to ISPs and IXes. We ran out of RFC 1918 addresses around 2011-2012 and went IPv6-only. IPv4 is delivered as a service to nodes requiring it via an overlay network. We intentionally simplified the network design by doing so.

This is neither hard nor expensive.


different environments. for us at this point of time it will be expensive without added benefit.

I should have been gentler and less arrogant, yes. Sincerely though, please explain how ipv6 is in any way more difficult than a properly set up ipv4 enterprise. What tools are not available?

War crime laws only apply to poorer nations sadly


Huh? Lebanon is not being held to war crime laws, and is the poorer nation. They bombed Northern Israel for over 2 years, including a soccer field full of children that weren't their targets but are very much dead.

If anything, it's the opposite.


Consider DuPont for all your chemical weapon ingredients needs!


What about OpenStack, or even CloudStack?


I think the main selling point for SMEs (with a small IT team) is that Proxmox is very easy to set up (download iso, install, ready to go). CloudStack seems to require a lot of work just to get it running: https://docs.cloudstack.apache.org/en/latest/quickinstallati...

Maybe I'm wrong - but where I am from, companies with less than 500 employees are like 95% of the workforce of the country. That's big enough for a small cluster (in-house/colocation), but too small for something bigger.


Yeah. The keys here are 'easy' and 'I can play with it at home first'. Let's be honest, being able to throw together a bunch of old dead boxes and put Proxmox on them in a weekend is a game changer for the learning curve.


The main reason I never tried OpenStack was that the official requirements were more than I had in my home VM host, and I couldn't figure out if the hardware requirements were real or suggested.

Proxmox has very little overhead. I've since moved to Incus. There are some really decent options out there, although Incus still has some gaps in the functionality Proxmox fills out of the box.


PLEASE DON'T DOWN VOTE ME TO HELL THIS IS A DISCLAIMER I AM JUST SHARING WHAT I'VE READ I AM NOT CLAIMING THEM AS FACTS.

...ahem...

When I was researching this a few years ago I read some really long in-depth scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together python scripts that falls apart very quickly when you get off the happy path.

OTOH, opinions on Proxmox were very measured.


> When I was researching this a few years ago I read some really long in-depth scathing posts about OpenStack. One of them explicitly called it a childish set of glued-together python scripts that falls apart very quickly when you get off the happy path.

And according to every ex-Amazoner I've met, the core of AWS is a bunch of Perl scripts glued together.


It doesn't matter when there's an entire Amazon staff keeping it running.


I think you know as well as I do that it very much does matter. Even if you have an army of engineers around to fix things when they break, things still break.


I think the point is that for Amazon it's their own code and they pay full time staff to be familiar with the codebase, make improvements, and fix bugs. OpenStack is a product. The people deploying it are expected to be knowledgeable about it as users / "system integrators" but not developers. So when the abstraction leaks, and for OpenStack the pipe has all but burst, it becomes a mess. It's not expected that they'll be digging around in the internals and have 5 other projects to work on.


That explains a lot


Yeah, I think what makes solutions like Proxmox better is that there's no reason to try to copy Amazon's public cloud on your own cloud.

I find that the main paradigms are:

1. Run something in a VM

2. Run something in a container (docker compose on Portainer or something similar)

3. Run a Kubernetes cluster.

Then if you need something that Amazon offers, you don't implement it like OpenStack does; you just run that specific service using options #1-3.


I think the utility really comes from getting an accessible control plane over your company's data centers/server rack.

Kubernetes clusters don't really solve the storage plane issue, or provide a unified dashboard for users to interact with everything easily.

Something like Harvester is pretty close IMO to being a Kubernetes-based alternative to Proxmox/OpenStack.


The reason there were so many commercial distributions of open stack was because setting it up reliably end to end was nearly impossible for most mere mortals.

Companies like Metacloud or Mirantis made a ton of money with little more than OpenStack installers and a good out-of-the-box default config, with some solid monitoring and management tooling.


This matches my personal experience having worked with OpenStack.


> One of them explicitly called it a childish set of glued together python scripts that fall apart very quickly when you get off the happy path.

A 'childish set of scripts' that manages (as of 2020) a few hundred thousand cores, 7,700 hypervisors, and 54,000 VMs at CERN:

* https://superuser.openinfra.org/articles/cern-openstack-upda...

The Proxmox folks themselves know (as of 2023) of Proxmox clusters as large as 51 nodes:

* https://forum.proxmox.com/threads/the-maximum-number-of-node...

So what scale do you need?


CERN is the biggest scientific facility in the world, with a huge IT group and their own IXP. Most places are not like that.

Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.


> Heck, I work at a much smaller particle accelerator (https://ifmif.org) and have met the CERN guys, and they were the first to say that for our needs, OpenStack is absolutely overkill.

I currently work in AI/ML HPC, and we use Proxmox for our non-compute infrastructure (LDAP, SMTP, SSH jump boxes). I used to work in cancer research with HPC, and we used OpenStack across several dozen hypervisors to run a lot of infra/service instances/VMs.

I think there are two things that determine which system should be looked at first: scale and (multi-)tenancy. Beyond one (maybe two) dozen hypervisors, I could really see scaling/management issues with Proxmox; I personally wouldn't want to do it (though I'm sure many have). Next, if you have a number of internal groups that need allocated/limited resource assignments, then OpenStack tenants are a good way to do this (especially if there are chargebacks, or just general tracking/accounting).


vast vast (vaaast) majority of businesses are in that 1-100 nodes range.


> vast vast (vaaast) majority of businesses are in that 1-100 nodes range.

Yes, but even the Proxmox folks themselves say the most they've seen is 51:

* https://forum.proxmox.com/threads/the-maximum-number-of-node...

I'm happily running some Proxmox now, and wouldn't want to go to more than a dozen hypervisors or so. At least not in one cluster: that's partially what PDM 1.0 is probably about.

I have run OpenStack with many dozens of hypervisors (plus dedicated, non-hyperconverged Ceph servers) though.


I use VyOS instead of OpenWrt, but I'd presume OpenWrt can mirror a port? It'd be better to do it on your switch, of course. But you could mirror the traffic crossing the LAN-WAN boundary and direct it to a Security Onion install, an open-source IDS. It has pretty heavy demands, but traffic analysis is not an easy, computationally cheap task.
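On a Linux-based router, port mirroring can be sketched with tc's mirred action (the interface names below are hypothetical; a managed switch's SPAN port would do the same job with less CPU cost):

```shell
# Attach an ingress qdisc to the WAN-facing interface
tc qdisc add dev wan0 handle ffff: ingress

# Copy everything arriving on wan0 out the port the IDS sensor listens on
tc filter add dev wan0 parent ffff: matchall \
    action mirred egress mirror dev ids0
```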


Consumer vendors for routers/firewall combos are trash, but I think they'd go a long way in helping people by having an easy to turn on IoT vlan.

Matter devices run without internet access (at least that's the whole point of the spec; some manufacturers offer fewer features without their cloud-based app, but to be Matter certified a device must work locally to some extent), so blocking the vlan should be okay with a lot of IoT devices.

Random dodgy streamer box does need internet access though, so I think at best having a vlan (probably one just for it sadly) that doesn't have access to the rest of your internal network would be the only realistic solution. Still won't help prevent it from using your connection as part of a botnet though. It's a hard problem.
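As a sketch of what such an isolated VLAN could look like on a Linux firewall (nftables; interface names and the exact policy are illustrative, not from any vendor UI):

```shell
# Default-deny forwarding; the IoT VLAN may reach the WAN but not other LANs
nft add table inet fw
nft add chain inet fw forward '{ type filter hook forward priority 0 ; policy drop ; }'
nft add rule inet fw forward ct state established,related accept
nft add rule inet fw forward iifname "iot0" oifname "wan0" accept

# Trusted LAN may initiate connections into the IoT VLAN, not vice versa
nft add rule inet fw forward iifname "lan0" oifname "iot0" accept
```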

Unfortunately users are very averse to learning anything about how their devices work, so I don't have any idea what can be done about the problem.

Maybe we have to rely on the state going after sellers of such pre-compromised devices? I'd say hold the users somewhat liable, maybe with a small fine, when they are part of a botnet, and waive it when it's a "legit brand" that gets compromised outside of the user's control? Pressure would need to be put on "legit" consumer manufacturers to actually provide security updates for somewhat older devices and not abandon them the minute the latest model is released.


> Unfortunately users are very averse to learning anything about how their devices work, so I don't have any idea what can be done about the problem.

They are.

But there's precedent: Manufacturers spent years shipping consumer routers that worked out-of-the-box with default wide-open networks with SSIDs like "NETGEAR" or "linksys," which was gloriously insecure.

Some folks were sure back then that this could never change, but it has changed. These days, such devices are generally reasonably secure by default.

It can presumably change for Matter and IoT, too.

(Except the rabbit hole is kind of interesting, because... The usual method of setting up a Matter device means scanning a QR code with a pocket supercomputer to begin the process of connecting the Matter device to whatever wifi network it is that the pocket supercomputer is currently using.

And this does work for getting a Matter device online, but it doesn't allow for easy separation of network roles.

So the routers will need to change, and the Matter setup process will also need to change. Shouldn't take more than another decade or two for both things to get accomplished, I suppose.)


Matter-over-Thread devices can typically be added without any WAN connection. You just need the QR code. And a recent revision of the spec added provisioning via NFC, which will be great since some devices have easy-to-lose QR codes.


Matter-over-anything can typically be added without any WAN connection


Shoutout to Mikrotik for being the only consumer vendor with good router/firewall combos. I recommend getting one if you're comfortable doing a bit of work to setup a secure home network.


My AP has a default "guest" ssid/vlan with a separate address block on it... I use that for untrusted devices.

It's a dedicated prosumer/commercial ap though.


Is it HPE Aruba Instant On? Great APs.


EnGenius EWS377AP WiFi 6 4x4... Been pretty good for a few years now... Considering going back to Ubiquiti for Wifi 7 at some point, but this has been good enough for my needs, and my work/personal desktops are all wired 10/2.5gb so no real issues practically.

It doesn't reach as far outside of my home as my older Ubiquiti AP seemed to reach though... I could get almost a block away before my phone would drop when driving. Now it cuts out in the driveway... and less than halfway into the back yard... single AP on middle of second floor ceiling. Had considered additional unit for back yard coverage.


So run Gentoo, like I do. USE flags give you the flexibility to choose which components get compiled into a package.
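For example (the package and flags here are just illustrative; check `equery uses <package>` for the real set), trimming QEMU down to the components you want might look like:

```shell
# Enable SPICE and VNC support, drop PulseAudio, then rebuild with new flags
echo "app-emulation/qemu spice vnc -pulseaudio" \
    >> /etc/portage/package.use/qemu
emerge --ask --changed-use app-emulation/qemu
```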


I have a Framework Ryzen AI 300 series. Had the screen flickering after a kernel update several weeks ago. Fix was to add "amdgpu.dcdebugmask=0x2" to the grub kernel cmdline. Running Fedora 43, fully up to date as of yesterday. I sadly can't find the official forum thread about it. Hope it helps though.
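On Fedora, the usual way to persist a kernel parameter like that is grubby (a hedged sketch; verify with `cat /proc/cmdline` after rebooting):

```shell
# Add the workaround to every installed kernel's boot entry
sudo grubby --update-kernel=ALL --args="amdgpu.dcdebugmask=0x2"

# Confirm it was recorded for the default kernel
sudo grubby --info=DEFAULT | grep args
```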

