
Our company made the switch over to Valkey, and we've invested hundreds of engineering hours into it already. I don't see us switching back at this point especially when it's clear Redis could easily pull the bait-and-switch again.


Your company invested hundreds of engineering hours switching from Redis to a clean fork of Redis?


I can easily see this for a midsize company.

While it's likely an easy process to drop in Valkey, creating the new instances, migrating apps to those new instances, and making sure there isn't some hidden regression (even though it's "drop in") all take time.

At a minimum, 1 or 2 hours per app optimistically.

My company has hundreds of apps (hurray microservices). That's where "hundreds of hours" seems pretty reasonable to me.

We don't have a lot of redis use in the company, but if we did it'd have taken a bit of time to switch over.

Edit: The reply was dead before I could respond, but I figured it was worthwhile to respond anyway.

> It's literally just redis with a different name, what is there to test?

I've seen this happen quite a bit in open source, where an "x, just renamed to y" also happens to include tiny changes that actually conflict with the way we use it. For example, maybe some API doesn't guarantee order, but our app (in a silly manner) relied on the order anyway. A bug on us, for sure, but not something that would surface until a Redis update or this switchover.
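To make that concrete, here's a toy sketch of the kind of hidden ordering assumption described above (the reply data and the `parse_user` helper are made up for illustration, not real client code):

```python
# Hypothetical example: commands like HGETALL make no guarantee about
# field order, but sloppy app code sometimes assumes one anyway.
def parse_user(hgetall_reply):
    # BUG: assumes "name" is always the first field in the reply.
    # Works by accident against one server build, breaks on another.
    return hgetall_reply[0][1]

# Reply as a list of (field, value) pairs, as some clients return it.
reply_v1 = [("name", "alice"), ("email", "a@example.com")]
reply_v2 = [("email", "a@example.com"), ("name", "alice")]  # same data, other order

print(parse_user(reply_v1))  # "alice"
print(parse_user(reply_v2))  # "a@example.com" -- silent breakage
```

Nothing here is a bug in the server; the app just baked in an assumption that no version ever promised to keep.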

It can also be the case that we were relying on an older version of Redis, and the switchover to Valkey necessitates bringing in newer Redis changes that we may not have tested.

These things are certainly unlikely (which is why 1 or 2 hours as an estimate; it'd take more if these problems were more common). Yet they do happen, and have happened to me with other dependency updates.

At a minimum, simply making sure someone didn't fat-finger the new Valkey addresses or mess up the Terraform for the deployment will take time to test and verify.


> My company has hundreds of apps (hurray microservices). That's where "hundreds of hours" seems pretty reasonable to me.

Sounds like a huge disadvantage in your company’s choice of software architecture to me.


There's definitely pros and cons to this approach.

The pro is that every service is an island that can be independently updated and managed when need be. Scaling is also somewhat nicer, as you only need to scale the services under heavy load rather than one larger service.

It also makes for better separation of systems. The foo system gets a foo database and when both are under load we only have to discuss increasing hardware for the foo system/database and not the everything database.

The cons are that it's more complex and consistency is nearly impossible (though we are mostly consistent). It also means that if we need a system-wide replacement of a service like Redis, we have to visit our 100+ services to see which depend on it. That rarely comes up, as most companies don't do what Redis did.


Indeed. Microservice zealot 'architects' love to ignore the work that has to go into each microservice and the overhead of collaboration between services. They'll spend a couple of years pretending to work on that problem without addressing it in any meaningful way, then move on to a different company to cause similar chaos.


That’s a loaded statement without understanding the company.

Sometimes large companies acquire smaller companies and keep the lights on in the old system for a while. They may not even use a similar tech stack!

Sometimes they want to cleanly incorporate the acquired systems into the existing architecture but that might take years of development time.

Sometimes having a distributed system can be beneficial. Pros / cons.

And sometimes it’s just a big company with many people working in parallel, where it’s hard to deploy everything as a single web app in a single pipeline.


My understanding is that Valkey was forked directly from Redis. So assuming you migrate at the fork's point in time, it literally is the same code.


Yes, but not the same infrastructure and configuration and documentation. Any reasonable operation will also do validation and assurance. That adds up if you have a sizable operation. "Hundreds of hours" is also not some enormous scale for operations that, say, have lots of employees, lots of data, and lots of instances.

The part you are thinking of is not the time consuming part.


I believe it. There are companies that invested hundreds of engineering hours to rename master to main.


That is even more ridiculous, at least switching to a clean fork of Redis has business reasons. Following the latest cultural fads, less so.


One would find it hard to believe how often we hardcoded "master" into every corner of the software that ever touches any VCS.


At the very least you have to validate everything that touches redis, which means finding everything that touches redis. Internal tools and docs need to be updated.

And who knows if someone adopted post-fork features?

If this is a production system that supports the core business, hundreds of hours seems pretty reasonable. For a small operation that can afford to YOLO it, sure, it should be pretty easy.
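That "finding everything that touches Redis" step can at least be partly automated; a rough sketch in Python (the pattern and repo layout are assumptions, and a real audit would also cover configs, dashboards, and infra code):

```python
# Rough sketch: walk a repo tree and flag files that mention "redis",
# so they can be reviewed as part of a Valkey migration.
import os
import re

PATTERN = re.compile(r"redis", re.IGNORECASE)

def find_redis_refs(root):
    """Return (path, line number, line) for every line mentioning redis."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as f:
                    for lineno, line in enumerate(f, 1):
                        if PATTERN.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

A grep is the easy 10% of the work, of course; deciding what each hit means and validating the behavior is where the hours go.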


But why are they spending any time switching away from Redis at all unless they are a hosting provider offering Redis-as-a-service?

I wasn't aware the license had any negative effect on private internal use.


The negative effect is that you have to bring the lawyers back in, and they tend to take an extremely conservative position on what might be litigated as offering “Redis-as-a-service”.


Would they? My experience is the lawyers sign off when software is being chosen, but then they aren't consulted again.

After that, it is just updating version numbers. Lawyers don't sign off on version upgrades, why would I bring this to them?


Many legal departments cannot afford nuance when a newsworthy license change occurs. The kneejerk reaction is to switch away to mitigate any business risk.


I doubt having lawyers review your usage is more expensive than spending hundreds of hours of dev time to migrate.


Our lawyers looked at the SSPL, since we do host software for customers and it does use Redis, and went "Eh, this is as clear as mud," so Valkey it is!




There are companies using many thousands of Redis instances storing petabytes of data with millions of users.

Now consider a no-down-time migration. How long do you think that'll take to engineer and execute?
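One common shape for a no-downtime cutover is a dual-write wrapper: write to both backends, read from the new one with a fallback to the old, then retire the old backend once the data has converged. A heavily simplified sketch (the class and the dict-backed clients are stand-ins for illustration, not a real Redis/Valkey client API):

```python
# Simplified dual-write sketch for a gradual Redis -> Valkey cutover.
class MigratingCache:
    def __init__(self, old_client, new_client):
        self.old = old_client  # existing Redis
        self.new = new_client  # Valkey replacement

    def set(self, key, value):
        self.old.set(key, value)  # keep the old backend authoritative
        self.new.set(key, value)  # while warming the new one

    def get(self, key):
        value = self.new.get(key)
        if value is None:         # not yet migrated: fall back to old
            value = self.old.get(key)
        return value

# Stand-in clients so the sketch is self-contained; a real setup would
# use actual client connections.
class DictClient(dict):
    def set(self, k, v):
        self[k] = v
    def get(self, k):
        return dict.get(self, k)
```

Even this toy version hints at the hard parts: expiring keys, non-idempotent writes, and deciding when "converged" is actually true all need engineering time at petabyte scale.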


Even the infrastructure switch and testing alone would take a lot of time, let alone the application-level tests and so on.


Why did you not pay Redis for a licence instead? I'm genuinely curious. Did you feel uncomfortable being tied to a license fee that might increase in the future, or was it just too expensive?


What? Isn't Valkey a "drop in" replacement? I switched a couple of deployments and it "just worked", but maybe I'm just too simple.


How does it take hundreds of hours to swap out a backend when you're using a trivial protocol like Redis's?

Did you switch out the client or something? Maybe the problem is not using pluggable adapters? Is your business logic coupled to the particular database client API? Oof.

I know the cluster clients are different (been there, done that), but hundreds of hours, seriously? Or was that just hyperbole?
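For what it's worth, the wire protocol really is that trivial, which is exactly why the server swap itself is rarely the expensive part. A minimal RESP command encoder as a sketch (both Redis and Valkey speak RESP; this helper is illustrative, not a production client):

```python
# Minimal RESP (REdis Serialization Protocol) command encoder.
# A command is an array ("*<count>\r\n") of bulk strings
# ("$<length>\r\n<bytes>\r\n") -- that's the whole request format.
def encode_command(*parts):
    out = [f"*{len(parts)}\r\n".encode()]
    for part in parts:
        data = part if isinstance(part, bytes) else str(part).encode()
        out.append(b"$" + str(len(data)).encode() + b"\r\n" + data + b"\r\n")
    return b"".join(out)

# SET foo bar -> b"*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n"
print(encode_command("SET", "foo", "bar"))
```

The hours quoted upthread aren't spent on the protocol; they're spent on inventory, infrastructure, validation, and rollout across many services.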


I think you might underestimate how little time hundreds of hours is. It's very, very easy to reach your first hundred hours on a task: e.g., one 40-hour week for 3 engineers = 120 hours.

If valkey is working, why spend that time reverting to redis, when you could be spending it on things that are actually going to provide value?


Hundreds could be 200, which, at 10 hours a day, 5 days a week, is about a week and a half for a team of 3. It seems quite possible if you had to do testing/benchmarking, config changes, deploy the system, watch metrics, etc.


My company is relatively small. With probably 6 separate Redis instances deployed in various places (k8s, bare metal, staging and prod environments) and dozens of (micro)services using them, it's probably at least 40 hours (one person-week) to migrate everything at this point. Then there are things like outdated documentation, legacy apps that keep working but that nobody wants to spend time updating, naming problems everywhere (renaming "redis" everywhere with zero downtime would be a huge pain), possible updates to CI, CD, and e2e tests, and probably more problems that might only become apparent at scale.

And we're honestly not large. For a mid-size company, hundreds of hours sounds reasonable. For a big company the amount of work must be truly staggering.


By switch I mean that all new projects use Valkey instead of Redis, and we've invested hundreds of hours into those new projects. We've also tried stuff with the Valkey Glide client.




