>However, now that these applications are running entirely on-prem, they become difficult for the developer to manage. In many cases, rolling out major updates or bug fixes relies on having the user manually update their deployment. This is unreliable for the developer and burdensome for the user.
Can someone explain this to a non-k8s user? Why is this a problem? Or is this saying that k8s deployments are usually unreliable?
Some feedback - please make it easy to visit your home page, or at least link it on your blog posts. While the post was interesting, I was curious to know more about what your company does and couldn't figure out an easy way to get to your main page to find out.
The top nav bar links to a sign-up page on a different domain, which doesn't mention anything about what you guys do.
And the company logo on the blog just links to the main blog page.
Thank you for taking the time to provide this feedback and I'm glad you liked the post! We’ll update the blog UI asap. In the interim, here’s more about Pixie: https://pixielabs.ai/
I have a hard time buying that this hybrid architecture is the future. Running a bunch of stateful services with potentially large storage requirements in my own infrastructure is not zero-maintenance, even if the control plane is managed. And proxying the UI for end users reintroduces security and usability issues.
The best experience and lowest-friction sale is to deliver a fully managed experience and earn the trust of customers. Confluent, Snowflake, Mongo Atlas, Rockset, CockroachDB, Elastic, Splunk, New Relic, etc. all prove this is a great model for both customers and vendors. For the largest, most security-conscious customers you can offer a fully VPC/on-prem solution with exactly the same form factor if you want. Hybrid can have its place, but I'd always try hard to offer a managed solution if you can justify it.
Thanks for the feedback. In general we try to keep the footprint of services on customer environments small. Since Pixie is mostly an ephemeral system, our storage requirements are relatively modest. This lets us process enough of the data locally on the customer's machines that we only have to send back and store summaries. The long-term goal is to build a cloud + edge system that processes data efficiently by exploiting data locality.
Kubernetes gives us a substrate where these types of systems are manageable without too many complications. In the longer run we will likely offer VPC-hosted services as well for people who just don't want to deal with it.
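To make that a bit more concrete, here's a rough sketch of the "process locally, only send back summaries" idea. This isn't Pixie's actual code or API; the agent, field names, and payload are hypothetical, just to illustrate why the per-cluster storage footprint stays small:

    # Hypothetical sketch of "process locally, send back summaries" -- not Pixie's actual code.
    import json
    import statistics

    class EdgeAgent:
        """Buffers raw latency samples on the customer's machines; only summaries leave."""

        def __init__(self, service):
            self.service = service
            self.samples = []

        def record(self, latency_ms):
            # Full-fidelity samples stay local to the cluster.
            self.samples.append(latency_ms)

        def summarize(self):
            # A handful of summary fields is all that gets shipped upstream.
            return {
                "service": self.service,
                "count": len(self.samples),
                "p50_ms": statistics.median(self.samples),
                "max_ms": max(self.samples),
            }

    agent = EdgeAgent("checkout")
    for latency in (12.0, 15.5, 230.1):
        agent.record(latency)

    # In a real deployment this small payload would go to the hosted control plane,
    # instead of exporting every raw sample.
    print(json.dumps(agent.summarize()))

The raw, high-volume data never leaves the cluster; only a small summary crosses the wire.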
Off topic from the comment you replied to, but could you explain how Pixie is different from something like Crossplane? For some reason in my head that was the first comparison I made.
Your blog post does hit on a major pain point for private cloud/on-prem deployments, which is management & troubleshooting post-deployment. I'm looking for the best solutions for this problem, so I always like to hear what people recommend.
Pixie is an observability platform that gives access to the telemetry needed to troubleshoot your workloads (observe, debug, and analyze). It isn't aimed at helping you provision and manage your workloads.
Projects like Crossplane (which is great!) are aimed at helping with workload deployment and management. Crossplane specifically enables multi/hybrid-cluster deployments. It isn't necessarily focused on observability (Crossplane folks can correct me here :) )
Yeah, I really should not have put Splunk in this list because it shows the perils of trying to make on-premise software run in the cloud vs. a SaaS-first mindset.