I speak from a more distributed, microservices ecosystem, but the core issue is having a “fat” E2E testing suite in the first place. Since it takes hours to run, it forces a “batched” testing approach where a bunch of PRs are tested together (nightly, etc.), which leads to very slow feedback cycles for developers. Ideally I would shift to a very small suite of E2E tests that can be run for every PR, plus service-specific API tests that verify the integration of the service being changed with the rest of the system. This way you also de-centralize the testing and get closer to true continuous delivery of each change to production.
Hi HN, we wrote a deep dive on a problem many of us in the platform/DevOps space face: traditional staging environments are slow, expensive, and create contention that directly hurts DORA metrics. The post explores how moving to on-demand, isolated sandboxes can directly improve Change Lead Time, Change Failure Rate, etc., based on findings from the latest DORA report. Happy to answer any questions.
AGI is a vague term, but a naive definition is: smarter than any human in any field/domain.
Clearly current LLMs are impressive in document/code generation but I’m having a hard time extrapolating that to AGI (assuming that it’s still reliant on current transformers arch).
Thanks for sharing! This approach is inspired by tools like opendiffy. It’s an adaptation of canary testing applied earlier in the SDLC, specifically for Pull Request testing. It lends itself quite well to covering contract and integration testing, and potentially to “shifting left” other types of testing.
Interesting idea! The real bottleneck here is likely not the number of test runs, but rather the overhead of environment setup and teardown. This is where ephemeral environments really shine, if they can be optimized for quick startup.
Thanks for the reference. Yes, it boils down to what is intelligence. I guess true AGI is more than pure intelligence and also involves a sense of “self” or ego and such.
Impressive list indeed! Co-founder of Signadot here. Btw, we don't manage Kubernetes (K8s) environments; instead, we enable multi-tenancy for test tenants within a customer's existing K8s setup, without duplicating infrastructure. We leverage sidecars or a service mesh for request-based isolation, dynamically routing traffic based on request headers (typically propagated using OpenTelemetry).
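To make the request-based isolation concrete, here is a minimal sketch of how header-based routing can work in principle. All names here (the `sd-routing-key` baggage entry, service addresses, the `SANDBOX_ROUTES` table) are hypothetical illustrations, not Signadot's actual implementation: a routing layer reads a key carried in the W3C `baggage` header (as propagated by OpenTelemetry) and forwards the request to a sandboxed workload version if one is registered for that key, falling back to the baseline otherwise.

```python
# Hypothetical routing tables: which sandboxed workloads exist per
# routing key, and the shared baseline addresses. In a real setup a
# sidecar or service mesh holds this state, not application code.
SANDBOX_ROUTES = {
    "sandbox-pr-123": {"checkout": "checkout-pr-123.test.svc:8080"},
}
BASELINE_ROUTES = {"checkout": "checkout.prod.svc:8080"}

ROUTING_HEADER = "baggage"  # OpenTelemetry propagates W3C baggage end to end


def resolve_upstream(service: str, headers: dict) -> str:
    """Return the address this request should be forwarded to."""
    baggage = headers.get(ROUTING_HEADER, "")
    # Parse W3C baggage entries like "sd-routing-key=sandbox-pr-123".
    entries = dict(
        item.strip().split("=", 1) for item in baggage.split(",") if "=" in item
    )
    key = entries.get("sd-routing-key")
    if key in SANDBOX_ROUTES and service in SANDBOX_ROUTES[key]:
        return SANDBOX_ROUTES[key][service]
    # No sandbox registered for this key/service: use the shared baseline.
    return BASELINE_ROUTES[service]
```

Requests without the routing key (or with an unknown one) flow through the baseline as usual, which is what lets many test tenants share one environment safely.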
In the early days it’s OK not to worry about the difference between the company name and the product name. But as you develop more products it may make sense to keep company and product names distinct. It depends on product positioning and branding goals. There’s no “best” approach here.
> Define Sandboxes using a simple YAML file, specifying customizations relative to the baseline environment. Maintain these YAML files in your git repository and standardize Sandboxes across your organization.
It looks like these YAML files are not the K8s files I already have?
Also, is it open source? (I couldn't find a link to source on mobile)
Yes, these are our (thin) YAML files via which you describe the Sandboxes in terms of deltas from the baseline env. The K8s yaml files remain as the source of truth for your standard deployments. The operator is not open source. Some components like the CLI and Resource plugins are. We do have a free tier, however.
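For a sense of what “deltas from the baseline env” means, here is an illustrative sketch only; the field names are hypothetical and not Signadot’s actual schema. The idea is that the sandbox file describes just the fork of the one service under test, while everything else resolves to the baseline:

```yaml
# Hypothetical sandbox spec (illustrative, not the real schema):
# fork one Deployment from the baseline and swap in the PR image.
name: checkout-pr-123
cluster: staging
forks:
  - baseline:
      kind: Deployment
      namespace: store
      name: checkout
    customizations:
      image: registry.example.com/checkout:pr-123
```

Because the file only encodes the diff, it stays small and can live alongside the service’s code in git.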
Co-founder of Signadot here. Computers and software programs become multi-tenant so they can do more and support concurrent users with fixed resources. However, when it comes to test environments for cloud native applications, most are single-tenant, meaning teams can't easily share a testing environment for different features.
The idea here is to bring multi-tenancy to test environments by routing traffic through specific versions of microservices. This way, multiple testing environments can be created at a lower cost, enabling new use cases that were too expensive before. This blog post is about one such use case: end-to-end testing of a set of PRs, realized in a cost-effective way. Would love to get your feedback!