
What application developers need from Kubernetes resources is pretty simple:

- Access to an unrestricted K8s sandbox with the same heavy-duty compute power, container network interface (CNI), container storage interface (CSI) driver, and cloud controller manager (CCM) as their production K8s environments.
- The freedom to deploy what they want, when they want, without waiting for approvals and fulfillment of internal provisioning tickets.

Yet many developers today don't have this. What's more, they feel constrained by the security processes imposed by their organizations, especially around Kubernetes provisioning and role-based access control (RBAC), where they face contention over API versions during custom resource-heavy development.

As a platform engineer, it's your challenge to solve this conundrum. Dedicated clusters, kind clusters, and namespaces are each imperfect answers (as we'll see in a moment), but there is now a fourth way! Keep reading to find out how Spectro Cloud Palette's new Nested Cluster feature solves these problems without compromising on security or visibility.

Why Is It so Difficult to Access Kubernetes Clusters?

Today, most developers get access to Kubernetes clusters in one of three ways - and none of them is ideal.

A local kind cluster is easy to deploy, but achieving consistent configuration between your local setup and a production environment is not always possible. Considerations such as secret management, ingress controllers, load balancers, network security policies and resource limitations will all come into play. Additionally, local Kubernetes clusters cannot be shared with multiple team members for collaboration.

Access to a namespace from a cluster maintained by a platform engineering team brings the cluster under enterprise control, but runs into limitations around tenancy and logistics. The soft multitenancy model inherent to the namespace approach can't handle multiple versions of the same custom resource definition (CRD) and doesn't provide hard isolation when it comes to certain operators and other cluster-scoped resources. Lastly, managing RBAC in this scenario can become onerous, causing procedural inefficiencies and friction.

One dedicated cluster per developer is another alternative. But this approach quickly becomes too expensive, as most developers will leave their cluster running 24×7. Many developer clusters will also require considerable multicluster management overhead to ensure consistency and to keep everything up-to-date and secure.
What are Nested Clusters?

And this is where Palette Nested Clusters come in. We've just introduced this feature as part of our Palette 3.0 announcement, which is all about the developer experience.

Nested Clusters are built on top of Loft Labs' open source projects, vcluster and the vcluster CAPI Provider, as core open source technology. We could go on and on about the virtues of vcluster, but here is a picture:

[vcluster architecture diagram]

Essentially, vcluster relies on two core components, a syncer and a K8s control plane (typically a single-binary distribution such as K3s, although full CNCF K8s control planes are supported), to create a "virtual" Kubernetes cluster within a pre-existing host cluster.

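If you haven't seen vcluster before, here is a minimal sketch of what that looks like from the command line, using Loft Labs' vcluster CLI. The cluster and namespace names are invented for illustration, and flags differ between vcluster releases, so check the vcluster docs for your version:

```shell
# Create a virtual cluster named "dev-1" inside the host cluster,
# scoped to its own host namespace.
vcluster create dev-1 --namespace team-dev

# Point kubectl at the virtual cluster's API server. Depending on the
# vcluster version, this switches your kubeconfig context or writes a
# standalone kubeconfig file.
vcluster connect dev-1 --namespace team-dev

# From the developer's perspective, this now behaves like a full cluster:
kubectl get namespaces
```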

The syncer does the heavy lifting of synchronizing K8s resources between the API servers of the two Kubernetes control planes (host and virtual). Typically, certain fundamental K8s primitives such as pods and services are always synchronized from the virtual Kubernetes cluster to the host. Exactly which K8s resources are synchronized - and in which direction(s) - is highly configurable, thus enabling a wide array of use cases.

For example, imagine a scenario in which multiple teams are developing microservices that each rely on a set of shared services. One might deploy the foundational services only once on the Kubernetes host cluster and map them into each Kubernetes virtual cluster.
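To give a flavor of what that configuration can look like, here is a hedged sketch using vcluster's Helm values. The keys follow the values schema as we understand it at the time of writing, and every name (shared-db, team-dev) is invented, so treat this as indicative rather than authoritative:

```shell
# Indicative vcluster values: additionally sync Ingress objects from the
# virtual cluster to the host, and map a shared service deployed once on
# the host into the virtual cluster's default namespace.
cat > vcluster-values.yaml <<'EOF'
sync:
  ingresses:
    enabled: true
mapServices:
  fromHost:
    - from: shared-services/shared-db  # namespace/name on the host
      to: default/shared-db            # namespace/name inside the vcluster
EOF

# Create the virtual cluster with the custom sync settings.
vcluster create dev-1 --namespace team-dev -f vcluster-values.yaml
```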

By default, the virtual cluster leverages the same container runtime interface (CRI) as the host. Advanced scenarios are possible, however, where the virtual cluster uses its own CRI for maximum isolation.

With Palette Nested Clusters, we've integrated vcluster with the Palette platform. We'll end our description of vcluster here, but we highly recommend you spend some time with Loft Labs' docs.
