
vCluster Auto Nodes brings dynamic autoscaling to any Kubernetes
vCluster Labs has announced the availability of autoscaling features for Kubernetes workloads across public cloud, private cloud, hybrid infrastructure and bare metal environments, enabled through its new Auto Nodes capability.
The new feature leverages Karpenter, the open source autoscaler originally developed by Amazon Web Services, to enable dynamic, infrastructure-agnostic scaling for virtual Kubernetes clusters. With Auto Nodes, engineering and platform teams can scale workloads in a vendor-neutral manner, avoiding the limitations of being restricted to a single cloud provider or managed Kubernetes offering.
Auto Nodes is designed to integrate with common infrastructure automation tools such as Terraform and OpenTofu, allowing teams to declaratively define infrastructure for their Kubernetes clusters. It also ships native integrations with NVIDIA Base Command Manager and KubeVirt, enabling autoscaling of workloads on bare metal private clouds and on AI clusters outfitted with NVIDIA GPUs.
Speaking about the launch, Lukas Gentele, CEO of vCluster, said:
"With Auto Nodes, we're unlocking Karpenter-powered dynamic autoscaling across any infrastructure. It's not just about autoscaling, it's about doing it anywhere, with full isolation and flexibility - all based on a proven open source technology. That's a superpower no other K8s distro has across private cloud and bare metal environments."
Previously, dynamic autoscaling with Karpenter was primarily available on managed Kubernetes services such as EKS and AKS. vCluster's Auto Nodes extends this functionality into a portable, infrastructure-agnostic solution, making it possible for organisations to scale Kubernetes workloads independently of a particular platform or vendor.
Torsten Volk, Principal Analyst for Application Modernization at Enterprise Strategy Group, commented on the significance of the feature:
"The ability of Auto Nodes to instantly provision and decommission capacity on Kubernetes clusters based on resources from any cloud or on-premises data centre infrastructure elevates auto scaling to a new level. Organisations can now scale or burst their workloads anywhere based on cost, compliance, or other considerations, without being tied to a single platform vendor," said Volk. "vCluster's Auto Node feature is an advancement because it allows organisations to fully leverage modern hybrid and multi-cloud resources, whether in a regulated private cloud, a hyperscaler, or bare metal. This is significant because it provides exactly the flexibility that enterprises need to future-proof their infrastructure strategy."
vCluster recently introduced Private Nodes, a feature for attaching dedicated Kubernetes nodes directly to a virtual cluster to support isolated, single-tenant workloads. With the integration of Auto Nodes, these isolated virtual clusters can now scale their node resources automatically across any kind of environment, using the embedded Karpenter operator.
How it works
The Auto Nodes capability builds on the Private Nodes approach by embedding a Karpenter operator in each virtual cluster. The operator continuously monitors pod demand, watching for unschedulable pods, and dynamically provisions new nodes that satisfy their requirements. Once workloads terminate and demand drops, the operator removes the unneeded nodes. The system supports a variety of node providers, including Terraform, NVIDIA BCM, KubeVirt, and custom environments, and is compatible with both CPU- and GPU-based workloads.
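The provision-then-deprovision loop described above can be illustrated with a short sketch. This is not vCluster's or Karpenter's actual code; the `Pod`, `Node`, and `reconcile` names and the simple CPU bin-packing are assumptions made purely to show the shape of the reconciliation logic:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Karpenter-style reconciliation pass.
# All names and the packing heuristic are hypothetical, not vCluster's API.

@dataclass
class Pod:
    name: str
    cpu: int  # requested CPU (millicores)

@dataclass
class Node:
    capacity: int                 # CPU capacity (millicores)
    pods: list = field(default_factory=list)

def reconcile(pending_pods, nodes, node_size=4000):
    """One pass of the loop: provision nodes for unschedulable pods,
    then deprovision nodes that no longer run anything."""
    # Provision: place each pending pod on an existing node if it fits,
    # otherwise bring up a new node for it.
    for pod in pending_pods:
        placed = False
        for node in nodes:
            used = sum(p.cpu for p in node.pods)
            if used + pod.cpu <= node.capacity:
                node.pods.append(pod)
                placed = True
                break
        if not placed:
            nodes.append(Node(capacity=node_size, pods=[pod]))
    # Deprovision: once demand drops, remove nodes that are now empty.
    nodes[:] = [n for n in nodes if n.pods]
    return nodes
```

The real operator reasons about full Kubernetes scheduling constraints (taints, affinities, instance shapes) rather than a single CPU dimension, but the provision/deprovision cycle follows this pattern.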
This setup enables fully isolated, auto-scaling virtual clusters that can utilise compute resources from any available infrastructure. Organisations are able to deploy hybrid and multi-cloud architectures in which virtual clusters scale elastically for varied workloads, maintaining isolation without duplicating entire clusters or re-architecting environments.
Auto Nodes also makes it possible to shift workloads between clouds or data centres in response to real-time pricing, availability or policy requirements, without changes to application code or infrastructure setup. This facilitates cost and resource optimisation across environments.
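As a rough sketch of the kind of placement decision this enables, the function below picks the cheapest capacity source that satisfies a policy constraint. The `pick_target` name, the candidate fields and the hypothetical provider names are all illustrative assumptions, not part of vCluster's configuration:

```python
# Hypothetical sketch: choose where to scale next based on current
# price and a policy constraint (e.g. data-residency region).

def pick_target(candidates, require_region=None, max_price=None):
    """candidates: dicts like
    {"name": "onprem-dc1", "price_per_cpu_hour": 0.02, "region": "eu"}.
    Returns the name of the cheapest eligible target, or None."""
    eligible = [
        c for c in candidates
        if (require_region is None or c["region"] == require_region)
        and (max_price is None or c["price_per_cpu_hour"] <= max_price)
    ]
    if not eligible:
        return None
    return min(eligible, key=lambda c: c["price_per_cpu_hour"])["name"]
```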
Key use cases
The company highlights several use cases for Auto Nodes:
- Scaling AI and machine learning workloads, particularly those requiring GPUs, across cloud and on-premises environments
- Optimising CI/CD pipelines by provisioning burst compute for test jobs on demand, reducing costs by avoiding idle capacity
- Providing secure, multi-tenant, production-grade environments that scale independently for various teams and applications
- Enabling hybrid and multi-cloud infrastructure strategies, allowing workloads to scale seamlessly across provider boundaries without requiring architectural redesign
Auto Nodes is now generally available as part of the vCluster v0.28 and vCluster Platform v4.4 releases. It supports static and dynamic node pools, allows fine-grained allocation controls via NodeTypes, and can target specific hardware architectures, GPU configurations and instance shapes.
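The hardware-targeting idea behind NodeTypes can be sketched as a simple requirements match: given a pod's architecture and GPU needs, find an instance shape that satisfies them. The shape catalogue and field names below are invented for illustration and do not reflect vCluster's NodeTypes schema:

```python
# Illustrative sketch of matching workload requirements to instance
# shapes; names and fields are hypothetical, not vCluster's NodeTypes API.

SHAPES = [
    {"name": "cpu-small",  "arch": "amd64", "gpus": 0},
    {"name": "cpu-arm",    "arch": "arm64", "gpus": 0},
    {"name": "gpu-a100x4", "arch": "amd64", "gpus": 4},
]

def match_shape(arch, min_gpus=0, shapes=SHAPES):
    """Return the first shape satisfying the CPU architecture and
    minimum GPU count, or None if nothing qualifies."""
    for s in shapes:
        if s["arch"] == arch and s["gpus"] >= min_gpus:
            return s["name"]
    return None
```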
With this release, vCluster's Private Nodes feature can now automatically scale node resources across different infrastructures, maintaining tenant isolation while optimising utilisation and operational efficiency.