Cloudfleet brings a managed Kubernetes experience to on-premises and hybrid cloud environments. Simplify your infrastructure footprint and reduce operational overhead with Cloudfleet Kubernetes Engine.
Cloudfleet is CNCF-conformant, enabling seamless migration of workloads to and from any conformant cluster and avoiding vendor lock-in.
Cloudfleet dynamically provisions compute capacity based on real-time demand.
Add on-premises nodes to your cluster with a single command and schedule pods across both on-premises and cloud nodes.
Nodes securely connect via an encrypted network spanning multiple clouds and regions.
The hybrid cloud is rapidly emerging as the future of enterprise computing. Organizations are shifting from all-in cloud migrations to viewing data centers as strategic assets. This transition is driven by cost optimization efforts, increased data sovereignty requirements, and growing expectations for global availability.
Cloudfleet eliminates the complexity of managing Kubernetes clusters. It is specifically designed for on-premises infrastructure and containerized workloads. Cloudfleet seamlessly supports hyperconverged environments, bare metal, and any type of virtualization, offering unmatched ease of use and resilience. The platform is built to meet the container management needs of companies of all sizes, with a flexible pay-as-you-go pricing model.
Cloudfleet Kubernetes Engine (CFKE) is a fully managed Kubernetes service that allows you to run applications in your own data center, on any cloud provider, and in any region - all from a single control plane. Cloudfleet runs the Kubernetes control plane in a managed, secure environment, ensuring that all critical components remain available and up to date. No matter the size of your cluster or where your infrastructure is located - whether in an on-premises data center in Asia or an AWS region in the US - you can focus on your workloads and pay only for the resources your applications use.
Adding nodes to your Cloudfleet Kubernetes clusters is as easy as executing a single CLI command. All you need is a Linux machine with SSH access - no Kubernetes expertise required. With Cloudfleet, managing and operating on-premises Kubernetes clusters has never been simpler. Say goodbye to the complexity of maintaining a stack of management tools for each environment - Cloudfleet handles everything for you.
Getting started with Cloudfleet is easy. No contracts, no upfront payments. Create a cluster with three nodes, each with up to 8 vCPUs - for free - without needing to attach a payment method. For larger production deployments, you only pay for the servers managed by Cloudfleet, with per-minute billing. If your organization has specific requirements or needs assistance with a proof of concept, our team is ready to help.
Cloudfleet handles the most challenging aspects of Kubernetes control plane management, including networking, security, updates, scaling, monitoring, and high availability. By simplifying platform management, Cloudfleet removes operational barriers, enabling organizations to get up and running in minutes instead of weeks.
Cloudfleet Kubernetes Engine comes with a state-of-the-art stack of foundational components pre-installed, pre-configured, and tested. The best infrastructure is the one you don’t have to manage - Cloudfleet provides the easiest way to run global Kubernetes clusters.
Cloudfleet Kubernetes Engine is built for the most demanding networking environments, including edge computing, IoT deployments, and remote locations with unstable internet connectivity. If connectivity between application servers and the Cloudfleet control plane is lost, workloads continue running until the connection is restored.
Automated failover to the public cloud can be configured in case of connectivity disruptions in edge or on-prem environments. Once the connection is re-established, the cluster automatically restores its intended state using highly available, durable persistent storage.
This manifest deploys an Nginx pod in a specific on-premises data center, ensuring it runs only on servers located in 'dc-washington-1' with at least 256 MiB of memory and 1 CPU core.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  nodeSelector:
    topology.kubernetes.io/region: dc-washington-1
  containers:
    - name: example-app
      image: nginx
      resources:
        requests:
          memory: "256Mi"
          cpu: "1000m"
        limits:
          memory: "512Mi"
          cpu: "2000m"
This manifest deploys an Nginx application with two replicas, distributing them across both on-premises infrastructure and Google Cloud (GCP). It uses node affinity to restrict scheduling to nodes labeled as either GCP or on-premises, while pod anti-affinity ensures that replicas are spread across different environments to improve resilience.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "cfke.io/provider"
                    operator: "In"
                    values: ["gcp", "on-premises"]
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: cfke.io/provider
      containers:
        - name: nginx
          image: nginx:1.24
          resources:
            requests:
              cpu: 50m
              memory: 56Mi
Operating Kubernetes - especially at scale and in distributed environments - requires specialized expertise and can be complex. Cloudfleet provides out-of-the-box features to accelerate time to production for containerized workloads on-premises, in the cloud, in hybrid deployments, and at the edge.
Adding your on-premises servers to a Cloudfleet cluster does not require opening your network to the internet or allowing inbound connections. A single outgoing port is used to synchronize cluster state, and all traffic between servers and the control plane is end-to-end encrypted.
Available on all plans, including the free tier, Cloudfleet works with any server provisioning tool and NAS solution. It also supports SSO integration with your IdP via the SAML and LDAP protocols, enabling compatibility with Microsoft Active Directory, Google Directory, and Okta.
Cloudfleet runs a CNCF-conformant distribution of vanilla Kubernetes, ensuring that all applications deployed on Cloudfleet Kubernetes Engine are fully compatible with standard Kubernetes environments. What does this mean for you? Seamless migration of Kubernetes applications on and off Cloudfleet - without major code modifications.
Cloudfleet handles continuous software updates and patching for cluster components. Updates run on a defined schedule with no manual intervention, keeping your clusters up to date with the latest features and security patches.
Leverage public cloud resources when needed. Run the majority of your Kubernetes workloads in your own data center to maintain cost control, while seamlessly adding burst capacity from any public cloud provider as demand increases.
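As an illustrative sketch of this pattern - not a prescribed configuration, and with names and resource figures chosen only for the example - the Deployment below uses preferred node affinity with the cfke.io/provider label shown in the manifests above to favor on-premises nodes; when on-premises capacity is exhausted, the scheduler places the remaining replicas on cloud nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: burst-example
spec:
  replicas: 4
  selector:
    matchLabels:
      app: burst-example
  template:
    metadata:
      labels:
        app: burst-example
    spec:
      affinity:
        nodeAffinity:
          # Prefer on-premises nodes; cloud nodes remain eligible for burst capacity
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: "cfke.io/provider"
                    operator: "In"
                    values: ["on-premises"]
      containers:
        - name: app
          image: nginx:1.24
          resources:
            requests:
              cpu: 500m
              memory: 256Mi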
Cloudfleet delivers truly global clusters - eliminating the need to build separate Kubernetes clusters for each site or cloud provider. Say goodbye to ‘cluster of clusters’ architectures, complex multi-cloud management, and vendor lock-in.
With Cloudfleet, you can use both bare metal and virtualized infrastructure within a Kubernetes cluster. There are no fixed requirements on the number of servers or resources - you can dynamically scale your cluster based on your organization’s needs. With virtualization platforms like vSphere and Proxmox VE, you don’t even need to predefine infrastructure - CFKE automatically provisions and scales virtual machines based on workload demand.
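For illustration only (the name, replica count, and resource figures are assumptions), the sketch below shows the "declare demand, not infrastructure" model: the workload states what it needs, and, per the behavior described above, CFKE's node auto-provisioning is expected to create and scale virtual machines to fit pending pods, with no node pools or VM sizes defined up front.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoscaled-example
spec:
  replicas: 10
  selector:
    matchLabels:
      app: autoscaled-example
  template:
    metadata:
      labels:
        app: autoscaled-example
    spec:
      containers:
        - name: app
          image: nginx:1.24
          resources:
            # Only the resource demand is declared; no VM sizes or node pools are predefined
            requests:
              cpu: "2"
              memory: 4Gi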
Self-managed nodes
Cloudfleet allows you to deploy highly available Kubernetes clusters on bare metal and edge environments. With Cloudfleet’s API-first design, you can seamlessly integrate your existing technology stack and upgrade it to cloud-native infrastructure. This enables rapid, cost-effective adoption of enterprise-grade cloud technologies - without disruption or downtime.
Self-managed nodes | Node auto-provisioning
With Cloudfleet Kubernetes Engine and VMware vSphere, you get the best of both worlds. Cloudfleet creates highly available Kubernetes clusters within your VMware vSphere environment, allowing you to easily deploy and scale clusters of any size. Clusters automatically scale up or down based on workload demands, ensuring optimal resource utilization.
Self-managed nodes | Node auto-provisioning
Deploying Kubernetes on-premises or at the edge doesn’t have to be complex. By combining Cloudfleet Kubernetes Engine with Proxmox VE, you get a seamless, cost-effective solution for managing containerized workloads. Proxmox VE provides a robust, open-source virtualization platform, while Cloudfleet simplifies cluster deployment, scaling, and management.
Self-managed nodes
Cloudfleet is the simplest way to deploy highly available Kubernetes clusters on Equinix Metal. With Cloudfleet, you get a fully managed Kubernetes experience on Equinix, enabling you to effortlessly manage workloads across multiple locations. Leverage Equinix Metal’s unmatched global reach and connectivity ecosystem to ensure high performance, scalability, and seamless multi-region operations.
A hybrid cloud is an IT architecture that integrates public cloud, private cloud, and on-premises infrastructure, enabling seamless workload portability, automation, and orchestration across environments. It optimizes costs by leveraging public cloud scalability while maintaining control over sensitive workloads in private infrastructure. This approach ensures flexibility, resilience, and regulatory compliance without vendor lock-in.
By unifying multiple cloud environments, a hybrid cloud enhances operational efficiency through intelligent workload distribution and dynamic scaling. It enables businesses to implement a cloud strategy tailored to performance, security, and budgetary requirements while supporting multi-cloud deployments. With automation and centralized management, hybrid cloud architectures reduce complexity, streamline DevOps workflows, and improve business continuity.
Hybrid cloud and multi-cloud are both cloud strategies but serve distinct purposes. A hybrid cloud integrates private cloud, public cloud, and on-premises infrastructure into a unified environment, enabling seamless workload mobility and centralized management. It optimizes costs by keeping sensitive workloads in private environments while leveraging public cloud resources for scalability. Hybrid cloud is ideal for organizations that require flexibility, security, and compliance without sacrificing automation and efficiency.
Multi-cloud, on the other hand, involves using multiple public cloud providers without necessarily integrating them into a single architecture. This approach mitigates vendor lock-in, enhances resilience, and optimizes performance by selecting the best cloud services for specific workloads. While hybrid cloud focuses on interoperability between private and public resources, multi-cloud emphasizes diversification across multiple cloud platforms. Many enterprises adopt both strategies to maximize cost efficiency, availability, and control.
Kubernetes is ideal for hybrid cloud deployments because it provides a consistent, automated platform for managing workloads across on-premises, private, and public cloud environments. Its container orchestration capabilities ensure seamless workload portability, allowing applications to run anywhere without modification. With built-in automation for scaling, self-healing, and resource optimization, Kubernetes reduces operational overhead while improving performance and reliability.
Its declarative infrastructure management, combined with robust networking and security policies, enables organizations to enforce governance and compliance across hybrid environments. Kubernetes’ support for multi-cloud strategies ensures workload distribution based on cost, performance, or regulatory requirements, preventing vendor lock-in. By unifying hybrid cloud operations, Kubernetes streamlines DevOps workflows, accelerates innovation, and enhances overall infrastructure efficiency.
Companies adopt a hybrid cloud strategy to maximize flexibility, cost efficiency, and security while avoiding vendor lock-in. By combining private infrastructure with public cloud resources, organizations can scale workloads dynamically, optimizing costs without sacrificing control over sensitive data. This approach ensures compliance with regulatory requirements while leveraging cloud elasticity for peak demand and innovation-driven workloads.
Hybrid cloud also enhances resilience and performance by distributing workloads across multiple environments based on latency, security, or operational needs. With automation and centralized management, businesses can streamline DevOps workflows, improve disaster recovery, and accelerate deployment cycles. By adopting a hybrid model, companies gain the agility to innovate while maintaining governance, optimizing IT spending, and ensuring business continuity.
Fine-grained role-based access control (RBAC) with organization and project scopes, least-privilege permissions, and comprehensive audit trails for all user actions.
Enterprise Single Sign-On (SSO) via SAML and OIDC, integrating with Okta, Microsoft Entra ID, Google Workspace, and other compatible identity providers.
Governance, centralized audit logging, and compliance readiness aligned with SOC 2 and ISO 27001 standards (certifications in progress).
Built to support large-scale, mission-critical deployments with dedicated teams, proven processes, and clear operational commitments.
Expert-led architecture, deployment, and migration services to help you design, roll out, and scale Cloudfleet across complex environments.
A dedicated customer success team guiding onboarding, adoption, and long-term success, with access to best practices and operational guidance.
24/7 access to experienced engineers via defined support channels, with clear escalation paths for critical incidents.
Clearly defined Service Level Agreements (SLAs) covering availability, response times, and incident handling for mission-critical workloads.
Cloudfleet seamlessly extends your cluster anywhere, turning any hardware - even in your office or home - into a modern enterprise cluster.
Using Cloudfleet together with Hetzner allowed us to bring up a managed Kubernetes cluster just as quickly as with any US-based hyperscaler, but with the benefit of being EU-hosted, which is very valuable in today's times.
By combining our standardized configurations with the powerful automation features of the Cloudfleet platform, we've built a development workflow that is fast, secure, and incredibly efficient.
Perfect balance between flexibility and managed service - and the support is outstanding.
We deploy to customer infrastructure without changing how we build or manage Kubernetes.
It is easy to build a multi-cloud setup without getting locked into any single provider.
Cloudfleet lets us scale our game servers across multiple providers - reliably and affordably.
Cloudfleet not only cut our infrastructure costs - it saved us hours of work by taking cluster management off our plate.
Run Kubernetes consistently across public cloud, private infrastructure, and on-prem environments, with a single control plane and unified operational model.
Operate, scale, and migrate Kubernetes clusters across environments using a consistent, opinionated platform that reduces operational overhead.
Transparent pricing and infrastructure control help you avoid hyperscaler lock-in, hidden fees, and unexpected cost growth as your workloads scale.
Retain full portability of your clusters, workloads, and tooling by running standard Kubernetes without proprietary extensions or forced dependencies.
Designed for production workloads, with operational tooling, escalation paths, and support processes built to meet enterprise reliability requirements.
Secure by default, with isolation, encryption, and access controls designed to meet the requirements of regulated and security-conscious organizations.
Create your free Cloudfleet Kubernetes cluster in minutes - no setup hassle, no cost. Get started instantly with the always-free Basic plan.