What is Cloudfleet?

Cloudfleet’s mission is to transform infrastructure management by delivering seamless, automated, and scalable solutions that unify resources across datacenters, clouds, and edge environments. We aim to provide just-in-time infrastructure, automated upgrades, and advanced permissions management, all through a single, intuitive interface.

The Cloudfleet Kubernetes Engine (CFKE) is a fully managed Kubernetes service designed to run containerized workloads on clusters spanning multiple cloud providers and on-premise environments. CFKE centralizes Kubernetes control with a single managed control plane, capable of overseeing nodes distributed across diverse platforms and infrastructure setups.

Kubernetes is an open-source system that automates the deployment, scaling, and management of containerized applications. Cloudfleet delivers a fully CNCF-conformant Kubernetes service, ensuring that any workload compatible with Kubernetes can seamlessly operate on Cloudfleet Kubernetes Engine (CFKE).

Cloudfleet Kubernetes Engine (CFKE) architecture

What is Cloudfleet good for?

Cost Optimization

Cloudfleet empowers organizations to optimize the cost-efficiency of their entire application portfolio across all dimensions. Instead of requiring you to define rigid capacity sizes upfront when creating your Kubernetes cluster—before you even deploy your first workload—Cloudfleet works backward from your actual workload requirements. You deploy your applications on Cloudfleet, and we determine the optimal size, cost, and location of the infrastructure needed to run them. You always have the option to define specific constraints, such as running your application in a particular jurisdiction or on a specific machine type, while Cloudfleet handles the rest.

By adding simple node selection labels to your pod specifications, you can seamlessly move your workloads to spot instances, to newer and more cost-effective instance types, or even to different architectures such as ARM.
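
As an illustrative sketch (not taken from the CFKE documentation), a Deployment that targets ARM spot capacity could use node selectors like the following. `kubernetes.io/arch` is a well-known Kubernetes label; the capacity-type key shown is the Karpenter-style `karpenter.sh/capacity-type` and is an assumption, so check the Node auto-provisioning documentation for the exact labels CFKE recognizes.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64          # well-known Kubernetes architecture label
        karpenter.sh/capacity-type: spot   # assumed spot-capacity key; confirm the exact label in the CFKE docs
      containers:
        - name: api
          image: registry.example.com/api:latest  # hypothetical image
          resources:
            requests:
              cpu: 500m
              memory: 512Mi
```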

Multi-Cloud Flexibility

Cloudfleet avoids locking you into a specific cloud provider, region, or data center. Instead, you connect your multiple cloud accounts to Cloudfleet, which then provides a consistent experience across all of them. This allows you to run your workloads simultaneously on different clouds or easily migrate them between cloud environments.

Cloudfleet differentiates itself from other multi-cloud Kubernetes solutions through its unique architecture. It’s not just a Kubernetes distribution compatible with various clouds; Cloudfleet offers a single, managed cluster that can simultaneously manage compute power across different clouds. This enables you to interact with one unified Kubernetes cluster to deploy and monitor applications on multiple clouds and seamlessly move workloads between them.
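
For illustration, standard Kubernetes topology spread constraints can distribute replicas across whatever regions the cluster's nodes run in. The manifest below is a generic sketch using the well-known `topology.kubernetes.io/region` node label rather than CFKE-specific configuration.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/region  # well-known node label, set per provider region
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.27
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```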

Hybrid Cloud Integration

In addition to public clouds, Cloudfleet offers first-class support for on-premise infrastructure. You can add any Linux server to your Cloudfleet Kubernetes cluster, managing your on-premise assets in the same way you manage public cloud resources. Just as Cloudfleet connects multiple clouds and regions, it also securely connects your multiple data centers, enabling you to manage your on-premise capacity via a single Kubernetes cluster.

For example, if most of your applications run in your on-premise data centers, but you have a specific machine learning training job requiring custom hardware that is only available from a public cloud provider, Cloudfleet allows you to run your applications as usual while “spilling over” to the cloud for only as long as that job requires.
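
A minimal sketch of such a spill-over job, assuming the standard `nvidia.com/gpu` device-plugin resource and a hypothetical training image: the GPU request can only be satisfied by cloud-provisioned nodes, so this job bursts to the cloud while everything else stays on-premise.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: model-training
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: train
          image: registry.example.com/train:latest  # hypothetical training image
          resources:
            requests:
              cpu: "8"
              memory: 32Gi
            limits:
              nvidia.com/gpu: 1  # standard device-plugin resource; satisfied only by GPU-equipped cloud nodes
```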

Edge Computing Capabilities

If you manage applications across multiple sites, you can add your edge infrastructure as Kubernetes nodes to Cloudfleet. Your edge locations connect securely to each other and to Cloudfleet’s control plane, allowing you to manage all your edge locations through a single Kubernetes interface. Your edge locations only need a simple internet connection to become part of Cloudfleet.

For instance, if you are a manufacturing company with multiple production sites, each with its own mini data center, you can use Cloudfleet to deploy and manage the lifecycle of your applications across these sites and monitor all workloads from your headquarters.

Features of Cloudfleet

Node auto-provisioning

Unlike traditional Kubernetes, you don’t provision nodes upfront with CFKE on supported cloud providers (AWS, GCP, and Hetzner Cloud). Instead, you deploy your workloads and Cloudfleet automatically provisions the optimal nodes to run them. The system dynamically selects the most cost-effective compute instances, scales resources as needed, and removes unused nodes to minimize costs.

This eliminates upfront capacity planning entirely. You no longer need to predict future resource needs or make infrastructure sizing decisions before understanding actual workload patterns. Simply deploy your applications and Cloudfleet determines the right infrastructure configuration. As applications grow or shrink, nodes automatically adjust to match demand.

Cost optimization happens automatically through intelligent instance selection and aggressive scale-down policies. Cloudfleet continuously evaluates available instance types across configured cloud providers, selecting the most cost-effective options. The platform can even choose the cheapest cloud region globally unless you specify geographic constraints. When pods are deleted or scaled down, nodes are promptly removed rather than sitting idle. When no workloads are running, clusters scale to zero nodes, so you pay nothing for compute while the cluster is idle.
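
A minimal sketch of this workflow, using only hypothetical names: the Deployment declares nothing but resource requests, leaving Cloudfleet free to pick instance type, price, and region; the commented-out selector shows how a geographic constraint could be expressed with the well-known region label.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  replicas: 4
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      # No node selection at all: the provisioner chooses instance type, price, and region.
      # To pin a geography, a well-known region label could be added, for example:
      # nodeSelector:
      #   topology.kubernetes.io/region: eu-central-1   # placeholder region value
      containers:
        - name: worker
          image: registry.example.com/worker:latest  # hypothetical image
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```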

Cloudfleet handles the entire node lifecycle automatically, not just initial provisioning. Nodes are automatically updated when new operating system patches or Kubernetes versions become available. The system continuously monitors node health, detecting and replacing unhealthy nodes before they impact application availability. When using spot instances or preemptible VMs for cost savings, Cloudfleet automatically provisions replacement nodes if the cloud provider reclaims capacity.

Cloudfleet continuously adds native integrations for more cloud providers. For more information, refer to the Node auto-provisioning documentation.

Managed Control Plane

CFKE provides a managed Kubernetes control plane that ensures high availability and scalability of Kubernetes API servers and the persistence layer. In the Pro version and above, CFKE deploys the control plane across multiple Availability Zones (AZs) to enhance resilience and automatically replaces unhealthy control plane nodes.

Kubernetes Compatibility and Support

CFKE runs upstream Kubernetes and adheres to Kubernetes conformance standards. This compatibility allows you to leverage existing plug-ins and tools from the Kubernetes ecosystem. Applications deployed on CFKE can seamlessly operate alongside or migrate to other standard Kubernetes environments, whether on-premises or in public clouds, without requiring code refactoring. For further details, refer to the Kubernetes version lifecycle and Kubernetes version management documentation.

Free to Start

Cloudfleet offers a Basic Edition that allows you to explore the platform and its features without incurring any costs. The Basic Edition includes a fully managed Kubernetes control plane and a limited number of compute nodes, enabling you to deploy and run containerized workloads across multiple cloud providers and on-premise environments. CFKE Basic Edition is used by many customers for development, testing, and small non-critical workloads. For more information, refer to the Pricing documentation.

Hybrid deployments

Cloudfleet enables hybrid deployments by allowing self-managed Linux servers to join your cluster. This powerful feature lets customers integrate infrastructure from any cloud provider, on-premises data centers, or edge locations. Self-managed nodes can be provisioned using Terraform for automation or added manually, making Cloudfleet truly infrastructure-agnostic beyond the natively supported cloud providers. For more information, refer to the self-managed nodes documentation.

Global Secure Networking

CFKE establishes a secure, encrypted network connecting nodes across various clouds and regions. This capability enables a unified Kubernetes cluster that spans different cloud providers and on-premise environments, allowing workloads across regions and datacenters to operate as if on the same network. For more information, refer to the Networking Architecture documentation.
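
As a generic illustration of what this enables, a standard ClusterIP Service works unchanged even when its backing pods run in different clouds or data centers, because all nodes share one encrypted network. The names below are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: inventory
spec:
  selector:
    app: inventory   # backing pods may run in any connected cloud or data center
  ports:
    - port: 80
      targetPort: 8080
```

Pods anywhere in the cluster can reach this Service through normal cluster DNS, for example at http://inventory within the same namespace, regardless of where the backing pods were scheduled.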

Access to Public Cloud APIs

CFKE simplifies the use of public cloud APIs by providing secure access from workloads running on CFKE without the need for hard-coded credentials. Utilizing OIDC for integration, CFKE supports interaction with services from providers such as AWS, GCP, and Azure, as well as any third-party APIs compatible with OIDC. For more information, refer to the Public Cloud APIs documentation.
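
The sketch below illustrates the general pattern with plain Kubernetes building blocks and placeholder names (the service account, image, and IAM role ARN are hypothetical): a projected service-account token with an STS audience lets the AWS SDKs assume a role via web identity federation, with no long-lived credentials in the pod. CFKE may automate parts of this, so treat it as an assumption and follow the Public Cloud APIs documentation for the exact setup.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: aws-client
spec:
  serviceAccountName: billing-reader          # hypothetical service account
  containers:
    - name: app
      image: registry.example.com/app:latest  # hypothetical image
      env:
        - name: AWS_WEB_IDENTITY_TOKEN_FILE   # standard variable read by AWS SDKs
          value: /var/run/secrets/tokens/oidc-token
        - name: AWS_ROLE_ARN                  # placeholder role that trusts the cluster's OIDC issuer
          value: arn:aws:iam::123456789012:role/cfke-billing-reader
      volumeMounts:
        - name: oidc-token
          mountPath: /var/run/secrets/tokens
          readOnly: true
  volumes:
    - name: oidc-token
      projected:
        sources:
          - serviceAccountToken:
              path: oidc-token
              audience: sts.amazonaws.com     # audience expected by AWS STS for web identity federation
              expirationSeconds: 3600
```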

Frequently Asked Questions

Q: What can I do with Cloudfleet Kubernetes Engine (CFKE)?

With Cloudfleet Kubernetes Engine, you can run any containerized workloads on a Kubernetes cluster that spans multiple cloud providers and on-premise environments. CFKE provides a single Kubernetes control plane that can manage nodes in different cloud providers and on-premise environments.

You may find CFKE useful if you need to support multiple cloud providers or need to distribute workloads geographically across the same cloud provider. You can also use CFKE to run workloads primarily on-premise and burst to the cloud when needed. CFKE makes migrations between cloud providers and on-premise environments seamless.

CFKE fits into many use cases where you need to run workloads on multiple cloud providers and on-premise environments.

Q: What are the differences between Cloudfleet Kubernetes Engine (CFKE) and other multi-cloud Kubernetes solutions?

CFKE provides a single Kubernetes control plane that can manage nodes in different cloud providers and on-premise environments at the same time. This allows you to have a single Kubernetes cluster that can span multiple cloud providers and on-premise environments. CFKE also provides a single pane of glass to manage your infrastructure across different cloud providers and on-premise environments.

Under CFKE, workloads can communicate with each other over a secure overlay network in the same IP space. This allows you to run workloads on different cloud providers and on-premise environments as if they are in the same network.

Other solutions require you to have multiple Kubernetes clusters, one for each cloud provider or on-premise environment. This makes it difficult to manage and maintain the infrastructure across different environments and requires migration efforts to move workloads between different clusters.

Q: How long does it take to create a Cloudfleet Kubernetes Engine (CFKE) cluster?

Creating a CFKE cluster takes a little less than two minutes.

Q: How do I add nodes to my Cloudfleet Kubernetes Engine (CFKE) cluster?

Unlike traditional Kubernetes offerings, CFKE assumes the responsibility of managing the infrastructure in your cluster based on your workload specifications.

CFKE can automatically provision nodes in supported cloud providers (currently AWS, GCP, and Hetzner Cloud) based on your workload specifications. With node auto-provisioning, you do not need to decide on the number or type of nodes to add to your cluster: CFKE automatically provisions nodes to match your workload requirements and then upgrades and repairs them. The only thing you need to do is configure a Fleet in the CFKE console. A Fleet represents a collection of nodes that are managed by CFKE in one or more supported cloud providers. See the Fleet and Fleet Types section for more information.

Once a Fleet is configured, you deploy your Kubernetes workloads to the cluster. Based on different factors such as the number of pods, CPU and memory requirements, CFKE will automatically provision the nodes in the cloud provider that you have configured in the Fleet. CFKE offers a very large set of configurations that you can use to fine-tune the node provisioning process. See the Node auto-provisioning section for more information.

For infrastructure beyond the natively supported cloud providers, CFKE supports adding self-managed nodes. Self-managed nodes are any Linux servers that you add to your CFKE cluster, whether from other cloud providers, on-premises data centers, or edge locations. This makes Cloudfleet truly infrastructure-agnostic. Self-managed nodes can be provisioned automatically using Terraform or added manually with just a few simple steps. Cloudfleet continuously works to add native integrations for more cloud providers, but self-managed nodes provide immediate support for any infrastructure. See the self-managed nodes section for more information.

Q: How much does Cloudfleet Kubernetes Engine (CFKE) cost?

Cloudfleet offers a generous free tier that includes one Basic Tier cluster with up to 24 vCPUs of compute nodes. The Basic Tier is suitable for development, testing, and small non-critical workloads. For critical production workloads, we recommend the Pro Tier, which offers better availability and support. The Pro Tier charges a control plane fee: the first 24 vCPUs are free, and additional usage is billed based on the number of vCPUs managed in your cluster.

Please visit the Pricing section to learn more about the different tiers and their pricing.

Q: How do I get started with Cloudfleet Kubernetes Engine?

To get started with Cloudfleet, you need to create an organization or be invited to an existing one. Please visit the Getting Started section to learn how to create a new organization.

Q: What if I need nodes from an unsupported cloud provider?

Node auto-provisioning currently works with AWS, GCP, and Hetzner Cloud. For any other infrastructure, you can add nodes using self-managed nodes. This approach supports any cloud provider, on-premises servers, or edge devices. Cloudfleet continuously adds native integrations for more cloud providers, but self-managed nodes (particularly with Terraform) provide a powerful solution for any infrastructure not yet natively supported. See the self-managed nodes section for more information.

Q: Can I mix auto-provisioned and self-managed nodes?

Yes. You can use node auto-provisioning for workloads on supported cloud providers while also having self-managed nodes for specific requirements or unsupported infrastructure. Kubernetes scheduling features let you control which workloads run where.
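
One possible sketch: label your self-managed nodes yourself (the label key and node name below are hypothetical) and pin specific workloads to them with a node selector, leaving everything else to auto-provisioned capacity.

```yaml
# One-time node label (hypothetical key, value, and node name):
#   kubectl label node factory-edge-01 example.com/node-pool=self-managed
apiVersion: v1
kind: Pod
metadata:
  name: plc-gateway
spec:
  nodeSelector:
    example.com/node-pool: self-managed   # hypothetical label identifying self-managed nodes
  containers:
    - name: gateway
      image: registry.example.com/plc-gateway:latest  # hypothetical image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
```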

Q: How long does it take for nodes to appear after deploying a workload?

Typically 2-4 minutes from when a pod enters Pending state. This includes analyzing pod requirements (seconds), provisioning cloud infrastructure (1-3 minutes), and joining the node to the cluster (30-60 seconds).

Q: What happens if my Fleet hits its CPU limit?

Pods will remain in Pending state until resources become available. You can either increase the Fleet’s CPU limit, add another Fleet with a different cloud provider, or delete some existing workloads to free up capacity.

Q: How does node auto-provisioning affect my cloud bill?

You pay your cloud provider directly for the infrastructure Cloudfleet provisions. The difference is that you only pay for nodes while they’re actually needed, rather than maintaining idle capacity. Cloudfleet’s control plane has its own pricing (see Pricing).