Getting started

This guide is designed to walk new users through the essential first steps of setting up and utilizing the Cloudfleet platform.

At its core, Cloudfleet Kubernetes Engine (CFKE) provides a fully managed Kubernetes service. This service is engineered to run containerized workloads on clusters that can span multiple cloud providers and on-premise setups, offering a single pane of glass for infrastructure management. This approach addresses the common challenge of managing diversified infrastructure environments, each with its own tools and configurations, by providing a unified control plane and consistent operational experience.

Understanding Key Cloudfleet Concepts

Before diving into the practical steps, it’s helpful to understand a few core concepts that are fundamental to the Cloudfleet platform. These concepts are central to how Cloudfleet simplifies and automates Kubernetes management.

Cloudfleet Kubernetes Engine (CFKE)

CFKE is the heart of Cloudfleet’s offering. It is a fully managed Kubernetes service designed to run containerized workloads across different cloud and on-premise infrastructure environments at the same time.

  • Managed Control Plane: A significant feature of CFKE is its managed Kubernetes control plane. This means Cloudfleet handles the setup, maintenance, availability, and scalability of the Kubernetes API servers and the underlying persistence layer. This frees users from the operational overhead of managing the Kubernetes master components, allowing them to focus on their applications.

  • Node Auto-Provisioning: CFKE features a sophisticated Node Auto-Provisioner. This component automates the provisioning and management of worker nodes (the compute infrastructure where applications run). It integrates with major cloud providers, dynamically selects optimal compute instances from the connected accounts, scales them as needed based on workload demands, and optimizes costs. This “hands-free” infrastructure management means users don’t typically need to manually create, size, or upgrade worker nodes.

  • Self-managed nodes: Although node auto-provisioning is the preferred method for operating Cloudfleet on supported public clouds, Cloudfleet also allows adding a VM or bare-metal Linux server to an existing CFKE cluster. This extends Cloudfleet’s capabilities to nearly any infrastructure provider or on-premise environment.

  • Kubernetes Compatibility: CFKE runs upstream Kubernetes and adheres to Kubernetes conformance standards. This ensures that applications deployed on CFKE are portable and can operate with existing Kubernetes tools and plugins without modification.

Fleets

The concept of a “Fleet” is a Cloudfleet-specific abstraction crucial for managing where your Kubernetes worker nodes are provisioned.

A Fleet represents a public cloud account where Cloudfleet is authorized to provision and manage nodes for user workloads. Cloudfleet currently supports major cloud providers like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Hetzner Cloud for Fleets.

This model allows users to connect their own existing cloud provider accounts (e.g., their AWS, GCP, or Hetzner account) to Cloudfleet. Users can then leverage Cloudfleet’s unified API and management capabilities to deploy and monitor applications within their own accounts.

When you connect your cloud account as a Fleet to Cloudfleet, you maintain your existing commercial agreement with the cloud provider, you pay the cloud provider directly for the infrastructure, and your data remains within your control.

Step 1: Creating Your Cloudfleet Organization

The first practical step is to create a Cloudfleet organization. The root account you create together with the organization provides access to the Cloudfleet Console, the central hub for managing organizations, clusters, and other resources.

  1. Visit the Sign-Up Page: Navigate to the Cloudfleet sign-up page.
  2. Enter Your Details: Fill in the required information, such as name, email address, and password.
  3. Activate Your Account: Follow the instructions sent to the provided email address to activate the account.
  4. Access the console: Once the account is set up and activated, you can log in with your credentials to the Cloudfleet dashboard.

Once you have an active organization, you can also invite your team members to the organization for collaboration.

Step 2: Creating Your First CFKE Cluster & Connecting Your Cloud Account

With your organization set up, the next step is to create a Cloudfleet Kubernetes Engine (CFKE) cluster.

When you create a cluster, you get an empty Kubernetes cluster with no nodes. To run applications, this cluster then needs worker nodes, which are provisioned via a “Fleet” that connects to a cloud provider account or on-premises resources.

Creating the CFKE Cluster

A CFKE cluster can be created using the Cloudfleet Console.

  1. Navigate to Clusters: In the Cloudfleet Console, go to the ‘Clusters’ section.
  2. Initiate Cluster Creation: Click the “Create Cluster” button.
  3. Configure Cluster Details:
    • Cluster Type/Tier: The Cloudfleet Console offers different cluster tiers (e.g., Basic, Pro). The Basic Edition is free to start and suitable for exploration, development, and non-critical workloads. You can select Basic to start using Cloudfleet for free and later upgrade to the Pro version.
    • Cluster Name: Choose a human-friendly name for the cluster (e.g. my-first-cfke-cluster).
    • Kubernetes Version: Select the desired Kubernetes release channel. Cloudfleet typically manages your Kubernetes version and upgrades your cluster as new Kubernetes versions become available. Basic clusters always receive the latest version, whereas Pro clusters can control which version they run more strictly.
  4. Create the cluster: Click the “Create” button after reviewing the details. This will trigger the cluster creation process. Provisioning the control plane takes 2-3 minutes. The console will indicate when the cluster is ready.

Understanding and Setting Up a Fleet

Once the cluster’s control plane is active, it requires worker nodes to run applications. This is where Fleets are essential. As previously mentioned, a Fleet represents a connected cloud account or on-premises infrastructure where Cloudfleet is authorized to provision and manage these nodes.

Unlike many Kubernetes solutions, Cloudfleet eliminates the need for upfront node creation and manual capacity planning. Users simply delegate necessary permissions for their cloud accounts to Cloudfleet and begin deploying workloads. Cloudfleet then intelligently determines the optimal number and type of Kubernetes nodes and provisions them automatically. Furthermore, if Cloudfleet detects idle capacity, it removes or consolidates unused nodes, ensuring efficient resource utilization and cost optimization.

To create a fleet:

  1. Open the fleet creation dialog: In the Cloudfleet Console, next to the newly created cluster, click on the “New auto-provisioning fleet” button.

  2. Choose a name and supported providers: You can use any name you want for your fleet and select which of the supported cloud providers this fleet should use.

  3. Provide Cloud Provider Credentials/Permissions: This is the most provider-specific part. Cloudfleet needs authorization to manage resources (like virtual machines, networks, load balancers) in the user’s cloud account. The method varies by provider:

    • Hetzner Cloud:

      • Requires a Hetzner Cloud account and an API Token with “Read & Write” permissions generated from the Hetzner Cloud console.
      • This API token is then pasted into the Cloudfleet Console during Fleet setup.
    • Amazon Web Services (AWS):

      • CFKE uses IAM roles and Workload Identity Federation for secure, credential-less access. Users create an IAM role in their AWS account with specific permissions and a trust policy allowing CFKE’s internal IAM role to assume it.
      • Cloudfleet provides a Terraform module to automate the creation of this IAM role and necessary VPC resources. The output of this module (a fleet_arn) is then used in the Cloudfleet Console.
      • Please visit the Fleet documentation to learn how to do this setup.
    • Google Cloud Platform (GCP):

      • CFKE uses Workload Identity Federation for keyless authentication to your cloud account. Users grant specific IAM roles (e.g. roles/compute.instanceAdmin.v1) to a CFKE-managed service account principal within their GCP project.
      • The GCP Project ID is then provided in the Cloudfleet Console during Fleet setup.
      • Prerequisites include having a default VPC network with automatic subnet creation in the GCP project.
      • Please visit the Fleet documentation to learn more about this setup.
  4. Define Resource Limits (e.g., CPU Limits): During Fleet setup, you will be able to limit the maximum total vCPU count for the nodes Cloudfleet can provision within this Fleet. This acts as an important cost control measure, preventing unintended over-provisioning.

  5. Finalize Fleet Creation: Fill in any other required information (e.g., a name for the Fleet) and click “Create”.

    With the Fleet configured, Cloudfleet now has the authorization and information needed to provision worker nodes in the connected cloud account as applications are deployed to the CFKE cluster.

Step 3: Installing and Configuring the Cloudfleet CLI

The Cloudfleet Command Line Interface (CLI), cloudfleet-cli (often referred to as cloudfleet), is an essential tool for managing Cloudfleet resources from a terminal. It allows users to create and manage clusters, configure kubectl, add self-managed nodes, and interact with the Cloudfleet API programmatically.

Installing the Cloudfleet CLI

Cloudfleet provides installation methods for various operating systems, with package managers being the preferred route for ease of installation and updates.

Cloudfleet CLI is available for macOS as a universal binary.

You can install the CLI using Homebrew:

brew install cloudfleetai/tap/cloudfleet-cli

If you do not use Homebrew, you can download the archive via this link and extract it on your system.

Cloudfleet CLI is available at Cloudfleet’s APT repository for x64 and ARM architectures. You can install it using the following commands:

curl -fsSL https://downloads.cloudfleet.ai/apt/pubkey.gpg | sudo tee /usr/share/keyrings/cloudfleet-archive-keyring.gpg >/dev/null
echo "deb [signed-by=/usr/share/keyrings/cloudfleet-archive-keyring.gpg] https://downloads.cloudfleet.ai/apt stable main" | sudo tee /etc/apt/sources.list.d/cloudfleet.list
sudo apt-get update
sudo apt-get install cloudfleet

Cloudfleet CLI is available as an RPM package.

After downloading the RPM package, you can install it using the following command:

sudo rpm -i cloudfleet.rpm

Cloudfleet CLI is available as a ZIP package.

After downloading the ZIP package, you can extract it and run the cloudfleet binary.

Cloudfleet CLI is available on Winget for both AMD64 and ARM architectures and can be installed with the following command:

winget install Cloudfleet.CLI

Alternatively, you can download the binary directly via the following links:

Download the ZIP file for your platform and extract it to a directory.

After installation, you can verify that the CLI is available from your terminal.
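
For example, either of the following commands should print version or usage information if the installation succeeded:

cloudfleet --version
cloudfleet --help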

Installing kubectl

The Kubernetes command-line tool, kubectl, allows you to run commands against the Kubernetes cluster and works together with Cloudfleet CLI to interact with your CFKE cluster.

If you have been using Kubernetes already, you might have already installed kubectl. If not, there are multiple ways of installing kubectl and you can visit the Kubernetes documentation to learn how to install it based on your platform and preferred method of installation.
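
For illustration, one of the methods from the Kubernetes documentation installs kubectl on Linux (x86-64) by downloading the latest stable binary; consult the official instructions for other platforms and installation methods:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client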

Step 4: Deploying Your First Application

Now that the CFKE cluster is running and a Fleet is configured, it’s time to deploy a sample application. This will demonstrate how Cloudfleet, through Kubernetes, orchestrates workloads and how its Node Auto-Provisioner works. Standard Kubernetes tooling, specifically kubectl, is used for this purpose.

Configuring kubectl for your CFKE Cluster

To interact with the Kubernetes cluster using kubectl, the local kubectl configuration (kubeconfig) must be updated to point to the new CFKE cluster. Additionally, the Cloudfleet CLI must be configured with your account to enable authentication against the cluster.

To perform these steps, click the “Connect to cluster” button. A wizard will guide you through the CLI tool configuration.

You can skip the first step if you have already downloaded the Cloudfleet CLI.

The next step will present two commands:

cloudfleet auth add-profile user default YOUR_ORGANIZATION_ID

Running this command configures a profile named “default” for your CLI. If you have access to multiple organizations, you can configure your CLI for them as well, but “default” will suffice for now. Your organization ID can be found in the console by navigating to Billing -> Payment and viewing it under the Billing Contact Information section.

The same wizard will display another command:

cloudfleet clusters kubeconfig YOUR_CLUSTER_ID

This command adds your CFKE cluster to your local kubeconfig file.
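
You can confirm that the cluster was added and is the active context with the standard kubectl configuration commands:

kubectl config get-contexts
kubectl config current-context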

You can now verify the connectivity by executing the following command:

kubectl cluster-info

This should display information about the Kubernetes control plane. Another useful command is:

kubectl get nodes

Initially, if no workloads have been deployed, this command will return no nodes. As previously explained, CFKE worker nodes are provisioned on-demand by the Node Auto-Provisioner. When there is no active workload in the cluster, the entire cluster scales down to zero nodes.
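
With no workloads deployed yet, the command typically prints something similar to:

No resources found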

Deploying a Sample Application

A common “hello world” example for Kubernetes is deploying an Nginx web server.

Create a Deployment: Use the following kubectl command to create a Kubernetes Deployment:

kubectl create deployment nginx-deployment --image=nginx:latest --replicas=2

This command instructs Kubernetes to:

  • create deployment nginx-deployment: Create a new Deployment object named nginx-deployment.
  • --image=nginx:latest: Use the latest official Nginx image from Docker Hub.
  • --replicas=2: Request two running instances (Pods) of the Nginx application. Deploying two replicas, rather than one, introduces a basic level of availability and better demonstrates how Kubernetes distributes workloads, potentially across multiple auto-provisioned nodes.
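
If you prefer the declarative approach, the command above corresponds roughly to the following Deployment manifest (kubectl create deployment also labels the Pods with app: nginx-deployment, which the manifest reproduces). Save it as nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2                      # two Nginx Pods for basic availability
  selector:
    matchLabels:
      app: nginx-deployment
  template:
    metadata:
      labels:
        app: nginx-deployment
    spec:
      containers:
        - name: nginx
          image: nginx:latest      # latest official Nginx image from Docker Hub

Then apply it with:

kubectl apply -f nginx-deployment.yaml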

Checking Your Deployment

After issuing the deployment command, Kubernetes will begin working to achieve the desired state.

Check Deployment Status:

kubectl get deployments

This command will display the nginx-deployment and its status, indicating the number of replicas that are ready and available.
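
Once both replicas are ready, the output will look similar to the following (the AGE value will differ):

NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2/2     2            2           3m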

Check Pod Status:

kubectl get pods

This command will list the individual Nginx Pods and their current status. Initially, they might go through states such as Pending and ContainerCreating before reaching the Running state.

The Pods may remain in the Pending state for a short period while Cloudfleet’s Node Auto-Provisioner provisions new worker nodes in the configured Fleet to accommodate them. This is expected behavior and demonstrates a key automation feature of CFKE.
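
You can watch the Pods move through these states, or inspect the scheduling events of a Pending Pod, with the following commands (kubectl create deployment labels the Pods with app=nginx-deployment):

kubectl get pods --watch
kubectl describe pods -l app=nginx-deployment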

Understanding Node Auto-Provisioning in Action

When the nginx-deployment was created, CFKE’s Node Auto-Provisioner detected the new workload requirements (two Nginx Pods needing resources).

If the cluster had no suitable worker nodes, or if existing nodes were at capacity, the Node Auto-Provisioner would automatically:

  1. Calculate the optimal type and size of virtual machines needed from the configured Fleet (e.g., specific instance types from AWS, GCP, or Hetzner).
  2. Provision these new nodes.
  3. Once the nodes join the cluster and are ready, Kubernetes schedules the Nginx Pods onto them.
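
You can watch the new worker nodes appear as they are provisioned and join the cluster, and inspect their details once they are ready:

kubectl get nodes --watch
kubectl get nodes -o wide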

Node auto-provisioning decides which nodes to create based on the scheduling constraints (such as node selectors) declared on the Pods. In this example, we did not add any constraints to the Pods, so CFKE created the nodes using the cheapest available option. Since CFKE is a global solution, it also finds the cheapest region and runs the workload there.

However, we often want to influence in which cloud or geography a workload runs. In that case, we can use scheduling constraints on the Pods to instruct CFKE to deploy the workloads only in those specific places. To learn more about the large set of options that CFKE offers for workload scheduling, please visit the node auto-provisioning documentation.
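
As an illustrative sketch, the following Deployment adds a nodeSelector with well-known Kubernetes node labels to constrain where nodes are provisioned. The label values shown (an ARM architecture and an example region name) are assumptions for illustration; the CFKE-specific labels that node auto-provisioning honors, such as cfke.io/accelerator-name, are listed in the node auto-provisioning documentation.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-constrained
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-constrained
  template:
    metadata:
      labels:
        app: nginx-constrained
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64                    # schedule only on ARM nodes
        topology.kubernetes.io/region: eu-central-1  # example region value
      containers:
        - name: nginx
          image: nginx:latest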

Next Steps: Continue Your Cloudfleet Journey

Congratulations! You have successfully navigated the initial steps of using Cloudfleet: creating an account, installing the CLI, launching a CFKE cluster, configuring a Fleet, and deploying a sample application. This provides a foundational understanding of Cloudfleet’s workflow and its power in simplifying Kubernetes management.

Explore More Cloudfleet Features

Cloudfleet offers a rich set of features designed for robust, scalable, and flexible Kubernetes operations. Here are some areas for further exploration:

  • Deeper Dive into Node Auto-Provisioning: Learn more about how CFKE intelligently manages worker nodes, including how to use scheduling constraints, taints, tolerations, and affinity/anti-affinity rules to influence node selection and workload placement. The documentation includes references for specific labels like cfke.io/accelerator-name or kubernetes.io/arch that can be used for fine-grained control.

  • Advanced Networking: Explore Cloudfleet’s capabilities for global secure networking, which connects nodes across various clouds and regions as if they were on the same network. Learn about multi-region/multi-cloud load balancing in more detail and options for integrating with on-premises networks, potentially using BGP for on-premises load balancing.

  • Hybrid and On-Premises Deployments: Discover how to integrate self-managed Linux servers (from on-premises data centers or unsupported cloud providers) into a CFKE cluster, extending Cloudfleet’s management to existing infrastructure. Tutorials may cover specific scenarios, such as deploying Kubernetes on Proxmox.

  • Accessing Cloud APIs Securely: Learn how CFKE facilitates secure access from workloads running on CFKE to public cloud provider APIs (e.g., AWS S3, GCP BigQuery, Azure Blob Storage) using OIDC, eliminating the need for hard-coded credentials in applications.

  • GPU-Accelerated Workloads: For AI/ML tasks, learn how to use GPUs with Cloudfleet, potentially leveraging specialized providers like Lambda Cloud or specific GPU instances on AWS/GCP.

  • Integrating Cloudfleet with CI/CD: Explore how to integrate Cloudfleet with CI/CD pipelines.

Further Resources

To continue learning and make the most of Cloudfleet:

  • Cloudfleet Documentation: The official documentation is the comprehensive resource for all Cloudfleet features and configurations: https://cloudfleet.ai/docs

  • Cloudfleet Tutorials: A growing collection of practical guides and step-by-step instructions for various use cases, from general Kubernetes operations to specific integrations and AI/ML setups: https://cloudfleet.ai/tutorials/

  • API Reference: For users interested in programmatic interaction and automation, the API reference provides details on Cloudfleet API endpoints: https://cloudfleet.ai/docs/reference/api-reference/

  • Support: If you need assistance or answers to specific questions, feel free to reach out to the Cloudfleet team by contacting support.

The Cloudfleet team is dedicated to making the Kubernetes experience seamless and powerful, from datacenters to the cloud and the edge. This guide is just the beginning of what can be achieved with the platform.