Introduction

The Cloudfleet Terraform provider lets you manage your entire Kubernetes infrastructure stack as code, including clusters, multi-cloud fleets for node autoprovisioning, and self-managed nodes that extend your reach to any platform. Whether you’re running on AWS, managing on-premises VMware, or scaling across a dozen different cloud providers, everything is managed through consistent Terraform resources.

Cluster provisioning: Create CFKE clusters in any Cloudfleet region with your desired configuration and Kubernetes version.

Fleet configuration: Configure automatic node provisioning across AWS, GCP, and Hetzner with unified resource limits and scaling policies.

Self-managed node provisioning: Generate cloud-init userdata that works with any cloud provider or on-premises infrastructure that supports cloud-init. Add nodes from Scaleway, OVH, VMware vSphere, Proxmox, bare metal servers, and hundreds of other platforms.

Looking for examples for a specific cloud provider? Check out the following documentation:

Provider configuration

Basic setup

Configure the Cloudfleet provider in your Terraform configuration:

terraform {
  required_providers {
    cloudfleet = {
      source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
    }
  }
}

provider "cloudfleet" {
  # If no profile is specified, the provider uses the 'default' profile from the Cloudfleet CLI configuration (typically ~/.cloudfleet/config)
}
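If you want reproducible builds, you can also pin the provider to a known-good release with a version constraint (the constraint shown is illustrative; pin to a release you have tested):

```terraform
terraform {
  required_providers {
    cloudfleet = {
      source  = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
      version = "~> 1.0" # illustrative; replace with the release you use
    }
  }
}
```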

Configuration options

The provider supports multiple authentication methods:

provider "cloudfleet" {
  # Optional: Use specific profile (defaults to 'default')
  profile = "production"

  # Optional: Override credentials
  organization_id     = "your-org-id"
  access_token_id     = "your-token-id"
  access_token_secret = "your-token-secret"
}

Environment variables

Environment variables take priority over provider configuration and are the recommended approach for production environments:

export CLOUDFLEET_ORGANIZATION_ID="your-org-id"
export CLOUDFLEET_ACCESS_TOKEN_ID="your-token-id"
export CLOUDFLEET_ACCESS_TOKEN_SECRET="your-token-secret"
export CLOUDFLEET_PROFILE="production"

Authentication

The provider supports several authentication methods in order of priority:

  1. Environment variables (highest priority): Set the required environment variables
  2. Direct configuration: Specify credentials directly in the provider block using API tokens
  3. Cloudfleet CLI profile: Use cloudfleet auth login to authenticate (see CLI installation guide)
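When using direct configuration, you can avoid hardcoding the secret by passing it through a sensitive Terraform variable (the variable name here is illustrative):

```terraform
variable "cloudfleet_access_token_secret" {
  description = "Cloudfleet API access token secret"
  type        = string
  sensitive   = true
}

provider "cloudfleet" {
  organization_id     = "your-org-id"
  access_token_id     = "your-token-id"
  access_token_secret = var.cloudfleet_access_token_secret
}
```

Supply the value at plan time (for example via `TF_VAR_cloudfleet_access_token_secret`) so it never lands in version control.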

Quick start example

Here’s a simple example to get you started with the Cloudfleet Terraform provider, demonstrating how to create a CFKE cluster and provision nodes on OVH using cloud-init:

terraform {
  required_providers {
    cloudfleet = {
      source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
    }
    ovh = {
      source = "ovh/ovh"
    }
    tls = {
      source = "hashicorp/tls"
    }
  }
}

variable "ovh_application_key" {
  description = "OVH application key for API access"
  type        = string
}

variable "ovh_application_secret" {
  description = "OVH application secret for API access"
  type        = string
  sensitive   = true
}

variable "ovh_consumer_key" {
  description = "OVH consumer key for API access"
  type        = string
}

variable "ovh_service_name" {
  description = "OVH service name for the cloud project"
  type        = string
}

provider "cloudfleet" {}

# Create a CFKE cluster
resource "cloudfleet_cfke_cluster" "example" {
  name   = "my-cluster"
  region = "europe-central-1a"
  tier   = "basic"
}

# Generate cloud-init user-data to be used in self-managed nodes
resource "cloudfleet_cfke_node_join_information" "ovh" {
  cluster_id = cloudfleet_cfke_cluster.example.id
  region     = "DE1"
  zone       = "DE1"

  node_labels = {
    "cfke.io/provider" = "ovh"
  }
}

provider "ovh" {
  endpoint           = "ovh-eu"
  application_key    = var.ovh_application_key
  application_secret = var.ovh_application_secret
  consumer_key       = var.ovh_consumer_key
}

data "ovh_cloud_project_flavors" "flavor" {
  service_name = var.ovh_service_name
  name_filter  = "b2-7"
}

data "ovh_cloud_project_images" "images" {
  service_name = var.ovh_service_name
  region       = "DE1"
  os_type      = "linux"
}

locals {
  flavor_id = [for flavor in data.ovh_cloud_project_flavors.flavor.flavors : flavor.id if flavor.region == "DE1"][0]
  image_id  = [for image in data.ovh_cloud_project_images.images.images : image.id if image.name == "Ubuntu 24.04"][0]
}

# Create an SSH key pair for OVH instances
resource "tls_private_key" "ssh_key" {
  algorithm = "ED25519"
}

resource "ovh_cloud_project_ssh_key" "cfke" {
  service_name = var.ovh_service_name
  name         = "cfke-test"
  public_key   = tls_private_key.ssh_key.public_key_openssh
}

# Create an OVH instance that joins the cluster via cloud-init
resource "ovh_cloud_project_instance" "ovh_self_managed_node" {
  service_name   = var.ovh_service_name
  name           = "self-managed-node-ovh"
  billing_period = "hourly"

  boot_from {
    image_id = local.image_id
  }

  flavor {
    flavor_id = local.flavor_id
  }

  region = "DE1"

  ssh_key {
    name = ovh_cloud_project_ssh_key.cfke.name
  }

  network {
    public = true
  }

  user_data = cloudfleet_cfke_node_join_information.ovh.rendered
}

This example demonstrates how Cloudfleet can provision self-managed nodes on any cloud provider that supports cloud-init user-data.
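If you need to inspect the generated user-data or reuse it outside this configuration, you can expose the `rendered` attribute as a sensitive output (the output name is illustrative):

```terraform
output "cfke_node_userdata" {
  description = "cloud-init user-data for joining self-managed nodes"
  value       = cloudfleet_cfke_node_join_information.ovh.rendered
  sensitive   = true # the user-data contains cluster join credentials
}
```

Marking the output as sensitive keeps the join credentials out of plan and apply logs.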

Using Kubernetes provider

You can use the cloudfleet_client_config data source to configure the Terraform Kubernetes provider to deploy resource manifests to your CFKE cluster:

data "cloudfleet_client_config" "me" {}

# Configure Kubernetes provider using cluster connection details
provider "kubernetes" {
  host                   = cloudfleet_cfke_cluster.example.endpoint
  cluster_ca_certificate = base64decode(cloudfleet_cfke_cluster.example.certificate_authority)
  token                  = data.cloudfleet_client_config.me.access_token
}

# Deploy applications to your cluster
resource "kubernetes_namespace" "app" {
  metadata {
    name = "my-application"
  }
}

resource "kubernetes_deployment" "app" {
  metadata {
    name      = "my-app"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "my-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "my-app"
        }
      }

      spec {
        container {
          image = "nginx:latest"
          name  = "nginx"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}
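To expose the deployment above, you could add a Service that selects the same `app = "my-app"` label (a sketch; adjust the service type to your environment):

```terraform
# Expose the nginx deployment inside the cluster
resource "kubernetes_service" "app" {
  metadata {
    name      = "my-app"
    namespace = kubernetes_namespace.app.metadata[0].name
  }

  spec {
    selector = {
      app = "my-app"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "ClusterIP"
  }
}
```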

Next steps

See the resources and data sources documentation for a complete list of available resources and their configuration options.