Resources
Terraform resources in the Cloudfleet provider allow you to define and manage your infrastructure as code. You can create and configure Kubernetes clusters, set up multi-cloud fleets with automatic node provisioning, and generate node join instructions for any environment, including on-premises hardware and clouds the node autoprovisioner does not support. These resources are essential for provisioning and scaling Cloudfleet environments in a consistent, repeatable way.
cloudfleet_cfke_cluster
Creates and manages Cloudfleet Kubernetes Engine (CFKE) clusters.
Required arguments:
name - Cluster name (1-63 characters; must match ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$)
region - Cloudfleet control plane region (northamerica-central-1 or europe-central-1a)
Optional arguments:
tier - Cluster tier (basic or pro; defaults to basic)
version_channel - Kubernetes version channel (defaults to 1.x.x-cfke.x)
Read-only attributes:
id - Cluster identifier
endpoint - Kubernetes API endpoint
certificate_authority - Base64-encoded CA certificate
kubernetes_version - Current Kubernetes version
status - Cluster status
created_at - Creation timestamp
updated_at - Last update timestamp
Example:
# Basic cluster
resource "cloudfleet_cfke_cluster" "basic" {
  name   = "my-basic-cluster"
  region = "europe-central-1a"
  tier   = "basic"
}

# Pro cluster with a specific Kubernetes version channel
resource "cloudfleet_cfke_cluster" "pro" {
  name            = "my-pro-cluster"
  region          = "northamerica-central-1"
  tier            = "pro"
  version_channel = "1.29.x-cfke.x"
}
cloudfleet_cfke_fleet
Creates and manages multi-cloud infrastructure fleets for CFKE clusters. Fleets enable automatic node provisioning through the node autoprovisioner for supported cloud providers.
Required arguments:
name - Fleet name
cluster_id - Cluster identifier this fleet belongs to
Optional arguments:
limits - Fleet resource limits block
  cpu - CPU limit in cores (required within the limits block)
aws - AWS fleet configuration block
  role_arn - AWS IAM role ARN for Karpenter resource management
gcp - GCP fleet configuration block
  project_id - GCP project ID for instance deployment
hetzner - Hetzner fleet configuration block
  api_key - Hetzner Cloud API key with read/write access
Read-only attributes:
id - Fleet identifier
Example:
# AWS fleet
resource "cloudfleet_cfke_fleet" "aws_fleet" {
  name       = "aws-production"
  cluster_id = cloudfleet_cfke_cluster.example.id

  limits {
    cpu = 50.0
  }

  aws {
    role_arn = "arn:aws:iam::123456789012:role/CloudfleetKarpenter"
  }
}

# Multi-cloud fleet
resource "cloudfleet_cfke_fleet" "multi_cloud" {
  name       = "multi-cloud-production"
  cluster_id = cloudfleet_cfke_cluster.example.id

  limits {
    cpu = 100.0
  }

  aws {
    role_arn = "arn:aws:iam::123456789012:role/CloudfleetKarpenter"
  }

  gcp {
    project_id = "my-gcp-project-123"
  }

  hetzner {
    api_key = var.hetzner_api_key
  }
}
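The Hetzner API key above is read from a Terraform variable; declaring it as sensitive keeps it out of plan output:

variable "hetzner_api_key" {
  description = "Hetzner Cloud API key with read/write access"
  type        = string
  sensitive   = true
}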
cloudfleet_cfke_node_join_information
This resource generates cloud-init userdata for joining self-managed nodes to CFKE clusters. It enables you to provision nodes across any cloud provider or on-premises infrastructure, extending Cloudfleet beyond the node autoprovisioner’s supported platforms.
Required arguments:
cluster_id - Cloudfleet cluster ID to generate userdata for
zone - Availability zone for topology labels
region - Region for topology labels
Optional arguments:
base64_encode - Encode the userdata with base64 (defaults to false)
gzip - Compress the userdata with gzip (defaults to false)
install_nvidia_drivers - Install NVIDIA drivers on the node (defaults to false)
node_labels - Additional Kubernetes node labels (map of strings)
Read-only attributes:
id - Resource identifier
rendered - Generated cloud-init userdata
join_info_hash - Hash of the join information
Example:
# Basic node join information
resource "cloudfleet_cfke_node_join_information" "basic" {
  cluster_id = cloudfleet_cfke_cluster.example.id
  zone       = "us-west-2a"
  region     = "us-west-2"
}

# GPU node with NVIDIA drivers and custom labels
resource "cloudfleet_cfke_node_join_information" "gpu_node" {
  cluster_id             = cloudfleet_cfke_cluster.example.id
  zone                   = "us-west-2a"
  region                 = "us-west-2"
  install_nvidia_drivers = true

  node_labels = {
    "node-type"     = "gpu-worker"
    "environment"   = "production"
    "workload-type" = "machine-learning"
  }
}
Multi-cloud provisioning examples
The cloudfleet_cfke_node_join_information resource works with any platform that supports cloud-init. The examples below pair it with Hetzner Cloud, Scaleway, and VMware vSphere, and assume the corresponding Terraform providers are installed, as sketched next.
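The registry sources below are the standard ones for each third-party provider used in these examples; version constraints are omitted and should be pinned to whatever you test against:

terraform {
  required_providers {
    hcloud = {
      source = "hetznercloud/hcloud"
    }
    scaleway = {
      source = "scaleway/scaleway"
    }
    vsphere = {
      source = "hashicorp/vsphere"
    }
  }
}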
Hetzner Cloud
resource "cloudfleet_cfke_node_join_information" "hetzner" {
cluster_id = cloudfleet_cfke_cluster.example.id
region = "nbg1"
zone = "nbg1-dc3"
node_labels = {
"cfke.io/provider" = "hetzner"
}
}
resource "hcloud_server" "worker" {
name = "cfke-worker-hetzner"
image = "ubuntu-24.04"
server_type = "cx22"
datacenter = "nbg1-dc3"
user_data = cloudfleet_cfke_node_join_information.hetzner.rendered
public_net {
ipv4_enabled = true
ipv6_enabled = true
}
}
Scaleway
resource "cloudfleet_cfke_node_join_information" "scaleway" {
cluster_id = cloudfleet_cfke_cluster.example.id
region = "fr-par"
zone = "fr-par-1"
node_labels = {
"cfke.io/provider" = "scaleway"
}
# Scaleway requires uncompressed userdata
base64_encode = false
gzip = false
}
resource "scaleway_instance_server" "worker" {
name = "cfke-worker-scaleway"
type = "DEV1-M"
image = "ubuntu_jammy"
user_data = {
cloud-init = cloudfleet_cfke_node_join_information.scaleway.rendered
}
}
VMware vSphere
resource "cloudfleet_cfke_node_join_information" "vmware" {
cluster_id = cloudfleet_cfke_cluster.example.id
region = "datacenter-1"
zone = "rack-a"
node_labels = {
"cfke.io/provider" = "vmware"
"cfke.io/environment" = "on-premises"
}
}
resource "vsphere_virtual_machine" "worker" {
name = "cfke-worker-vmware"
resource_pool_id = data.vsphere_compute_cluster.cluster.resource_pool_id
datastore_id = data.vsphere_datastore.datastore.id
num_cpus = 4
memory = 8192
network_interface {
network_id = data.vsphere_network.network.id
}
disk {
label = "disk0"
size = 50
}
clone {
template_uuid = data.vsphere_virtual_machine.ubuntu_template.id
}
extra_config = {
"guestinfo.userdata.encoding" = "gzip+base64"
"guestinfo.userdata" = cloudfleet_cfke_node_join_information.vmware.rendered
"guestinfo.metadata" = base64gzip(templatefile("metadata.tftpl", {
instance_id = "cfke-worker-vmware"
hostname = "cfke-worker-vmware.local"
}))
}
}
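The metadata.tftpl file referenced above is not shown on this page; a minimal sketch, assuming the standard cloud-init metadata keys, could look like:

instance-id: ${instance_id}
local-hostname: ${hostname}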
Global multi-cloud deployment
# Define regions for each cloud provider
locals {
  regions = {
    aws = {
      region = "us-west-2"
      zones  = ["us-west-2a", "us-west-2b"]
    }
    gcp = {
      region = "us-central1"
      zones  = ["us-central1-a", "us-central1-b"]
    }
    hetzner = {
      region = "nbg1"
      zones  = ["nbg1-dc3"]
    }
    scaleway = {
      region = "fr-par"
      zones  = ["fr-par-1", "fr-par-2"]
    }
  }
}

# Generate join information for each provider
resource "cloudfleet_cfke_node_join_information" "multi_cloud" {
  for_each = local.regions

  cluster_id = cloudfleet_cfke_cluster.global.id
  region     = each.value.region
  zone       = each.value.zones[0]

  node_labels = {
    "cfke.io/provider"   = each.key
    "cfke.io/region"     = each.value.region
    "cfke.io/deployment" = "global-production"
  }
}
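# Optionally export the rendered userdata per provider for use outside
# Terraform. Marking it sensitive is an assumption that the rendered
# document embeds cluster join credentials.
output "join_userdata" {
  value     = { for k, v in cloudfleet_cfke_node_join_information.multi_cloud : k => v.rendered }
  sensitive = true
}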
# AWS instances
resource "aws_instance" "workers" {
  count         = 3
  ami           = "ami-0c2b8ca1dad447f8a"
  instance_type = "m5.large"
  user_data     = cloudfleet_cfke_node_join_information.multi_cloud["aws"].rendered

  tags = {
    Name = "cfke-worker-aws-${count.index + 1}"
  }
}

# GCP instances
resource "google_compute_instance" "workers" {
  count        = 3
  name         = "cfke-worker-gcp-${count.index + 1}"
  machine_type = "e2-standard-4"
  zone         = local.regions.gcp.zones[0]

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2404-lts"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  metadata = {
    user-data = cloudfleet_cfke_node_join_information.multi_cloud["gcp"].rendered
  }
}

# Hetzner instances
resource "hcloud_server" "workers" {
  count       = 2
  name        = "cfke-worker-hetzner-${count.index + 1}"
  image       = "ubuntu-24.04"
  server_type = "cx22"
  datacenter  = "nbg1-dc3"
  user_data   = cloudfleet_cfke_node_join_information.multi_cloud["hetzner"].rendered
}

# Scaleway instances
resource "scaleway_instance_server" "workers" {
  count = 2
  name  = "cfke-worker-scaleway-${count.index + 1}"
  type  = "DEV1-M"
  image = "ubuntu_jammy"

  user_data = {
    cloud-init = cloudfleet_cfke_node_join_information.multi_cloud["scaleway"].rendered
  }
}
Workload targeting
Target specific clouds or hardware types for your workloads:
# Deploy to AWS nodes only
resource "kubernetes_deployment" "aws_workload" {
  metadata {
    name      = "aws-specific-app"
    namespace = "production"
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "aws-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "aws-app"
        }
      }

      spec {
        node_selector = {
          "cfke.io/provider" = "aws"
        }

        container {
          name  = "app"
          image = "nginx:latest"
        }
      }
    }
  }
}
# Deploy to GPU nodes across all clouds
resource "kubernetes_deployment" "ml_workload" {
  metadata {
    name      = "ml-training"
    namespace = "ml"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "ml-training"
      }
    }

    template {
      metadata {
        labels = {
          app = "ml-training"
        }
      }

      spec {
        node_selector = {
          "cfke.io/accelerator-manufacturer" = "NVIDIA"
        }

        container {
          name  = "trainer"
          image = "tensorflow/tensorflow:latest-gpu"

          resources {
            limits = {
              "nvidia.com/gpu" = "1"
            }
          }
        }
      }
    }
  }
}
This approach provides flexibility to run Kubernetes workloads across any infrastructure platform while maintaining centralized management through Cloudfleet.
cloudfleet_cfke_self_managed_node
Provisions Cloudfleet Kubernetes Engine (CFKE) self-managed nodes via SSH. This resource connects directly to existing infrastructure and configures it as a Kubernetes node, enabling you to integrate physical servers, existing virtual machines, or any SSH-accessible infrastructure into your CFKE cluster.
Required arguments:
cluster_id - Cloudfleet cluster ID to provision the node for
region - Region for the node (added as topology labels)
zone - Availability zone for the node (added as topology labels)
Optional arguments:
install_nvidia_drivers - Install NVIDIA drivers on the node (defaults to false)
node_labels - Additional labels to apply to the Kubernetes node (merged with topology labels)
ssh - SSH connection configuration block
SSH configuration block:
host - SSH host address (required)
user - SSH username (required)
password - SSH password for authentication (optional, sensitive)
port - SSH port (optional, defaults to 22)
private_key_path - Path to the SSH private key file for authentication (optional, sensitive)
Read-only attributes:
id - Resource identifier
join_info_hash - Hash of the join information
Example:
# Basic self-managed node with SSH key authentication
resource "cloudfleet_cfke_self_managed_node" "bare_metal" {
  cluster_id = cloudfleet_cfke_cluster.example.id
  region     = "datacenter-1"
  zone       = "rack-a"

  ssh {
    host             = "192.168.1.100"
    user             = "ubuntu"
    private_key_path = "~/.ssh/id_rsa"
  }
}

# GPU node with NVIDIA drivers and custom labels
resource "cloudfleet_cfke_self_managed_node" "gpu_server" {
  cluster_id             = cloudfleet_cfke_cluster.example.id
  region                 = "datacenter-1"
  zone                   = "rack-b"
  install_nvidia_drivers = true

  node_labels = {
    "hardware-type" = "gpu-server"
    "gpu-model"     = "rtx-4090"
    "environment"   = "production"
  }

  ssh {
    host             = "192.168.1.101"
    user             = "admin"
    private_key_path = "/secure/keys/gpu-server-key"
    port             = 2222
  }
}

# On-premises server with password authentication
resource "cloudfleet_cfke_self_managed_node" "legacy_server" {
  cluster_id = cloudfleet_cfke_cluster.example.id
  region     = "on-premises"
  zone       = "legacy-datacenter"

  node_labels = {
    "server-type" = "legacy-hardware"
    "managed-by"  = "terraform"
  }

  ssh {
    host     = "legacy-server.internal"
    user     = "root"
    password = var.legacy_server_password
  }
}
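The password referenced above comes from a Terraform variable; a sensitive declaration like the following keeps it out of plan output:

variable "legacy_server_password" {
  description = "SSH password for the legacy server"
  type        = string
  sensitive   = true
}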