OVH

You can integrate OVH Public Cloud instances as worker nodes in your CFKE cluster. The recommended workflow combines the Cloudfleet Terraform provider with cloud-init automation: you provision OVH instances through Terraform and pass them the cloud-init script generated by Cloudfleet, which joins the instances to your existing CFKE cluster.

Requirements

  • Terraform CLI must be installed and available in your environment.
  • Cloudfleet CLI set up as outlined here, or an API token configured per these instructions.
  • Valid OVH account with generated API credentials: application key, application secret, and consumer key. Generate these credentials via the OVH API token creation page.
  • Active OVH Public Cloud project with the corresponding service name (project identifier).
  • An operational CFKE cluster. This guide uses CLUSTER_ID as a placeholder for the cluster ID; you can also provision a new cluster via Terraform by following the Terraform setup guide.

Provisioning OVH instances for your CFKE cluster

Deploy OVH instances and connect them to your CFKE cluster using the Terraform configuration below. Update all variable values to match your environment.

terraform {
  required_providers {
    cloudfleet = {
      source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
    }
    ovh = {
      source = "ovh/ovh"
    }
  }
}

variable "cfke_cluster_id" {
  type    = string
  default = "CFKE Cluster ID"
}

variable "region" {
  type        = string
  default     = "DE1"
  description = "OVH region to deploy the instance"
}

variable "ovh_application_key" {
  description = "OVH application key for API access"
  type        = string
}

variable "ovh_application_secret" {
  description = "OVH application secret for API access"
  type        = string
  sensitive   = true
}

variable "ovh_consumer_key" {
  description = "OVH consumer key for API access"
  type        = string
}

variable "ovh_service_name" {
  description = "OVH service name for the cloud project"
  type        = string
}

provider "cloudfleet" {}

data "cloudfleet_cfke_cluster" "cluster" {
  id = var.cfke_cluster_id
}

# Generate the cloud-init user data that self-managed nodes use to join the cluster
resource "cloudfleet_cfke_node_join_information" "ovh" {
  cluster_id = data.cloudfleet_cfke_cluster.cluster.id
  region     = var.region
  zone       = var.region

  node_labels = {
    "cfke.io/provider" = "ovh"
  }
}

provider "ovh" {
  endpoint           = "ovh-eu"
  application_key    = var.ovh_application_key
  application_secret = var.ovh_application_secret
  consumer_key       = var.ovh_consumer_key
}

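# Look up the flavors and images available in the OVH project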
data "ovh_cloud_project_flavors" "flavor" {
  service_name = var.ovh_service_name
  name_filter  = "b2-7"
}

data "ovh_cloud_project_images" "images" {
  service_name = var.ovh_service_name
  region       = var.region
  os_type      = "linux"
}

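# Select the b2-7 flavor in the target region and the Ubuntu 24.04 image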
locals {
  flavor_id = [for flavor in data.ovh_cloud_project_flavors.flavor.flavors : flavor.id if flavor.region == var.region][0]
  image_id  = [for image in data.ovh_cloud_project_images.images.images : image.id if image.name == "Ubuntu 24.04"][0]
}

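# Generate an SSH key pair for the instance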
resource "tls_private_key" "ssh_key" {
  algorithm = "ED25519"
}

resource "ovh_cloud_project_ssh_key" "cfke" {
  service_name = var.ovh_service_name
  name         = "cfke-test"
  public_key   = tls_private_key.ssh_key.public_key_openssh
}

# Create an OVH instance
resource "ovh_cloud_project_instance" "ovh_self_managed_node" {
  service_name   = var.ovh_service_name
  name           = "self-managed-node-ovh"
  billing_period = "hourly"

  boot_from {
    image_id = local.image_id
  }

  flavor {
    flavor_id = local.flavor_id
  }

  region = "DE1"

  ssh_key {
    name = ovh_cloud_project_ssh_key.cfke.name
  }

  network {
    public = true
  }

  user_data = cloudfleet_cfke_node_join_information.ovh.rendered
}
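
Variable values can be supplied through a terraform.tfvars file (or with -var flags / TF_VAR_* environment variables); the values below are placeholders:

cfke_cluster_id        = "CLUSTER_ID"
region                 = "DE1"
ovh_application_key    = "<application key>"
ovh_application_secret = "<application secret>"
ovh_consumer_key       = "<consumer key>"
ovh_service_name       = "<public cloud project ID>"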

Execute the following commands to deploy your OVH infrastructure and connect the instances to your CFKE cluster:

terraform init
terraform apply

Once the deployment finishes, confirm that your new nodes are successfully registered with the cluster:

kubectl get nodes

Load balancer setup

To expose workloads to the internet, create a Kubernetes service of type NodePort and configure an OVH Load Balancer to distribute traffic to your nodes on that specific port. This setup provides a centralized entry point for external traffic while maintaining high availability across your OVH instances.

The recommended approach involves:

  1. Deploy your application with a NodePort service (Kubernetes assigns ports in the 30000-32767 range); a minimal sketch follows this list
  2. Create an OVH Load Balancer through the OVH control panel or API
  3. Configure backend servers pointing to your OVH instance IP addresses
  4. Set health checks to monitor application availability on the NodePort
  5. Configure firewall rules to restrict NodePort access to the load balancer IP only
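
Below is a minimal sketch of step 1, assuming the Service is managed with the hashicorp/kubernetes provider and your kubeconfig points at the CFKE cluster. The service name, the app = my-app selector, and the node_port value 30080 are illustrative assumptions; adjust them to your application.

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_service" "app" {
  metadata {
    name = "my-app"
  }

  spec {
    type = "NodePort"

    # Must match the pod labels of your (hypothetical) application Deployment
    selector = {
      app = "my-app"
    }

    port {
      port        = 80
      target_port = 8080
      node_port   = 30080 # Configure the OVH Load Balancer backends against this port
    }
  }
}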

Security considerations

Configure your OVH firewall rules to follow the principle of least privilege:

  • NodePort (TCP): Restrict access to your specific NodePort only to the OVH load balancer IP address. Avoid exposing this port to all sources (0.0.0.0/0) as this creates unnecessary security exposure.

This configuration ensures only the load balancer can access your NodePort services, while all external traffic flows through the load balancer for centralized access control and public exposure management.
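
Because OVH Public Cloud is OpenStack-based, one way to express such a rule is with the security group resources of the OpenStack Terraform provider. The sketch below assumes that provider is configured with your OVH project credentials (for example from an OpenStack RC file); the load balancer IP 203.0.113.10 and the NodePort 30080 are placeholders.

resource "openstack_networking_secgroup_v2" "nodeport" {
  name        = "cfke-nodeport"
  description = "Allow the NodePort only from the load balancer"
}

resource "openstack_networking_secgroup_rule_v2" "nodeport_from_lb" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 30080
  port_range_max    = 30080
  remote_ip_prefix  = "203.0.113.10/32" # load balancer IP (placeholder)
  security_group_id = openstack_networking_secgroup_v2.nodeport.id
}

The security group must then be attached to the worker instances, for example through the OpenStack API or the Horizon interface exposed by OVH.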

For production deployments, consider implementing an Ingress controller instead of direct NodePort exposure to gain advanced routing capabilities and SSL termination.