Vultr
Cloudfleet supports adding Vultr instances as nodes in your CFKE cluster. The most streamlined and scalable approach is the Cloudfleet Terraform provider’s cloud-init integration: you create Vultr instances with Terraform and pass them a cloud-init configuration generated by the provider, which automatically registers the instances with your CFKE cluster.
Prerequisites
- You have the Terraform CLI installed on your workstation.
- You have configured the Cloudfleet CLI by following the instructions here. Alternatively, you can use token authentication by following the instructions here.
- You have a Vultr account and a Personal Access Token. You can create the Personal Access Token from the Vultr control panel.
- You have a CFKE cluster running. The example below assumes that you have a cluster with the ID CLUSTER_ID, but you can also create the cluster with Terraform as shown in the Terraform introduction.
Adding Vultr instances to your CFKE cluster
Use the following Terraform configuration to create Vultr instances and integrate them with your CFKE cluster. Replace the cfke_cluster_id and vultr_api_key variables with your actual values.
variable "cfke_cluster_id" {
type = string
default = "CFKE Cluster ID"
}
variable "vultr_api_key" {
type = string
description = "Vultr Personal Access Token"
}
variable "region" {
type = string
default = "fra"
}
terraform {
  required_providers {
    cloudfleet = {
      source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
    vultr = {
      source  = "vultr/vultr"
      version = "~> 2.0"
    }
  }
}

provider "vultr" {
  api_key = var.vultr_api_key
}

data "cloudfleet_cfke_cluster" "cluster" {
  id = var.cfke_cluster_id
}
// Generate the cloud-init configuration to join the Vultr instance to the CFKE cluster
resource "cloudfleet_cfke_node_join_information" "vultr" {
  cluster_id = data.cloudfleet_cfke_cluster.cluster.id
  region     = var.region
  zone       = var.region
  node_labels = {
    "cfke.io/provider" = "vultr"
  }
}
// Vultr resources
// Create a VPC for the CFKE nodes. This is optional since Vultr instances can communicate over the public
// internet, but when nodes are part of the same VPC, they establish encrypted tunnels over the private network
resource "vultr_vpc" "cfke_vpc" {
  region      = var.region
  description = "CFKE VPC"
}

data "vultr_os" "ubuntu" {
  filter {
    name   = "name"
    values = ["Ubuntu 24.04 LTS x64"]
  }
}
resource "vultr_instance" "cfke_node" {
count = 1
plan = "vc2-2c-4gb"
region = var.region
os_id = data.vultr_os.ubuntu.id
enable_ipv6 = true
label = "cfke-node-${count.index + 1}"
user_data = cloudfleet_cfke_node_join_information.vultr.rendered // Use the generated cloud-init configuration
hostname = "cfke-node-${count.index + 1}"
vpc_ids = [vultr_vpc.cfke_vpc.id]
}
After creating the Terraform configuration, run these commands to provision the Vultr instances and integrate them with your CFKE cluster:
terraform init
terraform apply
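Terraform will prompt for the vultr_api_key variable, which has no default. You can also provide it (and override the cluster ID) through Terraform's TF_VAR_ environment variables before running the commands above, for example:
export TF_VAR_vultr_api_key="<your-vultr-personal-access-token>"
export TF_VAR_cfke_cluster_id="<your-cfke-cluster-id>"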
After deployment completes, verify the nodes have joined your cluster:
kubectl get nodes
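Because the join configuration applies the cfke.io/provider=vultr node label, you can also filter the list down to just the Vultr nodes:
kubectl get nodes -l cfke.io/provider=vultr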
Adding a load balancer
To expose workloads to the internet, create a Kubernetes service of type NodePort and configure a Vultr Load Balancer to forward traffic to the nodes on that port. The example below demonstrates a simple Nginx deployment exposed via port 31000 with a corresponding Vultr Load Balancer configuration.
locals {
  node_port = 31000
}

resource "vultr_load_balancer" "load_balancer" {
  region              = var.region
  label               = "cfke-load-balancer"
  balancing_algorithm = "roundrobin"

  forwarding_rules {
    frontend_protocol = "tcp"
    frontend_port     = 80
    backend_protocol  = "tcp"
    backend_port      = local.node_port
  }

  health_check {
    path                = "/"
    port                = local.node_port
    protocol            = "http"
    response_timeout    = 1
    unhealthy_threshold = 2
    check_interval      = 3
    healthy_threshold   = 4
  }

  attached_instances = sort(vultr_instance.cfke_node[*].id)
}
// An example Kubernetes deployment using a Load Balancer
data "cloudfleet_client_config" "me" {}

provider "kubernetes" {
  host                   = data.cloudfleet_cfke_cluster.cluster.endpoint
  cluster_ca_certificate = data.cloudfleet_cfke_cluster.cluster.certificate_authority
  token                  = data.cloudfleet_client_config.me.access_token
}

resource "kubernetes_namespace" "test" {
  depends_on = [
    data.cloudfleet_cfke_cluster.cluster
  ]
  metadata {
    name = "nginx"
  }
}
resource "kubernetes_deployment" "app" {
wait_for_rollout = false
metadata {
name = "nginx"
namespace = kubernetes_namespace.test.id
labels = {
app = "nginx"
}
}
spec {
replicas = 2
selector {
match_labels = {
app = "nginx"
}
}
template {
metadata {
labels = {
app = "nginx"
}
}
spec {
affinity {
node_affinity {
required_during_scheduling_ignored_during_execution {
node_selector_term {
match_expressions {
key = "cfke.io/provider"
operator = "In"
values = [
"vultr"
]
}
}
}
}
}
container {
image = "nginx:latest"
name = "nginx"
port {
name = "http"
container_port = 80
}
}
}
}
}
}
resource "kubernetes_service" "app" {
depends_on = [
data.cloudfleet_cfke_cluster.cluster
]
metadata {
name = "nginx"
namespace = kubernetes_namespace.test.id
labels = {
app = "nginx"
}
}
spec {
selector = {
app = "nginx"
}
external_traffic_policy = "Local"
type = "NodePort"
port {
name = "http"
port = 80
target_port = 80
node_port = local.node_port
}
}
}
output "load_balancer_ip" {
value = vultr_load_balancer.load_balancer.ipv4
}
output "load_balancer_ip_v6" {
value = vultr_load_balancer.load_balancer.ipv6
}
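Once the apply completes and the load balancer health checks pass, you can verify that traffic reaches the Nginx pods through the load balancer using the output defined above:
curl http://$(terraform output -raw load_balancer_ip)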
For production deployments, consider installing an Ingress controller instead of this basic NodePort example.
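For example, the community-maintained ingress-nginx controller can be installed with Helm and exposed on the same NodePort so that the existing Vultr Load Balancer forwards traffic to it. The commands below are a minimal sketch; the chart values shown (controller.service.type and controller.service.nodePorts.http) refer to the upstream chart and may change between releases:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.service.nodePorts.http=31000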
Firewall configuration
Configure a Vultr firewall group to secure your cluster with this essential rule:
- Port 31000 (TCP): Restrict NodePort service access to the Vultr load balancer IP address only. Avoid opening this port to all sources (0.0.0.0/0) as this creates unnecessary security exposure.
This configuration follows the principle of least privilege by ensuring only the load balancer can access the NodePort. All external traffic flows through the load balancer, which provides centralized access control and public exposure management.
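As a minimal Terraform sketch, such a rule could be expressed with the Vultr provider's firewall resources; the resource and attribute names below follow the provider documentation but should be verified against the provider version you use:
resource "vultr_firewall_group" "cfke_nodes" {
  description = "CFKE node firewall"
}

// Allow the NodePort only from the load balancer's public IPv4 address
resource "vultr_firewall_rule" "nodeport_from_lb" {
  firewall_group_id = vultr_firewall_group.cfke_nodes.id
  protocol          = "tcp"
  ip_type           = "v4"
  subnet            = vultr_load_balancer.load_balancer.ipv4
  subnet_size       = 32
  port              = tostring(local.node_port)
  notes             = "NodePort access from the CFKE load balancer only"
}
To apply the rule to the nodes, you would also attach the group to the instances by setting firewall_group_id = vultr_firewall_group.cfke_nodes.id on the vultr_instance resource.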