Scaleway
Cloudfleet lets you add Scaleway instances as worker nodes in your CFKE cluster. The recommended approach combines the Cloudfleet Terraform provider with cloud-init: you deploy Scaleway instances via Terraform and pass them the cloud-init configuration that Cloudfleet generates automatically, which registers the instances with your CFKE cluster on first boot.
Prerequisites
- Terraform CLI installed on your local machine.
- Cloudfleet CLI configured following these setup instructions, or API token authentication configured per these guidelines.
- Active Scaleway account with API credentials (access key, secret key, and project ID). Generate these credentials through the Scaleway console.
- An existing CFKE cluster. This example references a cluster with ID CLUSTER_ID, though you can create a new cluster using Terraform following the Terraform setup documentation.
Deploying Scaleway instances to your CFKE cluster
Deploy and connect Scaleway instances to your CFKE cluster using the following Terraform configuration. Update the variable values to match your environment.
variable "cfke_cluster_id" {
type = string
default = "CFKE Cluster ID"
}
variable "scaleway_access_key" {
type = string
description = "Scaleway Access Key"
}
variable "scaleway_secret_key" {
type = string
description = "Scaleway Secret Key"
}
variable "scaleway_project_id" {
type = string
description = "Scaleway Project ID"
}
terraform {
required_providers {
cloudfleet = {
source = "terraform.cloudfleet.ai/cloudfleet/cloudfleet"
}
scaleway = {
source = "scaleway/scaleway"
}
}
}
provider "scaleway" {
access_key = var.scaleway_access_key
secret_key = var.scaleway_secret_key
region = "fr-par"
project_id = var.scaleway_project_id
}
data "cloudfleet_cfke_cluster" "cluster" {
id = var.cfke_cluster_id
}
resource "cloudfleet_cfke_node_join_information" "scaleway" {
cluster_id = data.cloudfleet_cfke_cluster.cluster.id
region = "fr-par"
zone = "fr-par-1"
node_labels = {
"cfke.io/provider" = "scaleway"
}
  # Scaleway expects plain-text user data, so disable base64 encoding and gzip compression
base64_encode = false
gzip = false
}
resource "scaleway_instance_server" "worker" {
count = 2
name = "cfke-worker-${count.index+1}"
type = "DEV1-M"
image = "ubuntu_jammy"
user_data = {
cloud-init = cloudfleet_cfke_node_join_information.scaleway.rendered
}
enable_dynamic_ip = true
}
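You can supply the variable values on the command line, through TF_VAR_* environment variables, or with a terraform.tfvars file. As a sketch with placeholder values (substitute your own cluster ID and Scaleway credentials):

```hcl
# terraform.tfvars -- placeholder values, replace with your own.
# Matches the variable names declared in the configuration above.
cfke_cluster_id     = "CLUSTER_ID"
scaleway_access_key = "SCWXXXXXXXXXXXXXXXXX"
scaleway_secret_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
scaleway_project_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
```

Terraform picks up terraform.tfvars automatically; keep this file out of version control, since it contains credentials.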
Execute these commands to provision your Scaleway infrastructure and integrate the instances with your CFKE cluster:
terraform init
terraform apply
Once provisioning is complete, validate that your new nodes have successfully joined the cluster:
kubectl get nodes
Adding a load balancer
To expose applications to the internet, deploy a Kubernetes service of type NodePort and configure a Scaleway Load Balancer to route traffic to your nodes on the designated port. The example below demonstrates a complete setup with an Nginx deployment exposed via port 31001 and a corresponding Scaleway Load Balancer configuration.
locals {
node_port = 31001
}
// An example Kubernetes deployment exposed through a NodePort service behind a Scaleway Load Balancer
data "cloudfleet_client_config" "me" {}
provider "kubernetes" {
host = data.cloudfleet_cfke_cluster.cluster.endpoint
cluster_ca_certificate = data.cloudfleet_cfke_cluster.cluster.certificate_authority
token = data.cloudfleet_client_config.me.access_token
}
resource "kubernetes_namespace" "test" {
depends_on = [
data.cloudfleet_cfke_cluster.cluster
]
metadata {
name = "scaleway"
}
}
resource "kubernetes_deployment" "app" {
wait_for_rollout = false
metadata {
name = "nginx"
namespace = kubernetes_namespace.test.id
labels = {
app = "nginx"
}
}
spec {
replicas = 2
selector {
match_labels = {
app = "nginx"
}
}
template {
metadata {
labels = {
app = "nginx"
}
}
spec {
affinity {
node_affinity {
required_during_scheduling_ignored_during_execution {
node_selector_term {
match_expressions {
key = "cfke.io/provider"
operator = "In"
values = [
"scaleway"
]
}
}
}
}
}
container {
image = "nginx:latest"
name = "nginx"
port {
name = "http"
container_port = 80
}
}
}
}
}
}
resource "kubernetes_service" "app" {
depends_on = [
data.cloudfleet_cfke_cluster.cluster
]
metadata {
name = "nginx"
namespace = kubernetes_namespace.test.id
labels = {
app = "nginx"
}
}
spec {
selector = {
app = "nginx"
}
external_traffic_policy = "Local"
type = "NodePort"
port {
name = "http"
port = 80
target_port = 80
node_port = local.node_port
}
}
}
resource "scaleway_lb_ip" "cfke" {
zone = "fr-par-1"
}
resource "scaleway_lb" "cfke" {
ip_ids = [scaleway_lb_ip.cfke.id]
zone = scaleway_lb_ip.cfke.zone
type = "LB-S"
}
resource "scaleway_lb_backend" "cfke" {
lb_id = scaleway_lb.cfke.id
name = "nginx"
forward_protocol = "http"
forward_port = local.node_port
server_ips = [for ip in flatten(scaleway_instance_server.worker.*.public_ips) : ip.address]
health_check_http {
uri = "/"
}
}
resource "scaleway_lb_frontend" "cfke" {
lb_id = scaleway_lb.cfke.id
backend_id = scaleway_lb_backend.cfke.id
inbound_port = 80
}
output "load_balancer_ip" {
value = scaleway_lb_ip.cfke.ip_address
}
For production deployments, consider implementing an Ingress controller instead of this basic NodePort approach.
Firewall configuration
Configure Scaleway security groups to secure your cluster with proper network restrictions:
- Port 31001 (TCP): Limit NodePort service access to the Scaleway load balancer IP address only. Avoid exposing this port to all sources (0.0.0.0/0) as this creates unnecessary security exposure.
This configuration follows the principle of least privilege by ensuring only the load balancer can access the NodePort. All external traffic flows through the load balancer, providing centralized access control and public exposure management.
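As a sketch, this restriction could be expressed in Terraform with a security group attached to the worker instances via their security_group_id attribute. The resource and rule names here are illustrative, and the example assumes the load balancer resources from the previous section:

```hcl
# Illustrative security group limiting the NodePort to the load balancer IP.
# Attach it to the workers by setting security_group_id on scaleway_instance_server.
resource "scaleway_instance_security_group" "cfke_workers" {
  name                    = "cfke-workers"
  inbound_default_policy  = "drop"
  outbound_default_policy = "accept"

  inbound_rule {
    action   = "accept"
    protocol = "TCP"
    port     = 31001
    ip       = scaleway_lb_ip.cfke.ip_address
  }
}
```

Depending on your environment you may need additional inbound rules (for example SSH access or node-to-node traffic), so review the rule set before applying it to a running cluster.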