Admin Guide
1 - Inventory Management
Now let’s suppose that you’ve cloned the Git platform repository template.
Inventory Management
It is now time to declare your various instances in Ansible’s inventory. Simply extend the ansible/inventories/hosts.txt file the following way:
##########
# Global #
##########
[kahuna]
kowabunga-kahuna-1 ansible_host=10.0.0.1 ansible_ssh_user=ubuntu
##################
# EU-WEST Region #
##################
[kiwi_eu_west]
kiwi-eu-west-1 ansible_host=10.50.101.2
kiwi-eu-west-2 ansible_host=10.50.101.3
[kaktus_eu_west]
kaktus-eu-west-a-1 ansible_host=10.50.101.11
kaktus-eu-west-a-2 ansible_host=10.50.101.12
kaktus-eu-west-a-3 ansible_host=10.50.101.13
[eu_west:children]
kiwi_eu_west
kaktus_eu_west
################
# Dependencies #
################
[kiwi:children]
kiwi_eu_west
[kaktus:children]
kaktus_eu_west
In this example, we’ve declared our 6 instances (1 global Kahuna, 2 Kiwi and 3 Kaktus from the EU-WEST region) and their respective associated private IP addresses (used to deploy through SSH).
They respectively belong to various groups, and we’ve also created sub-groups. This is a special Ansible trick which will allow us to inherit variables from each group an instance belongs to.
In that regard, considering the example of kaktus-eu-west-a-1, the instance will be assigned variables from several possible files. You can then safely:
- declare host-specific variables in the ansible/inventories/host_vars/kaktus-eu-west-a-1.yml file.
- declare host-specific sensitive variables in the ansible/inventories/host_vars/kaktus-eu-west-a-1.sops.yml file.
- declare kaktus_eu_west group-specific variables in the ansible/inventories/group_vars/kaktus_eu_west/main.yml file.
- declare kaktus_eu_west group-specific sensitive variables in the ansible/inventories/group_vars/kaktus_eu_west.sops.yml file.
- declare kaktus group-specific variables in the ansible/inventories/group_vars/kaktus/main.yml file.
- declare kaktus group-specific sensitive variables in the ansible/inventories/group_vars/kaktus.sops.yml file.
- declare eu_west group-specific variables in the ansible/inventories/group_vars/eu_west/main.yml file.
- declare eu_west group-specific sensitive variables in the ansible/inventories/group_vars/eu_west.sops.yml file.
- declare any other global variables in the ansible/inventories/group_vars/all/main.yml file.
- declare any other global sensitive variables in the ansible/inventories/group_vars/all.sops.yml file.
This way, an instance can inherit variables from its global type (kaktus), its region (eu_west), and a mix of both (kaktus_eu_west).
Note that Ansible variable precedence will apply:
role defaults < all vars < group vars < host vars < role vars
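As a purely illustrative example (reusing a variable introduced later in this guide), a group-level setting can be overridden for one specific host:
# ansible/inventories/group_vars/kaktus/main.yml
kowabunga_firewall_enabled: true

# ansible/inventories/host_vars/kaktus-eu-west-a-1.yml
# host vars take precedence over group vars: this single host ends up with the firewall disabled
kowabunga_firewall_enabled: false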
Let’s also take the time to update the ansible/inventories/group_vars/all/main.yml file to adjust a few settings:
kowabunga_region_domain: "{{ kowabunga_region }}.acme.local"
where acme.local would be your corporate private domain.
2 - Setup Kahuna
Now let’s suppose that your Kahuna instance server has been provisioned with the latest Ubuntu LTS distribution. Be sure that it is SSH-accessible with some local user.
Let’s take the following assumptions for the rest of this tutorial:
- We only have one single Kahuna instance (no high-availability).
- Local bootstrap user with sudo privileges is ubuntu, with key-based SSH authentication.
- Kahuna instance is public-Internet exposed through IP address 1.2.3.4, translated to kowabunga.acme.com DNS.
- Kahuna instance is private-network exposed through IP address 10.0.0.1.
- Kahuna instance hostname is kowabunga-kahuna-1.
Setup DNS
Please ensure that your kowabunga.acme.com domain translates to public IP address 1.2.3.4. Configuration is up to you and your DNS provider and can be done manually.
Being IaC supporters, we advise using OpenTofu for that purpose. Let’s see how to do it, using the Cloudflare DNS provider.
Start by editing the terraform/providers.tf file in your platform’s repository:
terraform {
required_providers {
cloudflare = {
source = "cloudflare/cloudflare"
version = "~> 5"
}
}
}
provider "cloudflare" {
api_token = local.secrets.cloudflare_api_token
}
extend the terraform/secrets.tf file with:
locals {
secrets = {
cloudflare_api_token = data.sops_file.secrets.data.cloudflare_api_token
}
}
and add the associated:
cloudflare_api_token: MY_PREVIOUSLY_GENERATED_API_TOKEN
variable in terraform/secrets.yml file thanks to:
$ kobra secrets edit terraform/secrets.yml
Then, simply edit your terraform/main.tf file with the following:
resource "cloudflare_dns_record" "kowabunga" {
zone_id = "ACME_COM_ZONE_ID"
name = "kowabunga"
ttl = 3600
type = "A"
content = "1.2.3.4"
proxied = false
}
initialize OpenTofu (once, or each time you add a new provider):
$ kobra tf init
and apply infrastructure changes:
$ kobra tf apply
Ansible Kowabunga Collection
Kowabunga comes with an official Ansible Collection and its associated documentation.
The collection contains:
- roles and playbooks to easily deploy the various Kahuna, Koala, Kiwi and Kaktus instances.
- actions so you can create your own tasks to interact and manage a previously setup Kowabunga instance.
Check out ansible/requirements.yml file to declare the specific collection version you’d like to use:
---
collections:
- name: kowabunga.cloud
version: 0.1.0
By default, your platform is configured to pull a tagged official release from Ansible Galaxy. You may however prefer to pull it directly from Git, using the latest commit for instance. This can be accommodated through:
---
collections:
- name: git@github.com:kowabunga-cloud/ansible-collections-kowabunga
type: git
version: master
Once defined, simply pull it into your local machine:
$ kobra ansible pull
Kahuna Settings
Kahuna instance deployment will take care of everything. It assumes a supported Ubuntu LTS release, enforces some configuration and security settings, installs the necessary packages, creates local admin user accounts if required, and sets up some form of deny-all filtering firewall policy, so you’re safely exposed.
Admin Accounts
Let’s start by declaring some admin user accounts we’d like to create. We don’t want to keep on using the single shared ubuntu account for everyone after all.
Simply create/edit the ansible/inventories/group_vars/all/main.yml file the following way:
kowabunga_os_user_admin_accounts_enabled:
- admin_user_1
- admin_user_2
kowabunga_os_user_admin_accounts_pubkey_dirs:
- "{{ playbook_dir }}/../../../../../files/pubkeys"
to declare all your expected admin users, and add their respective SSH public key files in the ansible/files/pubkeys directory, e.g.:
$ tree ansible/files/pubkeys/
ansible/files/pubkeys/
└── admin_user_1
└── admin_user_2
Note
Note that all registered admin accounts will have password-less sudo privileges. We’d also recommend setting/updating the root account password. By default, Ubuntu comes without one, making it impossible to log in as root. Kowabunga’s playbook makes sure that root login over SSH is prohibited for security reasons (e.g. brute-force attacks), but we encourage you to set one, as it’s always useful, especially on public cloud VPS or bare metal servers, to log in through console/IPMI access.
If you intend to do so, simply edit the secrets file:
$ kobra secrets edit ansible/inventories/group_vars/all.sops.yml
and set the requested password:
secret_kowabunga_os_user_root_password: MY_SUPER_STRONG_PASSWORD
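Should you need to come up with a strong random password (for this secret or any of the ones below), a generator such as openssl does the job, assuming it is available on your workstation:
$ openssl rand -base64 24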
Firewall
If your Kahuna instance is connected to the public Internet, it is more than recommended to enable a network firewall. This can be easily done by extending the ansible/inventories/group_vars/kahuna/main.yml file with:
kowabunga_firewall_enabled: true
kowabunga_firewall_open_tcp_ports:
- 22
- 80
- 443
Note that we’ve limited opened ports to SSH and HTTP/HTTPS here, which should be more than enough (HTTP is only used by the Caddy server for certificate auto-renewal and will redirect traffic to HTTPS anyway). If you don’t expect your instance to be SSH-accessible on the public Internet, you can safely drop the port 22 line.
MongoDB
Kahuna comes with a bundled, ready-to-be-used MongoDB deployment. This comes in handy if you only have a unique instance to manage. This remains however optional (default), as you may very well be willing to re-use an existing, already deployed, external production-grade MongoDB cluster.
If you intend to go with the bundled one, a few settings must be configured in ansible/inventories/group_vars/kahuna/main.yml file:
kowabunga_mongodb_enabled: true
kowabunga_mongodb_listen_addr: "127.0.0.1,10.0.0.1"
kowabunga_mongodb_rs_key: "{{ secret_kowabunga_mongodb_rs_key }}"
kowabunga_mongodb_rs_name: kowabunga
kowabunga_mongodb_admin_password: "{{ secret_kowabunga_mongodb_admin_password }}"
kowabunga_mongodb_users:
- base: kowabunga
username: kowabunga
password: '{{ secret_kowabunga_mongodb_user_password }}'
readWrite: true
and their associated secrets in ansible/inventories/group_vars/kahuna.sops.yml
secret_kowabunga_mongodb_rs_key: YOUR_CUSTOM_REPLICA_SET_KEY
secret_kowabunga_mongodb_admin_password: A_STRONG_ADMIN_PASSWORD
secret_kowabunga_mongodb_user_password: A_STRONG_USER_PASSWORD
This will basically instruct Ansible to install the MongoDB server, configure it with a replica set (so it can be part of a future cluster, one never knows), secure it with admin credentials of your choice, and create a kowabunga database/collection and its associated service user.
Kahuna Settings
Finally, let’s ensure the Kahuna orchestrator gets everything it needs to operate.
You’ll need to define:
- a custom email address (and associated SMTP connection settings) for Kahuna to be able to send email notifications to users.
- a randomly generated key to sign JWT tokens (please ensure it is secure enough not to compromise the robustness of issued tokens).
- a randomly generated admin API key. It’ll be used to provision the admin bits of Kowabunga, until proper user accounts have been created.
- a private/public SSH key-pair to be used by platform admins to seamlessly SSH into instantiated Kompute instances. Please ensure that the private key is stored securely somewhere (see the sketch below for one way to generate all of these secrets).
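For reference, here is one possible way to generate these secrets, assuming standard openssl and OpenSSH tooling on your workstation (a sketch, not a requirement):
# random values for the JWT signing key and the admin API key
$ openssl rand -hex 32
# dedicated SSH key-pair for platform admins (store the private key securely)
$ ssh-keygen -t ed25519 -C "kowabunga-admins" -f kowabunga_admin_id_ed25519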
Then simply edit the ansible/inventories/group_vars/all/main.yml file the following way:
kowabunga_public_url: "https://kowabunga.acme.com"
(as this variable will be reused by all instance types)
and the ansible/inventories/group_vars/kahuna/main.yml file the following way:
kowabunga_kahuna_http_address: "10.0.0.1"
kowabunga_kahuna_admin_email: kowabunga@acme.com
kowabunga_kahuna_jwt_signature: "{{ secret_kowabunga_kahuna_jwt_signature }}"
kowabunga_kahuna_db_uri: "mongodb://kowabunga:{{ secret_kowabunga_mongodb_user_password }}@10.0.0.1:{{ mongodb_port }}/kowabunga?authSource=kowabunga"
kowabunga_kahuna_api_key: "{{ secret_kowabunga_kahuna_api_key }}"
kowabunga_kahuna_bootstrap_user: kowabunga
kowabunga_kahuna_bootstrap_pubkey: "YOUR_ADMIN_SSH_PUB_KEY"
kowabunga_kahuna_smtp_host: "smtp.acme.com"
kowabunga_kahuna_smtp_port: 587
kowabunga_kahuna_smtp_from: "Kowabunga <{{ kowabunga_kahuna_admin_email }}>"
kowabunga_kahuna_smtp_username: johndoe
kowabunga_kahuna_smtp_password: "{{ secret_kowabunga_kahuna_smtp_password }}"
and add the respective secrets into ansible/inventories/group_vars/kahuna.sops.yml:
secret_kowabunga_kahuna_jwt_signature: A_STRONG_JWT_SIGNATURE
secret_kowabunga_kahuna_api_key: A_STRONG_API_KEY
secret_kowabunga_kahuna_smtp_password: A_STRONG_PASSWORD
Ansible Deployment
We’re done with configuration (finally)! All we need to do now is run Ansible to make things live. This is done by invoking the kahuna playbook from the kowabunga.cloud collection:
$ kobra ansible deploy -p kowabunga.cloud.kahuna
Note that, under the hood, Ansible will use the Mitogen extension to speed things up. Bear in mind that Ansible’s run is idempotent. Anything that fails can be re-executed. You can also run it as many times as you want, or re-run it in the next 6 months or so; provided you’re using a tagged collection, the end result will always be the same.
After a few minutes, if everything went okay, you should have a working Kahuna instance, i.e.:
- A Caddy front reverse-proxy, taking care of automatic TLS certificate issuance, renewal and traffic termination, forwarding requests to either the Koala Web application or the Kahuna backend server.
- The Kahuna backend server itself, our core orchestrator.
- Optionally, the MongoDB database.
We’re now ready to provision users and teams!
3 - Provisioning Users
Your Kahuna instance is now up and running, let’s get things going and create a few admin user accounts. At first, we only have the super-admin API key that was previously set through Ansible deployment. We’ll make use of it to provision further users and associated teams. After all, we want a nominative user account for each contributor, right?
Back to TF config, let’s edit the terraform/providers.tf file:
terraform {
required_providers {
kowabunga = {
source = "registry.terraform.io/kowabunga-cloud/kowabunga"
version = ">=0.55.1"
}
}
}
provider "kowabunga" {
uri = "https://kowabunga.acme.com"
token = local.secrets.kowabunga_admin_api_key
}
Make sure to edit the Kowabunga provider’s uri with the DNS name of your freshly deployed Kahuna instance, and edit the terraform/secrets.yml file so it matches the kowabunga_admin_api_key you’ve picked before. OpenTofu will make use of these parameters to connect to your private Kahuna and apply for resources.
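As we did earlier for the Cloudflare API token, one way to wire this up (a sketch following the same pattern) is to extend the terraform/secrets.tf locals:
locals {
  secrets = {
    # alongside any existing entries, e.g. cloudflare_api_token
    kowabunga_admin_api_key = data.sops_file.secrets.data.kowabunga_admin_api_key
  }
}
and add the matching entry to the SOPS-encrypted secrets file through kobra secrets edit terraform/secrets.yml:
kowabunga_admin_api_key: MY_PREVIOUSLY_GENERATED_ADMIN_API_KEY
using the very same value as the secret_kowabunga_kahuna_api_key you set on the Ansible side.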
Now declare a few users in your terraform/locals.tf file:
locals {
admins = {
// HUMANS
"John Doe" = {
email = "john@acme.com",
role = "superAdmin",
notify = true,
}
"Jane Doe" = {
email = "jane@acme.com",
role = "superAdmin",
notify = true,
}
// BOTS
"Admin TF Bot" = {
email = "tf@acme.com",
role = "superAdmin",
bot = true,
}
}
}
and the following resources definition in terraform/main.tf:
resource "kowabunga_user" "admins" {
for_each = local.admins
name = each.key
email = each.value.email
role = each.value.role
notifications = try(each.value.notify, false)
bot = try(each.value.bot, false)
}
resource "kowabunga_team" "admin" {
name = "admin"
desc = "Kowabunga Admins"
users = sort([for key, user in local.admins : kowabunga_user.admins[key].id])
}
Then, simply apply for resources creation:
$ kobra tf apply
What we’ve done here was to register a new admin team, with 3 new associated user accounts: 2 regular ones for human administrators and one bot, whose API key you’ll be able to use instead of the super-admin master one to further provision resources, if you’d like.
Better do it this way as, should the key be compromised, you’ll only have to revoke it or destroy the bot account, instead of replacing the master one on the Kahuna instance.
Newly registered users will be prompted with 2 emails from Kahuna:
- a “Welcome to Kowabunga !” one, simply asking you to confirm your account’s creation.
- a “Forgot about your Kowabunga password ?” one, prompting for a password reset.
Warning
Account creation confirmation is required for the user to proceed further. For security purposes, newly created user accounts are locked down until properly activated.
With security in mind, Kowabunga will prevent you from setting your own password. Whichever IT policy you choose, you will always end up with users having a weak password or finding a way to compromise your system. We don’t want that to happen, nor do we think it’s worth asking a user to generate a random ‘strong-enough’ password by themselves, so Kowabunga does it for you.
Once users have been registered and passwords generated, and provided the Koala Web application has been deployed as well, they can connect (and land on a perfectly empty, and so far useless, dashboard ;-) for now at least).
Let’s move on and start creating our first region!
4 - Create Your First Region
The orchestrator being ready, we can now bootstrap our first region.
Let’s take the following assumptions for the rest of this tutorial:
- The Kowabunga region is to be called eu-west.
- The region will have a single zone named eu-west-a.
- It’ll feature 2 Kiwi and 3 Kaktus instances.
Back on the TF configuration, let’s use the following:
Region and Zone
locals {
eu-west = {
desc = "Europe West"
zones = {
"eu-west-a" = {
id = "A"
}
}
}
}
resource "kowabunga_region" "eu-west" {
name = "eu-west"
desc = local.eu-west.desc
}
resource "kowabunga_zone" "eu-west" {
for_each = local.eu-west.zones
region = kowabunga_region.eu-west.id
name = each.key
desc = "${local.eu-west.desc} - Zone ${each.value.id}"
}
And apply:
$ kobra tf apply
Nothing really complex here to be fair, we’re just using Kahuna’s API to register the region and its associated zone.
Kiwi Instances and Agents
Now, we’ll register the 2 Kiwi instances and 3 Kaktus ones. Please note that:
- we’ll extend the TF locals definition for that.
- Kiwi is to be associated with the global region.
- while Kaktus is to be associated with the region’s zone.
Let’s start by registering one Kiwi and 2 associated agents:
locals {
eu-west = {
agents = {
"kiwi-eu-west-1" = {
desc = "Kiwi EU-WEST-1 Agent"
type = "Kiwi"
}
"kiwi-eu-west-2" = {
desc = "Kiwi EU-WEST-2 Agent"
type = "Kiwi"
}
}
kiwi = {
"kiwi-eu-west" = {
desc = "Kiwi EU-WEST",
agents = ["kiwi-eu-west-1", "kiwi-eu-west-2"]
}
}
}
}
resource "kowabunga_agent" "eu-west" {
for_each = merge(local.eu-west.agents)
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
type = each.value.type
}
resource "kowabunga_kiwi" "eu-west" {
for_each = local.eu-west.kiwi
region = kowabunga_region.eu-west.id
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
agents = [for agent in try(each.value.agents, []) : kowabunga_agent.eu-west[agent].id]
}
Warning
Note that, despite having 2 Kiwi instances, from Kowabunga’s perspective, we’re only registering one. This is because the 2 instances are only used for high-availability and failover purposes. From a service point of view, the region only has one single network gateway.
Despite that, each instance will have its own agent, to establish a WebSocket connection to the Kahuna orchestrator.
Kaktus Instances and Agents
Let’s continue with the 3 Kaktus instances declaration and their associated agents. Note that, this time, instances are associated to the zone itself, not the region.
Information
Note that Kaktus instance creation/update takes 4 specific parameters into account:
- cpu_price and memory_price are purely arbitrary values that express how much actual money your metal infrastructure is worth. These are used for virtual cost computation later on, when you’ll be spawning Kompute instances with vCPUs and vGB of RAM. Each server being different, it’s fully okay to have different values here across your fleet.
- cpu_overcommit and memory_overcommit define the overcommit ratio you accept your physical hosts to sustain. As for price, not every server is born equal. Some have hyper-threading, others don’t. You may consider that a value of 3 or 4 is fine, others tend to be stricter and use 2 instead. The higher you set the bar, the more virtual resources you’ll be able to create, but the fewer actual physical resources they’ll be able to get.
locals {
currency = "EUR"
cpu_overcommit = 3
memory_overcommit = 2
eu-west = {
zones = {
"eu-west-a" = {
id = "A"
agents = {
"kaktus-eu-west-a-1" = {
desc = "Kaktus EU-WEST A-1 Agent"
type = "Kaktus"
}
"kaktus-eu-west-a-2" = {
desc = "Kaktus EU-WEST A-2 Agent"
type = "Kaktus"
}
"kaktus-eu-west-a-3" = {
desc = "Kaktus EU-WEST A-3 Agent"
type = "Kaktus"
}
}
kaktuses = {
"kaktus-eu-west-a-1" = {
desc = "Kaktus EU-WEST A-1",
cpu_cost = 500
memory_cost = 200
agents = ["kaktus-eu-west-a-1"]
}
"kaktus-eu-west-a-2" = {
desc = "Kaktus EU-WEST A-2",
cpu_cost = 500
memory_cost = 200
agents = ["kaktus-eu-west-a-2"]
}
"kaktus-eu-west-a-3" = {
desc = "Kaktus A-3",
cpu_cost = 500
memory_cost = 200
agents = ["kaktus-eu-west-a-3"]
}
}
}
}
}
}
resource "kowabunga_agent" "eu-west-a" {
for_each = merge(local.eu-west.zones.eu-west-a.agents)
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
type = each.value.type
}
resource "kowabunga_kaktus" "eu-west-a" {
for_each = local.eu-west.zones.eu-west-a.kaktuses
zone = kowabunga_zone.eu-west["eu-west-a"].id
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
cpu_price = each.value.cpu_cost
memory_price = each.value.memory_cost
currency = local.currency
cpu_overcommit = try(each.value.cpu_overcommit, local.cpu_overcommit)
memory_overcommit = try(each.value.memory_overcommit, local.memory_overcommit)
agents = [for agent in try(each.value.agents, []) : kowabunga_agent.eu-west-a[agent].id]
}
And again, apply:
$ kobra tf apply
That done, Kiwi and Kaktus instances have been registered, and more importantly, so have their associated agents. For each newly created agent, you should have received an email (check the admin address you previously set in Kahuna’s configuration). Keep track of these emails, they contain one-time credentials: the agent identifier and its associated API key.
This is the super secret bit that will later allow them to establish a secure connection to the Kahuna orchestrator. We’re soon going to declare these credentials in Ansible’s secrets so Kiwi and Kaktus instances can be provisioned accordingly.
Warning
There’s no way to recover the agent API key. It’s never printed anywhere but in the email you just received. Even the database doesn’t contain it. If one agent’s API key is lost, you can either request a new one from the API or destroy the agent and create a new one in-place.
Virtual Networks
Let’s keep on provisioning Kahuna’s database with the network configuration from our network topology.
We’ll use different VLANs (expressed as VNET or Virtual NETwork in Kowabunga’s terminology) to segregate tenant traffic:
- VLAN 0 (i.e. no VLAN) will be used for public subnets (i.e. where to hook public IP addresses).
- VLAN 102 will be dedicated to storage backend.
- VLANs 201 to 209 will be reserved for tenants/projects (automatically assigned at new project’s creation).
Note
Note that exposing the Ceph storage network might come in handy if you intend to run applications which are expected to consume or provision resources directly on the underlying storage.
By default, if you only intend to use plain old Kompute instances, virtual disks are directly mapped by the virtualization layer and you don’t have to care about how. In that case, there’s no specific need to expose VLAN 102.
If you however expect to further use KFS or to run your own Kubernetes flavor, with an intent to directly use the Ceph backend to instantiate PVCs, exposing VLAN 102 is mandatory.
To be on the safe side, and future-proof, keep it exposed.
So let’s extend our terraform/main.tf with the following VNET resources declaration for the newly registered region.
resource "kowabunga_vnet" "eu-west" {
for_each = local.eu-west.vnets
region = kowabunga_region.eu-west.id
name = each.key
desc = try(each.value.desc, "EU-WEST VLAN ${each.value.vlan} Network")
vlan = each.value.vlan
interface = each.value.interface
private = each.value.vlan == "0" ? false : true
}
This will iterate over a list of VNET objects that we’ll define in terraform/locals.tf file:
locals {
eu-west = {
vnets = {
// public network
"eu-west-0" = {
desc = "EU-WEST Public Network",
vlan = "0",
interface = "br0",
},
// storage network
"eu-west-102" = {
desc = "EU-WEST Ceph Storage Network",
vlan = "102",
interface = "br102",
},
// services networks
"eu-west-201" = {
vlan = "201",
interface = "br201",
},
[...]
"eu-west-209" = {
vlan = "209",
interface = "br209",
},
}
}
}
And again, apply:
$ kobra tf apply
What have we done here? We simply iterated over VNETs to associate them with VLAN IDs and the names of the Linux bridge interfaces which will be created on each Kaktus instance of the zone (see further).
Note
Note that while service instances will have dedicated reserved networks, we’ll (conventionally) add VLAN 0 here (which is not really a VLAN at all).
Kaktus instances will be created with a br0 bridge interface, mapped onto the host’s private network controller interface(s), where public IP addresses will be bound. This will allow further-created virtual machines to bind public IPs through the bridged interface.
Subnets
Now that virtual networks have been registered, it’s time to associate each of them with service subnets. Again, let’s edit our terraform/main.tf to declare resources objects, on which we’ll iterate.
resource "kowabunga_subnet" "eu-west" {
for_each = local.eu-west.subnets
vnet = kowabunga_vnet.eu-west[each.key].id
name = each.key
desc = try(each.value.desc, "")
cidr = each.value.cidr
gateway = each.value.gw
dns = try(each.value.dns, each.value.gw)
reserved = try(each.value.reserved, [])
gw_pool = try(each.value.gw_pool, [])
routes = kowabunga_vnet.eu-west[each.key].private ? local.extra_routes : []
application = try(each.value.app, local.subnet_application)
default = try(each.value.default, false)
}
Subnet objects are associated with a given virtual network and carry the usual network settings (such as CIDR, route/gateway, DNS server).
Note the use of 2 interesting parameters:
- reserved, which is basically a list of IP address ranges, part of the provided CIDR, that are not to be assigned to further-created virtual machines and services. This may come in handy if you make specific use of static IP addresses in your project and want to ensure they’ll never get assigned to anyone programmatically.
- gw_pool, which is a range of IP addresses that are to be assigned to each project’s Kawaii instances as virtual IPs. These are fixed IPs (so that the router address never changes, even if you do destroy/recreate service instances countless times). You usually need one per zone, not more. But it’s safe to extend the range for future use (e.g. adding new zones in your region).
Now let’s declare the various subnets in terraform/locals.tf file as well:
locals {
subnet_application = "user"
eu-west = {
subnets = {
"eu-west-0" = {
desc = "EU-WEST Public Network",
vnet = "0",
cidr = "4.5.6.0/26",
gw = "4.5.6.62",
dns = "9.9.9.9"
reserved = [
"4.5.6.0-4.5.6.0", # network address
"4.5.6.62-4.5.6.63", # reserved (gateway, broadcast)
]
},
"eu-west-102" = {
desc = "EU-WEST Ceph Storage Network",
vnet = "102",
cidr = "10.50.102.0/24",
gw = "10.50.102.1",
dns = "9.9.9.9"
reserved = [
"10.50.102.0-10.50.102.69", # currently used by Iris(es) and Kaktus(es) (room for more)
]
app = "ceph"
},
# /24 subnets
"eu-west-201" = {
vnet = "201",
cidr = "10.50.201.0/24",
gw = "10.50.201.1",
reserved = [
"10.50.201.1-10.50.201.5",
]
gw_pool = [
"10.50.201.252-10.50.201.254",
]
},
[...]
"eu-west-209" = {
vnet = "209",
cidr = "10.50.209.0/24",
gw = "10.50.209.1",
reserved = [
"10.50.209.1-10.50.209.5",
]
gw_pool = [
"10.50.209.252-10.50.209.254",
]
},
}
}
}
Note
Note that we arbitrarily took multiple decisions here:
- Reserve the first 69 IP addresses of the 10.50.102.0/24 subnet for our region’s growth. Each project’s Kawaii instance (one per zone) will bind an IP from the range. That’s plenty enough room for the 10 projects we intend to host, and it saves us some space should we need to extend our infrastructure by adding new Kaktus instances.
- Use of /24 subnets. This is really up to each network administrator. You could pick whichever range you need, as long as it doesn’t collide with what’s currently in place.
- Limit each virtual network to one single subnet. We could have added as many as needed.
- Reserve the first 5 IPs of each subnet. Remember, our 2 Kiwi instances are already configured to bind .2 and .3 (and .1 is the VIP). We’ll keep a bit of extra room for future use (one never knows…).
- Reserve the subnet’s last 3 IP addresses for Kawaii gateway virtual IPs. We only have one zone for now, so 1 would have been enough, but again, we never know what the future holds…
Warning
It is now time to carefully read what you wrote, really do. We just wrote down a bloated list of network settings, subnets, CIDRs and IP addresses, and mistakes have probably happened. While nothing’s written in stone (you can always apply the TF config again), better find them now than waste hours later trying to figure out why your virtual machine doesn’t get network access ;-)
Once carefully reviewed, again, apply:
$ kobra tf apply
One more thing, let’s reflect those changes in Ansible’s configuration as well.
Simply extend your ansible/inventories/group_vars/eu_west/main.yml file the following way:
kowabunga_region: eu-west
kowabunga_region_domain_admin_network: "10.50.101.0/24"
kowabunga_region_domain_admin_router_address: 10.50.101.1
kowabunga_region_domain_storage_network: "10.50.102.0/24"
kowabunga_region_domain_storage_router_address: 10.50.102.1
kowabunga_region_vlan_id_ranges:
- from: 101
to: 102
net_prefix: 10.50
net_mask: 24
- from: 201
to: 209
net_prefix: 10.50
net_mask: 24
This will help us provision the next steps …
Let’s continue and provision our region’s Kiwi instances!
5 - Provisioning Kiwi
As detailed in network topology, we’ll have 2 Kiwi instances:
- kiwi-eu-west-1:
- with VLAN 101 as administrative segment with 10.50.101.2,
- with VLAN 102 as storage segment with 10.50.102.2,
- with VLAN 201 to 209 as service VLANs.
- kiwi-eu-west-2:
- with VLAN 101 as administrative segment with 10.50.101.3,
- with VLAN 102 as storage segment with 10.50.102.3,
- with VLAN 201 to 209 as service VLANs.
Note that 10.50.101.1 and 10.50.102.1 will be used as virtual IPs (VIPs).
Inventory Management
If required, update your Kiwi instances in Ansible’s inventory.
Important
Note that for the first-time installation, private IPs from the inventory are to be replaced by the servers’ actual ones (or anything in place which allows for bootstrapping the machines).
The instances are now declared to be part of the kiwi, kiwi_eu_west and eu_west groups.
Network Configuration
We’ll instruct the Ansible collection to provision network settings through Netplan. Note that our example is pretty simple, with only a single network interface used for the private LAN and no link aggregation (which is recommended for enterprise-grade setups).
As the configuration is both instance-specific (private MAC address, IP address …) and region-specific (all Kiwi instances will likely do the same), and, as such, repetitive, we’ll use some Ansible overlaying.
We’ve already declared quite a few things at region level when creating the eu-west one.
Let’s now extend the ansible/inventories/group_vars/kiwi_eu_west/main.yml file with the following:
kowabunga_netplan_config:
ethernet:
- name: "{{ kowabunga_host_underlying_interface }}"
mac: "{{ kowabunga_host_underlying_interface_mac }}"
ips:
- "4.5.6.{{ kowabunga_host_public_ip_addr_suffix }}/26"
routes:
- to: default
via: 4.5.6.1
vlan: |
{%- set res=[] -%}
{%- for r in kowabunga_region_vlan_id_ranges -%}
{%- for id in range(r.from, r.to + 1, 1) -%}
{%- set dummy = res.extend([{"name": "vlan" + id | string, "id": id, "link": kowabunga_host_vlan_underlying_interface, "ips": [r.net_prefix | string + "." + id | string + "." + kowabunga_host_vlan_ip_addr_suffix | string + "/" + r.net_mask | string]}]) -%}
{%- endfor -%}
{%- endfor -%}
{{- res -}}
As ugly as it looks, this Jinja macro will help us iterate over all the VLAN interfaces we need to create by simply taking a few instance-specific variables into consideration.
And that’s exactly what we’ll define in the ansible/inventories/host_vars/kiwi-eu-west-1.yml file:
kowabunga_primary_network_interface: eth0
kowabunga_host_underlying_interface: "{{ kowabunga_primary_network_interface }}"
kowabunga_host_underlying_interface_mac: "aa:bb:cc:dd:ee:ff"
kowabunga_host_vlan_underlying_interface: "{{ kowabunga_primary_network_interface }}"
kowabunga_host_public_ip_addr_suffix: 202
kowabunga_host_vlan_ip_addr_suffix: 2
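For illustration, with the host values above and the VLAN ranges declared earlier, the vlan entry of kowabunga_netplan_config would render into something roughly equivalent to (truncated):
vlan:
  - name: vlan101
    id: 101
    link: eth0
    ips:
      - "10.50.101.2/24"
  - name: vlan102
    id: 102
    link: eth0
    ips:
      - "10.50.102.2/24"
  - name: vlan201
    id: 201
    link: eth0
    ips:
      - "10.50.201.2/24"
  # ... and so on, up to vlan209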
You’ll need to ensure that the MAC addresses and host and gateway IP addresses are correctly set, depending on your setup. Once done, you can do the same for the alternate Kiwi instance in ansible/inventories/host_vars/kiwi-eu-west-2.yml file.
Extend the ansible/inventories/group_vars/kiwi/main.yml file with the following to ensure generic settings are propagated to all Kiwi instances:
kowabunga_netplan_disable_cloud_init: true
kowabunga_netplan_apply_enabled: true
Information
Note that setting kowabunga_netplan_disable_cloud_init is an optional step. If you’d like to keep whatever configuration cloud-init has previously set, it’s all fine (but it’s always recommended not to have a dual source of truth).
Network Failover
Each Kiwi instance configuration is now set to receive host-specific network configuration. But they are meant to work in an HA-cluster, so let’s define some redundancy rules. The two instances respectively bind the .2 and .3 private IPs from each subnet, but our active router will be .1, so let’s define network failover configuration for that.
Again, extend the region-global ansible/inventories/group_vars/kiwi_eu_west/main.yml file with the following configuration:
kowabunga_kiwi_primary_host: "kiwi-eu-west-1"
kowabunga_network_failover_settings:
peers: "{{ groups['kiwi_eu_west'] }}"
use_unicast: true
trackers:
- name: kiwi-eu-west-vip
configs: |
{%- set res = [] -%}
{%- for r in kowabunga_region_vlan_id_ranges -%}
{%- for id in range(r.from, r.to + 1, 1) -%}
{%- set dummy = res.extend([{"vip": r.net_prefix | string + "." + id | string + ".1/" + r.net_mask | string, "vrid": id, "primary": kowabunga_kiwi_primary_host, "control_interface": kowabunga_primary_network_interface, "interface": "vlan" + id | string, "nopreempt": true}]) -%}
{%- endfor -%}
{%- endfor -%}
{{- res -}}
Once again, we iterate over the kowabunga_region_vlan_id_ranges variable to create our global configuration for the eu-west region. After all, both Kiwi instances there will have the very same configuration.
This will ensure that VRRP packets flow between the 2 peers so one always ends up being the active router for each virtual network interface.
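Rendered for kiwi-eu-west-1, the tracker configs would expand into entries along these lines (one per VLAN, the VRID matching the VLAN ID):
configs:
  - vip: "10.50.101.1/24"
    vrid: 101
    primary: kiwi-eu-west-1
    control_interface: eth0
    interface: vlan101
    nopreempt: true
  - vip: "10.50.102.1/24"
    vrid: 102
    primary: kiwi-eu-west-1
    control_interface: eth0
    interface: vlan102
    nopreempt: true
  # ... and so on, up to vlan209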
Firewall Configuration
When running the Ansible playbook, Kiwi instances will be automatically configured as network routers. This is mandatory to ensure packets flow from WAN to LAN (and reciprocally) and between service VLANs.
Configuring the associated firewall may then come in handy.
There are 2 possible options:
- Kiwi remains a private gateway, not exposed to the public Internet. This may be the case if you intend to run Kowabunga as a private corporate infrastructure only. Projects will get their own private network and the ‘public’ one will actually consist of one of your company’s private subnets.
- Kiwi is a public gateway, exposed to public Internet.
In all cases, extend the ansible/inventories/group_vars/kiwi/main.yml file with the following to enable firewalling:
kowabunga_firewall_enabled: true
In the first scenario, simply configure the firewall as a pass-through NAT gateway. Traffic from all interfaces will simply be forwarded:
kowabunga_firewall_passthrough_enabled: true
In the event of a public gateway, things are a bit more complex, and you should likely refer to the Ansible firewall module documentation to declare the following:
kowabunga_firewall_dnat_rules: []
kowabunga_firewall_forward_interfaces: []
kowabunga_firewall_trusted_public_ips: []
kowabunga_firewall_lan_extra_nft_rules: []
kowabunga_firewall_wan_extra_nft_rules: []
with actual rules, depending on your network configuration and access means and policy (e.g. remote VPN access).
PowerDNS Setup
Information
Documentation below is ephemeral.
Kiwi currently relies on PowerDNS as a third-party DNS server. The current deployment comes bundled, associated with a MariaDB backend.
This is a temporary measure only. Next stable versions of Kiwi will soon feature a standalone DNS server implementation, nullifying all third-party dependencies and configuration requirements.
In order to deploy and configure PowerDNS and its associated MariaDB database backend, one needs to extend the Ansible configuration.
Let’s now reflect some definitions into Kiwi’s ansible/inventories/group_vars/kiwi_eu_west/main.yml configuration file:
kowabunga_powerdns_locally_managed_zone_records:
- zone: "{{ storage_domain_name }}"
name: ceph
value: 10.50.102.11
- zone: "{{ storage_domain_name }}"
name: ceph
value: 10.50.102.12
- zone: "{{ storage_domain_name }}"
name: ceph
value: 10.50.102.13
This will further instruct PowerDNS to handle the local DNS zone for region eu-west on the acme.local TLD.
Note that we’ll use the Kaktus instances’ VLAN 102 IP addresses that we’ve defined in the network topology so that ceph.storage.eu-west.acme.local will be a round-robin DNS entry to these instances.
Finally, edit the SOPS-encrypted ansible/inventories/group_vars/kiwi.sops.yml file with newly defined secrets:
secret_kowabunga_powerdns_webserver_password: ONE_STRONG_PASSWORD
secret_kowabunga_powerdns_api_key: ONE_MORE
secret_kowabunga_powerdns_db_admin_password: YET_ANOTHER
secret_kowabunga_powerdns_db_user_password: HERE_WE_GO
As their names stand, the first 2 variables will be used to expose the PowerDNS API (which will be consumed by the Kiwi agent) and the last two are MariaDB credentials, used by PowerDNS to connect to its database. None of these passwords really matter, they’re for server-to-server internal use only, no human user is ever going to make use of them. But let’s use something robust nonetheless.
Kiwi Agent
Finally, let’s take care of the Kiwi agent. The agent will establish its secure WebSocket connection to Kahuna, receive configuration changes from it, and apply them accordingly.
Now remember that we previously used TF to register new Kiwi agents. Once applied, emails were sent for each instance with a set of agent identifier and API key. These values now have to be provided to Ansible, as these are going to be the credentials used by Kiwi agent to connect to Kahuna.
So let’s edit each Kiwi instance’s secrets file, respectively the ansible/inventories/host_vars/kiwi-eu-west-{1,2}.sops.yml files:
secret_kowabunga_kiwi_agent_id: AGENT_ID_FROM_KAHUNA_EMAIL_FROM_TF_PROVISIONING_STEP
secret_kowabunga_kiwi_agent_api_key: AGENT_API_KEY_FROM_KAHUNA_EMAIL_FROM_TF_PROVISIONING_STEP
Ansible Deployment
We’re finally done with Kiwi’s configuration. All we need to do now is run Ansible to make things live. This is done by invoking the kiwi playbook from the kowabunga.cloud collection:
$ kobra ansible deploy -p kowabunga.cloud.kiwi
We’re now ready to provision Kaktus HCI nodes!
6 - Provisioning Kaktus
As detailed in network topology, we’ll have 3 Kaktus instances:
- kaktus-eu-west-a-1:
- with VLAN 101 as administrative segment with 10.50.101.11,
- with VLAN 102 as storage segment with 10.50.102.11,
- with VLAN 201 to 209 as service VLANs.
- kaktus-eu-west-a-2:
- with VLAN 101 as administrative segment with 10.50.101.12,
- with VLAN 102 as storage segment with 10.50.102.12,
- with VLAN 201 to 209 as service VLANs.
- kaktus-eu-west-a-3:
- with VLAN 101 as administrative segment with 10.50.101.13,
- with VLAN 102 as storage segment with 10.50.102.13,
- with VLAN 201 to 209 as service VLANs.
Pre-Requisites
Kaktus nodes will serve both as computing and storage backends. While computing is easy (one just needs available CPU and memory), storage is different, as we need to prepare hard disks (well … SSDs) and set them up to be part of a coherent Ceph cluster.
As a pre-requisite, you’ll then need to ensure that your server has freely available disks for that purpose.
If you only have limited disks on your system (e.g. only 2), Ceph storage will be physically collocated with your OS. The best scenario would then be to:
- partition your disks to have a small reserved partition (e.g. 32 to 64 GB) for your OS,
- possibly do the same on another disk so you can use software RAID-1 for sanity.
- partition the rest of your disk for future Ceph usage.
In that case, parted is your friend for the job. It also means you need to ensure, at OS installation stage, that you don’t let the distro partitioner use your full device.
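As a purely illustrative sketch (the /dev/sda device name and the sizes are placeholders to adapt to your own hardware, and this is obviously destructive):
# GPT label, a 64 GB partition for the OS, the remainder reserved for a future Ceph OSD
$ parted -s /dev/sda mklabel gpt
$ parted -s /dev/sda mkpart os ext4 1MiB 64GiB
$ parted -s /dev/sda mkpart ceph 64GiB 100%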
Recommendation
As much as possible, we however recommend having dedicated disks for the Ceph cluster. An enterprise-grade setup would use some small SATA SSDs in RAID-1 for the OS and as many dedicated NVMe SSDs as Ceph-reserved data disks.
Inventory Management
If required, update your Kaktus instances in Ansible’s inventory.
Important
Note that for the first-time installation, private IPs from the inventory are to be replaced by the servers’ actual ones (or anything in place which allows for bootstrapping the machines).
The instances are now declared to be part of the kaktus, kaktus_eu_west and eu_west groups.
Network Configuration
We’ll instruct the Ansible collection to provision network settings through Netplan. Note that our example is pretty simple, with only a single network interface used for the private LAN and no link aggregation (which is recommended for enterprise-grade setups).
As the configuration is both instance-specific (private MAC address, IP address …) and region-specific (all Kaktus instances will likely do the same), and, as such, repetitive, we’ll use some Ansible overlaying.
We’ve already declared quite a few things at region level when creating the eu-west one.
Let’s now extend the ansible/inventories/group_vars/kaktus_eu_west/main.yml file with the following:
kowabunga_netplan_vlan_config_default:
# EU-WEST admin network
- name: vlan101
id: 101
link: "{{ kowabunga_host_vlan_underlying_interface }}"
ips:
- "{{ kowabunga_region_domain_admin_host_address }}/{{ kowabunga_region_domain_admin_network | ansible.utils.ipaddr('prefix') }}"
routes:
- to: default
via: "{{ kowabunga_region_domain_admin_router_address }}"
# EU-WEST storage network
- name: vlan102
id: 102
link: "{{ kowabunga_host_vlan_underlying_interface }}"
kowabunga_netplan_bridge_config_default:
- name: br0
interfaces:
- "{{ kowabunga_host_underlying_interface }}"
- name: br102
interfaces:
- vlan102
ips:
- "{{ kowabunga_region_domain_storage_host_address }}/{{ kowabunga_region_domain_storage_network | ansible.utils.ipaddr('prefix') }}"
routes:
- to: default
via: "{{ kowabunga_region_domain_storage_router_address }}"
metric: 200
# Region-generic configuration template, variables set at host level
kowabunga_netplan_config:
ethernet:
- name: "{{ kowabunga_host_underlying_interface }}"
mac: "{{ kowabunga_host_underlying_interface_mac }}"
vlan: |
{%- set res = kowabunga_netplan_vlan_config_default -%}
{%- for r in kowabunga_region_vlan_id_ranges[1:] -%}
{%- for id in range(r.from, r.to + 1, 1) -%}
{%- set dummy = res.extend([{"name": "vlan" + id | string, "id": id, "link": kowabunga_host_vlan_underlying_interface}]) -%}
{%- endfor -%}
{%- endfor -%}
{{- res -}}
bridge: |
{%- set res = kowabunga_netplan_bridge_config_default -%}
{%- for r in kowabunga_region_vlan_id_ranges[1:] -%}
{%- for id in range(r.from, r.to + 1, 1) -%}
{%- set dummy = res.extend([{"name": "br" + id | string, "interfaces": ["vlan" + id | string]}]) -%}
{%- endfor -%}
{%- endfor -%}
{{- res -}}
As for Kiwi previously, this looks like a dirty Jinja hack but it actually comes in handy, saving you from copy/paste mistakes by iterating over all VLANs and bridges. We’ll still need to add instance-specific variables, by extending the ansible/inventories/host_vars/kaktus-eu-west-a-1.yml file:
kowabunga_host_underlying_interface: eth0
kowabunga_host_underlying_interface_mac: "aa:bb:cc:dd:ee:ff"
kowabunga_host_vlan_underlying_interface: eth0
kowabunga_region_domain_admin_host_address: 10.50.101.11
kowabunga_region_domain_storage_host_address: 10.50.102.11
You’ll need to ensure that the physical interface, MAC address and host admin+storage network addresses are correctly set, depending on your setup. Once done, you can do the same for the alternate Kaktus instances in ansible/inventories/host_vars/kaktus-eu-west-a-{2,3}.yml files.
Extend the ansible/inventories/group_vars/kaktus/main.yml file with the following to ensure generic settings are propagated to all Kaktus instances:
kowabunga_netplan_disable_cloud_init: true
kowabunga_netplan_apply_enabled: true
Information
Note that setting kowabunga_netplan_disable_cloud_init is an optional step. If you’d like to keep whatever configuration cloud-init has previously set, it’s all fine (but it’s always recommended not to have a dual source of truth).
Information
Note that, unlike Kiwi instances, service VLAN (201 to 209) interfaces will be left unconfigured (i.e. no IP address). None is actually needed, as we’re creating bridge interfaces on top, which are meant for further Kompute virtual instances to bind the appropriate underlying VLAN interface.
Storage Setup
It is now time to set up the Ceph cluster! As complex as it may sound (and it is), Ansible will populate everything for you.
So let’s start by defining a new cluster identifier and associated region, through ansible/inventories/group_vars/kaktus_eu_west/main.yml file:
kowabunga_ceph_fsid: "YOUR_CEPH_REGION_FSID"
kowabunga_ceph_group: kaktus_eu_west
The FSID is a simple UUID. Its only constraint is to be unique amongst your whole network (should you have multiple Ceph clusters). Keep track of it, we’ll need to push this information to the Kowabunga DB later on.
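Any standard UUID generator will do, e.g. uuidgen from util-linux:
$ uuidgen
1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d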
Monitors and Managers
A Ceph cluster comes with several nodes acting as monitors. Simply put, they expose the Ceph cluster API. You don’t need all nodes to be monitors. One is enough, while 3 are recommended, for high-availability and workload distribution. Each Kaktus instance can be turned into a Ceph monitor node.
One simply needs to declare so in the ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml instance-specific file:
kowabunga_ceph_monitor_enabled: true
kowabunga_ceph_monitor_listen_addr: "{{ kowabunga_region_domain_storage_host_address }}"
Information
Having more than 3 monitors in your cluster is not necessarily useful. If you have more than 3 instances in your region and need to choose, simply pick the most powerful servers (hardware-wise) and let them do the heavy lifting.
Also be sure that those nodes (and those nodes only!) are defined in Kiwi’s regional DNS configuration under the ceph record.
A Ceph cluster also comes with managers. As in real life, they don’t do much ;-) Or at least, they’re not as vital as monitors. They however expose various metrics. Having one is nice, having more will only help with failover. As for monitors, one can enable it for a Kaktus in the ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml instance-specific file:
kowabunga_ceph_manager_enabled: true
and its related administration password in ansible/inventories/group_vars/kaktus.sops.yml file:
secret_kowabunga_ceph_manager_admin_password: PASSWORD
This will help you connect to Ceph cluster WebUI, which is always handy when troubleshooting is required.
Authentication keyrings
Ansible will also generate specific keyrings at cluster bootstrap. Once generated, these keyrings will be locally stored (for you to add to source control) and deployed to further nodes.
So let’s define where to store these files in ansible/inventories/group_vars/kaktus/main.yml file:
kowabunga_ceph_local_keyrings_dir: "{{ playbook_dir }}/../../../../../files/ceph"
Once provisioned, you’ll end up with a regional sub-directory (e.g. eu-west), containing 3 files:
- ceph.client.admin.keyring
- ceph.keyring
- ceph.mon.keyring
Important
These files are keyrings and extremely sensitive. Anyone with access to these files and your private network gets full administrative control over the Ceph cluster.
So keep track of them, but do it smartly. As they are plain-text, let’s ensure you don’t store them on Git that way.
A good move is to have them SOPS encrypted:
$ kobra secrets encrypt ceph.client.admin.keyring
$ mv ceph.client.admin.keyring ceph.client.admin.keyring.sops
before being pushed. Ansible will automatically decrypt them on the fly, should they end up with .sops extension.
Disks provisioning
Next step is about disks provisioning. Your cluster will contain several disks from several instances (the ones you’ve either partitioned or left untouched at pre-requisite stage). Each instance may have different topology, different disks, different sizes etc … Disks (or partitions, whatever) are each managed by a Ceph OSD daemon.
So we need to reflect this topology into each instance-specific ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml file:
kowabunga_ceph_osds:
- id: 0
dev: /dev/disk/by-id/nvme-XYZ-1
weight: 1.0
- id: 1
dev: /dev/disk/by-id/nvme-XYZ-2
weight: 1.0
For each instance, you’ll need to declare the disks that are going to be part of the cluster. The dev parameter simply maps to the device file itself (it is more than recommended to use the /dev/disk/by-id mapping instead of bogus /dev/nvme0nX naming, which can change across reboots). The weight parameter is used by the Ceph scheduler for object placement and corresponds to each disk’s size in TB (e.g. a 1.92 TB SSD would have a 1.92 weight). And finally, the id identifier might be the most important of all. This is the UNIQUE identifier across your Ceph cluster. Whichever disk ID you use, you need to ensure that no other disk in any other instance uses the same identifier.
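To figure out the stable by-id paths of your disks, a quick listing on each Kaktus node does the trick (output is obviously hardware-specific, and the grep filters are just a convenience to hide partition entries):
$ ls -l /dev/disk/by-id/ | grep nvme | grep -v part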
Data Pools
Once we have disks aggregated, we must create data pools on top. Data pools are a logical way to segment your global Ceph cluster usage. Definition can be made in ansible/inventories/group_vars/kaktus_eu_west/main.yml file, as:
kowabunga_ceph_osd_pools:
- name: rbd
ptype: rbd
pgs: 256
replication:
min: 1
request: 2
- name: nfs_metadata
ptype: fs
pgs: 128
replication:
min: 2
request: 3
- name: nfs_data
ptype: fs
pgs: 64
replication:
min: 1
request: 2
- name: kubernetes
ptype: rbd
pgs: 64
replication:
min: 1
request: 2
In that example, we’ll create 4 data pools:
- 2 of type rbd (RADOS block device), to be further used by KVM or a future Kubernetes cluster to provision virtual block device disks.
- 2 of type fs (filesystem), to be further used as underlying NFS storage backend.
Each pool relies on Ceph Placement Groups for distributing object fragments across the disks of the cluster. There’s no rule of thumb on how many one needs. It depends on your cluster size, its number of disks, its replication factor and many more parameters. You can get some help from the Ceph PG Calculator to set an appropriate value.
The replication parameter controls the cluster’s data redundancy. The bigger the value, the more replicated data will be (and the less prone to disaster you will be), but the less usable space you’ll get.
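As a rough illustration, ignoring Ceph’s own overhead: 3 Kaktus nodes with two 1.92 TB disks each amount to roughly 11.5 TB of raw space; a replication request of 2 leaves about 5.7 TB usable, while a replication of 3 would bring it down to about 3.8 TB.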
File Systems
Should you be willing to share your Ceph cluster as a distributed filesystem (e.g. with the Kylo service), you’ll need to enable CephFS support.
Once again, this can be enabled through instance-specific definition in ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.yml file:
kowabunga_ceph_fs_enabled: true
and more globally in ansible/inventories/group_vars/kaktus/main.yml
kowabunga_ceph_fs_filesystems:
- name: nfs
metadata_pool: nfs_metadata
data_pool: nfs_data
default: true
fstype: nfs
where we’d instruct Ceph to use our two previously created pools for underlying storage.
Storage Clients
Finally, we must declare the clients allowed to connect to our Ceph cluster. We don’t really expect remote users to connect, only libvirt instances (and possibly Kubernetes instances, should we deploy such), so declaring these in the ansible/inventories/group_vars/kaktus/main.yml file should be enough:
kowabunga_ceph_clients:
- name: libvirt
caps:
mon: "profile rbd"
osd: "profile rbd pool=rbd"
- name: kubernetes
caps:
mon: "profile rbd"
osd: "profile rbd pool=kubernetes"
mgr: "profile rbd pool=kubernetes"
Kaktus Agent
Finally, let’s take care of the Kaktus agent. The agent will establish its secure WebSocket connection to Kahuna, receive configuration changes from it, and apply them accordingly.
Now remember that we previously used TF to register new Kaktus agents. Once applied, emails were sent for each instance with a set of agent identifier and API key. These values now have to be provided to Ansible, as these are going to be the credentials used by Kaktus agent to connect to Kahuna.
So let’s edit each Kaktus instance’s secrets file, respectively the ansible/inventories/host_vars/kaktus-eu-west-a-{1,2,3}.sops.yml files:
secret_kowabunga_kaktus_agent_id: AGENT_ID_FROM_KAHUNA_EMAIL_FROM_TF_PROVISIONING_STEP
secret_kowabunga_kaktus_agent_api_key: AGENT_API_KEY_FROM_KAHUNA_EMAIL_FROM_TF_PROVISIONING_STEP
Ansible Deployment
We’re finally done with Kaktus’s configuration. All we need to do now is run Ansible to make things live. This is done by invoking the kaktus playbook from the kowabunga.cloud collection:
$ kobra ansible deploy -p kowabunga.cloud.kaktus
We’re all set with the infrastructure setup.
One last step of services provisioning and we’re done!
7 - Provisioning Services
Infrastructure is finally all set. We only need to finalize the setup of a few services (from Kahuna’s perspective) and we’re done.
Storage Pool
Let’s update your TF configuration to simply declare the following:
locals {
ceph_port = 3300
eu-west = {
pools = {
"eu-west-ssd" = {
desc = "SSD"
secret = "YOUR_CEPH_FSID",
cost = 200.0,
type = "rbd",
pool = "rbd",
address = "ceph",
default = true,
agents = [
"kaktus-eu-west-a-1",
"kaktus-eu-west-a-2",
"kaktus-eu-west-a-3",
]
},
}
}
}
resource "kowabunga_storage_pool" "eu-west" {
for_each = local.eu-west.pools
region = kowabunga_region.eu-west.id
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
pool = each.value.pool
address = each.value.address
port = try(each.value.port, local.ceph_port)
secret = try(each.value.secret, "")
price = try(each.value.cost, null)
currency = local.currency
default = try(each.value.default, false)
agents = [for agent in try(each.value.agents, []) : kowabunga_agent.eu-west[agent].id]
}
What we’re doing here is instructing Kahuna that there’s a Ceph storage pool that can be used to provision RBD images. It will connect to the ceph DNS record on port 3300, and use one of the 3 agents defined to connect to the rbd pool. It’ll also arbitrarily (as we did for Kaktus instances) set the global storage pool price to 200 EUR / month, so virtual resource cost computation can happen.
Warning
Take care of updating the YOUR_CEPH_FSID secret value with the one you’ve set in the Ansible kowabunga_ceph_fsid variable. Libvirt won’t be able to reach the cluster without this information.
And apply:
$ kobra tf apply
NFS Storage
Now, if you previously created an NFS endpoint and want to expose it through Kylo services, you’ll also need to set up the following TF resources:
locals {
ganesha_port = 54934
eu-west = {
nfs = {
"eu-west-nfs" = {
desc = "NFS Storage Volume",
endpoint = "ceph.storage.eu-west.acme.local",
fs = "nfs",
backends = [
"10.50.102.11",
"10.50.102.12",
"10.50.102.13",
],
default = true,
}
}
}
}
resource "kowabunga_storage_nfs" "eu-west" {
for_each = local.eu-west.nfs
region = kowabunga_region.eu-west.id
name = each.key
desc = "${local.eu-west.desc} - ${each.value.desc}"
endpoint = each.value.endpoint
fs = each.value.fs
backends = each.value.backends
port = try(each.value.port, local.ganesha_port)
default = try(each.value.default, false)
}
In the very same way, this simply instructs Kahuna how to access NFS resources and provide Kylo services. You must ensure that the endpoint and backends values map to your local storage domain and associated Kaktus instances. They’ll be further used by Kylo instances to create NFS shares over Ceph.
And again, apply:
$ kobra tf apply
OS Image Templates
And finally, let’s declare OS image templates. Without those, you won’t be able to spin up any kind of Kompute virtual machine instances after all. Image templates must be ready-to-boot, cloud-init compatible, and either in QCOW2 (smaller to download, preferred) or RAW format.
Up to you to use pre-built community images or host your own custom one on a public HTTP server.
Warning
Note that the URL must be reachable from the Kaktus nodes, not the Kahuna one (so it can be on a private network).
The module however does not support authentication at the moment, so images must be “publicly” available.
locals {
# WARNING: these must be in either QCOW2 (recommended) or RAW format
# Example usage for conversion, if needed:
# $ qemu-img convert -f qcow2 -O raw ubuntu-22.04-server-cloudimg-amd64.img ubuntu-22.04-server-cloudimg-amd64.raw
templates = {
"ubuntu-cloudimg-generic-24.04" = {
desc = "Ubuntu 24.04 (Noble)",
source = "https://cloud-images.ubuntu.com/noble/20250805/noble-server-cloudimg-amd64.img"
default = true
}
}
}
resource "kowabunga_template" "eu-west" {
for_each = local.templates
pool = kowabunga_storage_pool.eu-west["eu-west-ssd"].id
name = each.key
desc = each.value.desc
os = try(each.value.os, "linux")
source = each.value.source
default = try(each.value.default, false)
}
At creation, declared images will be downloaded by one of the Kaktus agents and stored into the Ceph cluster. After that, one can simply reference them by name when creating Kompute instances.
Warning
Depending on the remote source, image size and your network performance, retrieving images can take a significant amount of time (several minutes). The TF provider is set to use a 30-minute timeout by default. Update it accordingly if you believe this won’t be enough.
Congratulations, you’re now done with administration tasks and infrastructure provisioning. You now have a fully working Kowabunga setup, ready to be consumed by end users.
Let’s then provision our first project!