Kowabunga provides more than just raw access to infrastructure resources. It features various ready-to-consume, as-a-service extensions to easily bring life to your application and automation deployment needs.
Services
1 - Kawaii Internet Gateway
Kawaii is your project’s private Internet Gateway, with complete ingress/egress control. It stands for Kowabunga Adaptive WAn Intelligent Interface (if you have better ideas, we’re all ears ;-) ).
It is the network gateway to your private network. All Kompute (and other service) instances use Kawaii as their default gateway, which relays all of their traffic.
Kawaii itself relies on the underlying region’s Kiwi SD-WAN nodes to provide access to both public networks (i.e. Internet) and possibly other projects’ private subnets (when requested).
Kawaii is always the first service to be created (more exactly, other instances’ cloud-init boot sequences will likely wait until they get proper network connectivity, which Kawaii provides). Being critical to your project’s resilience, Kawaii uses Kowabunga’s concept of Multi-Zone Resources (MZR) to ensure that, when the requested region features multiple availability zones, a project’s Kawaii instance gets created in each zone.
Using multiple floating virtual IP (VIP) addresses with per-zone affinity, this guarantees that all instantiated services will always be able to reach their associated network router. Whenever possible, thanks to weighted routes, service instances will target their zone-local Kawaii instance, the best pick for latency. In the unfortunate event of a local zone failure, network traffic will automatically get routed to another zone’s Kawaii (with an affordable extra-millisecond penalty).
While obviously providing egress capability to all of a project’s instances, Kawaii can also be used as an ingress controller, exposed to the public Internet through a dedicated IPv4 address. Associated with a Konvey or Kalipso load-balancer, it makes it simple to expose your application publicly, as one would do with a Cloud provider.
Kowabunga’s API allows for complete control of ingress/egress capabilities through a built-in firewalling stack (deny-all filtering policy, with explicit port opening), as well as peering capabilities.
This allows you to interconnect your project’s private network using:
- VPC peering with other Kowabunga-hosted projects from the same region (network translation and routing being performed by the underlying Kiwi instances).
- IPsec peering with non-Kowabunga-managed projects and networks, from any provider.
Warning
Keep in mind that peering requires a bi-directional agreement: connection and possibly firewalling must be configured at both endpoints.
Note that, thanks to Kowabunga’s internal network architecture and on-premises network backbone, inter-zone traffic comes free of charge ;-) There’s no reason not to spread your resources across as many zones as possible: you won’t ever see a surprise charge at the end of the month.
Resource Creation
As a projectAdmin user, one can create a Kawaii Internet gateway for the acme project in the eu-west region the following way:
data "kowabunga_region" "eu-west" {
  name = "eu-west"
}

resource "kowabunga_kawaii" "gw" {
  project = kowabunga_project.acme.id
  region  = data.kowabunga_region.eu-west.id
}
You may refer to the Terraform provider documentation to extend the Kawaii gateway with VPC peering and custom egress/ingress/NAT rules.
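For illustration only, a custom ingress opening could look like the sketch below. The firewall, ingress, ports and protocol attribute names are assumptions modeled on the VPC peering syntax shown further down, so double-check the provider’s schema before using them:
resource "kowabunga_kawaii" "gw" {
  project = kowabunga_project.acme.id
  region  = data.kowabunga_region.eu-west.id

  # Hypothetical firewall block: explicitly allow inbound HTTPS,
  # everything else remains covered by the default deny-all policy.
  # Attribute names may differ in the actual provider schema.
  firewall = {
    ingress = [
      {
        ports    = "443"
        protocol = "tcp"
      },
    ]
  }
}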
VPC Peering
Kowabunga VPC peering allows you to interconnect two projects’ subnets. This can come in handy when you have two specific applications, managed by different sets of people, that still need to communicate with each other.
The following example extends our Kawaii gateway configuration to peer with two subnets:
- the underlying Ceph one, used to directly access storage resources.
- the one from the marvelous project, allowing bi-directional connectivity through the associated ingress/egress firewalling rules.
resource "kowabunga_kawaii" "gw" {
  project = kowabunga_project.acme.id
  region  = data.kowabunga_region.eu-west.id

  vpc_peerings = [
    {
      subnet = data.kowabunga_subnet.eu-west-ceph.id
    },
    {
      subnet = data.kowabunga_subnet.eu-west-marvelous.id
      egress = {
        ports    = "1-65535"
        protocol = "tcp"
      }
      ingress = {
        ports    = "1-65535"
        protocol = "tcp"
      }
      policy = "accept"
    },
  ]
}
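As a reminder, this peering only becomes effective once the marvelous project declares the mirror configuration on its own Kawaii gateway. A minimal sketch, assuming hypothetical kowabunga_project.marvelous and data.kowabunga_subnet.eu-west-acme references on that side:
resource "kowabunga_kawaii" "gw" {
  # declared within the marvelous project's own Terraform configuration
  project = kowabunga_project.marvelous.id
  region  = data.kowabunga_region.eu-west.id

  vpc_peerings = [
    {
      # hypothetical data source pointing back to acme's subnet
      subnet = data.kowabunga_subnet.eu-west-acme.id
      egress = {
        ports    = "1-65535"
        protocol = "tcp"
      }
      ingress = {
        ports    = "1-65535"
        protocol = "tcp"
      }
      policy = "accept"
    },
  ]
}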
Warning
Note that setting up VPC peering requires you to configure and allow connectivity on both projects’ ends. Network traffic is bi-directional and, as a security measure, one project cannot arbitrarily decide to peer with another one without mutual consent.
IPsec Peering
Alternatively, it is also possible to set up an IPsec peering connection with Kawaii, should you need to provide some admin users with remote access capabilities.
This allows connecting your private subnet with other premises or Cloud providers, so as to extend the reach of services hosted behind Kowabunga’s walls.
The example below extends our Kawaii instance with an IPsec connection to the ACME remote office. The remote IPsec peer’s public IP address is 5.6.7.8 and it exposes the private network 172.16.1.0/24.
resource "kowabunga_kawaii_ipsec" "office" {
  kawaii                      = kowabunga_kawaii.gw.id
  name                        = "ACME Office"
  desc                        = "Connect to ACME remote office over IPsec"
  pre_shared_key              = local.secrets.kowabunga.ipsec_office_psk
  remote_peer                 = "5.6.7.8"
  remote_subnet               = "172.16.1.0/24"
  phase1_dh_group_number      = 14
  phase1_integrity_algorithm  = "SHA512"
  phase1_encryption_algorithm = "AES256"
  phase2_dh_group_number      = 14
  phase2_integrity_algorithm  = "SHA512"
  phase2_encryption_algorithm = "AES256"
}
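Keep in mind that the pre-shared key and the phase 1/phase 2 proposals (DH group, integrity and encryption algorithms) must match the ones configured on the remote IPsec endpoint, otherwise tunnel negotiation will fail.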
Warning
It goes without saying, but setting up an IPsec tunnel requires you to:
- Expose both ends publicly.
- Configure tunnel connectivity both ways.
- Configure both ends’ firewalls, if necessary.
2 - Kompute Virtual Instance
Kowabunga Kompute is the incarnation of a virtual machine instance.
Associated with the underlying distributed block storage, it provides everything one needs to run generic application workloads.
Kompute instances can be created (and further edited) with complete granularity:
- number of virtual CPU cores.
- amount of virtual memory.
- one OS disk and any number of extra data disks.
- optional public (i.e. Internet) direct exposure.
Compared to major Cloud providers, which only offer pre-defined machine flavors (with X vCPUs and Y GB of RAM), you’re free to size machines to your exact needs.
Kompute instances are created in, and bound to, a specific region and zone, where they’ll remain. Kahuna orchestration will make sure to instantiate the requested machine on the best Kaktus hypervisor (at the time), and thanks to the underlying distributed storage, it can easily be migrated to any other hypervisor from the specified zone, for failover or balancing.
Kompute’s OS disk image is cloned from one of the various OS templates you’ll have provided Kowabunga with; thanks to thin provisioning and the underlying copy-on-write mechanisms, no disk space is ever claimed upfront. Feel free to allocate a 500 GB disk: it’ll never get consumed until you actually store data onto it!
Like any other service, Kompute instances are bound to a specific project, and consequently to its associated subnet, sealing them off from other projects’ reach. Private and public interface IP addresses are automatically assigned by Kahuna, as defined by the administrator, making instances ready to be consumed by end-users.
Resource Creation
As a projectAdmin user, one can create a Kompute virtual machine instance. The example below will spawn an instance with 8 vCPUs, 16 GB of RAM, a 64 GB OS disk and a 128 GB data disk, running Ubuntu 24.04 LTS, in the acme project, within the eu-west-a zone.
data "kowabunga_zone" "eu-west-a" {
  name = "eu-west-a"
}

resource "kowabunga_kompute" "server" {
  project    = kowabunga_project.acme.id
  name       = "acme-server"
  disk       = 64
  extra_disk = 128
  mem        = 16
  vcpus      = 8
  zone       = data.kowabunga_zone.eu-west-a.id
  template   = "ubuntu-cloudimg-generic-24.04"
}
Once created, subscribed users will get notified by email about Kompute instance details (such as private IP address, initial bootstrap admin credentials …).
DNS Record Association
Any newly created Kompute instance is automatically registered in the region-local Kiwi DNS server. This way, any query for its hostname (acme-server in the previous example) will be answered.
Alternatively, you may also want to create custom records, for example as aliases.
Let’s suppose you’d like the previously created instance to be an Active Directory controller, exposed as ad.acme.local from a DNS perspective. This can easily be done through:
resource "kowabunga_dns_record" "ad" {
  project   = kowabunga_project.acme.id
  name      = "ad"
  desc      = "Active-Directory"
  addresses = [resource.kowabunga_kompute.server.ip]
}
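Should several instances back the same alias, the addresses list can simply be extended. A sketch, assuming a hypothetical second kowabunga_kompute.server2 instance:
resource "kowabunga_dns_record" "ad" {
  project = kowabunga_project.acme.id
  name    = "ad"
  desc    = "Active-Directory"
  addresses = [
    resource.kowabunga_kompute.server.ip,
    # hypothetical second controller, for illustration only
    resource.kowabunga_kompute.server2.ip,
  ]
}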
Information
Note that it is possible to set more than one IP address. If so, the Kiwi DNS server will return all of them, to be used as the client prefers (usually in a round-robin fashion).
3 - Konvey NLB
Konvey is a plain simple network Layer-4 (UDP/TCP) load-balancer.
Its only goal is to accept remote traffic and ship it to one of the configured application backends, through a round-robin algorithm (with health-check support).
Konvey can either be used to:
- load-balance traffic from private network to private network
- load-balance traffic from public network (i.e. Internet) to private network, in association with Kawaii. In such a scenario, Kawaii holds the public IP address exposure and routes public traffic to Konvey instances through NAT settings (see the sketch right after this list).
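For the public-facing scenario, the forwarding would be declared on the Kawaii side. The sketch below is illustrative only: the nat attribute and its fields, as well as the Konvey ip output, are assumptions to be checked against the Terraform provider documentation:
resource "kowabunga_kawaii" "gw" {
  project = kowabunga_project.acme.id
  region  = data.kowabunga_region.eu-west.id

  # Hypothetical NAT rule: forward public TCP/443 traffic to the
  # acme-lb Konvey instance created below. Attribute names may
  # differ in the actual provider schema.
  nat = [
    {
      ports       = "443"
      protocol    = "tcp"
      destination = kowabunga_konvey.lb.ip
    },
  ]
}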
As with Kawaii, Konvey uses Kowabunga’s concept of Multi-Zone Resources (MZR) to ensure that, when the requested region features multiple availability zones, a project’s Konvey instance gets created in each zone, making it highly resilient.
Warning
Being a Layer-4 network load-balancer, Konvey passes any SSL/TLS traffic straight through to the configured backends. No traffic inspection is ever performed.
Resource Creation
As a projectAdmin user, one can create a Konvey load-balancer instance. The example below will spawn a load-balancer named acme-lb in the eu-west region for the acme project, forwarding all TCP traffic received on port 443 to the acme-server backend instance (on port 443 as well).
data "kowabunga_region" "eu-west" {
  name = "eu-west"
}

resource "kowabunga_konvey" "lb" {
  name     = "acme-lb"
  project  = kowabunga_project.acme.id
  region   = data.kowabunga_region.eu-west.id
  failover = true

  endpoints = [
    {
      name         = "HTTPS"
      protocol     = "tcp"
      port         = 443
      backend_port = 443
      backend_ips  = [kowabunga_kompute.server.ip]
    }
  ]
}
Information
Arguably, having a load-balancer with a single backend, as in this example, is pretty useless. One can obviously specify more than one backend IP address so that effective round-robin dispatching applies.
4 - Kylo NFS
Kylo is Kowabunga’s incarnation of NFS. While all Kompute instances have their own local block-device storage disks, Kylo provides the capability to access network storage, shared amongst virtual machines.
Kylo fully implements the NFSv4 protocol, making it easy for Linux instances (and even Windows) to mount it without any specific tools.
Under the hood, Kylo relies on an underlying CephFS volume, exposed by the Kaktus nodes, making it natively distributed and resilient (i.e. there’s no need to try to add extra HA on top).
Warning
Kylo access is restricted to the project’s private network. While all of your project’s instances can mount a Kylo endpoint, it can’t be reached from the outside.
Resource Creation
As a projectAdmin user, one can create a Kylo network file-system instance. The example below will spawn an instance named acme-nfs in the eu-west region for the acme project.
data "kowabunga_region" "eu-west" {
  name = "eu-west"
}

resource "kowabunga_kylo" "nfs" {
  project = kowabunga_project.acme.id
  region  = data.kowabunga_region.eu-west.id
  name    = "acme-nfs"
  desc    = "ACME NFS share"
}