:.: Terraform + Proxmox + OpenBSD = <3

:. Little introduction

Terraform, an infrastructure-as-code tool, allows us to define and provision infrastructure resources across various platforms. In our case the platform is Proxmox, so we need the Terraform Provider for Proxmox (Telmate), which enables seamless integration between Terraform and Proxmox.

:. Installing and setup

We have Terraform in ports ($ doas pkg_add terraform), but not the Proxmox plugin/provider (I have a half-baked port of it), so you have two options: build it yourself (read Terraform's README at /usr/local/share/doc/pkg-readmes) or use Linux.
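
If you go the build-it-yourself route, the rough shape of it is below. This is a sketch I have not verified on OpenBSD: the exact build command and the local mirror layout may differ between versions, so check the pkg-readme and the provider's README. The idea is to build the provider with Go and drop the binary into Terraform's local plugin directory so "terraform init" finds it without hitting the registry:

$ git clone https://github.com/Telmate/terraform-provider-proxmox
$ cd terraform-provider-proxmox
$ go build -o terraform-provider-proxmox_v2.9.14 ./cmd/terraform-provider-proxmox
$ mkdir -p ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/2.9.14/openbsd_amd64
$ cp terraform-provider-proxmox_v2.9.14 \
    ~/.terraform.d/plugins/registry.terraform.io/telmate/proxmox/2.9.14/openbsd_amd64/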

Now that you have Terraform and the Proxmox plugin/provider, we need to create a user in Proxmox that allows our Terraform plan to interact with it. We'll do it from the command line, creating a role with enough privileges for Terraform and assigning it to the new user.

proxmox# pveum role add TerraformProv -privs "Datastore.AllocateSpace Datastore.Audit Pool.Allocate Sys.Audit Sys.Console Sys.Modify VM.Allocate VM.Audit VM.Clone VM.Config.CDROM VM.Config.Cloudinit VM.Config.CPU VM.Config.Disk VM.Config.HWType VM.Config.Memory VM.Config.Network VM.Config.Options VM.Migrate VM.Monitor VM.PowerMgmt"
proxmox# pveum user add terraform-prov@pve --password MySuperHardCorePassWord
proxmox# pveum aclmod / -user terraform-prov@pve -role TerraformProv
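
A quick aside: Proxmox can also issue API tokens, and the Telmate provider understands them (pm_api_token_id / pm_api_token_secret instead of pm_user / pm_password in the provider block we'll write below). If you prefer that over a plain password, something along these lines should create one for the user above (untested here, see pveum(1)):

proxmox# pveum user token add terraform-prov@pve terraform --privsep 0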

Now it's time to create a Terraform configuration file (usually with a `.tf` extension) that defines your Proxmox resources. Start by creating a new directory for your project and within it, create a file named `main.tf` to hold your configuration.

$ cd Terraform/OpenBSD
$ cat main.tf
terraform {
  required_version = ">= 0.14"
  required_providers {
    proxmox = {
        source = "telmate/proxmox"
    }
  }
}

provider "proxmox" {
    pm_tls_insecure = true
    pm_api_url = "https://192.168.0.100:8006/api2/json"
    pm_user = "terraform-prov@pve"
    pm_password = "MySuperHardCorePassWord"
}

Make sure to replace the "pm_api_url", "pm_user", and "pm_password" values with the appropriate ones for your Proxmox environment. This configuration tells Terraform how to connect to your Proxmox API.
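
If you would rather not keep the password sitting in main.tf, one option (a sketch, the variable name is my own choice) is to move it into a sensitive Terraform variable, which works since Terraform 0.14:

$ cat variables.tf
variable "pm_password" {
  type      = string
  sensitive = true
}

Then point the provider block at it (pm_password = var.pm_password) and export the value before running Terraform:

$ export TF_VAR_pm_password=MySuperHardCorePassWord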

Now it's time to create a Terraform config for our OpenBSD template in Proxmox.

$ cat openbsd.tf
resource "proxmox_vm_qemu" "openbsd" {
    # we'll create 3 servers
    count = 3

    # the name we'll give them
    # and a description
    name = "fart-init-0${count.index + 1}"
    desc = "OpenBSD fart-init test"

    # the proxmox server
    target_node = "proxmox"

    # the template to use
    clone = "OpenBSD-current"

    # if you installed qemu-agent on your template
    # change this to 1
    agent = 0

    # type of setup
    os_type = "cloud-init"

    # basic hw setup
    cores = 1
    sockets = 1
    cpu = "host"
    memory = 512
    scsihw = "lsi"

    # disk setup
    disk {
        size = "20G"
        type = "scsi"
        storage = "VMs"
        ssd = 1
        discard = "on"
    }

    # network
    network {
        model = "virtio"
        bridge = "vmbr0"
    }

    ipconfig0 = "ip=192.168.0.23${count.index + 1}/24,gw=192.168.0.1"
    nameserver = "192.168.0.1"
    ciuser = "gonzalo"
    cipassword = "fart-init"
    ssh_user  = "gonzalo"
    sshkeys = <<EOF
    ssh-ed25519 AAAIFKpaDFGAS54hJsUl3ljEWVKEghcQYg6OJn2mLRYXvx38V0SLwIrZD
    EOF
}

That is pretty much all we need. Keep in mind that I will not create the clones with bigger disks than the original or anything like that; I will keep them the same, and if I ever need more storage, I will just add one big disk and mount it on the server as an extra disk (see the sketch below). We don't have cloud-init for OpenBSD, but I am working on fart-init, a little script that will take all these parameters from Proxmox or Terraform, set up the server at boot time, restart itself and delete all the extra garbage inside of it (the next article will show an example of it).
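
For reference, when that day comes, the extra disk is just another disk block on the resource. A sketch reusing my "VMs" storage pool, with an arbitrary size; you still have to partition and mount it inside the VM yourself:

    disk {
        size    = "100G"
        type    = "scsi"
        storage = "VMs"
    }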

:. Provisioning all the things!

Now that we have defined our Proxmox resources, we want to provision them, and for that we mainly need three commands:

1. `terraform init`: This initializes your Terraform project, downloading any necessary dependencies, including the Proxmox provider plugin.

2. `terraform plan`: This command shows a preview of the changes Terraform will apply. It validates your configuration and ensures there are no errors.

3. `terraform apply`: Finally, this command applies the changes defined in your Terraform configuration to provision the Proxmox resources. You'll be prompted to confirm the changes (see the note on -out right after this list).
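
A note on that confirmation: `terraform plan` can also save the plan to a file with -out, and `terraform apply` will then execute exactly what you reviewed (the file name is arbitrary):

$ terraform plan -out=openbsd.tfplan
$ terraform apply openbsd.tfplan

In this article I just run plan and apply interactively:
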
$ pwd
/home/gonzalo/Terraform/OpenBSD
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of telmate/proxmox...
- Installing telmate/proxmox v2.9.14...
- Installed telmate/proxmox v2.9.14 (self-signed, key ID A9EBBE091B35AFCE)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform plan
proxmox_vm_qemu.openbsd[0]: Refreshing state... [id=proxmox/qemu/100]
proxmox_vm_qemu.openbsd[1]: Refreshing state... [id=proxmox/qemu/103]
proxmox_vm_qemu.openbsd[2]: Refreshing state... [id=proxmox/qemu/101]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.openbsd[0] will be created
  + resource "proxmox_vm_qemu" "openbsd" {
      + additional_wait           = 5
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = (known after apply)
      + bootdisk                  = (known after apply)
      + cipassword                = (sensitive value)
      + ciuser                    = "gonzalo"
      + clone                     = "OpenBSD-current"
      + clone_wait                = 10
      + cores                     = 1
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + desc                      = "OpenBSD fart-init test"
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + ipconfig0                 = "ip=192.168.0.231/24,gw=192.168.0.1"
      + kvm                       = true
      + memory                    = 512
      + name                      = "fart-init-01"
      + nameserver                = "192.168.0.1"
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "cloud-init"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = "lsi"
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + ssh_user                  = "gonzalo"
      + sshkeys                   = <<-EOT
            ssh-ed25519 AAAIFKpaDFGAS54hJsUl3ljEWVKEghcQYg6OJn2mLRYXvx38V0SLwIrZD
        EOT
      + tablet                    = true
      + target_node               = "proxmox"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)

      + disk {
          + backup             = true
          + cache              = "none"
          + discard            = "on"
          + file               = (known after apply)
          + format             = (known after apply)
          + iops               = 0
          + iops_max           = 0
          + iops_max_length    = 0
          + iops_rd            = 0
          + iops_rd_max        = 0
          + iops_rd_max_length = 0
          + iops_wr            = 0
          + iops_wr_max        = 0
          + iops_wr_max_length = 0
          + iothread           = 0
          + mbps               = 0
          + mbps_rd            = 0
          + mbps_rd_max        = 0
          + mbps_wr            = 0
          + mbps_wr_max        = 0
          + media              = (known after apply)
          + replicate          = 0
          + size               = "20G"
          + slot               = (known after apply)
          + ssd                = 1
          + storage            = "VMs"
          + storage_type       = (known after apply)
          + type               = "scsi"
          + volume             = (known after apply)
        }

      + network {
          + bridge    = "vmbr0"
          + firewall  = false
          + link_down = false
          + macaddr   = (known after apply)
          + model     = "virtio"
          + queues    = (known after apply)
          + rate      = (known after apply)
          + tag       = -1
        }
    }
--- 2 more times ---
Plan: 3 to add, 0 to change, 0 to destroy.

────────────────────────────────────────────────────── 

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
$ terraform apply
proxmox_vm_qemu.openbsd[1]: Refreshing state... [id=proxmox/qemu/103]
proxmox_vm_qemu.openbsd[0]: Refreshing state... [id=proxmox/qemu/100]
proxmox_vm_qemu.openbsd[2]: Refreshing state... [id=proxmox/qemu/101]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.openbsd[0] will be created
  + resource "proxmox_vm_qemu" "openbsd" {
      + additional_wait           = 5
      + agent                     = 0
      + automatic_reboot          = true
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = (known after apply)
      + bootdisk                  = (known after apply)
      + cipassword                = (sensitive value)
      + ciuser                    = "gonzalo"
      + clone                     = "OpenBSD-current"
      + clone_wait                = 10
      + cores                     = 1
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + desc                      = "OpenBSD fart-init test"
--- MORE OUTPUT OF YOUR MACHINES ---

Plan: 3 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_vm_qemu.openbsd[1]: Creating...
proxmox_vm_qemu.openbsd[0]: Creating...
proxmox_vm_qemu.openbsd[2]: Creating...
proxmox_vm_qemu.openbsd[1]: Still creating... [10s elapsed]
proxmox_vm_qemu.openbsd[0]: Still creating... [10s elapsed]
proxmox_vm_qemu.openbsd[2]: Still creating... [10s elapsed]
proxmox_vm_qemu.openbsd[1]: Still creating... [20s elapsed]

--- YOU WAIT A BIT DEPENDING ON YOUR HW ---

proxmox_vm_qemu.openbsd[2]: Creation complete after 6m56s [id=proxmox/qemu/103]
proxmox_vm_qemu.openbsd[0]: Creation complete after 6m58s [id=proxmox/qemu/101]
proxmox_vm_qemu.openbsd[1]: Still creating... [7m0s elapsed]
proxmox_vm_qemu.openbsd[1]: Creation complete after 7m0s [id=proxmox/qemu/100]

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

If you go to your dashboard now, you will see these 3 new machines running.

the fart-cloud is real! the fart-cloud is real!

Let's check one of them. You will see the IPs in the "plan" output; in my case one of them is 192.168.0.233:

$ ssh gonzalo@192.168.0.233
The authenticity of host '192.168.0.233 (192.168.0.233)' can't be established.
ED25519 key fingerprint is SHA256:N3BcURmd7H9kEwTz6MLqsDvZtowr+pXJcck5yz8VDQc.
+--[ED25519 256]--+
|            oEB=.|
|         . . .==.|
|        . o  .=..|
|        .o+ .. +o|
|       .S*oo ..oo|
|      . *.+...  o|
|       *o+  .   .|
|     ..o*o..     |
|   .o..==+...    |
+----[SHA256]-----+
No matching host key fingerprint found in DNS.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.0.233' (ED25519) to the list of known hosts.
gonzalo@192.168.0.233's password:
Last login: Mon May  8 16:54:36 2023
OpenBSD 7.3-current (GENERIC.MP) #1167: Thu Apr 27 14:11:51 MDT 2023

Welcome to OpenBSD: The proactively secure Unix-like operating system.

Please use the sendbug(1) utility to report bugs in the system.
Before reporting a bug, please try to reproduce it with the latest
version of the code.  With bug reports, please try to ensure that
enough information to reproduce the problem is enclosed, and if a
known fix for it exists, include that as well.

$ df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/sd0a      1.9G    106M    1.7G     6%    /
/dev/sd0d      989M   26.0K    940M     1%    /home
/dev/sd0e      497M    8.0K    473M     1%    /tmp
/dev/sd0g      2.9G    1.3G    1.5G    47%    /usr
/dev/sd0h      580M    288M    263M    53%    /usr/X11R6
/dev/sd0j      3.8G    218K    3.6G     1%    /usr/local
/dev/sd0f      1.9G    9.5M    1.8G     1%    /var
/dev/sd0i      989M    302K    940M     1%    /var/log
$

That's all! We are done! Next time, we'll try to set up our new VMs using fart-init!