Home Lab Config Update – Proxmox vs. vSphere – Update 2

Proxmox had a lot of promise for my home lab. Most compelling were the lack of license renewals, good support for cloud-init, and naturally good performance for Linux workloads. But I had hoped to nest vSphere for those times when I want to test vSphere workloads, and that turned out to be a no-go. Of course, nesting isn't an intended use case for Proxmox, just as nesting Proxmox inside vSphere isn't, so I'm not faulting it for this.

But I need to test vSphere workloads, and I can't get sufficient performance out of the nested setup to do that. So it's back to vSphere. I'm not bummed about it; I know and like vSphere. It just means some extra management tasks and overhead during the times I'm not specifically working with vSphere.

What I ran into was incredibly poor disk IOPS and latency when running VMs on a nested ESXi host. So much so that a k8s etcd service was unable to complete a single read or write before hitting the default 100ms timeout. A rudimentary IOzone test showed roughly 1/10th the performance of a Linux host running at the base Proxmox level.
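If you want a quick sanity check of storage latency without installing IOzone, something like the following stdlib-only Python probe can approximate what etcd is sensitive to: small sequential writes followed by an fsync. This is my own hypothetical sketch, not the test from the post, and the 100ms figure is only meaningful as a rough comparison point against etcd's defaults.

```python
import os
import statistics
import tempfile
import time


def fsync_latencies(n=50, size=4096, path=None):
    """Write `size` bytes and fsync, `n` times; return per-op latency in ms.

    Mimics the write-then-sync pattern of an etcd WAL append. Run it once
    on the hypervisor host and once inside the nested guest to compare.
    """
    latencies = []
    fd, tmp = tempfile.mkstemp(dir=path)
    try:
        block = os.urandom(size)
        for _ in range(n):
            start = time.perf_counter()
            os.write(fd, block)
            os.fsync(fd)  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
        os.remove(tmp)
    return latencies


if __name__ == "__main__":
    lat = fsync_latencies()
    print(f"p50={statistics.median(lat):.2f}ms max={max(lat):.2f}ms")
```

On healthy local storage the median should sit well under 100ms; in my nested ESXi guests, numbers in that neighborhood (or worse) are exactly what starved etcd.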

Farewell Proxmox, hello again vSphere. Once I get enough patience-reserve built back up, I'll reinstall vSphere on the home server, reconfigure everything, and then set up a demo for my next post covering Cluster API with the CAPV and CAPA providers to incorporate on-prem with AWS in a CI/CD workflow with canary analysis.