
Zuthof.nl Blog » Homelab
A blog about Virtualization, Cloud, and Tech. Hi, my name is Daniël Zuthof. I mainly started this blog to have a place to share my ideas and to give knowledge back to the community. Posts will mainly be based upon my personal interest in tech, things I stumble upon at work or in the homelab, and ideas from customer engagements.
4d ago
Once in a while, new hardware is released that makes a difference. Such a device is the Intel Optane series of high-performance SSDs for professional use, which was released in late 2017. In this case I'm talking about the Intel Optane P4800X and P5800X and their consumer counterparts (900P and 905P). All drives are based on the 3D XPoint technology that Intel co-developed with Micron.
Contrary to regular SSDs, Optane drives like the P5800X bring ultra-low latency, high durability and high performance to the table. Effectively, Optane is a technology that has aspects of both DRAM and regu ..read more
1M ago
You may recognize this: you've updated your ESXi hosts via vSphere Lifecycle Manager (vLCM) or a similar method, the host rebooted during remediation and… that's it. Your host is not reachable anymore.
That happened to my lab server last week while updating it to the latest ESXi 8.0 patch release. So what went wrong? In short, this happens when ESXi does not have an inbox driver for your NIC and the vLCM image configuration does not have the async NIC driver added. When the host is rebooted during the remediation process, the issue becomes obvious.
So how do you install a new NIC driver if the host is not ..read more
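As a rough illustration of the kind of fix involved (not necessarily the author's exact steps): with console access via the DCUI and the ESXi Shell enabled, an async NIC driver can be installed from an offline bundle that was copied to local storage beforehand. The bundle path and filename below are illustrative.
# In the DCUI, enable the ESXi Shell (Troubleshooting Options) and switch to it with Alt+F1.
# Assumes the async NIC driver offline bundle was copied to a local datastore earlier,
# for example via a USB stick; the filename is illustrative.
esxcli network nic list                                   # shows which NICs ESXi currently claims
esxcli software component apply -d /vmfs/volumes/datastore1/nic-driver-offline-bundle.zip
reboot                                                    # load the freshly installed driver module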
5M ago
NSX 4.0 onwards only supports vSphere 7 and later. In my homelab I'm still running ESXi 6.7 joined to a vCenter 8, because the RAID controller in my host is unsupported in ESXi 7+ and has no native drivers. Running an older ESXi version is fine for me, since I do most testing in a nested vSphere 8 lab.
The issue
During the rebuild of my homelab I want to run NSX 4.0. The NSX Manager deploys and runs just fine as a VM on ESXi 6.7. The Edge nodes are another thing: when deploying an Edge node on ESXi 6.7 U3 (latest) via NSX Manager, the deployment fails with the error:
Ovf deploy for vm <edge name> ..read more
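One commonly documented workaround for Edge deployments that fail through NSX Manager, and not necessarily the route taken here, is to deploy the Edge appliance manually with ovftool. The names, addresses and properties below are placeholders, and several required OVF properties (passwords, DNS, NTP) are omitted for brevity.
# Manual NSX Edge deployment with ovftool, straight to an ESXi host (all values are placeholders)
ovftool --name=edge-01 --deploymentOption=medium --datastore=datastore1 \
  --net:"Network 0=Mgmt-PG" --X:injectOvfEnv --allowExtraConfig --powerOn \
  --prop:nsx_hostname=edge-01.lab.local --prop:nsx_ip_0=192.168.1.21 \
  --prop:nsx_netmask_0=255.255.255.0 --prop:nsx_gateway_0=192.168.1.1 \
  nsx-edge.ova vi://root@esxi-01.lab.local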
5M ago
In most cases, VMware Tools is available as part of ESXi. When an ESXi host is updated, the bundled VMware Tools may be updated as well, but usually not to the latest version. In this post I'll show how the VMware Tools version on your ESXi hosts can be updated to the latest available release when it contains critical bug or security fixes.
This post is not about getting VMware Tools up to date within your VMs. The scope is to install the latest Tools version on your ESXi hosts, so it can then be installed in the VMs using your preferred deployment method.
Methods of updating the Tools
GuestStore method
GuestStore is one of the newer methods ..read more
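Besides GuestStore, the classic route is an offline bundle. A minimal sketch, assuming the standalone VMware Tools offline depot zip was downloaded and copied to a datastore (filename and version are illustrative):
# Show the Tools package currently installed on the host
esxcli software vib list | grep tools-light
# Install a newer VMware Tools offline depot (filename/version illustrative); typically no host reboot needed
esxcli software vib install -d /vmfs/volumes/datastore1/VMware-Tools-12.3.5-core-offline-depot-ESXi-all.zip
# Verify the new version is now offered to the VMs
esxcli software vib list | grep tools-light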
5M ago
This holiday season I had some time to install some much-needed ESXi updates in my homelab. Since I have a small number of hosts to patch, I often update them manually instead of using vLCM (or previously VUM) as you would in a production-grade environment. The update in my case consisted of two parts:
Update ESXi
Update ESXi drivers
The main reason for creating this post is that since vSphere 6.5, updating ESXi hosts via the CLI needs to be done with Image Profiles, which is the supported method. This post is not about patching ESXi via vLCM or VUM.
Updating ESXi
I don’t know why but it was ..read more
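For reference, a minimal sketch of such an Image Profile based update against VMware's online depot, assuming the host has outbound HTTPS access and is already in maintenance mode; the profile name is only an example and should be taken from the profile list output:
# Allow outbound HTTP/HTTPS so the host can reach the online depot
esxcli network firewall ruleset set -e true -r httpClient
# List the image profiles available in the depot and pick the one you need
esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep standard
# Update the host to the chosen profile (name below is an example), then clean up and reboot
esxcli software profile update -p ESXi-8.0U2b-23305546-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
esxcli network firewall ruleset set -e false -r httpClient
reboot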
6M ago
My latest homelab addition is the Maxtang NX6412 NUC, which was given away by VMware together with Cohesity (many thanks to both!) at VMware Explore to all vExperts who registered for the gift. After adding RAM and a SATA SSD, it was time to run ESXi 8 on it.
The specs of this neat, passively cooled box are:
Intel Celeron J6412 (4c @ 2.0 GHz, 2.6 GHz boost)
10 W TDP
2x DDR4 PC-3200 SO-DIMM slots
2x 1 Gb Realtek 8111H PCIe NICs
Wi-Fi, Bluetooth
2x HDMI 2.0
3x USB 2.0 (including a front-facing USB-C port)
2x USB 3.2
The issue with the Maxtang NUC is that both PCIe-based Realtek NICs are not supported by ESXi 7 ..read more
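The excerpt breaks off here, but the post below points at the USB Network Native Driver for ESXi fling as the way around unsupported onboard NICs. A rough sketch of installing that fling on an already running host, assuming the component zip was copied to a datastore (filename illustrative):
# Install the USB Network Native Driver fling component (filename illustrative) and reboot
esxcli software component apply -d /vmfs/volumes/datastore1/ESXi80U1-VMKUSB-NIC-FLING-component.zip
reboot
# After the reboot the USB NIC should be listed, typically as vusb0
esxcli network nic list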
6M ago
Once in a while, ESXi needs to be installed on a device whose drivers (NIC/storage) are not built into the ISO installer (yet). In my case this was needed for the vExpert homelab Maxtang NUC. This device has Realtek PCIe NICs that are not supported by ESXi 8. Because of that limitation, I used a Realtek USB-based NIC, supported by the USB Network Native Driver for ESXi fling.
When a network driver is not available to complete the creation of the default vSwitch and management network kernel port during the ESXi install phase, the ESXi installer quits with the message that no suitable NIC ..read more
1y ago
While troubleshooting a failed SDDC Manager deploy task in Cloud Foundation 4.4 together with VMware support, the engineer showed a way to update the SDDC bring-up parameters. This can be very helpful because it allows restarting a failed SDDC deployment with updated parameters, which avoids redeploying the physical servers intended for the management domain and reverting Cloud Builder to a previous snapshot.
Solution
The Cloud Builder VM has a so-called Supportability and Serviceability (SoS) utility available, located at "/opt/vmware/sddc-support/sos". One of its features is to update ..read more
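The specific SoS option used for updating the bring-up parameters falls behind the cut, but the utility itself can simply be inspected on the Cloud Builder VM at the path mentioned above:
# On the Cloud Builder VM, list the options the SoS utility offers
sudo /opt/vmware/sddc-support/sos --help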
1y ago
Recently I moved to the latest Cloud Foundation version (v4.4) in a lab environment that I re-deploy often. I noticed that during the Cloud Builder bring-up phase, the deployment of the SDDC Manager has failed ever since, while using the same input parameters in the deployment parameters workbook. So it worked in 4.1.x and 4.2.x, but not in 4.4 (I did not test 4.3). Strange, right?
Symptoms
Cloud Builder was well underway when the error "SDDC Manager VM <vm name> is not yet up" appeared. Cloud Builder tried re-deploying the SDDC Manager a couple of times, but eventually stopped with the error below in the Cloud Builde ..read more
1y ago
One of my Raspberry Pis runs Ubuntu Server 20.04 (Focal) and has a static IP interface configuration using systemd. The Pi is configured in the same way as in the previous post on this topic, "Linux interface config with systemd".
The previous post describes how systemd can be configured for network interface IP configuration, instead of the plain old interface files. One of the issues I had while writing that post was that my host had a static IP and a second, dynamic IP assigned by DHCP, even when DHCP was disabled in the systemd network config for that interface. On the Debian host mentioned in the post it ..read more
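For context, a minimal systemd-networkd static configuration of the kind the post builds on, with DHCP explicitly disabled for the interface; the interface name and addresses are placeholders:
# Write a static configuration for eth0 (name and addresses are placeholders), with DHCP turned off
sudo tee /etc/systemd/network/10-eth0.network > /dev/null <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=no
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
EOF
sudo systemctl restart systemd-networkd
# Verify that only the static address is assigned to the interface
networkctl status eth0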