The fix, of course, is to update the firmware on the Pi via rpi-update.
At the time of this writing, the fix was released four days ago, according to this GitHub issue.
While you can run rpi-update by itself and answer its warning prompt, that can get tedious for more than one or two nodes. Since I was upgrading nine nodes, something more robust was needed. Something like the following command:
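The original command was not preserved here, but a loop like the following sketch does the job; the node names are placeholders for your own hosts, and rpi-update's SKIP_WARNING environment variable suppresses the confirmation prompt:

```shell
# Hypothetical node names; replace with your own Pi hostnames.
for node in node1 node2 node3; do
  ssh "pi@${node}" 'sudo SKIP_WARNING=1 rpi-update'
done
```

Each node reboots on its own schedule afterward, so you may want to add a `sudo reboot` to the remote command once you have confirmed the firmware update succeeded.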
Cleaning out a few browser tabs, which of course means putting things where I will remember them. This time, that is a bunch of random PowerShell and PowerCLI I’ve been using.
The following sections will state if the snippet is either [PowerShell] or [PowerCLI] and include a link to the site where I found said snippet. The PowerCLI snippets assume a connection to either vCenter or ESXi.
[PowerShell] List / Add Windows Defender Firewall Rules
```powershell
# List the rules. This produces A LOT of output
Get-NetFirewallRule

# Disables outbound telnet
New-NetFirewallRule -DisplayName "Block Outbound Telnet" -Direction Outbound `
    -Program "%SystemRoot%\System32\tlntsvr.exe" -Protocol TCP -LocalPort 23 `
    -Action Block -PolicyStore "domain.contoso.com\gpo_name"
```
My work has started to shift around some recently and I find myself returning to VMware products after a hiatus. In addition to running into a blog post I wrote a decade ago, it has been interesting to see how else VMware has kept up.
In particular, I have spent a LOT of time automating various things with Ansible, so discovering the extensive list of VMware modules was a welcome surprise. This post shows how to use the Ansible VMware modules to launch your first VM.
There are quite a few prerequisites to getting this going. Buckle up!
First, the requirements for your ‘Control Node’, where you will run Ansible from:
Ansible
pyVmomi (the Python library the VMware modules depend on)
vSphere Automation Python SDK
Access to vSphere
Installing the prerequisites on Ubuntu 18.04
Note: While you can likely install these tools directly on Windows, I’ve found it easier to use Ubuntu by way of WSL2.
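On Ubuntu 18.04, something like the following installs the tooling; the exact package versions will vary, and installing into a virtualenv instead of user-wide is a reasonable alternative:

```shell
sudo apt update
sudo apt install -y python3-pip

# Ansible plus the pyVmomi library the VMware modules use
pip3 install ansible pyvmomi

# The vSphere Automation Python SDK is installed from its GitHub repository
pip3 install git+https://github.com/vmware/vsphere-automation-sdk-python.git
```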
With everything we need installed, we can now make the VM with Ansible. We do that by creating an Ansible playbook, and then running it. An Ansible playbook is a YAML file that describes what you would like done. To create yours, open your favorite text editor and paste the following YAML into it.
```yaml
# new-vm-playbook.yml
- name: Create New VM Playbook
  hosts: localhost
  gather_facts: no
  tasks:
    - name: Clone the template
      vmware_guest:
        hostname: "vcenter.codybunch.lab"
        username: "email@example.com"
        password: "ultra-Secret_P@%%w0rd"
        resource_pool: "Resource Pool to create VM in"
        datacenter: "Name of the Datacenter to create the VM in"
        folder: "/vm"
        cluster: "MyCluster"
        networks:
          - name: "Network 1"
        hardware:
          num_cpus: 2
          memory_mb: 4096
        validate_certs: False
        name: "new-vm-from-ansible"
        template: "centos-7-vsphere"
        datastore: "iscsi-datastore"
        state: poweredon
        wait_for_ip_address: yes
      register: new_vm
```
The great thing about describing the changes in this way, is that the file itself is rather readable. That said, we should go over what running this playbook will actually do.
The playbook tells Ansible to execute on the localhost (hosts: localhost). You can change this to be any host that can access vCenter and has pyVmomi installed. It then tells Ansible we would like to build a vm (vmware_guest) and supplies the details to make that happen. Make note of the folder: "/vm" setting which will place the VM at the root of the datacenter. As with the other variables, change this to fit your environment.
Once you have adjusted the playbook to suit your environment, it can be run as follows:
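Assuming you saved the file as new-vm-playbook.yml, the run itself is a single command:

```shell
ansible-playbook new-vm-playbook.yml
```

With `wait_for_ip_address: yes` set, the play blocks until the new VM reports an IP, so a successful run can take a few minutes.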
I have been on a desktop OS and tools journey for the last six or nine months. Long story short, my 2012 Retina MacBook Pro gave up the ghost in a most unpleasant way. I grabbed a System76 laptop, and gave Linux on the desktop a shot for a little while. Eventually I repaved that and moved to Windows 10 full time. As part of that, I enabled WSL 2, which breaks VMware desktop products. No biggie, one can convert VMs, right?
It is assumed that you have the following:
WSL 2 
Ubuntu (in WSL that is)
Note: You can get qemu-img as a native tool for Windows. I used the Linux version as I had a bit more experience with it.
There are three basic parts of this process: Converting the VMDK, Importing into Hyper-V, and Installing integration services.
Converting from VMDK to VHDX
This stage will be done entirely within WSL/Ubuntu, so open a terminal and then use the following commands:
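On Ubuntu, qemu-img ships in the qemu-utils package. A sketch of the two commands, with PathToVMDK and PathToVHDX as placeholders for your actual file paths:

```shell
sudo apt install -y qemu-utils
qemu-img convert -p -O vhdx PathToVMDK PathToVHDX
```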
The command tells qemu-img to convert the VMDK specified in PathToVMDK with an output type (-O vhdx) of vhdx and store the resulting image at PathToVHDX. The -p tells qemu-img to display the progress as it converts. You will need to change the PathTo variables to reflect your environment.
Note: There are some tools that will do this in PowerShell. However, they can be rather flaky depending on the source VMDK; qemu-img was quite a bit more robust in this regard.
Importing into Hyper-V
Once the conversion is complete, you can import the VM into Hyper-V as you would any other VHDX. Be careful when selecting the generation: unless you installed the VM with UEFI, you will want 'Generation 1'.
Once Imported, edit the settings of the VM.
When you are happy with the settings, power the VM on, log in, uninstall VMware Tools, reboot, and then prepare to install the Hyper-V integration services.
Installing integration services
The Hyper-V integration services will generally be installed seamlessly via Windows Update. If this is not the case, or you are using an older version of Windows, download the integration tools, and then use the following command from an elevated command prompt to install them:
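The original command was not preserved here; one common approach for older Windows guests (an assumption on my part, with an illustrative package path) is to install the downloaded cabinet file with DISM:

```bat
REM Path to the downloaded integration services cab is illustrative
DISM /Online /Add-Package /PackagePath:C:\path\to\integration-services.cab
```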
With the holiday season over, I found I had an excess of USB powered fairy style LED lights that needed something to do. The garage that the home lab cabinet lives in is also darker than usual during these winter months. Like the Brady Bunch theme song then, these two must somehow come together… and now, THERE ARE DOOR LIGHTS.
Door Lights build goals
Drama aside, when I set out, the goal of the door lights build was:
Light up the inside of the homelab cabinet
Trigger said lighting when the cabinet doors open and close
As much as possible, use parts already on hand
Simple enough, right? What follows then is the documentation of said build, some action shots, and what might get done differently next time.
Before we get into the parts list, remember that this build happened with parts on hand. Everything except the Pi Cobbler came from an Arduino starter kit. I do not recall the specific one, but most “IoT” starter kits will include these things.
Raspberry Pi 3 Model B Rev 1.2
Raspberry Pi Accessories (Case, Power Supply, SD card)
The light sensor circuit I built, and instructions on how to do so came from here.
The finished product:
Attached to the Raspberry Pi:
Yes, yes, I need to dust…
With the hardware build out of the way, the next step was to use a bit of software to tie everything together. Almost 100% of the difficulty I had getting the software to work related to how the existing USB libraries handle switchable USB hubs. You see, instead of messing with relays or other bits of circuitry, the Raspberry Pi 3 Model B (and others) can entirely shut off power to the onboard USB ports. YES! Or so I thought.
Raspberry Pi Setup
Docker? Yes, and not because everything needs to run in a container. Rather, because I can build and run ARM images for Docker on my laptop and consume the same images on the Raspberry Pi, which speeds up building said images. Moreover, because I was not 115% sure what packages, libraries, configurations, and any other changes were required, running in a container allowed me to keep the host OS on the Raspberry Pi clean. Which is a good thing, as that Pi also runs Pi-Hole and nobody wants a DNS outage.
The code for this project is primarily two files: door_sensor.py and a modified hub_ctrl.py.
An explanation for the code for door_sensor.py follows:
The first line imports the LightSensor class from the gpiozero Python library. This provides all the functions we will need to see if the sensor we wired up is getting light. If it is, that means that the cabinet doors are indeed open. The second line imports a modified version of hub_ctrl.py from the GNUK project. More on this file in a bit.
Line 4 tells Python that we have a light sensor on pin 4. Line 5 prints the current state to the console.
Lines 7 - 13 are where the fun happens. We start a loop and then wait for it to become light. When it does, we turn on the LED lights and then wait for it to get dark again.
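Based on that description, door_sensor.py looks roughly like the following sketch; the arguments to send_command (hub and port numbers) are assumptions for illustration, not the project's actual values:

```python
from gpiozero import LightSensor   # light-sensor helper from the gpiozero library
import hub_ctrl                    # modified hub_ctrl.py from the GNUK project

sensor = LightSensor(4)            # light sensor wired to GPIO pin 4
print(sensor.light_detected)       # current state: light means the doors are open

while True:
    sensor.wait_for_light()        # block until the doors open
    hub_ctrl.send_command(1, 2, 1) # hypothetical args: hub 1, port 2, power on
    sensor.wait_for_dark()         # block until the doors close again
    hub_ctrl.send_command(1, 2, 0) # hypothetical args: power off
```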
Earlier in the post I mentioned how the USB libraries in Python do not seem to provide convenience methods for controlling switchable USB hubs. This led down a rabbit hole of USB protocol debugging with usbmon. That is, until I found this code from the GNUK project. The only bit that was missing was a nice way to call it from door_sensor.py. To that end, I added a send_command function to that file, wrapping a bit of their existing code.
Here is the Dockerfile for this build:
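The Dockerfile itself was not preserved here, but a minimal sketch would be along these lines; the base image and package list are assumptions (gpiozero for the sensor, pyusb for hub_ctrl's USB control messages):

```dockerfile
# ARM base image so it runs on the Raspberry Pi 3
FROM arm32v7/python:3

# gpiozero drives the light sensor; pyusb backs the hub control code
RUN pip install gpiozero pyusb

WORKDIR /app
COPY door_sensor.py hub_ctrl.py /app/

CMD ["python3", "door_sensor.py"]
```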
This container is then launched like so:
```shell
docker run --privileged --rm -it \
  --device /dev/bus/usb \
  --name there-are-door-lights \
  there-are-door-lights python3 door_sensor.py
```
What follows is going to be a slightly expanded form of the live tweeting I will be doing all week. That is, the post will be lightly edited and written stream of consciousness style.
I was a little late to the keynotes today, so missed the beginning. That said, there was a lot going on, and all of it good.
OpenStack Summit is now Open Infrastructure Summit
The next one will be in Denver, CO, the week of April 29, 2019.
The textile industry is crazy interesting.
OpenStack is the infrastructure on which Oerlikon produces 40,000 bobbins of thread a day. They're using OpenStack infrastructure to back containers and AI/ML workloads (for making real-time decisions on production lines). This takes input from about 70,000 sensors per 120 machines.
The theme has been the same as the last bunch of Summits. That is, OpenStack has matured as an infrastructure platform and is the infrastructure supporting more shiny workloads (AI/ML, Kubernetes, etc).
Focus on Day 2 Operations. Infrastructure, at scale, is difficult.
There is an Internet of Cows.
OSP 14 is coming.
Sign all the books
We had a good turnout for the first book signing of the summit. We'll be doing these every day at 11:00 and 16:00. We have 100 copies total of the OpenStack Cookbook and another 100 of James Denton's Neutron book.
The marketplace, aka the show floor, gets a special call-out for being busier than at the last handful of summits. There is a good showing of OpenStack service providers along with the usual players (Red Hat, Canonical, Cisco, NetApp, etc.), as well as new players such as hyper.sh and Ambedded.
While sitting to write up this post, the folks from Roche gave a live demo of using Google Home to launch Virtual Machines on OpenStack. If you’re looking for the recording later, the title was “Ok Google, create me a VM, please”
I Accidentally Twitter
Toward the end of the day, I accidentally my Twitter account. That is, the account got locked due to unusual activity. That said, I managed to capture most of my day 1 sessions well and will get those posts up once the videos are available.
Recently my world has been centered more and more around Windows. Lately this is not a Bad Thing™. In fact, Windows Server Core and PowerShell have both come a LONG way. In the not so recent past, I wrote about how to set up Active Directory with PowerShell. In this post, I show you how to use PowerShell to join said domain.
Community is a wonderful thing. Just today I needed to learn all I could about NSX. The goal was to become an NSX “Expert” by Monday. NSX is a bit too complex for that, but, when asked, the community responded with plenty of links and suggestions. What follows here are a pile of links and my rough plan of attack.
While I’m sure there are more out there, I plan to start with the #vBrownBag NSX series by Tim Davis (@ALDTD). There are 3 pretty intense videos in the series: