With the holiday season over, I found I had an excess of USB powered fairy style LED lights that needed something to do. The garage that the home lab cabinet lives in is also darker than usual during these winter months. Like the Brady Bunch theme song then, these two must somehow come together… and now, THERE ARE DOOR LIGHTS.
Door Lights build goals
Drama aside, when I set out, the goals of the door lights build were:
Light up the inside of the homelab cabinet
Trigger said lighting when the cabinet doors open and close
As much as possible, use parts already on hand
Simple enough, right? What follows then is the documentation of said build, some action shots, and what might get done differently next time.
Before we get into the parts list, remember that this build happened with parts on hand. Everything except the Pi Cobbler came from an Arduino starter kit. I do not recall the specific one, but most “IoT” starter kits will include these things.
Raspberry Pi 3 Model B Rev 1.2
Raspberry Pi Accessories (Case, Power Supply, SD card)
Pi Cobbler breadboard adapter, breadboard, and jumper wires
USB powered fairy style LED lights
Photoresistor and capacitor for the light sensor circuit
Both the light sensor circuit I built and the instructions for building it came from here.
The finished product:
Attached to the Raspberry Pi:
Yes, yes, I need to dust…
With the hardware build out of the way, the next step was to use a bit of software to tie everything together. Almost 100% of the difficulty I had getting the software to work was related to how the existing USB libraries handle switchable USB hubs. You see, instead of messing with relays or other bits of circuitry, the Raspberry Pi 3 Model B (and others) has the ability to entirely shut off the onboard USB ports. YES! Or so I thought.
Raspberry Pi Setup
Docker? Yes, and not because everything needs to run in a container. Rather, because I can build and run ARM images for Docker on my laptop and consume the same images on the Raspberry Pi, which speeds up building said images. Moreover, because I was not 115% sure what packages, libraries, configurations, and any other changes were required, running in a container allowed me to keep the host OS on the Raspberry Pi clean. Which is a good thing, as that Pi also runs Pi-Hole and nobody wants a DNS outage.
The code for this project is primarily two files:
An explanation of the code in door_sensor.py follows:
The first line imports the LightSensor class from the gpiozero Python library. This provides all the functions we will need to see if the sensor we wired up is getting light. If it is, that means the cabinet doors are indeed open. The second line imports a modified version of hub_ctrl.py from the Gnuk project. More on this file in a bit.
Line 4 tells gpiozero that we have a light sensor on GPIO pin 4. Line 5 prints the sensor's current state to the console.
Lines 7 - 13 are where the fun happens. We start a loop, wait for the sensor to report light, and turn on the LED lights. We then wait for it to get dark again and turn them back off.
Earlier in the post I mentioned how the USB libraries in Python do not seem to provide convenience methods for controlling switchable USB hubs. This led down a rabbit hole of USB protocol debugging with usbmon. That is, until I found this code from the Gnuk project. The only bit that was missing was a nice way to call it from door_sensor.py. To that end, I added a send_command function to said file, wrapping up a bit of their existing code.
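Putting that together, door_sensor.py looks something like the following. Treat it as a sketch laid out to match the line references above: the send_command arguments (a port number and an on/off value) are placeholders for whatever signature you gave your wrapper, and your hub's port numbering will likely differ.

from gpiozero import LightSensor
import hub_ctrl  # modified hub_ctrl.py from the Gnuk project

sensor = LightSensor(4)       # light sensor wired to GPIO 4
print(sensor.light_detected)  # current state: True means the doors are open

while True:
    sensor.wait_for_light()                  # doors opened
    print("Light detected, lights on")
    hub_ctrl.send_command(port=2, value=1)   # hub port power on (placeholder args)
    sensor.wait_for_dark()                   # doors closed
    print("Dark again, lights off")
    hub_ctrl.send_command(port=2, value=0)   # hub port power off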
Here is the Dockerfile for this build, more or less (a sketch; your base image and package versions may differ):
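# Python 3 base image for the Pi's ARM architecture
FROM arm32v7/python:3

WORKDIR /app

# gpiozero reads the light sensor (RPi.GPIO is its pin driver);
# pyusb backs the USB control requests in hub_ctrl.py
RUN pip install gpiozero RPi.GPIO pyusb

COPY door_sensor.py hub_ctrl.py ./

Build it the usual way, on the laptop or on the Pi itself, with docker build -t there-are-door-lights . (the tag matching the run command below).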
This container is then launched like so:
docker run --privileged --rm -it \
    --device /dev/bus/usb \
    --name there-are-door-lights \
    there-are-door-lights python3 door_sensor.py
What follows is going to be a slightly expanded form of the live tweeting I will be doing all week. That is, the post will be lightly edited and written stream of consciousness style.
I was a little late to the keynotes today, so missed the beginning. That said, there was a lot going on, and all of it good.
OpenStack Summit is now Open Infrastructure Summit
Next one will be in Denver, CO the week of April 29, 2019
Textiles is crazy interesting.
OpenStack is the infrastructure on which Oerlikon produces 40,000 bobbins of thread a day. They’re using OpenStack to back containers and AI/ML workloads that make real-time decisions on production lines, taking inputs from about 70,000 sensors per 120 machines.
The theme has been the same as the last bunch of Summits. That is, OpenStack has matured as an infrastructure platform and is the infrastructure supporting more shiny workloads (AI/ML, Kubernetes, etc).
Focus on Day 2 Operations. Infrastructure, at scale, is difficult.
There is an Internet of Cows.
OSP 14 is coming.
Sign all the books
We had a good turnout for the first book signing of the summit. We’ll be doing these every day at 11:00 and 16:00. We have 100 copies of the OpenStack Cookbook and another 100 of James Denton’s Neutron book.
The marketplace, aka the show floor, gets a special call-out for being busier than at the last handful of summits. There is a good showing of OpenStack service providers along with the usual players (Red Hat, Canonical, Cisco, NetApp, etc.). There are also new players: hyper.sh and Ambedded.
While sitting to write up this post, the folks from Roche gave a live demo of using Google Home to launch Virtual Machines on OpenStack. If you’re looking for the recording later, the title was “Ok Google, create me a VM, please”
I Accidentally Twitter
Toward the end of the day, I accidentally my Twitter account. That is, the account got locked due to unusual activity. That said, I managed to capture most of my day 1 sessions well and will get those posts up once the videos are available.
Recently my world has been centered more and more around Windows. This is not a Bad Thing™. In fact, Windows Server Core and PowerShell have both come a LONG way. In the not so recent past, I wrote about how to set up Active Directory with PowerShell. In this post, I show you how to use PowerShell to join said domain.
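The short version is only a couple of lines (example.com here is a placeholder for your own domain; the machine reboots to finish the join):

$cred = Get-Credential -Message "Domain admin credentials"
Add-Computer -DomainName "example.com" -Credential $cred -Restart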
Community is a wonderful thing. Just today I needed to learn all I could about NSX. The goal was to become an NSX “Expert” by Monday. NSX is a bit too complex for that, but, when asked, the community responded with plenty of links and suggestions. What follows is a pile of links and my rough plan of attack.
While I’m sure there are more out there, I plan to start with the #vBrownBag NSX series by Tim Davis (@ALDTD). There are 3 pretty intense videos in the series:
Some quick commands I’ve found handy for operating remote systems via ipmitool:
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root chassis status
System Power : off
Power Overload : false
Power Interlock : inactive
Main Power Fault : false
Power Control Fault : false
Power Restore Policy : unknown
Last Power Event :
Chassis Intrusion : inactive
Front-Panel Lockout : inactive
Drive Fault : false
Cooling/Fan Fault : false
Sleep Button Disable : allowed
Diag Button Disable : allowed
Reset Button Disable : allowed
Power Button Disable : allowed
Sleep Button Disabled: false
Diag Button Disabled : false
Reset Button Disabled: false
Power Button Disabled: false
Useful here are on, off, soft, cycle, reset:
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root power on
Chassis Power Control: Up/On
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root power off
Chassis Power Control: Down/Off
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root power soft
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root power reset
Change to / from pxe boot:
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root chassis bootdev pxe
Set Boot Device to pxe
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root chassis bootdev bios
Set Boot Device to bios
Reset the ipmi controller:
Note: This may need to be sent more than once to actually do the thing.
root@lab-c:~# ipmitool -I lan -H 10.127.20.10 -U root mc reset [ warm | cold ]
Sent cold reset command to MC
Set a bunch of hosts to pxe & reboot:
Here I’ll supply two variants. The first does the hosts in parallel, with a random delay of up to MAXWAIT seconds before each reset (exported below; set it to taste). This is for any number of reasons, the primary one being to be nice to the power infrastructure where you are performing the resets. It is also a useful snippet for chaos-style resets.
export MAXWAIT=30
seq -f "10.127.20.%g" 1 100 | xargs -P 0 -I X bash -c 'sleep $((RANDOM % MAXWAIT)); ipmitool -I lan -H X -U root chassis bootdev pxe && ipmitool -I lan -H X -U root power reset'
This second option is a bit more yolo, and fires off all the resets at once.
seq -f "10.127.20.%g" 1 100 | xargs -P 0 -I X bash -c 'ipmitool -I lan -H X -U root chassis bootdev pxe && ipmitool -I lan -H X -U root power reset'
I have had a need recently to have a number of open source projects authenticate against Microsoft Active Directory. While there are many ways to do this, ADFS, or Active Directory Federation Services, allows us to use SAML, which in turn can be tied into 3rd party Single Sign-On tools (Okta, Facebook, etc.).
In order to use this script, you will need:
A Windows server, either 2012R2 or 2016
Schema level of at least 2012R2
User account with Domain Admin permission
Older versions may work, but are untested
Installing ADFS with PowerShell
To install ADFS with PowerShell, log into the Windows server where ADFS is to be deployed, and:
Download the script (Full script also included below)
Review & run the script
How it works
Now that you’ve installed ADFS, let’s examine what we actually ran.
The script first installs NuGet. This is used to install 3rd party modules.
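Condensed, the rest of the script boils down to something like this (fs.example.com is a placeholder, and the full script wires in proper parameters and error handling):

# Install the NuGet provider so 3rd party modules can be installed
Install-PackageProvider -Name NuGet -Force

# Add the ADFS role and its management tools
Install-WindowsFeature ADFS-Federation -IncludeManagementTools

# Create a self-signed SSL certificate for the federation endpoint
$cert = New-SelfSignedCertificate -DnsName "fs.example.com" -CertStoreLocation Cert:\LocalMachine\My

# Stand up the first federation server in a new farm
Import-Module ADFS
Install-AdfsFarm -CertificateThumbprint $cert.Thumbprint -FederationServiceName "fs.example.com" -ServiceAccountCredential (Get-Credential)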
So, how do you know ADFS is working?
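One quick check is to request the federation metadata endpoint (adjust the hostname to match your deployment):

Invoke-WebRequest https://fs.example.com/FederationMetadata/2007-06/FederationMetadata.xml -UseBasicParsing

A 200 response with an XML body means the federation service is answering.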
There is more!
The script provided creates a self-signed SSL certificate. While that will get you up and running in the lab, it is not how you should deploy this in production. If you have a different certificate, say from an internal CA or otherwise trusted CA, you can use it with this script. First ensure it is part of your Windows certificate store, then substitute your certificate’s thumbprint in the following line and continue to use the script:
$certThumbprint = "Your SSL Cert Thumbprint here"
In this post we used PowerShell to install, configure, and validate Active Directory Federation Services (ADFS). This in turn enables you to use Active Directory as an identity provider with all manner of 3rd party SSO tools.
This is a quick post to remind me how I got around the eye-razors that are the default, brightly colored Slack client. First, the end result:
So, it’s not quite perfect, but it’s workable. The theme itself is CSS, and there are a few ways to get Slack to use said CSS, depending on how you consume Slack. The links in the resources section below discuss how to do it via the browser or desktop client. What follows here is how to apply said theme using Rambox.
Note: I feel like I’m a bit late to the party both theme wise and to Rambox. Rambox is everything Adium / Pidgin wanted to be when it grew up, and lets me pull in Slack, Tweetdeck, and others into one spot.
Night Mode for Slack in Rambox
To “enable” night mode, open Rambox, and then select “configure” for the Slack service you want to change:
In the resulting window, expand the “Advanced” section at the bottom:
In the “Advanced” text field, copy and paste the code from here.
The theme itself, along with how to force Rambox to load it came from here:
The first stop in our metrics adventure was to install and configure Netdata to collect system level statistics. We then configured all of the remote Netdata agents to send all of their data to a central proxy node. This is a great start, however, it is not without some challenges. That is, the data supplied by Netdata, while extensive, needs to be stored elsewhere for more than basic reporting, analysis, and trending.
What then should we do with said data? In addition to allowing you to stream data between Netdata instances as we did in our prior post, you can also stream to various databases, both standard and specialized.
As we are exploring the TICK stack, we will stream our metrics data into InfluxDB, a specialized time-series database.
InfluxDB is the “I” in TICK Stack. InfluxDB is a time series database, designed specifically for metrics and event data. This is a good thing, as we have quite an extensive set of system metrics provided by Netdata that we will want to retain so we can observe trends over time or search for anomalies.
In this post we will configure InfluxDB to receive data from Netdata. Additionally, we will reconfigure our Netdata proxy node to ship metric data to InfluxDB.
Take a moment to review the configuration and metrics collection architecture from our first post.
Reviewed? Good. While Netdata will allow us to ship metrics data from each installed instance, this can be quite noisy, or not otherwise provide the control you would like. Fortunately, the configuration to send metrics data is the same whether you ship from every node or only from the proxy.
One other consideration when shipping data from Netdata to InfluxDB, is how best to take the data in. Netdata supports different data export types: graphite, opentsdb, json, and prometheus. Our environment will be configured to send data using the opentsdb telnet interface.
Note: As none of these are native to InfluxDB, they are exceedingly difficult to use with InfluxDB-Relay.
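For reference, the opentsdb telnet interface is plain, line-oriented text. Each sample Netdata ships arrives as a single put line, along these lines (the metric name and tag here are illustrative):

put netdata.system.cpu.user 1523730000 2.5 host=netdata-proxy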
To reconfigure your Netdata proxy using the ansible-netdata role, use a playbook that will:
Enable said backend (as Netdata only supports one backend at a time)
Configure the opentsdb protocol to send data
Configure the host and port to send to
It also configures a few additional features (the resulting netdata.conf settings are sketched after this list):
Send data once a second
Keep 30 seconds of data in case of connection issues
Send field names instead of UUID
Specify connection timeout in milliseconds
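Rendered out, those options land in the [backend] section of netdata.conf on the proxy, looking something like this (the destination assumes InfluxDB’s opentsdb listener on its default port, 4242):

[backend]
    enabled = yes
    type = opentsdb
    destination = influxdb.example.com:4242
    update every = 1
    buffer on failures = 30
    send names instead of ids = yes
    timeout ms = 20000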
Once this playbook has run, your Netdata instance will start shipping data. Or rather, trying to; we have not yet installed and configured InfluxDB to capture it. Let’s do that now.
Install and configure InfluxDB
As discussed above, InfluxDB is a stable, reliable time-series database, tuned for storing our metrics for long-term trending. For this environment we are going to install a single small node; optimizing and scaling are a topic in and of themselves. To ease installation and maintenance, InfluxDB will be installed using the ansible-influxdb role.
The following Ansible playbook configures the ansible-influxdb role to listen for opentsdb messages from our Netdata instance. Specifically, it will (the resulting influxdb.conf section is sketched after this list):
Configure influxdb rather than use the default config
Use influxdb 1.5.1
Enable the admin interface
Have InfluxDB listen for opentsdb messages and store them in the netdata database
Create the netdata database
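On the InfluxDB side, the piece that matters is the opentsdb listener in influxdb.conf, which ends up looking something like this:

[[opentsdb]]
  enabled = true
  bind-address = ":4242"
  database = "netdata"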
After this playbook run is successful, you will have an instance of InfluxDB collecting stats from your Netdata proxy!
Did it work?
If both playbooks ran successfully, system metrics will be flowing something like this:
nodes ==> netdata-proxy ==> influxdb
You can confirm this by logging into your InfluxDB node and running the following commands:
Check that InfluxDB is running:
# systemctl status influxdb
● influxdb.service - InfluxDB is an open-source, distributed, time series database
Loaded: loaded (/lib/systemd/system/influxdb.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-04-14 18:29:03 UTC; 9min ago
Main PID: 11770 (influxd)
└─11770 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
Check that the netdata database was created:
# influx
Connected to http://localhost:8086 version 1.5.1
InfluxDB shell version: 1.5.1
> SHOW DATABASES;
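If the playbook did its job, the netdata database shows up alongside InfluxDB’s internal one, something like:

name: databases
name
----
_internal
netdata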
Check that the netdata database is receiving data:
# influx
Connected to http://localhost:8086 version 1.5.1
InfluxDB shell version: 1.5.1
> use netdata;
Using database netdata
> show series;
With that, you now have high resolution system metrics being collected and sent to InfluxDB for longer term storage, analysis, and more.