
Linux Filesystem Performance for Virt Workloads

As I spend quite a bit of time (an understatement, I assure you) standing up and tearing down different virtualized lab environments, I wanted to spend a bit less time on it overall. Thus, in addition to tuning some parameters at runtime, I spent some time benchmarking the differences between virtualization engines, filesystems, and IO schedulers.

Before we begin, let’s get a few things out of the way:

  • TL;DR - The winner was libvirt/kvm with ext4 in guest, xfs on host, with noop
  • Yes, my hardware is old.
  • No, this is not exhaustive, nor super scientific

Test Hardware

  • CPU: 2x Quad-Core AMD Opteron(tm) Processor 2374 HE
  • RAM: 64GB
  • Disk: 4x 1T 5400 rpm disks, RAID 1
  • Controller: LSI MegaRAID 8708EM2 Rev: 1.40
  • OS: Ubuntu 16.04 LTS

Old hardware is old. But hey, it is a workhorse.

Test Workload

Building an openstack-ansible All-In-One, for the Pike release of OpenStack.

Rationale: To be honest, this is the workload I spend the most time with, be it standing one up to replicate a customer issue, test an integration, build a solution, and so on. Any time I save provisioning is time I can spend doing actual work.

Further, the build process for an all-in-one is quite extensive and encompasses a wide variety of sub workloads: haproxy, rabbitmq, galera, lxc containers, and so on.
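For reference, the AIO build itself follows the standard openstack-ansible bootstrap flow. Roughly, assuming a fresh Ubuntu 16.04 host and the Pike branch (the clone URL may have moved since this was written):

# Grab openstack-ansible and check out the Pike branch
git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
git checkout stable/pike

# Bootstrap Ansible, generate the AIO configuration, then run the playbooks
scripts/bootstrap-ansible.sh
scripts/bootstrap-aio.sh
scripts/run-playbooks.sh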

Test Matrix

The test matrix worked out to 8 tests in all:

Host FS   Guest FS   IO Sched   Virt Engine
xfs       xfs        noop       KVM
xfs       xfs        noop       vbox
xfs       ext4       noop       KVM
xfs       ext4       noop       vbox
xfs       xfs        deadline   KVM
xfs       xfs        deadline   vbox
xfs       ext4       deadline   KVM
xfs       ext4       deadline   vbox

Test Process

Prepwork:

  • Create four different boxes with Packer (2 filesystems * 2 virt engines).
  • Create a Vagrantfile corresponding to each scenario.
  • Create a bash script to loop through the scenarios (sketched below).

Test:

As the goal was to reduce the time spent waiting on environments, each environment was tested with:

$ time (vagrant up --provider=$PROVIDER_NAME)
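The loop script was nothing fancy. A minimal sketch follows, assuming each scenario lives in its own directory with its own Vagrantfile; the directory names here are made up for illustration:

#!/bin/bash
# Loop over each scenario directory and time the full build.
for scenario in kvm-xfs kvm-ext4 vbox-xfs vbox-ext4; do
    case "${scenario}" in
        kvm-*)  provider=libvirt ;;
        vbox-*) provider=virtualbox ;;
    esac
    pushd "${scenario}"
    { time vagrant up --provider="${provider}" ; } 2>&1 | tee "../${scenario}.log"
    vagrant destroy -f
    popd
done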

Results

Here are the results of each test. Surprisingly, ext4 on xfs was faster in all cases. Who’d have thought.

Host FS   Guest FS   IO Sched   Virt Engine   Time
xfs       xfs        noop       KVM           174m48.193s
xfs       xfs        noop       vbox          213m35.169s
xfs       ext4       noop       KVM           172m5.682s
xfs       ext4       noop       vbox          207m53.895s
xfs       xfs        deadline   KVM           172m44.424s
xfs       xfs        deadline   vbox          235m34.411s
xfs       ext4       deadline   KVM           172m31.418s
xfs       ext4       deadline   vbox          209m43.955s

Test 1:

  • Host FS: xfs
  • Guest FS: xfs
  • Virt Engine: libvirt/kvm
  • Host IO Scheduler: noop
  • Total Time: 174m48.193s

Test 2:

  • Host FS: xfs
  • Guest FS: xfs
  • Virt Engine: vbox
  • Host IO Scheduler: noop
  • Total Time: 213m35.169s

Test 3:

  • Host FS: xfs
  • Guest FS: ext4
  • Virt Engine: libvirt/kvm
  • Host IO Scheduler: noop
  • Total Time: 172m5.682s

Test 4:

  • Host FS: xfs
  • Guest FS: ext4
  • Virt Engine: vbox
  • Host IO Scheduler: noop
  • Total Time: 207m53.895s

Test 5:

  • Host FS: xfs
  • Guest FS: xfs
  • Virt Engine: libvirt/kvm
  • Host IO Scheduler: deadline
  • Total Time: 172m44.424s

Test 6:

  • Host FS: xfs
  • Guest FS: xfs
  • Virt Engine: vbox
  • Host IO Scheduler: deadline
  • Total Time: 235m34.411s

Test 7:

  • Host FS: xfs
  • Guest FS: ext4
  • Virt Engine: libvirt/kvm
  • Host IO Scheduler: deadline
  • Total Time: 172m31.418s

Test 8:

  • Host FS: xfs
  • Guest FS: ext4
  • Virt Engine: vbox
  • Host IO Scheduler: deadline
  • Total Time: 209m43.955s

Conclusions

The combination that won overall was an ext4 guest filesystem, with an xfs host filesystem, on libvirt/kvm with the noop IO scheduler.

While I expected VirtualBox to be slower than KVM, an entire hour’s difference was pretty startling. Another surprise was that ext4 on xfs outperformed xfs on xfs in all cases.
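If you want to apply the winning scheduler setting yourself, the IO scheduler can be checked and switched per block device at runtime via sysfs. For example, with sda standing in for whatever device backs your VM storage:

# Show the available schedulers; the active one is shown in brackets
cat /sys/block/sda/queue/scheduler

# Switch to noop at runtime (this does not persist across reboots)
echo noop | sudo tee /sys/block/sda/queue/scheduler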

OpenFaaS on Kubernetes on Raspberry-Pi

In this post, we’re going to setup a cluster of Raspberry Pi 2 Model B’s with Kubernetes. We are then going to install OpenFaaS on top of it so we can build serverless apps. (Not that there aren’t servers, but you’re not supposed to care, or some such).

Update!

The prior release of this post ended up duplicating a lot of the work that Alex Ellis published here.

In fact, once I’d hit publish, my post was already out of date. So, to get Kubernetes and OpenFaaS going on the Pi, start there. What follows here, then, are the changes I made to the process to fit my environment.

Requirements

For my lab cluster, I wanted the environment to be functional, portable, isolated from the rest of my network, and accessible over wifi. The network layout looks a bit like this:

Network layout

Networks:

  • Green - Lab-Net - 10.127.8.x/24
  • Blue - K8s Net - 172.12.0.0/12

Build

The build has two parts, hardware and software. Rather than provide you with a complete manifest of the hardware, we’ll summarize it so we can spend more time on the software parts.

Hardware

  • 7x Raspberry Pi 2 Model B
  • 1x 8 port 100Mbit switch (It’s what I had around)
  • 1x Anker 10 port usb charger
  • 7x Ethernet cables
  • 1x usb wifi adapter
  • 2x GeauxRobot 4-layer Dog Bone Stack Clear Case Box Enclosure
  • Some zip ties

Software

Here’s where the rubber meets the road. First, familiarize yourself with Alex’s build instructions, here.

To make my process a little smoother, I used Packer to embed Docker, kubeadm, avahi, and cloud-init into the image, then used Hypriot’s flash utility to burn all 7 SD cards with the right hostnames.

Note: Avahi allows me to connect to the nodes from node-01 over the private network by name, without having to set up DNS.
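In practice, once on node-01, reaching any other node looks something like this (node names as set during the flash step later on):

# From node-01, reach any worker by its mDNS name over the private network
ssh pi@node-03.local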

Building the new Raspbian image

Assumption: Here we make the assumption that you have Vagrant installed and operational.

Packer does not ship with an arm image builder. Thankfully, the community has supplied one: https://github.com/solo-io/packer-builder-arm-image

First, clone the repo, and start the Vagrant Test Environment

git clone https://github.com/solo-io/packer-builder-arm-image
cd packer-builder-arm-image
vagrant up && vagrant ssh

Next, you will need to create some supporting files for Packer. Specifically, we need a json file that tells Packer what to build, a customization script to install our additional packages, and a user-data.yml for the flash process.

The files I used can be found here, and are placed into the /vagrant/ folder of the VM.

Note: If you are using Raspberry PI 3’s with built in Wifi, the user-data.yml I supplied will have them all connect to your network, rather than forcing them to communicate through the master node.
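For a sense of what the customization script does, here is a rough sketch rather than the exact file; the package sources and names are assumptions based on the standard Docker and kubeadm install instructions for Raspbian at the time:

#!/bin/bash
# Hypothetical customization script run by Packer against the image.
set -e

apt-get update
apt-get install -y avahi-daemon cloud-init

# Docker via the upstream convenience script (supports armhf / Raspbian)
curl -fsSL https://get.docker.com | sh

# kubeadm, kubelet, and kubectl from the Kubernetes apt repository
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubeadm kubelet kubectl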

Finally, we’re ready to build the image. To do that, from inside the Test VM, run the following commands:

sudo packer build /vagrant/kubernetes_base.json
rsync --progress --archive /home/vagrant/output-arm-image/image /vagrant/raspbian-stretch-modified.img

You can now shutdown the Test VM.

Burning the image to the cards

Now that you have a custom image, it’s time to put it on the cards. To do that, with the flash command installed, the following command will burn each SD card and set its hostname:

for i in {01..07}; do ~/Downloads/flash --hostname node-$i --userdata ./user-data.yml ./raspbian-stretch-modified.img; done

Place the SD cards into your Raspberry Pis and boot! This will take a few minutes.

Setting up networking

From the network diagram earlier, you’ll have seen that we’re using node-01 as both the Kubernetes master, as well as the network gateway for the rest of the nodes. To supply the nodes with connectivity, we need to configure NAT and DHCP. The following commands will do this for you:

  1. Set eth0 to static

Change the address and netmask to fit your environment.

# Set eth0 to static
echo "allow-hotplug eth0
auto eth0
iface eth0 inet static
    address 172.12.0.1
    netmask 255.240.0.0" | sudo tee /etc/network/interfaces.d/eth0

# Restart networking
sudo systemctl restart networking

  2. Configure NAT
# Configure NAT
echo -e '\n#Enable IP Routing\nnet.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf \
  && sudo sysctl -p \
  && sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE \
  && sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT \
  && sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT \
  && sudo apt-get install -y iptables-persistent

# Enable the service that retains our iptables through reboots
sudo systemctl enable netfilter-persistent

  3. Configure DHCP

Change dhcp-range=172.12.0.200,172.15.255.254,12h in the command below to fit your private network.

# Configure DHCP
sudo apt-get install -y dnsmasq \
  && sudo sed -i "s/#dhcp-range=192.168.0.50,192.168.0.150,12h/dhcp-range=172.12.0.200,172.15.255.254,12h/" /etc/dnsmasq.conf \
  && sudo sed -i "s/#interface=/interface=eth0/" /etc/dnsmasq.conf
sudo systemctl daemon-reload && sudo systemctl restart dnsmasq

  4. Check that your nodes are getting addresses
sudo cat /var/lib/misc/dnsmasq.leases
1516260768 b8:27:eb:65:ae:6c 172.13.235.110 node-06 01:b8:27:eb:65:ae:6c
1516259308 b8:27:eb:48:29:01 172.14.32.87 node-02 01:b8:27:eb:48:29:01
1516260040 b8:27:eb:c4:8b:6b 172.15.204.242 node-04 01:b8:27:eb:c4:8b:6b
1516261126 b8:27:eb:e2:1f:27 172.12.85.144 node-07 01:b8:27:eb:e2:1f:27
1516260385 b8:27:eb:69:fa:ff 172.14.174.146 node-05 01:b8:27:eb:69:fa:ff
1516259665 b8:27:eb:66:2d:2d 172.14.218.40 node-03 01:b8:27:eb:66:2d:2d

Installing Kubernetes

Next up is the actual Kubernetes install process. My process differed slightly from Alex’s in a few ways.

Master node

On the master node, I first pulled all the images. This speeds up the kubeadm init process significantly.

docker pull gcr.io/google_containers/kube-scheduler-arm:v1.8.6
docker pull gcr.io/google_containers/kube-controller-manager-arm:v1.8.6
docker pull gcr.io/google_containers/kube-apiserver-arm:v1.8.6
docker pull gcr.io/google_containers/pause-arm:3.0
docker pull gcr.io/google_containers/etcd-arm:3.0.17
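With the images pre-pulled, the init itself is the standard kubeadm flow. Roughly, as a sketch (flags per the kubeadm docs of the time, with the advertise address matching the private network above):

sudo kubeadm init --apiserver-advertise-address=172.12.0.1

# Then, per the init output, set up kubectl for the pi user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config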

After kubeadm init finishes, I installed weave using the method suggested in their documentation:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Adding the remaining nodes

Once all the services showed healthy, rather than ssh to each remaining node, I used the following bash one-liner:

for i in {02..07}; do ssh -t pi@node-01.local ssh -t pi@node-$i.local kubeadm join --token redacted 172.12.0.1:6443 --discovery-token-ca-cert-hash sha256:redacted; done

Install OpenFaaS

Once all the nodes show up in kubectl get nodes, you can perform the OpenFaaS install as documented in Alex’s blog post.


KubeCon 2017 Austin Summary

This post is a bit belated, and will honestly still be more raw than I would like given the time elapsed. That said, I’d still like to share the keynote & session notes from one of the best put on tech conferences I’ve been to in a long while.

Before we get started, I’d like to make some comments about KubeCon itself.

First, it was overall one of the better organized shows I’ve been to, ever. Registration & badge pickup was more or less seamless. One person at the head of the line to direct you to an open station, along with a friendly and detailed explanation of what and where things are… just spot on.

Right next to registration was a small table with what looked like flair of some manner, however, on closer inspection contained what might be the most awesome bit of the conference: communication stickers.

From the event site:

Communication Stickers

Near registration, attendees can pick up communication stickers to add to their badges. Communication stickers indicate an attendee’s requested level of interaction with other attendees and press.

Green = open to communicate

Yellow = only if you know me please

Red = I’m not interested in communicating at this time

Please be respectful of attendee communication preferences.

Communication stickers were just one of the many things the event organizers did around diversity & inclusion: http://events17.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america/attend/diversity

Keynote Summaries

What follows are my raw notes from the keynotes. You’ll note that these are from day 1’s afternoon keynotes. This isn’t because the rest weren’t interesting; these were just the ones I found most interesting.

Ben Sigelman - Lightstep

Creator of OpenTracing, CEO of LightStep

Service Mesh - Making better sense of its architecture. Durability & operationally sane.

Microservices: Independent, coordinated, elegant.

Build security in with service mesh.

Microservice issues are like murder mysteries. Distributed tracing, telling stories, about transactions across service boundaries. OpenTracing allows introspection, lots of things are instrumented for it, different front ends possible.

Service Mesh == L7 proxy, and allows for better hooks into tracing / visibility. Observing everything is great, need more than edges. OpenTracing helps bring all the bits together.

Donut zone. Move fast and bake things.

Brendan Burns - This job is too hard

“Empowering folks to build things they might not have otherwise been able to.”

Distributed systems as a CS101 exercise.

Rule of 3’s. It’s hard because of too many tools for each step. Info is scattered too, all over the place. Why is this hard? - Principles: Everything in one place. Build libraries. Encourage re-use.

Cloud will become a language feature. It’s already here, but will need more work. - Metaparticle, stdlib for cloud. Really neat, but still needs to grow.

metaparticle.io

Session Notes

Next up, are raw notes from the sessions I made it to. If you’re playing along at home, the CNCF has published a playlist of all the KubeCon 2017 sessions here: https://www.youtube.com/watch?v=Z3aBWkNXnhw&list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb

Microservices, Service Mesh, & CI/CD

Concepts for the talk:

  • CI
  • CD
  • Blue/Green
  • Canary Testing
  • Pipelines as code
  • Service Mesh

CI/CD pipeline is like robots, does the same thing over and over really well.

Canary testing - App => proxy => filter some % of traffic to the app.

Canary w/ Microservices ecosystem is complex at best.

Sample pipeline:

Dev: build service => build container => deploy to dev => unit testing

Prod: deploy canary => modify routing => score it => merge PR => update prod

istio - a new service mesh tech. well adopted.

sidecar - envoy, istio control plane, creates route rule that controls side cars

Brigade - ci/cd tool, event driven scripting for k8s

  • functions as containers
  • run in parallel or serial
  • trigger workflows (github, docker, etc)
  • javascript
  • project config as secrets
  • fits into pipelines

Kashti - dashboard for brigade

  • easy viewing & constructing pipelines
  • waterfall view
  • log output

Vault and Secret Management in K8s

Why does vault exist? - Secret management for infrastructure as code. Hide the creds for your interconnects and such into vault, not github.

What is a secret? All the IAM stuff.

Current State:

  • Scatter / sprawl everywhere
  • Decentralized keys
  • etc

Goals:

  • Single source of secrets truth
  • Programmatic application access (automation)
  • Operations access (CLI/UI)
  • Practical sec
  • Modern Datacenter Friendly

Secure secret storage - table stakes. Dynamic secrets == harder.

Security principles, same as they’ve always been:

  • C,I,A
  • Least Privilege (need to know)
  • Separation (controls)
  • Bracketing (time)
  • non-repudiation (can’t deny access)
  • defense in depth

Dynamic secrets:

  • Never provide “root” to clients
  • provide limited access based on role
  • granted on demand
  • leases are enforceable
  • audit trail

Vault supports a lot of the things, like, a lot of them. It’s plugin based.

Apps don’t use user / pass / 2FA. Users use LDAP/AD and 2FA.

Shamir Secret Sharing - Key levels to generate the master key, to generate the encryption key. See here.

“But I have LDAP” - When you have auth for k8s, and auth internal, and github and and and… There is no more single source of truth. Stronger Identity in Vault aims to help with this.

Kernels v Distros

Distro risks - fragmentation: conformance

Option 1 - Ignore it

  • Doesn’t work
  • fragmentation
  • etc

Distros have challenges tho, some options for k8s, but not without issues:

  • Ignore it? Causes fragmentation
  • Own it? Needs lots of support, marketing, etc.

Middle ground:

  • Provide a ‘standard’ distro.
  • Provide a known good starting point for others.

Formalize add-ons process: kubectl apt-get update

Start with cluster/addons/?

k8s is a bit of both kernel & distro, releases every 6 weeks, good cadence. Maybe a break it / fix it tick tock.

Manage the distro

Fork all code into our namespace?

  • Only ship what is under control
  • Carry patches IFF needed

Push everything to one repo:

  • No hunting
  • managing randomly over the internet

Release every 6-12mo, distinguish component from package version. Base deprecation policies on distro releases.

Upgrade cadence with 1 quarter releases is too fast for upgrades. If you decouple ‘kernel’ from distro,

New set of community roles

  • Different skills
  • Different focus

We’ll never have less users or more freedom than we do today, so, let’s make the best decision.

Is the steering committee? - A dozen times this week. Every session on the dev day. Steering committee will delegate it, so let’s have these discussions.

Model of cloud integrations ~= hardware drivers?

Overall

It was refreshing to go to a conference as an attendee, and have this much great content over such a short period.

Beginning Home Coffee Roasting

About a year ago now, I began an adventure into the world of home coffee roasting in the hope of, maybe one day, backing out of tech, and into coffee. While that is a story for another time, here is some of the hard-won knowledge from my first year, in the hopes that it will help someone else.

Home Roasting

I’ve broken this post into a few sections, in the order I wish I’d taken.

  • Training materials
  • Roasters
  • Beans

Training Materials

Before you grab a roaster, spend some time reading about how it all works. Watch a bunch of videos on how other folks are doing it, the different machines, etc. To that end, here are a few jumping off points:

Reading material

Before going much further, grab a copy of this, and read it over: Coffee Roaster’s Companion - Scott Rao

Most anything by Scott Rao is great. The book mentioned here, however, is specific to roasting. It breaks down the styles of different machines, the roasting process, and the chemical changes that happen inside the beans as roasting happens. Along the way, Scott provides a world of guidance that will both help you get started and fine-tune your process.

Video material

Sweet Maria’s (the place I source beans from) is a sort of full-service home roasting shop. To that end, they have their own YouTube channel which has some decent guides: https://www.youtube.com/user/sweetmarias

Additionally, and on the ‘how to roast the things’ end, Mill City Roasters has a masters class they published via youtube: https://www.youtube.com/channel/UCfpTQQtvqLhHG_GWhk3Xp4A/playlists

In Person materials

This will largely depend on your area & willingness to spend, but there are a number of coffee roaster training classes available. The ones I found near us were few and far between, and more expensive than I was ready for at the time.

Roasters

Roasting coffee can be done with most anything where you can apply heat and some movement to the beans while they, well, roast. This means you can get there with anything from a frying pan and wooden spoon, to something commercial grade. At the home roaster / small scale batch roaster end, you have three basic types:

Air Roasters

The most common here is an air popcorn popper. If you go this route, be sure to choose one with high wattage, strong plastic, and an easy to bypass thermal cutoff. Coffee beans roast at a much higher temp than popcorn does, so the higher wattage will help get you there faster, and the plastic / cutoff help ensure the machine can withstand the roasting process more than once*.

* I didn’t learn this lesson the first time, or the second, or third. While it can be done, most machines from big box stores just won’t stand up to the punishment.

Also in the air range, you have things like the SR-700 which, while having a small capacity, allows for manual and computer control. It also has a decently sized community of home roasters that share roast profiles for various beans. Overall, this makes the on-ramp from “I want to roast” to “my first good cup” much quicker.

Electric

After air roasters, the next step up is some flavor of electric roaster. Electric roasters are generally of the ‘drum’ type, with an honorable mention of the Bonaverde, which is a pan-style thing that takes you from green beans to a pot of coffee. In this realm, the Behmor 1600+ is my roaster of choice, and the one I use currently.

It has the largest capacity of any home / hobby coffee roaster, allowing you to roast up to a pound at a time. Additionally, it has a number of baked in roast profiles, and the ability to operate as a fully manually controlled roaster. While I wish it had better / more instrumentation, I’ve learned to live without, working by sight, smell, and such.

Electric roasters, however, have a few eccentricities:

They all require ‘clean’ and consistent power. Home power outlets may vary from one to the next on the actual voltage coming out of the outlet; further, said voltage may vary as other things on the same circuit add or remove load. All of this has a direct impact on the ability of the roaster to heat the beans. To this end, I’ve added a sine wave generating UPS.

In the electric realm, I’m watching the Aillio R1, which, like the SR-700, offers complete computer control; like the Behmor, offers a huge capacity; and like higher end gas machines, lets you roast back to back with less overhead / cooldown time.

Gas

There are a number of gas roasters aimed at the home market; however, they usually require a gas source of some sort (stove at the small end, propane tank or natural gas line at the mid/high end) and some way to vent the smoke and other fumes.

Given the requirements for running a gas machine, I did not research these as deeply for a starter machine. However, the greater capacities and control offered by a gas powered roaster will have me looking at them if/when I expand.

Beans

It used to be sourcing beans was much harder. There was only one shop in my town that would sell them, and you basically got whatever selection they had. Then Sweet Maria’s came around and changed the game for home roasters. There are now a double-handful of dedicated shops like Sweet Maria’s, and well, even Amazon will ship you some via Prime.

My recommendation here, is to grab a green bean sampler, one that’s larger than you think you’ll need, and use it to learn your chosen roaster, and to get a really good feel for a particular bean/varietal/region.

Atlanta VMUG - Containers Slides

Atlanta VMUG Usercon - Containers: Beyond the hype

Herein lie the slides, images, and Dockerfile to review this session as I presented it.

Behind the scenes

Because this is a presentation on containers, I thought it only right that I use containers to present it. In that way, besides being a neat trick, it let me emphasize the change in workflow, as well as the usefulness of containers.

I used James Turnbull’s “Presenting with Docker” as the runtime to serve the slides, then mounted three volumes to provide the slides, images, and a variant of index.html so I could live demo a change of theme.

Viewing the presentation

Assuming you have Docker installed, use the following commands to build and launch the presentation:

# Clone the presentation
git clone https://github.com/bunchc/atlanta-vmug-2017-10-19.git
cd atlanta-vmug-2017-10-19/

# Pull the docker image
docker pull jamtur01/docker-presentation

# Run the preso
docker run -p 8000:8000 --name docker_presentation \
  -v $PWD/images:/opt/presentation/images \
  -v $PWD/slides:/opt/presentation/slides \
  -d jamtur01/docker-presentation
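The run command above only mounts the slides and images. The index.html variant mentioned earlier rides along as a third volume in the same way; something like the following, where the in-container path for index.html is an assumption (check the image’s documentation):

docker run -p 8000:8000 --name docker_presentation \
  -v $PWD/images:/opt/presentation/images \
  -v $PWD/slides:/opt/presentation/slides \
  -v $PWD/index.html:/opt/presentation/index.html \
  -d jamtur01/docker-presentation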

Then browse to http://localhost:8000. To view speaker notes, press S to view in speaker mode.

Keystone Credential Migration Error

Credential migration in progress. Cannot perform writes to credential table.

In openstack-ansible versions 15.1.7 and 15.1.8, there is an issue with the version of shade and the keystone db_sync steps not completing properly. This is fixed in 15.1.9; however, if running one of the aforementioned releases, the following may help.

Symptom:

Keystone reports a 500 error when attempting to operate on the credential table.

You will find something similar to this in the keystone.log file:

./keystone.log:2017-10-04 18:54:43.978 13170 ERROR keystone.common.wsgi [req-19551bfb-c4d5-4582-adc0-6edcbe7585a5 84f7baa50ec34454bdb5d6a2254278b3 98186b853beb47a8bcf94cc7f179bf76 - default default] (pymysql.err.InternalError) (1644, u'Credential migration in progress. Cannot perform writes to credential table.') [SQL: u'INSERT INTO credential (id, user_id, project_id, encrypted_blob, type, key_hash, extra) VALUES (%(id)s, %(user_id)s, %(project_id)s, %(encrypted_blob)s, %(type)s, %(key_hash)s, %(extra)s)'] [parameters: {'user_id': u'84f7baa50ec34454bdb5d6a2254278b3', 'extra': '{}', 'key_hash': '8e3a186ac35259d9c5b952201973dda4dfc1eefe', 'encrypted_blob': 'gAAAAABZ1S5zAOe7DBj5-IoOe3ci1C1QzyLcHFRV3vJvoqpWL3qVjG8EQybUaZJN_-n3vFvoR_uIL2-2Ic2Sug9jImAt-XgM0w==', 'project_id': None, 'type': u'cert', 'id': 'ff09de37ad2a4fce97993da17176e288'}]

To validate:

  1. Attach to the keystone container and enter the venv
lxc-attach --name $(lxc-ls -1| grep key)
cd /openstack/venvs/keystone-15.1.7
source bin/activate
source ~/openrc

  2. Attempt to create a credential entry:
openstack credential create admin my-secret-stuff

An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-d8814c07-59a6-4a06-80dc-1f46082f0866)

Fix at build time

To fix when building, add shade 1.22.2 to global-requirement-pins.txt prior to building the environment:

echo "shade==1.22.2" | tee -a /opt/openstack-ansible/global-requirement-pins.txt

scripts/bootstrap-ansible.sh \
    && scripts/bootstrap-aio.sh \
    && scripts/run-playbooks.sh

To fix while running

  • Pin shade to 1.22.2
  • Rerun os-keystone-install.yml
  • keystone-manage db_sync expand, migrate, and contract

  1. Pin shade:
echo "shade==1.22.2" | tee -a /opt/openstack-ansible/global-requirements-pins.txt

  2. Run os-keystone-install.yml
cd /opt/openstack-ansible/playbooks
openstack-ansible -vvv os-keystone-install.yml

With shade pinned, the following steps should unlock the credential table in the keystone database:

  1. Attach to the keystone container and enter the venv
lxc-attach --name $(lxc-ls -1| grep key)
cd /openstack/venvs/keystone-15.1.7
source bin/activate
source ~/openrc

  2. Expand the keystone database
keystone-manage db_sync --expand

  3. Migrate the keystone database
keystone-manage db_sync --migrate

  4. Then, contract the keystone database
keystone-manage db_sync --contract

Note: These are the same steps the os_keystone role uses.

  5. After this is done, test credential creation:
openstack credential create admin my-secret-stuff

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-secret-stuff                  |
| id         | 4d1f2dd232854dd3b52dc0ea2dd2f451 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 187654e532cb43599159c5ea0be84a68 |
+------------+----------------------------------+

Still didn’t work?

  1. Dump the keystone database to a file, then make a backup of said file
lxc-attach --name $(lxc-ls -1 | grep galera)
mysqldump keystone > keystone.orig
cp keystone.orig keystone.edited

  2. Edit the file to remove / add the following
--- edit out this section ---
BEGIN
  IF NEW.encrypted_blob IS NULL THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
  END IF;
  IF NEW.encrypted_blob IS NOT NULL AND OLD.blob IS NULL THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
  END IF;
END */;;
--- end edits ---

--- add this to the first line ---
USE keystone;
--- end addition ---

  3. Then apply the changes
mysql < keystone.edited

  4. After this is done, test credential creation:
openstack credential create admin my-secret-stuff

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-secret-stuff                  |
| id         | 4d1f2dd232854dd3b52dc0ea2dd2f451 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 187654e532cb43599159c5ea0be84a68 |
+------------+----------------------------------+

Resources

The following resources were not harmed during the filming of this blog post:

Reviewing Instance Console Logs in OpenStack

Console logs are critical for troubleshooting the startup process of an instance. These logs are produced at boot time, before the console becomes available. However, when working with cloud hosted instances, accessing these logs can be difficult. OpenStack Compute provides a mechanism for accessing the console logs.

Getting Ready

To access the console logs of an instance, the following information is required:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

For this example, we will view the last 5 lines of the console log of the cookbook.test instance.

How to do it…

To show the console logs of an instance, use the following command:

# openstack console log show --lines 5 cookbook.test

[  OK  ] Started udev Coldplug all Devices.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Set console font and keymap.
[  OK  ] Created slice system-getty.slice.
[  OK  ] Found device /dev/ttyS0.

How it works…

The openstack console log show command collects the console logs, as if you were connected to the server via a serial port or sitting behind the keyboard and monitor at boot time. The command will, by default, return all of the logs generated to that point. To limit the amount of output, the --lines parameter can be used to return a specific number of lines from the end of the log.
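If you need the entire log for later analysis, drop the --lines parameter and redirect the output to a file, for example:

# openstack console log show cookbook.test > cookbook.test-console.log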

Rescuing an OpenStack instance

Rescuing an instance

OpenStack Compute provides a handy troubleshooting tool with rescue mode. Should a user lose an SSH key, or otherwise not be able to boot and access an instance, say, due to bad iptables settings or a failed network configuration, rescue mode will start a minimal instance and attach the disk from the failed instance to aid in recovery.

Getting Ready

To put an instance into rescue mode, you will need the following information:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

The instance we will use in this example is cookbook.test

How to do it…

To put an instance into rescue mode, use the following command:

# openstack server rescue cookbook.test
+-----------+--------------+
| Field     | Value        |
+-----------+--------------+
| adminPass | zmWJRw6C5XHk |
+-----------+--------------+

To verify an instance is in rescue mode, use the following command:

# openstack server show cookbook.test -c name -c status
+--------+---------------+
| Field  | Value         |
+--------+---------------+
| name   | cookbook.test |
| status | RESCUE        |
+--------+---------------+

Note: When in rescue mode, the disk of the instance in rescue mode is attached as a secondary. In order to access the data on the disk, you will need to mount it.

To exit rescue mode, use the following command:

# openstack server unrescue cookbook.test

Note: This command will produce no output if successful.

How it works…

The command openstack server rescue provides a rescue environment with the disk of your instance attached. First, it powers off the named instance. Then it boots the rescue environment and attaches the disks of the instance. Finally, it provides you with the login credentials for the rescue instance.

Accessing the rescue instance is done via SSH. Once logged into the rescue instance, you can mount the disk using mount <path to disk> /mnt.
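A minimal sketch of that recovery session follows; the device name /dev/vdb1 is an assumption and will vary by environment:

# ssh root@<rescue instance address>
# mount /dev/vdb1 /mnt
# ls /mnt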

Once you have completed your troubleshooting or recovery, the unrescue command reverses this process: first stopping the rescue environment and detaching the disk, then booting the instance as it was.

OpenStack Image Shelving

Somewhat unique to OpenStack Nova is the ability to “shelve” an instance. Instance shelving allows you to retain the state of an instance without having it consume resources. A shelved instance will be retained as a bootable instance for a configurable amount of time, then deleted. This is useful as part of an instance lifecycle process, or to conserve resources.
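The retention window is configured on the compute side in nova.conf; if memory serves, the relevant option is shelved_offload_time, so checking the current value looks something like this:

# grep shelved_offload_time /etc/nova/nova.conf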

Getting Ready

To shelve an instance, the following information is required:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

How to do it…

To shelve an instance, the following commands are used:

Check the status of the instance

# openstack server show cookbook.test -c name -c status
+--------+---------------+
| Field  | Value         |
+--------+---------------+
| name   | cookbook.test |
| status | ACTIVE        |
+--------+---------------+

Shelve the instance

# openstack server shelve cookbook.test

Note: This command produces no output when successful. Shelving an instance may take a few moments depending on your environment.

Check the status of the instance

# openstack server show cookbook.test -c name -c addresses -c status
+-----------+-------------------+
| Field     | Value             |
+-----------+-------------------+
| addresses | public=10.1.13.9  |
| name      | cookbook.test     |
| status    | SHELVED_OFFLOADED |
+-----------+-------------------+

Note: A shelved instance will retain the addresses it has been assigned.

Un-shelving the instance

# openstack server unshelve cookbook.test

Note: This command produces no output when successful. As with shelving the instance may take a few moments to become active depending on your environment.

Check the status

# openstack server show cookbook.test -c name -c addresses -c status
+-----------+------------------+
| Field     | Value            |
+-----------+------------------+
| addresses | public=10.1.13.9 |
| name      | cookbook.test    |
| status    | ACTIVE           |
+-----------+------------------+

How it works…

When told to shelve an instance, OpenStack compute will first stop the instance. It then creates an instance snapshot to retain the state of the instance. The runtime details, such as number of vCPUs, memory, and IP addresses, are retained so that the instance can be unshelved and rescheduled at a later time.

This differs from shutting down an instance, in that the resources of a shutdown instance are still reserved on the host on which it resided, so that it can be powered back on quickly. A shelved instance, however, will still show in openstack server list, while the resources it was assigned are freed for other use. Additionally, as the shelved instance will need to be restored from an image, OpenStack Compute will perform placement as if the instance were new, and starting it will take some time.

Run OpenStack Tempest in openstack-ansible

It took me longer than I would like to admit to get tempest running in openstack-ansible, so here is how I did it, in the hopes I’ll remember (or that it will save someone else time).

Getting Started

This post assumes a relatively recent version of openstack-ansible. For this post, that is an Ocata all-in-one build (15.1.7 specifically). Log into the deployment node.

Installing Tempest

Before we can run tempest, we have to install it first. To install tempest, on the deployment node, run the following:

# cd /opt/openstack-ansible/playbooks
# openstack-ansible -vvvv os-tempest-install.yml

<<lots of output>>

PLAY RECAP *********************************************************************
aio1_utility_container-b81d907c : ok=68   changed=1    unreachable=0    failed=0

The prior command uses the openstack-ansible wrapper to pull in the appropriate variables for your openstack environment, and installs tempest.

Running tempest

To run tempest, log into the controller node (this being an AIO, it is the deployment node, controller, compute, etc). Then attach to the utility container:

# lxc-ls | grep utility

aio1_utility_container-b81d907c

# lxc-attach --name aio1_utility_container-b81d907c
root@aio1-utility-container-b81d907c:~# ls
openrc

Once attached to the utility container, activate the tempest venv:

# cd /openstack/venvs/tempest-15.1.7
# source bin/activate

Once in the venv, we need to tell Tempest what workspace to use. Fortunately, the os-tempest-install playbook prepares a tempest ‘workspace’ for you. Let’s move into that directory and launch our tempest tests:

(tempest-15.1.7) # cd /openstack/venvs/tempest-15.1.7/workspace
(tempest-15.1.7) # tempest run --smoke -w 4

<<so.much.output>>

======
Totals
======
Ran: 93 tests in 773.0000 sec.
 - Passed: 80
 - Skipped: 11
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 2
Sum of execute time for each test: 946.8377 sec.

All done!