Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

OpenFaaS on Kubernetes on Raspberry-Pi

In this post, we’re going to set up a cluster of Raspberry Pi 2 Model B’s with Kubernetes. We are then going to install OpenFaaS on top of it so we can build serverless apps. (Not that there aren’t servers, but you’re not supposed to care, or some such).

As I found when building this cluster out, the Kubernetes space is moving fast. Extremely so. That means, this post, like the ones I followed to build the cluster, will be outdated by the time you get here. With luck however, you’ll find a clue or two here that will help you get going.

Install the head node

First things first, you need to install and prep the first node. My head node runs Raspbian Jessie (https://www.raspberrypi.org/downloads/raspbian/), written to an SD card with Etcher. Once booted, you have some work to do.

Setup the node

  1. Update the OS and install some required packages:
sudo apt-get update \
    && sudo apt-get dist-upgrade -y --force-yes
sudo apt-get install -y \
    vim \
    git \
    wget \
    curl \
    unzip \
    build-essential \
    raspi-config \
    mosh \
    ntpdate \
    glances \
    avahi-daemon \
    netatalk
  2. Next enable ssh:
sudo systemctl enable ssh
sudo systemctl start ssh
  3. Disable swap:
sudo dphys-swapfile swapoff && \
  sudo dphys-swapfile uninstall && \
  sudo update-rc.d dphys-swapfile remove

3a. Enable cgroups:

Add the following to the end of /boot/cmdline.txt:

cgroup_enable=cpuset cgroup_enable=memory
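
A quick way to append that from the shell, keeping in mind that cmdline.txt must stay a single line, is a sed along these lines (my own addition, not from the original steps):

sudo sed -i '$ s/$/ cgroup_enable=cpuset cgroup_enable=memory/' /boot/cmdline.txt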
  4. Copy over ssh keys
ssh-copy-id pi@node-01.local
  5. (Optional) Setup the screen

My head node has a screen that I’ll be using to show cluster status. It was installed and set up thusly:

curl -SLs https://apt.adafruit.com/add-pin | sudo bash
sudo apt-get install -y --force-yes raspberrypi-bootloader adafruit-pitft-helper raspberrypi-kernel

sudo adafruit-pitft-helper -t 35r
  6. Make eth0 static
echo "allow-hotplug eth0
auto eth0
iface eth0 inet static
    address 172.16.1.1
    netmask 255.255.255.0" | sudo tee /etc/network/interfaces.d/eth0
  7. Install Docker
curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

(Optional) My other cluster nodes run Hypriot, which meant I needed to do some extra legwork to get the right Docker version onboard:

sudo apt-get autoremove -y docker-engine \
    && sudo apt-get purge docker-engine -y \
    && sudo rm -rf /etc/docker/ \
    && sudo rm -f /etc/systemd/system/multi-user.target.wants/docker.service \
    && sudo rm -rf /var/lib/docker \
    && sudo systemctl daemon-reload \
    && sudo apt-get install -y docker-engine=17.03.1~ce-0~raspbian-jessie
  8. (Optional) Setup NAT (from ethernet to wifi)

I wanted my cluster to be isolated from the rest of my network. To this end, all the cluster nodes communicate over ethernet, while the head node functions as a NAT device, allowing them to connect over its wifi interface to the rest of my network.

echo -e '\n#Enable IP Routing\nnet.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf \
    && sudo sysctl -p \
    && sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE \
    && sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT \
    && sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT \
    && sudo apt-get install -y iptables-persistent

sudo systemctl enable netfilter-persistent
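
Before moving on, it does not hurt to confirm forwarding and the NAT rule are in place (these checks are my own, not from the original steps):

sudo sysctl net.ipv4.ip_forward
sudo iptables -t nat -L POSTROUTING -n -v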

8a. (Optional) Setup DHCP

As my cluster nodes are isolated, the head node runs DHCP to provide addressing.

sudo apt-get install -y dnsmasq \
    && sudo sed -i "s/#dhcp-range=192.168.0.50,192.168.0.150,12h/dhcp-range=172.16.1.2,172.16.1.254,12h/" /etc/dnsmasq.conf \
    && sudo sed -i "s/#interface=/interface=eth0/" /etc/dnsmasq.conf
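
Once the worker nodes boot and request addresses, you can check that dnsmasq is handing out leases. This is an extra sanity check of my own; the lease file path shown is the Debian/Raspbian default:

sudo systemctl status dnsmasq
cat /var/lib/misc/dnsmasq.leases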
  9. Reboot

At this point we’ve made extensive modifications to the head node. To ensure swap stays disabled, cgroups are enabled, and the screen does as it should, a reboot is needed here.

sudo reboot
  10. Install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -q && sudo apt-get install -qy kubeadm=1.8.5-00
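
Since the cluster is built against a pinned kubeadm version, you may also want to keep apt from upgrading the Kubernetes packages out from under you. This hold is optional and my own suggestion:

sudo apt-mark hold kubeadm kubectl kubelet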
  11. Initialize the head node

As we want our cluster to communicate over the private network, we need to specify the address for the API server to listen on. This is done with --apiserver-advertise-address=, and allows the other nodes in our cluster to join properly.

We also specify --pod-network-cidr as a pool of addresses that our pods will be assigned.

Note: This will take a while.

sudo kubeadm init \
    --apiserver-advertise-address=172.16.1.1 \
    --pod-network-cidr 172.16.1.0/24

Then, run the bit it gives you at the end:

mkdir -p $HOME/.kube \
    && sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config \
    && sudo chown $(id -u):$(id -g) $HOME/.kube/config

Note: Save the join token. You can generate another one if you forgot to save this, but you will need the token to join additional nodes.
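
If you do lose the token, you can list or create tokens on the head node with kubeadm; newer releases (1.9+) can even print the full join command for you:

# List existing bootstrap tokens
sudo kubeadm token list

# Create a new one (on kubeadm 1.9 and later, add --print-join-command
# to get the complete join command back)
sudo kubeadm token create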

  12. Check that it worked
kubectl get pods --namespace=kube-system
  13. Generate a new machine ID

As a side effect of the imaging process, you may end up with nodes that do not have a unique machine-id. Generate a new one:

sudo rm /var/lib/dbus/machine-id
sudo rm /etc/machine-id
sudo systemd-machine-id-setup
sudo reboot
  14. Networking

On the master node:

kubectl apply -f https://git.io/weave-kube-1.6
  15. Check our work:
kubectl exec -n kube-system weave-net-2zl6f -c weave -- /home/weave/weave --local status connections
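
Note that weave-net-2zl6f is the pod name from my cluster; yours will differ. Assuming the standard labels from the weave-kube manifest, you can find your pod name with:

kubectl -n kube-system get pods -l name=weave-net -o wide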
  16. (Optional) Create a user & role

If you will be deploying the dashboard, you will need an admin user, role, and login token.

Creating the user & role:

echo "apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system" | tee role.yml

echo "apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system"| tee user.yml

kubectl create -f user.yml
kubectl create -f role.yml

Getting a login token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Additional nodes

  1. Updates
sudo apt-get update && sudo apt-get dist-upgrade -y --force-yes
  2. Install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm
  3. Join the cluster

This is where the token from before comes in handy:

sudo kubeadm join --token <token> 172.16.1.1:6443 --discovery-token-unsafe-skip-ca-verification
  4. Check for the new nodes

On the head node, you can use the following command to check if your new nodes have joined properly:

kubectl get nodes

Install OpenFaaS

At this point you can follow from step 2.0b on the official OpenFaaS guide: https://github.com/openfaas/faas/blob/master/guide/deployment_k8s.md

git clone https://github.com/openfaas/faas-netes
cd faas-netes
kubectl apply -f faas.armhf.yml,rbac.yml,monitoring.armhf.yml
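
Once the manifests apply, it is worth checking that the OpenFaaS pods and gateway came up. In this era the armhf manifests deployed into the default namespace with the gateway on NodePort 31112; adjust the checks below if your deployment differs:

kubectl get pods
kubectl get svc gateway

# Gateway UI, assuming the default NodePort of 31112
curl -sI http://127.0.0.1:31112/ui/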


KubeCon 2017 Austin Summary

This post is a bit belated, and will honestly still be more raw than I would like given the time elapsed. That said, I’d still like to share the keynote & session notes from one of the best-run tech conferences I’ve been to in a long while.

Before we get started, I’d like to make some comments about KubeCon itself.

First, it was overall one of the better organized shows I’ve been to, ever. Registration & badge pickup was more or less seamless. One person at the head of the line to direct you to an open station, along with a friendly and detailed explanation of what and where things are… just spot on.

Right next to registration was a small table with what looked like flair of some manner; however, on closer inspection it contained what might be the most awesome bit of the conference: communication stickers.

From the event site:

Communication Stickers

Near registration, attendees can pick up communication stickers to add to their badges. Communication stickers indicate an attendee’s requested level of interaction with other attendees and press.

Green = open to communicate

Yellow = only if you know me please

Red = I’m not interested in communicating at this time

Please be respectful of attendee communication preferences.

Communication stickers were just one of the many things the event organizers did around diversity & inclusion: http://events17.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america/attend/diversity

Keynote Summaries

What follows are my raw notes from the keynotes. You’ll note that these are from day 1’s afternoon keynotes. This isn’t because the rest weren’t interesting; these were just the ones I found most interesting.

Ben Sigelman - Lightstep

Creator of OpenTracing, CEO of LightStep

Service Mesh - Making better sense of its architecture. Durability & operationally sane.

Microservices: Independent, coordinated, elegant.

Build security in with service mesh.

Microservice issues are like murder mysteries. Distributed tracing, telling stories about transactions across service boundaries. OpenTracing allows introspection, lots of things are instrumented for it, different front ends possible.

Service Mesh == L7 proxy, and allows for better hooks into tracing / visibility. Observing everything is great, need more than edges. OpenTracing helps bring all the bits together.

Donut zone. Move fast and bake things.

Brendan Burns - This job is too hard

“Empowering folks to build things they might not have otherwise been able to.”

Distributed systems as a CS101 exercise.

Rule of 3’s. It’s hard because of too many tools for each step. Info is scattered too, all over the place. Why is this hard? - Principles: Everything in one place. Build libraries. Encourage re-use.

Cloud will become a language feature. It’s already here, but will need more work. - Metaparticle, stdlib for cloud. Really neat, but still needs to grow.

metaparticle.io

Session Notes

Next up, are raw notes from the sessions I made it to. If you’re playing along at home, the CNCF has published a playlist of all the KubeCon 2017 sessions here: https://www.youtube.com/watch?v=Z3aBWkNXnhw&list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb

Microservices, Service Mesh, & CI/CD

Concepts for the talk:

  • CI
  • CD
  • Blue/Green
  • Canary Testing
  • Pipelines as code
  • Service Mesh

CI/CD Pipeline is like robots, does the same thing over and over really well.

Canary testing - App => proxy => filter some % of traffic to the app.

Canary w/ Microservices ecosystem is complex at best.

Sample pipeline:

Dev: build service => build container => deploy to dev => unit testing

Prod: deploy canary => modify routing => score it => merge PR => update prod

istio - a new service mesh tech. well adopted.

sidecar - envoy, istio control plane, creates route rule that controls side cars

Brigade - ci/cd tool, event driven scripting for k8s

  • functions as containers
  • run in parallel or serial
  • trigger workflows (github, docker, etc)
  • javascript
  • project config as secrets
  • fits into pipelines

Kashti - dashboard for brigade

  • easy viewing & constructing pipelines
  • waterfall view
  • log output

Vault and Secret Management in K8s

Why does vault exist? - Secret management for infrastructure as code. Hide the creds for your interconnects and such into vault, not github.

What is a secret? All the IAM stuff.

Current State:

  • Scatter / sprawl everywhere
  • Decentralized keys
  • etc

Goals:

  • Single source of secrets truth
  • Programmatic application access (automation)
  • Operations access (CLI/UI)
  • Practical sec
  • Modern Datacenter Friendly

Secure Secret Storage - table stakes. Dynamic == harder.

Security principles, same as they’ve always been:

  • C,I,A
  • Least Privilege (need to know)
  • Separation (controls)
  • Bracketing (time)
  • no-repudiation (can’t deny access)
  • defense in depth

Dynamic secrets:

  • Never provide “root” to clients
  • provide limited access based on role
  • granted on demand
  • leases are enforceable
  • audit trail

Vault supports a lot of the things, like, a lot of them. It’s plugin based.

Apps don’t use user / pass / 2FA. Users use LDAP/AD and 2FA.

Shamir Secret Sharing - Key levels to generate the master key, to generate the encryption key.

“But I have LDAP” - When you have auth for k8s, and auth internal, and github and and and… There is no more single source of truth. Stronger Identity in Vault aims to help with this.

Kernels v Distros

Distro risks - fragmentation: conformance

Option 1 - Ignore it

  • Doesn’t work
  • fragmentation
  • etc

Distros have challenges tho, some options for k8s, but not without issues:

  • Ignore it? Causes fragmentation
  • Own it? Needs lots of support, marketing, etc.

Middle ground:

  • Provide a ‘standard’ distro.
  • Provide a known good starting point for others.

Formalize add-ons process: kubectl apt-get update

Start with cluster/addons/?

k8s is a bit of both kernel & distro, releases every 6 weeks, good cadence. Maybe a break it / fix it tick tock.

Manage the distro

Fork all code into our namespace?

  • Only ship what is under control
  • Carry patches IFF needed

Push everything to one repo:

  • No hunting
  • managing randomly over the internet

Release every 6-12mo, distinguish component from package version. Base deprecation policies on distro releases.

Upgrade cadence with 1 quarter releases is too fast for upgrades. If you decouple ‘kernel’ from distro,

New set of community roles

  • Different skills
  • Different focus

We’ll never have less users or more freedom than we do today, so, let’s make the best decision.

Is the steering committee? - A dozen times this week. Every session on the dev day. Steering committee will delegate it, so let’s have these discussions.

Model of cloud integrations ~= hardware drivers?

Overall

It was refreshing to go to a conference as an attendee, and have this much great content over such a short period.

Beginning Home Coffee Roasting

About a year ago now, I began an adventure into the world of home coffee roasting in the hope of, maybe one day, backing out of tech, and into coffee. While that is a story for another time, here is some of the hard-won knowledge from my first year, in the hopes that it will help someone else.

Home Roasting

I’ve broken this post into a few sections, in the order I wish I’d taken.

  • Training materials
  • Roasters
  • Beans

Training Materials

Before you grab a roaster, spend some time reading about how it all works. Watch a bunch of videos on how other folks are doing it, the different machines, etc. To that end, here are a few jumping-off points:

Reading material

Before going much further, grab a copy of this, and read it over: Coffee Roaster’s Companion - Scott Rao

Most anything by Scott Rao is great. The book mentioned here, however, is specific to roasting. It breaks down the styles of different machines, the roasting process, and the chemical changes that happen inside the beans as roasting happens. Along the way, Scott provides a world of guidance that will both help you get started, and fine-tune your process.

Video material

Sweet Maria’s (the place I source beans from) is a sort of full-service home roasting shop. To that end, they have their own YouTube channel which has some decent guides: https://www.youtube.com/user/sweetmarias

Additionally, and on the ‘how to roast the things’ end, Mill City Roasters has a masters class they published via youtube: https://www.youtube.com/channel/UCfpTQQtvqLhHG_GWhk3Xp4A/playlists

In Person materials

This will largely depend on your area & willingness to spend, but there are a number of coffee roaster training classes available. The ones I found near us were few, far between, and more expensive than I was ready for at the time.

Roasters

Roasting coffee can be done with most anything where you can apply heat and some movement to the beans while they, well, roast. This means you can get there with anything from a frying pan and wooden spoon, to something commercial grade. At the home roaster / small scale batch roaster end, you have three basic types:

Air Roasters

The most common here is an air popcorn popper. If you go this route, be sure to choose one with high wattage, strong plastic, and an easy to bypass thermal cutoff. Coffee beans roast at a much higher temp than popcorn does, so the higher wattage will help get you there faster, and the plastic / cutoff help ensure the machine can withstand the roasting process more than once*.

_I didn’t learn this lesson the first time, or the second, or third. While it can be done, most machines from big box stores just won’t stand up to the punishment._

Also in the air range, you have things like the SR-700 which, while having a small capacity, allows for manual and computer control. It also has a decently sized community of home roasters that share roast profiles for various beans. Overall, this makes the on-ramp from “I want to roast” to “my first good cup” much quicker.

Electric

After air roasters, the next step up is some flavor of electric roaster. Electric roasters are generally of the ‘drum’ type, with an honorable mention of the Bonaverde which is a pan style thing that takes you from green beans to a pot of coffee. In this realm, the Behmor 1600+ is my roaster of choice, and the one I use currently.

It has the largest capacity of the home / hobby coffee roasters, allowing you to roast up to a pound at a time. Additionally, it has a number of baked-in roast profiles, and the ability to operate as a fully manual controlled roaster. While I wish it had better / more instrumentation, I’ve learned to live without, working by sight, smell, and such.

Electric roasters, however, have a few eccentricities:

They all require ‘clean’ and consistent power. Home power outlets may vary from one to the next on the actual voltage coming out of the outlet; further, said voltage may vary as other things on the same circuit add or remove load. All of this will have a direct impact on the ability of the roaster to heat the beans. To this end, I’ve added a sine-wave generating UPS.

In the electric realm, I’m watching the Aillio R1, which like the SR-700 offers complete computer control, like the Behmor offers a huge capacity, and like higher end gas machines, lets you roast back to back with less overhead / cooldown time.

Gas

There are a number of gas roasters aimed at the home market; however, they usually require a gas source of some sort (stove at the small end, propane tank or natural gas line at the mid/high end) and some way to vent the smoke and other fumes.

Given the requirements for running a gas machine, I did not research these as deeply for a starter machine. However, the greater capacities and control offered by a gas powered roaster will have me looking at them if/when I expand.

Beans

It used to be sourcing beans was much harder. There was only one shop in my town that would sell them, and you basically got whatever selection they had. Then Sweet Maria’s came around and changed the game for home roasters. There are now a double-handful of dedicated shops like Sweet Maria’s, and well, even Amazon will ship you some via Prime.

My recommendation here is to grab a green bean sampler, one that’s larger than you think you’ll need, and use it to learn your chosen roaster and to get a really good feel for a particular bean/varietal/region.

Atlanta VMUG - Containers Slides

Atlanta VMUG Usercon - Containers: Beyond the hype

Herein lie the slides, images, and Dockerfile to review this session as I presented it.

Behind the scenes

Because this is a presentation on containers, I thought it only right that I use containers to present it. In that way, besides being a neat trick, it let me emphasize the change in workflow, as well as the usefulness of containers.

I used James Turnbull’s “Presenting with Docker” as the runtime to serve the slides. Then mounted three volumes to provide the slides, images, and a variant of index.html so I could live demo a change of theme.

Viewing the presentation

Assuming you have Docker installed, use the following commands to build and launch the presentation:

# Clone the presentation
git clone https://github.com/bunchc/atlanta-vmug-2017-10-19.git
cd atlanta-vmug-2017-10-19/

# Pull the docker image
docker pull jamtur01/docker-presentation

# Run the preso
docker run -p 8000:8000 --name docker_presentation \
  -v $PWD/images:/opt/presentation/images \
  -v $PWD/slides:/opt/presentation/slides \
  -d jamtur01/docker-presentation

Then browse to http://localhost:8000. To view speaker notes, press S to view in speaker mode.

Keystone Credential Migration Error

Credential migration in progress. Cannot perform writes to credential table.

In openstack-ansible versions 15.1.7 and 15.1.8, there is an issue with the version of shade and the keystone db_sync steps not completing properly. This is fixed in 15.1.9; however, if running one of the aforementioned releases, the following may help.

Symptom:

Keystone reports a 500 error when attempting to operate on the credential table.

You will find something similar to this in the keystone.log file

./keystone.log:2017-10-04 18:54:43.978 13170 ERROR keystone.common.wsgi [req-19551bfb-c4d5-4582-adc0-6edcbe7585a5 84f7baa50ec34454bdb5d6a2254278b3 98186b853beb47a8bcf94cc7f179bf76 - default default] (pymysql.err.InternalError) (1644, u'Credential migration in progress. Cannot perform writes to credential table.') [SQL: u'INSERT INTO credential (id, user_id, project_id, encrypted_blob, type, key_hash, extra) VALUES (%(id)s, %(user_id)s, %(project_id)s, %(encrypted_blob)s, %(type)s, %(key_hash)s, %(extra)s)'] [parameters: {'user_id': u'84f7baa50ec34454bdb5d6a2254278b3', 'extra': '{}', 'key_hash': '8e3a186ac35259d9c5b952201973dda4dfc1eefe', 'encrypted_blob': 'gAAAAABZ1S5zAOe7DBj5-IoOe3ci1C1QzyLcHFRV3vJvoqpWL3qVjG8EQybUaZJN_-n3vFvoR_uIL2-2Ic2Sug9jImAt-XgM0w==', 'project_id': None, 'type': u'cert', 'id': 'ff09de37ad2a4fce97993da17176e288'}]

To validate:

  1. Attach to the keystone container and enter the venv
lxc-attach --name $(lxc-ls -1| grep key)
cd /openstack/venvs/keystone-15.1.7
source bin/activate
source ~/openrc
  2. Attempt to create a credential entry:
openstack credential create admin my-secret-stuff

An unexpected error prevented the server from fulfilling your request. (HTTP 500) (Request-ID: req-d8814c07-59a6-4a06-80dc-1f46082f0866)

Fix at build time

To fix when building, add shade 1.22.2 to global-requirement-pins.txt prior to building the environment:

echo "shade==1.22.2" | tee -a /opt/openstack-ansible/global-requirement-pins.txt

scripts/bootstrap-ansible.sh \
    && scripts/bootstrap-aio.sh \
    && scripts/run-playbooks.sh

To fix while running

  • Pin shade to 1.22.2
  • Rerun os-keystone-install.yml
  • keystone-manage db_sync expand, migrate, and contract
  1. Pin shade:
echo "shade==1.22.2" | tee -a /opt/openstack-ansible/global-requirements-pins.txt
  2. Run os-keystone-install.yml
cd /opt/openstack-ansible/playbooks
openstack-ansible -vvv os-keystone-install.yml

With shade pinned, the following steps should unlock the credential table in the keystone database:

  1. Attach to the keystone container and enter the venv
lxc-attach --name $(lxc-ls -1| grep key)
cd /openstack/venvs/keystone-15.1.7
source bin/activate
source ~/openrc
  2. Expand the keystone database
keystone-manage db_sync --expand
  3. Migrate the keystone database
keystone-manage db_sync --migrate
  4. Then, contract the keystone database
keystone-manage db_sync --contract

Note: These are the same steps the os_keystone role uses.

  5. After this is done, test credential creation:
openstack credential create admin my-secret-stuff

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-secret-stuff                  |
| id         | 4d1f2dd232854dd3b52dc0ea2dd2f451 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 187654e532cb43599159c5ea0be84a68 |
+------------+----------------------------------+

Still didn’t work?

  1. Dump the keystone database to a file, then make a backup of said file
lxc-attach --name $(lxc-ls -1 | grep galera)
mysqldump keystone > keystone.orig
cp keystone.orig keystone.edited
  2. Edit the file to remove / add the following
--- edit out this section ---
BEGIN
  IF NEW.encrypted_blob IS NULL THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
  END IF;
  IF NEW.encrypted_blob IS NOT NULL AND OLD.blob IS NULL THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'Credential migration in progress. Cannot perform writes to credential table.';
  END IF;
END */;;
--- end edits ---

--- add this to the first line ---
USE keystone;
--- end addition ---
  3. Then apply the changes
mysql < keystone.edited
  4. After this is done, test credential creation:
openstack credential create admin my-secret-stuff

+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| blob       | my-secret-stuff                  |
| id         | 4d1f2dd232854dd3b52dc0ea2dd2f451 |
| project_id | None                             |
| type       | cert                             |
| user_id    | 187654e532cb43599159c5ea0be84a68 |
+------------+----------------------------------+


Reviewing Instance Console Logs in OpenStack

Console logs are critical for troubleshooting the startup process of an instance. These logs are produced at boot time, before the console becomes available. However, when working with cloud-hosted instances, accessing these can be difficult. OpenStack Compute provides a mechanism for accessing the console logs.

Getting Ready

To access the console logs of an instance, the following information is required:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

For this example we will view the last 5 lines of the cookbook.test instance.

How to do it…

To show the console logs of an instance, use the following command:

# openstack console log show --lines 5 cookbook.test

[[0;32m  OK  [0m] Started udev Coldplug all Devices.
[[0;32m  OK  [0m] Started Dispatch Password Requests to Console Directory Watch.
[[0;32m  OK  [0m] Started Set console font and keymap.
[[0;32m  OK  [0m] Created slice system-getty.slice.
[[0;32m  OK  [0m] Found device /dev/ttyS0.

How it works…

The openstack console log show command collects the console logs, as if you were connected to the server via a serial port or sitting behind the keyboard and monitor at boot time. The command will, by default, return all of the logs generated to that point. To limit the amount of output, the --lines parameter can be used to return a specific number of lines from the end of the log.
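
For example, omitting --lines returns everything the instance has written to its console so far; redirecting that to a file (the filename below is just an example) makes it easier to dig through:

# openstack console log show cookbook.test > cookbook.test-console.log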

Rescuing an OpenStack instance

Rescuing an instance

OpenStack Compute provides a handy troubleshooting tool with rescue mode. Should a user lose an SSH key, or otherwise not be able to boot and access an instance, say, due to bad IPTABLES settings or a failed network configuration, rescue mode will start a minimal instance and attach the disk from the failed instance to aid in recovery.

Getting Ready

To put an instance into rescue mode, you will need the following information:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

The instance we will use in this example is cookbook.test

How to do it…

To put an instance into rescue mode, use the following command:

# openstack server rescue cookbook.test
+-----------+--------------+
| Field     | Value        |
+-----------+--------------+
| adminPass | zmWJRw6C5XHk |
+-----------+--------------+

To verify an instance is in rescue mode, use the following command:

# openstack server show cookbook.test -c name -c status
+--------+---------------+
| Field  | Value         |
+--------+---------------+
| name   | cookbook.test |
| status | RESCUE        |
+--------+---------------+

Note: When in rescue mode, the disk of the original instance is attached as a secondary disk. In order to access the data on it, you will need to mount it.

To exit rescue mode, use the following command:

# openstack server unrescue cookbook.test

Note: This command will produce no output if successful.

How it works…

The command openstack server rescue provides a rescue environment with the disk of your instance attached. First it powers off the named instance. Then, boots the rescue environment, and attaches the disks of the instance. Finally, it provides you with the login credentials for the rescue instance.

Accessing the rescue instance is done via SSH. Once logged into the rescue instance, you can mount the disk using mount <path to disk> /mnt.

Once you have completed your troubleshooting or recovery, the unrescue command reverses this process. First stopping the rescue environment, and detaching the disk. Then booting the instance as it was.
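
As a rough sketch, and assuming the original instance’s disk shows up in the rescue environment as /dev/vdb with its root filesystem on the first partition, the flow looks something like:

# lsblk
# mount /dev/vdb1 /mnt
# umount /mnt

Between the mount and umount is where you would repair SSH keys, network configuration, and so on under /mnt.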

OpenStack Image Shelving

Somewhat unique to OpenStack Nova is the ability to “shelve” an instance. Instance shelving allows you to retain the state of an instance without having it consume resources. A shelved instance will be retained as a bootable instance for a configurable amount of time, then deleted. This is useful as part of an instance lifecycle process, or to conserve resources.

Getting Ready

To shelve an instance, the following information is required:

  • openstack command line client
  • openrc file containing appropriate credentials
  • The name or ID of the instance

How to do it…

To shelve an instance, the following commands are used:

Check the status of the instance

# openstack server show cookbook.test -c name -c status
+--------+---------------+
| Field  | Value         |
+--------+---------------+
| name   | cookbook.test |
| status | ACTIVE        |
+--------+---------------+

Shelve the instance

# openstack server shelve cookbook.test

Note: This command produces no output when successful. Shelving an instance may take a few moments depending on your environment.

Check the status of the instance

# openstack server show cookbook.test -c name -c addresses -c status
+-----------+-------------------+
| Field     | Value             |
+-----------+-------------------+
| addresses | public=10.1.13.9  |
| name      | cookbook.test     |
| status    | SHELVED_OFFLOADED |
+-----------+-------------------+

Note: A shelved instance will retain the addresses it has been assigned.

Un-shelving the instance

# openstack server unshelve cookbook.test

Note: This command produces no output when successful. As with shelving the instance may take a few moments to become active depending on your environment.

Check the status

# openstack server show cookbook.test -c name -c addresses -c status
+-----------+------------------+
| Field     | Value            |
+-----------+------------------+
| addresses | public=10.1.13.9 |
| name      | cookbook.test    |
| status    | ACTIVE           |
+-----------+------------------+

How it works…

When told to shelve an instance, OpenStack compute will first stop the instance. It then creates an instance snapshot to retain the state of the instance. The runtime details, such as number of vCPUs, memory, and IP addresses, are retained so that the instance can be unshelved and rescheduled at a later time.

This differs from shutting down an instance, in that the resources of a shutdown instance are still reserved on the host on which it resided, so that it can be powered back on quickly. A shelved instance, however, will still show in openstack server list, while the resources it was assigned are freed for other use. Additionally, as the shelved instance will need to be restored from an image, OpenStack Compute will perform placement as if the instance were new, and starting it will take some time.
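
For reference, the “configurable amount of time” mentioned earlier is driven by Nova configuration on the compute hosts. A minimal sketch of the relevant options (the values shown are illustrative, not this environment’s settings):

[DEFAULT]
# Seconds a shelved instance stays on its hypervisor before being offloaded.
# 0 offloads immediately, -1 never offloads.
shelved_offload_time = 0
# How often, in seconds, the periodic task looks for instances to offload.
shelved_poll_interval = 3600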

Run OpenStack Tempest in openstack-ansible

It took me longer than I would like to admit to get tempest running in openstack-ansible, so here is how I did it, in the hopes I’ll remember (or that it will save someone else time).

Getting Started

This post assumes a relatively recent version of openstack-ansible. For this post, that is an Ocata all-in-one build (15.1.7 specifically). Log into the deployment node.

Installing Tempest

Before we can run tempest, we have to install it first. To install tempest, on the deployment node, run the following:

# cd /opt/openstack-ansible/playbooks
# openstack-ansible -vvvv os-tempest-install.yml

<<lots of output>>

PLAY RECAP *********************************************************************
aio1_utility_container-b81d907c : ok=68   changed=1    unreachable=0    failed=0

The prior command uses the openstack-ansible wrapper to pull in the appropriate variables for your openstack environment, and installs tempest.

Running tempest

To run tempest, log into the controller node (this being an AIO, it is the deployment node, controller, compute, etc). Then attach to the utility container:

# lxc-ls | grep utility

aio1_utility_container-b81d907c

# lxc-attach --name aio1_utility_container-b81d907c
root@aio1-utility-container-b81d907c:~# ls
openrc

Once attached to the utility container, activate the tempest venv:

# cd /openstack/venvs/tempest-15.1.7
# source bin/activate

Once in the venv, we need to tell Tempest what workspace to use. Fortunately, the os-tempest-install playbook prepares a tempest ‘workspace’ for you. Let’s move into that directory and launch our tempest tests:

(tempest-15.1.7) # cd /openstack/venvs/tempest-15.1.7/workspace
(tempest-15.1.7) # tempest run --smoke -w 4

<<so.much.output>>

======
Totals
======
Ran: 93 tests in 773.0000 sec.
 - Passed: 80
 - Skipped: 11
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 2
Sum of execute time for each test: 946.8377 sec.

All done!
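
If something fails, you do not have to re-run the entire smoke suite; tempest run also accepts a regex to narrow the selection. As a sketch (still inside the venv and workspace directory, and with an example pattern):

(tempest-15.1.7) # tempest run --list-tests
(tempest-15.1.7) # tempest run --regex tempest.api.compute.servers

The first command lists the tests available in this workspace; the second re-runs only the subset matching the pattern.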

VMworld Day -1


This, this was the easy day. The day where one finishes driving in (Yes, driving).

Going to the Hoover Dam, and well, having an easy dinner with friends you only see a few times a year.

Damn Dam

And last but not least, sending the final emails around the vCoffee exchange:


vCoffee Exchange

If you missed the emails, here are the details:

  • Drop off: Monday, vBrownBag area in the VMTN communities space.
  • Pickup: Wednesday, vBrownBag area in the VMTN communities space.
  • How: Once dropped off, I’ll mix them up Secret Santa style before Wednesday.
  • Also: There may or may not be a surprise to go along with it.