Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

Mentorship Program H2, 2015

In April this year, I kicked off a ‘Mentorship Program’. While I don’t much like the word ‘program’, as it tends to make this sound more formal than it really is, I think it’s time I kicked off and invited in another round of folks.

First some background:

The Program

For what is left of 2015, and maybe a bit more, I’ll ‘mentor’ you towards some end of your choosing, be it professional, personal, or both. In practical terms, this means at a minimum: one hour a week on Skype (Hangouts, phone, etc.) to talk about where you are currently, what you are working on, and so on.

These are 1-to-1 mentorships; that is, you and I working together on helping you get to that next step.

Class H2 - 2015

Ok, so we’re starting late to be calling it H2, as there are just under five months left in the year. But who knows; we did huge things last round in just three. This ‘class’, if that’s what we’ll call it, will again be five individuals (you are likely one of them) who would like some help, advice, or a coach / mentor (it is a mentorship program, no?). Maybe you are thinking about, or looking to work on, that next thing, or need a push to get to that next level career-wise. Whatever it is, I’m here to help.


Having described it some in the prior paragraphs, the only real qualifiers for you are:

  • You have to be willing to hustle, to do things, to make changes
  • Don’t be an asshole
  • Have some way for us to get in contact on the regular.


Why do this with me? Umm, well, I do things, and over the past few rounds I have helped a few handfuls of folks get down their paths as well.

Selection Process

Well, here’s the kicker. The selection process, if there are more than 5 sign-ups, involves me looking over the applications, reading a bit about you, and perhaps asking some additional questions.


What if there are more than 5? In that case, after I have done a first round of selections, I will send out emails notifying everyone and asking for permission to share your information with other folks who would also make good mentors. It would then be between y’all to sort out the details from that point.


The application!

That’s all there is to it. Disclaimer: it’s a Google form, whose only recipient is me (insofar as these things can be guaranteed).

OpenStack Heat, Specify Boot Order

Another post inspired by

The question was:

If there are two instances (OS::Nova::Server), how can I specify the boot order?

The Heat Orchestration Template, or HOT, specification provides a ‘depends_on’ attribute that works generically on Heat resources. That is, a resource can depend on a network, on another instance, a database, a Cinder volume, etc.

In the OpenStack docs they provide the following example:

    resources:
      server1:
        type: OS::Nova::Server
        depends_on: server2

      server2:
        type: OS::Nova::Server

Or for multiple servers:

    resources:
      server1:
        type: OS::Nova::Server
        depends_on: [ server2, server3 ]

      server2:
        type: OS::Nova::Server

      server3:
        type: OS::Nova::Server
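Pieced together into a complete, minimal template, that looks something like the following. The resource names match the snippet above; the image and flavor values are placeholders you would swap for ones that exist in your cloud:

```yaml
heat_template_version: 2013-05-23

description: server1 waits for server2 to be created before booting

resources:
  # server1 is not created until server2 reaches CREATE_COMPLETE
  server1:
    type: OS::Nova::Server
    depends_on: server2
    properties:
      image: ubuntu-14.04
      flavor: m1.small

  server2:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04
      flavor: m1.small
```

Creating a stack from this template will bring up server2 first, then server1.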

Is /thing/ Mandatory to Become an Expert in OpenStack

This post comes about by way of

Red hat is mandatory to become expert in openstack ?

While the question is not entirely clear or very specific, here’s the answer I put forth:


So, attempting to extract some context from your question to provide a better answer, I’ll attempt to go two routes:

1) “Is Red Hat Linux mandatory…”; and

2) “Is Red Hat OpenStack mandatory…”

The short answer to both questions is no: no specific Linux or OpenStack distribution is required to become an expert. That being said, some familiarity with Linux will give you a good foundation for working with OpenStack, and an OpenStack distribution will help ease the learning curve associated with learning OpenStack.

There are plenty of distribution-independent OpenStack resources available (the #vBrownBag and the OpenStack Cookbook among them).

Hope this helps.

Full-Disclosure, I am associated with both the OpenStack Cookbook and the #vBrownBag podcast.

A few links to this end as well:

Pi-Hole in the cloud

Pi Hole? Cloud? What?

The short intro is that Pi-Hole is a Raspberry Pi project that sets your Pi up as a DNS server which blacklists and blocks ads. This is great and all, but rather limiting if one does not have a Pi (or their Pi is currently in use in about a million other ways).

Getting started

Before we get too deep into things, you’ll need to get some requirements in place first. Specifically, you’ll need some manner of Ubuntu 14.04 host. As I’ve stated, we’re doing this in the cloud, so I’ll be using an 8GB ‘standard’ instance for this.

Once you have the host available, log in and run apt-get update/upgrade:

sudo apt-get update && sudo apt-get upgrade -y

As an optional next step, harden the instance some:

curl | sudo bash

Note: Go read that script first. Because: HT @nonapeptide

Do the thing

The actual ‘do things’ part here is twofold. First, install the Pi-Hole bits onto your cloud instance. Second, set said cloud instance as your DNS server.

Installing ‘Pi-Hole’

This step is relatively straightforward, but involves that nasty curl-pipe-to-sudo-bash bit again, so again I strongly, strongly recommend reading the script first. At a high level, the script does a few things:
  • Installs OS updates
  • Installs dnsmasq (DNS server)
  • Installs lighttpd (http server)
  • Stops said services for modification
  • Backs up the original config files
  • Copies new ones into place
  • Creates and runs the blacklist-generation script

That last step is where the magic happens: the script grabs and parses a few rather large lists of ad-serving domains and adds them to your DNS blacklist.
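The blacklist step can be sketched roughly like this. This is a simplified stand-in for what the real script does, not the script itself; the sample file, the 192.168.1.10 IP, and the output path are all placeholders:

```shell
# Sample of a hosts-format blocklist (the real script fetches several
# much larger lists from upstream ad-blocking projects)
cat > /tmp/sample-hosts.txt <<'EOF'
127.0.0.1 ads.example.com
127.0.0.1 tracker.example.net
EOF

# Turn each blacklisted domain into a dnsmasq 'address=' rule that
# answers queries for it with the Pi-Hole's own IP (placeholder here),
# where the local web server can serve up a blank page instead of the ad
awk '{print "address=/" $2 "/192.168.1.10"}' /tmp/sample-hosts.txt > /tmp/adlist.conf

cat /tmp/adlist.conf
```

dnsmasq would then be pointed at the generated file and restarted, so every blacklisted domain resolves to the Pi-Hole instead of the ad network.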

For more, and very specific details, the Jacob, who created the Pi-Hole, has an excellent write-up.

Some additional niceties

Now that you’ve got Pi-Hole installed, there are a few things I do to keep it updated and humming along. That is, running the blacklist update once is a good start, but in the arms race of Internet advertising, one needs to keep moving. To do this, create a cronjob that runs weekly to make sure we’re up to date:

echo "47 6    * * 7   root    /usr/local/bin/" | sudo tee -a /etc/crontab
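For reference, the fields in that crontab entry break down like so; with day-of-week 7 being Sunday, this runs the update at 06:47 every Sunday (the script path placeholder stands in for whatever your update script is called):

```
# min  hour  dom  month  dow  user  command
  47   6    *    *      7    root  /usr/local/bin/<your-update-script>
```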

Using the Pi-Hole

Setting your DNS server should be relatively straightforward. However, to ensure you do not forget a device or two, and to capture other devices as they come and go, I’ve found it preferable to configure this on your edge device (say, your WiFi router or so).


In this post we showed you how to do the Pi-Hole setup, but instead of on a Raspberry Pi, we built it on a generic Ubuntu 14.04 box “in the cloud” so it’ll be available to you everywhere.

Open Tabs - SecDevOps Edition

This was supposed to be a long, interesting, change-the-way-you-think kind of post. Instead, I got distracted by CI/CD and InfoSec, and what it might mean to apply those in conjunction with DevOps to allow for some manner of continuous attestation against an infrastructure. Bloody squirrels.

Ansible, CIS, and Ubuntu

Following on from my RHEL and CIS with Ansible post comes quite a bit of work to proceed down the Ubuntu path of applying CIS benchmarks with Ansible. Before we get too deep, however, it is important to call out that this Ansible role is still based on the RHEL benchmarks, just applied to the applicable parts of Ubuntu. This is because the RHEL benchmarks have been further developed and harden many parts of the system the Ubuntu benchmarks don’t touch.

To begin with, we’ll use the adapted Ansible role from here. Like so:

git clone /etc/ansible/roles/cis-ubuntu

From there, create a playbook.yaml that contains the following:

- hosts: all
  user: root
  tasks:
    - group_by: key=os_{{ ansible_distribution }}

- hosts: os_CentOS
  user: root
  roles:
    - cis-centos

- hosts: os_Ubuntu
  user: root
  roles:
    - cis-ubuntu

Your playbook file contains three sections. The first uses a ‘group_by’ task to separate hosts by operating system. The last two sections then apply the right CIS role according to the OS each host reported back.

Finally, apply the playbooks as follows:

ansible-playbook -i /etc/ansible/hosts ./playbook.yaml
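For completeness, the /etc/ansible/hosts referenced above is just a plain inventory file. Note that the os_CentOS and os_Ubuntu groups do not need to exist in it; group_by creates them at runtime from the facts each host reports. Something like the following (hostnames are placeholders) is enough:

```ini
# /etc/ansible/hosts -- example inventory; hostnames are placeholders
[servers]
centos-box-01
ubuntu-box-01
ubuntu-box-02
```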

Using the CIS Ansible Role against CentOS/RHEL

Nothing much new here, except that before a day or so ago I was completely unfamiliar with Ansible. Now I’m slightly less so, but felt this needed to be written down before I forgot. Who knows, maybe it’ll help you as well.

Center for Internet Security Benchmarks & You

The CIS puts out a number of testable security benchmarks. If you’ve not read one of these, there are some 400 or so items scored as part of the benchmark. The benchmark also has various levels, and when you follow the guidance, it allows you to produce some manner of attestation that you have followed the guide and produced a reasonably secure system. At least, that’s the idea.

In a world where the productivity of an operator or administrator (when did that pendulum swing back?) is measured in tens of thousands of servers per admin on staff, and with those servers being ‘cattle’ that are cycled constantly, hardening against each of these checks gets to be… interesting.

Using a Config Management Tool

As I’ve written before, this can be handled in userdata. Additionally, you can do this in a very OpenStack way using Heat. It can also be done in a config management tool of your choice, like Salt.

Using Ansible-Role-CIS

In this case, and after a long winded way of getting here, we’re going to use the Ansible CIS role from here.

Now, assuming you have the various ssh bits taken care of per the Ansible docs, you will need to do the following:

On the box to receive the role:

sudo yum install -y libselinux-python

On the ‘control’ node, you will need to create a playbook.yaml as follows:

- hosts: all
  user: root
  roles:
    - cis

Once that’s saved, you can run the following:

ansible-playbook -i /etc/ansible/hosts -C ./playbook.yaml

This will run the playbook in a ‘read only’ or check mode and tell you what is out of compliance and where. To push the changes, remove the -C, like this:

ansible-playbook -i /etc/ansible/hosts ./playbook.yaml


In this post we’ve talked about the CIS hardening benchmarks as well as some practices for how to harden cloud servers in an automated way. We wrapped up showing you how to do this using Ansible against a RHEL or CentOS style server.


Many thanks to Major Hayden for getting this out there.

Podcasting at the Summit

Each summit seems to get a bit bigger, a bit better, at least in terms of podcasting. This year there is apparently quite a lineup headed to Vancouver:

There are also rumors that The Cloudcast will be there.

OpenStack Vancouver in Security

Having done some work around OpenStack and security, and even spoken at a prior summit, I’m always interested to see how many InfoSec sessions get accepted and what the various topics are.

This time around I am really excited, not so much by the quantity, but by the content and direction folks are thinking. Take a look over some of these sessions:

The first two address, or begin to anyway, what happens to security as one moves to an agile / devops / deploy-to-production-all-the-time style world. The next one addresses hardening and securing OpenStack at the source-code level.

Following that, a session addressing detection, which has long needed some vision. Finally, the last two by Rob Clark look like they’ll drive home the points made in the others.

There are plenty more security related sessions at this summit as well.

Update: Userdata Hardening Script

A while back I did a cleanse of the gists I had collected on GitHub. One of the unfortunate side effects of this was that I lost some post content. Apologies.

The biggest one that went missing was the userdata script I use on Ubuntu servers. Here it is, updated, and with some additions for better firewalling / logging and the like. Shoot me a note on twitter with any questions or comments.