Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

Is /thing/ Mandatory to Become an Expert in OpenStack?

This post comes about by way of ask.openstack.org

Red hat is mandatory to become expert in openstack ?

While the question isn't entirely clear or specific, here's the answer I put forth:

Howdy!

So, to extract some context from your question and provide a better answer, I'll go two routes:

1) “Is Red Hat Linux mandatory…”; and

2) “Is Red Hat OpenStack mandatory…”

The short answer to both questions is no: no specific Linux or OpenStack distribution is required to become an expert. That said, some familiarity with Linux will give you a good foundation for working with OpenStack, and an OpenStack distribution will help ease the learning curve.

There are plenty of distribution-independent OpenStack resources available (the #vBrownBag and the OpenStack Cookbook among them).

Hope this helps.

Full disclosure: I am associated with both the OpenStack Cookbook and the #vBrownBag podcast.

A few links to this end as well:

Pi-Hole in the cloud

Pi Hole? Cloud? What?

The short intro is that the Pi-Hole is a Raspberry Pi project designed to set your Pi up as a DNS server that blacklists and blocks ads. This is great and all, but rather limiting if one does not have a Pi (or their Pi is currently in use in about a million other ways).

Getting started

Before we get too deep into things, you'll need to get some requirements in place. Specifically, you'll need some manner of Ubuntu 14.04 host. As I've said, we're doing this in the cloud, so I'll be using an 8GB ‘standard’ instance.
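If you need to spin one up first, it looks roughly like the following (a sketch using the nova CLI; the image and flavor names are placeholders for whatever your provider calls them):

# boot an Ubuntu 14.04 instance to host the Pi-Hole (image/flavor names are placeholders)
nova boot \
  --image "ubuntu-14.04" \
  --flavor "8GB-standard" \
  --key-name mykey \
  pi-hole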

Once you have the host available, log in and run apt-get update/upgrade:

sudo apt-get update && sudo apt-get upgrade -y

As an optional next step, harden the instance some:

curl https://gist.githubusercontent.com/bunchc/fa5787f9a398ee0c70e1/raw/265dc8a61ce63e53e0f97eb0099a7bd2fd0d71c8/user_data_hardening.sh | sudo bash

Note: Go read that script first. Because: https://pbs.twimg.com/media/CHG5xIbWcAAzBPL.png HT @nonapeptide

Do the thing

The actual ‘do the thing’ part here is twofold. The first step is to install the Pi-Hole bits onto your cloud instance. The second is to set said cloud instance as your DNS server.

Installing ‘Pi-Hole’

This step is relatively straightforward, but involves that nasty curl-to-sudo-bash bit again, so once more I strongly recommend reading the script first (there's a sketch of the install command after this list). At a high level, the script does a few things:
  • Installs OS updates
  • Installs dnsmasq (DNS server)
  • Installs lighttpd (http server)
  • Stops said services for modification
  • Backs up the original config files
  • Copies new ones into place
  • Creates and runs gravity.sh
  • Creates chronometer.sh
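For reference, the install itself is that same curl-pipe-to-bash pattern, sketched below. I believe install.pi-hole.net is the project's documented installer endpoint, but verify it against the project's current instructions and, again, read the script before running it:

# read this script before you pipe it to bash!
curl -L https://install.pi-hole.net | sudo bash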

gravity.sh is where the magic happens. The script, found here, grabs and parses a few rather large lists of ad-serving domains and adds them to your DNS blacklist.
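If you're curious what that boils down to, here's a heavily simplified illustration of the idea. This is not the actual gravity.sh; the list URL and file name below are made up for the example:

# illustrative only: fetch an ad-domain list and turn it into a dnsmasq blacklist
curl -s "https://example.com/ad-domains.txt" \
  | sort -u \
  | awk '{ print "address=/" $1 "/127.0.0.1" }' \
  | sudo tee /etc/dnsmasq.d/adblock.conf > /dev/null
sudo service dnsmasq restart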

For more, and very specific, details, Jacob, who created the Pi-Hole, has an excellent write-up.

Some additional niceties

Now that you’ve got Pi-Hole installed, there are a few things I do to keep it updated & humming along. That is, running gravity.sh once is a good start, but in the arms race of Internet advertising, one needs to keep moving. To do this, I create a cronjob to run gravity.sh weekly and make sure we stay up to date:

echo "47 6    * * 7   root    /usr/local/bin/gravity.sh" | sudo tee -a /etc/crontab

Using the Pi-Hole

Setting your DNS server should be relatively straightforward. However, to make sure you don’t forget a device or two, and to capture other devices as they come and go, I’ve found it preferable to configure this on your edge device (say, your Wi-Fi router).
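If you also want to point an individual Linux machine at it directly, on an Ubuntu client using resolvconf that looks something like this (203.0.113.10 stands in for your cloud instance's IP):

# make the Pi-Hole instance the first nameserver (203.0.113.10 is a placeholder)
echo "nameserver 203.0.113.10" | sudo tee /etc/resolvconf/resolv.conf.d/head
sudo resolvconf -u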

Summary

In this post we showed you the Pi-Hole setup, but instead of building it on a Raspberry Pi, we built it on a generic Ubuntu 14.04 box “in the cloud” so it’ll be available to you everywhere.

Open Tabs - SecDevOps Edition

This was supposed to be a long, interesting, change-the-way-you-think kind of post. Instead, I got distracted by CI/CD & infosec, and what it might mean to apply them in conjunction with DevOps to allow for some manner of continuous attestation against an infrastructure. Bloody squirrels.

Ansible, CIS, and Ubuntu

Following on from my RHEL and CIS with Ansible post comes quite a bit of work to proceed down the Ubuntu path of applying CIS benchmarks with Ansible. Before we get too deep, however, it is important to call out that this Ansible role is still based on the RHEL benchmarks, just applied to the applicable parts of an Ubuntu system. This is because the RHEL benchmarks have been further developed and harden many parts of the system that the Ubuntu benchmarks don’t touch.

To begin with, we’ll use the adapted Ansible role from here. Like so:

git clone https://github.com/bunchc/ansible-role-cis /etc/ansible/roles/cis-ubuntu

From there, create a playbook.yaml that contains the following:

- hosts: all
  user: root
  tasks:
    - group_by: key=os_{{ ansible_distribution }}

- hosts: os_CentOS
  user: root
  roles:
    - cis-centos

- hosts: os_Ubuntu
  user: root
  roles:
    - cis-ubuntu

Your playbook file contains three sections. The first uses a group_by task to separate hosts by operating system. The last two sections then apply the right CIS role according to the OS each host reports.
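For reference, the inventory these plays run against can be a flat list of hosts, since group_by sorts them at runtime. A minimal example (hostnames here are placeholders):

cat <<'EOF' | sudo tee /etc/ansible/hosts
# mixed inventory -- group_by splits these into os_CentOS / os_Ubuntu at runtime
centos-01.example.com
ubuntu-01.example.com
EOF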

Finally, apply the playbooks as follows:

ansible-playbook -i /etc/ansible/hosts ./playbook.yaml

Using the CIS Ansible Role against CentOS/RHEL

Nothing much new here, except that until a day or so ago I was completely unfamiliar with Ansible. Now I’m slightly less so, but I felt this needed to be written down before I forgot. Who knows, maybe it’ll help you as well.

Center for Internet Security Benchmarks & You

The CIS puts out a number of testable security benchmarks. If you’ve not read one of these, there are some… 400 or so items scored as part of the benchmark. The benchmark also has various levels, and following the guidance allows you to produce some manner of attestation that you have followed the guide and produced a reasonably secure system. At least, that’s the idea.

In a world where the productivity of an operator or administrator (when did that pendulum swing back?) is measured in tens of thousands of servers per admin, and with those servers being ‘cattle’ that are cycled constantly, hardening each of them against each of these checks gets to be… interesting.

Using a Config Management Tool

As I’ve written before, this can be handled in userdata. Additionally, you can do this in a very OpenStack way using Heat, or in a config management tool of your choice, like Salt.
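As a quick illustration of the userdata route (a sketch; the image, flavor, and server names are placeholders), you hand the hardening script to the instance at boot:

# boot an instance with a hardening script passed as userdata (names are placeholders)
nova boot \
  --image "centos-7" \
  --flavor "m1.small" \
  --user-data ./user_data_hardening.sh \
  cis-hardened-server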

Using Ansible-Role-CIS

In this case, and after a long-winded way of getting here, we’re going to use the Ansible CIS role from here.

Now, assuming you have the various SSH bits taken care of per the Ansible docs, you will need to do the following:

On the box to receive the role:

sudo yum install -y libselinux-python

On the ‘control’ node, you will need to create a playbook.yaml as follows:

- hosts: all
  user: root
  roles:
    - cis

Once that’s saved, you can run the following:

ansible-playbook -i /etc/ansible/hosts -C ./playbook.yaml

This will run the playbook in a ‘read only’ or test mode and will tell you what is out of compliance and where. To push the changes, remove the -C, like this:

ansible-playbook -i /etc/ansible/hosts ./playbook.yaml

Summary

In this post we’ve talked about the CIS hardening benchmarks as well as some practices for hardening cloud servers in an automated way. We wrapped up by showing you how to do this using Ansible against a RHEL- or CentOS-style server.

Acknowledgment!

Many thanks to Major Hayden for getting this out there.

Podcasting at the Summit

Each summit seems to get a bit bigger, a bit better, at least in terms of podcasting. This year, there is apparently quite a lineup headed to Vancouver:

There are also rumors that The Cloudcast will be there.

OpenStack Vancouver in Security

Having done some work around OpenStack and security, and even spoken at a prior summit, I’m always interested to see how many infosec sessions get accepted and what the various topics are.

This time around I am really excited, not so much by the quantity, but by the content and the direction folks are thinking in. Take a look at some of these sessions:

The first two address, or begin to anyway, what happens to security as one moves to an agile / DevOps / deploy-to-production-all-the-time style of world. The next one addresses hardening and securing OpenStack at the source-code level.

Following that, a session addressing detection, which has long needed some vision. Finally, the last two by Rob Clark look like they’ll drive home the points made in the others.

There are plenty more security related sessions at this summit as well.

Update: Userdata Hardening Script

A while back I did a cleanse of the gists I had collected on GitHub. One of the unfortunate side effects of this was that I lost some post content. Apologies.

The biggest one that went missing was the userdata script I use on Ubuntu servers. Here it is, updated, and with some additions for better firewalling / logging and the like. Shoot me a note on Twitter with any questions or comments.
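To give a flavor of the firewalling / logging additions, here is a minimal, illustrative sketch of the sort of thing the script does (this is not the full script):

# illustrative only: basic firewalling and logging
sudo apt-get install -y ufw fail2ban
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw logging on
sudo ufw --force enable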

vDM Zombies and other late night ramblings

OK, so I blame @discoposse (web) for this. That is, I’ve been drawn into the Interop Virtual Design Master challenge. What follows below are the late-night ramblings of someone without enough caffeine or knowledge of what exactly is handy after a zombie outbreak.

Intro

The prompts for this challenge, both parts 1 and 2, deal with how to bring infrastructure back online in a post-zombie-apocalypse world. As not much more was specified other than a number of users and a requirement that the infrastructure be defensible, we’re going to go off the deep end rather than go with a traditional setup.

This ‘design’ is set up to be phased in as various parts of the infrastructure are brought online.

Assumptions

Someone did something dumb, likely mixing nanotechnology and bioengineered genetic weapons. Whatever the case, to say there was fecal matter and that damned fan would be an understatement. Everything is offline. Everything. The power grid is in shambles; most of the power plants have become holdout pockets of zombies or have fallen into various stages of ‘meltdown’. Communications, which over the last several decades have come to depend largely on electronics, well, they’re turbo offline.

In both the Vegas and Anchorage holdouts, a few strong personalities have stepped forward. We used to call them preppers, but, well, now they’re the folks with solar panels and food stocks. There are also a number of first responders (firefighters, police, and what’s left of the National Guard) who have begun to reestablish some form of normal.

As the mission at this stage is to handle the immediate situation, it dictates the use of pen/pencil & paper to keep records.

It is also assumed that the zombie outbreak was middling to quick, allowing for some preparation and for folks to be notified to arrive at the shelter facilities with about 3-7 days’ worth of food & water, or enough time to secure the means for producing more of the same.

User Persona

There are two basic user personas we’re designing for:

  • First responders / Preppers
  • Survivors

First Responder / Prepper

In this user class we have the ‘helpers’. These are the folks with the guns, food, shelter, and some solar panels. As users of our new infrastructure, they require the ability to communicate with one another over middling-to-long distances to support both the day-to-day operations of the camp and the scouting missions and logistics.

Survivors

The survivors depend on our infrastructure both directly to receive news and to reach out to loved ones who may have survived.

Phase 1 - Food, Shelter, Support, Comms

Phase one begins with bringing the bases of operation online and providing basic services for those coming in and dealing with the outbreak.

Design Goals

Establish a base camp, bring basic services (food, etc) online, and support the communications and logistics missions to acquire further supplies.

Basic Design

This basic design is a rough outline of the first phase of this plan. That is, get a base-camp up to basic operational capacity: Shelter & Comms.

These, however, will be approached out of order: based on the assumptions above, survivors & helpers will have enough food & water on hand to bring comms back online first, further supporting the turn-up.

Note: Due to time constraints, we’ll not go into a full operational plan, contingencies, troubleshooting, and the rest that would accompany a full design. Any haste at this stage would result in the loss of life.

Need 1 - Comms

As we are dealing with first responders, their vehicles and persons are equipped with some form of radio communication. This, in addition to the solar panels, generators, or other power-producing equipment provided by the preppers, will support the first effort at comms.

In this case, radios were chosen as they are readily available, designed for adverse conditions, and relatively low power. Through the use of a relay network, that is, people stationed at 75% of the effective radio distance to relay the message, we are able to provide robust communication back to base camp to organize parties to salvage food and water supplies from what bits of civilization are now uninhabited. This communications network also has regular check-ins to protect against node failure and give early warning of zombie movements in the area.

This radio network has limited broadcast abilities due to power constraints. However, once daily, survivors are allowed to broadcast a prerecorded message to reach out to loved ones. Any messages received throughout the day are relayed by the network and radio operators to the survivor in question.

Need 2 - Shelter

Shelter! This is where the preppers shine, between personal shipping-container bunkers, extra tarps, and enough ammo to clear out an old shopping mall’s worth of zombies. More specifically, this design requires the preppers to take over the SuperNAP. While the existing comms infrastructure in the area isn’t hugely valuable at this stage, the size and design of the facility will allow for the shelter of people and recirculation of air, and will be easily defensible.

The design requires that the Alaska-based preppers take over the 5th Avenue Mall in Anchorage. Once cleared & secured, the building has access to nearby parks and greenways to provide additional food and water.

On Food and Water

It needs restating that the first responders and preppers bringing these facilities online will be able to procure supplies as they secure them, and, once the facilities are secure and comms are operational, provide the survivors the means to organize into parties and procure & secure further.

Phase 2 - Defense and Expansion

Oh. No. They. Didn’t. sqrt(-shit)^2.

That is, folks are several months into what has to be one of humanity’s biggest challenges to date, and now there are zombie apologists. We need to defend our infrastructure whilst bringing other survivor colonies online.

Design Goals

Defend. Expand.

Basic Design

This is broken up by component: comms, shelter, expansion.

Comms

First up, comms! It turns out packet radios and our distributed radio network are fairly robust. However, they are subject to ‘meat’ problems. That is, when the zombies meat you, comms are offline. To address this challenge with the limited technologist contingent on hand, the first order of business is training. Each technologist will be assigned a party of 2 ‘grunts’ and 2 guards. The technologist’s and the grunts’ job is twofold: 1) establish more powerful antennas and automated radio relay stations; and 2) train the grunts enough to allow them to fan out and perform the network upgrade autonomously.

There is another design challenge that needs to be addressed comms-wise: the long-distance comms between survivor outposts. The radio network is still power-constrained and, with the zombie apologists in action, also highly variable. To overcome this, we suggest the training of homing pigeons and a robust (n+1 birds), encoded messaging protocol.

Shelter

This is broken into two bits:

  • Alaska
  • Las Vegas

In Alaska, it is recommended that the encampment work to harden the 5th Avenue Mall. Specifically, on the 6th Avenue street side we recommend the destruction of the two over-road pedestrian bridges, and the sealing of all but the critical ingress/egress points with concrete or other available building material.

In Las Vegas, continue to employ the private security team (the dudes with M16s at the SuperNAP). The approaches to each building are already Tier IV Gold certified and fairly robust.

Expand

To expand to the next three survivor outposts, we suggest the following:

  • Salvage multiple trucks from the local area to include, as available:
    • School Bus (or charter bus)
    • Mitsubishi FUSO flatbed trucks

The school bus will carry a number of first responders and trained survivors to establish the next base camp. The flatbed cargo truck is to be loaded with supplies: gasoline, food, water, ammo, vehicle repair parts, radio equipment, and pigeons.

Each additional outpost will provide additional radio broadcast points, further hardening the radio network against failure and giving survivors a greater chance of reaching loved ones. Additionally, the increased robustness of the network will allow the first responders to better communicate and organize supplies, scouting sorties, and defense, and to broadcast zombie movement.

VMware & Vagrant Performance Hacks

I believe I’ve posted on this before; however, I think I deleted the gist that contained these bits of info.

Here are some Vagrantfile settings specific to the VMware desktop products that will help you eke out a bit more performance.

# VMware Fusion / Workstation performance hacks
# (place this inside your Vagrant.configure("2") do |config| block)
["vmware_fusion", "vmware_workstation"].each do |provider|
  config.vm.provider provider do |vmware, override|
    # Use NFS for the synced folder -- faster than the default
    override.vm.synced_folder ".", "/vagrant", type: "nfs"

    # Fusion / Workstation performance hacks
    vmware.vmx["logging"] = "FALSE"
    vmware.vmx["MemTrimRate"] = "0"
    vmware.vmx["MemAllowAutoScaleDown"] = "FALSE"
    vmware.vmx["mainMem.backing"] = "swap"
    vmware.vmx["sched.mem.pshare.enable"] = "FALSE"
    vmware.vmx["snapshot.disabled"] = "TRUE"
    vmware.vmx["isolation.tools.unity.disable"] = "TRUE"
    vmware.vmx["unity.allowCompostingInGuest"] = "FALSE"
    vmware.vmx["unity.enableLaunchMenu"] = "FALSE"
    vmware.vmx["unity.showBadges"] = "FALSE"
    vmware.vmx["unity.showBorders"] = "FALSE"
    vmware.vmx["unity.wasCapable"] = "FALSE"
    vmware.vmx["vhv.enable"] = "TRUE"
  end
end

The basics of what we’re doing are as follows:

  • Using NFS shared folders
  • Turning off logging
  • Tuning memory settings
  • Turning off unity
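Assuming the Vagrant VMware plugin is installed and licensed, bringing a box up against the VMware provider is then just:

vagrant up --provider=vmware_fusion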

Hope this saves you some time.