Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

Live Blog - Cloud Security: Do you know where your workloads are running?

Session background can be found here.

Speaker: Raghu Yeluri, Intel Corporation

As an enterprise and/or a cloud service provider, you have to ensure that all regulatory requirements for workload and data sovereignty are met. You have to answer customer questions like:

  • Where is my workload running?
  • Are my workloads running in a compliant location?
  • How can I trust the integrity of the host servers on which my workloads are running?
  • Can you prove to me that my workloads and data have not violated policies?
  • How can I control, via policy, where my workload can and cannot migrate and run?

Live Blog

Geo tags / asset tags are set in a write-once area. They can be almost anything: user-provided names, GPS coordinates, actual asset tags, and, importantly, the certificate of attestation from the TPM.

Today we’re using the Glance image registry to set the launch properties / policies, e.g. “only runs in France”. The “Trust and Launch” scheduler filter runs last, against the list of servers that remain. It then runs a variant of Open Attestation to ask, “Which of these are trusted?” From there, the scheduler will deploy. This is all automatic; only setting the tags is manual. The same attestation happens during migrations as well.
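For context, launch policies of this sort ride along as Glance image properties that the trusted-launch scheduler filter reads at boot time. A rough sketch of what that looks like from the CLI, where the first property follows the stock Trusted Compute Pools convention and the geolocation property name is purely a hypothetical stand-in for whatever the demo used:

# Only allow this image to launch on attested ("trusted") hosts
glance image-update <image-id> --property trust:trusted_host=trusted

# Hypothetical location-policy property consumed by the boundary-control filter
glance image-update <image-id> --property tag:geolocation=FR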

This is how we enable boundary control. We added Horizon plugins to support launch policies, extended the Nova scheduler for location filtering, and extended Glance for policies. Finally, we provide a number of tools that work in conjunction (OAT, etc.). The end-to-end bits can be done entirely with OSS, however.
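On the Nova side, the stock TrustedFilter (Trusted Compute Pools) gives a feel for how the scheduler gets wired to an attestation server such as OAT. A minimal sketch, assuming crudini is installed and using placeholder values for the attestation host:

# Append TrustedFilter to the scheduler filter chain
crudini --set /etc/nova/nova.conf DEFAULT scheduler_default_filters \
    "RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,TrustedFilter"

# Point Nova at the attestation (OAT) server
crudini --set /etc/nova/nova.conf trusted_computing attestation_server oat.example.com
crudini --set /etc/nova/nova.conf trusted_computing attestation_port 8443
crudini --set /etc/nova/nova.conf trusted_computing attestation_auth_blob i-am-openstack

Restart nova-scheduler afterwards for the filter to take effect.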

This should be upstream in Kilo, however, we (Intel) provide scripts to make this work in Icehouse & Juno.

Live Demo - Lol WiFi!

There is no maximum number of tags that can be set. Things like GEO, PCI-DSS, etc. can be set, and these are then used to select servers.

Magic number of policies: 5. This came out of conversations with NIST and MSPs.

Looking forward:

  • Extend geo-tagging for volumes
  • Tenant-controlled encryption / decryption under controlled circumstances

Extend geo-tagging for volumes - Basically the above, but for Cinder. The scheduler is pluggable, so we should be able to make this happen. The assumption is an x86 storage host. This will be more difficult with traditional SAN/NAS, as those are not TXT-enabled… yet.

Paris Summit - Day 2

Day number 2, Day number 2! He touched the cloud!

#vBrownBag Day 2

Day number 2 had a number of awesome #vBrownBag tech talks, one can find that playlist here.

ZeroVM Sessions

We had a mini track going yesterday afternoon, starting at 3:40 with Carina C. Zona kicking it off with “Good, Fast, Cheap: Pick 3”. The 4:40 session right on its heels was a 90-minute hands-on workshop with ZeroCloud, or that sweet spot where ZeroVM meets OpenStack Swift. The recordings / slides for these are not yet available, however. If you would like to try your hand at the workshop, you should be able to with the info from here.

Other notes from Day 2

  • ZOMG The food in Paris.
  • ZOMG The desserts in Paris.
  • If the waiter tells you no, he means it.
  • Sausage that says «AAAAA» is not your friend.

OpenStack Cookbook - Now on Juno

With the summit this week, I feel it’s appropriate to announce that the scripts supporting the OpenStack Cookbook have been updated (and are mostly functional) for OpenStack Juno.

To get started with them, you’ll need either VMware Workstation/Fusion or Virtualbox, Vagrant, and Git.

git clone https://github.com/openstackcookbook/openstackcookbook -b juno
cd openstackcookbook
vagrant up

These scripts will be updated and expanded once again as we begin writing the third edition of the OpenStack Cookbook.

Paris Summit - Day 1

Hooray Paris!

Well, maybe. Having only seen the inside of the conference center and the lobby of the hotel next door, I didn’t get to do much summiting today. Rather, the day was spent prepping for tomorrow’s workshop.

I do hear that the #vBrownBags are doing extremely well with 19 individual sessions recorded (and streamed?) today, and a full schedule of 100-ish more this week.

Clone all the USB keys! - Method 3

Method 3, you ask? Well, sure, I wrote on this around the time of OSCON earlier this year. However, for the OpenStack Summit in Paris, we needed a larger axe. Thanks to Lars & Mr. Cloud Null, we managed to apply the appropriate amount of strategic violence:

cat ~/fuckshitup.sh
#!/bin/bash
# Exit on error and echo each command as it runs
set -e -v

# The 19 USB keys land on /dev/disk3 through /dev/disk21;
# `jot 19 3` prints those 19 disk numbers starting at 3.
for i in `jot 19 3`;
    do sudo diskutil erasedisk FAT32 PARIS /dev/disk${i}
done

# Repartition each key as MBR / FAT32 with the volume name PARIS
for i in `jot 19 3`;
    do sudo diskutil partitionDisk /dev/disk${i} MBR MS-DOS PARIS 0b
done

# rsync the master volume onto every key in parallel
for i in `jot 19 3`;
    do rsync --progress -az /Volumes/PARIS "/Volumes/PARIS $i/" &
done

First it formats all the disks, then it rsyncs all the things.
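One sanity check worth doing before turning the script loose: confirm which disk numbers the keys actually got (they won’t necessarily be 3 through 21 on your machine) and adjust the `jot` ranges to match.

# List attached disks and note the /dev/diskN numbers of the USB keys
diskutil list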

OpenStack Book Discounts During Summit

Starting now and running the week of the summit, the two best selling OpenStack books will be on super sale:

OpenStack Cookbook

Book : nieX72Mn7U

eBook: k1QxrwyMvD

Learning OpenStack Networking (Neutron)

Book : luyLRpSQ

eBook: IXQ1swn2

Paris Summit Preparation

As you might have guessed from the large number of ZeroVM posts recently, something was up. If you are going to be at the OpenStack Summit in Paris, there will be a ZeroVM workshop given by my coworkers Egle, Lars, and me.

This session will be a 90-minute, into-the-deep-end workshop on building applications to work with ZeroVM.

The materials, if you want to play along at home, can be found here:

Still missing are the slides and video, which I will add to the post after the show. See you in Paris.

Using Packer to Make Vagrant Boxes

Part of working on the 3rd edition of the OpenStack Cookbook required, among other things, a new release of Ubuntu that came pre-loaded with the Juno OpenStack packages. Not a problem! Except that there were no ready-made images on Vagrant Cloud. Enter Packer.

Packer?

What the deuce is packer? From the packer.io site:

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.

Said another way, it takes the ‘configuration as code’ idea and pushes it back into the golden image, to a degree. The idea is that you can shrink deployment time if, instead of starting with the most generic box and adding everything, you start somewhere in the middle: bake the common bits into a ‘golden’ image, then deploy on top of that.

The use-case for the OpenStack Cookbook? Well, we just needed a vanilla Ubuntu 14.10 box for Fusion and Virtualbox. As there are three of us on the project now, having a common starting point is ideal.

Our Packer Setup

I’ll leave installing Packer as an exercise for the reader. The docs are pretty good in that regard.
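(If you happen to be on OS X with Homebrew, something like the following should do it; otherwise grab the zip from packer.io and put the binary somewhere in your PATH.)

# Install Packer and confirm it is on the PATH
brew install packer
packer version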

For our build, we use the following packer.json:

{
  "builders": [
    {"type": "virtualbox-iso",
    "guest_os_type": "Ubuntu_64",
    "iso_url": "http://mirror.anl.gov/pub/ubuntu-iso/CDs/utopic/ubuntu-14.10-server-amd64.iso",
    "iso_checksum": "91bd1cfba65417bfa04567e4f64b5c55",
    "iso_checksum_type":"md5",
    "http_directory": "preseed",
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "boot_wait":"5s",
    "output_directory": "ubuntu64_basebox_virtualbox",
    "shutdown_command": "echo 'shutdown -P now' > shutdown.sh; echo 'vagrant'|sudo -S sh 'shutdown.sh'",
    "boot_command": [
      "<esc><esc><enter><wait>",
      "/install/vmlinuz noapic ",
      "ks=http://:/preseed.cfg ",
      "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
      "hostname= ",
      "fb=false debconf/frontend=noninteractive ",
      "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
      "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
      "initrd=/install/initrd.gz -- ",
      "<enter>"]
    },
    {"type": "vmware-iso",
    "guest_os_type": "ubuntu-64",
    "iso_url": "http://mirror.anl.gov/pub/ubuntu-iso/CDs/utopic/ubuntu-14.10-server-amd64.iso",
    "iso_checksum": "91bd1cfba65417bfa04567e4f64b5c55",
    "iso_checksum_type":"md5",
    "http_directory": "preseed",
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "boot_wait":"5s",
    "output_directory": "ubuntu64_basebox_vmware",
    "shutdown_command": "echo 'shutdown -P now' > shutdown.sh; echo 'vagrant'|sudo -S sh 'shutdown.sh'",
    "boot_command": [
      "<esc><esc><enter><wait>",
      "/install/vmlinuz noapic ",
      "ks=http://:/preseed.cfg ",
      "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
      "hostname= ",
      "fb=false debconf/frontend=noninteractive ",
      "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
      "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
      "initrd=/install/initrd.gz -- ",
      "<enter>"]
    }],
  "provisioners": [{
    "type": "shell",
    "execute_command": "echo 'vagrant' | sudo -S sh ''",
    "inline": [
      "apt-get update -y",
      "apt-get install -y linux-headers-$(uname -r) build-essential dkms",
      "apt-get clean",
      "mount -o loop VBoxGuestAdditions.iso /media/cdrom",
      "sh /media/cdrom/VBoxLinuxAdditions.run",
      "umount /media/cdrom",
      "mkdir /home/vagrant/.ssh",
      "mkdir /root/.ssh",
      "wget -qO- https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub >> ~/.ssh/authorized_keys",
      "echo 'vagrant ALL=NOPASSWD:ALL' > /tmp/vagrant",
      "chmod 0440 /tmp/vagrant",
      "mv /tmp/vagrant /etc/sudoers.d/"
    ]}],
  "post-processors": [
  {
      "type": "vagrant",
      "override": {
        "virtualbox": {
          "output": "utopic-x64-virtualbox.box"
        },
        "vmware": {
          "output": "utopic-x64-vmware.box"
        }
      }
    }
  ]
}

This in turn calls a kickstart file (yes, kickstart on Ubuntu) stored in the preseed folder. It looks like this:

install
text

cdrom

lang en_US.UTF-8
keyboard us

network --device eth0 --bootproto dhcp

timezone --utc America/Chicago

zerombr
clearpart --all --initlabel
bootloader --location=mbr

part /boot --fstype=ext3 --size=256 --asprimary
part pv.01 --size=1024 --grow --asprimary
volgroup vg_root pv.01
logvol swap --fstype swap --name=lv_swap --vgname=vg_root --size=1024
logvol / --fstype=ext4 --name=lv_root --vgname=vg_root --size=1024 --grow

auth --enableshadow --enablemd5

# rootpw is vagrant
rootpw --iscrypted $1$dUDXSoA9$/bEOTiK9rmsVgccsYir8W0
user --disabled

firewall --disabled

skipx

reboot

%packages
ubuntu-minimal
openssh-server
openssh-client
wget
curl
git
man
vim
ntp

%post
apt-get update

apt-get upgrade -y linux-generic

update-grub

useradd -m -s /bin/bash vagrant
echo vagrant:vagrant | chpasswd

mkdir -m 0700 -p /home/vagrant/.ssh

curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> /home/vagrant/.ssh/authorized_keys

chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh

echo "vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

sed -i 's/^PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

apt-get clean

rm -rf /tmp/*

rm -f /var/log/wtmp /var/log/btmp

history -c

Once both files are in place and packer is installed, building the images is as simple as:

packer build packer.json

This will take a long time, but it does eventually get there, and will produce the two .box files named in the post-processors section of packer.json. From there, you can upload them somewhere and have Vagrant Cloud make them available, as we have under bunchc/utopic-x64.
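If you’d rather kick the tires locally before uploading anywhere, you can add the freshly built box straight to Vagrant (the box name here is just an example):

# Add the built box under a local name, then bring up a VM from it
vagrant box add bunchc/utopic-x64 utopic-x64-virtualbox.box
vagrant init bunchc/utopic-x64
vagrant up --provider virtualbox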

Summary

In this post we looked at how to use packer.io to build Vagrant boxes for both VMware Workstation / Fusion and Virtualbox.

Changing Jenkins Home Folder

Everything Jenkins does happens in the context of the jenkins user by default. While that sounds obvious, it has some interesting implications if you aren’t careful.

The Problem

In my homelab, I’ve installed 14.04 onto a 16GB USB key. This is a holdover from the days when it was an ESXi lab, but it works for the most part. However, for things like downloading Vagrant boxes and the like, one needs to mount external storage. Enter the problem: Jenkins sets itself up in /var/lib/jenkins by default, which lives on my very limited USB stick.

This in turn meant that anytime Jenkins tried to ‘vagrant up’ a thing, it would fill my root partition with all the temporary files Vagrant uses. Whoops.

Changing the Jenkins Home Directory

There are a number of posts out there on how to do this. None of them, however, addressed the vagrant issue in particular. To do that, one has to hit it with a larger bat.

First Change /etc/default/jenkins

The first thing to do is change how Jenkins thinks of itself. The most straightforward way I found was to change that in /etc/default/jenkins. This will be around line 23 or 25:

# jenkins home location
JENKINS_HOME=/new/location

Then Change /etc/passwd

This is the step that was needed to fix the problem of Jenkins running vagrant and vagrant subsequently filling /:

Change: jenkins:x:105:114:Jenkins,,,:/var/lib/jenkins:/bin/bash

To this: jenkins:x:105:114:Jenkins,,,:/new/location:/bin/bash
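Before restarting, it’s also worth moving the existing Jenkins data to the new home and fixing ownership, otherwise Jenkins comes up empty. A rough sketch, with /new/location standing in for wherever your bigger disk lives:

# Stop Jenkins, copy the old home over, and hand it to the jenkins user
sudo service jenkins stop
sudo rsync -a /var/lib/jenkins/ /new/location/
sudo chown -R jenkins:jenkins /new/location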

Finally Restart Jenkins

Now to make sure all the things stick:

sudo /etc/init.d/jenkins restart
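To double check that the change took, the jenkins user’s home directory (the sixth field of its passwd entry) should now show the new location:

# Print the jenkins user's home directory from /etc/passwd
getent passwd jenkins | cut -d: -f6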

Summary

In this post we showed you how to change the Jenkins home directory, and why it’s also important to change it in /etc/passwd.

ZeroVM - A ZeroCloud Lab

At this point we’re pretty far down the ZeroVM rabbit hole. We’ve covered quite a bit of ground to get to this stage as well. Specific to this post, you will want to know more about ZeroCloud. You can do that here.

If you’re getting up to speed on ZeroVM, as I am, you will find the following useful:

Building The ZeroCloud Lab

There are a few ways of going about this. However, before we get started, realize that these are still early days; if you find this post a few months or a year or two down the road, it will likely be super out of date.

To get working with ZeroCloud, the most straightforward path is to follow the directions in the ZeroVM docs here.

Once you’ve finished, you should have a functional ZeroCloud lab.

Experimenting in the ZeroCloud Lab

Having a lab up and running is one thing, but it’s a rather boring thing by itself. So, let’s do something with it!

ZeroCloud Hello World Example

The first thing we’ll try is a generic “Hello World” example, taken from here.

1. Log into the lab:

vagrant ssh

cd devstack/
./rejoin-stack.sh
source /vagrant/adminrc

zpm auth
export OS_AUTH_TOKEN=PKIZ_Zrz_Qa5NJm44FWeF7Wp...
export OS_STORAGE_URL=http://127.0.0.1:8080/v1/AUTH_7fbcd8784f8843a180cf187bbb12e49c

Note: You will need to change the AUTH_TOKEN and STORAGE_URL values to match those returned by zpm auth.
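If you’d rather not copy and paste, the same two values can usually be pulled straight from the Swift CLI, assuming python-swiftclient is installed and your credentials are already sourced:

# Grab the storage URL and a fresh token from `swift stat -v`
export OS_STORAGE_URL=$(swift stat -v | awk '/StorageURL:/ {print $2}')
export OS_AUTH_TOKEN=$(swift stat -v | awk '/Auth Token:/ {print $3}')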

2. Create our “Hello World” python file:

cat > example <<EOF
#!file://python2.7:python
import sys
print("Hello from ZeroVM!")
print("sys.platform is '%s'" % sys.platform)
EOF

3. Now we run the example:

curl -i -X POST -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  -H "X-Zerovm-Execute: 1.0" -H "Content-Type: application/python" \
  --data-binary @example $OS_STORAGE_URL

Ok, so what did we do then? In the first step we logged in to our lab, sourced a file that contains credentials for our environment, and then set some additional environment variables to store our auth token and Swift URL.

Next we created the hello-world example file. In the absence of a text editor, we just dumped everything between “<<EOF” and “EOF” into the file ‘example’. Note that the first line in the file begins with #!file://. This is important, as it lets the parser know what to do with said file; in this case, fire up Python.

Finally, we use curl against our Swift install, letting it know to jump out to the ZeroVM middleware and that our application type is python. Additionally, we specify our entire program as the payload via --data-binary.

Summary

In this post, we showed you how to build a ZeroCloud lab and run an example “Hello World” application within it. If you would like to explore some more complex applications, be sure to explore the various talks and videos here.