Cody Bunch - Some Random IT Guy - OpenStack, DevOps, Cloud, Things

Holiday Zen Moments

Apologies in advance if this is a bit more personal than technical. There is plenty more tech content coming, have no fear.

I ride because I ride

It’s the holidays WOOOOO! Well, maybe no seven O’s woo, but still, a good time nevertheless. On the Zen moments thing, about 6 years ago, my father told me this story, and designed the sticker you see above.

The story:

Five students of a Zen master were riding back from the market on their bicycles. As they dismounted, their master asked: “Why are you riding your bicycles?”

Each of them came up with a different answer to their master’s query.

The first student said: “It is the bicycle that is carrying the sack of potatoes. I am glad that my back has escaped the pain of bearing the weight.”

The master was glad and said: “You are a smart boy. When you become old you will be spared a hunched back, unlike me.”

The second student had a different answer: “I love to have my eyes over the trees and the sprawling fields as I go riding.”

The teacher commended: “You have your eyes open and you see the world.”

The third disciple came up with yet another answer: “When I ride, I am content to chant ‘nam myoho renge kyo’.”

The master spoke words of appreciation: “Your mind will roll with ease like a newly trued wheel.”

The fourth disciple said: “Riding my bicycle I live in perfect harmony of things.”

The pleased master said: “You are actually riding the golden path of non-harming, or non-violence.”

The fifth student said: “I ride my bicycle to ride my bicycle.”

The master walked up to him and sat at his feet and said: “I am your disciple!”

Having ridden a bicycle for a number of years, I have used it for various purposes and through various phases: weight loss, transportation, racing, harmony with nature, and so on. However, over the last several years, through varied events, dramas, and the like, I have learned that in cycling: “I ride because I ride.”

During the holidays, one can get caught up in the presents, people, dramas, and the ever-present exhaustion of, well, the holidays. Over the years, I’ve been in all of the above situations and then some. This year, like in cycling, I am trying to “Holidays because I Holidays”.

Regardless of how, what, or why you get together this season, try to take a moment, sit back, and enjoy it as much as you can.

If you also ride because you ride, and would like a sticker, either email me (bunchc at gmail) or ping me on Twitter and we can arrange something.

Live Blog - Keystone to Keystone Federation

Session details here.

Speakers:

  • Marek Denis - Research Fellow, CERN
  • Steve Martinelli - Software Developer, IBM
  • Joe Savak - Sr. Product Manager, Rackspace
  • Brad Topol - Distinguished Engineer, IBM

In this presentation, we describe the federated identity enhancements we have added to support Keystone to Keystone federation for enabling hybrid cloud functionality. We begin with an overview of key hybrid cloud use cases that have been identified by our stakeholders including those being encountered by OpenStack superuser CERN. We then discuss our SAML based approach for enabling Keystones to trust each other and provide authorization and role support for resources in hybrid cloud environments.

Live Blog

Lots of different folks are interested in identity federation: academia, companies, lots and lots of folks.

Use cases? - Easy to configure, cloud bursting, a central policy point, federating out, federating in. Keep the client small. No new protocols.

“Federate In” - You already have an identity provider (SAML, etc.). Folks already have SSO / identity. Federating in allows existing credentials to be used with OpenStack MSPs.

“Federate Out” - That is, you set up a trust between on-prem and off-prem clouds.

CERN’s Use Case

CERN has 70,000 cores, and they need more to process ALL the data they produce. This calls for federating out, which lets them hire additional pay-as-you-go resources as needed.

CERN also needs to be able to let folks federate in from others in the science community.

Now an interlude for Keystone classic Auth.

Federated identity in Icehouse - Integrate existing tools, SAML, etc. There is a diagram, and it has lots of arrows; the gist is you send SAML to Keystone, Keystone gives you a token, and things are good. This worked, but not as well as it could. There is a mapping engine, because groups in one system are not the same as groups in another. Woo mapping: “IBM Regular Employees” -> “regular_canada”, etc.
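
For reference, the mapping engine is driven by JSON rules that pair a “remote” assertion attribute with a “local” user or group. A minimal sketch, with a made-up group ID and attribute name purely for illustration:

# Hypothetical mapping: remote employees land in a local group.
cat > ibm_mapping.json <<'EOF'
{
  "rules": [
    {
      "local": [
        {"group": {"id": "regular_canada_group_id"}}
      ],
      "remote": [
        {"type": "orgPersonType", "any_one_of": ["IBM Regular Employees"]}
      ]
    }
  ]
}
EOF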

New diagram for federation in Juno. A lot more arrows. This time around, Keystone is the identity provider and will provide some level of attestation to the other Keystone in the trust relationship. Once the trust is in place, the user can pass a token to either one.

The SAML generator takes the token and works backwards: Token -> SAML Generator -> SAML Assertion.
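
Roughly, that exchange looks like the hedged curl sketch below. The endpoint comes from the OS-FEDERATION extension, but the exact request shape shifted between releases, and the hostname, token, and service-provider ID here are placeholders:

# Swap a scoped Keystone token for a SAML assertion aimed at the remote cloud.
curl -s -X POST http://keystone.example.com:5000/v3/auth/OS-FEDERATION/saml2 \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["token"], "token": {"id": "'"$OS_TOKEN"'"}},
                "scope": {"service_provider": {"id": "remote-keystone"}}}}'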

Now we’re at a slide covering all manner of config data. Important bits: mapping is still a thing, and you also need to ‘prime’ the SAML assertion pump.

keystone-manage now has a metadata generation command.
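
A hedged sketch of that step from the shell, assuming the [saml] section of keystone.conf (entity ID, SSO endpoint, signing cert and key) has already been filled in:

# Emit the identity provider's SAML metadata for the other Keystone to consume.
keystone-manage saml_idp_metadata > /etc/keystone/saml2_idp_metadata.xml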

Back to CERN: 2 datacenters, OpenStack Cells (Cells are not popular). 40k users in AD, and 12k more in ADFS (federation). CERN uses SAML2 and will be the first OpenStack deployment in the world to allow federating in, letting external entities consume their resources.

Patches to the OpenStack and Keystone clients.

Looking forward:

  • Auth-N
  • Horizon Integration
  • More & Better Mapping
  • Fine Grained ACLs
  • More protocols

Live Blog - Cloud Security: Do you know where your workloads are running?

Session background can be found here.

Speaker: Raghu Yeluri, Intel Corporation

As an enterprise and/or a cloud service provider, you have to ensure that all regulatory requirements for workload and data sovereignty are met. You have to answer questions from your customers like:

  • Where is my workload running?
  • Are my workloads running in a compliant location?
  • How can I trust the integrity of the host servers on which my workloads are running?
  • Can you prove to me that my workloads and data have not violated policies?
  • How can I control, via policy, where my workload can and cannot migrate and run?

Live Blog

Geo tags / asset tags are set in a write-once area. They can be almost anything: user-provided names, GPS coordinates, actual asset tags, and, importantly, the certificate of attestation from the TPM.

Today we’re using the Glance image registry to set the launch properties / policies, e.g. “only runs in France.” The “trust and launch” scheduler filter runs last, against the list of servers that remain. It then runs a variant of Open Attestation to ask, “Which of these are trusted?” From there, the scheduler deploys. This is all automatic; only setting the tags is manual. The same attestation happens during migrations as well.
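
As a hedged sketch of the Glance side, a launch policy is just image properties. The property keys below are illustrative stand-ins, since the real keys come from Intel's trusted-launch extensions rather than stock Glance:

# Tag an image so it should only launch on attested hosts in a given geo.
# Property names are hypothetical placeholders for this example.
glance image-update <image-id> \
  --property trusted_host=required \
  --property geo_policy=FR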

This is how we enable boundary control. We added Horizon plugins to support launch policies, extended the Nova scheduler for location filtering, and extended Glance for policies. Finally, we provide a number of tools that work in conjunction (OAT, etc.). The end-to-end bits can be done entirely with OSS, however.
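
For the stock pieces of that pipeline, here is a hedged sketch of the Nova side (Juno era). The TrustedFilter and the [trusted_computing] attestation settings ship with Nova; the geo/boundary filtering from the talk layers on top of them. The attestation hostname, port, and URL below are placeholders:

# Point Nova at an attestation (OAT) server, then add TrustedFilter to
# scheduler_default_filters in [DEFAULT] and restart nova-scheduler.
cat >> /etc/nova/nova.conf <<'EOF'
[trusted_computing]
attestation_server = oat.example.com
attestation_port = 8443
attestation_api_url = /AttestationService/resources
EOF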

This should be upstream in Kilo; in the meantime, we (Intel) provide scripts to make this work on Icehouse & Juno.

Live Demo - Lol WiFi!

There is no maximum number of tags that can be set; things like GEO, PCI-DSS, etc. can be set, and these are then used to select servers.

Magic number of policies: 5. This came out of conversations with NIST and MSPs.

Looking forward:

  • Extend geo-tagging for volumes
  • Tenant-controlled encryption / decryption under controlled circumstances

Extend geo-tagging for volumes - Basically the above, but for Cinder. The scheduler is pluggable, so we should be able to make this happen. The assumption is an x86 storage host. This will be more difficult with traditional SAN/NAS, which is not TXT-enabled… yet.

Paris Summit - Day 2

Day number 2, Day number 2! He touched the cloud!

#vBrownBag Day 2

Day number 2 had a number of awesome #vBrownBag tech talks; one can find that playlist here.

ZeroVM Sessions

We had a mini track going yesterday afternoon, starting at 3:40 with Carina C. Zona kicking it off with “Good, Fast, Cheap: Pick 3”. The 4:40 session right on its heels was a 90-minute hands-on workshop with ZeroCloud, or that sweet spot where ZeroVM meets OpenStack Swift. The recordings / slides for these are not yet available, however. If you would like to try your hand at the workshop, you should be able to with the info from here.

Other notes from Day 2

  • ZOMG The food in Paris.
  • ZOMG The desserts in Paris.
  • If the waiter tells you no, he means it.
  • Sausage that says «AAAAA» is not your friend.

OpenStack Cookbook - Now on Juno

With the summit this week, I feel it’s appropriate to announce that the scripts supporting the OpenStack Cookbook have been updated for (and are mostly functional on) OpenStack Juno.

To get started with them, you’ll need either VMware Workstation/Fusion or Virtualbox, Vagrant, and Git.

git clone https://github.com/openstackcookbook/openstackcookbook -b juno
cd openstackcookbook
vagrant up

These scripts will be updated and expanded once again as we begin writing the third edition of the OpenStack Cookbook.

Paris Summit - Day 1

Hooray Paris!

Well, maybe. Having only seen the inside of the conference center and the lobby of the hotel next door, I didn’t get to do much summiting today. Rather, the day was spent prepping for tomorrow’s workshop.

I do hear that the #vBrownBags are doing extremely well with 19 individual sessions recorded (and streamed?) today, and a full schedule of 100-ish more this week.

Clone all the USB keys! - Method 3

Method 3, you ask? Well, sure, I wrote on this around the time of OSCON earlier this year. However, for the OpenStack Summit in Paris, we needed a larger axe. Thanks to Lars & Mr. Cloud Null, we managed to apply the appropriate amount of strategic violence:

cat ~/fuckshitup.sh
#!/bin/bash
set -e -v

# The keys show up as /dev/disk3 through /dev/disk21; adjust the jot range to match.
# First pass: erase each key as a single FAT32 volume named PARIS.
for i in `jot 19 3`;
    do sudo diskutil eraseDisk FAT32 PARIS /dev/disk${i}
done

# Second pass: lay down an MBR / MS-DOS partition table (0b = use the whole disk).
for i in `jot 19 3`;
    do sudo diskutil partitionDisk /dev/disk${i} MBR MS-DOS PARIS 0b
done

# Third pass: copy the master key's contents onto each new volume, in parallel.
# (Trailing slash on the source so rsync copies contents, not the directory itself.)
for i in `jot 19 3`;
    do rsync --progress -az /Volumes/PARIS/ "/Volumes/PARIS $i/" &
done

First it formats all the disks, then it rsyncs all the things.
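
Before running it, it is worth confirming which /dev/disk numbers actually belong to the USB keys (the jot range above assumes disks 3 through 21):

# Eyeball the attached disks before pointing an eraser at them.
diskutil list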

OpenStack Book Discounts During Summit

Starting now and running through the week of the summit, the two best-selling OpenStack books will be on super sale:

OpenStack Cookbook

Book : nieX72Mn7U

eBook: k1QxrwyMvD

Learning OpenStack Networking (Neutron)

Book : luyLRpSQ

eBook: IXQ1swn2

Paris Summit Preparation

As you might have guessed from the large number of ZeroVM posts recently, something was up. If you are going to be at the OpenStack Summit in Paris, there will be a ZeroVM workshop given by my coworkers Egle, Lars, and me.

This session will be a 90-minute, into-the-deep-end workshop on building applications that work with ZeroVM.

The materials, if you want to play along at home, can be found here.

Still missing are the slides and video, which I will add to the post after the show. See you in Paris.

Using Packer to Make Vagrant Boxes

Working on the 3rd edition of the OpenStack Cookbook required, among other things, a new release of Ubuntu that came pre-loaded with the Juno OpenStack packages. Not a problem! Except that there were no ready-made images on Vagrant Cloud. Enter Packer.

Packer?

What the deuce is Packer? From the packer.io site:

Packer is a tool for creating identical machine images for multiple platforms from a single source configuration.

Said another way, it takes ‘configuration as code’ and pushes it back into the golden image, to a degree. The idea is that you can shrink deployment time if, instead of starting with the most generic box and adding everything, you start somewhere in the middle: bake the common things into a ‘golden’ image, then deploy on top of that.

The use case for the OpenStack Cookbook? Well, we just needed a vanilla Ubuntu 14.10 box for Fusion and VirtualBox. As there are three of us on the project now, having a common starting point is ideal.

Our Packer Setup

I’ll leave installing Packer as an exercise for the reader. The docs are pretty good in that regard.

For our build, we use the following packer.json:

{
  "builders": [
    {"type": "virtualbox-iso",
    "guest_os_type": "Ubuntu_64",
    "iso_url": "http://mirror.anl.gov/pub/ubuntu-iso/CDs/utopic/ubuntu-14.10-server-amd64.iso",
    "iso_checksum": "91bd1cfba65417bfa04567e4f64b5c55",
    "iso_checksum_type":"md5",
    "http_directory": "preseed",
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "boot_wait":"5s",
    "output_directory": "ubuntu64_basebox_virtualbox",
    "shutdown_command": "echo 'shutdown -P now' > shutdown.sh; echo 'vagrant'|sudo -S sh 'shutdown.sh'",
    "boot_command": [
      "<esc><esc><enter><wait>",
      "/install/vmlinuz noapic ",
      "ks=http://:/preseed.cfg ",
      "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
      "hostname= ",
      "fb=false debconf/frontend=noninteractive ",
      "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
      "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
      "initrd=/install/initrd.gz -- ",
      "<enter>"]
    },
    {"type": "vmware-iso",
    "guest_os_type": "ubuntu-64",
    "iso_url": "http://mirror.anl.gov/pub/ubuntu-iso/CDs/utopic/ubuntu-14.10-server-amd64.iso",
    "iso_checksum": "91bd1cfba65417bfa04567e4f64b5c55",
    "iso_checksum_type":"md5",
    "http_directory": "preseed",
    "ssh_username": "vagrant",
    "ssh_password": "vagrant",
    "boot_wait":"5s",
    "output_directory": "ubuntu64_basebox_vmware",
    "shutdown_command": "echo 'shutdown -P now' > shutdown.sh; echo 'vagrant'|sudo -S sh 'shutdown.sh'",
    "boot_command": [
      "<esc><esc><enter><wait>",
      "/install/vmlinuz noapic ",
      "ks=http://:/preseed.cfg ",
      "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
      "hostname= ",
      "fb=false debconf/frontend=noninteractive ",
      "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA ",
      "keyboard-configuration/variant=USA console-setup/ask_detect=false ",
      "initrd=/install/initrd.gz -- ",
      "<enter>"]
    }],
  "provisioners": [{
    "type": "shell",
    "execute_command": "echo 'vagrant' | sudo -S sh ''",
    "inline": [
      "apt-get update -y",
      "apt-get install -y linux-headers-$(uname -r) build-essential dkms",
      "apt-get clean",
      "mount -o loop VBoxGuestAdditions.iso /media/cdrom",
      "sh /media/cdrom/VBoxLinuxAdditions.run",
      "umount /media/cdrom",
      "mkdir /home/vagrant/.ssh",
      "mkdir /root/.ssh",
      "wget -qO- https://raw.github.com/mitchellh/vagrant/master/keys/vagrant.pub >> ~/.ssh/authorized_keys",
      "echo 'vagrant ALL=NOPASSWD:ALL' > /tmp/vagrant",
      "chmod 0440 /tmp/vagrant",
      "mv /tmp/vagrant /etc/sudoers.d/"
    ]}],
  "post-processors": [
  {
      "type": "vagrant",
      "override": {
        "virtualbox": {
          "output": "utopic-x64-virtualbox.box"
        },
        "vmware": {
          "output": "utopic-x64-vmware.box"
        }
      }
    }
  ]
}

This in turn calls a kickstart (yes, kickstart on Ubuntu) file stored in the preseed folder. It looks like this:

install
text

cdrom

lang en_US.UTF-8
keyboard us

network --device eth0 --bootproto dhcp

timezone --utc America/Chicago

zerombr
clearpart --all --initlabel
bootloader --location=mbr

part /boot --fstype=ext3 --size=256 --asprimary
part pv.01 --size=1024 --grow --asprimary
volgroup vg_root pv.01
logvol swap --fstype swap --name=lv_swap --vgname=vg_root --size=1024
logvol / --fstype=ext4 --name=lv_root --vgname=vg_root --size=1024 --grow

auth --enableshadow --enablemd5

# rootpw is vagrant
rootpw --iscrypted $1$dUDXSoA9$/bEOTiK9rmsVgccsYir8W0
user --disabled

firewall --disabled

skipx

reboot

%packages
ubuntu-minimal
openssh-server
openssh-client
wget
curl
git
man
vim
ntp

%post
apt-get update

apt-get upgrade -y linux-generic

update-grub

useradd -m -s /bin/bash vagrant
echo vagrant:vagrant | chpasswd

mkdir -m 0700 -p /home/vagrant/.ssh

curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> /home/vagrant/.ssh/authorized_keys

chmod 600 /home/vagrant/.ssh/authorized_keys
chown -R vagrant:vagrant /home/vagrant/.ssh

echo "vagrant ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

sed -i 's/^PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config

apt-get clean

rm -rf /tmp/*

rm -f /var/log/wtmp /var/log/btmp

history -c

Once both files are in place and packer is installed, building the images is as simple as:

packer build packer.json
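
If you want to catch template typos before committing to the long build, Packer also ships a validate subcommand:

packer validate packer.json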

This will take a long time, but it does eventually get there, and it will produce two .box files in the output folders you specified in packer.json. From there, you can upload them somewhere and have Vagrant Cloud make them available, as we have done under bunchc/utopic-x64.
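
Once the box is published on Vagrant Cloud, consuming it is the usual Vagrant workflow:

vagrant init bunchc/utopic-x64
vagrant up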

Summary

In this post we looked at how to use packer.io to build Vagrant boxes for both VMware Workstation / Fusion and VirtualBox.