
Seti@home on Raspberry Pi with Kubernetes

In this post we take our Raspberry Pi cluster, deploy Kubernetes to it, and then use a Deployment to launch the BOINC client to churn seti@home data.

The cluster

Prep The Cluster

First things first, we need to flash all 8 nodes with the latest Hypriot image. We do this using their flash tool, a bash for loop, and some flash-card swapping:

for i in {1..8}; do flash --hostname node0$i https://github.com/hypriot/image-builder-rpi/releases/download/v1.1.3/hypriotos-rpi-v1.1.3.img.zip; done

Once you have the cards flashed, install them into your Pis and boot them up; we've got some more prep to do.

Copy SSH Keys

The first thing to do is enable key-based logins. You'll be prompted for the password each time. Password: hypriot

for i in {1..8}; do ssh-copy-id pirate@node0$i; done

Run updates

for i in {1..8}; do ssh pirate@node0$i -t 'sudo apt-get update -qq && sudo apt-get upgrade -qqy --force-yes'; done

Build the cluster

Here is where the fun starts. On each node, you’re going to want to install Kubernetes as described here.
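
That guide boils down to a kubeadm-based install. Roughly, and hedged (the apt repo and join syntax here are assumptions from memory, not copied from the guide), it looks like this on each node:

# Add the Kubernetes apt repo and install kubeadm (assumed repo/package names)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -qy kubeadm

# On the first node only, initialize the cluster:
sudo kubeadm init

# On the remaining nodes, join using the token kubeadm init printed:
sudo kubeadm join --token <token> <master-ip>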

Fire Up BOINC & Seti@Home

For this I used the Kubernetes dashboard, though the command line would work just as well.

Click create to launch the creation wizard. You’ll see something like this where you can provide a name, image, and number of pods. My settings are captured in the image:

k8s new deployment

Next, we need to open the advanced settings. This is where we specify the environment variables, again captured in the following image:

Environment variables

For reference these are:

BOINC_CONFIG_CONTENTS = "<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>your_authenticator_code_here_get_it_from_setiathome</authenticator>
</account>"

BOINC_CONFIG_FILENAME = account_setiathome.berkeley.edu.xml
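
If you would rather drive this from kubectl instead of the wizard, something like the following should create an equivalent deployment (a sketch; the image name and replica count are assumptions based on my setup):

kubectl run boinc --image=boinc --replicas=8 \
  --env="BOINC_CONFIG_FILENAME=account_setiathome.berkeley.edu.xml" \
  --env="BOINC_CONFIG_CONTENTS=<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>your_authenticator_code_here_get_it_from_setiathome</authenticator>
</account>"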

Finally, save & deploy it; this'll take a minute or two.

All done:

all done

Summary

In this post, you flashed a bunch of Raspberry Pis with Hypriot and built a Kubernetes cluster with them. From there you logged into the dashboard and deployed seti@home.


My First Django App

Having had some time over the winter break, I took the opportunity to watch the excellent Django webcast on O'Reilly's Safari. What follows are the commands from said webcast used to get started with an app:

Setting up a virtual environment

My first instinct for keeping the development environment separate from my working world was to fire up a VM and go with it. Turns out, in Python you can work locally without much fear of breaking your box. You do this with virtualenvs, and you manage those with virtualenvwrapper.

Note: One can use virtual environments without virtualenvwrapper. virtualenvwrapper made things a bit easier for me.

Install virtualenvwrapper on OSX

For this, I assume you have a working Homebrew:

brew update
brew install pyenv-virtualenvwrapper

Install virtualenvwrapper on Ubuntu 16.04

Thankfully, it’s a happy little apt-package for us here:

sudo apt-get update
sudo apt-get install virtualenvwrapper

Configuring virtualenvwrapper

Now that you have it installed on your system, the following ~/.bashrc settings set up some specific behaviors in virtualenvwrapper. These work on both OSX and Ubuntu:

echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
echo "export PROJECT_HOME=$HOME/projects" >> ~/.bashrc

source ~/.bashrc

The first line tells virtualenvwrapper where you would like it to store all of the files (python, pip, and all their parts) for each virtualenv. The second line tells virtualenvwrapper where your code lives. Finally, we pull said values into our working bash shell.

Create and enter your virtual env

Now that’s all sorted, let’s make a virtual environment to work on:

mkvirtualenv -p /usr/bin/python3 newProject

Breaking this down, the -p /usr/bin/python3 tells virtualenv to use Python 3 in our virtualenv. The name newProject is, well, the new project name. This command will produce output like the following:

$ mkvirtualenv -p /usr/bin/python3 newProject
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in newProject/bin/python3
Also creating executable in newProject/bin/python
Installing setuptools, pip...done.

To enter your virtual environment and start working on things:

$ cd ~/projects/
$ mkdir newProject
$ cd newProject/
$ workon newProject
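
When you are done working, dropping back to your system Python is one command (standard virtualenv tooling):

$ deactivate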

Installing and Getting Started with Django

Ok, so that was a lot of setup to get to this point, but here we are: it's time to install Django, create the structure of our application, and finally start Django's built-in webserver to make sure it is all working.

To install Django inside your virtual environment:

$ pip install django
Downloading/unpacking django
  Downloading Django-1.10.5-py2.py3-none-any.whl (6.8MB): 6.8MB downloaded
Installing collected packages: django
Successfully installed django
Cleaning up...

Now let’s install the skeleton of our app:

django-admin startproject newProject

This will create a directory structure like this:

$ tree
.
└── newProject
    ├── manage.py
    └── newProject
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

2 directories, 5 files

Next up, we will want to fire up Django's built-in server and validate our install:

$ cd newProject
$ python manage.py migrate

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK

$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
January 07, 2017 - 23:52:17
Django version 1.10.5, using settings 'newProject.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Taking a look at what we did there:

  • manage.py migrate - performs the initial migration & population of a sqlite database so our shell application will work.
  • manage.py runserver - starts Django's built-in webserver

While there is a lot more to actually writing a Django app, this is essentially how you get started on a net-new application.

Summary

In this post we installed virtualenvwrapper and used it to create a new virtual environment. We then installed Django, performed an initial database migration, and ran Django's built-in server so we could browse to and test the shell application.

Some MS Resources

After a conversation with a friend who has a relative recently diagnosed, I felt compelled to share a bunch of the same links with the greater readership both on the blog and via Twitter. Mayhaps you’ve already seen the tweet storm, but I thought I’d collect the resources here a bit longer form.

I have taken the time to read and vet each one of these resources. Meaning I have spent some time distilling a lot of the crap out there into this list. If you’re newly diagnosed, know someone who was, or want to come up to speed fast, start here.

Note: This list is only a starting point. If you have a resource you've found useful, please drop me a note at bunchc@gmail.com or @cody_bunch on Twitter.

Diet

I start with diet rather than disease, because it was the single thing we did that has had the greatest impact. Taking the approach that you or someone you know may be newly diagnosed, I know you have questions, but trust me, start here.

The one thing that has had the single biggest impact for us on our MS journey has been Terry Wahls. Mind, we’re super inconsistent in the application of the Wahls diet(s), and still have had great effect.

  • Terry Wahls TEDx
  • Terry Wahls - Minding My Mitochondria. Terry Wahls has two books out. This was the first, and it reads more like a research scientist than a new-age treatment. Read this after watching the TEDx.
  • Terry Wahls - Wahls Protocol. This one reads a bit more turbo Paleo hipster. Still totally worth it: between the first book and this one, a number of actual scientific studies have been done on variants of the Wahls protocol, to good effect. This book reports those findings.

Disease

Next up, some information on the disease itself. These links can be dry / more scientific reading. If you’re comfortable with that, they will bring you up to date with where MS related research and knowledge are as of 2015/2016.

  • For Nurses and Nurse Practitioners (MS Society) Read all of them. Start with the quick reference and follow that up with the handbook (3 & 4).
  • National MS Society Resource page This page helped us not only find a doctor who understands MS, but also coordinate a transition from FL to TX back in the day. They have links to local support groups, education, and more.
  • MS Society Research page Here you will find links to both how the MS Society spends its money on research and the research studies themselves. If you've heard of a treatment, maybe Dr. Oz recommended a thing, read about the research behind it here.
  • A comprehensive research review It's getting a bit older (a 2014 book), but research takes time, and it's still a good primer. It won't hold your hand, however. Be prepared for science.

The Human Side

The disease, like others, has a human component. Below are two of the blogs that have helped me figure out what that means for other folks and incorporate some of that into our family.

Summary

This post, like the twitter rant before it, isn’t to garner sy/empathy. Rather, it’s to help others who find themselves in the same boat. You aren’t alone, and if you’d like, feel free to reach out to chat.

Seti@Home with Docker on Raspberry Pi

A little while back, Tim from CERN left a comment about my HomeLab; specifically, he wanted me to run LHC@Home workloads. While I promise those are coming, they do not currently support ARM CPUs. There is an @home citizen-science project that does, however: Seti. You know, the one that started it all.

Aliens

To get started with this on Docker on the rPi, you need to do a few things.

First, start the container. Don't worry about any of the values the first time around; we're only getting it going enough to grab our authenticator code:

docker run -d -t --env BOINC_CONFIG_CONTENTS="<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>your_authenticator_code_here_get_it_from_setiathome</authenticator>
</account>" --env BOINC_CONFIG_FILENAME=account_setiathome.berkeley.edu.xml -i boinc

Next attach to the container and get your authenticator code:

docker exec -i -t <id of container> /bin/bash
boinccmd --lookup_account http://setiathome.berkeley.edu <your_email> <your_password>

Copy the authenticator code it gives you, and kill the container:

docker kill <id of container>

Finally, restart it with the correct auth code. Science!

docker run -d -t --env BOINC_CONFIG_CONTENTS="<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>your_authenticator_code_here_get_it_from_setiathome</authenticator>
</account>" --env BOINC_CONFIG_FILENAME=account_setiathome.berkeley.edu.xml -i boinc

Once done, you can fire up htop and confirm all your CPUs have maxed out.


Git at home with Hypriot & docker-compose

The folks at Hypriot put out a post recently on running your own git server using Gogs in Docker on the Raspberry Pi. While this was a great start, I wanted to mix and match this with their DockerUI container.

Thing is, I didn't want to start them each separately using a mix of docker commands. To solve this, I set up the following in my docker-compose.yml:

gogs:
  image: hypriot/rpi-gogs-raspbian
  restart: always
  volumes:
    - '/home/pirate/gogs-data/:/data'
  expose:
    - 22
    - 3000
  ports:
    - '8022:22'
    - '3000:3000'


dockerui:
  image: hypriot/rpi-dockerui
  restart: always
  volumes:
    - '/var/run/docker.sock:/var/run/docker.sock'
  expose:
    - 9000
  ports:
    - '80:9000'

Then fired up the environment using:

docker-compose up -d

And there you go. DockerUI and Gogs for local git repos.

Useful bash functions for virsh/kvm

When working on getting my first VM on KVM started, there were some things that were missing or non-obvious to a newbie like me.

That is, listing the IP addresses assigned to a VM, or 'dom', required some digging. (Note: this is addressed in newer releases of libvirt.) To that end, and after a lot of time googling, I came across this gist that provides a lot of useful functions.

Specific to the IP address bit, it gives you virt-addr.
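
The gist is worth reading in full, but the core of that function is simple enough to sketch (this is my approximation, not the gist verbatim): pull the domain's MAC addresses out of its XML definition, then look each one up in the host's ARP table.

virt-addr () {
    # Accepts a domain ID or name, same as virsh itself
    local dom="$1"
    local mac
    for mac in $(virsh dumpxml "$dom" | awk -F"'" '/mac address/ {print $2}'); do
        # Print any IPs the host has seen for that MAC
        arp -an | awk -v m="$mac" '$4 == m {gsub(/[()]/, "", $2); print $2}'
    done
}

With the functions loaded, usage looks like this: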

## List all our VMs
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     logging1.openstackci.local     running
 3     network2.openstackci.local     running
 4     network1.openstackci.local     running
 5     infra1.openstackci.local       running
 6     infra2.openstackci.local       running
 7     infra3.openstackci.local       running
 8     compute1.openstackci.local     running
 9     compute2.openstackci.local     running
 10    cinder2.openstackci.local      running
 11    cinder1.openstackci.local      running
 12    swift1.openstackci.local       running
 13    swift3.openstackci.local       running
 14    swift2.openstackci.local       running

## List all IPs for ID 5
# virt-addr 5
10.0.0.17
172.29.236.100
10.0.0.100
10.0.0.4

## Works by name too
# virt-addr infra1.openstackci.local
10.0.0.17
172.29.236.100
10.0.0.100
10.0.0.4

Configuring Hierarchical Token Bucket (QoS) on Ubuntu 16.04

Hierarchical Token Bucket, or HTB, is one of several ways to perform QoS on Linux. This post covers the HTB concept at a very (very) high level, and provides some example configurations for both a traffic guarantee and a traffic limit.

Linux Traffic Control

As I've recently discovered, Linux has an extremely robust network traffic management system. This goes well beyond network namespaces and iptables rules and into the world of QoS. Network QoS on Linux is handled by the 'traffic control' subsystem.

TC, or Traffic Control, refers to the entirety of queue management for a given network. For a good overview of TC, start here.

Hierarchical Token Bucket

The best explanation of Hierarchical Token Bucket, or HTB, I've seen so far is to imagine a fixed-size bucket into which traffic flows. This bucket can only drain network traffic at a given rate. Each token, or packet, is checked against a defined hierarchy and released from the bucket accordingly.

This allows the system administrator to define both minimum guarantees for a traffic classification as well as limits for the same. Our configuration below explores this on two Ubuntu 16.04 VMs.
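
To make the guarantee/limit distinction concrete before we dive in, here is a minimal sketch (the interface and rates are arbitrary): rate is a class's guaranteed floor, while ceil caps how much it may borrow when the parent has spare bandwidth.

# Root qdisc, with a 10Mbit parent class
tc qdisc add dev eth0 root handle 1: htb
tc class add dev eth0 parent 1: classid 1:1 htb rate 10Mbit
# Child class: guaranteed 2Mbit, allowed to borrow up to 8Mbit when idle
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 2Mbit ceil 8Mbit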

Configuration

Configuring HTB has 3 basic steps:

  • Change the interface queuing to HTB
  • Define traffic classes
  • Configure filters to map traffic to its class

The following example configures eth2 to use HTB, defines a class of traffic to be shaped to 1Mbit, and then uses tc filters to match all packets with a source or destination port of 8000 and map them to that class.

sudo su - 
# Change the queuing for eth2 to HTB
tc qdisc add dev eth2 root handle 1: htb

# Create a traffic class
tc class add dev eth2 parent 1: classid 1:8000 htb rate 1Mbit

# Create traffic filters to match the class
tc filter add dev eth2 protocol ip parent 1: prio 1 u32 \
    match ip dport 8000 0xffff flowid 1:8000

tc filter add dev eth2 protocol ip parent 1: prio 1 u32 \
    match ip sport 8000 0xffff flowid 1:8000

The commands above will produce no output if successful. If you would like to confirm your changes, the following commands will help:

## Report the queuing defined for the interface
$ tc qdisc show dev eth2

qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 21 direct_qlen 1000

## Report the class definitions
$ tc class show dev eth2

class htb 1:8000 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b

## Report the filters configured for a given device
$ tc filter show dev eth2

filter parent 1: protocol ip pref 1 u32
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:80
  match 00001f40/0000ffff at 20
filter parent 1: protocol ip pref 1 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:80
  match 1f400000/ffff0000 at 20

As a sanity check on that output, 00001f40 is port 8000 in hex, and the two filters match it against the destination and source port halves of the 32-bit word at offset 20 of the IP packet, which is where the TCP ports live. With these configured, it is now time to test our configuration…

Testing HTB

For our test scenario, we are going to use two virtual machines on the same L2 network, something like this:

         +-------------------------------------+
         | vSwitch                             |
         +-------------------------------------+
         eth1 |                       | eth2
         +--------------+       +--------------+
         | server01     |       | server02     |
         | 172.16.1.101 |       | 172.16.2.101 |
         +--------------+       +--------------+

First, you’ll need to install iperf on both nodes:

sudo apt-get install -qy iperf

Then on server02, start iperf in server mode listening on port 8000 as we specified in our filter:

sudo iperf -s -p 8000

On server01, we then run the iperf client. We tell it to connect to server02 (-c), run the test for 5 minutes (-t 300), bidirectionally (-d), on port 8000 (-p), and report statistics at a 5-second interval (-i):

sudo iperf -c 172.16.2.101 -t 300 -d -p 8000 -i 5

If you have configured things correctly you will see output like the following:

[  5] 45.0-50.0 sec   379 KBytes   621 Kbits/sec
[  3] 45.0-50.0 sec   768 KBytes  1.26 Mbits/sec
[  5] 50.0-55.0 sec   352 KBytes   577 Kbits/sec

With that, you have configured and tested your first HTB traffic filter.

Summary

In this post there was a lot going on. The tl;dr: Linux supports network QoS through a mechanism called Traffic Control, or TC. HTB is one of many techniques you can use to control said traffic. We also explored how to configure and test an HTB traffic queue.


KVM and OVS on Ubuntu 16.04

Lately I’ve been digging a bit deeper into what actually goes on behind the scenes when it comes to cloudy things. Specifically, I wanted to get a virtual machine booted with KVM and connected to the network using OVS. This was made difficult as the posts describing how to do this are either out of date or make knowledge assumptions that could throw a beginner off.

What follows, then, are my notes on how I am currently performing this on Ubuntu 16.04 with OVS.

Install KVM

First up, we install KVM:

# Update and install the needed packages
PACKAGES="qemu-kvm libvirt-bin bridge-utils virtinst"
sudo apt-get update
sudo apt-get dist-upgrade -qy

sudo apt-get install -qy ${PACKAGES}

# add our current user to the right groups
sudo adduser `id -un` libvirtd
sudo adduser `id -un` kvm

The above code block first defines the list of packages we need. Next, it updates your Ubuntu package cache and upgrades the installed system. Then it installs the packages needed to operate KVM: qemu-kvm, libvirt-bin, bridge-utils, and virtinst. In order, these packages handle virtualization, management, networking, and ease of use.

The final two commands add our current user to the groups needed to operate KVM.

Install OVS

Next up, we install and configure OVS for use with KVM.

First we need to remove the default network to prevent conflicts down the road.

sudo virsh net-destroy default
sudo virsh net-autostart --disable default

Next up, we install the OVS packages and start the service:

sudo apt-get install -qy openvswitch-switch openvswitch-common
sudo service openvswitch-switch start

Now that OVS is installed and started, it is time to start configuring things. The code block below:

  • enables IPv4 forwarding
  • creates an OVS bridge
  • adds eth2 to the bridge

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth2
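
To confirm the bridge and port took, you can ask OVS to report its layout:

sudo ovs-vsctl show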

Our last task then, is to reconfigure the network interfaces, moving the IP that used to reside on eth2 to br0. To do that, your /etc/network/interfaces file should look similar to this:

Note: What follows only contains the two sections of the interfaces file we need for this example.

# Set eth2 for bridging
auto eth2
iface eth2 inet manual

# The OVS bridge interface
auto br0
iface br0 inet static
  address 172.16.2.101
  netmask 255.255.255.0
  bridge_ports eth2
  bridge_fd 9
  bridge_hello 2
  bridge_maxage 12
  bridge_stp off

Finally, restart networking:

sudo ifdown --force -a && sudo ifup --force -a

Booting your first VM

Now that we have OVS and KVM installed, it’s time to boot our first VM. We do that by using the virt-install command. The virt-install command itself has like, ALL of the parameters, and we’ll discuss what they do after the code block:

Note: You should have DHCP running somewhere on the network that you have bridged to.

sudo virt-install -n demo01 -r 256 --vcpus 1 \
    --location "http://us.archive.ubuntu.com/ubuntu/dists/xenial/main/installer-i386/" \
    --os-type linux \
    --os-variant ubuntu16.04 \
    --network bridge=br0,virtualport_type='ovs' \
    --graphics vnc \
    --hvm --virt-type kvm \
    --disk size=8,path=/var/lib/libvirt/images/test01.img \
    --noautoconsole \
    --extra-args 'console=ttyS0,115200n8 serial '

Like I said, a pretty involved command. We’ll break down the more interesting or non-obvious parameters:

  • -r is memory in MB
  • --location is the URL that contains the initrd image to boot Linux
  • --os-variant - for this one, consult the man page for virt-install to get the right name.
  • --network - This one took me the longest to sort out. That is, bridge=br0 was straightforward, but knowing to set virtualport_type to 'ovs' took looking at the man page more times than I would like to admit.
  • --hvm and --virt-type kvm - These values tell virt-install to create a VM that will run on a hypervisor, and that the hypervisor of choice is KVM.
  • --disk - the two values here, size and path, are straightforward. Disk size is in GB, and path is where it should create the backing file.

Once you execute this command, virt-install will create an XML definition for the VM, boot it, and attempt to install an operating system. From there, you can attach to the VM's console and finish the installation.
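
If you are curious what that generated definition looks like, libvirt will dump it back out for you:

sudo virsh dumpxml demo01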

Attaching to the console

When creating the VM we supplied --extra-args 'console=ttyS0,115200n8 serial'. This tells virt-install to pass said parameters to the VM's boot sequence, which supplies us with a serial console.

To attach to the console and finish our installation we need to first get the ID of the VM we’re working with, then attach to it. You get the VM ID by running virsh list --all:

$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     demo01                         running

To attach to the console, run sudo virsh console 1, which should bring up the Ubuntu installer:


  ┌──────────────────────┤ [!!] Select your location ├──────────────────────┐
  │                                                                         │
  │ The selected location will be used to set your time zone and also for   │
  │ example to help select the system locale. Normally this should be the   │
  │ country where you live.                                                 │
  │                                                                         │
  │ This is a shortlist of locations based on the language you selected.    │
  │ Choose "other" if your location is not listed.                          │
  │                                                                         │
  │ Country, territory or area:                                             │
  │                                                                         │
  │                         Nigeria                                         │
  │                         Philippines          ▒                          │
  │                         Singapore            ▒                          │
  │                         South Africa                                    │
  │                         United Kingdom       ▒                          │
  │                         United States                                   │
  │                                                                         │
  │     <Go Back>                                                           │
  │                                                                         │
  └─────────────────────────────────────────────────────────────────────────┘

<Tab> moves; <Space> selects; <Enter> activates buttons

Resources

There were so, so many blogs used as a starting point for this that I'm sure I've missed one or two, but here we go:

Easy Conference Tunneling - OSX, Sidestep, and Cloud Servers

VMworld 2016 is upon us. Or was, at the time of this writing. That doesn't change the message, however: when you are traveling for work, play, or otherwise, who knows who else is on the WiFi with you, who is snooping your traffic, and so on.

In this post, we'll cover setting up a cloud server, SSH keys, and Sidestep to provide you with traffic tunneling and encryption from wherever you are.

Assumptions:

  • An account with some cloud provider
  • A recent version of OSX

Set up the Cloud Server

The instructions for this will vary some depending on the provider you use. What you are looking for, however, is some flavor of Ubuntu 14.04 or higher. From there, apply this to provide a basic level of hardening.

You can either copy / paste it in as user data, or run it line by line (or as a script on the remote host).
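
If that script ever disappears, the idea behind it is simple; a minimal sketch (my rough approximation, not the linked script) looks like:

#!/bin/bash
# Patch the system, enable unattended security updates,
# and firewall off everything except SSH.
apt-get update && apt-get -y dist-upgrade
apt-get install -y unattended-upgrades ufw
ufw allow 22/tcp
ufw --force enable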

Set up SSH Keys

First, let's check to see if you have existing ssh keys:

ls -al ~/.ssh

You are looking for one of the following:

id_rsa.pub
id_dsa.pub
id_ecdsa.pub
id_ed25519.pub

Not there? Want a new one for this task? Let’s make one. From the terminal:

ssh-keygen -t rsa -b 4096

When prompted, just give it all the default answers. (Yes, yes, passwordless keys are the devil, but we're only using this key, for this server, for this conference, right?)

Next, we need to copy the new key over to your server:

ssh-copy-id user@your.cloudserver.com

Finally, use the key to log in to your cloud server and disable password logins:

ssh user@your.cloudserver.com

sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config

grep PasswordAuthentication /etc/ssh/sshd_config

sudo service ssh restart

So what we just did was:

  • Logged into the cloud server
  • Disabled password auth
  • Confirmed that we disabled password auth (that is, there is no # in front of the line and it reads 'no')
  • Restarted SSH to apply the setting

Sidestep

Sidestep is the glue that pulls all of this together. From their site:

When Sidestep detects you connecting to an unprotected wireless network, it automatically encrypts all of your Internet traffic and reroutes it through a secure connection to a server of your choosing, which acts as your Internet proxy. And it does all this in the background so that you don’t even notice it.
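
Under the hood, this is plain SSH dynamic port forwarding. You can sanity-check that your server is up to the job before pointing Sidestep at it (the port number here is arbitrary):

# -D opens a local SOCKS proxy on port 9999; -N skips running a remote command
ssh -D 9999 -N user@your.cloudserver.com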

So, first things first, download and install it the same as you would any other OSX package. Once installed, you will need to configure it.

First, set up the actual proxy host & click test:

Next, set sidestep up to work automagically:

Summary

In this post we showed you how to set up a budget tunneling solution for when you are out and about conferencing, or otherwise on a network you do not trust.

Revisiting BGP on Linux w/ Cumulus Topology Converter

A post or two ago, we explored setting up BGP to route between networks using Linux and Cumulus Quagga. As fun as this exercise was, I would not want to set it up this way each and every time I needed a lab. To that point, I haven't actually touched the homelab it was built on since then. Why? Because complicated.

Then enters Scott Lowe with Technology Short Take #70. At the bottom of the networking section, as if it weren't the coolest thing on the list, there is this gem:

"This looks neat—I need to try to find time to give it a spin."

So that’s what we’re doing!

Topology converter's job is to take a formatted text file that represents the lab you'd like built and generate a Vagrantfile that can then be spun up with all of the plumbing taken care of for you.

Getting Started

My lab setup is a 2012 15” Retina MacBook with 16GB RAM, plus the latest Vagrant (1.8.5) & VirtualBox (5.1.4). From there we start with the installation instructions found here. Additionally, you'll want to clone the repo: git clone https://github.com/CumulusNetworks/topology_converter && cd topology_converter

You’ll also want to install the vagrant-cumulus plugin: vagrant plugin install vagrant-cumulus

Remaking our BGP Lab

Now that we’ve got the tools installed, we need to create our topology. Remember, we used this last time:

Topology

In the language of topology converter, that looks like this:

graph dc1 {
  "spine01" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.10.100.1"]
  "spine02" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.20.100.1"]
  "spine03" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.30.100.1"]
  "leaf01" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.10.100.2"]
  "leaf02" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.20.100.2"]
  "leaf03" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.30.100.2"]
  "server01" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.10.100.3"]
  "server02" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.20.100.3"]
  "server03" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.30.100.3"]

  "leaf01":"swp40" -- "spine01":"swp1"
  "leaf02":"swp40" -- "spine02":"swp1"
  "leaf03":"swp40" -- "spine03":"swp1"

  "spine01":"swp31" -- "spine02":"swp31"
  "spine02":"swp32" -- "spine03":"swp32"
  "spine03":"swp33" -- "spine01":"swp33"

  "server01":"eth0" -- "leaf01":"swp1"
  "server02":"eth0" -- "leaf02":"swp1"
  "server03":"eth0" -- "leaf03":"swp1"
}

There are four sections to this file. The first is the node definitions: spine, leaf, and server (that is, routers, switches, and hosts). Each is given an OS to run and a management IP.

The next three sections define the connections between the nodes. In our case, interface 40 on each switch uplinks to its router, the routers each link to one another, and the hosts link to the switches.

Once you have this file saved as bgplab.dot (or similar), run the topology converter command (which produces a very verbose bit of output):

$ python ./topology_converter.py ./bgplab.dot

######################################
          Topology Converter
######################################
>> DEVICE: spine02
     code: CumulusCommunity/cumulus-vx
     memory: 512
     function: spine
     mgmt_ip: 172.20.100.1
     config: ./helper_scripts/config_switch.sh
     hostname: spine02
     version: 3.0.1
       LINK: swp1
               remote_device: leaf02
               mac: 44:38:39:00:00:04
               network: net2
               remote_interface: swp40
       LINK: swp31
               remote_device: spine01
               mac: 44:38:39:00:00:10
               network: net8
               remote_interface: swp31
       LINK: swp32
               remote_device: spine03
               mac: 44:38:39:00:00:0d
               network: net7
               remote_interface: swp32
...

Starting the lab

Ok, so what we've done so far: installed topology converter, wrote a topology file to represent our lab, and converted that to a Vagrantfile. We have one more pre-flight check to run before we can fire up the lab, and that is to make sure Vagrant recognizes what we're trying to do:

$ vagrant status
Current machine states:

spine02                   not created (vmware_fusion)
spine03                   not created (vmware_fusion)
spine01                   not created (vmware_fusion)
leaf02                    not created (vmware_fusion)
leaf03                    not created (vmware_fusion)
leaf01                    not created (vmware_fusion)
server01                  not created (vmware_fusion)
server03                  not created (vmware_fusion)
server02                  not created (vmware_fusion)

Looks good, excepting that vmware_fusion bit. Thankfully, that's an artifact of my local environment, and can be worked around by specifying --provider=virtualbox.
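
If you'd rather not pass that flag on every command, Vagrant also honors an environment variable:

export VAGRANT_DEFAULT_PROVIDER=virtualbox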

So, let's do that. Warning: this will take a WHILE.

vagrant up --provider=virtualbox


LOTS OF OUTPUT

$ vagrant status
Current machine states:

spine02                   running (virtualbox)
spine03                   running (virtualbox)
spine01                   running (virtualbox)
leaf02                    running (virtualbox)
leaf03                    running (virtualbox)
leaf01                    running (virtualbox)
server01                  running (virtualbox)
server03                  running (virtualbox)
server02                  running (virtualbox)

Accessing the lab

Ok, to recap one more time: we've installed topology converter, written a topology file, converted that to a Vagrant environment, and fired it up within VirtualBox. That's A LOT of work, so if you need a coffee, I understand.

While we've done all the work of creating the lab, we haven't configured anything yet. We'll leave that as an exercise for another post, but we will show you how to access a node in said lab. The most straightforward way is with vagrant ssh [nodename]:

vagrant ssh spine02

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
vagrant@spine02:~$ sudo su - cumulus

Summary

Long post is long. However, in this post, we have used Cumulus Topology converter to create a lab network topology with 3 routers, 3 switches, and 3 hosts that are now waiting for configuration.