
Configuring Hierarchical Token Bucket (QoS) on Ubuntu 16.04

Hierarchical Token Bucket, or HTB, is one of a number of ways to perform QoS on Linux. This post covers the HTB concept at a very (very) high level, and provides some example configurations that provide for both a traffic guarantee and a traffic limit.

Linux Traffic Control

As I’ve recently discovered, Linux has an extremely robust network traffic management system. This goes well beyond network namespaces and iptables rules and into the world of QoS. Network QoS on Linux is handled by the ‘traffic control’ subsystem.

TC, or Traffic Control, refers to the entirety of the queue management for a given network. For a good overview of TC, start here.

Hierarchical Token Bucket

The best explanation of Hierarchical Token Bucket, or HTB I’ve seen so far is to imagine a fixed size bucket into which traffic flows. This bucket can only drain network traffic at a given rate. Each token, or packet, is checked against a defined hierarchy, and released from the bucket accordingly.

This allows the system administrator to define both minimum guarantees for a traffic classification as well as limits for the same. Our configuration below explores this on two Ubuntu 16.04 VMs.
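To make the guarantee versus limit idea concrete, here is a minimal sketch (the interface name and numbers are illustrative, and it assumes an HTB root qdisc is already in place): rate is the bandwidth a class is guaranteed, while ceil is the most it may borrow up to when idle capacity exists.

# Guarantee this class 512kbit, but let it borrow up to 1mbit when bandwidth is idle
tc class add dev eth2 parent 1: classid 1:10 htb rate 512kbit ceil 1mbit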

Configuration

Configuring HTB has 3 basic steps:

  • Change the interface queuing to HTB
  • Define traffic classes
  • Configure filters to map traffic to its class

The following example configures eth2 to use HTB, defines a class of traffic to be shaped to 1Mbit, and then uses a u32 classifier to match all packets with a source or destination port of 8000 and map them to that class.

sudo su - 
# Change the queuing for eth2 to HTB
tc qdisc add dev eth2 root handle 1: htb

# Create a traffic class
tc class add dev eth2 parent 1: classid 1:8000 htb rate 1Mbit

# Create traffic filters to match the class
tc filter add dev eth2 protocol ip parent 1: prio 1 u32 \
    match ip dport 8000 0xffff flowid 1:8000

tc filter add dev eth2 protocol ip parent 1: prio 1 u32 \
    match ip sport 8000 0xffff flowid 1:8000

The commands above will produce no output if successful. If you would like to confirm your changes, the following commands will help:

## Report the queuing defined for the interface
$ tc qdisc show dev eth2

qdisc htb 1: root refcnt 2 r2q 10 default 0 direct_packets_stat 21 direct_qlen 1000

## Report the class definitions
$ tc class show dev eth2

class htb 1:8000 root prio 0 rate 1Mbit ceil 1Mbit burst 1600b cburst 1600b

## Report the filters configured for a given device
$ tc filter show dev eth2

filter parent 1: protocol ip pref 1 u32
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:80
  match 00001f40/0000ffff at 20
filter parent 1: protocol ip pref 1 u32 fh 800::801 order 2049 key ht 800 bkt 0 flowid 1:80
  match 1f400000/ffff0000 at 20
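Should you want to start over, deleting the root qdisc removes the classes and filters hanging off of it as well:

# Remove the HTB qdisc (and everything attached to it) from eth2
tc qdisc del dev eth2 root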

With these configured, it is now time to test our configuration…

Testing HTB

For our test scenario, we are going to use two virtual machines on the same L2 network, something like this:

         +-------------------------------------+
         | vSwitch                             |
         +-------------------------------------+
         eth1 |                       | eth2
         +--------------+       +--------------+
         | server01     |       | server02     |
         | 172.16.1.101 |       | 172.16.2.101 |
         +--------------+       +--------------+

First, you’ll need to install iperf on both nodes:

sudo apt-get install -qy iperf

Then on server02, start iperf in server mode listening on port 8000 as we specified in our filter:

sudo iperf -s -p 8000

On server01, we then run the iperf client. We specify to connect to server02, run the test for 5 minutes (-t 300), bidirectionally (-d), on port 8000 (-p), and report statistics at a 5-second interval (-i 5):

sudo iperf -c 172.16.2.101 -t 300 -d -p 8000 -i 5

If you have configured things correctly you will see output like the following:

[  5] 45.0-50.0 sec   379 KBytes   621 Kbits/sec
[  3] 45.0-50.0 sec   768 KBytes  1.26 Mbits/sec
[  5] 50.0-55.0 sec   352 KBytes   577 Kbits/sec
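You can also watch the shaping from the tc side; while the test runs, the per-class statistics counters should show bytes and packets accumulating (and overlimits once the class hits its ceiling):

# Show per-class statistics: bytes, packets, drops, overlimits
tc -s class show dev eth2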

With that, you have configured and tested your first HTB traffic filter.

Summary

In this post there was a lot going on. The tl;dr is, Linux supports network QoS through a mechanism called Traffic Control or TC. HTB is one of many techniques you can use to control said traffic. In this post, we also explored how to configure and test an HTB traffic queue.


KVM and OVS on Ubuntu 16.04

Lately I’ve been digging a bit deeper into what actually goes on behind the scenes when it comes to cloudy things. Specifically, I wanted to get a virtual machine booted with KVM and connected to the network using OVS. This was made difficult as the posts describing how to do this are either out of date or make knowledge assumptions that could throw a beginner off.

What follows, then, are my notes on how I am currently performing this on Ubuntu 16.04 with OVS.

Install KVM

First up, we install KVM:

# Update and install the needed packages
PACKAGES="qemu-kvm libvirt-bin bridge-utils virtinst"
sudo apt-get update
sudo apt-get dist-upgrade -qy

sudo apt-get install -qy ${PACKAGES}

# add our current user to the right groups
sudo adduser `id -un` libvirtd
sudo adduser `id -un` kvm

The above code block first defines a list of packages we need. Next, it updates your Ubuntu package cache and upgrades the installed system. Then it installs the packages needed to operate KVM: qemu-kvm, libvirt-bin, bridge-utils, and virtinst. In order, these packages handle virtualization, management, networking, and ease of use.

The final two commands add our current user to the groups needed to operate KVM.
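Before moving on, a quick sanity check that libvirt is up and answering doesn’t hurt (you may need to log out and back in for the new group memberships to take effect):

# An empty list is fine; an error here means libvirt isn't ready
virsh list --all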

Install OVS

Next up, we install and configure OVS for use with KVM.

First we need to remove the default network to prevent conflicts down the road.

sudo virsh net-destroy default
sudo virsh net-autostart --disable default

Next up, we install the OVS packages and start the service:

sudo apt-get install -qy openvswitch-switch openvswitch-common
sudo service openvswitch-switch start

Now that OVS is installed and started, it is time to start configuring things. The below code block:

  • enables IPv4 forwarding
  • creates an OVS bridge
  • adds eth2 to the bridge

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p /etc/sysctl.conf

sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 eth2
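At this point, ovs-vsctl show should list br0 with eth2 attached as a port:

# Display the OVS database: bridges and their ports
sudo ovs-vsctl show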

Our last task, then, is to reconfigure the network interfaces, moving the IP that used to reside on eth2 to br0. To do that, your /etc/network/interfaces file should look similar to this:

Note: What follows only contains the two sections of the interfaces file we need for this example.

# Set eth2 for bridging
auto eth2
iface eth2 inet manual

# The OVS bridge interface
auto br0
iface br0 inet static
  address 172.16.2.101
  netmask 255.255.255.0
  bridge_ports eth2
  bridge_fd 9
  bridge_hello 2
  bridge_maxage 12
  bridge_stp off

Finally, restart networking:

sudo ifdown --force -a && sudo ifup --force -a
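Assuming the interfaces file above, the address should now live on br0 rather than eth2; a quick check:

# 172.16.2.101 should now appear on br0
ip addr show br0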

Booting your first VM

Now that we have OVS and KVM installed, it’s time to boot our first VM. We do that by using the virt-install command. The virt-install command itself has like, ALL of the parameters, and we’ll discuss what they do after the code block:

Note: You should have DHCP running somewhere on the network that you have bridged to.

sudo virt-install -n demo01 -r 256 --vcpus 1 \
    --location "http://us.archive.ubuntu.com/ubuntu/dists/xenial/main/installer-i386/" \
    --os-type linux \
    --os-variant ubuntu16.04 \
    --network bridge=br0,virtualport_type='ovs' \
    --graphics vnc \
    --hvm --virt-type kvm \
    --disk size=8,path=/var/lib/libvirt/images/test01.img \
    --noautoconsole \
    --extra-args 'console=ttyS0,115200n8 serial '

Like I said, a pretty involved command. We’ll break down the more interesting or non-obvious parameters:

  • -r is memory in MB
  • --location is the URL that contains the initrd image to boot Linux
  • --os-variant - for this one, consult the man page for virt-install to get the right name.
  • --network - This one took me the longest to sort out. That is, bridge=br0 was straightforward, but knowing to set virtualport_type to 'ovs' took looking at the man page more times than I would like to admit.
  • --hvm and --virt-type kvm - These values tell virt-install to create a VM that will run on a hypervisor, and that the hypervisor of choice is KVM.
  • --disk - The two values here, size and path, are straightforward: the disk size in GB and the path where it should create the file.

Once you execute this command, virt-install will create an XML definition that represents the VM, boot it, and attempt to install an operating system. From there, you attach to the VM’s console to finish the installation.
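If you are curious what virt-install generated on your behalf, you can dump the domain definition it created:

# Print the libvirt XML definition for the new VM
sudo virsh dumpxml demo01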

Attaching to the console

When creating the VM we supplied --extra-args 'console=ttyS0,115200n8 serial '. This tells virt-install to supply said parameters to the VM’s boot sequence, which supplies us with a serial console.

To attach to the console and finish our installation we need to first get the ID of the VM we’re working with, then attach to it. You get the VM ID by running virsh list --all:

$ sudo virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     demo01                         running

To attach to the console, sudo virsh console 1 should bring up the Ubuntu installer:


  ┌──────────────────────┤ [!!] Select your location ├──────────────────────┐
  │                                                                         │
  │ The selected location will be used to set your time zone and also for   │
  │ example to help select the system locale. Normally this should be the   │
  │ country where you live.                                                 │
  │                                                                         │
  │ This is a shortlist of locations based on the language you selected.    │
  │ Choose "other" if your location is not listed.                          │
  │                                                                         │
  │ Country, territory or area:                                             │
  │                                                                         │
  │                         Nigeria                                         │
  │                         Philippines          ▒                          │
  │                         Singapore            ▒                          │
  │                         South Africa                                    │
  │                         United Kingdom       ▒                          │
  │                         United States                                   │
  │                                                                         │
  │     <Go Back>                                                           │
  │                                                                         │
  └─────────────────────────────────────────────────────────────────────────┘

<Tab> moves; <Space> selects; <Enter> activates buttons


Easy Conference Tunneling - OSX, Sidestep, and Cloud Servers

VMworld 2016 is upon us. Or was at the time of this writing. That doesn’t change the message, however. When you are traveling for work, play, or otherwise, who knows who else is on the WiFi with you, who is snooping your traffic, and so on.

In this post, we’ll cover setting up a cloud server, SSH keys, and Sidestep to provide you with traffic tunneling and encryption from wherever you are.

Assumptions:

  • An account with some cloud provider
  • A recent version of OSX

Set up the Cloud Server

The instructions for this will vary some depending on the provider you use. What you are looking for, however, is some flavor of Ubuntu 14.04 or higher. From there, apply this to provide a basic level of hardening.

You can either copy / paste it in as user data, or run it line by line (or as a script on the remote host).

Set up SSH Keys

First, lets check to see if you have existing ssh keys:

ls -al ~/.ssh

You are looking for one of the following:

id_rsa.pub
id_dsa.pub
id_ecdsa.pub
id_ed25519.pub

Not there? Want a new one for this task? Let’s make one. From the terminal:

ssh-keygen -t rsa -b 4096

When prompted, just give it all the default answers (yes, yes, passwordless keys are the devil, but we’re only using this key, for this server, for this conference, right?)

Next, copy the new key over to your server:

ssh-copy-id user@your.cloudserver.com

Finally, use the key to log in to your cloud server and disable password logins:

ssh user@your.cloudserver.com

sudo sed -i 's/#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config

grep PasswordAuthentication /etc/ssh/sshd_config

sudo service ssh restart

So what we just did was:

  • Logged into the cloud server
  • Disabled password auth
  • Confirmed that we disabled password auth (that is, there is no # in front of the line and it reads ‘no’)
  • Restarted SSH to apply the setting
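If you want to be doubly sure, a quick test from your laptop should now be refused when you force password authentication (username and hostname are placeholders, as above):

# Expect "Permission denied" now that password auth is disabled
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password user@your.cloudserver.com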

Sidestep

Sidestep is the glue that pulls all of this together. From their site:

When Sidestep detects you connecting to an unprotected wireless network, it automatically encrypts all of your Internet traffic and reroutes it through a secure connection to a server of your choosing, which acts as your Internet proxy. And it does all this in the background so that you don’t even notice it.

So, first things first, download and install this the same as you would any other OSX package. Once installed, you will need to configure it.

First, set up the actual proxy host and click Test.

Next, set Sidestep up to engage automatically when it detects an unprotected network.
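Under the hood, Sidestep’s secure connection is essentially an SSH SOCKS proxy. If you ever want the same effect by hand (on a machine without Sidestep, say), something like the following gives you a local SOCKS proxy on port 8080 that you can point your system or browser proxy settings at; the hostname is a placeholder:

# -D opens a dynamic (SOCKS) proxy locally, -C compresses, -N skips the remote shell
ssh -D 8080 -C -N user@your.cloudserver.com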

Summary

In this post we showed you how to set up a budget tunneling solution for when you are out and about conferencing, or otherwise on a network you do not trust.

Revisiting BGP on Linux w/ Cumulus Topology Converter

A post or two ago, we explored setting up BGP to route between networks using Linux and Cumulus Quagga. As fun as this exercise was, I would not want to have to set it up this way each and every time I needed a lab. In fact, I haven’t actually touched the homelab it was built on since. Why? Because complicated.

Enter Scott Lowe and Technology Short Take #70. At the bottom of the networking section, as if it weren’t the coolest thing on the list, there is this gem:

"This looks neat—I need to try to find time to give it a spin."

So that’s what we’re doing!

Topology converter’s job is to take a formatted text file that represents the lab you’d like built and generate a Vagrantfile that can then be spun up with all of the plumbing taken care of for you.

Getting Started

My lab setup is a 2012 15” Retina MacBook with 16GB RAM, and the latest Vagrant (1.8.5) & VirtualBox (5.1.4). From there we start with the installation instructions found here. Additionally, you’ll want to clone the repo: git clone https://github.com/CumulusNetworks/topology_converter && cd topology_converter

You’ll also want to install the vagrant-cumulus plugin: vagrant plugin install vagrant-cumulus

Remaking our BGP Lab

Now that we’ve got the tools installed, we need to create our topology. Remember the one we used last time?


In the language of topology converter, that looks like this:

graph dc1 {
  "spine01" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.10.100.1"]
  "spine02" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.20.100.1"]
  "spine03" [function="spine" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.30.100.1"]
  "leaf01" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.10.100.2"]
  "leaf02" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.20.100.2"]
  "leaf03" [function="leaf" os="CumulusCommunity/cumulus-vx" version="3.0.1" memory="512" config="./helper_scripts/extra_switch_config.sh" mgmt_ip="172.30.100.2"]
  "server01" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.10.100.3"]
  "server02" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.20.100.3"]
  "server03" [function="host" os="boxcutter/ubuntu1404" memory="512" ubuntu=True config="./helper_scripts/extra_server_config.sh" mgmt_ip="172.30.100.3"]

  "leaf01":"swp40" -- "spine01":"swp1"
  "leaf02":"swp40" -- "spine02":"swp1"
  "leaf03":"swp40" -- "spine03":"swp1"

  "spine01":"swp31" -- "spine02":"swp31"
  "spine02":"swp32" -- "spine03":"swp32"
  "spine03":"swp33" -- "spine01":"swp33"

  "server01":"eth0" -- "leaf01":"swp1"
  "server02":"eth0" -- "leaf02":"swp1"
  "server03":"eth0" -- "leaf03":"swp1"
}

There are four sections to this file. The first is the node definitions: spine, leaf, and server, or routers, switches, and hosts. Each is given an OS to run and a management IP.

The next three sections define the connections between the nodes. In our case, interface 40 on the switches uplinks to their router. The routers each link to one another, and the hosts to the switches.

Once you have this file saved as bgplab.dot (or similar), run the topology converter command (which produces a very verbose bit of output):

$ python ./topology_converter.py ./bgplab.dot

######################################
          Topology Converter
######################################
>> DEVICE: spine02
     code: CumulusCommunity/cumulus-vx
     memory: 512
     function: spine
     mgmt_ip: 172.20.100.1
     config: ./helper_scripts/config_switch.sh
     hostname: spine02
     version: 3.0.1
       LINK: swp1
               remote_device: leaf02
               mac: 44:38:39:00:00:04
               network: net2
               remote_interface: swp40
       LINK: swp31
               remote_device: spine01
               mac: 44:38:39:00:00:10
               network: net8
               remote_interface: swp31
       LINK: swp32
               remote_device: spine03
               mac: 44:38:39:00:00:0d
               network: net7
               remote_interface: swp32
...

Starting the lab

Ok, so what we’ve done so far: we installed topology converter, wrote a topology file to represent our lab, and converted that to a Vagrantfile. We have one more pre-flight check to run before we can fire up the lab, and that is to make sure Vagrant recognizes what we’re trying to do:

$ vagrant status
Current machine states:

spine02                   not created (vmware_fusion)
spine03                   not created (vmware_fusion)
spine01                   not created (vmware_fusion)
leaf02                    not created (vmware_fusion)
leaf03                    not created (vmware_fusion)
leaf01                    not created (vmware_fusion)
server01                  not created (vmware_fusion)
server03                  not created (vmware_fusion)
server02                  not created (vmware_fusion)

Looks good, excepting that vmware_fusion bit. Thankfully, that’s an artifact of my local environment, and can be worked around by specifying --provider=virtualbox.
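Alternatively, Vagrant honors a default-provider environment variable, which saves typing the flag each time:

# Make VirtualBox the default provider for this shell session
export VAGRANT_DEFAULT_PROVIDER=virtualbox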

So, let’s do that: Warning, this will take a WHILE

vagrant up --provider=virtualbox


LOTS OF OUTPUT

$ vagrant status
Current machine states:

spine02                   running (virtualbox)
spine03                   running (virtualbox)
spine01                   running (virtualbox)
leaf02                    running (virtualbox)
leaf03                    running (virtualbox)
leaf01                    running (virtualbox)
server01                  running (virtualbox)
server03                  running (virtualbox)
server02                  running (virtualbox)

Accessing the lab

Ok, to recap one more time, we’ve installed topology converter, written a topology file, converted that to a Vagrant environment, and fired that up within virtualbox. That’s A LOT of work, so if you need a coffee, I understand.

We’ve done all the work of creating the lab, but we haven’t configured anything yet. While we’ll leave that as an exercise for another post, we will show you how to access a node in said lab. The most straightforward way is with vagrant ssh [nodename]:

vagrant ssh spine02

Welcome to Cumulus VX (TM)

Cumulus VX (TM) is a community supported virtual appliance designed for
experiencing, testing and prototyping Cumulus Networks' latest technology.
For any questions or technical support, visit our community site at:
http://community.cumulusnetworks.com

The registered trademark Linux (R) is used pursuant to a sublicense from LMI,
the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide
basis.
vagrant@spine02:~$ sudo su - cumulus

Summary

Long post is long. However, in this post, we have used Cumulus Topology converter to create a lab network topology with 3 routers, 3 switches, and 3 hosts that are now waiting for configuration.

Mosh-ing on Ubuntu & OSX

Today? Today was a treat. Instead of staying holed up in the home office, I decided to head out and attempt to people for a few hours. While the results of my peopling varied (coffee shops are strange), I also encountered less than good network connectivity. That is, if the firewall logs are to be believed, I wasn’t the only one running a torrent or twelve.

Terrible network connections, right? SSH is typically robust in that regard. Not this time, apparently. Enter Mosh.

What is Mosh

Mosh, or mobile shell, comes to you from MIT (here). Mosh uses SSH to establish the initial connection and authentication. This lets all your ssh-key files and such keep working. It then launches its own UDP mosh-server to handle the actual session.

It’s a great little package, and worth looking at a bit more. I’ve been using it instead of SSH for most of the boxes I manage for a while now.

Installing and Using Mosh

As with all things, there are two components. That which you install on your laptop, and the bits that run on the box you are managing. This section assumes OSX on the desktop and Ubuntu on the server. That said, if you check the Digital Ocean link in the references section, you’ll see the install covered for other platforms. I’d also recommend adding this to whatever scripts you use to bootstrap a server.

Installing Mosh on OSX

brew install mosh

That’s it!

Oh, you don’t have homebrew installed? While I’d suggest fixing that, you can also download the Mosh package from here, and install like any other OSX package.

Installing Mosh on Ubuntu 14.04

This is also straightforward:

sudo apt-get install -y mosh

Additionally, I added an exception for it in iptables:

sudo iptables -I INPUT 1 -p udp --dport 60000:61000 -j ACCEPT
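If opening a 1000-port range bothers you, the mosh client can pin the server to a specific UDP port, which lets you scope the firewall rule down accordingly (the port here is arbitrary):

# Only allow one UDP port, and tell mosh to use it
sudo iptables -I INPUT 1 -p udp --dport 60001 -j ACCEPT
mosh -p 60001 bunchc@codybunch.local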

Logging in

All that setup, phew. Fortunately, from this point forward it works very much like ssh, in that:

ssh bunchc@codybunch.local

Becomes:

mosh bunchc@codybunch.local

If you need to pass extra options to the underlying ssh connection (a non-standard port, say), it is still straightforward, but a bit less so than standard ssh:

mosh --ssh="ssh -p 9000" bunchc@codybunch.local


Getting started with BGP on Linux with Cumulus Quagga

Before we begin, I’d like to admit something: “I spent a good number of years as a Windows sysadmin.” There, I said it. Now, I’m proud of my time as a Windows guy, so that’s not why I bring it up. I mention my time as a Windows admin perhaps to provide a reason why it took me so long to get BGP working between some Linux VMs.

What we’re building

We will create a three-node, three-router network in this post, following these steps:

  1. Install the routers
  2. Configure the routers
  3. Configure the hosts
  4. Test connectivity

Install the Routers

In our setup we start with Ubuntu 14.04 boxes with two interfaces. We assume eth0 is on the 10.0.1.0/24 network, and eth1 is on the 172.x.100.0/24 network.

First, download the 14.04 package from here. In the commands below we wget this file from another server in the environment; you can get it onto the routers however you like.

Configure the interfaces

On each router, configure your interfaces, substituting network addresses where relevant:

# cat /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
  address 10.0.1.19/24
  gateway 10.0.1.1
  dns-nameservers 4.2.2.2 8.8.8.8

auto eth1
iface eth1 inet static
  address 172.10.100.1
  netmask 255.255.255.0

Then, restart the interfaces:

sudo service networking restart

Enable routing

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
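A quick check that forwarding actually took:

# Should print: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward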

Install Quagga

As root, run the following commands:

apt-get update && apt-get -qq dist-upgrade && reboot
wget http://location/of/the/download/roh-ubuntu1404.tar
tar -xvf roh-ubuntu1404.tar
cd ubuntu1404/
dpkg -i quagga_0.99.23.1-1+cl3u2_trusty_amd64.deb quagga-dbg_0.99.23.1-1+cl3u2_trusty_amd64.deb quagga-doc_0.99.23.1-1+cl3u2_trusty_all.deb

We then need to provide some initial start up configuration for Quagga. We do this by changing the /etc/quagga/daemons file to reflect the services we would like started. In this case, the obligatory Zebra, and BGP:

zebra=yes
bgpd=yes
ospfd=no
ospf6d=no
ripd=no
ripngd=no
isisd=no

Start Quagga

Now that Quagga is installed, we need to start said service:

service quagga start

Configure the Routers

With Quagga installed and running on each Quagga host, we can configure the needed bits to get BGP operational. The commands and configuration that follow configure the 10.0.1.19/24 router from our diagram above. Note, you will want to choose private AS numbers from here: Private BGP AS Numbers. In the example, these are 64512, 64513, and 64514 respectively.

# vtysh
conf t

hostname cab1-router
password zebra
enable password zebra
log file /var/log/quagga/bgpd.log
!
router bgp 64512
 bgp router-id 10.0.1.19
 network 172.10.100.0/24
 neighbor 10.0.1.20 remote-as 64513
 neighbor 10.0.1.21 remote-as 64514
 distance bgp 150 150 150
!
exit
exit
wr mem
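Once the other two routers have the equivalent configuration (their own router-id, network, and neighbors), you can check on the sessions from vtysh; roughly, peers should show a prefix count rather than sitting in Active or Idle:

# Check BGP session state and the routes learned via BGP
vtysh -c 'show ip bgp summary'
vtysh -c 'show ip route bgp'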

Configure the Hosts

For each host behind one of our Quagga routers, you need to set the interfaces file to an appropriate address and gateway:

$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
  address 172.30.100.100/24
  gateway 172.30.100.1

Testing Connectivity

Now that all three routers and all three VMs are configured, we will test connectivity. First between routers, then between VMs:

From router 1:

cab1-router# ping 172.20.100.1
PING 172.20.100.1 (172.20.100.1) 56(84) bytes of data.
64 bytes from 172.20.100.1: icmp_seq=1 ttl=64 time=0.367 ms
^C
--- 172.20.100.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms
cab1-router# ping 172.20.100.100
PING 172.20.100.100 (172.20.100.100) 56(84) bytes of data.
64 bytes from 172.20.100.100: icmp_seq=1 ttl=63 time=0.448 ms
^C
--- 172.20.100.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.298/0.373/0.448/0.075 ms

From VM3:

[root@cab3-test-vm:~]
[17:03:59] $ ping 172.20.100.100
PING 172.20.100.100 (172.20.100.100) 56(84) bytes of data.
64 bytes from 172.20.100.100: icmp_seq=1 ttl=62 time=0.443 ms
64 bytes from 172.20.100.100: icmp_seq=2 ttl=62 time=0.475 ms
^C
--- 172.20.100.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.443/0.459/0.475/0.016 ms

[root@cab3-test-vm:~]
[17:10:06] $ ping 172.10.100.1
PING 172.10.100.1 (172.10.100.1) 56(84) bytes of data.
64 bytes from 172.10.100.1: icmp_seq=1 ttl=63 time=0.179 ms
64 bytes from 172.10.100.1: icmp_seq=2 ttl=63 time=0.410 ms
^C
--- 172.10.100.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.179/0.294/0.410/0.116 ms

Summary

In this post we built three L3 networks and configured BGP routing between them using Quagga on Ubuntu 14.04 hosts. We then tested connectivity between these networks.


Make Cloud Great Again - On Crowd Funding


In case y’all missed it, beginning around the OpenStack Austin 2016 summit, I launched a “Make the Cloud Great Again” crowd funding campaign. As I sit here typing this, I’ve dropped all the packages off at the post office for shipping, and feel it’s important to write some notes for those who might consider their own campaign. That, or to remind myself how difficult this was in case I decide to do it again.

Genesis

Like most of my more terrible ideas, this one started on Twitter. A snarky comment about #MakeTheCloudGreatAgain, as a play on the Trump campaign slogan, started it all. I reached out to StitchTek and worked with Mr. Colotti on the trucker hat design. We played with some ideas, and ordered some samples.

The Platform

Indiegogo was the platform of choice, due to the flexible nature of its funding; meaning, I didn’t have to hit the entire goal to be able to ship. The platform has some decent tools for promotion and updates, even if the buttons aren’t always where you would expect.

BackerKit

BackerKit basically handles everything after Indiegogo or Kickstarter releases funds: surveys, add-ons, additional orders, and more. Where this fit into our process was surveys and generating shipping labels, as well as communicating tracking information.

Enlisting Help

When the campaign broke the 100% barrier, I enlisted the help of GetFriday, a division of Your Man in India. Basically a per hour assistant. They managed the campaign updates, and everything from the close of the campaign to buying shipping.

Packing and Shipping

The Hats

The rubber met the road here. Lots of hours, trips to Office Max, and then some went into this step. 40+ is a lot of hats.

In Summary

It’s done, over with, finished. The hats have shipped. Would I do it again? Maybe. This was a lot more effort than anticipated, that’s for sure.

Speaker Notes & Slides - BOS VMUG 2016

This last week I had the wonderful opportunity to speak at the Boston VMUG user conference, as part of both a community panel and a talk on “Automation - You’re doing it wrong”. This post includes the speaker notes for said session in raw form, as well as the similarly raw slides.

Slides

Notes


Automation - You’re doing it wrong

Automation has been a major topic at IT conferences more or less forever. Be it batch files, PowerShell, VCO (you know, before they renamed it). We’ve even had movements towards automating IT workloads at scale… DevOps, etc. In this talk, we aim, not to cover whatever is the latest and greatest in tactical tools for automation. Rather, our aim is to equip you with the mental models to help resolve some of the underlying issues that prevent IT teams from achieving the goal of having a more automated system, by approaching the problem with a different mindset.

  • Introductions
    • Who is Cody
    • About the space
  • Processes
    • The Credit Union
      • Standard Operating Procedure(s)
      • A Big Project
      • A Small shell script
      • What happened…
      • Lessons learned
    • The Roomba example:
      • “O2O” - Order to Online
      • The Whiteboard
      • Lessons learned
  • People
    • The Credit Union
      • The People
        • The “Mainframe Guy”
        • The “Operator”
        • The “Desktop Guy”
        • The “Web Guy”
  • Stop addressing first order tasks
    • Evolution of Automation
      • Mechanical Turks
      • Task Based
      • “DevOps” / Product Dev
    • Look at the whole system
    • Design an autonomous system
  • Buying Time
    • “If we are engineering processes and solutions that are not automatable, we continue having to staff humans to maintain the system. If we have to staff humans to do the work, we are feeding the machines with the blood, sweat, and tears of human beings. Think The Matrix with less special effects and more pissed off System Administrators” - Joseph Bironas, Google SRE

Updated vSensei Reading List

Apparently, these are useful. With that in mind, here is what I’ve been reading since the first of the year. Unlike the other lists, there isn’t much specific direction in this one.

The Importance of Being Little - This one spawned a bunch of my more recent Twitter-borne rants. If nothing else, it is amazing how similar life in the workplace is to the life of a preschooler, and unfortunately just as much about training in most cases.

4 Hour Work Week - This one gets an occasional re-read. If you can get past the Tim Ferrisness of it, there is some good “GO DO THE DAMN THING” advice in there.

Influence - Just wrapped this one up prior to going to a show. It was interesting to spot all the varied techniques in use, in a real setting.

Seeking Wisdom - Excellent read.

Profit First - This one came to me via a podcast recommendation. While I’m not sure I’d recommend it outright (the way it reads is very tele-salesy), the gist of it is basically to set your business up in a similar way to how you would your personal finances after a Dave Ramsey class.

Buddhism Without Beliefs - I’m still processing this one, honestly, and it may get a second listen soon so I can better process the lessons contained within. It gets to the root of things like, ‘why mindfulness’ and ‘why meditation’, along with “meditation isn’t just sitting, it can also be bicycling, etc”.

Debt: The First 5,000 Years - A good little history of money that sheds some light on some of the stupider things we do to each other.

Elon Musk - Quick read on the behind the scenes of some of the most interesting BIG engineering going on today.

Unstoppable - A little preachy, but in a good way. Bill Bill Bill Bill… Bill Nye the Science Guy!

Inside of a Dog - Because being mindful isn’t just for people.

Kubernetes Up And Running - Well, damn this book is good. Accessible, working examples, good overview of k8s. Everything I’d want in a first book on the subject.

Hardware Startup - There was a phase early in the year where I thought I was cool, and would launch a Kickstarter for something or other. This was the first book recommended, and while it hasn’t completely disabused me of the notion of a hardware startup, it pointed out HUGE gaps in my knowledge that I am working to fill.

OpenStack Networking Essentials - One day, I hope to write as well as Denton. He breaks down the complicated subject of the OpenStack Neutron project into the bare essentials of what you need to know, then explains it in a way that is incredibly accessible. I was an editor of this volume.

Troubleshooting OpenStack - Like the Neutron book above, Tony makes a hard subject a bit easier and a lot more understandable. If you use OpenStack, you’ll want this book.

The Wahls Protocol - My wife has MS. I needed to know more. This got me a bit closer.

Docker Clustering on Raspberry Pi

The folks behind Hypriot have made getting Docker up and going on the Raspberry Pi a near on trivial task: Download the image, power it on. They’ve also done the same thing for Docker clustering, here. Which is great, until it isn’t.

That is, their clustering lab has a number of hard requirements around VLAN networking support. If you’ve got a plethora of dumb switches at home, this just won’t do!

Have no fear, however: the combination of their image and their clustering packages seems to do the trick.

Hardware bits to have

  • Some number of Raspberry Pi’s (I’ve got 3x rPI 2 for this, I think anything b+ will work)
  • At least one wifi dongle
  • An Ethernet switch

Let’s do this!

This process roughly breaks down like this:

Pull the image down from here. At the time of writing it was “Hector” 0.6.1.

Use the Hypriot ‘flash’ script to load your SD card:

flash --hostname Drekkar --ssid WiFi-Iron --password "$DAVE" /path/to/theimage.img.zip

Do this for each card you have. Once your cards are imaged, plug them in, boot the Pi's, and find them on the network (https://github.com/adafruit/Adafruit-Pi-Finder).

For this next trick, I used ssh and broadcast input in iTerm (tmux works well too) to issue the following command across all the nodes in my cluster:

sudo apt-get update && sudo apt-get install hypriot-cluster-lab

The prior command will take a couple of minutes to complete. Once it does, pick a node to be 'master' and direct all input to it (rather than all the nodes). On that node, run the following:

sudo systemctl start cluster-start

Again, this will take a few minutes. Once finished, run the prior command on the rest of your cluster nodes. At this point, fetch a coffee, it’ll be a while.
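To sanity-check things while you wait, checking the cluster-start service and asking Docker what is running on the master is a reasonable rough gauge (the exact containers the cluster lab starts may differ between releases, so treat this as a sketch):

# Is the cluster-lab service happy?
sudo systemctl status cluster-start

# What has it started locally? The cluster-lab services should appear here once things settle.
docker ps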

Cool! You’ve now got a Docker cluster running on Raspberry Pi’s.

To read more about what the Hypriot group has going on, and some exercises to do with said cluster, check out their git page here.