Cody Bunch - Some Random IT Guy - OpenStack, DevOps, Cloud, Things

ZeroVM - Getting Started, Again

Ok, so now that we’ve covered a LOT of ZeroVM background & isolation details, let’s actually start to get our hands dirty.

tl;dr - git clone https://github.com/bunchc/vagrant-zerovm.git && cd vagrant-zerovm && vagrant up

Getting Started

ZeroVM has packages for Ubuntu 12.04, so to get started you will need either hardware with 12.04 installed or a VM of the same. Once you have a machine up and running, you are ready to install.

Installing ZeroVM

To install ZeroVM, log into your Ubuntu 12.04 setup and run the following commands:

Install some needed packages:

sudo apt-get update
sudo apt-get install -y curl wget

Install the ZeroVM repository and key:

sudo su -c 'echo "deb http://packages.zerovm.org/apt/ubuntu/ precise main" > /etc/apt/sources.list.d/zerovm-precise.list' 
wget -O- http://packages.zerovm.org/apt/ubuntu/zerovm.pkg.key | sudo apt-key add - 

Next, refresh our repository and install ZeroVM.

sudo apt-get update
sudo apt-get install -y zerovm zerovm-cli

With that, you have ZeroVM installed.
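
If you want to sanity-check the install before moving on, a quick query confirms the two packages landed and that the zvsh wrapper used below is on your path:

dpkg -l zerovm zerovm-cli
which zvsh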

By itself that is not interesting, so let’s show off the Python version of “Hello world” run inside ZeroVM:

wget http://packages.zerovm.org/zerovm-samples/python.tar 
echo 'print "Hello"' > hello.py

zvsh --zvm-image python.tar python @hello.py 

The first command fetches a cPython interpreter that has been cross-compiled to work inside ZeroVM. Next, we set up our example file and run it. There are some even more interesting examples if you head over to the ZeroVM downloads site.
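
If you want one more quick test before moving on, the same zvsh invocation pattern works for any small script. For example, assuming the bundled interpreter is Python 2 (as the print statement above suggests), this prints the interpreter version from inside the sandbox:

echo 'import sys; print sys.version' > version.py
zvsh --zvm-image python.tar python @version.py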

Summary

In this post, you set up an Ubuntu 12.04 machine, installed the ZeroVM apt repositories, and installed ZeroVM. Finally, you created a Python “Hello world” and ran it within your newly installed ZeroVM.

ZeroVM - Isolation

Here we go with another post on ZeroVM, this time to help get you up to speed on how ZeroVM provides isolation. For a reminder of what ZeroVM is and how it is different from containers and other virtualization technologies, see my prior post here, or the ZeroVM docs here.

Isolation

The ZeroVM site itself doesn’t say much about the isolation ZeroVM provides. This is largely because it derives its isolation from the Google Native Client (NaCl) project. The tl;dr for NaCl is that it requires applications to be ported to its sandboxing environment, in which only a subset of processor instructions is made available.

The sandbox environment provided by NaCl is a limited subset of processor instructions that prevents various syscalls that could be destructive. Further, ZeroVM takes the 50 syscalls available in NaCl and reduces them to six. That means if your code contains a syscall other than one of the six currently allowed, it will fail when ZeroVM attempts to execute that specific syscall. The six allowed syscalls are currently:

  • pread
  • pwrite
  • jail
  • unjail
  • fork
  • exit

We can test this by trying a plain write instead of pwrite, using the following C++:

#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>

int main()
{
    char data[128];

    // read() and write() are not among the six syscalls ZeroVM allows
    // (pread/pwrite are), so ZeroVM refuses to run this program.
    if (read(0, data, 128) < 0)
        write(2, "An error occurred in the read.\n", 31);

    return 0;
}

This is what it looks like when it implodes:

$ zvsh syscalls
----------/tmp/tmpl7Mi1v----------
Node = 1
Version = 20130611
Timeout = 50
Memory = 4294967296, 0
Program = /home/vagrant/syscalls/syscalls
Channel = /dev/stdin,/dev/stdin,0,0,4294967296,4294967296,0,0
Channel = /tmp/tmpOfgnG6/stdout,/dev/stdout,0,0,0,0,4294967296,4294967296
Channel = /tmp/tmpOfgnG6/stderr,/dev/stderr,0,0,0,0,4294967296,4294967296
Channel = /tmp/tmpeNIzx4,/dev/nvram,3,0,4294967296,4294967296,4294967296,4294967296
-------------------------
----------/tmp/tmpeNIzx4----------
[args]
args = syscalls
[mapping]
channel=/dev/stdin,mode=char
channel=/dev/stdout,mode=char
channel=/dev/stderr,mode=char

-------------------------
2
0
0
disable
0.00 0.00 0 0 0 0 0 0 0 0
src/loader/elf_util.c 178: Segment 0 is of unexpected type 0x6, flag 0x5

ERROR: ZeroVM return code is 8

What is important here, at least in terms of telling why it failed, is the last line: ERROR: ZeroVM return code is 8. Return codes 1 - 17 indicate an issue in untrusted code. In our example, it was specifically the read/write calls that caused the issue.
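
If you are scripting around zvsh, you can key off that output rather than eyeballing it. A rough sketch, assuming the "ZeroVM return code" line always accompanies a failure the way it does above:

OUTPUT=$(zvsh syscalls 2>&1)
if echo "$OUTPUT" | grep -q "ZeroVM return code"; then
    echo "ZeroVM rejected or aborted the untrusted code:"
    echo "$OUTPUT" | grep "ZeroVM return code"
fi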

Down The Rabbit Hole

Time to jump down the rabbit hole and look at what actually just happened. First, some diagrams:

ZeroVM Stack

In this first diagram, you are looking at the internal structure of the ZeroVM process.

ZVM Trusted Code Base Architecture

Working from the outside in, the grey area represents everything that is ZeroVM: all of its runtime data, IO channels, virtual file systems, and most importantly, the user code to be executed.

Next, the pink area is where end-user code gets loaded; from there, a call progresses down the stack. In the last, white layer you can see the individual syscalls as implemented in ZeroVM, as well as their corresponding traps.

One last thing to note is the two syscalls zvm_pread and zvm_pwrite, their traps, their handlers, and this thing called a trampoline. The trampolines are the mechanism that facilitates the switch from untrusted to trusted execution. A syscall starts in the ‘untrusted’ context; if it is an allowed call, the ZRT (ZeroVM Run Time), via its hooks into the ZVM syscall API, passes the call to a “trampoline” that performs the context switch needed to let the syscall execute in trusted space.

ZeroVM Guest Memory Footprint

This next diagram offers a different view of the ZeroVM guest described above. In it, you can see how the NaCl and ZeroVM trampolines are laid out in memory, how a call in turn gets passed down to the corresponding syscall and the trusted ELF binary, and where the rest of ZeroVM is contained.

Anatomy of the ZeroVM Guest Memory footprint

Summary

In this post, we took a deeper look into how isolation is handled within ZeroVM. We started with a high-level discussion of NaCl and ZeroVM’s limited syscall set. We then attempted to execute an invalid syscall. Finally, we got into the nitty gritty of how that syscall is trapped by the ZeroVM runtime and where in its memory layout the context switch from untrusted to trusted code happens.

ZeroVM - Some Background

In a little over three weeks, at the Paris Summit, I’ll be helping give a 90 minute workshop on ZeroVM. 90 minutes is not actually a lot of time to get a good grasp on what ZeroVM is, how it operates, and most importantly, why it matters.

These next few posts however, I am hoping, will shed some light on that.

What is ZeroVM?

From the ZeroVM website:

ZeroVM is an open source virtualization technology that is based on the Chromium Native Client (NaCl) project. ZeroVM creates a secure and isolated execution environment which can run a single thread or application

Read that over a few times. What it is saying is this: where Docker provides packaging and traditional virtualization provides full OS isolation, ZeroVM provides an environment specific to a single application or thread.

Explained another way, ZeroVM provides isolation at the thread or application level. It provides a “sandbox” environment for you to run arbitrary “untrusted” code. Some examples of how this could be useful can be found here and here.

ZeroVM does this by using NaCl from Google to provide isolation and security. ZeroVM also has a number of other layers that provide additional services, things like a POSIX file system, I/O, and channels.

Use Cases

Due to its small size, speed, and isolation properties, ZeroVM is able to solve some problems that are very “big data” or cloud centric. Let’s take a look at where it fits:

Where Does ZeroVM Fit?

This diagram from the Atlanta OpenStack Summit should help:

ZeroVM vs Traditional Virt

Additionally, this OpenStack Summit presentation should help clarify some as well.

Traditional Virtualization

Traditional virtualization is perhaps the easiest to cover; it’s the most familiar way of isolating the things. I’ve been running VMware Workstation/Fusion, their enterprise products, and so on for ages, and I imagine a lot of you have as well. To recap though:

  • Type 1 or 2 hypervisor
  • Each VM is a set of processes
  • CPU instructions virtualized (it’s a bit more complicated nowadays with hardware extensions)
  • Carries around a full OS installation
  • All resources isolated
  • Very few published VM escapes

A few clarifying points. I’ve simplified the CPU conversation for this discussion. That is, while hardware extensions provide native or near-native execution, there is still quite a bit happening in software for those instructions that do not translate. Additionally, “all resources isolated” is a bit of a simplification as well: most modern hypervisors will do some manner of memory sharing, compression, and consolidation. However, memory is still generally isolated to a specific VM.

Containers (Docker / LXC)

Containers are all the rage these days. Docker has popularized and simplified the way we manage them. You get the same or better benefit from the shared hardware; however, containers strip away quite a few of the layers:

  • Shared Kernel / OS
  • Super low overhead
  • Fast startup
  • ‘secure’
  • Managed via namespaces and process isolation

The important things to note here are the shared kernel bits and the low overhead. A shared kernel means everything runs on the same Linux kernel and thus has the same features and limits. As an upside, you also shed having to carry around a full OS installation, networking stack, and so on, and thus have much less overhead. This enables faster startup times.
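
A quick way to see the shared kernel for yourself, assuming you have Docker installed on a Linux host: the kernel version reported from inside a container is the host’s kernel, not something the image carries around.

uname -r                               # host kernel
docker run --rm ubuntu:14.04 uname -r  # same kernel version, reported from inside the container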

ZeroVM

Finally, and most important to what we’re talking about, is ZeroVM. ZeroVM sits somewhere a bit beyond containers. That is, you still have:

  • Shared hardware
  • Low Overhead
  • Fast Startup

However, you are no longer carrying an OS or kernel around, which in turn further minimizes the attack surface. As noted earlier, there are a total of six system calls available from within ZeroVM.

In addition, due to its extremely small size, a new ZeroVM instance can spin up in milliseconds, versus seconds or minutes for the other technologies. This speed advantage lends itself well to one of ZeroVM’s primary use cases in the data-pipeline / data-processing space; indeed, it is the magic that makes ZeroCloud work. It does, however, require a different way of thinking about and approaching the problem.

That is, ZeroVM instances are designed to be extremely temporary in nature: load your data in, handle the processing, push the results back out to disk, and move on. Conceptually, it takes you a few steps further down the ‘everything is disposable’ path, insofar as you will need to design and rewrite your apps to work with ZeroVM.

Summary

While a wall of words, I hope I have provided some clarity around the what of ZeroVM and where it fits.

Instance Shelving

Shelving? No no no, not that kind of shelving. Similar, however. Instance shelving in Nova allows you to power off instances without incurring the resource penalty of keeping them around.

Instance Shelving

Before instance shelving existed in OpenStack, if a user powered down an instance, the resources would still be in use on the compute node that housed it. Quite wasteful, no? Say an instance is powered off for 72+ hours: shelving allows you to keep all the various bits associated with the VM while moving them somewhere that is not the hypervisor, to conserve resources.

Working with Shelving

The assumption here is that you have either devstack or some other flavor of OpenStack running. There isn’t anything additional you need to configure to make this work.

First, let’s log into our controller and look around at what’s running:

# nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                      |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | ACTIVE | -          | Running     | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+

Next, let’s shelve an instance:

# nova shelve test1

Did it get shelved?

# nova list
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status            | Task State | Power State | Networks                                      |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | SHELVED_OFFLOADED | -          | Shutdown    | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+

Note: The change in status to “SHELVED_OFFLOADED”
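
As an aside, whether a shelved instance is offloaded immediately is controlled by the shelved_offload_time option in nova.conf. The snippet below is just an illustration of that knob; check the defaults for your release before relying on it:

[DEFAULT]
# 0  = offload to storage immediately after shelving (what we saw above)
# -1 = never offload; keep the shelved instance on its compute host
# N  = wait N seconds before offloading
shelved_offload_time = 0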

Now let’s unshelve it & check status:

# nova unshelve test1
# nova list
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status            | Task State | Power State | Networks                                      |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | SHELVED_OFFLOADED | spawning   | Shutdown    | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+

There we go!

Summary

In this post, we worked with one of the new-ish features in Nova, instance shelving. Good for when you need to stop instances for a long time in a non-impactful way.

Running Rackspace Private Cloud on the Rackspace Public Cloud

Private Cloud on the Public Cloud? As odd as that sounds, or as inception as it makes you feel (Clouds in clouds?!), I’ve found that since downsizing my homelab quite a bit, I’ve needed to find other ways to work on and try out various things that exceed the capacity of my laptop. RPC 9 is one of them.

Disclaimer: I work for Rackspace, and while this post is largely focused around two of our products, I put it out here in the hopes that a) someone will find it useful, and b) it’ll help me later.

Rackspace Private Cloud (RPC)

Our docs will do it a lot more justice in terms of description than I can, so I encourage you to go here and take a few minutes to get familiar.

A few things I want to point out are related to the architecture of RPC9, specifically:

RPC Infrastructure

Looking over the diagram, there are a lot of hosts involved now: 3x infrastructure nodes, a logging host, n-compute hosts, deployment hosts, and finally a set of load balancers. This thing is big. Bigger than my laptop, that’s for sure.

Running Cloud on Cloud

Thankfully, however, while it’s big, the folks who wrote this provided some OpenStack Heat templates that make setting it up externally much easier. Those can be found here.

Getting Started

To build the cloud on the cloud you’ll need the following info & apps installed somewhere you have access to:

  • A “heatrc” or similar file containing:
    • Rackspace Username
    • Rackspace API Key
    • Endpoint(s) to deploy to
  • Python-HeatClient
  • An SSH Key

To create the “heatrc” file, start with the below template and then edit as needed:

export OS_USERNAME=rackspace_cloud_username
export OS_PASSWORD=rackspace_cloud_password
export OS_TENANT_ID=rackspace_cloud_account_number
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export HEAT_URL=https://ord.orchestration.api.rackspacecloud.com/v1/${OS_TENANT_ID}

Note: OS_TENANT_ID is your cloud account number. You can get to this by logging into mycloud.rackspace.com and clicking your account name in the upper right.

Once you’ve created the file and replaced said values, install the Heat client: pip install python-heatclient
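
To confirm the client is installed and can reach your endpoints, source the file and ask for a stack list; on a fresh account this should come back empty rather than with an authentication error:

source ./heatrc
heat stack-list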

Installing RPC 9 on the Rackspace Public Cloud

To kick off the installation, follow these commands:

curl https://raw.githubusercontent.com/rcbops/ansible-lxc-rpc/master/scripts/rpc9.0.0-aio-rax-heat-template.yml > rpc9-rax-heat.yaml

source ./heatrc

heat stack-create RPC9-Stack -f ./rpc9-rax-heat.yaml \
  -P image_name="Ubuntu 14.04 LTS (Trusty Tahr)" \
  -P ssh_key_name="lol_ssh_key" \
  -P flavor_name="8 GB Performance"

+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| c2b6c1b0-0098-441d-9999-c778b108a181 | RPC9-Stack | CREATE_IN_PROGRESS | 2014-10-13T15:13:58Z |
+--------------------------------------+------------+--------------------+----------------------+

This next part takes quite a bit of time to complete, which is why we used a performance instance: to make provisioning happen just a bit faster. You can keep an eye on its build status with watch -n 15 heat stack-list

Once it completes, you will need to find out what IP address it has been assigned. To do that, use these commands:

List the stacks:

$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| c2b6c1b0-0098-441d-9999-c778b108a181 | RPC9-Stack | CREATE_COMPLETE | 2014-10-13T15:13:58Z |
+--------------------------------------+------------+-----------------+----------------------+

List the available outputs:

$ heat output-list RPC9-Stack
+------------------+-------------------------------------------------------+
| output_key       | description                                           |
+------------------+-------------------------------------------------------+
| RPCAIO_password  | The password for all the things.                      |
| RPCAIO_public_ip | The public IP address of the newly configured Server. |
+------------------+-------------------------------------------------------+

Finally show the IP:

$ heat output-show RPC9-Stack RPCAIO_public_ip
"127.0.0.100"

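From there you can log straight into the all-in-one node. A rough sketch, noting that heat output-show wraps values in quotes (so strip them first) and that the RPCAIO_password output above is the login password; the root user here is an assumption, so adjust for your image:

RPC_IP=$(heat output-show RPC9-Stack RPCAIO_public_ip | tr -d '"')
ssh root@${RPC_IP}
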
Summary

In this post we showed you how to nest a Rackspace Private Cloud installation on the Rackspace Public Cloud. A useful trick for testing it out without having to use up ALL your local resources.

OpenStack Cookbook 3rd Edition

Oh Oh Oh! The lesson in book writing is that it is both terrible and addictive. That is, right as we wrapped the second edition of the OpenStack Cookbook, I promised myself “Never again!”.

Now, some number of months later, Kevin and I have chatted a bit and decided to go down that road again and update the book. Indeed, we’re also looking at bringing Egle along for the ride this time.

3rd Edition Highlights

The third edition will target either a late Juno or early Kilo release (hooray, relevancy!). In addition to general updates, we’re adding or overhauling the following:

  • OpenStack Datacenter Automation
  • OpenStack Scaling
  • Image Management & Conversion
  • More Operations Recipes
  • HTTPS!
  • OpenStack Heat
  • Additional Neutron Services (LBaaS, VPNaaS)
  • Using 3rd Party drivers

30 Posts in 30 Days

I found this morning that I was in a bit of a blogging slump. That is, I’d not posted anything in quite a while, even though I have plenty of exciting things going on at the moment. So, having happened across this post by Greg Ferro (@etherealmind), I thought I’d jump into the fray.

I encourage you to do the same.

Currently Reading - Sept 2014

I’ve found over time my reading habits have changed from mostly tech books to a fair mix of things, including some that surprised even me… business books. o.O? Here’s what’s currently on my list and in progress, first the boring ones:

Then on the more fun side:

MariaDB With Galera on Vagrant

I found myself in some training this last week using MariaDB, and being that I like to get a bit more hands-on than most, using the class-provided lab environment wasn’t going to cut it. This meant wrapping some scripting into a Vagrant environment so I could reliably reproduce the three-node lab.

Getting Started

You’ll need Vagrant and Git. It’s also preferred that you have vagrant-cachier installed; you should have vagrant-cachier anyway, but that’s a topic for another time. Once you have these out of the way, do the following:

  1. git clone https://github.com/bunchc/mariadb-galera-vagrant.git
  2. cd mariadb-galera-vagrant
  3. vagrant up

Validating the Cluster

After a few minutes, you should be able to log into any of the nodes. Specifically, node-01 is used to ‘bootstrap’ the cluster; the other two nodes join from there.

To validate the cluster:

vagrant ssh mariadb-02
sudo su -
mysql -uroot
SHOW GLOBAL STATUS LIKE 'wsrep%';

You should be able to do the above from any of the nodes in the cluster.
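
For reference, on a healthy three-node cluster the output should include values along these lines (trimmed; exact variables vary a bit between Galera versions):

| wsrep_cluster_size   | 3       |
| wsrep_cluster_status | Primary |
| wsrep_connected      | ON      |
| wsrep_ready          | ON      |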

Raspberry Pi as PXE Server

As I start to move OpenStack Compute Cells from Devstack onto physical hardware, I needed to rethink my homelab some. It had been running some variation of vSphere and the OpenStack Cookbook work from various projects prior. Basically, it was a Hodor of a home lab.

Hodor!

Rather than lose a bit of hardware to Foreman, or one of the new Razor forks (I may still go this route later), I decided instead to pound a Raspberry Pi into service. To turn the rPi into a usable provisioning server, I did the following:

  1. Provision a 16GB card with Raspbian
  2. Beat Networking Into Submission
  3. Setup IP Tables for NAT
  4. Configure DHCP, PXE, TFTP, etc

We’ll skip step 1 as the folks at raspberrypi.org cover that pretty well already.

Beat Networking Into Submission

So the rPi does some auto-hotplugging bits that can cause issues if you try to use both wifi and ethernet at the same time. If you’re not expecting them, well… let’s just say I spent too long trying to solve the problem before googling it. Here’s what my networking config files look like:

# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
allow-hotplug eth0
iface eth0 inet static
    address 172.16.0.1
    netmask 255.255.255.0

auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
    address 10.0.1.15
    netmask 255.255.255.0
    broadcast 10.0.1.255
    gateway 10.0.1.1

iface default inet dhcp

# cat /etc/default/ifplugd
INTERFACES="eth0"
HOTPLUG_INTERFACES="eth0"
ARGS="-q -f -u0 -d10 -w -I"
SUSPEND_ACTION="stop"

Once you have that, reboot the rPI and you should be good to go.
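
A quick check after the reboot confirms both interfaces came up with the addresses from the config above:

ifconfig eth0 | grep 'inet addr'    # expect 172.16.0.1
ifconfig wlan0 | grep 'inet addr'   # expect 10.0.1.15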

Setup IP Tables for NAT

This is also pretty straightforward, but for some reason I end up googling it each time. Here it is for reference:

# NAT
iptables --table nat --append POSTROUTING --out-interface wlan0 -j MASQUERADE
iptables --append FORWARD --in-interface eth0 -j ACCEPT
iptables-save | sudo tee /etc/iptables.conf
iptables-restore < /etc/iptables.conf
sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
sed -i "s/exit 0/iptables-restore < \/etc\/iptables.conf\nexit 0/g" /etc/rc.local
sed -i "s/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/g" /etc/sysctl.conf
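
To double-check that the NAT setup took, the following should show the MASQUERADE rule and confirm forwarding is enabled:

iptables -t nat -L POSTROUTING -n -v
cat /proc/sys/net/ipv4/ip_forward    # should print 1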

Configure DHCP, PXE, TFTP, etc

This one is a bit more involved, but is all done via the following bash commands:

MY_IP=$(ifconfig eth0 | awk '/inet addr/ {split ($2,A,":"); print A[2]}')

# Install the things
sudo apt-get install -y dnsmasq nfs-kernel-server syslinux-common

# Setup some shell folders
sudo mkdir -p /tftpboot/images/ubuntu/14.04/amd64
sudo cp -r /usr/lib/syslinux/* /tftpboot/
sudo mkdir -p /tftpboot/pxelinux.cfg/Ubuntu
sudo cp /usr/lib/syslinux/vesamenu.c32 /tftpboot/

# Create a pxe.conf file
sudo tee /tftpboot/pxelinux.cfg/pxe.conf > /dev/null <<EOF
MENU TITLE  PXE Server 
NOESCAPE 1
ALLOWOPTIONS 1
PROMPT 0
MENU WIDTH 80
MENU ROWS 14
MENU TABMSGROW 24
MENU MARGIN 10
MENU COLOR border               30;44      #ffffffff #00000000 std
EOF

# Create our PXE Menu
sudo tee /tftpboot/pxelinux.cfg/default > /dev/null <<EOF
DEFAULT vesamenu.c32 
TIMEOUT 600
ONTIMEOUT BootLocal
PROMPT 0
MENU INCLUDE pxelinux.cfg/pxe.conf
NOESCAPE 1
LABEL BootLocal
        localboot 0
        TEXT HELP
        Boot to local hard disk
        ENDTEXT
MENU BEGIN Ubuntu
MENU TITLE Ubuntu 
        LABEL Previous
        MENU LABEL Previous Menu
        TEXT HELP
        Return to previous menu
        ENDTEXT
        MENU EXIT
        MENU SEPARATOR
        MENU INCLUDE Ubuntu/Ubuntu.menu
MENU END
EOF

sudo tee /tftpboot/pxelinux.cfg/Ubuntu/Ubuntu.menu > /dev/null <<EOF
LABEL 2
        MENU LABEL Ubuntu 14.04 (64-bit)
        kernel tftp://$MY_IP/images/ubuntu/14.04/amd64/install/netboot/ubuntu-installer/amd64/linux
        append auto=true priority=critical vga=788 initrd=tftp://$MY_IP/images/ubuntu/14.04/amd64/install/netboot/ubuntu-installer/amd64/initrd.gz locale=en_US.UTF-8 kbd-chooser/method=us netcfg/choose_interface=auto url=tftp://$MY_IP/preseed.cfg
        TEXT HELP
        Boot the Ubuntu 14.04 64-bit DVD
        ENDTEXT
EOF

# Configure dnsmasq for tftp & dhcp
sudo tee -a /etc/dnsmasq.conf > /dev/null <<EOF
server=$MY_IP@eth0
interface=eth0
no-dhcp-interface=wlan0
dhcp-range=172.16.0.10,172.16.0.253,12h
dhcp-boot=pxelinux.0
pxe-service=x86PC,"Booting from Network...",pxelinux
enable-tftp
tftp-root=/tftpboot
dhcp-boot=pxelinux.0,servername,$MY_IP
EOF
sudo service dnsmasq restart

# Get 14.04 and extract the needful
wget -O ~/ubuntu-14.04-server-amd64.iso http://mirror.anl.gov/pub/ubuntu-iso/CDs/trusty/ubuntu-14.04.1-server-amd64.iso

sudo mkdir /mnt/loop
sudo mount -o loop -t iso9660 ~/ubuntu-14.04-server-amd64.iso /mnt/loop
sudo cp -R /mnt/loop/* /tftpboot/images/ubuntu/14.04/amd64
sudo cp -R /mnt/loop/.disk /tftpboot/images/ubuntu/14.04/amd64
sudo umount /mnt/loop

# Get a generic preseed
sudo wget -O /tftpboot/preseed.cfg https://help.ubuntu.com/lts/installation-guide/example-preseed.txt

# Setup NFS mounts
echo "
/tftpboot/images/ubuntu/    *(ro,sync,no_subtree_check)" | sudo tee -a /etc/exports

# Enable RPCBind, NFS, and restart them
sudo update-rc.d rpcbind enable && sudo update-rc.d nfs-common enable
sudo service rpcbind start
sudo service nfs-kernel-server restart

The commands themselves are commented pretty well. Generically, what they do is install dnsmasq for DHCP & TFTP. From there we download and extract everything we’ll need from the ISO, configure some menus, download a basic preseed, and restart some services.
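
Before pointing real hardware at it, it is worth testing the TFTP side from another machine on the 172.16.0.x network. A minimal check, assuming the tftp-hpa client package is available there:

sudo apt-get install -y tftp-hpa
tftp 172.16.0.1 -c get pxelinux.0
ls -l pxelinux.0    # a non-empty file means the TFTP server is answering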

Summary

In this post, we took a lowly Raspberry Pi and indentured it into some network servitude as a PXE / TFTP server. We did this by installing and configuring dnsmasq for TFTP and DHCP. Additionally, we set up some fancy PXE boot menus and configured them to boot locally as a priority and to the network in times of need. Finally, we pulled down an Ubuntu 14.04 image and a generic preseed file to use for automatic installs.
