Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

ZeroVM - Getting Started with ZeroCloud

If you are getting up to speed on ZeroVM as I am, you will find the following useful:

Additionally, this post does a great job of explaining how ZeroVM fits when you start to approach middling-to-large (read: big data) problems. Go read it now.

Getting Started with ZeroCloud

Now that you’ve read the introduction by Lars, and I trust that you have, we can start looking into the ‘what’ of ZeroCloud. In this post we’ll touch on its architecture and a use case or two. In a subsequent post we’ll show you how to build your own small-scale lab for such an environment.

Note: Also go here and watch this; around the 16:30 mark, pay attention to the ‘t = data / Rate’ bits.

Note: This is where things in ZeroVM start to get really cool.

ZeroCloud Architecture

The best way to describe the what and how of ZeroCloud is with a few diagrams, which we will then dive into. These are borrowed from slides in past presentations.

Highlevel ZeroCloud Architecture

This first diagram is a hugely simplified view of a Swift environment; if you have worked with OpenStack Swift it should look familiar enough. The “Proxy” nodes at the top are Swift proxies, and the storage nodes below are, well, object storage nodes.

What changes for ZeroCloud is the addition of a few ZVM parts made available via Swift middleware. In this case ZVM comprises job brokers, some messaging, ZeroVM itself, as well as an ‘executor’ that handles connecting channels and reading / writing objects in Swift. This next diagram breaks that down further:

ZeroCloud Dataflow Architecture

In this diagram, which is admittedly an eye chart, each step through the ZeroVM Swift middleware is shown. Working left to right:

1. The user initiates a job request via the Swift REST API by sending a POST request containing the code to be executed.

2. This hits the Swift proxy node and is handed to the ZeroVM middleware, which, via a job scheduler, finds the closest replica of your data and sends the request along.

3. The job arrives at the executor, which fires up a net-new ZeroVM session for the job and opens the required IO channels to your file* and, if needed, to other ZeroVM sessions.

4. The ZeroVM job, finished with its work, sends the response back up to the proxy server, which then serves the response back to the user.

Note: Regarding the file in step 3, in the default configuration this is a net-new copy of the file rather than the actual object itself. This prevents accidental damage to the original object.
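To make step 1 above a bit more concrete, here is a rough, purely illustrative sketch of what submitting a job to a ZeroCloud-enabled Swift proxy could look like from Python. The X-Zerovm-Execute header, the job-description layout, and every endpoint, path, and token below are assumptions for illustration only; check the ZeroCloud middleware documentation for the exact schema your version expects.

# Illustrative only: POST a hypothetical job description to a
# ZeroCloud-enabled Swift proxy. The header name, JSON layout,
# endpoint, and token are assumptions -- consult the ZeroCloud docs.
import json
import requests

SWIFT_URL = "http://swift-proxy:8080/v1/AUTH_example"  # hypothetical endpoint
TOKEN = "AUTH_tk_example"                               # hypothetical token

job = [{
    "name": "wordcount",
    "exec": {"path": "swift://AUTH_example/code/mapper.py"},   # code to run
    "file_list": [
        {"device": "stdin", "path": "swift://AUTH_example/data/input.txt"},
        {"device": "stdout"},
    ],
}]

response = requests.post(
    SWIFT_URL,
    headers={
        "X-Auth-Token": TOKEN,
        "X-Zerovm-Execute": "1.0",   # marks this POST as a ZeroCloud job
        "Content-Type": "application/json",
    },
    data=json.dumps(job),
)
print(response.status_code)
print(response.text)

The middleware then takes over, performing steps 2 through 4 above before returning the job’s output in the HTTP response.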

ZeroCloud Use Cases

So what do you do with a deterministic, computable, distributed data store? “A giant MapReduce cluster”, similar to the one described here, is the first thing that jumps to mind. As Lars describes, ZeroCloud can address one of the biggest MapReduce computational issues: data locality.
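To make the MapReduce idea a little more concrete, here is a minimal sketch of the ‘map’ half of a word count as it might run inside a single ZeroVM session. The /dev/input alias is a hypothetical channel device standing in for whatever the manifest maps the input object to; results simply go to stdout for a downstream reducer to merge.

# Minimal word-count mapper sketch for one ZeroVM session.
# Assumption: the manifest maps the input object to /dev/input and
# routes stdout back out; both device names are illustrative only.
from collections import Counter

counts = Counter()
with open('/dev/input', 'r') as source:
    for line in source:
        counts.update(line.split())

# Emit "word<TAB>count" pairs for the reduce step to aggregate.
for word, count in counts.items():
    print("%s\t%d" % (word, count))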

In addition to MapReduce-type workloads, what else can one do? One could couple Elasticsearch and Logstash, and indeed serve Kibana directly from ZeroCloud. Video transcoding is another use case that I’ve seen.

ZeroVM/ZeroCloud, then, works well when you have a data-heavy workload that would benefit from being attacked in a distributed fashion.

Summary

In this post we introduced you to ZeroCloud, the amalgamation of ZeroVM and OpenStack Swift. We dove into the architecture of ZeroCloud and introduced some use cases where having a programmable, distributed storage system is insanely useful.

ZeroVM - IO Operations in ZeroVM

If you are just joining these posts, you might want to take a few minutes and review these other posts on ZeroVM:

IO Operations in ZeroVM

IO in ZeroVM is an odd sort of beast. Because you are not working in a container or a VM, you do not inherit the ability to read from and write to arbitrary places as you otherwise would. Instead, the unit of IO abstraction for ZeroVM is the channel.

Channels

As the intro said, a channel is the unit of IO abstraction within ZeroVM. That means any network communication, any file write, any input from a pipe, or any output to Swift is an individual channel.

Additionally, channels have to be explicitly defined prior to launching a ZeroVM instance. In some ways, this is like specifying the VMDK for a traditional VM to boot from. The requirement to specify all channels beforehand provides a powerful security mechanism, in that all IO is effectively sandboxed. An erroneous process cannot cause damage outside of the narrow constraints defined within said channel.

Channels in ZeroVM are unique in other ways. Each channel, regardless of type, is presented to the instance as a file handle.
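In practice, that means code running inside a ZeroVM instance just uses ordinary file operations against a channel’s alias. The short sketch below assumes a manifest like the one later in this post, which maps an input object to /dev/3.file.txt; stdout is itself just another channel.

# Channels look like plain files from inside the instance.
# /dev/3.file.txt is the alias the manifest later in this post gives
# to the input object; adjust the alias to match your own manifest.
with open('/dev/3.file.txt', 'r') as channel:
    first_bytes = channel.read(5)

print(first_bytes)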

Channel Quotas

In keeping with the security and isolation provided within ZeroVM, along with the explicit channel definition, one must also specify a quota for IO. You specify said quota with four bits of information: (1) total reads; (2) total writes; (3) total read size; and (4) total write size.

Quotas are handled on a ‘first limit hit’ basis. So, if you have 10,000 reads to do, but those reads exceed the total read size in bytes at read number 3,500, well, that’s it: no more reads for you. The inverse is also true: if you run out of the number of allowed reads or writes before you hit the size quota, you will also get a failure returned.

When you do hit this limit, however, ZeroVM tries to fail gracefully, in that you will still have all the data read or written up to that point. Below is an example of what happens when you exceed the number of available reads:

$ dd if=/dev/urandom of=/home/vagrant/file.txt bs=1048576 count=100

$ cat > /home/vagrant/example.py <<EOF
file = open('/dev/3.file.txt', 'r')
print file.read(5)
EOF

$ su - vagrant -c "zvsh --zvm-save-dir ~/ --zvm-image python.tar --zvm-image ~/file.txt python @example.py"

$ sudo sed -i "s|Channel = /home/vagrant/file.txt,/dev/3.file.txt,3,0,4294967296,4294967296,0,0|Channel = /home/vagrant/file.txt,/dev/3.file.txt,3,0,3,3,3,3|g" /home/vagrant/manifest.1
$ sudo sed -i "s|/home/vagrant/stdout.1|/dev/stdout|g" /home/vagrant/manifest.1
$ sudo sed -i "s|/home/vagrant/stderr.1|/dev/stderr|g" /home/vagrant/manifest.1

$ zerovm -t 2 manifest.1

Traceback (most recent call last):
  File "/dev/1.example.py", line 2, in <module>
    print file.read(5)
IOError: [Errno -122] Unknown error 4294967174

What is happening in that example is this:

1. Create a 100MB file full of random bits

2. Create ‘example.py’ to read the first 5 characters in the file

3. Use zvsh to run example.py inside ZeroVM… this should dump some garbage characters to the console.

4. Change the quota from ‘huge’ to 3

5. Adjust the manifest file to allow stderr and stdout to work properly

6. Run ZeroVM with the updated manifest and receive an IO error

Specifying Channels in a Manifest File

So, that example took me way too long to figure out, mostly due to a few small things and one big one: creating the manifest file properly. What follows is a quick way to create a manifest file, as well as our example manifest and a breakdown thereof.

Creating a Manifest File

It turns out that the zvsh utility bundled with ZeroVM has the ability to save the manifest (and other runtime files) when you launch it. To generate the template manifest that is used and modified in this example, run the following command:

zvsh --zvm-save-dir ~/ --zvm-image python.tar --zvm-image ~/file.txt python @example.py

Breakdown of the Manifest File

The manifest that follows is specific to the bits used in our example. It can, however, make a good starting point for your own manifest files.

Node = 1
Version = 20130611
Timeout = 50
Memory = 4294967296,0
Program = /home/vagrant/boot.1
Channel = /dev/stdin,/dev/stdin,0,0,4294967296,4294967296,0,0
Channel = /dev/stdout,/dev/stdout,0,0,0,0,4294967296,4294967296
Channel = /dev/stderr,/dev/stderr,0,0,0,0,4294967296,4294967296
Channel = /home/vagrant/example.py,/dev/1.example.py,3,0,4294967296,4294967296,0,0
Channel = /home/vagrant/python.tar,/dev/2.python.tar,3,0,4294967296,4294967296,0,0
Channel = /home/vagrant/file.txt,/dev/3.file.txt,3,0,3,3,3,3
Channel = /home/vagrant/boot.1,/dev/self,3,0,4294967296,4294967296,0,0
Channel = /home/vagrant/nvram.1,/dev/nvram,3,0,4294967296,4294967296,4294967296,4294967296

Taken line by line:

1. Node - an optional parameter specifying the node number.

2. Version - mandatory, since different manifest versions are incompatible. This tells both ZeroVM and you which version of the manifest you are using.

3. Timeout - in seconds, and mandatory. This prevents long-running ZeroVM jobs from consuming excess resources.

4. Memory - also mandatory. This is a comma-separated value, where the first position is a 32-bit value representing memory; in our case, the maximum of 4GB. The second number can be either 0 or 1, specifying whether an eTag is used to checksum all data passing through ZeroVM.

5. Program - specifies the ZeroVM cross-compiled binary that we will run.

6. Channel - the channel definitions. These follow the format:

Channel = <uri>,<alias>,<type>,<etag>,<gets>,<get_size>,<puts>,<put_size>
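As a quick sanity check, you can split any channel line from the manifest above into those eight fields. The snippet below is purely illustrative; it just maps the file.txt channel from our example onto the field names in the format string:

# Break the example file.txt channel into its named fields.
line = "Channel = /home/vagrant/file.txt,/dev/3.file.txt,3,0,3,3,3,3"
fields = ["uri", "alias", "type", "etag", "gets", "get_size", "puts", "put_size"]
values = line.split("=", 1)[1].strip().split(",")
for name, value in zip(fields, values):
    print("%-9s %s" % (name, value))

Read this way, it is easy to see that the last four fields (gets, get_size, puts, put_size) are the quota values we shrank to 3 earlier in order to trigger the IO error.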

Summary

In this post we covered how IO is handled in ZeroVM. We first explained the channel concept, then how quotas are handled within channels, and finally how to define a channel or channels for a ZeroVM run using a manifest.

ZeroVM Link Dump

Instead of curating my own links this time, my friend and colleague Carina C. Zona, ZeroVM’s community manager, has gathered these as they relate to ZeroVM:

Posts by Lars

“ZeroVM Architecture and ZVM Runtime (ZRT)”

Ryan McKinney at University of Texas San Antonio Cloud & Big Data Laboratory [2014-08-05]

“Changing the World with ZeroVM and Swift”

Jakub Krajcovic at PyConAU OpenStack Miniconf [2014-08-01]

“Deep Dive into ZeroVM”

Van Lindberg at OSCON Expo [2014-07-23]

  • (materials unavailable)

“What Is ZeroVM?”

Carina C. Zona at OSCON Expo [2014-07-22]

“ZeroVM: Virtualization for the Cloud”

Lars Butler at EuroPython [2014-07-21]

“Introduction to ZeroVM”

Muharem Jrnjadovic at Open Cloud Day [2014-07]

“NoSQL - Computable Object Store with OpenStack Swift and ZeroVM”

(aka “Programmable Object Store with OpenStack Swift and ZeroVM”) Adrian Otto at BigDataCamp [2014-06-14]

“Using ZeroVM and Swift to Build a Compute Enabled Storage Platform”

Blake Yeager & Camuel Gilydov at OpenStack Summit [2014-05-14]

“Using ZeroVM and Swift to Build a Compute Enabled Storage Platform”

Blake Yeager at Open BigCloud Symposium [2014-05-08]

  • (materials unavailable)

“ZeroVM Background”

Prosunjit Biswas at University of Texas at San Antonio [UTSA] Institute of Cyber Security [2014-04-23]

“Process Virtualization with ZeroVM”

Jakub Krajcovic at rax.io [2014-02-26]

“ZeroVM Zwift: OpenStack Platform”

Camuel Gilydov and Constantine Peresypkin at OpenStack Summit [2013-04]

“Big Data on OpenStack”

Camuel Gilydov at OpenStack Israel [2012-06]

“Containers or VMs: Which Virtualization Technology Works for You?”

Panel discussion with ZeroVM, Docker, and VMware at while42 meetup [2014-03-04]

ZeroVM - Getting Started, Again

Ok, so now that we’ve covered a LOT of ZeroVM background & isolation details, let’s actually start to get our hands dirty.

tl;dr - git clone https://github.com/bunchc/vagrant-zerovm.git && cd vagrant-zerovm && vagrant up

Getting Started

ZeroVM has packages for Ubuntu 12.04, so in order to get started you will need either hardware with 12.04 installed or a VM of the same. Once you have it up and running you are ready to install.

Installing ZeroVM

To install ZeroVM, log into your Ubuntu 12.04 setup and run the following commands:

Install some needed packages:

sudo apt-get update
sudo apt-get install -y curl wget

Install the ZeroVM repository and key:

sudo su -c 'echo "deb http://packages.zerovm.org/apt/ubuntu/ precise main" > /etc/apt/sources.list.d/zerovm-precise.list' 
wget -O- http://packages.zerovm.org/apt/ubuntu/zerovm.pkg.key | sudo apt-key add - 

Next, refresh our repository and install ZeroVM.

sudo apt-get update
sudo apt-get install -y zerovm zerovm-cli

With that, you have ZeroVM installed.

By itself that is not very interesting, so let’s show off the Python version of “Hello world” run inside ZeroVM:

wget http://packages.zerovm.org/zerovm-samples/python.tar 
echo 'print "Hello"' > hello.py

zvsh --zvm-image python.tar python @hello.py 

The first command fetches a cPython that has been cross-compiled to work in ZeroVM. Next we set up our example file and run it. There are some even more interesting examples if you head over to the ZeroVM downloads site.

Summary

In this post, you set up an Ubuntu 12.04 machine, installed the ZeroVM apt repositories, and installed ZeroVM. Finally, you created a Python hello world and ran it within your newly installed ZeroVM.

ZeroVM - Isolation

Here we go with another post on ZeroVM, this time to help get you up to speed on how ZeroVM provides isolation. For a reminder of what ZeroVM is and how it differs from containers and other virtualization technologies, see my prior post here, or the ZeroVM docs here.

Isolation

The ZeroVM site itself doesn’t say much about the isolation provided by ZeroVM. This is largely because it derives its isolation from the Google Native Client (NaCl) project. The tl;dr for NaCl is that it requires applications to be ported over to its sandboxing environment, in which only a subset of processor instructions is made available.

The sandbox environment provided by NaCl restricts the processor instructions available to your code, preventing syscalls that could be destructive. Further, ZeroVM takes the roughly 50 syscalls available in NaCl and reduces that to six. Meaning, if your code contains a syscall other than one of the six currently allowed, it will fail when ZeroVM attempts to execute that syscall. The six allowed syscalls are currently:

  • pread
  • pwrite
  • jail
  • unjail
  • fork
  • exit

We can test this by trying a plain read/write instead of pread/pwrite with the following C code:

#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>

int main()
{
    char data[128];

    /* read() and write() are not among the six syscalls ZeroVM allows,
       so this fails once ZeroVM traps the call. */
    if (read(0, data, 128) < 0)
        write(2, "An error occurred in the read.\n", 31);

    return 0;
}

This is what it looks like when it implodes:

$ zvsh syscalls
----------/tmp/tmpl7Mi1v----------
Node = 1
Version = 20130611
Timeout = 50
Memory = 4294967296, 0
Program = /home/vagrant/syscalls/syscalls
Channel = /dev/stdin,/dev/stdin,0,0,4294967296,4294967296,0,0
Channel = /tmp/tmpOfgnG6/stdout,/dev/stdout,0,0,0,0,4294967296,4294967296
Channel = /tmp/tmpOfgnG6/stderr,/dev/stderr,0,0,0,0,4294967296,4294967296
Channel = /tmp/tmpeNIzx4,/dev/nvram,3,0,4294967296,4294967296,4294967296,4294967296
-------------------------
----------/tmp/tmpeNIzx4----------
[args]
args = syscalls
[mapping]
channel=/dev/stdin,mode=char
channel=/dev/stdout,mode=char
channel=/dev/stderr,mode=char

-------------------------
2
0
0
disable
0.00 0.00 0 0 0 0 0 0 0 0
src/loader/elf_util.c 178: Segment 0 is of unexpected type 0x6, flag 0x5

ERROR: ZeroVM return code is 8

What is important here, at least in terms of telling why it failed, is: ERROR: ZeroVM return code is 8. Error codes 1 - 17 indicate an issue in untrusted code. In our example, it was specifically the read / write calls that caused the issue.

Down The Rabbit Hole

Time to jump down the rabbit hole into what actually just happened. First, some diagrams:

ZeroVM Stack

In this first diagram, you are looking at the internal structure of the ZeroVM process.

ZVM Trusted Code Base Architecture

Working from the outside in, the grey area represents everything that is ZeroVM: all of its runtime data, IO channels, virtual file systems, and, most importantly, the user code to be executed.

Next, the pink area is where end-user code gets loaded up; from there a call progresses down the stack. In the last, white layer you can see the individual syscalls as implemented in ZeroVM, as well as their corresponding traps.

One last thing to note is the two syscalls zvm_pread and zvm_pwrite, their traps and handlers, and this thing called a trampoline. Trampolines are the mechanism that facilitates the switch from untrusted to trusted execution. A syscall starts in the ‘untrusted’ context; if it is an allowed call, the ZRT (ZeroVM Run Time), via its hooks into the ZVM syscall API, passes the call to a “trampoline” that performs the context switch needed to allow the syscall to execute in trusted space.

ZeroVM Guest Memory Footprint

This next diagram offers a different view of the ZeroVM guest described above. In it, you will see how the NaCl & ZeroVM trampolines are laid out in memory, how a call in turn gets passed down to the corresponding syscall and the trusted ELF binary, and where the rest of ZeroVM is contained.

Anatomy of the ZeroVM Guest Memory footprint

Summary

In this post, we took a deeper look into how isolation is handled within ZeroVM. We started with a high-level discussion of NaCl and ZeroVM’s implementation of a limited set of syscalls. We then attempted to execute an invalid syscall. Finally, we got into the nitty-gritty of how that syscall is trapped via the ZeroVM runtime, and where in its memory layout the context switch from untrusted to trusted code happens.

Resources

ZeroVM - Some Background

In a little over three weeks, at the Paris Summit, I’ll be helping give a 90-minute workshop on ZeroVM. 90 minutes is not actually a lot of time to get a good grasp on what ZeroVM is, how it operates, and, most importantly, why it is important.

These next few posts, however, will hopefully shed some light on that.

What is ZeroVM?

From the ZeroVM website:

ZeroVM is an open source virtualization technology that is based on the Chromium Native Client (NaCl) project. ZeroVM creates a secure and isolated execution environment which can run a single thread or application

Read that over a few times. What it is saying is that where Docker provides packaging and traditional virtualization provides full OS isolation, ZeroVM provides an environment specific to a single application or thread.

Explained another way, ZeroVM provides isolation at the thread or application level. It provides a “sandbox” environment for you to run arbitrary “untrusted” code. Some examples of how this could be useful can be found here and here.

ZeroVM does this by using NaCl from Google to provide isolation and security. ZeroVM also has a number of other layers to provide additional services for things like a POSIX file system, IO, and channels.

Use Cases

Due to ZeroVM’s small size, speed, and isolation model, it is able to solve some problems that are very “big data” or cloud centric. Let’s take a look at some:

Where Does ZeroVM Fit?

This diagram from the Atlanta OpenStack Summit should help:

ZeroVM vs Traditional Virt

Additionally, this OpenStack Summit presentation should help clarify some as well.

Traditional Virtualization

Traditional virtualization is perhaps the easiest to cover; it’s the most familiar way of isolating the things. I’ve been running VMware Workstation/Fusion, their enterprise products, etc. for ages, and I imagine a lot of you have as well. To recap, though:

  • Type 1 or 2 hypervisor
  • Each VM is a set of processes
  • CPU instructions virtualized (it’s a bit more complicated nowadays with hardware extensions)
  • Carries around a full OS installation
  • All resources isolated
  • Very few published VM escapes

A few clarifying points. I’ve simplified the CPU conversation for this discussion. That is, while hardware extensions provide native or near-native execution, there is still quite a bit happening in software for those instructions which do not translate. Additionally, “all resources isolated” is a bit of a simplification as well. Most modern hypervisors will do some manner of memory sharing, compression, and consolidation; however, memory is still generally isolated to a specific VM.

Containers (Docker / LXC)

Containers are all the rage these days. Docker has popularized and simplified the way we manage them. You get the same or better benefit from the shared hardware; however, containers strip away quite a few of the layers:

  • Shared Kernel / OS
  • Super low overhead
  • Fast startup
  • ‘secure’
  • Managed via namespaces and process isolation

The important things to note here are the shared kernel bits and the low overhead. The shared kernel basically means everything runs on the same Linux kernel, and thus has the same features and limits therein. As an upside, you also shed having to carry around a full OS installation, networking stack, etc., and thus have much less overhead. This enables faster startup times.

ZeroVM

Finally, and most important to what we’re talking about, is ZeroVM, which sits somewhere a bit beyond containers. That is, you still have:

  • Shared hardware
  • Low Overhead
  • Fast Startup

However, you are no longer carrying an OS or kernel around. In turn, this further minimizes the attack surface; there are a total of six system calls available from within ZeroVM.

In addition, due to its extremely small size, one can spin up a new ZeroVM instance in milliseconds, versus seconds or minutes for the other technologies. This speed advantage lends itself well to one of ZeroVM’s primary use cases in the data-pipeline / data-processing space; indeed, it is the magic that makes ZeroCloud work. It does, however, require a different way of thinking about and approaching the problem.

That is, ZeroVM instances are designed to be extremely temporary in nature: load your data in, handle the processing, push the result back out to disk, and move on. Conceptually, it takes you a few steps further down the ‘everything is disposable’ path, insofar as you will need to design and rewrite your apps to work with ZeroVM.

Summary

While a wall of words, I hope I have provided some clarity around what ZeroVM is and where it fits.

Instance Shelving

Shelving? No no no, not that kind of shelving. Similar, however. Instance shelving in Nova allows you to power off instances without paying the resource penalty of keeping them around.

Instance Shelving

Before instance shelving in OpenStack, if a user powered down an instance, the resources would still be in use on the compute node that housed said instance. Quite wasteful, no? Say an instance is powered off for 72+ hours. Shelving allows you to keep all the various bits associated with the VM while moving it somewhere that is not the hypervisor, to conserve resources.

Working with Shelving

The assumption here is that you have either devstack or some other flavor of OpenStack running. There isn’t anything additional you need to configure to make this work.

First, let’s log into our controller and look around at what’s running:

# nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                                      |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | ACTIVE | -          | Running     | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------------------+

Next, let’s shelve an instance:

# nova shelve test1

Did it get shelved?

# nova list
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status            | Task State | Power State | Networks                                      |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | SHELVED_OFFLOADED | -          | Shutdown    | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+

Note: The change in status to “SHELVED_OFFLOADED”

Now let’s unshelve it & check status:

# nova unshelve test1
# nova list
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| ID                                   | Name  | Status            | Task State | Power State | Networks                                      |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+
| 05617907-3931-414f-88f3-fd180f69fde6 | test1 | SHELVED_OFFLOADED | spawning   | Shutdown    | cookbook_network_1=10.200.0.2, 192.168.100.11 |
+--------------------------------------+-------+-------------------+------------+-------------+-----------------------------------------------+

There we go!

Summary

In this post, we worked with one of the new-ish features in Nova: instance shelving. It is good for when you need to stop instances for a long time in a non-impactful way.

References

Running Rackspace Private Cloud on the Rackspace Public Cloud

Private cloud on the public cloud? As odd as that sounds, or as Inception-like as it makes you feel (clouds in clouds?!), I’ve found that since downsizing my home lab quite a bit, I’ve needed to find other ways to work on and try out various things that exceed the capacity of my laptop. RPC 9 is one of them.

Disclaimer: I work for Rackspace, and while this post is largely focused around two of our products, I put it out here in the hopes that a) someone will find it useful, and b) it’ll help me later.

Rackspace Private Cloud (RPC)

Our docs do the description a lot more justice than I can, so I encourage you to go here and take a few minutes to get familiar.

A few things I want to point out relate to the architecture of RPC 9, specifically:

RPC Infrastructure

Looking over the diagram, there are a lot of hosts involved now: three infrastructure nodes, a logging host, n compute hosts, deployment hosts, and finally a set of load balancers. This thing is big. Bigger than my laptop, that’s for sure.

Running Cloud on Cloud

Thankfully, while it is big, the folks who wrote it provided some OpenStack Heat templates that make setting it up on external infrastructure much easier. Those can be found here.

Getting Started

To build the cloud on the cloud you’ll need the following info & apps installed somewhere you have access to:

  • A “heatrc” or similar file containing:
    • Rackspace Username
    • Rackspace API Key
    • Endpoint(s) to deploy to
  • Python-HeatClient
  • An SSH Key

To create the “heatrc” file, start with the below template and then edit as needed:

export OS_USERNAME=rackspace_cloud_username
export OS_PASSWORD=rackspace_cloud_password
export OS_TENANT_ID=rackspace_cloud_account_number
export OS_AUTH_URL=https://identity.api.rackspacecloud.com/v2.0/
export HEAT_URL=https://ord.orchestration.api.rackspacecloud.com/v1/${OS_TENANT_ID}

Note: OS_TENANT_ID is your cloud account number. You can get to this by logging into mycloud.rackspace.com and clicking your account name in the upper right.

Once you’ve created the file and replaced said values, install the Heat client: pip install python-heatclient

Installing RPC 9 on the Rackspace Public Cloud

To kick off the installation, follow these commands:

curl https://raw.githubusercontent.com/rcbops/ansible-lxc-rpc/master/scripts/rpc9.0.0-aio-rax-heat-template.yml > rpc9-rax-heat.yaml

source ./heatrc

heat stack-create RPC9-Stack -f ./rpc9-rax-heat.yaml \
  -P image_name="Ubuntu 14.04 LTS (Trusty Tahr)" \
  -P ssh_key_name="lol_ssh_key" \
  -P flavor_name="8 GB Performance"

+--------------------------------------+------------+--------------------+----------------------+
| id                                   | stack_name | stack_status       | creation_time        |
+--------------------------------------+------------+--------------------+----------------------+
| c2b6c1b0-0098-441d-9999-c778b108a181 | RPC9-Stack | CREATE_IN_PROGRESS | 2014-10-13T15:13:58Z |
+--------------------------------------+------------+--------------------+----------------------+

This next part takes quite a bit of time to complete, which is why we used a performance instance: to make the provisioning happen just a bit faster. You can keep an eye on its build status with watch -n 15 heat stack-list

Once it completes, you will need to find out what IP address it has been assigned. To do that, use these commands:

List the stacks:

$ heat stack-list
+--------------------------------------+------------+-----------------+----------------------+
| id                                   | stack_name | stack_status    | creation_time        |
+--------------------------------------+------------+-----------------+----------------------+
| c2b6c1b0-0098-441d-9999-c778b108a181 | RPC9-Stack | CREATE_COMPLETE | 2014-10-13T15:13:58Z |
+--------------------------------------+------------+-----------------+----------------------+

List the available outputs:

$ heat output-list RPC9-Stack
+------------------+-------------------------------------------------------+
| output_key       | description                                           |
+------------------+-------------------------------------------------------+
| RPCAIO_password  | The password for all the things.                      |
| RPCAIO_public_ip | The public IP address of the newly configured Server. |
+------------------+-------------------------------------------------------+

Finally show the IP:

$ heat output-show RPC9-Stack RPCAIO_public_ip
"127.0.0.100"

Summary

In this post we showed you how to nest a Rackspace Private Cloud installation on the Rackspace Public Cloud. A useful trick for testing it out without using up ALL of your local resources.

OpenStack Cookbook 3rd Edition

Oh Oh Oh! The lesson in book writing is that it is both terrible and addictive. That is, right as we wrapped the second edition of the OpenStack Cookbook, I promised myself “Never again!”.

Now, some number of months later, Kevin and I have chatted a bit and decided to go down that road to update the book again. Indeed, we’re also looking at bringing Egle along for the ride this time.

3rd Edition Highlights

The third edition will target either a late Juno or early Kilo release (hooray, relevancy!). In addition to general updates, we’re adding or overhauling the following:

  • OpenStack Datacenter Automation
  • OpenStack Scaling
  • Image Management & Conversion
  • More Operations Recipes
  • HTTPS!
  • OpenStack Heat
  • Additional Neutron Services (LBaaS, VPNaaS)
  • Using 3rd Party drivers

30 Posts in 30 Days

I found this morning that I was in a bit of a blogging slump. That is, I’d not posted anything in quite a while, even though I have plenty of exciting things going on at the moment. So, with that said, after happening across this post by Greg Ferro (@etherealmind), I thought I’d jump into the fray.

I encourage you to do the same.