Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

RedHat OpenShift Hands On Workshop, San Antonio - Raw Notes

RedHat came to town recently to give a one-day, almost entirely lab-driven workshop on OpenShift. The workshop was well put together, and the live labs were overall pretty good.

What follows are my raw notes from the lab, sanitized of usernames and passwords, with some light editing of the bits that were pretty ugly.


Begin notes

The parksmap image: docker.io/openshiftroadshow/parksmap:1.2.0

Check status

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          33m

$ oc status
In project explore-xx on server https://127.0.0.1:443

svc/parksmap - 172.30.203.33:8080
  dc/parksmap deploys istag/parksmap:1.2.0
    deployment #1 deployed 32 minutes ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

$ oc status -v
In project explore-xx on server https://127.0.0.1:443

svc/parksmap - 172.30.203.33:8080
  dc/parksmap deploys istag/parksmap:1.2.0
    deployment #1 deployed 32 minutes ago - 1 pod

Warnings:
  * Unable to list statefulsets resources.  Not all status relationships can be established.

Info:
  * dc/parksmap has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/parksmap --readiness ...
  * dc/parksmap has no liveness probe to verify pods are still running.
    try: oc set probe dc/parksmap --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
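
For reference, filling in those suggested probe commands would look something like the following. The URL here is a guess; the workshop itself only sets real probes on the nationalparks app later on.

$ oc set probe dc/parksmap --readiness --get-url=http://:8080/ --initial-delay-seconds=20 --timeout-seconds=1
$ oc set probe dc/parksmap --liveness --get-url=http://:8080/ --initial-delay-seconds=20 --timeout-seconds=1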

Scaling

Get some info about the pod before

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          33m

$ oc get dc
NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
parksmap   1          1         1         config,image(parksmap:1.2.0)

$ oc get rc
NAME         DESIRED   CURRENT   READY     AGE
parksmap-1   1         1         1         33m

Scale the deployment config

$ oc scale --replicas=2 dc/parksmap
deploymentconfig "parksmap" scaled

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          34m
parksmap-1-tuj0b   1/1       Running   0          9s

Review the new config

$ oc describe svc parksmap
Name:           parksmap
Namespace:      explore-xx
Labels:         app=parksmap
Selector:       deploymentconfig=parksmap
Type:           ClusterIP
IP:         172.30.203.33
Port:           8080-tcp    8080/TCP
Endpoints:      10.1.16.37:8080,10.1.20.20:8080
Session Affinity:   None
No events.

$ oc get endpoints
NAME       ENDPOINTS                         AGE
parksmap   10.1.16.37:8080,10.1.20.20:8080   35m

Autohealing: This deletes one of the pods, then watches the replacement get created:

oc delete pod parksmap-1-a3ppj; watch "oc get pods"

Scale down: This sets us back to one replica and then watches the extra pod terminate.

oc scale --replicas=1 dc/parksmap; watch "oc get pods"

Routes

Get routes:

$ oc get routes

Get the name of our service:

$ oc get services

Expose it:

$ oc expose service parksmap
$ oc get routes
NAME       HOST/PORT                                                   PATH      SERVICES   PORT       TERMINATION   WILDCARD
parksmap   parksmap-explore-xx.cloudapps.ksat.openshift3roadshow.com             parksmap   8080-tcp                 None

Logs

Get logs:

$ oc logs parksmap-1-a3ppj
14:47:51.350 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from Kubernetes config...
14:47:51.373 [main] DEBUG io.fabric8.kubernetes.client.Config - Did not find Kubernetes config at: [/.kube/config]. Ignoring.
14:47:51.373 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from service account...
14:47:51.374 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
2017-04-04 14:47:53.101  WARN 1 --- [           main] i.f.s.cloud.kubernetes.StandardPodUtils  : Failed to get pod with name:[parksmap-1-a3ppj]. You should look into this if things aren't working as you expect. Are you missing serviceaccount permissions?

Also pod, then archive, loads EFK

RBAC

Fix service account

$ oc policy add-role-to-user view -z default
role "view" added: "default"

Grant other users access:

$ oc policy add-role-to-user view userxx
role "view" added: "userxx"

View accesses:

$ oc describe policyBindings :default -n explore-xx
Name:                   :default
Namespace:              explore-xx
Created:                21 hours ago
Labels:                 <none>
Annotations:                <none>
Last Modified:              2017-04-04 10:50:03 -0500 CDT
Policy:                 <none>
RoleBinding[admin]:
                    Role:           admin
                    Users:          userxx
                    Groups:         <none>
                    ServiceAccounts:    <none>
                    Subjects:       <none>
RoleBinding[system:deployers]:
                    Role:           system:deployer
                    Users:          <none>
                    Groups:         <none>
                    ServiceAccounts:    deployer
                    Subjects:       <none>
RoleBinding[system:image-builders]:
                    Role:           system:image-builder
                    Users:          <none>
                    Groups:         <none>
                    ServiceAccounts:    builder
                    Subjects:       <none>
RoleBinding[system:image-pullers]:
                    Role:           system:image-puller
                    Users:          <none>
                    Groups:         system:serviceaccounts:explore-xx
                    ServiceAccounts:    <none>
                    Subjects:       <none>
RoleBinding[view]:
                    Role:           view
                    Users:          userxx
                    Groups:         <none>
                    ServiceAccounts:    default
                    Subjects:       <none>

Show service accounts:

$ oc describe serviceaccounts -n explore-xx
Name:       builder
Namespace:  explore-xx
Labels:     <none>

Mountable secrets:  builder-dockercfg-z921w
                    builder-token-22bfm

Tokens:             builder-token-0imdk
                    builder-token-22bfm

Image pull secrets: builder-dockercfg-z921w


Name:       default
Namespace:  explore-xx
Labels:     <none>

Mountable secrets:  default-token-yhj99
                    default-dockercfg-q4i5u

Tokens:             default-token-f9zyz
                    default-token-yhj99


Image pull secrets: default-dockercfg-q4i5u

Name:       deployer
Namespace:  explore-xx
Labels:     <none>


Image pull secrets: deployer-dockercfg-bwpor

Mountable secrets:  deployer-token-ajlo3
                    deployer-dockercfg-bwpor

Tokens:             deployer-token-ajlo3
                    deployer-token-aqcyk

Name:       jenkins
Namespace:  explore-xx
Labels:     app=jenkins-ephemeral
        template=jenkins-ephemeral-template

Mountable secrets:  jenkins-token-16g9q
                    jenkins-dockercfg-x2ftc

Tokens:             jenkins-token-16g9q
                    jenkins-token-l24vf

Image pull secrets: jenkins-dockercfg-x2ftc

Redeploy app:

$ oc deploy parksmap --latest --follow
Flag --latest has been deprecated, use 'oc rollout latest' instead
Started deployment #2
--> Scaling up parksmap-2 from 0 to 1, scaling down parksmap-1 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling parksmap-2 up to 1
    Scaling parksmap-1 down to 0
--> Success
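
The deprecation warning points at the newer syntax; the equivalent command (used later in these notes) would be:

$ oc rollout latest dc/parksmap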

Check on that:

$ oc get dc/parksmap
NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
parksmap   2          1         1         config,image(parksmap:1.2.0)

Remote shell

Get pods, then login:

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-2-k7o7m   1/1       Running   0          2m

$ oc rsh parksmap-2-k7o7m
sh-4.2$

One-off commands:

$ oc exec parksmap-2-k7o7m -- ls -l /parksmap.jar
-rw-r--r--. 1 root root 21753930 Feb 20 11:14 /parksmap.jar

$ oc rsh parksmap-2-k7o7m whoami
whoami: cannot find name for user ID 1001050000 

S2I deploys

$ oc new-app --image="simple-java-s2i:latest" --name="nationalparks" http://gitlab-127.0.0.1/userxx/nationalparks.git
Flag --image has been deprecated, use --image-stream instead
--> Found image e2182f7 (6 months old) in image stream "openshift/simple-java-s2i" under tag "latest" for "simple-java-s2i:latest"

    Java S2I builder 1.0
    --------------------
    Platform for building Java (fatjar) applications with maven or gradle

    Tags: builder, maven-3, gradle-2.6, java, microservices, fatjar

    * The source repository appears to match: jee
    * A source build using source code from http://gitlab-127.0.0.1/userxx/nationalparks.git will be created
      * The resulting image will be pushed to image stream "nationalparks:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "nationalparks"
    * Port 8080/tcp will be load balanced by service "nationalparks"
      * Other containers can access this service through the hostname "nationalparks"

--> Creating resources ...
    imagestream "nationalparks" created
    buildconfig "nationalparks" created
    deploymentconfig "nationalparks" created
    service "nationalparks" created
--> Success
    Build scheduled, use 'oc logs -f bc/nationalparks' to track its progress.
    Run 'oc status' to view your app.

Check status:

$ oc get builds
NAME              TYPE      FROM          STATUS     STARTED          DURATION
nationalparks-1   Source    Git@240e177   Complete   57 seconds ago   52s

Build logs:

$ oc logs -f builds/nationalparks-1
Pushing image 172.30.17.230:5000/explore-xx/nationalparks:latest ...
Pushed 0/12 layers, 0% complete
Pushed 1/12 layers, 15% complete
Pushed 2/12 layers, 22% complete
Pushed 3/12 layers, 29% complete
Pushed 4/12 layers, 41% complete
Pushed 5/12 layers, 52% complete
Pushed 6/12 layers, 59% complete
Pushed 7/12 layers, 65% complete
Pushed 8/12 layers, 73% complete
Pushed 9/12 layers, 83% complete
Pushed 10/12 layers, 96% complete
Pushed 11/12 layers, 100% complete
Pushed 12/12 layers, 100% complete
Push successful

Add a DB

$ oc new-app --template="mongodb-ephemeral" \
    -p MONGODB_USER=mongodb \
    -p MONGODB_PASSWORD=mongodb \
    -p MONGODB_DATABASE=mongodb \
    -p MONGODB_ADMIN_PASSWORD=mongodb

Wire the DB to the rest

$ oc env dc nationalparks \
    -e MONGODB_USER=mongodb \
    -e MONGODB_PASSWORD=mongodb \
    -e MONGODB_DATABASE=mongodb \
    -e MONGODB_SERVER_HOST=mongodb

deploymentconfig "nationalparks" updated

$ oc get dc nationalparks -o yaml

      - env:
        - name: MONGODB_USER
          value: mongodb
        - name: MONGODB_PASSWORD
          value: mongodb
        - name: MONGODB_DATABASE
          value: mongodb
        - name: MONGODB_SERVER_HOST
          value: mongodb

$ oc env dc/nationalparks --list
# deploymentconfigs nationalparks, container nationalparks
MONGODB_USER=mongodb
MONGODB_PASSWORD=mongodb
MONGODB_DATABASE=mongodb
MONGODB_SERVER_HOST=mongodb

Set some labels:

oc label route nationalparks type=parksmap-backend

Redeploy the front-end:

oc rollout latest parksmap

Config Maps

$ wget http://gitlab-127.0.0.1/user98/nationalparks/raw/1.2.1/ose3/application-dev.properties

$ oc create configmap nationalparks --from-file=application.properties=./application-dev.properties

Describe it:

$ oc describe configmap nationalparks
Name:       nationalparks
Namespace:  explore-xx
Labels:     <none>
Annotations:    <none>

Data
====
application.properties: 123 bytes

$ oc get configmap nationalparks -o yaml
apiVersion: v1
data:
  application.properties: |
    # NationalParks MongoDB
    mongodb.server.host=mongodb
    mongodb.user=mongodb
    mongodb.password=mongodb
    mongodb.database=mongodb
kind: ConfigMap
metadata:
  creationTimestamp: 2017-04-04T16:54:38Z
  name: nationalparks
  namespace: explore-xx
  resourceVersion: "298191"
  selfLink: /api/v1/namespaces/explore-xx/configmaps/nationalparks
  uid: 638b0913-1957-11e7-9e39-02ef4875286e

Wire up the configmap:

$ oc set volumes dc/nationalparks --add -m /opt/openshift/config --configmap-name=nationalparks
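
To double-check the volume landed, running oc set volumes against the dc with no flags should simply list its volumes:

$ oc set volumes dc/nationalparks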

Now remove the env variables:

$ oc env dc/nationalparks MONGODB_USER- MONGODB_PASSWORD- MONGODB_DATABASE- MONGODB_SERVER_HOST-

Set up some probes:

$ oc set probe dc/nationalparks \
    --readiness \
    --get-url=http://:8080/ws/healthz/ \
    --initial-delay-seconds=20 \
    --timeout-seconds=1
$ oc set probe dc/nationalparks \
    --liveness \
    --get-url=http://:8080/ws/healthz/ \
    --initial-delay-seconds=20 \
    --timeout-seconds=1

CICD Lab

Deploy Jenkins:

$ oc new-app --template="jenkins-ephemeral"

Add permission:

$ oc policy add-role-to-user edit -z jenkins
role "edit" added: "jenkins"

Remove the route label:

$ oc label route nationalparks type-

Create mongo-live

$ oc new-app --template="mongodb-ephemeral" \
    -p MONGODB_USER=mongodb \
    -p MONGODB_PASSWORD=mongodb \
    -p MONGODB_DATABASE=mongodb \
    -p MONGODB_ADMIN_PASSWORD=mongodb \
    -p DATABASE_SERVICE_NAME=mongodb-live

Pull down new configmap:

$ wget http://gitlab-ce-workshop-infra.cloudapps.ksat.openshift3roadshow.com/user98/nationalparks/raw/1.2.1/ose3/application-live.properties

$ oc create configmap nationalparks-live --from-file=application.properties=./application-live.properties

Tag our live build:

$ oc tag nationalparks:latest nationalparks:live

Use our new build:

$ oc new-app --image="nationalparks:live" --name="nationalparks-live"

Set env variables (because configmap is broken in this lab):

$ oc env dc/nationalparks-live \
    -e MONGODB_USER=mongodb \
    -e MONGODB_PASSWORD=mongodb \
    -e MONGODB_DATABASE=mongodb \
    -e MONGODB_SERVER_HOST=mongodb-live

Add a route, load the data:

$ oc expose service nationalparks-live
curl http://nationalparks-live-explore-xx.127.0.0.1/ws/data/load

Add a label:

oc label route nationalparks-live type=parksmap-backend

Disable auto builds for latest:

oc set triggers dc/nationalparks --from-image=nationalparks:latest --remove

Create pipeline:

$ oc new-app dev-live-pipeline \
→     -p PROJECT_NAME=explore-xx
--> Deploying template "openshift/dev-live-pipeline" to project explore-xx

     dev-live-pipeline
     ---------
     CI/CD Pipeline for Dev and Live environments


     * With parameters:
        * Pipeline name=nationalparks-pipeline
        * Project name=explore-xx
        * Dev resource name=nationalparks
        * Live resource name=nationalparks-live
        * ImageStream name=nationalparks
        * GitHub Trigger=a5iYjDTN # generated
        * Generic Trigger=FY3tGSrP # generated

--> Creating resources ...
    buildconfig "nationalparks-pipeline" created
--> Success
    Use 'oc start-build nationalparks-pipeline' to start a build.
    Run 'oc status' to view your app.

Start the pipeline:

$ oc start-build nationalparks-pipeline
build "nationalparks-pipeline-1" started

Check our data. This spits out a boatload of text/JSON data:

curl http://nationalparks-live-explore-xx.127.0.0.1/ws/data/all

Promote the pipeline via the GUI.

Rollback:

$ oc rollback nationalparks-live
#5 rolled back to nationalparks-live-3
Warning: the following images triggers were disabled: nationalparks:live
  You can re-enable them with: oc set triggers dc/nationalparks-live --auto

Check on that:

curl http://nationalparks-live-explore-xx.127.0.0.1/ws/info/

Re-enable the new images trigger:

$ oc deploy nationalparks-live --enable-triggers
Flag --enable-triggers has been deprecated, use 'oc set triggers' instead
Enabled image triggers: nationalparks:live
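
Per the rollback warning above, the non-deprecated way to do the same thing would be:

$ oc set triggers dc/nationalparks-live --auto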

Roll forward:

$ oc rollback nationalparks-live-4
#6 rolled back to nationalparks-live-4
Warning: the following images triggers were disabled: nationalparks:live
  You can re-enable them with: oc set triggers dc/nationalparks-live --auto

Links

  • https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
  • https://docs.openshift.com/enterprise/3.0/admin_guide/manage_authorization_policy.html
  • https://docs.openshift.com/enterprise/3.1/dev_guide/deployments.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/new_app.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/volumes.html
  • https://blog.openshift.com/openshift-3-3-pipelines-deep-dive/

Basic Automated Windows 10 post-install

It has been forever and a day since I've used Windows systems on a daily basis. In that time, though, the tools for post-install setup have evolved quite a bit. There are now tools like Boxstarter and Chocolatey which, when coupled with PowerShell, give you something akin to homebrew and dotfiles on OSX (or apt and dotfiles on Ubuntu, etc).

The following are the commands I use to configure a fresh Windows 10 system:

START http://boxstarter.org/package/url?https://gist.github.com/bunchc/44e380258384505758b6244e615e75ed/raw/d648fffc21cb3cc7df79e50be6c05b05d29c79cc/0-SystemConfiguration.txt

START http://boxstarter.org/package/url?https://gist.githubusercontent.com/bunchc/44e380258384505758b6244e615e75ed/raw/239e8f6ca240a0c365619f242c14017f2f0de43e/1-Base%2520apps%2520setup.txt

START http://boxstarter.org/package/url?https://gist.github.com/bunchc/44e380258384505758b6244e615e75ed/raw/bae2117eba4091a78428493c2c821996ea5e3615/2-Dev%2520apps.txt

START https://github.com/Nummer/Destroy-Windows-10-Spying/releases/download/1.6.722/DWS_Lite.exe

The first command configures Windows updates and various privacy settings. The second pulls down a number of packages to provide a basic working environment. The third does much the same, with some additional development packages. The last command pulls down a utility that helps limit how much data Win10 collects and sends home.

The first three commands launch Boxstarter and supply a script to it; the scripts themselves are the gists linked in the commands above.

Resources

I didn't do this alone; my versions of the scripts are almost identical to the originals from GreyKarnival.

Additionally, the privacy tool run at the end comes from the Destroy-Windows-10-Spying project.

packer.io with vSphere

Packer isn't exactly a new tool; in fact, I covered using Packer to build Vagrant boxes a little while ago. This time around, I'm going to share some notes and the JSON file I used to get a build to upload properly to vSphere.

My Build Environment:

I am running these builds from OSX 10.12.3 with:

  • VMware Fusion 8.5.3
  • packer.io 0.12.3
  • vSphere 6.5
    • ESXi 6.5a
    • VCSA 6.5

Packer json template

The entire json I use is here. I have copied the relevant sections below. First, the variables section. You will want to swap these with values specific to your environment. The values I’ve supplied came from the vGhetto lab builder.

    "variables": {
        "vsphere_host": "vcenter65-1.vghetto.local",
        "vsphere_user": "administrator@vghetto.local",
        "vsphere_pass": "VMware1!",
        "vsphere_datacenter": "Datacenter",
        "vsphere_cluster": "\"VSAN-Cluster\"",
        "vsphere_datastore": "virtual_machines",
        "vsphere_network": "\"VM Network\""
    },

Next, post-processors. Here be the magic.


Some highlights:

  • type - tells packer we’re uploading to vsphere
  • keep_input_artifact - setting this to true helps troubleshooting
  • only - tells packer to only run this post-processor for the named builds.
  • the remaining lines - the vSphere specific variables from the prior section.

Note: Only change the variables rather than specifying names directly. Otherwise, OVFTool will get stupid angry about escaping characters.
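
The actual post-processors json isn't reproduced in these notes; a minimal sketch of that section, assuming the stock vSphere post-processor options and the variables above (the builder name and vm_name here are placeholders), would look something like:

    "post-processors": [
        {
            "type": "vsphere",
            "only": ["ubuntu-14.04.amd64.vmware"],
            "keep_input_artifact": true,
            "host": "{{user `vsphere_host`}}",
            "username": "{{user `vsphere_user`}}",
            "password": "{{user `vsphere_pass`}}",
            "datacenter": "{{user `vsphere_datacenter`}}",
            "cluster": "{{user `vsphere_cluster`}}",
            "datastore": "{{user `vsphere_datastore`}}",
            "vm_network": "{{user `vsphere_network`}}",
            "vm_name": "packer-ubuntu-14.04-amd64"
        }
    ],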

The Packer to vSphere Build

Once you have all the parts in place, you can run the following command to kick off the packer build that will dump its artifacts into vSphere:

packer build -parallel=false ubuntu-14.04.json

Now, the packer command will produce a LOT of output, even without debugging enabled. If you would like to review said output or dump it to a file in case something goes sideways:

time { packer build -parallel=false ubuntu-14.04.json; } 2>&1 | tee -a /tmp/packer.log

This will time how long packer takes to do its thing and dump all output to /tmp/packer.log.

When the command finishes you’ll see the following output:

==> ubuntu-14.04.amd64.vmware: Running post-processor: vsphere
    ubuntu-14.04.amd64.vmware (vsphere): Uploading output-ubuntu-14.04.amd64.vmware/packer-ubuntu-14.04.amd64.vmware.vmx to vSphere
Build 'ubuntu-14.04.amd64.vmware' finished.

[screenshot: vSphere client]

With that, all should be in working order.

A Simple Terraform on vSphere Build

This post will talk a bit about how to use Terraform to deploy a simple config against vSphere. Simple? Here’s what we’re building:

  • VMs -
    • OpenSUSE 42.2
      • 2 vCPU
      • 8GB Ram
      • 20GB Disk
    • CentOS7
      • 2 vCPU
      • 8GB Ram
      • 20GB Disk

As with prior posts, I am building this on top of a vSphere lab from the vGhetto lab builder, along with the following:

  • VMware Fusion 8.5.3
  • Terraform 0.8.8
  • vSphere 6.5
    • ESXi 6.5a
    • VCSA 6.5

Defining the Environment

To build our two-VM environment, we need to create three files at the root of the directory you plan to build from. These files are:

$ ls -l
-rw-r--r--  1 bunchc  staff  1788 Mar  8 16:02 build.tf
-rw-r--r--  1 bunchc  staff   172 Mar  8 15:01 terraform.tfvars
-rw-r--r--  1 bunchc  staff   162 Mar  8 15:01 variables.tf

Each of these files has the following use:

  • build.tf - defines the infrastructure to build. This includes definitions for VMs, networks, storage, which files to copy where, and then some.
  • variables.tf - defines any variables to be used in build.tf
  • terraform.tfvars - supplies the actual values for the variables

In the following sections we review each file as it pertains to our environment.

build.tf

Below I have broken out the sections of build.tf that are of interest to us. If you are following along, you will want to copy/paste each section into a single file.

This first section tells Terraform how to connect to vSphere. You will notice there are no actual values provided; those come from variables.tf and terraform.tfvars.

# Configure the VMware vSphere Provider
provider "vsphere" {
    vsphere_server = "${var.vsphere_vcenter}"
    user = "${var.vsphere_user}"
    password = "${var.vsphere_password}"
    allow_unverified_ssl = true
}

The second section defines the OpenSUSE VM. We do this by telling Terraform to create a resource, and then providing the type and name of said resource. The only other thing I will call out in this section is the ‘disk’ clause. When using ‘template’ inside the disk clause, you do not need to specify a disk size.

# Build openSUSE
resource "vsphere_virtual_machine" "opensuse-openstack" {
    name   = "opensuse-openstack"
    vcpu   = 2
    memory = 8192
    domain = "vghetto.local"
    datacenter = "${var.vsphere_datacenter}"
    cluster = "${var.vsphere_cluster}"

    # Define the Networking settings for the VM
    network_interface {
        label = "VM Network"
        ipv4_gateway = "10.0.1.1"
        ipv4_address = "10.0.1.190"
        ipv4_prefix_length = "24"
    }

    dns_servers = ["10.0.1.10", "8.8.8.8"]

    # Define the Disks and resources. The first disk should include the template.
    disk {
        template = "openSUSE-Leap-42.2-NET-x86_64.vmware"
        datastore = "virtual_machines"
        type ="thin"
    }

    # Define Time Zone
    time_zone = "America/Chicago"
}

The third section, which follows, defines the second VM. You will see it's largely a repeat of the first.

# Build CentOS
resource "vsphere_virtual_machine" "centos-openstack" {
    name   = "centos-openstack"
    vcpu   = 2
    memory = 8192
    domain = "vghetto.local"
    datacenter = "${var.vsphere_datacenter}"
    cluster = "${var.vsphere_cluster}"

    # Define the Networking settings for the VM
    network_interface {
        label = "VM Network"
        ipv4_gateway = "10.0.1.1"
        ipv4_address = "10.0.1.180"
        ipv4_prefix_length = "24"
    }

    dns_servers = ["10.0.1.10", "8.8.8.8"]

    # Define the Disks and resources. The first disk should include the template.
    disk {
        template = "CentOS-7-x86_64.vmware"
        datastore = "virtual_machines"
        type ="thin"
    }

    # Define Time Zone
    time_zone = "America/Chicago"
}

variables.tf

Next up in the files we need to make is variables.tf. I've provided it wholesale below:

# Variables
variable "vsphere_vcenter" {}
variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_datacenter" {}
variable "vsphere_cluster" {}

Note: We are only defining things here, not providing them values just yet.

terraform.tfvars

Here’s the good stuff. That is, this is the file that maps values to the variables, and in our case, stores credentials. You will want to add this to .gitignore (or whatever your source control uses):

vsphere_vcenter = "10.0.1.170"
vsphere_user = "administrator@vghetto.local"
vsphere_password = "VMware1!"
vsphere_datacenter = "Datacenter"
vsphere_cluster = "VSAN-Cluster"

Yes, yes, I left my creds in. These are, after all, the default creds for the vGhetto autobuilder.
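
If you are using git, a quick way to keep that file out of the repo:

echo "terraform.tfvars" >> .gitignore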

Building the Infrastructure

Now that you have all the config files in place, there are two steps left: validate and deploy. First, we validate:

terraform plan

If you have everything correct so far, you will see the following:

[screenshot: terraform plan output]

Neat! Let’s build:

terraform apply

If you glance in vCenter, you will notice the build has indeed kicked off:

[screenshot: vCenter]

Terraform also produces the following output when successful:

[screenshot: terraform apply output]

.screenrc tricks

I’m not a tmux user.

There, I said it. I guess as tmux was becoming the thing to use to have lots of terminals open, I’d moved on to being Admin for a while. Who knows. screen 4lyfe.

With that said, here’s a bit of my screenrc that makes life easier:

# Status bar
hardstatus always # activates window caption
hardstatus string '%{= wk}[ %{k}%H %{k}][%= %{= wk}%?%-Lw%?%{r}(%{r}%n*%f%t%?(%u)%?%{r})%{k}%?%+Lw%?%?%= %{k}][%{b} %Y-%m-%d %{k}%c %{k}]'

# Terminal options
term "xterm"
attrcolor b ".I"

# Turn off startup message
startup_message off

# Set the OSX term name to the current window
termcapinfo xterm* 'hs:ts=\\E]2;:fs=\\007:ds=\\E]2;screen\\007'

# In case of ssh disconnect or any weirdness, the screen will auto detach
autodetach on

The first line tells the status bar to always display. The second tells screen what this status should look like; in this case: current user, windows with names, and date/time:

[ bunchc ][ 0$ irssi  (1*$vagrant)  2$ rbac-testing  3-$ docker-01 ][ 2017-02-22 15:35 ]

The next lines tell screen to:

  • Set the term type to xterm for nested ssh
  • Use bright colors for bold items
  • Turn off the boiler plate when starting
  • Set the OSX window title
  • Autodetach if ssh breaks

Getting remote hostnames as window names

This is not so much a screen thing as an ssh thing. First pull down this script somewhere local. For me that’s /home/bunchc/scripts/

Then add these two lines to your .ssh/config:

# Screen prompts to the remote hostname
Host *
    PermitLocalCommand yes
    LocalCommand /home/bunchc/scripts/screen_ssh.sh
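
I haven't reproduced the linked script here, but a minimal stand-in, assuming you also append ssh's %n token to the LocalCommand line so the target hostname arrives as $1, might look like:

#!/bin/sh
# screen_ssh.sh - rename the current screen window to the ssh target.
# Only emit the title escape when we are actually inside a screen session.
if [ -n "$STY" ] && [ -n "$1" ]; then
    printf '\033k%s\033\\' "$1"
fi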

Reloading the config from within screen

Now that you’ve got these settings, reload the screenrc file: ctrl-a : source ~/.screenrc

Resources

This post comes about after collecting these settings over a good while. I'd love to give credit to all the original authors, but with the source posts dating from 2007 - 2009… well.

Extended Github Like Service on rPI

The good folks over at Hypriot have a wonderful tutorial on how to run your own GitHub-like service. To do this, they use Gogs on a Raspbian image. A great starting point, but, well, what if you wanted to scale it some?

Yes yes, we're running on a Raspberry Pi; 'scale' in this case just means building it a bit more like you'd deploy it on real hardware.

Forgiving my ascii, the following depicts what we will build:


+------------------------+
|                        |
|      nginx             |
|                        |
+-----------+------------+
            |
            |
+-----------v------------+
|                        |
|      gogs              |
|                        |
+-----------+------------+
            |
            |            
+-----------v------------+
|                        |
|     postgres           |
|                        |
+------------------------+

This build is involved, so let's dive in.

Before Starting

To complete the lab as described, you’ll need at least one Raspberry Pi running Hypriot 1.x and a recent version of Docker and docker-compose.

Everything that follows was adapted for the rPI, meaning that, with some work, it could be run on x86 Docker as well.

What we’re building

In this lab we will build 3 containers:

  • nginx - A reverse proxy that handles incoming requests to our git server
  • gogs - The GoGit service
  • postgres - a SQL backend for gogs.

Ok, maybe not entirely production, but, a few steps closer.

Getting Started

To get started, we’ll make the directory structure. The end result should look similar to this:

$ tree -d
.
├── gogs
│   ├── custom
│   │   └── conf
│   └── data
├── nginx
│   └── conf
└── postgres
    └── docker-entrypoint-initdb.d

You can create this with the following command:

mkdir gogs-project; cd gogs-project
mkdir -p gogs/custom/conf gogs/data \
    nginx/conf postgres/docker-entrypoint-initdb.d

Next, at the root of the project folder, create a file called ‘env’. This will be the environment file in which we store info about our database.

cat > env <<EOF
DB_NAME=myproject_web
DB_USER=myproject_web
DB_PASS=shoov3Phezaimahsh7eb2Tii4ohkah8k
DB_SERVICE=postgres
DB_PORT=5432
EOF

Define the hosts in docker-compose.yaml

Next up, we define all of the hosts in a docker-compose.yaml file. I’ll explain each as we get into their respective sections:

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - gogs:gogs

gogs:
  restart: always
  build: ./gogs/
  expose:
    - "3000"
  links:
    - postgres:postgres
  volumes:
    - ./gogs/data:/data
  command: gogs/gogs web

postgres:
  restart: always
  image: rotschopf/rpi-postgres
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  env_file:
    - env
  expose:
    - "5432"

Build the Reverse Proxy

Repeated here, is the nginx section of the docker-compose.yaml file.

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - gogs:gogs

  • restart - This container should always be running. Docker will restart this container if it crashes.
  • build - Specifies a directory where the Dockerfile for this container lives.
  • ports - Tells docker to map an external port to this container
  • links - Creates a link between the containers and creates an /etc/hosts entry for name resolution

You’ll notice we told docker-compose we wanted to build this container rather than recycle one. To do that, you will need to place a Dockerfile into the ./nginx/ folder we created earlier. The Dockerfile should have the following contents:

FROM hypriot/rpi-alpine-scratch:v3.4
RUN apk add --update nginx \
    && rm -rf /var/cache/apk/*

COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY conf/default.conf /etc/nginx/conf.d/default.conf

CMD ["nginx", "-g", "daemon off;"]

  • FROM - sets our base image.
  • RUN - installs nginx and cleans out the apk cache to save space.
  • COPY - these two lines pull in nginx configurations
  • CMD - sets nginx to run when the container gets launched

Finally, we need to provide the configurations specified in the COPY commands. The contents of each follow, and should be placed into ./nginx/conf/.

$ cat nginx/conf/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        off;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

This first config provides a very basic setup for nginx, and then pulls additional configuration in using the include line.

$ cat nginx/conf/default.conf
server {

    listen 80;
    server_name git.isa.fuckingasshat.com;
    charset utf-8;

    location / {
        proxy_pass http://gogs:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

}

This file tells nginx to listen on port 80 and proxy all requests for / back to our gogs instance on port 3000.

Let’s look at what we’ve got so far:

$ pwd
/home/pirate/rpi-gogs-docker-alpine/nginx

$ tree
.
├── conf
│   ├── default.conf
│   └── nginx.conf
└── Dockerfile

Building the Go Git service

What follows is how we build our Go Git container. A reminder of what this looks like in docker-compose.yaml:

gogs:
  restart: always
  build: ./gogs/
  expose:
    - "3000"
  links:
    - postgres:postgres
  volumes:
    - ./gogs/data:/data
  command: gogs/gogs web

  • restart - We'd like this running, all the time
  • build - Build our gogs image from the Dockerfile in ./gogs/
  • expose - Tells docker to expose port 3000 on the container
  • links - Ties us in to our database server
  • volumes - Tells docker we want to map ./gogs/data to /data in the container for our repo storage
  • command - launches the gogs web service

Neat, right? Now let’s take a look inside the ./gogs/Dockerfile:

FROM hypriot/rpi-alpine-scratch:v3.4

# Install the packages we need
RUN apk --update add \
    openssl \
    linux-pam-dev \
    build-base \
    coreutils \
    libc6-compat \
    git \
    && wget -O gogs_latest_raspi2.zip \
        https://cdn.gogs.io/gogs_v0.9.128_raspi2.zip \
    && unzip ./gogs_latest_raspi2.zip \
    && mkdir -p /gogs/custom/http /gogs/custom/conf \
    && /gogs/gogs cert \
    -ca=true \
    -duration=8760h0m0s \
    -host=git.isa.fuckingsshat.local \
    && mv *.pem /gogs/custom/http/ \
    && rm -f /*.zip \
    && rm -rf /var/cache/apk/*

COPY custom/conf/app.ini /gogs/custom/conf/

EXPOSE 22
EXPOSE 3000
CMD ["gogs/gogs", "web"]

Unlike the nginx file, there is a LOT going on here.

  • FROM - Same alpine source image. Tiny, quick, and does what we need.
  • RUN - This is actually a bunch of commands tied together into a single line to reduce the number of Docker build steps. This breaks down as follows:
    • apk --update add - install the required packages
    • wget - pull down the latest gogs binary for pi.
    • unzip - decompress it
    • mkdir - create a folder for our config and ssl certs
    • /gogs/gogs cert - creates the ssl certificates
    • mv *.pem - puts the certificates into the right spot
    • rm - these two clean up our image so we stay small
  • COPY - This pulls our app.ini file into our container
  • EXPOSE - tells docker to expose these ports
  • CMD - launch the gogs web service

Still with me? We have one last file to create, the app.ini file. To do that, we will pull down a generic app.ini from the gogs project and then add our specific details.

First, pull in the file:

$ curl -L https://raw.githubusercontent.com/gogits/gogs/master/conf/app.ini \
    > ./gogs/custom/conf/app.ini

Next, find the database section, and provide the data from your env file:

[database]
; Either "mysql", "postgres" or "sqlite3", it's your choice
DB_TYPE = postgres
HOST = postgres:5432
NAME = myproject_web
USER = myproject_web
PASSWD = shoov3Phezaimahsh7eb2Tii4ohkah8k
; For "postgres" only, either "disable", "require" or "verify-full"
SSL_MODE = disable
; For "sqlite3" and "tidb", use absolute path when you start as service
PATH = data/gogs.db

That's it for this section, which should now look like this:

$ pwd
/home/pirate/rpi-gogs-docker-alpine/gogs
HypriotOS/armv7: pirate@node-01 in ~/rpi-gogs-docker-alpine/gogs
$ tree
.
├── custom
│   └── conf
│       └── app.ini
├── data
└── Dockerfile

Setting up Postgres

Unlike the last two, postgres is fairly simple to set up; instead of building it from scratch, we're running a community-supplied image. Let's take a look at the postgres section of docker-compose.yaml:

postgres:
  restart: always
  image: rotschopf/rpi-postgres
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  env_file:
    - env
  expose:
    - "5432"

  • restart - like our other containers, we want this one to run, all the time.
  • image - specifies the postgres image to use
  • volumes - This is a special volume for our postgres container; the container will run the scripts placed inside it. You'll see how we use this next.
  • env_file - turns our env file into environment variables within the container
  • expose - makes port 5432 available for connections

Now, in your ./postgres/docker-entrypoint-initdb.d/ directory, we’re going to place a script that will create our database user and database:

$ cat ./postgres/docker-entrypoint-initdb.d/postgres.sh
#!/usr/bin/env sh
psql -U postgres -c "CREATE USER $DB_USER PASSWORD '$DB_PASS'"
psql -U postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER"

Cool, once you’ve done that, you have finished the prep work. Now we can make the magic!

Make the Magic!

Ok, so, that was a lot of work. With that in place, you can now build and launch the gogs environment:

docker-compose build --no-cache
docker-compose up -d

Once that completes, point a web browser at your Raspberry Pi on port 80 and enjoy your own GitHub-like service.
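
If something doesn't come up cleanly, checking container state and tailing the gogs logs from the project directory is a good first step:

docker-compose ps
docker-compose logs -f gogs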

Summary

Long post is long. We used docker-compose to build two Docker images and launch a third, all three of which are tied together to provide a GitHub-like service.

Kube Dashboard on Master Node

Putting this here so I don’t forget.

First we need to make it so we can schedule pods to the master node. So, let's remove its taint (thanks Sam!):

kubectl taint nodes --all dedicated-

Then add this to kubernetes-dashboard.yaml, replacing master-node with the master's hostname in your environment.

    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-arm:v1.5.1
*** some stuff ***
      nodeSelector:
        kubernetes.io/hostname: master-node
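
With the manifest edited, deploying the dashboard is the usual kubectl one-liner (assuming you saved the edited file locally as kubernetes-dashboard.yaml):

kubectl create -f kubernetes-dashboard.yaml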

Disabling / Enabling OSX Finder Tabs

Here's hoping Google picks this up; I couldn't find it elsewhere:

To tell Finder you don’t want tabs when you open a folder:

$ defaults write com.apple.finder FinderSpawnTab -bool false

To put things back to normal:

$ defaults write com.apple.finder FinderSpawnTab -bool true

To figure out which key a particular setting corresponds to:

$ defaults read com.apple.finder > ~/before.list

# Make the change

$ defaults read com.apple.finder > ~/after.list
$ diff ~/before.list ~/after.list

Seti@home on Raspberry Pi with Kubernetes

In this post we take our Raspberry Pi cluster, deploy Kubernetes to it, and then use a deployment to launch the BOINC client to churn seti@home data.

[photo: the cluster]

Prep The Cluster

First things first, we need to flash all 8 nodes with the latest hypriot image. We do this using their flash tool, a bash for loop, and some flash card switching:

for i in {1..8}; do flash --hostname node0$i https://github.com/hypriot/image-builder-rpi/releases/download/v1.1.3/hypriotos-rpi-v1.1.3.img.zip; done

Once you have the cards flashed, install them into your Pi’s and boot them up, we’ve got some more prep to do.

Copy SSH Keys

The first thing to do is enable key-based logins. You'll be prompted for the password each time. Password: hypriot

for i in {1..8}; do ssh-copy-id pirate@node0$i; done

Run updates

for i in {1..8}; do ssh pirate@node0$i -t 'sudo apt-get update -qq && sudo apt-get upgrade -qqy --force-yes'; done

Build the cluster

Here is where the fun starts. On each node, you’re going to want to install Kubernetes as described here.

Fire Up BOINC & Seti@Home

For this I used the Kubernetes dashboard, though the command line would work just as well (a rough CLI sketch follows below).

Click create to launch the creation wizard. You’ll see something like this where you can provide a name, image, and number of pods. My settings are captured in the image:

[screenshot: k8s new deployment wizard]

Next, we need to open the advanced settings. This is where we specify the environment variables, again captured in the following image:

[screenshot: environment variables]

For reference these are:

BOINC_CONFIG_CONTENTS = "<account>
<master_url>http://setiathome.berkeley.edu/</master_url>
<authenticator>your_authenticator_code_here_get_it_from_setiathome</authenticator>
</account>"

BOINC_CONFIG_FILENAME = account_setiathome.berkeley.edu.xml

Finally, save and deploy it; this'll take a minute or two.
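
As mentioned above, the command line works just as well. A rough equivalent of the wizard, where the image name is a placeholder for whichever ARM BOINC image you picked, would be:

kubectl run boinc --image=<arm-boinc-image> --replicas=8 \
    --env="BOINC_CONFIG_FILENAME=account_setiathome.berkeley.edu.xml" \
    --env="BOINC_CONFIG_CONTENTS=<contents of the account file above>"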

All done:

[screenshot: all done]

Summary

In this post, you flashed a bunch of Raspberry Pis with Hypriot and built a Kubernetes cluster with them. From there you logged into the dashboard and deployed seti@home.

My First Django App

Having had some time over the winter break, I watched the excellent Django webcast on O'Reilly's Safari. What follows here are the commands from said webcast used to get started with an app:

Setting up a virtual environment

My first instinct, to keep the development environment separate from my working world, was to fire up a VM and go with it. It turns out that in Python you can work locally without much fear of breaking your box. You do this with virtualenvs, and manage those with virtualenvwrapper.

Note: One can use virtual environments without virtualenvwrapper. virtualenvwrapper made things a bit easier for me.

Install virtualenvwrapper on OSX

For this, I assume you have a working homebrew:

brew update
brew install pyenv-virtualenvwrapper

Install virtualenvwrapper on Ubuntu 16.04

Thankfully, it’s a happy little apt-package for us here:

sudo apt-get update
sudo apt-get install virtualenvwrapper

Configuring virtualenvwrapper

Now that you have it installed on your system, the following shell profile settings set up some specific behaviors in virtualenvwrapper. These work on both OSX and Ubuntu:

echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.bashrc
echo "export PROJECT_HOME=$HOME/projects" >> ~/.bashrc

source ~/.bashrc

The first line tells virtualenvwrapper where you would like it to store all of the files (python, pip, and all their parts) for said virtualenvs. The second line tells virtualenvwrapper where your code lives. Finally, we pull said values into our working bash shell.

Create and enter your virtual env

Now that’s all sorted, let’s make a virtual environment to work on:

mkvirtualenv -p /usr/bin/python3 newProject

Breaking this down, the -p /usr/bin/python3 tells virtualenv to use python3 as the interpreter for our virtualenv. The name newProject is, well, the new project name. This command will produce output like the following:

$ mkvirtualenv -p /usr/bin/python3 newProject
Already using interpreter /usr/bin/python3
Using base prefix '/usr'
New python executable in newProject/bin/python3
Also creating executable in newProject/bin/python
Installing setuptools, pip...done.

To enter your virtual environment and start working on things:

$ cd ~/projects/
$ mkdir newProject
$ cd newProject/
$ workon newProject
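
When you're done working, dropping back out of the virtual environment is just:

$ deactivate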

Installing and Getting Started with Django

Ok, so that was a lot of setup to get to this point, but here we are. It's time to install Django, create the structure of our application, and finally start Django's built-in webserver to make sure it is all working.

To install Django inside your virtual environment:

$ pip install django
Downloading/unpacking django
  Downloading Django-1.10.5-py2.py3-none-any.whl (6.8MB): 6.8MB downloaded
Installing collected packages: django
Successfully installed django
Cleaning up...

Now let’s install the skeleton of our app:

django-admin startproject newProject

This will create a directory structure like this:

$ tree
.
└── newProject
    ├── manage.py
    └── newProject
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

2 directories, 5 files

Next up, we will want to fire up Django's built-in server and validate our install:

$ cd newProject
$ python manage.py migrate

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK

$ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
January 07, 2017 - 23:52:17
Django version 1.10.5, using settings 'newProject.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

Taking a look at what we did there:

  • manage.py migrate - performs the initial migration & population of a sqlite database so our shell application will work.
  • manage.py runserver - starts Django's built-in webserver

While there is a lot more to actually writing a Django app, this is essentially how you get started on a net-new application.

Summary

In this post we installed virtualenvwrapper and used it to create a new virtual environment. We then installed Django, performed an initial database migration, and ran Django's built-in server so we could browse to and test the shell application.