Cody Bunch Some Random IT Guy - OpenStack, DevOps, Cloud, Things

On Computerized Note Taking

I have blogged on and off about note taking: its importance, some random techniques, and such. After a year or two of trying LOTS of different tools, techniques, and the like, I thought I’d share the current method, the one that has finally stuck for me.

This post is going to be rather long, so: tl;dr - a Rakefile with templates, Markdown formatted, auto-saved to a personal Gogs server.

This post has four parts:

  • How I got here
  • The types of notes I take
  • My current process
  • The Tools

How I got here

The long and short of this story is that I missed the days of putting .LOG at the top of a file in Windows Notepad and having it add a timestamp each time the file was opened. Couple that with what was formerly Google Desktop Search, and you had all of your notes right there, indexed and easy to get at.

This is not a knock on Evernote and other tools of that sort; I tried just about every one of them. Along with easy searching, version control, and all the other good stuff, I gain a few extra benefits: easy transition to blog posts (commit to a different repo) and easy export to static HTML or PDF for others’ consumption (using pandoc).

The Types of Notes

When taking notes on the computer, there are three basic types I take:

  • Case-Notes
  • Call-Notes
  • Draft Posts

Case-Notes

These are analogous to project notes and are the most generic form of notes I take. The template for these looks like this:

---
Customer:    Orangutan Roasters
Project:     Burundi Roast
Author:      Your Name
Date:        2017-04-17
categories: 
---
  
# Orangutan-Roasters

Some background here
# Burundi Roast

Some background here
# Notes:

The information at the top is metadata, which, while searchable, does not get exported to PDF. The template then includes two sections to provide background information about the task, customer, project, you name it; this is useful for context setting. Finally, you have a heading for freeform notes. Here I add timestamps as I go, so I have a running log of what I am doing and some context to return to.

Call-Notes

Any time my phone rings, I take notes. This way I have a reference point for who was on the call, when, and what it was about. All the stuff you’d get from a recorded call, but, you know, searchable. The template for that looks like this:

---
Subject:     Orangutan Roasters at Farmers Market
Customer:    Orangutan Roasters
Project:     Cottage Food Sales
Author:      Not Me
Date:        2017-04-17
categories: 
---
# Background

The context of the call goes here.

# Call details:
Organiser:   Not Me
Bridge:      1-800-867-5309,,112233#

On the call:
* 

# Unformatted notes:

Start taking notes here

Much like the more generic case-notes, the bits at the top are metadata that do not get exported, but are useful for finding this again later. Additionally, when using rake to create the template file, you can supply the call bridge for easy copy-paste later.

The Tools

There are a few tools used here:

  • Rake / Rakefile - Provides the template used for the different note / blog types
  • Sublime Text - My editor of choice
  • The Git and GitAutoCommit modules for Sublime - Version history
  • pandoc module for Sublime - Easy export to Word, PDF, or HTML

Organization

As you will see reflected in the Rakefile that follows, I keep my notes in a bit of a tree structure:

$ tree
.
├── Rakefile
├── call-notes
│   └── 2017-04-13-pest-control.md
└── case-notes
    ├── 2017-03-02-home-scratch.md
    ├── 2017-03-02-work-scratch.md
    └── 2017-04-13-home-networking.md

The drafts for blog posts live in their own tree. As long as you specify the path in the Rakefile, you can store your notes anywhere.

The Rakefile

The Rakefile I use to create notes currently has three sections, one for each type of note I take most regularly: blog posts (:post), project notes (:note), and call notes (:call).

To create a new note, from the command line one runs rake notetype parameter="thing". For example, if I wanted to open a new file for a coffee roast I would use a command like this:

rake note customer="Orangutan Roasters" project="Burundi Roast"

This creates a file named something like 2017-04-17-Orangutan-Roasters-Burundi-Roast.md, which contains the following template text:

---
Customer:    Orangutan Roasters
Project:     Burundi Roast
Author:      Your Name
Date:        2017-04-17
categories: 
---
  
# Orangutan-Roasters

Some background here
# Burundi Roast

Some background here
# Notes:

Now that you have the template file in place, you can open it in the editor of your choice, and with Markdown being so close to plain text, you’re off and going.
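The Rakefile itself is mostly glue. As a rough illustration of what the :note task boils down to (this is a hypothetical shell equivalent, not the actual Rakefile), it slugs the customer and project names, prefixes the date, and writes the template out:

#!/usr/bin/env bash
# new-note.sh (sketch) - rough equivalent of: rake note customer="..." project="..."
# usage: ./new-note.sh "Orangutan Roasters" "Burundi Roast"
customer="$1"
project="$2"
today=$(date +%Y-%m-%d)

# Build a filename like 2017-04-17-Orangutan-Roasters-Burundi-Roast.md
slug=$(echo "${customer}-${project}" | tr ' ' '-')
note="case-notes/${today}-${slug}.md"

# Write out the same front matter and headings shown above
cat > "${note}" <<EOF
---
Customer:    ${customer}
Project:     ${project}
Author:      Your Name
Date:        ${today}
categories:
---

# ${customer}

Some background here

# ${project}

Some background here

# Notes:
EOF

echo "Created ${note}"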

Sublime Text

My editor of choice. Yes yes, I know I can do these things in vimacs or whatever and to each their own.

To install Sublime on OSX:

brew install Caskroom/cask/sublime-text3

This in turn links the subl command to /usr/local/bin/subl. This allows you to open notes as follows:

subl case-notes/2017-04-17-Orangutan-Roasters-Burundi-Roast.md

Sublime Modules

As discussed above, I use the Git, GitAutoCommit, and Pandoc modules. These can each be installed by following their own instructions.

Once you have those, these minor config tweaks will save you some heartache. For Pandoc, in your user settings (click Sublime Text, Preferences, Package Settings, Pandoc, Settings - User), paste the following in:

    {
      "default": {
        "pandoc-path": "/usr/local/bin/pandoc",

        "transformations": {

          "HTML 5": {
            "scope": {
              "text.html.markdown": "markdown"
            },
            "syntax_file": "Packages/HTML/HTML.tmLanguage",
            "pandoc-arguments": [
              "-t", "html",
              "--filter", "/usr/local/bin/pandoc-citeproc",
              "--to=html5",
              "--no-highlight"
            ]
          },

          "PDF": {
            "scope": {
              "text.html": "html",
              "text.html.markdown": "markdown"
            },
            "pandoc-arguments": [
              "-t", "pdf",
              "--latex-engine=/Library/TeX/Root/bin/x86_64-darwin/pdflatex"
            ]
          },

          "Microsoft Word": {
            "scope": {
              "text.html": "html",
              "text.html.markdown": "markdown"
            },
            "pandoc-arguments": [
              "-t", "docx",
              "--filter", "/usr/local/bin/pandoc-citeproc"
            ]
          }
        },

        "pandoc-format-file": ["docx", "epub", "pdf", "odt", "html"]
      }
    }

To export using Pandoc, press Command+Shift+P, type 'pandoc', press Enter, and select the format you want to export to.
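The same exports also work straight from the command line if you ever need them outside of Sublime; pandoc picks the output format from the file extension (file names here just reuse the earlier example note):

pandoc case-notes/2017-04-17-Orangutan-Roasters-Burundi-Roast.md -o note.pdf
pandoc case-notes/2017-04-17-Orangutan-Roasters-Burundi-Roast.md -o note.docx
pandoc case-notes/2017-04-17-Orangutan-Roasters-Burundi-Roast.md -o note.html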

The Process

If you are still with me, thanks for sticking around. Now that all the scaffolding is in place, to create a new note, from the root of the notes folder:

$ rake call subject="Orangutan Roasters at Farmers Market" customer="Orangutan Roasters" project="Cottage Food Sales" bridge="1-800-867-5309,,112233#" owner="Not Me"

$ subl call-notes/2017-04-17-Orangutan-Roasters-Orangutan-Roasters-at-Farmers-Market.md

And you’re off.

Summary

Well, this ended up being much longer than I had expected. At the end of it all, though, you have notes that are versioned and backed up to Git, and that are searchable with grep or any other tool of your choice.

Enable Bonjour on UBNT USG-4P

This here is super hacky, but it works.

Problem:

After segmenting the home network (kids, guest, random iot devices, etc), Bonjour stopped working.

Solution

The proposed solution is to SSH to the Security Gateway and run the following:

configure
set service mdns reflector
commit
save
exit

This works. Great, right? The problem with the above method is that you are broadcasting your stuff out on the WAN interface. I don’t know about you, but I don’t like the idea of what might show up on my AppleTV.

A better solution:

We start as prescribed. This makes sure the right services are installed and the config files are in place:

configure
set service mdns reflector
commit
save
exit

Now, undo that:

configure
delete service mdns
commit
save
exit

Now, edit /etc/avahi/avahi-daemon.conf with your corresponding interfaces. The relevant sections of my file look like this:

[server]
...
allow-interfaces=eth0,eth0.20,eth0.40,eth0.50,eth0.60
...

[reflector]
enable-reflector=yes

Finally, we add a script to ensure these services start on reboot, and then start them now:

sudo tee /config/scripts/post-config.d/bonjour-fix.sh <<'EOF'
#!/bin/bash -
#title          :bonjour-fix.sh
#description    :Starts dbus and avahi on reboot on the USG-4P
#============================================================================

for service in dbus avahi-daemon; do
    if (( $(sudo ps -ef | grep -v grep | grep "${service}" | wc -l) > 0 )); then
        echo "[+] ${service} is running"
    else
        echo "[i] Attempting to start ${service}"
        sudo /etc/init.d/"${service}" start
    fi
done
EOF

sudo chmod +x /config/scripts/post-config.d/bonjour-fix.sh

sudo /etc/init.d/dbus start
sudo /etc/init.d/avahi-daemon start
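
To confirm reflection is actually working, you can browse for the Apple TV's AirPlay advertisement from a Mac on one of the segmented networks (dns-sd ships with macOS); it should show up even though it lives in another VLAN:

# Browse for AirPlay advertisements; Ctrl-C to stop
dns-sd -B _airplay._tcp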

Resources

This solution was put together from a number of forum posts.

Building a Lab AD with PowerShell

Remember that post where we installed and built an Active Directory domain with PowerShell and BoxStarter?

Well, Domains are cool and all, but are generally uninteresting all by themselves. To that end, you can adapt the script and CSV files found here to populate your domain with some more interesting artifacts.

For me, I incorporated this into my existing BoxStarter build, so all I have to run is the one command; the script is below:

You’ll notice, as compared to before, we’ve added some lines. Specifically:

  • Lines 24 & 25 pull down all the files and extract them
  • Lines 28 - 30 allow us to run internet scripts, change our working directory, and finally run our script.

As before, to launch this on a fresh Windows install:

START http://boxstarter.org/package/nr/url?https://gist.githubusercontent.com/bunchc/b7783fd220b5602cffc46158bac3099e/raw/a3e6b58904efb06953130112f98a5382cff7dc20/build_and_populate_domain.ps1

Once completed your script window will look similar to: PowerShell script output

And AD will look like this: AD Users and Computers

Building a Windows Domain with BoxStarter

I had a need to create and recreate Windows domains for some lab work I’ve been up to. What follows here is adapted from @davidstamen who blogs here. More specifically it is extracted from his Vagrant Windows lab, here.

Warning! Boxstarter is sort of like curl pipe sudo bash for Windows.

First things first, look over what we’re doing:

What is going on here?

  • The first two lines contain the domain name you’d like configured.
  • Lines 4 & 5 make Explorer a bit less annoying and enable remote desktop (if it’s not already).
  • Lines 7 - 15 install some useful packages
  • Line 18 enables AD
  • Line 22 installs the domain.

All of that is simple enough. The magic comes in when we use boxstarter to go from a new Windows Server to Domain Controller. From an admin command prompt on the Windows server, run the following:

START http://boxstarter.org/package/nr/url?https://gist.githubusercontent.com/bunchc/1d97b496aa1d6efe146f799b2fb34547/raw/51ebf18ca320c49c38e2f493e0ff4afad59bb0cd/domain_controller.ps1

Note: You may need to add boxstarter.org to trusted sites.

Once executing, it should look a bit like this:

BoxStarter Domain Installation

RedHat OpenShift Hands On Workshop, San Antonio - Raw Notes

RedHat came to town recently to give a one-day, almost entirely lab-driven workshop around OpenShift. The workshop was well put together, and the live labs were overall pretty good.

What follows are my raw notes from the lab, sanitized of usernames and passwords, with some light editing for things that were pretty ugly.


Begin notes

The parksmap image: docker.io/openshiftroadshow/parksmap:1.2.0

Check status

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          33m

$ oc status
In project explore-xx on server https://127.0.0.1:443

svc/parksmap - 172.30.203.33:8080
  dc/parksmap deploys istag/parksmap:1.2.0
    deployment #1 deployed 32 minutes ago - 1 pod

1 warning identified, use 'oc status -v' to see details.

$ oc status -v
In project explore-xx on server https://127.0.0.1:443

svc/parksmap - 172.30.203.33:8080
  dc/parksmap deploys istag/parksmap:1.2.0
    deployment #1 deployed 32 minutes ago - 1 pod

Warnings:
  * Unable to list statefulsets resources.  Not all status relationships can be established.

Info:
  * dc/parksmap has no readiness probe to verify pods are ready to accept traffic or ensure deployment is successful.
    try: oc set probe dc/parksmap --readiness ...
  * dc/parksmap has no liveness probe to verify pods are still running.
    try: oc set probe dc/parksmap --liveness ...

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

Scaling

Get some info about the pod before

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          33m

$ oc get dc
NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
parksmap   1          1         1         config,image(parksmap:1.2.0)

$ oc get rc
NAME         DESIRED   CURRENT   READY     AGE
parksmap-1   1         1         1         33m

Scale the deployment config

$ oc scale --replicas=2 dc/parksmap
deploymentconfig "parksmap" scaled

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-1-a3ppj   1/1       Running   0          34m
parksmap-1-tuj0b   1/1       Running   0          9s

Review the new config

$ oc describe svc parksmap
Name:           parksmap
Namespace:      explore-xx
Labels:         app=parksmap
Selector:       deploymentconfig=parksmap
Type:           ClusterIP
IP:         172.30.203.33
Port:           8080-tcp    8080/TCP
Endpoints:      10.1.16.37:8080,10.1.20.20:8080
Session Affinity:   None
No events.

$ oc get endpoints
NAME       ENDPOINTS                         AGE
parksmap   10.1.16.37:8080,10.1.20.20:8080   35m

Autohealing: This deletes one of the pods, then watches a new one get created:

oc delete pod parksmap-1-a3ppj; watch "oc get pods"

Scale down: This sets us back to one replica and then watches the extra pod terminate.

oc scale --replicas=1 dc/parksmap; watch "oc get pods"

Routes

Get routes:

$ oc get routes

Get the name of our service:

$ oc get services

Expose it:

$ oc expose service parksmap
$ oc get routes
NAME       HOST/PORT                                                   PATH      SERVICES   PORT       TERMINATION   WILDCARD
parksmap   parksmap-explore-xx.cloudapps.ksat.openshift3roadshow.com             parksmap   8080-tcp                 None

Logs

Get logs:

$ oc logs parksmap-1-a3ppj
14:47:51.350 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from Kubernetes config...
14:47:51.373 [main] DEBUG io.fabric8.kubernetes.client.Config - Did not find Kubernetes config at: [/.kube/config]. Ignoring.
14:47:51.373 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client from service account...
14:47:51.374 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account ca cert at: [/var/run/secrets/kubernetes.io/serviceaccount/ca.crt].
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account token at: [/var/run/secrets/kubernetes.io/serviceaccount/token].
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Trying to configure client namespace from Kubernetes service account namespace path...
14:47:51.381 [main] DEBUG io.fabric8.kubernetes.client.Config - Found service account namespace at: [/var/run/secrets/kubernetes.io/serviceaccount/namespace].
2017-04-04 14:47:53.101  WARN 1 --- [           main] i.f.s.cloud.kubernetes.StandardPodUtils  : Failed to get pod with name:[parksmap-1-a3ppj]. You should look into this if things aren't working as you expect. Are you missing serviceaccount permissions?

Also pod, then archive, loads EFK

RBAC

Fix service account

$ oc policy add-role-to-user view -z default
role "view" added: "default"

Grant other users access:

$ oc policy add-role-to-user view userxx
role "view" added: "userxx"

View accesses:

$ oc describe policyBindings :default -n explore-xx
Name:                   :default
Namespace:              explore-xx
Created:                21 hours ago
Labels:                 <none>
Annotations:                <none>
Last Modified:              2017-04-04 10:50:03 -0500 CDT
Policy:                 <none>
RoleBinding[admin]:
                    Role:           admin
                    Users:          userxx
                    Groups:         <none>
                    ServiceAccounts:    <none>
                    Subjects:       <none>
RoleBinding[system:deployers]:
                    Role:           system:deployer
                    Users:          <none>
                    Groups:         <none>
                    ServiceAccounts:    deployer
                    Subjects:       <none>
RoleBinding[system:image-builders]:
                    Role:           system:image-builder
                    Users:          <none>
                    Groups:         <none>
                    ServiceAccounts:    builder
                    Subjects:       <none>
RoleBinding[system:image-pullers]:
                    Role:           system:image-puller
                    Users:          <none>
                    Groups:         system:serviceaccounts:explore-xx
                    ServiceAccounts:    <none>
                    Subjects:       <none>
RoleBinding[view]:
                    Role:           view
                    Users:          userxx
                    Groups:         <none>
                    ServiceAccounts:    default
                    Subjects:       <none>

Show service accounts:

$ oc describe serviceaccounts -n explore-xx
Name:       builder
Namespace:  explore-xx
Labels:     <none>

Mountable secrets:  builder-dockercfg-z921w
                    builder-token-22bfm

Tokens:             builder-token-0imdk
                    builder-token-22bfm

Image pull secrets: builder-dockercfg-z921w


Name:       default
Namespace:  explore-xx
Labels:     <none>

Mountable secrets:  default-token-yhj99
                    default-dockercfg-q4i5u

Tokens:             default-token-f9zyz
                    default-token-yhj99


Image pull secrets: default-dockercfg-q4i5u

Name:       deployer
Namespace:  explore-xx
Labels:     <none>


Image pull secrets: deployer-dockercfg-bwpor

Mountable secrets:  deployer-token-ajlo3
                    deployer-dockercfg-bwpor

Tokens:             deployer-token-ajlo3
                    deployer-token-aqcyk

Name:       jenkins
Namespace:  explore-xx
Labels:     app=jenkins-ephemeral
        template=jenkins-ephemeral-template

Mountable secrets:  jenkins-token-16g9q
                    jenkins-dockercfg-x2ftc

Tokens:             jenkins-token-16g9q
                    jenkins-token-l24vf

Image pull secrets: jenkins-dockercfg-x2ftc

Redeploy app:

$ oc deploy parksmap --latest --follow
Flag --latest has been deprecated, use 'oc rollout latest' instead
Started deployment #2
--> Scaling up parksmap-2 from 0 to 1, scaling down parksmap-1 from 1 to 0 (keep 1 pods available, don't exceed 2 pods)
    Scaling parksmap-2 up to 1
    Scaling parksmap-1 down to 0
--> Success

Check on that:

$ oc get dc/parksmap
NAME       REVISION   DESIRED   CURRENT   TRIGGERED BY
parksmap   2          1         1         config,image(parksmap:1.2.0)

Remote shell

Get pods, then login:

$ oc get pods
NAME               READY     STATUS    RESTARTS   AGE
parksmap-2-k7o7m   1/1       Running   0          2m

$ oc rsh parksmap-2-k7o7m
sh-4.2$

One-off commands:

$ oc exec parksmap-2-k7o7m -- ls -l /parksmap.jar
-rw-r--r--. 1 root root 21753930 Feb 20 11:14 /parksmap.jar

$ oc rsh parksmap-2-k7o7m whoami
whoami: cannot find name for user ID 1001050000 

S2I deploys

$ oc new-app --image="simple-java-s2i:latest" --name="nationalparks" http://gitlab-127.0.0.1/userxx/nationalparks.git
Flag --image has been deprecated, use --image-stream instead
--> Found image e2182f7 (6 months old) in image stream "openshift/simple-java-s2i" under tag "latest" for "simple-java-s2i:latest"

    Java S2I builder 1.0
    --------------------
    Platform for building Java (fatjar) applications with maven or gradle

    Tags: builder, maven-3, gradle-2.6, java, microservices, fatjar

    * The source repository appears to match: jee
    * A source build using source code from http://gitlab-127.0.0.1/userxx/nationalparks.git will be created
      * The resulting image will be pushed to image stream "nationalparks:latest"
      * Use 'start-build' to trigger a new build
    * This image will be deployed in deployment config "nationalparks"
    * Port 8080/tcp will be load balanced by service "nationalparks"
      * Other containers can access this service through the hostname "nationalparks"

--> Creating resources ...
    imagestream "nationalparks" created
    buildconfig "nationalparks" created
    deploymentconfig "nationalparks" created
    service "nationalparks" created
--> Success
    Build scheduled, use 'oc logs -f bc/nationalparks' to track its progress.
    Run 'oc status' to view your app.

Check status:

$ oc get builds
NAME              TYPE      FROM          STATUS     STARTED          DURATION
nationalparks-1   Source    Git@240e177   Complete   57 seconds ago   52s

Build logs:

$ oc logs -f builds/nationalparks-1
Pushing image 172.30.17.230:5000/explore-xx/nationalparks:latest ...
Pushed 0/12 layers, 0% complete
Pushed 1/12 layers, 15% complete
Pushed 2/12 layers, 22% complete
Pushed 3/12 layers, 29% complete
Pushed 4/12 layers, 41% complete
Pushed 5/12 layers, 52% complete
Pushed 6/12 layers, 59% complete
Pushed 7/12 layers, 65% complete
Pushed 8/12 layers, 73% complete
Pushed 9/12 layers, 83% complete
Pushed 10/12 layers, 96% complete
Pushed 11/12 layers, 100% complete
Pushed 12/12 layers, 100% complete
Push successful

Add a DB

$ oc new-app --template="mongodb-ephemeral" \
    -p MONGODB_USER=mongodb \
    -p MONGODB_PASSWORD=mongodb \
    -p MONGODB_DATABASE=mongodb \
    -p MONGODB_ADMIN_PASSWORD=mongodb

Wire the DB to the rest

$ oc env dc nationalparks \
    -e MONGODB_USER=mongodb \
    -e MONGODB_PASSWORD=mongodb \
    -e MONGODB_DATABASE=mongodb \
    -e MONGODB_SERVER_HOST=mongodb

deploymentconfig "nationalparks" updated

$ oc get dc nationalparks -o yaml

      - env:
        - name: MONGODB_USER
          value: mongodb
        - name: MONGODB_PASSWORD
          value: mongodb
        - name: MONGODB_DATABASE
          value: mongodb
        - name: MONGODB_SERVER_HOST
          value: mongodb

$ oc env dc/nationalparks --list
# deploymentconfigs nationalparks, container nationalparks
MONGODB_USER=mongodb
MONGODB_PASSWORD=mongodb
MONGODB_DATABASE=mongodb
MONGODB_SERVER_HOST=mongodb

Set some labels:

oc label route nationalparks type=parksmap-backend

Redeploy the front-end:

oc rollout latest parksmap

Config Maps

$ wget http://gitlab-127.0.0.1/user98/nationalparks/raw/1.2.1/ose3/application-dev.properties

$ oc create configmap nationalparks --from-file=application.properties=./application-dev.properties

Describe it:

$ oc describe configmap nationalparks
Name:       nationalparks
Namespace:  explore-xx
Labels:     <none>
Annotations:    <none>

Data
====
application.properties: 123 bytes

$ oc get configmap nationalparks -o yaml
apiVersion: v1
data:
  application.properties: |
    # NationalParks MongoDB
    mongodb.server.host=mongodb
    mongodb.user=mongodb
    mongodb.password=mongodb
    mongodb.database=mongodb
kind: ConfigMap
metadata:
  creationTimestamp: 2017-04-04T16:54:38Z
  name: nationalparks
  namespace: explore-xx
  resourceVersion: "298191"
  selfLink: /api/v1/namespaces/explore-xx/configmaps/nationalparks
  uid: 638b0913-1957-11e7-9e39-02ef4875286e

Wire up the configmap:

$ oc set volumes dc/nationalparks --add -m /opt/openshift/config --configmap-name=nationalparks

Now remove the env variables:

$ oc env dc/nationalparks MONGODB_USER- MONGODB_PASSWORD- MONGODB_DATABASE- MONGODB_SERVER_HOST-

Set up some probes:

$ oc set probe dc/nationalparks \
    --readiness \
    --get-url=http://:8080/ws/healthz/ \
    --initial-delay-seconds=20 \
    --timeout-seconds=1
$ oc set probe dc/nationalparks \
    --liveness \
    --get-url=http://:8080/ws/healthz/ \
    --initial-delay-seconds=20 \
    --timeout-seconds=1

CICD Lab

Deploy Jenkins:

$ oc new-app --template="jenkins-ephemeral"

Add permission:

$ oc policy add-role-to-user edit -z jenkins
role "edit" added: "jenkins"

Remove the route label:

$ oc label route nationalparks type-

Create mongo-live

$ oc new-app --template="mongodb-ephemeral" \
    -p MONGODB_USER=mongodb \
    -p MONGODB_PASSWORD=mongodb \
    -p MONGODB_DATABASE=mongodb \
    -p MONGODB_ADMIN_PASSWORD=mongodb \
    -p DATABASE_SERVICE_NAME=mongodb-live

Pull down new configmap:

$ wget http://gitlab-ce-workshop-infra.cloudapps.ksat.openshift3roadshow.com/user98/nationalparks/raw/1.2.1/ose3/application-live.properties

$ oc create configmap nationalparks-live --from-file=application.properties=./application-live.properties

Tag our live build:

$ oc tag nationalparks:latest nationalparks:live

Use our new build:

$ oc new-app --image="nationalparks:live" --name="nationalparks-live"

Set env variables (because configmap is broken in this lab):

$ oc env dc/nationalparks-live \
    -e MONGODB_USER=mongodb \
    -e MONGODB_PASSWORD=mongodb \
    -e MONGODB_DATABASE=mongodb \
    -e MONGODB_SERVER_HOST=mongodb-live

Add a route, load the data:

$ oc expose service nationalparks-live
curl http://nationalparks-live-explore-xx.127.0.0.1/ws/data/load

Add a label:

oc label route nationalparks-live type=parksmap-backend

Disable auto builds for latest:

oc set triggers dc/nationalparks --from-image=nationalparks:latest --remove

Create pipeline:

$ oc new-app dev-live-pipeline \
→     -p PROJECT_NAME=explore-xx
--> Deploying template "openshift/dev-live-pipeline" to project explore-xx

     dev-live-pipeline
     ---------
     CI/CD Pipeline for Dev and Live environments


     * With parameters:
        * Pipeline name=nationalparks-pipeline
        * Project name=explore-xx
        * Dev resource name=nationalparks
        * Live resource name=nationalparks-live
        * ImageStream name=nationalparks
        * GitHub Trigger=a5iYjDTN # generated
        * Generic Trigger=FY3tGSrP # generated

--> Creating resources ...
    buildconfig "nationalparks-pipeline" created
--> Success
    Use 'oc start-build nationalparks-pipeline' to start a build.
    Run 'oc status' to view your app.

Start the pipeline:

$ oc start-build nationalparks-pipeline
build "nationalparks-pipeline-1" started

Check our data. This spits out a boat load of text/json data:

curl http://nationalparks-live-explore-xx.127.0.0.1/ws/data/all

Promote the pipeline via the gui.

Promote from the gui

Rollback:

$ oc rollback nationalparks-live
#5 rolled back to nationalparks-live-3
Warning: the following images triggers were disabled: nationalparks:live
  You can re-enable them with: oc set triggers dc/nationalparks-live --auto

Check on that:

curl http://nationalparks-live-explore-xx.127.0.0.1/ws/info/

Re-enable the new images trigger:

$ oc deploy nationalparks-live --enable-triggers
Flag --enable-triggers has been deprecated, use 'oc set triggers' instead
Enabled image triggers: nationalparks:live

Roll forward:

$ oc rollback nationalparks-live-4
#6 rolled back to nationalparks-live-4
Warning: the following images triggers were disabled: nationalparks:live
  You can re-enable them with: oc set triggers dc/nationalparks-live --auto

Links

  • https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/routes.html
  • https://docs.openshift.com/enterprise/3.0/admin_guide/manage_authorization_policy.html
  • https://docs.openshift.com/enterprise/3.1/dev_guide/deployments.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/new_app.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/service_accounts.html
  • https://docs.openshift.com/enterprise/3.0/dev_guide/volumes.html
  • https://blog.openshift.com/openshift-3-3-pipelines-deep-dive/

Basic Automated Windows 10 post-install

It has been forever and a day since I’ve used Windows systems on a daily basis. In that time, though, the tools for post-install setup have evolved quite a bit. There are now tools like Boxstarter and Chocolatey that, when coupled with PowerShell, give you something akin to Homebrew and dotfiles on OSX (or apt and dotfiles on Ubuntu, etc.).

The following are the commands I use to configure a fresh Windows 10 system:

START http://boxstarter.org/package/url?https://gist.github.com/bunchc/44e380258384505758b6244e615e75ed/raw/d648fffc21cb3cc7df79e50be6c05b05d29c79cc/0-SystemConfiguration.txt

START http://boxstarter.org/package/url?https://gist.githubusercontent.com/bunchc/44e380258384505758b6244e615e75ed/raw/239e8f6ca240a0c365619f242c14017f2f0de43e/1-Base%2520apps%2520setup.txt

START http://boxstarter.org/package/url?https://gist.github.com/bunchc/44e380258384505758b6244e615e75ed/raw/bae2117eba4091a78428493c2c821996ea5e3615/2-Dev%2520apps.txt

START https://github.com/Nummer/Destroy-Windows-10-Spying/releases/download/1.6.722/DWS_Lite.exe

The first command configures Windows updates and various privacy settings. The second command pulls down a number of packages to provide a basic working environment. The third command does much the same with some additional packages. The last command pulls down a utility that helps limit how much data Win10 collects and sends home.

The first three commands launch boxstarter and supply a script to it. I’ve included those here:

Resources

I didn’t do this alone. In this case, my versions of the scripts are almost identical to the originals from GreyKarnival.

Additionally, the privacy tool run at the end comes from here.

packer.io with vSphere

Packer isn’t exactly a new tool. In fact, I covered using Packer to build Vagrant boxes a little while ago. This time around, I’m going to share some notes and the JSON file I used to get a build to upload properly to vSphere.

My Build Environment:

I am running these builds from OSX 10.12.3 with:

  • VMware Fusion 8.5.3
  • packer.io 0.12.3
  • vSphere 6.5
    • ESXi 6.5a
    • VCSA 6.5

Packer JSON Template

The entire json I use is here. I have copied the relevant sections below. First, the variables section. You will want to swap these with values specific to your environment. The values I’ve supplied came from the vGhetto lab builder.

    "variables": {
        "vsphere_host": "vcenter65-1.vghetto.local",
        "vsphere_user": "administrator@vghetto.local",
        "vsphere_pass": "VMware1!",
        "vsphere_datacenter": "Datacenter",
        "vsphere_cluster": "\"VSAN-Cluster\"",
        "vsphere_datastore": "virtual_machines",
        "vsphere_network": "\"VM Network\""
    },

Next, post-processors. Here be the magic.

post-processors

Some highlights:

  • type - tells packer we’re uploading to vsphere
  • keep_input_artifact - setting this to true helps troubleshooting
  • only - tells packer to only run this post-processor for the named builds.
  • the remaining lines - the vSphere specific variables from the prior section.

Note: Only change the variables rather than specifying names directly. Otherwise, OVFTool will get stupid angry about escaping characters.
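
Before kicking off the full build, it can be worth letting packer sanity-check the template; validate will catch most JSON and variable mistakes up front:

packer validate ubuntu-14.04.json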

The Packer to vSphere Build

Once you have all the parts in place, you can run the following command to kick off the packer build that will dump its artifacts into vSphere:

packer build -parallel=false ubuntu-14.04.json

Now, the packer command will produce a LOT of output, even without debugging enabled. If you would like to review said output, or dump it to a file in case something goes sideways:

time { packer build -parallel=false ubuntu-14.04.json; } 2>&1 | tee -a /tmp/packer.log

This will time how long packer takes to do its thing and dump all output to /tmp/packer.log.

When the command finishes you’ll see the following output:

==> ubuntu-14.04.amd64.vmware: Running post-processor: vsphere
    ubuntu-14.04.amd64.vmware (vsphere): Uploading output-ubuntu-14.04.amd64.vmware/packer-ubuntu-14.04.amd64.vmware.vmx to vSphere
Build 'ubuntu-14.04.amd64.vmware' finished.

vSphere client

With that, all should be in working order.

A Simple Terraform on vSphere Build

This post will talk a bit about how to use Terraform to deploy a simple config against vSphere. Simple? Here’s what we’re building:

  • VMs -
    • OpenSUSE 42.2
      • 2 vCPU
      • 8GB Ram
      • 20GB Disk
    • CentOS7
      • 2 vCPU
      • 8GB Ram
      • 20GB Disk

As with prior posts, I am building this on top of a vSphere lab from here, along with the following:

  • VMware Fusion 8.5.3
  • Terraform 0.8.8
  • vSphere 6.5
    • ESXi 6.5a
    • VCSA 6.5

Defining the Environment

To build our two-VM environment, we need to create three files at the root of the directory you plan to build from. These files are:

$ ls -l
-rw-r--r--  1 bunchc  staff  1788 Mar  8 16:02 build.tf
-rw-r--r--  1 bunchc  staff   172 Mar  8 15:01 terraform.tfvars
-rw-r--r--  1 bunchc  staff   162 Mar  8 15:01 variables.tf

Each of these files has the following use:

  • build.tf - defines the infrastructure to build. This includes definitions for VMs, networks, storage, which files to copy where, and then some.
  • variables.tf - defines any variables to be used in build.tf
  • terraform.tfvars - supplies the actual values for the variables

In the following sections we review each file as it pertains to our environment.

build.tf

Below I have broken out the sections of build.tf that are of interest to us. If you are following along, you will want to copy/paste each section into a single file.

This first section tells terraform how to connect to vSphere. You will notice there are no actual values provided. These come from variables.tf and terraform.tfvars

# Configure the VMware vSphere Provider
provider "vsphere" {
    vsphere_server = "${var.vsphere_vcenter}"
    user = "${var.vsphere_user}"
    password = "${var.vsphere_password}"
    allow_unverified_ssl = true
}

The second section defines the OpenSUSE VM. We do this by telling Terraform to create a resource, and then providing the type and name of said resource. The only other thing I will call out in this section is the ‘disk’ clause. When using ‘template’ inside the disk clause, you do not need to specify a disk size.

# Build openSUSE
resource "vsphere_virtual_machine" "opensuse-openstack" {
    name   = "opensuse-openstack"
    vcpu   = 2
    memory = 8192
    domain = "vghetto.local"
    datacenter = "${var.vsphere_datacenter}"
    cluster = "${var.vsphere_cluster}"

    # Define the Networking settings for the VM
    network_interface {
        label = "VM Network"
        ipv4_gateway = "10.0.1.1"
        ipv4_address = "10.0.1.190"
        ipv4_prefix_length = "24"
    }

    dns_servers = ["10.0.1.10", "8.8.8.8"]

    # Define the Disks and resources. The first disk should include the template.
    disk {
        template = "openSUSE-Leap-42.2-NET-x86_64.vmware"
        datastore = "virtual_machines"
        type ="thin"
    }

    # Define Time Zone
    time_zone = "America/Chicago"
}

The third section, which follows, defines the second VM. You will see it is largely a repeat of the first.

# Build CentOS
resource "vsphere_virtual_machine" "centos-openstack" {
    name   = "centos-openstack"
    vcpu   = 2
    memory = 8192
    domain = "vghetto.local"
    datacenter = "${var.vsphere_datacenter}"
    cluster = "${var.vsphere_cluster}"

    # Define the Networking settings for the VM
    network_interface {
        label = "VM Network"
        ipv4_gateway = "10.0.1.1"
        ipv4_address = "10.0.1.180"
        ipv4_prefix_length = "24"
    }

    dns_servers = ["10.0.1.10", "8.8.8.8"]

    # Define the Disks and resources. The first disk should include the template.
    disk {
        template = "CentOS-7-x86_64.vmware"
        datastore = "virtual_machines"
        type ="thin"
    }

    # Define Time Zone
    time_zone = "America/Chicago"
}

variables.tf

Next up in the files we need to make is variables.tf. I’ve provided it wholesale below:

# Variables
variable "vsphere_vcenter" {}
variable "vsphere_user" {}
variable "vsphere_password" {}
variable "vsphere_datacenter" {}
variable "vsphere_cluster" {}

Note: We are only defining things here, not providing them values just yet.

terraform.tfvars

Here’s the good stuff. That is, this is the file that maps values to the variables, and in our case, stores credentials. You will want to add this to .gitignore (or whatever your source control uses):

vsphere_vcenter = "10.0.1.170"
vsphere_user = "administrator@vghetto.local"
vsphere_password = "VMware1!"
vsphere_datacenter = "Datacenter"
vsphere_cluster = "VSAN-Cluster"

Yes, yes, I left my creds in. These are, after all, the default creds for the vGhetto autobuilder.
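
Keeping terraform.tfvars out of version control is a one-liner (assuming Git):

echo "terraform.tfvars" >> .gitignore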

Building the Infrastructure

Now that you have all the config files in place, there are two steps left: validate and deploy. First, we validate:

terraform plan

If you have everything correct so far, you will see the following:

terraform plan

Neat! Let’s build:

terraform apply

If you glance in vCenter, you will notice the build has indeed kicked off:

vCenter screenshot

Terraform also produces the following output when successful:

terraform build

.screenrc tricks

I’m not a tmux user.

There, I said it. I guess as tmux was becoming the thing to use to have lots of terminals open, I’d moved on to being Admin for a while. Who knows. screen 4lyfe.

With that said, here’s a bit of my screenrc that makes life easier:

# Status bar
hardstatus always # activates window caption
hardstatus string '%{= wk}[ %{k}%H %{k}][%= %{= wk}%?%-Lw%?%{r}(%{r}%n*%f%t%?(%u)%?%{r})%{k}%?%+Lw%?%?%= %{k}][%{b} %Y-%m-%d %{k}%c %{k}]'

# Terminal options
term "xterm"
attrcolor b ".I"

# Turn off startup message
startup_message off

# Set the OSX term name to the current window
termcapinfo xterm* 'hs:ts=\\E]2;:fs=\\007:ds=\\E]2;screen\\007'

# In case of ssh disconnect or any weirdness, the screen will auto detach
autodetach on

The first line tells the status bar to always display. The second one tells screen what this status should look like: in this case, the hostname, the windows with their names, and the date/time:

[ bunchc ][ 0$ irssi  (1*$vagrant)  2$ rbac-testing  3-$ docker-01 ][ 2017-02-22 15:35 ]

The next lines tell screen to:

  • Set the term type to xterm for nested ssh
  • Use bright colors for bold items
  • Turn off the boiler plate when starting
  • Set the OSX window title
  • Autodetach if ssh breaks

Getting remote hostnames as window names

This is not so much a screen thing as an ssh thing. First, pull down this script somewhere local; for me, that's /home/bunchc/scripts/.

Then add these two lines to your .ssh/config:

# Screen prompts to the remote hostname
Host *
    PermitLocalCommand yes
    LocalCommand /home/bunchc/scripts/screen_ssh.sh
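
The general idea of that script is simple: when you are inside screen, emit the escape sequence that renames the current window to the host you are connecting to. A minimal stand-in (hypothetical, not the original script) might look like the following; if you use it, append %n to the LocalCommand line above so ssh passes in the hostname you typed:

#!/usr/bin/env bash
# screen_ssh.sh (sketch) - set the current screen window title to the ssh target
# $STY is only set inside a screen session, so this is a no-op elsewhere
host="${1:-remote}"

if [ -n "$STY" ]; then
    # ESC k <name> ESC \ is screen's "set window title" escape sequence
    printf '\033k%s\033\\' "${host%%.*}"
fi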

Reloading the config from within screen

Now that you’ve got these settings, reload the screenrc file: ctrl-a : source ~/.screenrc

Resources

This post comes about after having collected these settings over a long while. I’d love to give credit to all the original authors, but finding posts from 2007 - 2009… well.

Extended Github Like Service on rPI

The good folks over at Hypriot have a wonderful tutorial on how to run your own GitHub-like service. To do this, they use Gogs on a Raspbian image. A great starting point, but, well, what if you wanted to scale it some?

Yes, yes, we’re running on a Raspberry Pi; ‘scale’ in this case just means building it a bit more like you’d deploy it on real hardware.

Forgive my ASCII; the following depicts what we will build:


+------------------------+
|                        |
|      nginx             |
|                        |
+-----------+------------+
            |
            |
+-----------v------------+
|                        |
|      gogs              |
|                        |
+-----------+------------+
            |
            |            
+-----------v------------+
|                        |
|     postgres           |
|                        |
+------------------------+

This build is involved, so let’s dive in.

Before Starting

To complete the lab as described, you’ll need at least one Raspberry Pi running Hypriot 1.x and a recent version of Docker and docker-compose.

Everything that follows was adapted for the rPi, meaning that, with some work, it could be run on x86 Docker as well.

What we’re building

In this lab we will build 3 containers:

  • nginx - A reverse proxy that handles incoming requests to our git server
  • gogs - The GoGit service
  • postgres - a SQL backend for gogs.

Ok, maybe not entirely production, but, a few steps closer.

Getting Started

To get started, we’ll make the directory structure. The end result should look similar to this:

$ tree -d
.
├── gogs
│   ├── custom
│   │   └── conf
│   └── data
├── nginx
│   └── conf
└── postgres
    └── docker-entrypoint-initdb.d

You can create this with the following command:

mkdir gogs-project; cd gogs-project
mkdir -p gogs/custom/conf gogs/data \
    nginx/conf postgres/docker-entrypoint-initdb.d

Next, at the root of the project folder, create a file called ‘env’. This will be the environment file in which we store info about our database.

cat > env <<EOF
DB_NAME=myproject_web
DB_USER=myproject_web
DB_PASS=shoov3Phezaimahsh7eb2Tii4ohkah8k
DB_SERVICE=postgres
DB_PORT=5432
EOF

Define the hosts in docker-compose.yaml

Next up, we define all of the hosts in a docker-compose.yaml file. I’ll explain each as we get into their respective sections:

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - gogs:gogs

gogs:
  restart: always
  build: ./gogs/
  expose:
    - "3000"
  links:
    - postgres:postgres
  volumes:
    - ./gogs/data:/data
  command: gogs/gogs web

postgres:
  restart: always
  image: rotschopf/rpi-postgres
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  env_file:
    - env
  expose:
    - "5432"

Build the Reverse Proxy

Repeated here is the nginx section of the docker-compose.yaml file.

nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  links:
    - gogs:gogs

  • restart - This container should always be running. Docker will restart this container if it crashes.
  • build - Specifies a directory where the Dockerfile for this container lives.
  • ports - Tells docker to map an external port to this container
  • links - Creates a link between the containers and creates an /etc/hosts entry for name resolution

You’ll notice we told docker-compose we wanted to build this container rather than recycle one. To do that, you will need to place a Dockerfile into the ./nginx/ folder we created earlier. The Dockerfile should have the following contents:

FROM hypriot/rpi-alpine-scratch:v3.4
RUN apk add --update nginx \
    && rm -rf /var/cache/apk/*

COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY conf/default.conf /etc/nginx/conf.d/default.conf

CMD ["nginx", "-g", "daemon off;"]

  • FROM - sets our base image.
  • RUN - installs nginx and cleans out the apk cache to save space.
  • COPY - these two lines pull in nginx configurations
  • CMD - sets nginx to run when the container gets launched

Finally, we need to provide the configurations specified in the COPY commands; their contents follow. These should be placed into ./nginx/conf/.

$ cat nginx/conf/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        off;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

This first config provides a very basic setup for nginx, and then pulls additional configuration in using the include line.

$ cat nginx/conf/default.conf
server {

    listen 80;
    server_name git.isa.fuckingasshat.com;
    charset utf-8;

    location / {
        proxy_pass http://gogs:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

}

This file tells nginx to listen on port 80, and to serve all requests to / back to our gogs instance over port 3000.

Let’s look at what we’ve got so far:

$ pwd
/home/pirate/rpi-gogs-docker-alpine/nginx

$ tree
.
├── conf
│   ├── default.conf
│   └── nginx.conf
└── Dockerfile

Building the Go Git service

What follows is how we build our Go Git container. A reminder of what this looks like in docker-compose.yaml:

gogs:
  restart: always
  build: ./gogs/
  expose:
    - "3000"
  links:
    - postgres:postgres
  volumes:
    - ./gogs/data:/data
  command: gogs/gogs web

  • restart - We’d like this running, all the time
  • build - Build our gogs image from the Dockerfile in ./gogs/
  • expose - Tells docker to expose port 3000 on the container
  • links - Ties us in to our database server
  • volumes - Tells docker we want to map ./gogs/data to /data in the container for our repo storage
  • command - launches the gogs web service

Neat, right? Now let’s take a look inside the ./gogs/Dockerfile:

FROM hypriot/rpi-alpine-scratch:v3.4

# Install the packages we need
RUN apk --update add \
    openssl \
    linux-pam-dev \
    build-base \
    coreutils \
    libc6-compat \
    git \
    && wget -O gogs_latest_raspi2.zip \
        https://cdn.gogs.io/gogs_v0.9.128_raspi2.zip \
    && unzip ./gogs_latest_raspi2.zip \
    && mkdir -p /gogs/custom/http /gogs/custom/conf \
    && /gogs/gogs cert \
    -ca=true \
    -duration=8760h0m0s \
    -host=git.isa.fuckingsshat.local \
    && mv *.pem /gogs/custom/http/ \
    && rm -f /*.zip \
    && rm -rf /var/cache/apk/*

COPY custom/conf/app.ini /gogs/custom/conf/

EXPOSE 22
EXPOSE 3000
CMD ["gogs/gogs", "web"]

Unlike the nginx file, there is a LOT going on here.

  • FROM - Same alpine source image. Tiny, quick, and does what we need.
  • RUN - This is actually a bunch of commands tied together into a single line to reduce the number of Docker build steps. This breaks down as follows:
    • apk --update add - install the required packages
    • wget - pull down the latest gogs binary for pi.
    • unzip - decompress it
    • mkdir - create a folder for our config and ssl certs
    • /gogs/gogs cert - creates the ssl certificates
    • mv *.pem - puts the certificates into the right spot
    • rm - these two clean up our image so we stay small
  • COPY - This pulls our app.ini file into our container
  • EXPOSE - tells docker to expose these ports
  • CMD - launch the gogs web service

Still with me? We have one last file to create, the app.ini file. To do that, we will pull down a generic app.ini from the gogs project and then add our specific details.

First, pull in the file:

$ curl -L https://raw.githubusercontent.com/gogits/gogs/master/conf/app.ini \
    > ./gogs/custom/conf/app.ini

Next, find the database section, and provide the data from your env file:

[database]
; Either "mysql", "postgres" or "sqlite3", it's your choice
DB_TYPE = postgres
HOST = postgres:5432
NAME = myproject_web
USER = myproject_web
PASSWD = shoov3Phezaimahsh7eb2Tii4ohkah8k
; For "postgres" only, either "disable", "require" or "verify-full"
SSL_MODE = disable
; For "sqlite3" and "tidb", use absolute path when you start as service
PATH = data/gogs.db

That’s it for this section, which should now look like this:

$ pwd
/home/pirate/rpi-gogs-docker-alpine/gogs
HypriotOS/armv7: pirate@node-01 in ~/rpi-gogs-docker-alpine/gogs
$ tree
.
├── custom
│   └── conf
│       └── app.ini
├── data
└── Dockerfile

Setting up Postgres

Unlike the last two, postgres is fairly simple to set up: instead of building it from scratch, we’re running a community-supplied image. Let’s take a look at the postgres section of docker-compose.yaml:

postgres:
  restart: always
  image: rotschopf/rpi-postgres
  volumes:
    - ./postgres/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
  env_file:
    - env
  expose:
    - "5432"

  • restart - like our other containers, we want this one to run, all the time.
  • image - specifies the postgres image to use
  • volumes - This is a special volume for our postgres container. The container will run any scripts contained within. You’ll see how we use this next.
  • env_file - turns our env file into environment variables within the container
  • expose - makes port 5432 available for connections

Now, in your ./postgres/docker-entrypoint-initdb.d/ directory, we’re going to place a script that will create our database user and database:

$ cat ./postgres/docker-entrypoint-initdb.d/postgres.sh
#!/usr/bin/env sh
psql -U postgres -c "CREATE USER $DB_USER PASSWORD '$DB_PASS'"
psql -U postgres -c "CREATE DATABASE $DB_NAME OWNER $DB_USER"

Cool, once you’ve done that, you have finished the prep work. Now we can make the magic!

Make the Magic!

Ok, so, that was a lot of work. With that in place, you can now build and launch the gogs environment:

docker-compose build --no-cache
docker-compose up -d

Once that completes, point a web browser at your Raspberry Pi on port 80 and enjoy your own GitHub-like service.
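
If you want a quick sanity check from the shell first (run from the project directory on the Pi), something like this will do:

# All three containers should show as Up
docker-compose ps

# nginx should answer on port 80 and proxy through to gogs
curl -I http://localhost/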

Summary

Long post is long. We used docker-compose to build two Docker images and launch a third, all three of which are tied together to provide a GitHub-like service.

Resources