# Octopress New Post Alias

Since I’ve switched to Octopress, I’m constantly bumping into issues that I assume are the result of my lack of Ruby knowledge. The first was having to install rvm (to get a particular version of Ruby? I don’t remember). Now, after a system upgrade, I’m getting:

Gem::LoadError: You have already activated rake 10.4.2, but your Gemfile requires rake 10.3.2. Prepending bundle exec to your command may solve this.

Okay. I can do that, but while I’m at it, I thought it might be simpler to just create an alias for Octopress’ rake commands. So, I’ve added the following file to my system at /usr/local/bin/octo:

#! /bin/bash
bundle exec rake "$@"

Now, to create a new post, all I need to type is:

octo new_post['My title goes here']

Yes, I know ‘octo’ isn’t any shorter than ‘rake’, but it’s shorter than ‘bundle exec rake’ and it saves me the trouble of investigating Ruby-land enough to figure out why the plain old rake command is no longer working.

Update: After writing this, I ran into another issue that required I run bundle update safe_yaml. Something about that seems to have solved this problem, too (so that it’s no longer necessary to prepend bundle exec to the rake command). Nonetheless, I think I’ll keep the ‘octo’ alias. I like it.

# Entering a Docker Container

I’ve been learning more about Docker and I’ve stumbled into the need to enter a running Docker container so that I can check log files to see why a service isn’t running. I learned from a StackOverflow post that there is a new Docker command that lets you easily do this:

docker exec -it [container-id] bash

However, it’s only available in Docker version 1.3 or greater. The version of Ubuntu that I’m running has Docker 1.2. So, I want to add the Docker PPA so I can get more frequent updates from the Docker developers. To do that, I run the following commands:

wget -qO- https://get.docker.io/gpg | sudo apt-key add -
sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker

This also uninstalls the ‘docker.io’ version of the package that’s found in the regular Ubuntu package repository. After installing the newer Docker version, I can use the nifty docker exec function.

Update: The above currently installs Docker 1.4.1, but there is an issue with this version of Docker that causes my Packer.io builds to fail. To resolve this, a particular version of Docker (above 1.3 and below 1.4.x) can be installed with the following command:

sudo apt-get install lxc-docker-1.3.3

Update (Slight Return): And, a handy modification to the docker command is:

docker exec -it $(docker ps -l -q) bash

This will let you into the Docker container that you just started.

# Permission Denied: docker.sock

A quick note from my initial Docker experimentation. When you see:

dial unix /var/run/docker.sock: permission denied

It probably means you need to add your system user to the ‘docker’ group.
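For instance, you can check whether you’re already in the group and, if not, add yourself. This is just a sketch assuming a standard Linux setup with sudo available:

```shell
# If the current user isn't in the 'docker' group, that's what produces the
# "permission denied" on /var/run/docker.sock.
if id -nG | grep -qw docker; then
  echo "already in the docker group"
else
  # usermod needs root; the change only takes effect in new login sessions
  echo "not yet; run: sudo usermod -aG docker \$USER (then log out and back in)"
fi
```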

# Interactive Rebase Commit Squashing

I love Travis-CI for freeing me from having to maintain a Jenkins server, but I do find it a pain to have to make a commit to my GitHub repo in order to test/add a new step in the Travis config file. Their newly announced architecture (using Docker) should make it possible for them to offer a build container for local testing in the future, but for now it’s an iterative commit-then-test process.

The result of this is that I have a bunch of GitHub commits that are really just me testing something. I’d like to be able to condense them all into one commit… hence, I’m learning about squashing Git commits with the interactive rebase feature.

I started using simple rebasing not too long ago, but I’ve never really squashed commits before, so I’m documenting the process for myself (so that I can return to it when I forget it… which I’m sure I will).

To keep things simple, I’ll be working with a ‘develop’ and a ‘master’ branch. All my Travis testing has taken place on the ‘develop’ branch and I’d like to condense all this into a single commit in my ‘master’ branch (which is already littered with these sorts of commits, but I’d like to do a better job about having cleaner commits going forward). So, to start, let’s checkout the ‘develop’ branch and interactively rebase it against the ‘master’ branch.

git checkout develop
git rebase -i master

This gives me something like:

pick 7a31f53 Improved Travis script
pick 464171b Explicitly open AWS Carbon port for Travis
pick 9e770be Fixed typo; added workaround for Ubuntu mirror issue
pick f539b9c Removed exit calls in Travis script
pick 5de58de Added travis_wait to Travis config and refactored config
pick b719d96 Added travis_wait to Travis config and refactored config
pick 472de9e Fixed typo in Travis config
pick 62caf9d Reshuffled Travis configuration steps
pick cec0c3b Move Travis tests into external script
pick 897271c Fixed Travis AWS instance cleanup
pick 8b52e1e Bumping up number of times test is attempted before it's considered a failure
pick 69bc970 Tweaking Travis test script

# Rebase f69c45e..69bc970 onto f69c45e
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

What I’ll need to do is go through all the commits after the first one and mark them as ‘squash’ or ‘fixup’. In this case, I’ll use ‘fixup’ but if I wanted to preserve the messages (in the case of something less trivial than Travis testing) I would use ‘squash’. So, when I’m done, I should have something like:

pick 7a31f53 Improved Travis script
fixup 464171b Explicitly open AWS Carbon port for Travis
fixup 9e770be Fixed typo; added workaround for Ubuntu mirror issue
fixup f539b9c Removed exit calls in Travis script
fixup 5de58de Added travis_wait to Travis config and refactored config
fixup b719d96 Added travis_wait to Travis config and refactored config
fixup 472de9e Fixed typo in Travis config
fixup 62caf9d Reshuffled Travis configuration steps
fixup cec0c3b Move Travis tests into external script
fixup 897271c Fixed Travis AWS instance cleanup
fixup 8b52e1e Bumping up number of times test is attempted before it's considered a failure
fixup 69bc970 Tweaking Travis test script

# Rebase f69c45e..69bc970 onto f69c45e
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

When I exit my text editor, I’m told the commit has been made. If I’d selected ‘squash’ instead of ‘fixup’, I would have been presented with a new window in which to type my commit message (and all the previous messages would now be squashed into one big message, ready for me to edit).

I assume, if I’d wanted to provide a better commit message than my first one above, I could have marked the first commit as ‘reword’ instead of ‘pick’ (and kept all the later ones as ‘fixup’). I would then have been presented with an opportunity to improve my commit message before the commit was finalized.
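That variation on the todo list would look something like this (a sketch using the same commits as above):

```
reword 7a31f53 Improved Travis script
fixup 464171b Explicitly open AWS Carbon port for Travis
fixup 9e770be Fixed typo; added workaround for Ubuntu mirror issue
(… and ‘fixup’ for the rest of the commits, as before)
```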

But, as it is, git log on the ‘develop’ branch now says:

Author: Kevin S. Clarke <ksclarke@gmail.com>
Date:   Thu Jan 1 00:59:28 2015 -0500

Improved Travis script

commit f69c45ea737777250562f5b9c6b86bd009d436f8
Author: Kevin S. Clarke <ksclarke@gmail.com>
Date:   Wed Dec 31 21:13:43 2014 -0500

Added a comment in Packer.io config

And, git log on the ‘master’ branch still says:

commit f69c45ea737777250562f5b9c6b86bd009d436f8
Author: Kevin S. Clarke <ksclarke@gmail.com>
Date:   Wed Dec 31 21:13:43 2014 -0500

Added a comment in Packer.io config

Now, I need to merge the newly rebased ‘develop’ branch into my ‘master’ branch. To do that, I just type:

git checkout master
git merge develop

Then git status tells me I’m one commit ahead of my ‘origin’ repository, so I can now push it up:

git push

That’s it. I have to say that I found the idea of rebasing ‘develop’ against ‘master’ a little counter-intuitive at first. But if you think of it as a two-step process (first step: clean up ‘develop’; second step: merge it into ‘master’), it makes more sense.

Caveats for the Future

Ideally, the above would have been done on a feature branch and then merged into ‘develop’, later to be merged into ‘master’, but I started making edits on ‘develop’ before realizing I’d want to squash them.

To clean up this mess, I’ll go ahead and delete the ‘develop’ branch and create a new ‘develop’ branch from my newly merged ‘master’. In the future, I’ll just use (and then delete) a feature branch (since they’re temporary by design).

Just for completeness, the final steps are:

git branch -D develop
git push origin --delete develop
git checkout -b develop
git push origin develop

Hopefully, these steps are just a workaround for my lack of planning (to squash) and not something I’ll have to do again. This is a new repository, so it shouldn’t have anybody who has cloned it and is working on the previous ‘develop’ branch. I know… Bad developer. Bad!

# Color Coding in Nano

When I’m casually editing a file on the file system I like to use nano. Yes, I admit it. It’s quick and easy. What’s not to like?

Well, it could have color coding. But, wait… it does! I’m a bit surprised I’m just finding this out, but there is a way to enable color coding in nano.

First, you need to find the nanorc.sample file. For me, on Ubuntu, it was gzipped in:

/usr/share/doc/nano/examples/

To use it, I gunzipped it straight into my home directory as ~/.nanorc:

gunzip -c /usr/share/doc/nano/examples/nanorc.sample.gz > ~/.nanorc

Since, at the moment, I’m particularly interested in color coding for JSON files, I added the following to the end of the ~/.nanorc file:

## JSON-type files
include "/usr/share/nano/json.nanorc"

Then I created that file and added the following (found on GitHub) to it:

syntax "json" "\.json$" header "^\{$"

color blue   "\<[-]?[1-9][0-9]*([Ee][+-]?[0-9]+)?\>"  "\<[-]?[0](\.[0-9]+)?\>"
color cyan  "\<null\>"
color brightcyan "\<(true|false)\>"
color yellow ""(\\.|[^"])*"|'(\\.|[^'])*'"
color brightyellow "\"(\\"|[^"])*\"[[:space:]]*:"  "'(\'|[^'])*'[[:space:]]*:"
color magenta "\\u[0-9a-fA-F]{4}|\\[bfnrt'"/\\]"
color ,green "[[:space:]]+$"
color ,red " +"

The next time I open a JSON file, I see pretty color coded syntax. Success!

# NullPointerException on FSBlobIdIterator setChildPaths

Figuring out what went wrong with my fcrepo3 move took a bit of work (i.e., floundering), so I’ll document where I went wrong. I moved the repository file system, including its data directory, between machines and thought I’d successfully updated all the necessary config files. But, when I went to rebuild Fedora’s resource index, I got the following error:

Rebuilding...
Initializing triplestore interface...
Clearing directory /opt/fedora/data/resourceIndex...
Finished.
Rebuild failed: java.lang.NullPointerException
at org.akubraproject.fs.FSBlobIdIterator$DirectoryNode.setChildPaths(FSBlobIdIterator.java:97)
at org.akubraproject.fs.FSBlobIdIterator$DirectoryNode.<init>(FSBlobIdIterator.java:85)
at org.akubraproject.fs.FSBlobIdIterator.<init>(FSBlobIdIterator.java:45)
at org.akubraproject.fs.FSBlobStoreConnection.listBlobIds(FSBlobStoreConnection.java:73)
at org.akubraproject.map.IdMappingBlobStoreConnection.listBlobIds(IdMappingBlobStoreConnection.java:89)
at org.fcrepo.server.storage.lowlevel.akubra.AkubraLowlevelStorage.listBlobIds(AkubraLowlevelStorage.java:506)
at org.fcrepo.server.storage.lowlevel.akubra.AkubraLowlevelStorage.list(AkubraLowlevelStorage.java:282)
at org.fcrepo.server.storage.lowlevel.akubra.AkubraLowlevelStorage.listObjects(AkubraLowlevelStorage.java:173)
at org.fcrepo.server.storage.lowlevel.akubra.AkubraLowlevelStorageModule.listObjects(AkubraLowlevelStorageModule.java:125)
at org.fcrepo.server.utilities.rebuild.Rebuild.run(Rebuild.java:133)
at org.fcrepo.server.utilities.rebuild.Rebuild.main(Rebuild.java:462)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.fcrepo.server.utilities.rebuild.cli.CLILoader.main(CLILoader.java:77)

It turns out the $FEDORA_HOME/server/config/spring/akubra-llstore.xml update had been somehow bungled. In that file, I’ve learned, are two important configurations:

<bean name="fsObjectStore" class="org.akubraproject.fs.FSBlobStore" singleton="true">
<constructor-arg value="urn:example.org:fsObjectStore" />
<constructor-arg value="/opt/fedora/data/objectStore"/>
</bean>

and

<bean name="fsDatastreamStore" class="org.akubraproject.fs.FSBlobStore" singleton="true">
<constructor-arg value="urn:example.org:fsDatastreamStore" />
<constructor-arg value="/opt/fedora/data/datastreamStore"/>
</bean>

In my case, the objectStore configuration had an extra /fedora/ in its path. Once I fixed that, the resource index could then be built without the NullPointerException.

# WebJars

WebJars is a packaging of static resources so that they are available to JVM-based containers and Web frameworks. For instance, with a Servlet 3 container, you only need to add a WebJars-packaged version of a library, like Bootstrap, to your classpath and the static files contained within will be available from your Web application.

For instance, add the following to a Maven pom.xml file and a Jetty server running from within the build will be able to provide access to the Bootstrap files.

<dependencies>
<dependency>
<groupId>org.webjars</groupId>
<artifactId>bootstrap</artifactId>
<version>3.2.0</version>
</dependency>
</dependencies>

Your HTML only needs to contain the following reference:
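For example, something like the following (the exact path inside the jar is my assumption, based on the standard WebJars servlet mapping under /webjars/):

```html
<link rel="stylesheet" href="/webjars/bootstrap/3.2.0/css/bootstrap.min.css">
```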

This works outside of the Maven context as well. Drop a WebJar into your Tomcat 7’s WEB-INF/lib directory and the static files contained within are accessible at the WebJars path. WebJars also work with the Play Framework, Grails, Ring (Clojure), and others.

In addition to being packaged within a jar file, these static files are also made available from jsDelivr, a free Content Delivery Network (CDN). For instance, see:

//cdn.jsdelivr.net/webjars/bootstrap/3.2.0/bootstrap.min.css

Once I learned about the WebJars project, I immediately liked it (and wondered if there was anything that I’d like to access that wasn’t already available in the WebJars format). Since I dabble with a fork of adore-djatoka, I thought it would make sense to publish the OpenSeadragon JavaScript library as a WebJar. So, I created a project to do just that: WebJars-OpenSeadragon.

It’s been accepted and published through the WebJars site. So, OpenSeadragon 1.1.1 is now also available from jsDelivr:

and from the following path, if the WebJars-OpenSeadragon jar is installed in your servlet container’s classpath:
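Following the same path conventions as the Bootstrap example earlier (the exact file paths here are my assumption), those would be along the lines of:

```
//cdn.jsdelivr.net/webjars/openseadragon/1.1.1/openseadragon.min.js
/webjars/openseadragon/1.1.1/openseadragon.min.js
```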

To include WebJars-OpenSeadragon in a Maven-based project, you just need to include the following as a dependency in your pom.xml file.

<dependencies>
<dependency>
<groupId>org.webjars</groupId>
<artifactId>openseadragon</artifactId>
<version>1.1.1</version>
</dependency>
</dependencies>

My project contains a test subproject that uses WebJars-OpenSeadragon (if you’d like an example of using a WebJar from the Maven context). It also includes an HTML example of how to configure OpenSeadragon when using WebJars-OpenSeadragon:

<script type="text/javascript">
OpenSeadragon({
id: "contentDiv",
tileSources: "testpattern.dzi",
showNavigator: true
});
</script>

It’s that easy. My next step is to upgrade my djatoka fork to reference the OpenSeadragon WebJar (rather than provide its own repackaged version of OpenSeadragon). Thanks to the WebJars (and OpenSeadragon) folks for providing access to such fine tools!

# Fixing CSS MIME Type for S3 Octopress

I’ve “rebooted” my worklog using Octopress and, at the same time, started publishing my site’s static output to S3. The problem, I’ve found, comes when I’m ready to deploy the Octopress site to S3. I’m using s3cmd to do the upload, and it’s configured in my Rakefile like:

s3_bucket = "worklog.kevinclarke.info"

desc "Deploy website via s3cmd"
task :s3 do  # task wrapper reconstructed; the task name is a guess
  puts "## Deploying website via s3cmd"
  ok_failed system("s3cmd sync --acl-public --reduced-redundancy public/* s3://#{s3_bucket}/")
end

The deployment works great, except my site shows up in the browser like it doesn’t have the CSS applied. After some digging, I discovered this is because s3cmd uploads the CSS files with the MIME type text/plain instead of text/css. This seems to be the fault of python-magic. There is a patch that’s been merged into the latest version of s3cmd, but Ubuntu GNOME hasn’t updated to it yet. So, the immediate (i.e., easiest) workaround is to just uninstall python-magic.

sudo aptitude remove python-magic

Once I do that, the upload to S3 via s3cmd works without a hitch. The MIME type for CSS files is correctly set to text/css. In the future, when my copy of s3cmd supports it, I’ll add the --no-mime-magic argument to the s3cmd command in my Octopress Rakefile.
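When that happens, it should just be a one-flag change to the s3cmd invocation in the Rakefile (a sketch, assuming the same sync command as above):

```ruby
ok_failed system("s3cmd sync --acl-public --reduced-redundancy --no-mime-magic public/* s3://#{s3_bucket}/")
```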

# Simple Bash Tricks

I learned two new (to me) useful Bash tricks today.

The first is a way to search your Bash history (replacing a little script I wrote to do the same thing):

ctrl-r

The second is a way to jump to the beginning of a line (say after you’ve selected something from your history and need to add sudo in front of it):

ctrl-a

That’s it… on my way to using more of the conveniences that Bash has built into it.

# Synchronizing With the Upstream

Just documenting some simple Git basics for myself. Here is how to keep a local repository up to date with an upstream repo.
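In real use the ‘upstream’ remote would point at a GitHub URL; the sketch below substitutes two throwaway local repositories so it can run anywhere, but the sync steps themselves (remote add, fetch, merge) are the same:

```shell
set -e
# Demonstrate syncing a clone with its upstream, using two throwaway local
# repos in place of GitHub remotes.
tmp=$(mktemp -d)

# 1. An "upstream" repository with one commit.
git init -q "$tmp/upstream"
git -C "$tmp/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream work"

# 2. Our local clone ("origin" would normally be our own fork).
git clone -q "$tmp/upstream" "$tmp/local"
cd "$tmp/local"

# 3. The actual sync: add the upstream remote, fetch it, merge its branch.
git remote add upstream "$tmp/upstream"   # one-time setup
git fetch -q upstream
branch=$(git symbolic-ref --short HEAD)   # master or main, depending on git
git merge -q "upstream/$branch"           # fast-forwards if we have no local work
git log --oneline                         # shows the 'upstream work' commit
```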