ZFS on Linux 4.13 in Debian Jessie

The first question that comes to mind is: why bother? The big reason, for me, is Thunderbolt hot-plugging, which made it into Linux 3.17. Unfortunately, Debian Jessie ships with 3.16. Luckily, 4.12 and 4.13 are available from jessie-backports. If you want to use zfsonlinux with one of these kernels, you'll need to do quite a bit of extra work. zfsonlinux ships packages that depend on the 3.16 kernel, and it's not as simple as just building the zfs packages yourself either, because the build first creates rpms and then converts them to debs. This is an issue because rpmbuild doesn't like the versioning scheme used for Debian's backported kernels.

To start with, youʼll need to download the source for the kernel to compile:
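Something along these lines should work, assuming you already have jessie-backports in your sources.list (the exact linux-source package version depends on what backports is carrying at the time):

```
# pull the kernel source package from backports; it lands in /usr/src as a tarball
sudo apt-get update
sudo apt-get install -t jessie-backports linux-source
```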

Then you'll need to untar the source into a writable directory, i.e. cd into the desired directory and run:
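Roughly the following; the tarball path is where the linux-source package puts it, so adjust the version to match what you installed:

```
mkdir -p ~/build && cd ~/build
tar xaf /usr/src/linux-source-*.tar.xz
cd linux-source-*
```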

This next step, building the kernel, is going to take quite a while. From the untarred Linux source directory:
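As a sketch, the build looks something like this; the key part is passing make deb-pkg a LOCALVERSION and a KDEB_PKGVERSION suffix that rpmbuild will accept later. Starting from the running kernel's config is just my preference, and you may need fakeroot depending on your setup:

```
# reuse the running kernel's configuration as a starting point
cp /boot/config-"$(uname -r)" .config
make olddefconfig

# build .deb packages for the kernel image and headers
make -j"$(nproc)" deb-pkg LOCALVERSION=-custom KDEB_PKGVERSION="$(make kernelversion)-1"
```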

Feel free to change either LOCALVERSION or the suffix of KDEB_PKGVERSION; just make sure that the values you specify don't contain a dot.

It's much easier to do this without zfs already installed, so I'm just going to assume that's where you're starting from. Install the newly compiled kernel and reboot.
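A sketch of that step, assuming the packages from make deb-pkg ended up in the parent directory:

```
cd ..
sudo dpkg -i linux-image-*custom*.deb linux-headers-*custom*.deb
sudo reboot
```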

Now you have a custom kernel version running. The next step is to install zfs. This mostly follows zfsonlinux's instructions for building generic debs, but their instructions are missing a couple of steps. You'll need to download spl and zfs from zfsonlinux; I would suggest grabbing the latest release. You'll also need a few build dependencies.
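Roughly, the downloads and dependencies look like this. The version numbers are placeholders, and the dependency list is only a starting point, so check the zfsonlinux generic-deb documentation for the exact list for your release:

```
# unpack the spl and zfs release tarballs downloaded from zfsonlinux
tar xzf spl-x.y.z.tar.gz
tar xzf zfs-x.y.z.tar.gz

# build dependencies; alien is what converts the intermediate rpms into debs
sudo apt-get install build-essential autoconf libtool gawk alien fakeroot \
    zlib1g-dev uuid-dev libattr1-dev libblkid-dev
```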

Now we need to compile spl and install the development packages which are required for building zfs.
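Something like the following, assuming the release tarball provides the usual deb-* make targets (this is how the generic-deb instructions did it at the time):

```
cd spl-x.y.z
./configure
make deb-utils deb-kmod   # builds rpms, then converts them to debs via alien
sudo dpkg -i *.deb        # includes the spl -dev packages that zfs needs to build
cd ..
```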

Finally, we're going to build and install zfs:
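The zfs build mirrors the spl one, with the same caveats as above:

```
cd zfs-x.y.z
./configure
make deb-utils deb-kmod
sudo dpkg -i *.deb
cd ..
```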

Reboot one more time, and you should be all set. While that is a bunch of steps, it really isn't too bad.

2015: The Tools I Use

Continuing what I started last year, here is the list of tools that I've used this year.

Mac

Again this year, my Mac is my primary work device.

  1. neovim — I continue to do most of my work with text, whether that is Ansible playbooks or code. I could easily just use vim, but neovim has a couple of nice extras, mainly that it properly handles pasting without needing paste mode.
  2. iTerm 2 — iTerm continues to be great to use. I don't really like the built-in terminal on OS X, so I'm lucky that iTerm exists, especially since I do almost all of my work in the terminal.
  3. tmux — I generally keep iTerm running full screen, since I do most of my work there. While this works pretty well, it's a bit of a waste, as it's a huge amount of space for just one thing at a time. Instead, I use an inverted T, where I have one large split on top and two smaller ones on the bottom. The big split on top is generally used for neovim, and then I can run related tasks in the bottom two (a rough sketch of this layout follows the list).
  4. git — git is basically the standard for version control. Git has its flaws, but I really like it.
  5. MailMate — I switched email clients since last year. MailMate definitely feels more like a traditional email client. It's really well done.
  6. Alfred — Alfred is a keyboard launcher. It does many more things than just launching apps. I use it all of the time.
  7. Arq — Arq is a great secure backup solution. It supports many cloud storage providers so youʼre able to pick your favorite.
  8. Textual — Textual is a pretty good irc client for OS X.
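Since I mentioned the inverted-T tmux layout above, here's a minimal sketch of it as a single command; the 30% bottom split is just an example size:

```
# one large pane on top, two smaller panes along the bottom, focus back on the top pane
tmux new-session \; \
  split-window -v -p 30 \; \
  split-window -h \; \
  select-pane -t 0
```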

iPhone

  1. Tweetbot — I like using Twitter, but I really don't like Twitter's design decisions. Tweetbot fits me much better. I'm not looking forward to the day when Twitter cuts off 3rd-party access.
  2. Prompt — Prompt is good to have around in case you need to access a server over ssh. It's a very well done ssh client, but ssh on a phone-sized device isn't a fun experience.
  3. Spark — While the built-in mail client on iOS is perfectly functional, I find it quite cumbersome to use. Spark is a really great iOS email client.
  4. Unread — Unread is a pretty great RSS reader on iOS.

Multiple

  1. 1Password — Keeping yourself secure online is hard. Having to remember a unique password for each service is pretty much impossible, particularly if you try to make them secure. 1Password solves this problem. Itʼs so good that itʼs easier than using the same username and password for everything. Their [recently announced team features][15] are bringing this same great setup to teams. Available for Mac, iOS and a bunch of other platforms.
  2. Slack — We continue to use Slack at work. Slack definitely had momentum last year, but it seems like everyone is using it this year. I like Slack, but I'm not sure it's good enough to deserve this much attention. I also think that it's unfortunate that many open source projects are starting to use it as their primary communication method.
  3. Dash — Dash is a great documentation viewer for Apple's platforms. I use it every day. Available for Mac and iOS.

Server

  1. WordPress — As I previously mentioned, I'm back to using WordPress to manage Ruin. While there are definitely some things that I don't like about WordPress, it's pretty great at handling writing.
  2. ZNC — ZNC is an irc bouncer. It has quite a number of features, but I don't use that many of them. I mainly just use it so that I don't miss anything when my machine is offline.
  3. tarsnap — Tarsnap is a great solution for secure backups. The site's design looks pretty dated, but the service itself works very well.

The Party of Fear

It is extremely unfortunate that the United States has developed into a two-party system. It's even more unfortunate that one of those parties is unable to field respectable candidates. Going into Tuesday's debate, the two leading candidates, Trump and Cruz, are both literal fascists. The two appeals of the Republicans: America isn't safe, and Make America great again.

It seems that the Republican party can be characterized by a desire to have the biggest and best military force in the world so that we can stamp out any possible threat (by bombing those fuckers into the ground). Have we learned nothing? Nothing from Vietnam and both Gulf wars? The lesson should be apparent: we can use our military to kill people, but we can't control them. In fact, our belligerent attitude is making us significantly less safe. How many people will join ISIS after we kill their family members as collateral damage?

Unfortunately, the damage may already be done by Trump. Even if he fails to capture the Republican nomination, he has already made open bigotry acceptable. A year ago, I would have expected provably false, racist slander to eliminate a political candidate from any election; instead, it has only propelled his campaign. I, unfortunately, know people who have a hatred of Mexican immigrants. Trump's comments have made this sort of sentiment into something that can be discussed openly.

Condemning an entire race wasn't enough for Trump; all Muslims are in his crosshairs as well. Not only would he prevent the US from taking in the abysmally small number of Syrian refugees that Obama has committed to, he would also prevent any citizen who happens to be a Muslim from returning to the United States. This is wrong. It is against everything that this country was built upon, and every sane citizen should find this idea repulsive. It is already having repercussions: violence against Muslims is up. It is inciting the true American terrorists, white people.

In addition, Trump advocated for committing war crimes during the debate. He would like to target the families of ISIS members. This is flat-out sickening. Under no conditions should we ever consider doing this, and those who preach it should be nowhere near running this country.

None of these things comes from a position of strength. The primary strategies of the Republican party appear to be the creation of fear and nostalgia. Their strategy requires all of us to live in a state of fear, a fear that they alone can resolve. That is not the world that I live in, and it shouldn't be yours either. Egregiously, they're also exploiting the widespread hatred of Mexicans and the hatred and fear of Muslims to further solidify their following. I want no part in this, and you shouldn't either.

Back to Basics

I’ve moved back to WordPress and I think the reason why is important.

I read Ben Brooks's most recent thoughts on WordPress, and it led me down a line of thinking that has culminated in what you see now: my return to using WordPress. My initial reaction to reading Ben's post was denial. Why does it matter if I have a complicated cms setup for my writing? So what if I want to spend my time writing my own cms just to run Ruin? It doesn't matter, does it? That's when it hit me: it does matter.

For the longest time, I've wanted to write my own cms. I don't have a particularly good reason for why, other than that I enjoy writing and developing software and I have some ideas that I want to try out. All of this is fine, but it isn't the reason that I have this site. I have this site because I intended to write on it. Looking at what I've managed to get out this year makes me sad. Compared to previous years, my output has dropped considerably. Some of it is simply dropping the linked-list-style posts; I don't think those are particularly useful to people, so I've stopped doing them.

There are also large gaps where I apparently stopped writing at all. Each one of these is a time when I was going to finish my cms, so I stopped writing until it was “done”. That point never actually arrived. On multiple occasions, I've spent weeks writing my new blogging platform only to realize that it would be a very, very long time before it was complete. On most of these occasions, I did have something workable, but it was missing features that I would call essential. At that point, I'd revert to my previous cms, Jekyll, and continue my writing. I was never quite satisfied, though, so I would quickly return to tinkering with making my own.

It took reading Ben Brooks's post for me to step back far enough to evaluate the situation. This cycle is deadly to my writing. Furthermore, I've long had more projects that I wish to explore than I have time for. Building blogging software is nowhere near the top of that list. It also isn't the reason that I have this site. I have the site as a place to publish my writing, not as a place to fiddle with different cmses.

So, I'm doing exactly as he suggests: I'm using WordPress and utilizing the things that the community has created to fulfill all of my functionality desires. It took all of an hour to have all of the functionality that I wanted. Now it's just a matter of making it look the way that I want. Of course, I have to write my theme in PHP, which I don't like, but I can just use _s. It's a small price to pay to be able to concentrate on writing and building the tools I want instead of a CMS. I just need to remember that.

Replicating Jepsen Results

The requirements for running Jepsen tests and a tool to make it easier.

If you aren't aware, Kyle Kingsbury has a great series of posts testing whether databases live up to their claims. It's an invaluable resource, as many of the databases he has tested don't live up to their stated goals. That being said, some of the posts are getting quite old at this point, so it's possible that the developers have since fixed the issues that caused them to fail. Luckily, Kyle's Jepsen project is open source, and you're free to try to replicate his results.

This does take some setup though. You'll need 5 database servers. It's easiest to use Debian Jessie for this, as that is what Kyle uses, so all of the tests that he's written work against it. You do need to replace systemd with SysV init before the tests will run. You also need a machine to run Jepsen on. You shouldn't try to reuse one of the database servers for this, as the tests will cut off access to some servers at certain points. For the easiest testing process, you'll want the database servers to be named n1-n5. They all need to be resolvable from the other database servers and from the server running the tests. The server running the tests also needs to be able to ssh to all of the database servers using the same username and password/ssh key, and that user needs sudo access. These hosts must also exist in the known hosts file in the non-hashed format before Jepsen is able to execute a test. I'm unsure what default values Jepsen uses for the username and password, but you're easily able to change the values that it uses for each test. Finally, the server running the tests needs JDK 8 and Leiningen.
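To make the hostname and known-hosts requirements concrete, here's roughly what I mean on the control node (the IP addresses are placeholders, and the same /etc/hosts entries need to exist on each database node too):

```
# /etc/hosts on the control node and on each of n1-n5
# 10.0.0.11 n1
# 10.0.0.12 n2
# 10.0.0.13 n3
# 10.0.0.14 n4
# 10.0.0.15 n5

# make sure new known_hosts entries aren't hashed, then pre-populate them
echo 'HashKnownHosts no' >> ~/.ssh/config
ssh-keyscan n1 n2 n3 n4 n5 >> ~/.ssh/known_hosts
```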

That's quite a bit, isn't it? I thought so, and given the wonderful tooling we have for replicating these sorts of environments, I assumed that someone had surely created a way to spin up a set of servers on AWS to run any of the tests that you would like. I wasn't able to locate one, which likely just means that my search skills were lacking. Since I couldn't find one, I made one using Terraform. jepsen-lab is relatively simple, but it goes through the process of setting up all of the previously stated requirements. It sets up all of the servers, configures them as required and, once that process is complete, outputs the IP address that you're able to ssh into. It does leave a couple of steps for you to complete on your own: you need to clone the Jepsen repo, and you'll need to modify the test configuration for the username and password. The former is simply because I don't know what revision you may wish to use, and the latter is because the step depends on which tests you choose to run. For more information on how to use jepsen-lab, see the readme in the repository.

After getting everything set up, it's just a matter of running lein test from the correct directory and verifying the results. You can also make any modifications you like to see whether they change the results of the tests. In future installments, I'll discuss the particular tests that I've tried to replicate, the modifications that I've made and the results that I've gotten.
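Concretely, a run looks something like this; the directory name is just an example, so pick whichever database's test you want to replicate:

```
cd jepsen/etcd   # or zookeeper, elasticsearch, etc.
lein test
```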

fpm

For many developers, the way that they deploy is by checking out a specific revision from version control. While some people consider that to be a bit of an anti-pattern, I think it's a fine way to deploy applications written in dynamic languages. In theory, you could do the same thing for compiled languages; it just doesn't work well in practice. It would require you to compile your application on every server during the deploy. While that is possible, it's very inefficient and time consuming. A much better way is to build your application once and then distribute the resultant artifacts. The way that I've chosen to do this is by building native packages, specifically debs.

Generating these debs isn't very difficult. It took me quite a bit of research to figure out what needed to be there (Debian's packaging guides and Clemens Lee's package-building HowTo were both hugely helpful), but once you figure that out, it's just a matter of creating the correct directory structure and running it through dpkg-deb. Alright then, how do you make a similar rpm? Time to do some more research, huh?
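For reference, the by-hand version amounts to something like this; the package name and contents are purely illustrative:

```
mkdir -p myapp/DEBIAN myapp/usr/local/bin
cp target/myapp myapp/usr/local/bin/

cat > myapp/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0.0
Architecture: amd64
Maintainer: Example <you@example.com>
Description: Example application packaged by hand
EOF

dpkg-deb --build myapp myapp_1.0.0_amd64.deb
```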

Why should any of this be required? Surely many other people have figured out what is needed, and one of them must have documented that knowledge somehow. The answer to both, of course, is yes. There's an awesome tool called fpm that creates packages of many different types from many different sources. Naturally, it can package up files and directories into debs and rpms.

I've known about fpm for quite some time. In fact, I knew about it before I started building debs by hand. As I mentioned, it's not terribly difficult to use dpkg-deb to produce a deb. I also don't really like that fpm is written in Ruby. While I think Ruby is a fine language, getting it installed with everything that is needed to build native gem extensions is a pain, and one I didn't want to pay for a simple cli tool. fpm also requires a bit more setup than just Ruby to fully utilize: the rpm output needs the rpmbuild command installed, and I'm sure some of the other outputs need similar commands available. I'd love to see a similar tool compiled into a static binary, but I've long given up on ever producing this tool myself.

As I alluded to earlier, what prompted me to start using fpm was generating rpms. I've since realized that I shouldn't have dragged my feet for so long. Instead of figuring out everything that is required to generate an rpm, I just used fpm: fpm -s dir -t rpm -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/. Of course, I can simply swap out the rpm with deb to generate a deb instead. This ignores many of the fancier things that fpm can do. You can easily make native packages for gems, Python modules, and CPAN modules (to name a few). It also supports some more “exotic” formats, such as self-extracting scripts and OS X packages. I've converted many of my deb-building scripts to use fpm, and I'll be using fpm for all of my packaging needs going forward.
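For comparison, here's the deb variant of that command along with one of the fancier sources; the gem name is just an example:

```
# same directory-based package, but producing a deb instead of an rpm
fpm -s dir -t deb -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/

# build a native deb straight from a rubygem
fpm -s gem -t deb json
```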

Disabling Analytics

Running analytics has been subtly shaping what I write about, and I'm changing that.

I've been quite pleased with how this site has been going. It's been growing slowly over time; for the past few months, 10%–20% month over month. I think that's pretty good but, obviously, it means my traffic levels are basically internet radiation. Given that is the current state of the site, Ben Brooks's article, Death to Analytics, really struck a chord with me. While I enjoy seeing my traffic grow, it doesn't provide me any benefit. Clearly the ever-growing traffic hasn't been motivating me to write more. In fact, it's probably a detriment.

Since I know which articles people come to my site to read, I'm inclined to write more things along those lines. Unfortunately, over 60% of people come here for the various tutorials that I've written. While I like that I've written these, and I'm glad that people are benefitting from them, I don't really want to keep writing them. I write them when I come across something that I had a hard time doing and when I think I have some knowledge that would be helpful to pass along. They aren't the reason that I write on this site. Feeling pressure to write more of them just keeps me from writing at all on this site, and that makes me feel bad.

It also doesn't really matter how many people are visiting my site. While I have enjoyed seeing the number of visitors increase, I don't find people simply visiting my site particularly pleasing. Many of the people that have happened upon this little site of mine probably weren't particularly pleased either. Knowing how many times this has occurred isn't something that I should care about and, if I really consider it, I don't care. What I really care about is making an impact on you. Of course, analytics can't tell me that; only you can. I really appreciate it when someone takes the time to start a discussion about one of my articles or lets me know that they enjoy my site. It really made my day when one of you decided to send me some money to support the site. I'd love to see more of that.

So, I've removed the analytics from this site. I'm going to do what I should have been doing all along: writing about the things that interest me. I'd love to know your thoughts, so please let me know in whatever way you prefer. And if you happen to love what I do here, consider supporting me in some way.

Otto

Otto is a major evolution of Vagrant that fixes a variety of its pain points.

Two weeks ago, at their first ever HashiConf, HashiCorp announced two new tools, Otto and Nomad. Both are great new tools, but I'm going to concentrate on the former, as I'm more interested in it. For the first time, HashiCorp is rethinking one of their current products, Vagrant.

I use Vagrant every day; it's immensely useful. It's a great way to set up isolated and consistent development environments. All of that comes with a cost though: setting up Vagrant is quite a bit of work. You have to figure out how to provision the environment. Vagrant has all of the standard choices built in, so you can pick your favorite, but that requires you to have some existing knowledge. You could provision your environment using shell scripts, but that quickly gets painful. As this is a fairly large pain point, a variety of tools have sprung up in an attempt to fix it, such as Puphpet and Rove. For a while, I was really excited by this sort of thing. I almost built a community site with Nathan LeClaire for hosting Vagrant environments after he came up with the idea. It didn't work out, as it was a really busy time for both of us. After quite a bit of thinking, I'm glad that we didn't build it. It just wasn't the right way to move things forward.

The other big problem with Vagrant is moving your app into production. You put in a lot of work to build your development environment, but there's a pretty good chance that you'll need to put in a bunch more work to prepare your production environment. The quick setup that you do for your development environment will not be sufficient for production. In a lot of ways, Vagrant seems to work better if you're working backwards from your production environment. Being able to replicate your exact production environment has lots of benefits, but if you don't already have an existing set of scripts, roles, or modules, then using Vagrant is going to take a lot of setup to get going.

That's where Otto comes in. Otto is HashiCorp rethinking how you should set up development environments. It can automatically detect the type of application that you're developing and build an appropriate development environment for it. Of course, this leverages Vagrant under the covers; it just skips the laborious setup process. The other big thing that Otto provides is a way to move your application into production.
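To give a feel for it, the whole workflow is just a handful of commands (this is the CLI as it shipped at launch, so the details may change):

```
otto compile   # detect the app type and generate environment config
otto dev       # bring up the Vagrant-backed development environment
otto infra     # stand up the supporting infrastructure (AWS at launch)
otto build     # build a deployable artifact of the application
otto deploy    # push that artifact onto the infrastructure
```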

I think Otto is the answer to a question that I've been pondering for quite some time: how should a small group of developers create and deploy an application? There aren't a whole lot of good options for small teams. Recently there has been an explosion of tools for simplifying application management, but they all seem to be focused on much larger teams and applications. Things like Mesos, Kubernetes and Docker are all great, but they require quite a bit of knowledge to run. For a small team, they're all too much of a knowledge investment to be useful. Deploying to plain servers also requires too much knowledge to keep running and secure. The only good option here is Heroku, but that isn't without its downsides: Heroku is extremely expensive for what it provides, and it ties you to a proprietary platform.

Otto really fills this need. When it comes time to move your app into production, Otto will create industry-standard infrastructure for you. This is very important, as it means many hard-earned lessons are automatically applied. I've felt that things like Chef roles and Puppet modules presented a similar opportunity, but both have fallen well short of that goal. This allows developers to get back to what they do best: improving their application.

As with most of HashiCorp's products, Otto is launching with a small set of functionality. The two main limitations are that the app types Otto supports are rather limited and that there is only one type of deployment available. Of course, both of these things will improve over time, but they've kept me from being able to start using Otto. These days, I'm spending most of my time working in Clojure, and Otto doesn't currently support JVM applications. In the current version, Otto only supports Docker, Go, PHP, Node, and Ruby, though those do cover a large swath of developers. Otto will also only deploy to AWS, which I don't use due to the relatively high cost. I really want to use Otto, but its features aren't quite enough for me yet.

Otto is an important evolution of a crucial DevOps tool, Vagrant. It makes huge strides forward for many of the current use cases. It removes the biggest pain point of getting Vagrant up and running. It also fills a crucial need by providing a good way to move applications into production. I’m looking forward to using Otto in the future.

Apple Music

Apple Music has been out for a while and I've been using it. It's nice in some ways but frustrating in others.

Apple Music seems to be rather polarizing. Quite a number of people have been fairly disappointed in it. From what I've read, your opinion of it will be largely determined by what you were using before Apple Music. If you currently have a large number of songs in iTunes, then you're unlikely to like Apple Music. On the other hand, if you're currently using a music streaming service like Rdio or Spotify, then there is a lot to like. I happen to have been a long-time customer of Rdio.

The most obvious advantage of Apple Music is its deep integration with iOS. This is definitely an unfair advantage for Apple. In the past few years, Apple has introduced APIs that let 3rd-party audio apps integrate more deeply into iOS, but it's still not quite at parity. My car's audio system can connect over Bluetooth, and in my previous usage of Rdio, it would quite frequently fail to start playing when I got into my car. That has not happened once with Apple Music. It is also currently the only native music app on the Apple Watch, though that, of course, will be changing with watchOS 2. Then there is the Siri integration. You can ask Siri to play one of your playlists, an artist or even a song, and it will start playing in Apple Music. It seems unlikely that this particular functionality will ever appear for 3rd-party apps, although the opening up of Spotlight to search 3rd-party apps in iOS 9 does make this scenario seem plausible.

The initial setup for Apple Music is a little bit wonky. The interface doesn't make it clear when you've selected a sufficient number of genres, so you have to figure out that you've selected enough and then hit next. The artist-selection step seems a bit more straightforward. The initial artists that Apple Music suggested weren't really my taste, but after selecting the couple that I did like and hitting “More Artists” a few times, the suggestions got better. There is a limit to the number of artists that you can select: as you select additional artists, the screen fills up with their bubbles, and when you hit the More Artists button again, it simply replaces the artists that you didn't select with new ones. This places a hard cap on the number of artists that you can select. Additionally, when you have quite a few artists selected, the interface is extremely slow to scroll. It is clearly optimized for selecting a small number of artists.

Apple Music Popup

The initial playlist suggestions were OK. They were pretty much exactly what I asked for; they were all related to the selections that I made. It was a mix of deep cuts from my favorite bands with a smattering of genre-focused playlists. These days, I find the playlist suggestions to be a bit better, but I don't often listen to them. I use Apple Music in almost exactly the same way that I used Rdio, which is mostly picking my favorite songs and downloading them for offline use. However, I have found a few songs that I like by listening to the suggestions.

Managing songs is a bit cumbersome on what must be the primary device, the iPhone (purely because there are way more iPhones than Macs or iPads). Apple has hidden most of the actions that you can take behind a pop-up menu. Not only is this an obnoxiously long list, most of the things that you might want to do with a song are hidden in there. Strangely, the “heart” is not available in that menu; as far as I can tell, it is only available from the now-playing screen. The options that are there are somewhat confusing. You can add a song to your music, you can make it available offline and you can add it to a playlist. Does making it available offline add it to your music? Does adding it to a playlist add it to your music? It's not remotely clear. There is a similar set of actions for albums, but the heart is far easier to get to there.

I mostly avoid all of that complexity. I simply mark the songs that I like as loved. Then, I have a smart playlist that contains all of my loved songs, and that playlist is set to be available offline. Using it this way is mostly automatic: I hit the heart to add a song to my playlist, and then it is downloaded for offline use on my iPhone.

I can't exactly call Apple Music a runaway success, but I do find it to be better than any of the other options. I guess it works for my very limited use case.