Linksys WRT1900ACS

I recently moved into a large residence, and my Time Capsule was having coverage issues: the edges of the house had a very weak wifi signal with fairly frequent dropouts. I've also been playing with a Raspberry Pi recently (headless, of course), and with the Time Capsule, there's no way to see your connected devices. There's a complete lack of visibility into the operation of a Time Capsule in general: you can't see the amount of network traffic or what the device's load looks like. All of these things led me to pick out a new router.

I initially considered a Ubiquiti EdgeRouter and access point, but I wanted AC networking and Ubiquiti's AC offerings are pretty spendy. I also like having a web interface to check on current network conditions. Given that the software was my primary motivation for moving away from the Time Capsule, I decided to pick that out first. In the past, I've had great experiences with OpenWrt, so I decided to pick a router that could run it. After a great deal of searching, I settled on the Linksys WRT1900ACS.

The WRT1900ACS has pretty impressive hardware specs. It has a dual-core 1.6 GHz processor but a paltry 512 MB of RAM. While that is probably plenty for what it's meant to do, it seems quite small for a device that costs over $200. It has a simultaneous dual-band AC radio: the 2.4 GHz band runs at up to 600 Mbps and the 5 GHz band at up to 1300 Mbps.

I'm not a huge fan of its appearance. While I do get a bit of nostalgia when I look at it (the design is similar to the first wifi router my family ever owned), it sticks out a bit more than I think it should. I am thankful that it isn't this bad. It's also larger than I expected; it is by far the largest router I've ever owned. I don't find that to be too big of a deal, but it's a bit hard to hide. I'm not really sure how much the external antennae help, but it has four of them.

All of the Linksys WRT1900AC* models are marketed as "Open Source Ready". Given the model number, that makes sense. It was a bit of a stretch when these models were originally released, as few details and little support were given to the open source projects. Things seem to have improved, as the larger open source projects have since added support for these models. Flashing OpenWrt onto this router is very simple. The web interface does complain that the build isn't recognized, but it will flash it for you just fine.

That being said, it has a forced setup process. I can't remember ever needing to run through a setup process just to make the wired portion of a router work. Until you complete the setup steps, the router refuses to route traffic to its WAN port. It's obnoxious. I'm really glad that I didn't buy this to run the stock firmware; I'm sure it's full of these user-hostile choices.

I didn't know this at the time I purchased it, but OpenWrt support for the ACS model was a bit experimental. While I found that it worked pretty well in use, it did reboot at least once per day. I never really noticed the reboots, as it reboots incredibly fast. OpenWrt's LuCI interface also looked quite dated; it was still reminiscent of the design of early Linksys routers. I always found it quite functional but very displeasing. Luckily, both of these things have changed with the recent 15.05.1 patch release. Since I installed 15.05.1, the router has been incredibly stable, which is exactly what you'd want from your router. It also features a much-improved design. It feels a bit generic, as it now uses what appears to be the default Bootstrap theme. While I do feel it's a bit plain, I really appreciate how much better it looks.

I'm very pleased with this router. It has greatly increased the wifi coverage at my residence: I no longer have any dead zones, and the connection is always quite fast. I really like OpenWrt as well. It is fantastic firmware for a router, and it has given me all of the visibility that I missed on the Time Capsule. It's a great piece of hardware with support from a great open source software project. I highly recommend this setup to anyone who is willing to dig in enough to reflash their router.

Let’s Encrypt

Recently, I've been in the process of setting up a new site from scratch. Completely from scratch: new domain, new design, and new content. This, of course, means new TLS certificates. Instead of buying them from Gandi, as I have done a couple of times for this site, I thought I'd use Let's Encrypt.

Let's Encrypt is a new certificate authority that provides free and automated certificates. While you could previously get TLS certificates from StartSSL, they really burned you on revocation, even in cases of mass revocation. Buying them from Gandi was much better on that front, but there is a cost associated with it. In both cases, getting a certificate issued is a cumbersome process. I was hoping that Let's Encrypt could make this easier.

When you head to Let's Encrypt's website, it's not immediately apparent how you go about getting a certificate issued. It turns out that you need an ACME client to do this. Luckily, there is an official client. On Debian Jessie, it's available from the stable repo, so it's just an aptitude install away. The letsencrypt utility supports a number of different ways to authenticate a site. Since I was setting up a WordPress site and I use Nginx as my webserver, I found the webroot option to be the simplest. All you need to do is run letsencrypt certonly --webroot --webroot-path {{website root}} --domains {{domain name}}. If you don't already have a webserver running, you can have the letsencrypt utility set up a temporary webserver just to authenticate the domain; all you need to do is run letsencrypt certonly --standalone. Both of these methods require you to already have the domain pointed at the server's IP. The end result is a directory in /etc/letsencrypt/live with the certificate and private key. You can just configure your webserver to read the files from there.
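As a concrete sketch (example.com and the paths are placeholders), the end-to-end flow looks something like this:

```bash
# Webroot method: authenticate against an already-running webserver.
letsencrypt certonly --webroot \
  --webroot-path /var/www/example.com \
  --domains example.com

# Standalone method: no webserver running yet.
letsencrypt certonly --standalone --domains example.com

# Either way, the issued files land in the live directory:
ls /etc/letsencrypt/live/example.com/
# cert.pem  chain.pem  fullchain.pem  privkey.pem

# In the Nginx server block, point at the full chain and key:
#   ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
#   ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```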

Let's Encrypt is a much simpler, faster, and cheaper way to get TLS certificates. There's also a module for Apache that takes care of generating the certificate for you; I'll be glad when the Nginx module is no longer experimental. I'll be using Let's Encrypt for all of my certificate needs.

Plex

I've heard great things about Plex, but I'd been stubbornly holding out on trying it. I finally got around to it and, as with many things, I shouldn't have held out quite so long. Plex is a glorious experience. My wife has a huge collection of movies and TV shows on DVDs and Blu-rays. I'm sure she remembers what we own and where to locate it, but I don't. With Plex, I'm able to easily look through all of our movies and shows.

Most of Plex's power comes from its server. The server gets set up on a computer that you plan to leave running all of the time. The only real constraints are that it should have a good amount of disk space and, preferably, a decent processor. Plex handles a large variety of files, but that doesn't mean your device can play them; Plex will transcode the media if your device isn't capable of playing it. This can be a pretty CPU-intensive task. If you're in control of converting the media files, you should be able to avoid any transcoding by picking a good storage format.

This server feeds a wide variety of clients. The most generic option comes from the Plex server also being a DLNA server, which means you might have a number of clients that can watch Plex content already. My television happens to have a DLNA client on it, meaning that I can simply turn on my TV and start watching. I don't do that though, mainly due to the sorry state of the DLNA client. Plex also has a number of clients for different media platforms. The one that I most frequently use is the Apple TV. The Apple TV app is pretty workable, but I find the navigation a bit clunky. It's also annoying that you can't ask Siri to play any of the media in Plex, but that comes down to Apple not providing 3rd parties with a way to integrate with Siri.

The interface is quite obvious. It presents as rows of movie posters or DVD covers, somewhat reminiscent of Netflix's interface without the strange scrolling. It's a workable interface, although it is unoriginal. What Plex amounts to is a version of Netflix filled with only those movies that you own or somehow acquired (ahem). For some people, that will be completely worthless. For those with an extensive collection, Plex can be revolutionary. Suddenly, your entire movie collection is a few button presses, swipes, or clicks away from anywhere in the world.

In a way, Plex exists in a world that doesn't quite exist yet. It seems increasingly unlikely that we'll ever be able to fully utilize it, as media companies seem unwilling to provide DRM-free files. Plex doesn't care where you get your files, so your options range from the legally gray area of ripping disks to piracy. This means that you're either in for a lengthy conversion process or a battle with your conscience. This is the land that Plex inhabits. It doesn't have a particularly good UI; it's workable, but it's nothing revolutionary. The big draw is that it can play anything you throw at it without DRM, and that is only a big deal because the rest of the big players are required to handle DRM.

ZFS on Linux 4.13 in Debian Jessie

The first question that comes to mind is: why bother? The big reason, for me, is Thunderbolt hot-plugging, which made it into 3.17. Unfortunately, Debian Jessie ships with 3.16. Luckily, 4.12 and 4.13 are available from jessie-backports. If you want to use zfsonlinux, though, you'll need to do quite a bit of extra work: zfsonlinux ships packages that depend on the 3.16 kernel. It's also not as simple as just building the zfs package, as the build first creates rpms and then converts them to debs. This is an issue because rpmbuild doesn't like the versioning scheme that is used for Debian's backported kernels.

To start with, youʼll need to download the source for the kernel to compile:
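(A sketch of this step; the exact package name depends on which backports kernel you're after.)

```bash
# Pull the kernel source package from jessie-backports; it drops a
# tarball into /usr/src.
apt-get install -t jessie-backports linux-source-4.13
```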

Then you'll need to untar the source into a writable directory; i.e., cd into the desired directory and run:
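(Assuming the linux-source package above, which leaves its tarball in /usr/src.)

```bash
# Unpack into the current (writable) directory.
tar xaf /usr/src/linux-source-4.13.tar.xz
```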

This next step, building the kernel, is going to take quite a while. From the untarred Linux source directory:
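(A sketch that reuses the running kernel's configuration; adjust to taste.)

```bash
# Start from the current kernel's configuration.
cp /boot/config-"$(uname -r)" .config
make olddefconfig

# Build Debian packages; note the "."-free LOCALVERSION and
# KDEB_PKGVERSION suffix (see below).
make -j"$(nproc)" deb-pkg LOCALVERSION=-custom KDEB_PKGVERSION=1custom
```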

Feel free to change either LOCALVERSION or the suffix to KDEB_PKGVERSION; just make sure that the values you specify don't contain a ".".

It's much easier to do this without zfs already installed, so I'm just going to assume that's where you're at. Install the newly compiled kernel and reboot.
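(Package names below follow from the LOCALVERSION and KDEB_PKGVERSION values chosen earlier.)

```bash
# Install the image and headers produced by make deb-pkg, then reboot.
dpkg -i ../linux-image-*custom*.deb ../linux-headers-*custom*.deb
reboot
```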

Now you have a custom kernel version running. The next step is to install zfs. This mostly follows zfsonlinux's instructions for generic debs, but their instructions are missing a couple of steps. You'll need to download spl and zfs from zfsonlinux; I would suggest grabbing the latest release. You'll also need a few build dependencies.
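(Versions and URLs here are illustrative; grab whatever release is current, and expect configure to point out anything missing from the dependency list.)

```bash
# Build dependencies for spl and zfs, including alien for the
# rpm -> deb conversion.
apt-get install build-essential autoconf libtool gawk alien fakeroot \
  zlib1g-dev uuid-dev libblkid-dev

# Download and unpack the spl and zfs releases.
wget https://github.com/zfsonlinux/spl/releases/download/spl-0.6.5.4/spl-0.6.5.4.tar.gz
wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.5.4/zfs-0.6.5.4.tar.gz
tar xzf spl-0.6.5.4.tar.gz
tar xzf zfs-0.6.5.4.tar.gz
```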

Now we need to compile spl and install the development packages, which are required for building zfs.
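(The deb make targets here come from zfsonlinux's generic-deb instructions.)

```bash
cd spl-0.6.5.4
./configure
make deb           # builds rpms, then converts them to debs via alien
dpkg -i *.deb      # includes the -dev packages that zfs's build needs
cd ..
```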

Finally, we're going to build and install zfs:
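(Same dance as spl.)

```bash
cd zfs-0.6.5.4
./configure
make deb
dpkg -i *.deb
cd ..
```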

Reboot once more, and you should be all set. While that is a bunch of steps, it really isn't too bad.

2015: The Tools I Use

Continuing on what I started last year, here is the list of tools that Iʼve used this year.

Mac

Again this year, my Mac is my primary work device.

  1. neovim — I continue to do most of my work with text, whether that is Ansible playbooks or code. I could easily just use vim, but neovim has a couple of nice extras, mainly that it properly handles pasting without using paste mode.
  2. iTerm 2 — iTerm continues to be great to use. I don't really like the built-in terminal on OS X, so I'm lucky that iTerm exists, especially since I do almost all of my work in the terminal.
  3. tmux — I generally keep iTerm running full screen since I do most of my work there. While this works pretty well, it's a bit of a waste to give that much space to just one thing at a time. So I use an inverted T, where I have one large split on top and two smaller ones on the bottom (see the sketch after this list). The big split on top is generally used for neovim, and then I can run related tasks in the bottom two.
  4. git — git is basically the standard for version control. Git has its flaws, but I really like it.
  5. MailMate — I switched email clients since last year. MailMate definitely feels more like a traditional email client. It's really well done.
  6. Alfred — Alfred is a keyboard launcher. It does many more things than just launching apps. I use it all of the time.
  7. Arq — Arq is a great secure backup solution. It supports many cloud storage providers, so you're able to pick your favorite.
  8. Textual — Textual is a pretty good irc client for OS X.
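
Since I mention the inverted T above, here's one way to script that layout (a sketch; the split percentage is to taste):

```bash
# One large pane on top, two smaller panes side by side on the bottom.
tmux new-session \; \
  split-window -v -p 30 \; \
  split-window -h \; \
  select-pane -t 0
```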

iPhone

  1. Tweetbot — I like using Twitter, but I really don't like Twitter's design decisions. Tweetbot fits me much better. I'm not looking forward to the day when Twitter cuts off 3rd-party access.
  2. Prompt — Prompt is good to have around in case you need to access a server over ssh. It's a very well done ssh client, but ssh on a phone-sized device isn't a fun experience.
  3. Spark — While the built-in mail client on iOS is perfectly functional, I find it quite cumbersome to use. Spark is a really great iOS email client.
  4. Unread — Unread is a pretty great RSS reader on iOS.

Multiple

  1. 1Password — Keeping yourself secure online is hard. Having to remember a unique password for each service is pretty much impossible, particularly if you try to make them secure. 1Password solves this problem. It's so good that it's easier than using the same username and password for everything. Their recently announced team features bring this same great setup to teams. Available for Mac, iOS, and a bunch of other platforms.
  2. slack — We continue to use Slack at work. Slack definitely had momentum last year, but it seems like everyone is using them this year. I like Slack, but I'm not sure it's good enough to warrant this much attention. I also think that it's unfortunate that many open source projects are starting to use it as their primary communication method.
  3. Dash — Dash is a great documentation viewer for Apple's platforms. I use it every day. Available for Mac and iOS.

Server

  1. WordPress — As I previously mentioned, I'm back to using WordPress to manage Ruin. There are definitely some things that I don't like about WordPress, but it's pretty great at handling writing.
  2. ZNC — ZNC is an irc bouncer. It has quite a number of features, but I don't use that many of them. I mainly just use it so that I don't miss anything when my machine is offline.
  3. tarsnap — Tarsnap is a great solution for secure backups. The site's design looks pretty dated, but the service itself is great.

The Party of Fear

It is extremely unfortunate that the United States has developed into a two-party system. It's even more unfortunate that one of those parties is unable to field respectable candidates. As of Tuesday's debate, the two leading candidates, Trump and Cruz, are both literal fascists. The Republicans make two appeals: America isn't safe, and Make America Great Again.

It seems that the Republican party can be characterized by a desire to have the biggest and best military force in the world so that we can stamp out any possible threat (by bombing those fuckers into the ground). Have we learned nothing? Nothing from Vietnam and both Gulf wars? The lesson should be apparent: we can use our military to kill people, but we can't control them. In fact, our belligerent attitude is making us significantly less safe. How many people will join ISIS after we kill their family members as collateral damage?

Unfortunately, the damage may already be done by Trump. Even if he fails to capture the Republican nomination, he has already made open bigotry acceptable. A year ago, I would have expected provably false, racist slander to eliminate a political candidate from any election; instead, it has only propelled his campaign. I, unfortunately, know people that have a hatred of Mexican immigrants. Trump's comments have turned this sort of sentiment into something that can be discussed openly.

Condemning an entire race wasn't enough for Trump; all Muslims are in the crosshairs as well. Not only would he prevent the US from taking in the abysmally small number of Syrian refugees that Obama has committed to, he would also prevent any citizen who happens to be a Muslim from returning to the United States. This is wrong. It is against everything that this country was built upon, and every sane citizen should find this idea repulsive. It is already having repercussions: violence against Muslims is up. It is inciting the true American terrorists, white people.

In addition, Trump advocated committing war crimes during the debate: he would like to target the families of ISIS members. This is flat-out sickening. Under no circumstances should we ever consider doing this, and those who preach it should be nowhere near running this country.

None of these things comes from a position of strength. The primary strategy of the Republican party appears to be the creation of fear and nostalgia. It requires all of us to live in a state of fear, a fear that they alone can resolve. That is not the world I live in, and it shouldn't be yours either. Egregiously, they're also exploiting the widespread hatred of Mexicans and the hatred and fear of Muslims to further solidify their following. I want no part in this, and you shouldn't either.

Back to Basics

I've moved back to WordPress, and I think the reason why is important.

I read Ben Brooks's most recent thoughts on WordPress, and it led me down an important line of thinking that has culminated in what you see now: my return to using WordPress. My initial reaction to reading Ben's post was denial. Why does it matter if I have a complicated cms setup for my writing? So what if I want to spend my time writing my own cms just to run Ruin? It doesn't matter, does it? That's when it hit me: it does matter.

For the longest time, I've wanted to write my own cms. I don't have a particularly good reason for wanting to do this, other than that I enjoy writing and developing software and I have some ideas that I want to try out. All of this is fine, but it isn't the reason that I have this site. I have this site because I intended to write on it. Looking at what I've managed to get out this year makes me sad. Compared to previous years, my output has dropped considerably. Some of it is simply dropping the linked-list style posts; I don't think those are particularly useful to people, so I've stopped doing them.

There are also large gaps where I apparently stopped writing at all. Each one of these is a time when I was going to finish my cms, so I stopped writing until it was "done". That point never actually arrived. On multiple occasions, I've spent weeks writing my new blogging platform only to realize that it would be a very, very long time before it was complete. On most of these occasions, I did have something workable, but it was missing features that I would call essential. At these points, I'd revert to my previous cms, Jekyll, and continue my writing. I was never quite satisfied, though, so I would quickly return to tinkering with making my own.

It took reading Ben Brooks's post for me to step back far enough to evaluate the situation. This cycle is deadly to my writing. Furthermore, I've long had more projects that I wish to explore than I have time for. Building blogging software is nowhere near the top of that list. It also isn't the reason that I have this site. I have the site as a place to publish my writing, not as a place to fiddle with different cmses.

So, I'm doing exactly as he suggests: I'm using WordPress and utilizing the things that the community has created to fulfill all of my functionality desires. It took all of an hour to have all of the functionality that I wanted. Now it's just a matter of making it look the way that I want. Of course, I have to write my theme in PHP, which I don't like, but I can just use _s. It's a small price to pay to be able to concentrate on writing, and on building the tools I actually want instead of a CMS. I just need to remember that.

Replicating Jepsen Results

The requirements for running Jepsen tests and a tool to make it easier.

If you aren't aware, Kyle Kingsbury has a great series of posts testing whether databases live up to their claims. It's an invaluable resource, as many of the databases he has tested don't live up to their stated goals. That being said, some of the posts are getting quite old at this point, so it's possible that the developers have since fixed the issues that caused the failures. Luckily, Kyle's Jepsen project is open source, and you're free to try to replicate his results.

This does take some setup though. You'll need five database servers. It's easiest to use Debian Jessie for this, as that is what Kyle uses, and therefore all of the tests that he's written work against it. You do need to replace systemd with SysV init before the tests will be able to run. You also need a machine to run Jepsen on; you shouldn't try to reuse one of the database servers for this, as the tests will cut off access to some servers at certain points. For the easiest testing process, you'll want the database servers to be named n1-n5. They all need to be resolvable by the other database servers and by the server running the tests. The server running the tests also needs to be able to ssh into all of the database servers using the same username and password/ssh key, with sudo access. These hosts must also exist in the known hosts file in the non-hashed format before Jepsen is able to execute a test. I'm unsure what default values Jepsen uses for the username and password, but you're easily able to change the values it uses for each test. Finally, the server running the tests needs JDK 8 and Leiningen.
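A sketch of the host prep on the test-running machine (the addresses are examples; the database servers need matching /etc/hosts entries too):

```bash
# Make n1-n5 resolvable and record their host keys in the
# non-hashed known_hosts format that Jepsen requires.
for i in 1 2 3 4 5; do
  echo "10.0.0.$i n$i" | sudo tee -a /etc/hosts
  ssh-keyscan -t rsa "n$i" >> ~/.ssh/known_hosts
done
```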

That's quite a bit, isn't it? I thought so, and given the wonderful tooling we have for replicating these sorts of environments, I assumed that someone had surely created a way to spin up a set of servers on AWS to run any of the tests that you would like. I wasn't able to locate one, which likely just means that my search skills were lacking. Since I couldn't find one, I made one using Terraform. jepsen-lab is relatively simple, but it goes through the process of setting up all of the previously stated requirements. It sets up all of the servers, configures them as required, and once that process is complete, it outputs the IP address that you're able to ssh into. It does leave a number of steps for you to complete on your own: you need to clone the Jepsen repo, and you'll need to modify the test configuration for the username and password. The former is simply because I don't know what revision you may wish to use, and the latter is because the step depends on which tests you choose to run. For more information on how to use jepsen-lab, see the readme in the repository.

After getting everything set up, it's just a matter of running lein test from the correct directory and verifying the results. You can also make any modifications you like to see whether they change the results. In future installments, I'll discuss the particular tests that I've tried to replicate, the modifications that I've made, and the results that I've gotten.
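Concretely, that looks something like this (the directory depends on which database's test you're replicating; etcd here is just an example):

```bash
# Clone the Jepsen repo, pick a test directory, and run it.
git clone https://github.com/aphyr/jepsen.git
cd jepsen/etcd
lein test
```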

fpm

For many developers, deploying means checking out a specific revision from version control. While some people consider that a bit of an anti-pattern, I think it's a fine way to deploy applications written in dynamic languages. In theory, you could do the same thing for compiled languages; it just doesn't work well in practice, since it would require you to compile your application on every server during the deploy. While that is possible, it's very inefficient and time consuming. A much better way is to build your application once and then distribute the resulting artifacts. The way that I've chosen to do this is by building native packages, specifically debs.

Generating these debs isn't very difficult, but it took me quite a bit of research to figure out what needed to be there (Debian's packaging guide and Clemens Lee's package building HowTo were both hugely helpful). Once you figure that out, it's just a matter of creating the correct directory structure and running it through dpkg-deb. Alright then, how do you make a similar rpm? Time to do some more research, huh?
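For reference, the by-hand deb process looks something like this (the package name and control fields are made up):

```bash
# Stage the filesystem layout plus the DEBIAN control directory.
mkdir -p myapp/DEBIAN myapp/usr/local/bin
cp myapp-binary myapp/usr/local/bin/

cat > myapp/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: Someone <someone@example.com>
Description: An example package
EOF

# Wrap it all up into a deb.
dpkg-deb --build myapp myapp_1.0-1_amd64.deb
```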

Why should any of this be required? Surely many other people have figured out what is needed, and one of them must have documented their knowledge somehow. The answer to both is: of course. There's an awesome tool called fpm that creates packages of many different types from many different sources. Naturally, it can package up files and directories into debs and rpms.

I've known about fpm for quite some time. In fact, I knew about it before I started building debs by hand. As I mentioned, it's not terribly difficult to use dpkg-deb to produce a deb. I also don't really like that fpm is written in Ruby. While I think Ruby is a fine language, getting it installed with everything needed to build native gem extensions is a pain, a price that I didn't want to pay for a simple cli tool. fpm also requires a bit more setup than that to fully utilize: the rpm output requires the rpmbuild command to be installed, and I'm sure that some of the other outputs require similar commands to be available. I'd love to see a similar tool compiled into a static binary, but I've long given up on ever producing it myself.

As I alluded to earlier, what prompted me to start using fpm was generating rpms, and I've since realized that I shouldn't have dragged my feet for so long. Instead of figuring out everything that is required to generate an rpm, I just used fpm: fpm -s dir -t rpm -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/. Of course, I can simply swap out the rpm with deb to generate a deb instead. This ignores many of the fancier things that fpm can do: you can easily make native packages for gems, Python modules, and CPAN modules (to name a few), and it also supports some more "exotic" formats such as self-extracting scripts and OS X packages. I've converted many of my deb building scripts to use fpm, and I'll be using fpm for all of my packaging needs.
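For reference, here are the two invocations side by side; only the -t value changes:

```bash
# Identical source mapping, two output formats.
fpm -s dir -t rpm -v "$VERSION" -d libgmp10 ~/.local/bin=/usr/local/bin/
fpm -s dir -t deb -v "$VERSION" -d libgmp10 ~/.local/bin=/usr/local/bin/
```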

Disabling Analytics

Running analytics has been subtly shaping what I write about, and I'm changing that.

I've been quite pleased with how this site has been going. It's been growing slowly over time, 10%-20% month over month for the past few months. I think that's pretty good but, obviously, my traffic levels are still basically internet radiation. Given the current state of the site, Ben Brooks's article Death to Analytics really struck a chord with me. While I enjoy seeing my traffic grow, it doesn't provide me any benefit. Clearly the ever-growing traffic hasn't been motivating me to write more. In fact, it's probably a detriment.

Since I know which articles people come to my site to read, I'm inclined to write more things along those lines. Unfortunately, over 60% of visitors come here for the various tutorials that I've written. While I like that I've written these, and I'm glad that people are benefitting from them, I don't really want to keep writing them. I write them when I come across something that I had a hard time doing and when I think I have some knowledge that would be helpful to pass along. They aren't the reason that I write on this site. Feeling pressure to write more of them just keeps me from writing at all on this site, and that makes me feel bad.

It also doesn't matter how many people are visiting my site. While I have enjoyed seeing the number of visitors increase, I don't find people simply visiting my site particularly pleasing. Many of the people that have happened upon my little hovel probably weren't particularly pleased either. Knowing how many times this has occurred isn't something that I should care about and, if I really consider it, I don't care. What I really care about is making an impact on you. Of course, analytics can't tell me that; only you can. I really appreciate it when someone takes the time to start a discussion about one of my articles or lets me know that they enjoy my site. It really made my day when one of you decided to send me some money to support the site. I'd love to have more of these things.

So, I've removed the analytics from this site. I'm going to do what I should have been doing all along: writing about the things that interest me. I'd love to know your thoughts, so please let me know in whatever way you prefer. And if you happen to love what I do here, consider supporting me in some way.