Linksys WRT1900ACS

I recently moved into a large residence and my Time Capsule was having some coverage issues. The edges of my residence had a very weak wifi signal with somewhat frequent dropouts. I’ve also been playing with a Raspberry Pi recently (headless, of course). With the Time Capsule, there’s no way to see your connected devices. There is also a complete lack of visibility into the operation of a Time Capsule: you can’t see the amount of network traffic or what the device’s load looks like. All of these things led me to pick out a new router.

I initially considered using a Ubiquiti EdgeRouter and access point, but I wanted AC networking and Ubiquiti’s AC offerings are pretty spendy. I also like having a web interface to check on current network conditions. Given that the software was my primary motivation for moving away from the Time Capsule, I decided to pick that out first. In the past, I’ve had great experiences with OpenWrt, so I decided to pick out a router that could run it. After a great deal of searching, I decided upon the Linksys WRT1900ACS.

The WRT1900ACS has pretty impressive hardware specs. It has a dual-core 1.6 GHz processor but a paltry 512 MB of RAM. While that is probably plenty for what it is meant to do, it seems quite small for a device that you’re spending over $200 on. It has a simultaneous dual-band AC radio: the 2.4 GHz band runs at up to 600 Mbps and the 5 GHz band runs at up to 1300 Mbps.

I’m not a huge fan of its appearance. While I do get a bit of nostalgia when I look at it (the design is similar to the first wifi router that my family ever owned), it sticks out a bit more than I think it should. I am thankful that it isn’t this bad. It is a bit larger than I expected; it is by far the largest router I’ve ever owned. I don’t find that to be too big of a deal, but it’s a bit hard to hide. I’m not really sure how much the external antennae help, but it has four of them.

All of the Linksys WRT1900AC* models are marketed as “Open Source Ready”. Given the model number, that makes sense. It was a bit of a stretch when these models were originally released, as not many details or much support were given to the open source projects. Things seem to have improved here, as the larger open source projects have added support for these models. Flashing OpenWrt onto this router is very simple: the web interface does complain that the build isn’t recognized, but it will flash it for you just fine.
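
For what it’s worth, the process amounts to uploading OpenWrt’s “factory” image through the stock web UI and then using sysupgrade for later releases. The image names below are placeholders; grab the exact files from the OpenWrt wiki page for the WRT1900ACS.

# initial flash: upload the factory image (e.g. openwrt-15.05.1-mvebu-…-factory.img)
# through the stock firmware's update page; for later releases, over ssh:
scp openwrt-15.05.1-mvebu-…-sysupgrade.img root@192.168.1.1:/tmp/
ssh root@192.168.1.1 sysupgrade -v /tmp/openwrt-15.05.1-mvebu-…-sysupgrade.img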

That being said, it has a forced setup process. Prior to this, I can’t remember ever needing to run through a setup process in order to make the wired portion of a router work. Until you complete the setup steps, the router refuses to route traffic to its WAN port. It’s obnoxious. I’m really glad that I didn’t buy this for the stock firmware; I’m sure it is full of these user-hostile choices.

I didn’t know this at the time I purchased it, but OpenWrt support for the ACS model was a bit experimental. While I found that it worked pretty well in use, it did reboot at least once per day. I never really noticed the reboots, as it reboots incredibly fast. OpenWrt’s LuCI interface also looked quite dated; it was still reminiscent of the design of early Linksys routers. I always found it to be quite functional but also very displeasing. Luckily, both of these things have changed with the recent 15.05.1 patch release. Since I installed 15.05.1, the router has been incredibly stable, exactly what you’d want from your router. It also features a much-improved design. It feels a bit generic, as it’s now using what appears to be the default Bootstrap theme. While I do feel that it’s a bit plain, I really appreciate how much better it looks.

I’m very pleased by this router. It has greatly increased the wifi coverage at my residence: I no longer have any dead zones and the connection is always quite fast. I really like OpenWrt as well. It is a fantastic firmware for a router, and it has given me all of the visibility that I missed on the Time Capsule. It’s a great piece of hardware with support from a great open source software project. I highly recommend this setup to anyone who is willing to dig in enough to reflash their router.

Let’s Encrypt

Recently, I’ve been in the process of setting up a new site from scratch. Completely from scratch: new domain, new design, and new content. This, of course, means new TLS certificates. Instead of buying them from Gandi, as I have done a couple of times for this site, I thought I’d use Let’s Encrypt.

Let’s Encrypt is a new certificate authority that provides free and automated certificates. While you could previously get TLS certificates from StartSSL, they really burned you on revocation, even in cases of mass revocation. Buying them from Gandi was much better because of these sorts of issues, but there is a cost associated with it. In both cases, getting a certificate issued is a cumbersome process. I was hoping that Let’s Encrypt could make this process easier.

When you head to Let’s Encrypt’s website, it’s not immediately apparent how you go about getting a certificate issued. It turns out that you need an ACME client to do this. Luckily, there is an official client. On Debian Jessie, it’s available from the stable repo, so it’s just an aptitude install away. The letsencrypt utility supports a number of different ways to authenticate a site. Since I was setting up a WordPress site and I use Nginx as my webserver, I found the webroot option to be the simplest. All you need to do is run:

letsencrypt certonly --webroot --webroot-path {{website root}} --domains {{domain name}}

If you don’t already have a webserver running, you can have the letsencrypt utility set up a temporary webserver just to authenticate the domain:

letsencrypt certonly --standalone

Both of these methods require you to already have the domain pointed at the server’s IP. The end result is a directory in /etc/letsencrypt/live with the certificate and private key. You can just configure your webserver to read the files from there.
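
For reference, here’s roughly what that looks like in an Nginx server block. The domain is a placeholder; the directory under /etc/letsencrypt/live is named after the first domain on the certificate.

server {
    listen 443 ssl;
    server_name example.com;

    # fullchain.pem bundles the certificate and the intermediate chain
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ... the rest of the site configuration
}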

Let’s Encrypt is a much simpler, faster, and cheaper way to get TLS certificates. There’s also a module for Apache that takes care of generating the certificate for you; I’ll be glad when the Nginx module is no longer experimental. I’ll be using Let’s Encrypt for all of my certificate needs.

ZFS on Linux 4.3 in Debian Jessie

The first question that comes to mind is: why bother? The big reason, for me, is Thunderbolt hot-plugging, which made it into 3.17. Unfortunately, Debian Jessie ships with 3.16. Luckily, newer kernels such as 4.3 are available from jessie-backports. If you want to use zfsonlinux, you’ll need to do quite a bit of extra work: zfsonlinux ships packages that depend on the 3.16 kernel. It’s also not as simple as just building the zfs package, as they first create rpms and then convert them to debs. This is an issue because rpmbuild doesn’t like the versioning scheme that is used for Debian’s backported kernels.

To start with, youʼll need to download the source for the kernel to compile:

sudo aptitude install linux-source-4.3

Then you’ll need to untar the source into a writable directory, i.e. cd into the desired directory and run:

tar xvf /usr/src/linux-source-4.3.tar.xz

This next step, building the kernel, is going to take quite a while. From the untarred Linux source directory:

cp /boot/config-3.16.0-4-amd64 .config
make deb-pkg LOCALVERSION=-custom KDEB_PKGVERSION=$(make kernelversion)-1

You can feel free to change either LOCALVERSION or the suffix to KDEB_PKGVERSION; just make sure that the values you specify don’t contain a dot.

It’s much easier to do this without zfs already installed, so I’m just going to assume that’s where you’re at. Install the newly compiled kernel and reboot:

sudo dpkg -i linux-headers-4.3.3-custom_4.3.3-1_amd64.deb linux-image-4.3.3-custom_4.3.3-1_amd64.deb
sudo reboot

Now you have a custom kernel version running. The next step is to install zfs. This mostly follows zfsonlinux’s instructions for generic debs, but their instructions are missing a couple of steps. You’ll need to download spl and zfs from zfsonlinux; I would suggest grabbing the latest release. You’ll also need a few build dependencies:

sudo aptitude install build-essential gawk alien fakeroot linux-headers-$(uname -r) \
  zlib1g-dev uuid-dev libblkid-dev libselinux-dev parted lsscsi wget
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/spl/spl-0.6.5.4.tar.gz
wget http://archive.zfsonlinux.org/downloads/zfsonlinux/zfs/zfs-0.6.5.4.tar.gz
tar zxvf spl-0.6.5.4.tar.gz
tar zxvf zfs-0.6.5.4.tar.gz
cd spl-0.6.5.4

Now we need to compile spl and install the development packages which are required for building zfs.

./configure
make deb-utils deb-kmod
cd module
make
cd ..
sudo dpkg -i kmod-spl-*.deb spl-*.deb
cd ..

Finally, we’re going to build and install zfs:

cd zfs-0.6.5.4
./configure
make deb-utils deb-kmod
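# the converted packages install these libraries into /lib64, which Debian's
# dynamic linker doesn't search by default, so link them where it will look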
sudo ln -s /lib64/libnvpair.so.1 /lib/libnvpair.so.1
sudo ln -s /lib64/libzfs.so.2 /lib/libzfs.so.2
sudo dpkg -i zfs-*.deb kmod-zfs-*.deb libnvpair1_*.deb libuutil1_*.deb libzfs2_*.deb libzpool2_*.deb

Finally reboot, and you should be all set. While that is a bunch of steps, it really isnʼt too bad.
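
To sanity-check the result, something along these lines should work; the pool name and device names are just examples:

# confirm the module loads against the new kernel
sudo modprobe zfs
# create a throwaway mirrored pool on two spare disks
sudo zpool create tank mirror /dev/sdb /dev/sdc
sudo zpool status tank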

2015: The Tools I Use

Continuing on what I started last year, here is the list of tools that Iʼve used this year.

Mac

Again this year, my Mac is my primary work device.

  1. neovim — I continue to do most of my work with text, whether that is Ansible playbooks or code. I could easily just use vim, but neovim has a couple of nice extras, mainly that it properly handles pasting without using paste mode.
  2. iTerm 2 — iTerm continues to be great to use. I don’t really like the built-in terminal on OS X, so I’m lucky that iTerm exists, especially since I do almost all of my work in the terminal.
  3. tmux — I generally keep iTerm running full screen since I do most of my work there. While this works pretty well, it’s a bit of a waste, as it’s a huge amount of space for just one thing at a time. I use an inverted T, where I have one large split on top and two smaller ones on the bottom. The big split on top is generally used for neovim, and then I can run related tasks in the bottom two (see the sketch after this list).
  4. git — git is basically the standard for version control. Git has its flaws, but I really like it.
  5. MailMate — I switched email clients since last year. MailMate definitely feels more like a traditional email client. It’s really well done.
  6. Alfred — Alfred is a keyboard launcher. It does many more things than just launching apps. I use it all of the time.
  7. Arq — Arq is a great secure backup solution. It supports many cloud storage providers, so you’re able to pick your favorite.
  8. Textual — Textual is a pretty good IRC client for OS X.
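
For the curious, the inverted-T layout from the tmux entry above is just a couple of splits; the 30% bottom height is a matter of taste:

# one large pane on top, two smaller panes side by side along the bottom
tmux new-session \; split-window -v -p 30 \; split-window -h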

iPhone

  1. Tweetbot — I like using Twitter, but I really don’t like Twitter’s design decisions. Tweetbot fits me much better. I’m not looking forward to the day when Twitter cuts off third-party access.
  2. Prompt — Prompt is good to have around in case you need to access a server over ssh. Prompt is a very well done ssh client, but ssh on a phone-sized device isn’t a fun experience.
  3. Spark — While the built-in mail client on iOS is perfectly functional, I find it quite cumbersome to use. Spark is a really great iOS email client.
  4. Unread — Unread is a pretty great RSS reader on iOS.

Multiple

  1. 1Password — Keeping yourself secure online is hard. Having to remember a unique password for each service is pretty much impossible, particularly if you try to make them secure. 1Password solves this problem. Itʼs so good that itʼs easier than using the same username and password for everything. Their recently announced team features are bringing this same great setup to teams. Available for Mac, iOS and a bunch of other platforms.
  2. Slack — We continue to use Slack at work. Slack definitely had momentum last year, but it seems like everyone is using it this year. I like Slack, but I’m not sure it’s good enough to warrant this much attention. I also think that it’s unfortunate that many open source projects are starting to use it as their primary communication method.
  3. Dash — Dash is a great documentation viewer for Apple’s platforms. I use it every day. Available for Mac and iOS.

Server

  1. WordPress — As I previously mentioned, I’m back to using WordPress to manage Ruin. While there are definitely some things about WordPress that I don’t like, it’s pretty great at handling writing.
  2. ZNC — ZNC is an IRC bouncer. It has quite a number of features, but I don’t use many of them. I mainly just use it so that I don’t miss anything when my machine is offline.
  3. tarsnap — Tarsnap is a great solution for secure backups. The site’s design looks pretty dated, but the service itself is excellent.

Back to Basics

I read Ben Brooks’s most recent thoughts on WordPress and it led me down an important line of thinking that has culminated in what you see now: my return to using WordPress. My initial reaction to reading Ben’s post was denial. Why does it matter if I had a complicated CMS setup for my writing? So what if I want to spend my time writing my own CMS just to run Ruin? It doesn’t matter, does it? That is when it hit me: it does matter.

For the longest time, I’ve wanted to write my own CMS. I don’t have a particularly good reason for wanting to do this, other than that I enjoy writing and developing software, and I have some ideas that I want to try out. All of this is fine, but that isn’t the reason that I have this site. I have this site because I intended to write on it. Looking at what I’ve managed to get out this year makes me sad. Compared to previous years, my output has dropped considerably. Some of it is simply from dropping the linked-list style posts; I don’t think those are particularly useful to people, so I’ve stopped doing them.

There are also large gaps where I apparently stopped writing at all. Each one of these is a time when I was going to finish my CMS, so I stopped writing until it was “done”. That point never actually arrived, though. On multiple occasions, I’ve spent weeks writing my new blogging platform only to realize that it would be a very, very long time before its completion. On most of these occasions, I did have something workable, but it was missing features that I would call essential. At those points, I’d revert to my previous CMS, Jekyll, and continue my writing. I was never quite satisfied, so I would quickly return to tinkering with making my own.

It took reading Ben Brooks’s post for me to step back far enough to evaluate the situation. This cycle is deadly to my writing. Furthermore, I’ve long had more projects that I wish to explore than I have time for. Building blogging software is nowhere near the top of that list. It also isn’t the reason that I have this site. I have the site as a place to publish my writing, not as a place to fiddle with different CMSes.

So, I’m doing exactly as he suggests: I’m using WordPress and utilizing the things that the community has created to fulfill all of my functionality desires. It took all of an hour to get all of the functionality that I wanted. Now it’s just a matter of making it look the way that I want. Of course, I have to write my theme in PHP, which I don’t like, but I can just use _s. It’s a small price to pay to be able to concentrate on writing and building the tools I want instead of a CMS. I just need to remember that.

Replicating Jepsen Results

If you aren’t aware, Kyle Kingsbury has a great series of posts testing whether databases live up to their claims. It’s an invaluable resource, as many of the databases he has tested don’t live up to their stated goals. That being said, some of the posts are getting quite old at this point, so it’s possible that the developers have since fixed the issues that caused the failures. Luckily, Kyle’s Jepsen project is open source and you’re free to try to replicate his results.

This does take some setup though. You’ll need five database servers. It’s easiest to use Debian Jessie for this, as that is what Kyle uses and therefore all of the tests that he’s written work against it. You do need to replace systemd with SysV init before the tests will be able to run. You also need a machine to run Jepsen on. You shouldn’t try to reuse one of the database servers for this, as the tests will cut off access to some servers at certain points. For the easiest testing process, you’ll want the database servers to be named n1-n5. They all need to be resolvable by the other database servers and by the server running the tests. The server running the tests also needs to be able to ssh to all of the database servers using the same username and password/ssh key, and that user needs sudo access. These hosts must also exist in the known hosts file in the non-hashed format before Jepsen is able to execute a test. I’m unsure what default username and password Jepsen uses, but you’re easily able to change the values it uses for each test. Finally, the server running the tests will need the Java 8 JDK and Leiningen.
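
For example, the name resolution and known-hosts requirements can be handled on the control machine with something like this; the addresses are placeholders for wherever your n1-n5 actually live:

# make n1-n5 resolvable
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.11 n1
10.0.0.12 n2
10.0.0.13 n3
10.0.0.14 n4
10.0.0.15 n5
EOF

# record their host keys in the non-hashed format Jepsen expects
ssh-keyscan -t rsa n1 n2 n3 n4 n5 >> ~/.ssh/known_hosts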

That is quite a bit, isn’t it? I thought so, and given the wonderful tooling we have for replicating these sorts of environments, I assumed that someone had already created a way to spin up a set of servers on AWS to run whichever tests you like. I wasn’t able to locate one, which likely just means my search skills were lacking. Since I couldn’t find one, I made one using Terraform. jepsen-lab is relatively simple, but it goes through the process of setting up all of the previously stated requirements. It sets up all of the servers, configures them as required, and once that process is complete, it outputs the IP address that you’re able to ssh into. It does leave a couple of steps for you to complete on your own: you need to clone the Jepsen repo, and you’ll need to modify the test configuration for the username and password. The former is simply because I don’t know which revision you may wish to use, and the latter is because the step depends on which tests you choose to run. For more information on how to use jepsen-lab, see the readme in the repository.

After getting everything set up, it’s just a matter of running lein test from the correct directory and verifying the results. You can also make any modifications you like to see if they change the results of the tests. In future installments, I’ll discuss the particular tests that I’ve tried to replicate, the modifications that I’ve made, and the results that I’ve gotten.
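
Concretely, that looks something like the following; etcd is just an example test directory, and the repository layout may have changed since I wrote this:

git clone https://github.com/aphyr/jepsen.git
cd jepsen/etcd
lein test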

fpm

For many developers, the way they deploy is by checking out a specific revision from version control. While some people consider that to be a bit of an anti-pattern, I think it is a fine way to deploy applications written in dynamic languages. In theory, you could do the same thing for compiled languages; it just doesn’t work well in practice, since it would require you to compile your application on every server during the deploy. While this is possible, it’s very inefficient and time-consuming. A much better way is to build your application once and then distribute the resulting artifacts. The way that I’ve chosen to do this is by building native packages, specifically debs.

Generating these debs isn’t very difficult. It did take me quite a bit of research to figure out what needed to be there (Debian’s packaging and Clemens Lee’s package-building HowTo guides were both hugely helpful). Once you figure that out, it’s just a matter of creating the correct directory structure and running it through dpkg-deb. Alright then, how do you make a similar rpm? Time to do some more research, huh?
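
For comparison, the by-hand approach looks roughly like this; the package name, paths, and control fields are all made up for the example:

# lay out the package contents plus a DEBIAN/control file
mkdir -p mytool_1.0.0/DEBIAN mytool_1.0.0/usr/local/bin
cp mytool mytool_1.0.0/usr/local/bin/
cat > mytool_1.0.0/DEBIAN/control <<'EOF'
Package: mytool
Version: 1.0.0
Architecture: amd64
Maintainer: Someone <someone@example.com>
Description: Example package built by hand
EOF

# produces mytool_1.0.0.deb
dpkg-deb --build mytool_1.0.0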

Why should any of this be required? Surely many other people have figured out what is needed, and one of them must have documented their knowledge somewhere. The answer to both of these is, of course, yes. There’s an awesome tool called fpm that creates packages of many different types from many different sources. Of course, it can package up files and directories into debs and rpms.

I’ve known about fpm for quite some time. In fact, I knew about it before I started building debs by hand. As I mentioned, it’s not terribly difficult to use dpkg-deb to produce a deb. I also don’t really like that fpm is written in Ruby. While I think Ruby is a fine language, getting it installed with everything needed to build native gem extensions is a pain, and one I didn’t want to go through for a simple CLI tool. It also requires a bit more setup than that to use fully: the rpm output requires the rpmbuild command to be installed, and I’m sure some of the other outputs require similar commands to be available. I’d love to see a similar tool compiled into a static binary, but I’ve long since given up on producing that tool myself.

As I alluded to earlier, what prompted me to start using fpm was generating rpms. I’ve since realized that I shouldn’t have dragged my feet on it for so long. Instead of figuring out everything that is required to generate an rpm, I just used fpm: fpm -s dir -t rpm -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/. Of course, I can simply swap out the rpm with deb to generate a deb instead of an rpm, as shown below. This ignores many of the fancier things that fpm can do: you can easily make native packages for gems, Python modules, and CPAN modules (to name a few), and it also supports some more “exotic” formats such as self-extracting scripts and OS X packages. I’ve converted many of my deb-building scripts to use fpm, and I’ll be using fpm for all of my packaging needs.
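
To make that swap concrete, here is the same invocation both ways; add -n to name the package if fpm asks for one:

# rpm, as above
fpm -s dir -t rpm -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/
# deb, identical apart from the target type
fpm -s dir -t deb -v $VERSION -d libgmp10 ~/.local/bin=/usr/local/bin/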

Disabling Analytics

I’ve been quite pleased with how this site has been going. It’s been growing slowly over time, 10% to 20% month over month for the past few months. I think that’s pretty good but, obviously, that means my traffic levels are basically internet radiation. Given that is the current state of the site, Ben Brooks’s article, Death to Analytics, really struck a chord with me. While I enjoy seeing my traffic grow, it doesn’t provide me any benefit. Clearly, the ever-growing traffic hasn’t been motivating me to write more. In fact, it’s probably a detriment.

Since I know which articles people come to my site to read, I’m inclined to write additional things along those lines. Unfortunately, over 60% of people come here for the various tutorials that I’ve written. While I like that I’ve written these and I’m glad that people are benefiting from them, I don’t really want to keep writing them. I write them when I come across something that I had a hard time doing and when I think I have some knowledge that would be helpful to pass along. They aren’t the reason that I write on this site. Feeling pressure to write more of them just keeps me from writing on this site at all, and that makes me feel bad.

It also doesn’t matter how many people are visiting my site. While I have enjoyed seeing the number of people that visit my site increase, I don’t find people simply visiting my site particularly pleasing. Many of the people that have happened upon my little hovel probably weren’t particularly pleased either. Knowing how many times this has occurred isn’t something that I should care about and, if I really consider it, I don’t care. What I really care about is making an impact on you. Of course, analytics can’t tell me that. Only you can. I really appreciate it when someone takes the time to start a discussion about one of my articles or lets me know that they enjoy my site. It really made my day when one of you decided to send me some money to support the site. I’d love to have more of these things.

So, I’ve removed the analytics from here. I’m going to do what I should have been doing anyways: writing about the things that interest me. I’d love to know your thoughts, so please let me know in whatever way you prefer. If you happen to love what I do here, consider supporting me in some way.

Otto

Two weeks ago, at their first-ever HashiConf, HashiCorp announced two new tools: Otto and Nomad. Both are great new tools, but I’m going to concentrate on the former, as I’m more interested in it. For the first time, HashiCorp is rethinking one of its current products, Vagrant.

I use Vagrant every day; it’s immensely useful. It’s a great way to set up isolated and consistent development environments. All of that comes with a cost though: setting up Vagrant is quite a bit of work. You have to figure out how to provision the environment. Vagrant has all of the standard choices built in, so you can pick your favorite, but that requires you to have some existing knowledge. You could provision your environment using shell scripts, but that quickly gets painful. As this is a fairly large pain point, a variety of tools have sprung up in an attempt to fix it, such as Puphpet and Rove. For a while, I was really excited by this sort of thing. I almost built a community with Nathan Leclaire for hosting Vagrant environments after he came up with the idea. Things didn’t work out, as it was a really busy time for both of us. After quite a bit of thinking, I’m glad that we didn’t. It just wasn’t the right way to move things forward.

The other big problem with Vagrant is moving your app into production. You put in a lot of work to build your development environment, but there’s a pretty good chance that you’ll need to put in a bunch more work to prepare your production environment. The quick setup that you do for your development environment will not be sufficient for production. In a lot of ways, Vagrant seems to work better if you’re working back from your production environment. Being able to replicate your exact production environment has lots of benefits, but if you don’t already have an existing set of scripts, roles, or modules, then using Vagrant is going to take a lot of setup to get going.

That’s where Otto comes in. Otto is HashiCorp rethinking how you set up development environments. It can automatically detect the type of application you’re developing and build an appropriate development environment for it. Of course, this leverages Vagrant under the covers; it just skips the laborious setup process. The other big thing that Otto provides is a way to move your application into production.
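
As I understand the workflow from the launch documentation (subject to change as Otto matures), it boils down to a handful of commands:

otto compile   # detect the app type and generate the environment configuration
otto dev       # bring up the development environment (Vagrant under the covers)
otto infra     # create the target infrastructure
otto build     # build the application for deployment
otto deploy    # deploy the build onto that infrastructure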

I think Otto is the answer to a question that I’ve been pondering for quite some time: how should a small group of developers create and deploy an application? There aren’t a whole lot of good options for small teams. Recently there has been an explosion of tools for simplifying application management, but they all seem to be focused on much larger teams and applications. Things like Mesos, Kubernetes, and Docker are all great, but they require quite a bit of knowledge to run. For a small team, they’re all too much of a knowledge investment to be useful. Deploying to plain servers also requires too much knowledge to keep running and secure. The only good option here is Heroku, but that isn’t without its downsides: Heroku is extremely expensive for what it provides, and it ties you to a proprietary platform.

Otto really fills this need. When it comes time to move your app into production, Otto will create industry-standard infrastructure for you. This is very important, as it allows many hard-earned lessons to be applied automatically. I’ve felt that things like Chef roles and Puppet modules presented a similar opportunity, but both have fallen well short of that goal. This lets developers get back to what they do best: improving their application.

As with most of HashiCorp’s products, Otto is launching with a small set of functionality. The two main limitations are that the app types Otto supports are rather limited, and there is only one type of deployment available. Of course, both of these things will improve over time, but they’ve kept me from being able to start using Otto. These days, I’m spending most of my time working in Clojure, and Otto doesn’t currently support JVM applications. In the current version, Otto only supports Docker, Go, PHP, Node, and Ruby, but those cover a large swath of developers. Otto will also only deploy to AWS, which I don’t use due to the relatively high cost. I really want to use Otto, but its features aren’t quite enough for me yet.

Otto is an important evolution of a crucial DevOps tool, Vagrant. It makes huge strides forward for many of the current use cases: it removes the biggest pain point of getting Vagrant up and running, and it fills an important need by providing a good way to move applications into production. I’m looking forward to using Otto in the future.