Iʼve been interested in HashiCorpʼs Terraform ever since it was announced. I loved the idea of being able to tie multiple providers together. You could use Route 53 for DNS and DigitalOcean for your servers, or you could use DNSimple with EC2. I hadnʼt been able to try it because I use Linode both for my personal servers and at work. Unfortunately, Linode isnʼt one of the included plugins in Terraform and no one had created a plugin for it. So I did what any developer with a bit of free time and a possibly unhealthy obsession with automation tools and HashiCorp products would do: I built one.
In its current state, itʼs fairly light on features. It only supports managing Linodes (what Linode calls their servers) and not any of the other services that Linode provides. Iʼm hoping to get Linodeʼs DNS and NodeBalancer services added in the future. terraform-provider-linode also doesnʼt support all of the options that Linode provides. In particular, I intentionally omitted all of the various alerting settings that Linode offers. While adding all of them would have been easy, the sheer number of options seemed too complex for the initial release. The currently implemented features are exactly the features that I needed in order to start using Terraform. If you would like a particular feature, it would be great if you were able to contribute it to the project. If not, please create an issue on GitHub and Iʼll see when I can get it implemented.
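To give a sense of what using the plugin looks like, hereʼs a sketch of a Terraform configuration. The resource and argument names here are illustrative rather than copied from the pluginʼs schema, so check the projectʼs README for the exact names before using this.

```hcl
# Hypothetical example; resource and argument names may differ
# from the plugin's actual schema.
provider "linode" {
  api_key = "${var.linode_api_key}"
}

resource "linode_linode" "web" {
  name          = "web-01"
  image         = "Ubuntu 14.04 LTS"
  datacenter    = "newark"
  plan          = "Linode 2048"
  ssh_key       = "${file("~/.ssh/id_rsa.pub")}"
  root_password = "${var.root_password}"
}
```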
Terraformʼs helper/schema made developing this plugin a breeze. I expected that it would take considerably longer and be more difficult to build terraform-provider-linode. As they mention on the provider plugin page, itʼs really just a matter of setting up CRUD functions for each resource. That is all there is to it. Terraform also includes some extremely handy functions for building acceptance tests. Terraform feels incredibly polished, which I find particularly remarkable given that it was released a little under a year ago. I also found Toahʼs Linode API client library to be extremely helpful for developing this plugin. Linode, unfortunately, hasnʼt created an official Go API client, but Toah really stepped up and made one.
Please check out terraform-provider-linode if it sounds like something that would be useful to you. If youʼd like to contribute, I would really appreciate your help.
For quite a while, if you wanted any sort of consistency in your servers, you needed a configuration management tool. This has really started to change over the last year and a half. Weʼve seen a proliferation of tools aimed at making running a datacenter easier than managing a single machine: things like Apache Mesos, Kubernetes, and CoreOS. All of these tools are based on a similar idea: you tell the system that you want to run x instances of a service with certain constraints, and it figures out how to run them. Of course, they differ quite a bit in the details, but at a broad level, this is what they do. While I find that idea hugely compelling, Iʼve decided to forgo one of these systems for now. This is mostly due to running a deployment of one of them and having it fail miserably at unpredictable times. We are also not at a scale at which handling distinct servers is difficult.
With that roundabout introduction out of the way, my configuration management tool of choice is Ansible. I find describing Ansible a bit difficult, as it is a tool with a few different use cases. In some ways, it operates like a distributed scripting language. You can define a script that will change the current master database, point all of your applications at the new database, restart the old master, and then point all of your applications back at the original database instance. It is also equally useful in a more traditional view of configuration management, where Ansible installs and configures individual servers in their specific roles.
Ansible has two rather broad modes of operation. The first is ansible-pull. In this mode, each server pulls down your configuration scripts and applies them to itself. This is somewhat similar to traditional configuration management tools like Chef or Puppet. This mode doesnʼt appear to be used very often, and that is probably a good thing; both Chef and Puppet are far superior in this mode of operation. The typical mode of operation for Ansible is push. There is a control server, which could be your machine or a server somewhere, that initiates the Ansible run. The control server then connects over ssh to all of the servers that are part of the current playbook. Each step in the playbook is applied sequentially, with all of the servers specified for that step receiving the commands in parallel. There are knobs that you can turn to control how many servers receive the commands at once. This mode of operation leads to a number of really cool bits of functionality. For instance, when youʼre provisioning a new web server, youʼre able to immediately add it to the load balancer, which is something that Chef and Puppet are unable to do.[1]
The terminology that is used to describe Ansibleʼs configuration is a bit strange. There are inventories, playbooks, plays, roles, tasks, host vars, group vars and modules. I think that inventories, host vars and group vars are self explanatory, so that just leaves the others. I feel like the name playbooks was inspired a little too much by sports, but it actually happens to work quite well for describing their function. A playbook is the script that you run. It could be as broad as a single script that creates your entire infrastructure, it might be something like the example of provisioning a new web server, or it could even be a multistep deployment script. Playbooks can include other playbooks or plays, and they contain the information on which servers the plays should be applied to. Plays are blocks in the playbook. Each play must have a user and a list of hosts that the play should run on. Plays can include custom variables, roles, and applications of modules. Modules are the basic commands of Ansible; these are things like installing a package on the server or copying a configuration file to it. Tasks are applications of these modules.
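To make the terminology concrete, here is a small playbook sketch. The host group, role, and variable names are made up for illustration; `serial` is one of the knobs mentioned above for limiting how many servers are worked on in parallel.

```yaml
# deploy-web.yml - a playbook containing a single play
- hosts: webservers        # which inventory group the play targets
  user: deploy             # the user Ansible connects as
  serial: 2                # only touch 2 servers at a time
  vars:
    app_port: 8080         # a custom variable for this play
  roles:
    - nginx                # a role: a reusable bundle of tasks
  tasks:
    - name: Install htop   # a task: one application of the apt module
      apt: name=htop state=present
    - name: Copy the app config
      template: src=app.conf.j2 dest=/etc/app.conf
```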
Roles are a related set of module applications. You might have a role for MySQL or Postgres, but given that roles are the only way to share functionality between multiple playbooks (you can include other playbooks as well, but that is limiting in some crucial ways), you end up using them for things that the name “roles” doesnʼt really fit. I have a database migrations role. We have a few different infrastructure configurations for our app, but the migrations are always applied in the same way, so those steps were pulled out into a self contained role. While semantically this doesnʼt make much sense, itʼs the only way to pull this common code out, and being able to reuse it between different playbooks is extremely valuable. While the power of Ansible comes from being able to group roles and modules in playbooks to automate important processes, youʼre also able to execute the modules directly. You could have patched all of your servers for Shellshock by simply running `ansible -i inventory all -m apt -a 'update_cache=yes name=bash state=latest'`.
Unlike other configuration management tools, Ansible is extremely easy to get started with. Itʼs especially easy because there is no server to set up, unlike Chef[2] and Puppet. You can simply install Ansible on your machine and get started automating servers. Itʼs also easy to get started writing playbooks and roles because they are written in YAML. YAML is very easy to read and write, and there isnʼt much syntax to pick up. That being said, playbooks have some expected keys, and each module takes a set of arguments. Other than a few core modules, like apt, copy, template and service, I still have to look up the arguments every time that I use them. I really recommend Dash for referring to the documentation; itʼs great and will greatly speed up your development time with Ansible. Roles have a rather complex (compared to the rest of Ansible) directory structure, but once you use it a couple of times, it will really click. That is enough to get up and running with Ansible, but there are a few more advanced options that you wonʼt have seen, and to get everything you should probably read the entirety of the Ansible docs except for the modules documentation. Itʼs really worth the effort; the documentation is quite succinct and will greatly assist you in writing your Ansible scripts.
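For reference, the role directory structure I mentioned looks roughly like this (not every role needs every directory):

```
roles/
  nginx/
    tasks/main.yml      # the list of tasks the role runs
    handlers/main.yml   # handlers, e.g. restart nginx on config change
    templates/          # Jinja2 templates used by the template module
    files/              # static files used by the copy module
    vars/main.yml       # role variables
    defaults/main.yml   # default (overridable) variables
    meta/main.yml       # role dependencies
```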
Once you have that basic knowledge, you can start writing scripts for everything. If you do anything over ssh, you can write a playbook for it. In fact, you probably should; it will be much faster and much less error prone than doing it by hand (if you do it more than once). The separation between what should be a role and what should be in the playbook becomes fairly clear early on: if the task that you are doing requires applying a template, using handlers, or copying a file from the Ansible control server, then it should be a role. This is due to how roles bundle these things together; it makes much more sense after youʼve been using Ansible for a while. As I pointed out previously, you should also put shared script steps into roles whether or not the name “role” is semantically correct. In my case, I have the steps necessary to run database migrations in an Ansible role.
In my experience, the best way to build up your Ansible scripts is by simply doing everything with them. If you need to deploy a database server, write a playbook and role for doing just that. When you inevitably find something that could be done better, add it to the role or playbook and then reapply it to all of those nodes. This way, you donʼt need to recall every step that you took when setting up the next one. This is the way that most of our playbooks and roles have been developed. For example, at the time that I was deploying Logstash, I had no idea that the disk scheduler should be disabled (set to noop) on SSD based machines. Iʼve since added that step to the relevant roles. This is much the same as any other configuration management tool: youʼre able to distill everything youʼve learned into your scripts.
Iʼve found that Ansible is extremely good at automating complex multiple server tasks. This includes things like deployments of multiple application services. Our deployments, at Signal Vine, are run using Ansible. Our deployments are quite complex; they involve reconfiguring 4 types of servers and applying migrations to 3 different data stores. Ansible has been handling all of this beautifully. Iʼve also written playbooks for all of the complicated operational processes. Thereʼs one for changing database settings and restarting the database with its dependent services. With another one, Iʼm able to change the Postgres master and then set up replication on the demoted master. After writing a couple of these, it becomes clear how valuable they are.
Now, what would an automation tool be without having some prebuilt patterns available? Ansible Galaxy fulfills this need. Unfortunately, I donʼt have a lot of experience with it, as there wasnʼt much there a year ago[3] and Iʼm not eager to rip out working code to replace it with untested (on our servers) roles. I really like the idea of Ansible Galaxy, and I really appreciate that all of the roles have a rating associated with them. It really helps you narrow down which roles you should audit. I feel that the usefulness of these roles is slightly hampered by not being able to run a role multiple times in a play by passing it different variables. This is a feature that is due to arrive in Ansible 2, but Ansible 2 hasnʼt shipped yet. In some cases this can be mitigated by the roleʼs author: if they make a variable that someone might want to set multiple times into an array, they can properly handle these situations. In other cases, this isnʼt possible.
This gets into what I think is Ansibleʼs biggest sore spot: reuse and composability. While Ansible Galaxy is nice, it has nowhere near the utility of Chefʼs Supermarket or Puppetʼs Forge. I donʼt think that Ansible as it stands today will ever have anything like that. Ansible is intentionally not a programming language. While I see some advantages to that for onboarding new users, I really feel like it hampers your ability to abstract things. Certainly users can go too far with abstractions, but limiting them so much is also painful. One of the things that I wish I could do is loop over a set of tasks, but thatʼs not possible. In a similar vein, Ansible has role dependencies. This is very helpful, but it doesnʼt help you in all cases. If you have multiple roles in a single play that depend on a single role with different variables set, Ansible will not run all of them.
In the past, Iʼve used Ansible to build Docker images. While this is entirely possible[4], it is not a pleasant experience. At first this seems like a good idea: you can use the same dependable scripts to deploy a particular service into any environment, whether that is a single monolithic server or a container. The reality is that these are very different environments, and you probably donʼt want to install your app in the same way on both. You will end up filling your roles with various conditionals to handle being able to run in a container or being directly installed on a server. This ends up being extremely unwieldy. It also doesnʼt work well with Dockerʼs expected mode of operation. Each step you specify in the Dockerfile builds up a cached image layer. Then, when you change one of the steps, Docker will use the cached layers up to the point where you modified it and then run the remaining steps directly. When you are using Ansible to run the provisioning, all of it happens in a single step. So to change a single thing, youʼll need to completely rebuild the container, and Ansible isnʼt well optimized for working in this way, so your container builds will take a significant amount of time. I would guess it will be somewhere between 1 and 10 minutes to build a container. This isnʼt horrible, but it is enough to be annoying. Instead, you should use Docker and Ansible as they were intended: use Dockerʼs toolchain to make container build artifacts and then use Ansible to deploy those to the required servers.
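To illustrate the caching point, here is a hypothetical Dockerfile written the way Docker expects. If you edit the `COPY` line, Docker reuses the cached layers above it and only reruns the steps below it; an Ansible-provisioned image collapses all of this into one giant, uncacheable step.

```dockerfile
FROM ubuntu:14.04

# Each instruction below becomes its own cached layer.
RUN apt-get update && apt-get install -y nodejs npm

# Changing the application code only invalidates the cache
# from this line down; the apt layer above is reused.
COPY . /app
WORKDIR /app
RUN npm install

CMD ["node", "server.js"]
```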
Iʼve been disappointed with the testing situation in Ansible. As far as I can tell, that page is the only coverage that testing has gotten. Itʼs really not enough for me; I need a little more hand holding to really understand the picture that they are trying to paint. I havenʼt yet tried testing the way that theyʼve suggested; I just canʼt see that working well. Iʼve defaulted to mostly manual testing, which is a drag. I do have a staging environment that I can apply changes to before running them against our production environment. In a similar vein, itʼs entirely possible to write a playbook that fails when running it in check mode (--check) but works without a hitch when actually applied. I do understand how this can happen, but itʼs very annoying that Ansible doesnʼt notice it and issue a warning. I do like that Ansible includes --syntax-check for checking whether the specified playbookʼs syntax is valid. Unfortunately, it doesnʼt check whether variables are defined before they are used.
Another area that Iʼm not satisfied with is how you expand the set of people who can run your Ansible scripts. I think it is fairly reasonable to expect anyone handling operations in your company to be able to install and run Ansible from the command line. I donʼt think that extends to everyone in your company, and it isnʼt clear how this can be done easily. It also becomes difficult to see when things are happening or what has happened in the past. Ansible has a commercial product for this, Tower. Tower is an option, but itʼs both pricey and possibly not exactly what youʼre looking for. You could also set up Ansible tasks in your CI server, but then your CI server needs ssh access to all of your servers with sufficient rights to do those tasks. That would mean an attacker could change the software running in your production environment if they were able to compromise your CI system. That isnʼt something that I would feel comfortable with.
All that being said, I think Ansible is a great product. If you arenʼt currently using a configuration management tool, I highly recommend that you check out Ansible. If you already have one and youʼre satisfied with it, you should probably keep using it, but you may still find Ansible useful for multi-server scripting of the type that you might do during a deploy. Ansible is a good, dependable and efficient tool, and Iʼm happy to use it.
[1] Yes, itʼs possible to make that happen with both Chef and Puppet, but it involves extra steps. First you provision the new server, which registers it with the configuration management server. Then, on their next configuration run, the load balancers add the new server into the rotation. With Ansible, it can be available immediately.
[2] Chef does have chef-solo, but itʼs not what you should use if you want to use Chef. Chef with a server is much better.
[3] Thatʼs when I started writing our Ansible scripts. Iʼve taken another look at it, and it seems fairly decent now.
[4] And easy, if you know what you are doing. Basically, you need to set up your Ansible playbook as if it were being used for ansible-pull, then install Ansible in the container, add your playbooks and roles, and finally invoke Ansible within the container.
About a year ago, I started a new job with Signal Vine as the person in charge of operations. While I strive to give the engineering team the tools that they need to properly manage our production systems, itʼs my responsibility to keep all of the components of our application up and running. In the last year, Iʼve used quite a few different tools to make running our application a pleasant experience. Iʼll be covering a wide variety of things, from databases to DevOps tools. Some of these things have worked well for me, others have not.
Ansible - My configuration management tool of choice.
At work, I recently set up Riemann for monitoring. Riemann is a monitoring tool that works on streams of events, and it includes many powerful functions for working with those streams. As an example, it has a ddt function that will differentiate an eventʼs metric over time, which allows you to get a rate of change from a counter. And because your Riemann config file is a Clojure program, youʼre able to extend Riemann by simply modifying your config file.
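As a sketch of how ddt reads in a config (the service name here is made up), it wraps a child stream and passes it the differentiated metric:

```clojure
(streams
  ; ddt differentiates the metric over time, turning an
  ; ever-increasing counter into a rate of change, which it
  ; passes to its child streams (here, just printing it).
  (where (service "app.requests.count")
    (ddt prn)))
```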
I had occasion to do such a thing this week. We send all of our error level log messages from Logstash to Riemann in order to alert us when we need to check the logs. Doing this is a fairly simple process: we use Slack, and Riemann has a built in function to send alerts to it. While we could send the whole log message to Slack, this isnʼt ideal for us. Our log messages can be quite long, many thousands of characters, and sending that to Slack makes for a fairly bad experience. What we decided to do instead was write the full event to a file and link to that file in the Slack message. Unfortunately, there isnʼt really a built in way to do this in Riemann. You could write the messages to Riemannʼs log file, but that isnʼt what we are looking for here, as it results in a single large log file rather than individual files.
What I decided to do was create a function that writes the message out to a file whose name is the messageʼs SHA-256 hash. Generating the hash was the most complicated part of this. The complication arose from my lack of knowledge of the various libraries that can generate a hash. The way that I figured this out was by Googling variations on Clojure/Java SHA-256 hashing and then trying them at the Clojure REPL on a checkout of the Riemann source. Unfortunately, neither of the Clojure hashing libraries is included in Riemann, but I was able to find a Java package that Riemann includes that can generate hashes, Apache Commons. I likely would have known that if I had more experience with the Java ecosystem, but I donʼt. So here is what I came up with.
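The original code block isnʼt reproduced here, so the following is a reconstruction sketch of what such a function could look like, assuming Apache Commonsʼ DigestUtils for the hash and Riemannʼs call-rescue to forward events; the :log-url key and the exact shape are my assumptions, not the authorʼs code.

```clojure
(ns custom.write-log
  (:require [riemann.streams :refer [call-rescue]])
  (:import [org.apache.commons.codec.digest DigestUtils]))

(defn write
  "Returns a stream that writes each event to a file named by the
   SHA-256 hash of its contents, then passes the event on to any
   child streams with a :log-url pointing at that file."
  [dir base-url & children]
  (fn [event]
    (let [text (pr-str event)
          hash (DigestUtils/sha256Hex text)]
      (spit (str dir "/" hash ".txt") text)
      (call-rescue (assoc event :log-url (str base-url "/" hash ".txt"))
                   children))))
```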
Then all you need to do is define the function that you will use on your streams. Something like `(def write-log (write "/var/www/alerts" "https://alerts.example.com"))` would work, where /var/www/alerts is the content directory for alerts.example.com. To include the link in your Slack alert, youʼll need to provide a custom formatter that includes a link. Here is what we use:
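The formatter we used isnʼt shown above; here is a sketch of what such a formatter might look like, assuming the :log-url key attached by the write function and the map shape that Riemannʼs slack helper expects (the account, token, and channel are placeholders):

```clojure
(defn alert-format
  "Builds the Slack message map, linking to the full event
   instead of embedding the whole log message."
  [event]
  {:text (str (:host event) " " (:service event) " is " (:state event)
              ". Full event: " (:log-url event))})

(def slacker
  (slack {:account "myteam" :token "xxxx"}
         {:username "riemann"
          :channel "#alerts"
          :formatter alert-format}))
```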
I know thatʼs a lot to piece together, so here is a minimal Riemann config that should work, to show you how to use everything.
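The original minimal config isnʼt reproduced here; this sketch shows one way the pieces could be wired together, assuming the write function and the custom Slack formatter described above are in scope (account, token, paths, and URLs are placeholders):

```clojure
(tcp-server :host "0.0.0.0")

(streams
  ; Only consider services whose names end with " logs", group the
  ; stream per service, and send at most 2 alerts per hour for each.
  (where (service #" logs$")
    (by :service
      (throttle 2 3600
        ; write the full event to a file, then pass the event
        ; (now carrying its :log-url) on to the Slack stream
        (write "/var/www/alerts" "https://alerts.example.com"
          (slack {:account "myteam" :token "xxxx"}
                 {:username "riemann"
                  :channel "#alerts"
                  :formatter alert-format}))))))
```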
What that config does is send alerts to the alerts channel of your Slack whenever events whose service ends with ` logs` are placed into Riemann. The alerts are limited to no more than 2 messages per hour per service.
I recently got to migrate my server from running Ubuntu 12.04 to Ubuntu 14.04. As I was unable to upgrade the kernel while on 12.04 (due to some hardware issues), I was stuck running into the bug mentioned on the installation page. It wasn’t too big of an issue, but sometimes containers would just hang for no apparent reason. In this state, I couldn’t stop or delete them; the only way to get rid of them was to either restart Docker or restart the server.
Anyways, I’m happy that is no longer an issue for me, so I’ve been looking into moving as much as I can into Docker. I basically have two ways that I’m using it. The first is the one that I mention in Docker Appliances: I’m running my Discourse server in Docker using the excellent discourse_docker project. It works extremely well, especially now that it doesn’t hang randomly.
The other way I’m using it is as the execution environment for a couple of other things. What I mean by this is that both the code for the site and the database files are mounted from the host. Here is an example: this site runs in a Docker container. On my server, its code and database reside at /var/node/ruin. Since I last wrote about Ghost, I have switched over to using SQLite as my database. SQLite’s performance is adequate for my needs, and at the time, I hadn’t figured out how to get MySQL up and running in a Docker container. So the Docker container simply mounts the code/content directory and runs only the node process. I have it set to expose the default port that Ghost runs on, 2368, to the Docker host. On the Docker host, I run nginx and reverse proxy the traffic to port 2368. When I want to update the code, I have an Ansible playbook that pulls the fresh code down, stops the current container and launches a new one.
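That update playbook looks something like the following sketch. The repository URL, image name, and the use of shell commands rather than Ansible’s docker module are illustrative assumptions, not the exact playbook I run:

```yaml
- hosts: web
  user: root
  tasks:
    - name: Pull down the fresh code
      git: repo=https://github.com/example/ruin.git
           dest=/var/node/ruin/code
           update=yes

    - name: Stop and remove the current container
      shell: docker rm -f ghost
      ignore_errors: yes   # fine if no container is running yet

    - name: Launch a new container
      shell: >
        docker run -d --name ghost
        -v /var/node/ruin:/var/node/ruin
        -p 127.0.0.1:2368:2368
        example/ghost
```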
I also have a similar setup for my Unmark instance, unmark.tobolaski.com (I wouldn’t recommend going there, as it uses a custom certificate authority). It mounts the code from /var/unmark/app and the MySQL data from /var/unmark/mysql into a LAMP container. In this case, I have nginx set up to reverse proxy to port 8000, which is mapped to port 80 on the Docker container.
If I were going to put either of these containers on multiple hosts, I’d move the database into something else, possibly a new container or maybe its own host. Then I’d add the application code into the container and distribute it as a single unit, which is more like the intended usage than what I’m currently doing. Basically, I’m using Docker as lightweight virtualization, which it does extremely well.
I’ve open sourced both of the Dockerfiles that I discussed here.
A number of months ago, I picked up a kit for an unusual keyboard from Massdrop, the ErgoDox. Unusual is a fitting word for it, as it is unusual in nearly every way. The only way that I can describe the layout is unusual, the shape is unusual, the materials are unusual, and you probably get the point. Itʼs also completely open source. You can download the PCB and case designs and build the keyboard yourself. Given all of that, you also need to assemble it yourself. In spite (or perhaps because) of all of that, the ErgoDox is the best keyboard that Iʼve ever used.
As I mentioned before, the layout is unusual. Itʼs a split layout where you can independently move the halves. This is, by far, the most important improvement over a standard keyboard layout. It allows you to position the two halves shoulder width apart, which vastly decreases the stress on your wrists while you type.
The keys are also arranged in a columnar layout, which removes the staggering that you would typically see between each row of keys. What this means is that your middle and ring fingers only move up and down. Itʼs a little hard to imagine the benefits of this, but after a few days of use, you wonʼt want to go back.
Unlike most ergonomic keyboards, the ErgoDox uses mechanical switches. Many people enjoy keyboards with mechanical switches, as they have a much better feel while typing. Iʼm one of those people. I donʼt mind typing on scissor switch keyboards, but typing on keyboards with mechanical switches is a real treat. Thatʼs not something that I would normally say of typing. You can use whatever switches you want for it. If you already have a favorite switch variety, you can use those on your ErgoDox. If not, I really like Cherry MX Clears. They have a tactile bump just after the actuation point, which makes it very easy to type quickly without bottoming out the keys.
With the ErgoDox, everything is customizable. In fact, you are forced to customize it before you can use it. Luckily, Massdrop has a web based configuration tool thatʼs really easy to use. Itʼs what I used to create my layout. The web configuration tool is great; it exposes almost all of the options that you might want. If that doesnʼt quite do all of the customization that you want, youʼre able to write C to fully customize your keyboard. The base firmware is available on GitHub. The one thing that Iʼve found is that you arenʼt able to change what character is sent when you use shift with a key. This is important for doing some more exotic layouts. There is an alternate firmware called tmk that supports some more advanced features, but I havenʼt tried it yet, and I donʼt know whether it supports custom shift modifiers.
There are also a variety of external customizations that you can do. Iʼve equipped mine with an aluminum top plate. Iʼve also chosen to have clear DCS keycaps. I strongly recommend that you get DCS keycaps; it is much nicer to reach the bottom key rows, as they are angled to meet your fingers. I also chose the standard case. If I had to make the choice again, Iʼm not sure what I would choose. While Iʼm working at my NextDesk, I really like the standard case. However, I now work in an office, and a full hand case would work much better on a more conventional desk.
Although you can put any keycaps on your ErgoDox, there are some practical limitations. You have to be particularly careful when picking out DCS keycaps. Basically, you need to be able to buy a keycap set that was made for the ErgoDox, and youʼll also be locked into a QWERTY-like layout. If you go with DSA keycaps, you have quite a few more options, but youʼll still have considerable trouble finding keycap sets with all of the extra modifier keys. Youʼll likely need to buy at least the modifier key set from a set designed for the ErgoDox, but then youʼll be able to use any keycap set you want for your base keys.
My views on the assembly process have changed since the time that I built my ErgoDox. If you had asked me about it while I was in the process of assembling it, I would have told you that it was the most tedious process that Iʼd ever been through. I would have also stated that it really isnʼt worthwhile. While I still feel that the first part is true, I definitely think that the end result is worth the pain. I feel a special connection with my keyboard since I needed to assemble it. I really needed to work to reap the benefits of the ErgoDox, and that has made me really appreciate the end result. Iʼm also rather proud that I constructed my primary input device with my own hands.
On to some more practical advice for assembling the ErgoDox. Youʼll need a soldering iron, solder and tweezers. The last item isnʼt optional: the surface mounted diodes are incredibly tiny, and you are not going to want to put your fingers anywhere near the tip of the soldering iron. I used this soldering iron from RadioShack with this solder. Neither was ideal. Youʼll likely want a slightly smaller gauge of solder. As for the soldering iron, youʼll want something a bit nicer with an adjustable temperature; the one I used frequently got hotter than I was comfortable holding.
The Massdrop ErgoDox kit has a couple of choices that make assembly more difficult. Of course, itʼs unclear whether Massdrop will be doing any more ErgoDox kits due to their introduction of the Infinity ErgoDox. Due to their case design, you canʼt use standard diodes; you have to use surface mount diodes. They are, of course, included in the ErgoDox kit, but they are tiny, and youʼll need tweezers to pick them up and attach them. You also need to be careful while attaching them; I managed to break one of them while I was attaching it. My kit was also short a single diode, so I ended up needing to purchase more. Luckily, the diodes are fairly easy to find; they are these ones from Digikey. I really wish that Massdrop had included a few extra diodes in the kit; at the volume that they ordered them at, they are 3¢ apiece. I found the easiest way to attach the diodes was to put down a dot of solder on one pad of each diode position for an entire row before attaching the diodes.
The ErgoDox is the best keyboard that Iʼve ever used. Itʼs by far the most comfortable keyboard Iʼve ever typed on, and I actually enjoy using it every day. However, you need to be a tinkerer in order to use this keyboard. The assembly is quite tedious, and programming the keyboard is a little bit involved. Iʼm sure that anyone could make it through the web based configuration, but it is another step to complete before you get to experience the ErgoDox. If youʼre a tinkerer too, you should check out the ErgoDox.
As some of you have noticed, I use an ErgoDox keyboard. Iʼm currently in the process of writing a review of it, which I hope to have completed soon. In the process of writing the review, I discovered that Massdrop has started a drop for a new revision of the design. The new design was not created by Dominic Beauchamp, as the original was, but by Jacob Alexander and the team at Input Club. Iʼm deeply divided on whether or not I should purchase one.
This revision has fixed my two biggest issues with the ErgoDox: the weak connector between the two halves and the awful job of soldering all of the diodes to the board. One of the TRRS connectors on my current board is flaky due to me putting it in my bag without disconnecting the two halves. When I pulled the ErgoDox out, I grabbed it by the connectors, and that has caused the left half of the keyboard to have issues. Basically, it requires me to unplug and then plug the keyboard back in several times per day. I have the replacement parts sitting on my desk, but desoldering is quite a pain, so I havenʼt done it yet. This revision of the ErgoDox removes the weak point by utilizing a standard USB connection between the two halves. There are a couple of posts on Deskthority with people saying that theyʼve replaced their TRRS connectors with USB connectors and it works much better, but I havenʼt yet taken that plunge. The other issue I had with the ErgoDox was soldering all of the diodes. It was easily the longest part of the assembly; the diodes were so tiny that they were extremely hard to attach. This was probably due to my relative lack of experience with soldering (it was the first time that I had assembled something in 4 years), but I found it to be extremely tedious. Iʼm glad to see that that step is gone, since the diodes are now integrated into the board.
Theyʼve also added a couple of niceties to the keyboard. First off, each key can now have an LED, and the LEDs can be independently controlled. This means that you will be able to set up your ErgoDox to be backlit if you desire. They have also added an LCD. Iʼm not really sure what the point of having an LCD on your keyboard is, but no doubt some people will come up with awesome uses for it. Iʼm a little bit concerned that it wonʼt be very visible due to the glare from the acrylic case.
I also have quite a list of concerns over the new design. Much of it boils down to this being a 1.0 product. They have come up with a custom protocol to communicate between the two halves of the keyboard. I really donʼt think that this is a good plan; they could have gone with what the original ErgoDox used and just switched out the connectors for USB. We have no idea how reliable this connection will be, and if it turns out to be unreliable, then this new iteration is pretty useless. Iʼm also concerned that neither the PCB design nor the firmware has been open sourced yet. We have their assurances that they will be open sourced when the keyboard starts shipping, but this would hardly be the first time that a company has promised to open source something and then simply never done it.
I also liked the ErgoDox because it built up a decent community of enthusiasts. This new revision leaves all of that behind. Itʼs possible that many of the fans will migrate to the new design, but thatʼs hardly a sure thing. The new design is not compatible in any way with the previous ErgoDox; itʼs more of a spiritual successor than an actual one. Adding all of the new features no doubt necessitated these changes, but I question whether those changes are worthwhile. In making the assembly easier, I worry that they have lost part of the charm of the original. I think itʼs great that this will let people with little to no soldering experience use an ErgoDox, but the repairability of this new keyboard is a significant regression from the original. If a component on my ErgoDox fails, I can simply desolder the failed component from the board and replace it with a new one. Sure, that is a pain, and you would still have to get a new PCB if the PCB is the part that fails, but I feel like that repairability was an important part of the original ErgoDox. With the new one, if a switch, LED, or LCD fails, you can swap it out; if anything else fails, it will necessitate a whole new PCB. That seems like a bad trade-off to me.
As I said before, Iʼm deeply divided on whether or not I should pick up the new revision. Iʼve wanted to get a second ErgoDox for a while so that I no longer need to transport one between home and the office, but this wouldnʼt be picking up a second one. It would be picking up a whole new keyboard with a similar layout to my ErgoDox. Still, that might be worthwhile. Perhaps this keyboard will gain a larger following than I fear it will. If thatʼs the case, then I really want to get in on the ground floor and figure out what is possible.
Recently, I announced my tip jar. The tip jar is great if you happen to want to send me a couple of dollars, but if you want to provide ongoing support, it is a bit difficult to use. So Iʼm launching a Patreon.
One of my goals for this year is to make this site break even. The best way to do this, I feel, is to have people support me every single month. That will help keep me motivated and help me pay the monthly expenses associated with running this site. By far the most accepted way to go about doing this is to set up a Patreon, so thatʼs what Iʼve done.
Iʼve also tried to align my readersʼ interests with my own, so Iʼve chosen to get paid per article. This seems to align with what you want: you pay me, and I produce more content for you to read. Of course, this could get out of hand; I could start to optimize for income. Pushed to the extreme, I would be publishing many short articles every month. I really donʼt think that anyone would be happy with that situation, so Iʼve decided on a set of guidelines that should produce the optimum situation for all of us.
I will publish as many articles as I feel like every month. Some months I feel more drawn to writing than others, and this will allow me to produce what I perceive to be quality articles without any sense of pressure that Iʼm underperforming. Of course, this doesnʼt prevent the problem that I previously discussed, so I need to place some limits on the number of articles that I can publish every month. I think that six should be the maximum: even if I publish more than that, Iʼll only receive support for the first six articles each month. In addition, they must be full articles and not little linked-list posts. Finally, posts like this one, i.e. posts about my site, will not be supported by readers. I think that this simple set of guidelines will bring us to the optimum state. If you have a better idea, please leave me a note on what I can do better.
Thank you for reading this; I really hope that you choose to support me. I really like writing for this site, and Iʼd like it to be a successful venture. Whatever you can manage would be great, even just $1 per article. Unfortunately, I donʼt have anything special to give you yet; Iʼm trying to come up with something for those of you who choose to support me. Thanks for being a part of my site.
Iʼve recently taken over maintaining jekyll-picture-tags. jekyll-picture-tags is a plugin for Jekyll that adds a Liquid tag for generating <picture> elements. Picture elements are useful for responsive images; Iʼve been using them here on Ruin for quite some time.
For my first release, Iʼve made a couple of small improvements. The first is a pull request that I created a few months ago; it speeds up site builds when all of the images have already been generated. The other improvement is that you can now install it as a gem instead of copying it into your _plugins folder. You still need to add Picturefill to your site for the polyfill.
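Gem-based installation follows the usual Jekyll plugin flow: add the gem to your Gemfile, enable it in your siteʼs configuration, and then use the tag in a post or layout. As a rough sketch (the filename, alt text, and argument syntax below are illustrative rather than documented defaults; check the projectʼs README for the exact options):

```liquid
{% comment %}
  Hypothetical example of the picture tag in a post. The image
  name and attributes here are made up for illustration; the
  README documents the real argument syntax and preset setup.
{% endcomment %}
{% picture header-photo.jpg alt="A responsive header image" %}
```

At build time, a tag like this would be replaced with a generated <picture> element containing resized sources, which Picturefill then makes work in browsers without native support.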
I have many plans for jekyll-picture-tags. Iʼd like to add some tests, but Iʼm not sure exactly how that will work. Iʼm also hoping to simplify installation further. Take a look and let me know what you think. Iʼm also looking for contributors.
I sure hope that you enjoy reading my site; I know that I really enjoy writing for it. Unfortunately, this site costs me a fair amount of money, and that is something that Iʼm trying to change this year. I really donʼt want to go down the advertising route, as I think that leads to all sorts of compromises. Iʼd love it if Ruin could be completely reader supported. To that end, I have a number of things planned for this year, the first of which is a tip jar. If you like my writing and want to send me a couple of dollars, now you can.
Iʼve added a link to my $cashtag on my brand new support page. I generally like Square, and Iʼm happy to use one of their products. If you donʼt like Square as much as I do, you can also use PayPal.