Ruin: The assorted ramblings of Brendan Tobolaski

Using a Mac Mini as a server

As some of you may know, Iʼve used a Mac Mini as a server. There are a couple of providers that will host a Mac Mini in a datacenter for you: Mac Mini Colo and Mac Mini Vault. I used Mac Mini Vault, mainly for its Rent to Own option. A couple of well-known people host their sites on a Mac Mini: Brett Terpstra and Ben Brooks.

The most obvious thing that a Mac Mini server gets you is a server that runs OS X. The Mac Mini is one of only two Macs that you could host in a data center, and therefore one of only two “servers” that can legally run OS X. This can be extremely useful. There is OS X Server, which can run a number of different server-type things such as email or web sites. There is also MAMP (which Mac Mini Vault will give you a license for) for hosting multiple PHP and MySQL websites. All of this means that itʼs extremely easy to get a server up and running for yourself. It can also be useful for running things like SpamSieve or Apple Mail rules continuously.

The downside of running OS X on a server is that you do lose some performance. Itʼs also a bit harder to set up anything more exotic than Apache, PHP, and MySQL. The easiest way to manage your Mac remotely is a Remote Desktop connection. This makes it very easy to get started with a server, but itʼs not very convenient to have to use the GUI to configure your server. Since it is a remote connection, there is inherently a bit of lag, which can grow to be quite severe.

Fortunately, there are a few other options. Mac Mini Vault will install an alternative OS on your Mac Mini. The options include ESXi, Windows Server, and Ubuntu. While it may be tempting to use it as an ESXi host, that gives up a considerable amount of performance. I started out running OS X on my Mac Mini and eventually decided to run Ubuntu 12.04 instead. The benefit of this approach is that you can use standard Linux tools to manage your server, and Iʼm very comfortable managing Linux servers. It can also give you better performance, as many software stacks are tuned for running on Linux.

It is not without its drawbacks. While Ubuntu runs well on the Mac Mini, its performance isnʼt great. The Mac Minis run a generation-old mobile CPU, and for my use case, that is fairly slow. I serve my website over HTTPS because I value my readersʼ privacy, and establishing SSL connections on a Mac Mini is quite slow: it only manages around 200 per second. Thatʼs quite a bit more traffic than this site receives but, in comparison to VPS servers, itʼs pretty bad even considering the price. This is mostly my fault, as I chose the base model, but even if the server model doubles the performance, itʼs not good. A $20 Linode can manage at least 300 connections per second, and $20 is cheaper than either the Mac Mini Colo or the Mac Mini Vault colocation plans.
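If you want to estimate this for your own hardware: most of the cost of a full SSL handshake is the serverʼs RSA private-key operation, so OpenSSLʼs built-in benchmark gives a rough per-core ceiling on new connections per second. This is a sketch, not the exact methodology I used, and the numbers will vary with machine and key size:

```shell
# Rough ceiling on full TLS handshakes/sec for one core: each new
# connection costs roughly one RSA private-key (sign) operation.
openssl speed rsa2048 2>/dev/null | grep "rsa 2048"
```

The "sign/s" column approximates the best case; real handshakes add network round-trips and other overhead on top.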

The performance characteristics of the Mac Mini are a little strange. I chose to install the maximum amount of RAM, 16 GB. As mentioned before, the CPU is quite weak. If you happen to have a very memory-intensive application, this is great. If not, youʼll get much better performance by simply spending an extra $10 a month on a Linode.

Another challenging issue with using Ubuntu on a Mac Mini is that you are basically managing a bare-metal server. VPSs are much more convenient, as you never need to worry about messing up: if you do, you can just create a new server and not repeat the same mistake. With a bare-metal server or a Mac Mini, you donʼt have that luxury. If you mess something up, you need to wait for support to fix it for you. I did this once. Mac Mini Vaultʼs support was great, but I happened to mess up my server during non-support hours. If I had been using Linode, I could have just fired up a new server and gotten everything up and running in a few minutes.

All in all, I donʼt think itʼs a good idea to use a Mac Mini as a server unless you have special circumstances that make it a requirement. If you feel that you need to own your serverʼs hardware, then it is a great option. The other reason you might want to run a Mac Mini server is if you need a server that runs OS X: itʼs a great option for a build server, or if you want an extremely easy-to-manage web server. It appears that Apple let slip that itʼs going to rev the Mac Mini very soon. Assuming they upgrade the CPU options, it may be a great server option, and Iʼll probably go back to using it.

“Immediate” handler for Ansible

One of the things that I miss in Ansible is an equivalent of Chefʼs notifies :immediately. The problem that I was trying to solve is resizing MySQLʼs binary logs (which turns out to be a bit more complicated). Basically, you install MySQL, then change the configuration file, and finally restart MySQL, which causes MySQL to barf because the binary logs arenʼt the correct size.

The solution that sprang to mind was to delete the binary logs immediately after installing MySQL and changing the configuration file. Handlers canʼt quite do this: they only run at the end of the current play. There also isnʼt a way to run a handler only if both tasks have changed. Ansibleʼs documentation isnʼt much help in this regard; the when statement documentation gets you fairly close, but not quite there.

Here is how you can do it. In my case, I register: mysql_installation on the MySQL installation task and register: config_status on the template task that changes the configuration. Then, on our “handler” task, we add when: mysql_installation|changed and config_status|changed. Here is what it looks like all together:

- name: Install mysql
  apt: name=mysql-server state=present
  register: mysql_installation

- name: Configure mysql
  template: src=my.cnf.j2 dest=/etc/mysql/my.cnf
  notify: Restart mysql
  register: config_status

- name: Delete binary logs
  shell: service mysql stop && rm -rf /var/lib/mysql/ib_logfile* && service mysql start
  when: mysql_installation|changed and config_status|changed
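For completeness, the Restart mysql handler that notify references lives in the roleʼs handlers file, which isnʼt shown above. A minimal sketch (the file path is the conventional one, assumed here) might be:

```yaml
# handlers/main.yml
- name: Restart mysql
  service: name=mysql state=restarted
```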

There are better solutions to my original problem, but I think this technique could be useful for other things.

(re) Starting with Chef

Quite a while ago, I posted about using Logstash as a Chef handler. That was about the last time that I used Chef. I changed jobs, and managing servers wasnʼt part of my duties anymore. Well, Iʼve changed jobs again and, now, Iʼm back to managing servers. I initially started using Ansible, but I donʼt like the way it works. So, Iʼm back to using Chef.

When I got started this time around, I once again began with Chefʼs documentation. If you are at all familiar with the recent developments (well, not so recent really), then you already know what the problem is. It seems obvious in retrospect (or, for that matter, while youʼre doing it), but dumping all of your custom cookbooks, as well as all of your dependencies, into a single repository is a bad idea.

Then, there is cookbook uploading. Once you have all of your cookbooks set up in your repository, you need to upload them to the Chef server. The first time is relatively easy: just knife cookbook upload --all to upload all of them at once. Then, when you modify a cookbook and include a new dependency, you go to upload it with knife cookbook upload <cookbook-name>. Which, of course, gives you an error that not all of the dependencies are available on the server. Annoying, but easily solved: just run knife cookbook upload --include-dependencies <cookbook-name>.

There has to be a better way, and there is: The Berkshelf Way. Also, donʼt follow the directions for installing Chef Workstation. Youʼll need the part about setting up the keys and ~/.chef, but save yourself some trouble and just install the Chef DK. Then use Berkshelf to set up an application cookbook. To upload the cookbook and its dependencies to the Chef server, all you need to do is run berks upload. In addition to the much better workflow, you get a bunch of awesome extras: a Vagrant environment for manual testing, Test Kitchen for automated integration testing, and ChefSpec for unit testing. Berkshelf is an awesome addition to Chef, and it should be the default workflow.
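To give a feel for it, the Berksfile at the root of an application cookbook can be tiny. A minimal sketch (the commented-out dependency is purely illustrative):

```ruby
# Berksfile: the metadata keyword pulls dependencies from metadata.rb
source ""

metadata

# dependencies not listed in metadata.rb can be pinned here, e.g.:
# cookbook "mysql", "~> 5.5"
```

With this in place, berks install resolves the dependency graph and berks upload pushes the cookbook and everything it depends on to the Chef server in one step.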

Donate to Mayday PAC

If you live in the United States and feel like the government has been bought away from your interests (and letʼs be honest, it has), then you should put your money where your mouth is and donate to Mayday PAC. I donated a few hours ago, and they had $1.2 million left to go. You have 13 hours left to donate, and they have a little under $500,000 left to go.

WWDC 2014

As many of you know, WWDC was huge this year. It seems like Apple managed to do as much this year as they have been able to do for the last 3 to 4 years combined. Of course, almost everything that they announced this year has been in the works for a number of years. It sure feels like theyʼve given pretty much everyone everything that they have been asking for.

It has me interested in iOS development again. I guess that isnʼt saying much, as I said that I was going to have an app ready at launch for iOS 7. I donʼt feel too bad about it, as the link in that post was to Marco Arment saying about the same thing. I would consider him to be a professional iOS developer, and he didnʼt have an app ready to launch with iOS 7 (and he still hasnʼt launched his main app). So this year, I wonʼt say that Iʼm going to have an app ready for iOS 8, but I am trying out all of the new developer things.

Iʼm extremely excited for this coming year. Apple announced a huge number of things for developers to utilize. Itʼs going to be great to see what everyone can come up with, and I will finally be able to use Swype or SwiftKey again.

Static Site Generators focus on the wrong things

SSGs focus on the wrong problem. They focus too much on getting the non-functional requirements right. More secure, more portable, more flexible, more hackable.

The functional requirements in blogging are:

  • less friction to publish - encourage more frequent blogging
  • improved audience experience
  • better analytics to understand audience
  • help in increasing views
— Pankaj More

I go back and forth on this. On the one hand, I really like having all of my posts in simple text files on disk. Iʼm a bit of a plain text nerd, so that is an awesome feeling for me.

On the other hand, I hate not being able to post from any device. Itʼs a crappy feeling. Sure, with how I have things set up, all I need is a device with git, but I do have two devices that donʼt have decent git access. One of them, which happens to be my most used device, is my iPhone. I know that there is Sgit, but the last thing that I want to do is jump into it just to write a blog post. The other device is an iPad; Iʼm not aware of a git client for it. I think that Git Mongo can commit and push, but Iʼm not going to spend $10 to find out. There is but, that only works if you host your blogʼs git repository on GitHub. Not to mention that I find it to be incredibly painful to use.

As Iʼve clearly illustrated, Iʼm very much on the “make publishing easier” side of the pendulum swing right now. Some people may have noticed that I switched my blog back to Ghost about a week ago. Obviously, it hasnʼt helped much with the rate of my blog posts, but Iʼm happy with my setup once again.

What does this all mean? Not a whole lot. Just use whatever feels right to you. There are certainly downsides to every system, so just use whatever makes sense to you.

I do take a small amount of issue with a couple of the alternatives that Mr. More suggests. He suggests using Medium, Svbtle, or Ghost. Iʼm certainly fine with Ghost, but the other two I would consider to be non-options. If youʼre taking the time to blog, you should own your words. In a technical sense, you still own your words with either Svbtle or Medium, but you donʼt control the experience of people on your site. This is not something that you should give up. If you arenʼt very familiar with blogging, then I would suggest using either hosted Ghost or (be sure to purchase your own domain name for your site), as both allow you to easily transition into hosting your site yourself.

Verizon isnʼt a fan of the free market either

Verizon originally sued the FCC over the FCCʼs Net Neutrality rules. They sued on the grounds that the FCC didnʼt have the authority to enforce them. The implication, of course, is that they preferred to have the free market settle any such issues. So now:

This week, Netflix customer Yuri Victor tweeted a screenshot of a message he got from Netflix that said, “The Verizon network is crowded right now. Adjusting video for smoother playback...” It turns out Netflix has been providing these messages to customers of multiple ISPs for a month.

Verizon is worried that these notices will harm its otherwise sparkling reputation and even cause customers to switch Internet service providers. After all, the US Internet market is flush with competition, with every resident able to choose from so many high-quality service providers that there's no way we could list them all here.

“Netflixʼs false accusations have the potential to harm the Verizon brand in the marketplace,” Verizon Executive VP and General Counsel Randal Milch wrote today in a letter to Netflix General Counsel David Hyman. “The impression that Netflix is falsely giving our customers is that the Verizon network is generally ‘crowdedʼ and troublesome. This could cause a customer to think that any attempted viewing of video, whether it be Hulu, YouTube or other sites, would yield a similarly ‘crowdedʼ experience, and he or she may then choose to alter or cease their use of the Verizon network.”

— Jon Brodkin on Ars Technica

So, Verizon isnʼt a fan of Netflix telling its customers just how bad it is. This is a well-documented problem. It got so bad that Netflix had to bribe Verizon to allow Netflix to pay for its own transit and to pay for running an interconnect to Verizon. All Verizon has to do is connect to Netflix with a sufficient amount of bandwidth to fulfill their customersʼ requests and route that across their own network. Apparently, Verizon isnʼt even willing to do that after being bribed.

Netflix is rightfully fed up with the situation, so they are doing the only thing left to them: informing their customers of the problem. Now that Netflix is letting customers know that Verizon isnʼt fulfilling the service they are paying for, Verizon is threatening to sue Netflix for saying so. LOL.

This is exactly what Verizon was asking for. Now that itʼs been revealed that they are willfully disregarding their commitment to their customers, instead of fixing the problem, they are threatening legal action against the people revealing it. Itʼs the perfect case to show why we need Net Neutrality.

A Fresh Coat of Paint

Regular readers of my site probably noticed a difference today: Iʼve redesigned my site. If youʼve been following me for a long time, you may recognize some of the elements of the new design. Iʼm using the same fonts that I used on . I really liked them back then, and I still do now.

While it is similar to past designs Iʼve done, I really like this newest generation. Iʼm sure there are some things that I could have done better, but I like it. I think everything should be working properly; let me know if it isnʼt. I hope you enjoy the new design.

Using Docker to build a Jekyll site with Jenkins

This probably wonʼt be particularly helpful for you; itʼs probably much easier to just run jekyll build and then deploy however you currently do. I donʼt like to do that, though. I prefer to have my site built on git push, and that is exactly what I set up here. Itʼs an improvement on my old process of using Jenkins to build my Jekyll site.

Youʼll need:
  • Jenkins up and running
  • A Jekyll site in a git repo
  • A job in Jenkins to build the site
    • The build agent that will build your site needs to have Docker installed
    • It also needs to have pull access to the git repo that holds the Jekyll site
    • For it to be useful to you, you should have it so that pushing to the git repo will cause a build

Once you have all of that set up, we start by building the Docker container that will be used to generate the site:

FROM ubuntu:12.04
RUN apt-get update && \
  apt-get install -y python-software-properties && \
  apt-add-repository ppa:brightbox/ruby-ng && \
  apt-add-repository ppa:chris-lea/node.js && \
  apt-get update && \
  apt-get install -y ruby2.1 ruby rubygems ruby-switch ruby2.1-dev nodejs sudo rsync && \
  apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN ruby-switch --set ruby2.1
RUN gem install bundler

RUN useradd --uid 1000 jenkins
ADD /bin/

VOLUME ["/jekyll", "/deploy"]

CMD ["/bin/"]

One consideration is that you should set the uid of the Jenkins user in the container to the uid of the Jenkins agent. If you do not, youʼll end up with some rather painful permissions issues.
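To find the right uid to bake into the useradd line, you can check on the build agent itself. This sketch assumes the agent runs Jenkins as a user named jenkins; the fallback just prints the current userʼs uid:

```shell
# uid of the jenkins user on the build agent; falls back to the
# current user's uid if no jenkins user exists on this machine
id -u jenkins 2>/dev/null || id -u
```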

Iʼm sure you noticed that the Dockerfile specifies a script to run. That is up next:

#!/bin/bash
cd /jekyll
chown -R jenkins .
sudo -u jenkins -H bundle install -j4
sudo -u jenkins -H bundle exec jekyll build || exit 1
sudo -u jenkins -H rsync -avh --delete /jekyll/_site/ /deploy/

I put both of those files into _docker so that they wonʼt be included in the built site. Commit them to your Jekyll siteʼs repo. Now, we add the build script in Jenkins:

git checkout master
git pull
cd _docker
docker build -t btobolaski/jekyll-builder .
docker run --name=jekyll-builder -v "/$WORKSPACE:/jekyll" -v /var/www/ruin:/deploy btobolaski/jekyll-builder
docker rm jekyll-builder

Overall, thatʼs pretty simple. If you didnʼt put the Dockerfile and script into the _docker directory, change the cd _docker line to match what you chose. You will also need to choose where the built site will end up; I chose /var/www/ruin. Feel free to use your own username instead of btobolaski in the script.

This is what I use to build this site. All I have to do is commit a new post to the git repo and push it, and in a few seconds it appears on the site. It works really great for me, but Iʼm not sure that anyone else will find it very useful.

Intelligence Policy Bans Citation of Leaked Material

A new pre-publication review policy for the Office of Director of National Intelligence says the agency’s current and former employees and contractors may not cite news reports based on leaks in their speeches, opinion articles, books, term papers or other unofficial writings.

Such officials “must not use sourcing that comes from known leaks, or unauthorized disclosures of sensitive information,” it says. “The use of such information in a publication can confirm the validity of an unauthorized disclosure and cause further harm to national security.”

— Charlie Savage on The New York Times

This seems like complete bullshit. How exactly does citing leaked material give it any more credibility? It just seems like a stick to beat down the opposition.