Thursday 13 December 2012

HOWTO: Deploy mono web applications with capistrano-mono-deploy

I've recently been working on a capistrano gem to help with mono deployments. I've had some experience trying to get capistrano to work with non-Rails environments; it can be a bit of a pain if you're not sure what you're doing, and most beginner tutorials are aimed at people developing Ruby on Rails. So I thought I'd create a gem and provide a little how-to. The gem is called capistrano-mono-deploy. This is how you can use it to deploy a simple web application to a Linux machine. In this example I'll be using the excellent ServiceStack for the web application and xsp4 to host it. You can find all the code on my github account.

Mono

I'm running Ubuntu 12.10 on my server and am using Mono 2.10.8.1, which was installed from the package repository using apt-get:

sudo apt-get install mono-xsp4

Service Stack

First I knocked up a quick ServiceStack application and created a status endpoint. This should work for any application that's mono compatible, but I haven't tried it with anything else. Once you've got an application you'll want to deploy it.

Capistrano

You need to make sure you have capistrano and RubyGems installed first. I'm running this from my Ubuntu development machine and have not tried deploying from a Windows machine to a 'nix machine.

Install ruby and ruby gems
sudo apt-get install ruby1.9.3 rubygems

Install bundler and capistrano
sudo gem install bundler capistrano

Add a Gemfile and insert the following
source :rubygems
gem "capistrano-mono-deploy"

Call bundle to install the gems
bundle

Capify your project
capify .

Edit your config/deploy.rb to be something like this:

require "capistrano/mono-deploy"

ssh_options[:keys] = Dir["#{ENV['HOME']}/.ssh/id_rsa"] # your private key, not the .pub

set :application, "service stack deployed with capistrano running on mono"
set :deploy_to, "~/www" 

role :app, "the.server.com"

Then from the root of your project call
cap deploy

BOOM your application is deployed!

That's assuming you have your ssh keys set up for the server; if not, you'll have been prompted for a password.

Now I wouldn't recommend using xsp as your weapon of choice when it comes to hosting, but it's easy to get started with and you can swap it out for other things once you're up and running.

You should take some time to read up about capistrano and how it works. It's a really powerful tool that has loads of options available for you to play with. The capistrano-mono-deploy gem currently deploys the first directory it finds with a web.config located in it.
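To make that last behaviour concrete, here's a rough sketch of the idea — finding the first directory that contains a web.config and treating it as the web root. The directory names are made up for illustration; this is an approximation of the gem's behaviour, not its actual code:

```shell
# Approximate how the deploy target is chosen: the first directory
# containing a web.config (paths here are hypothetical).
mkdir -p /tmp/capdemo/src/WebApp /tmp/capdemo/src/Tests
touch /tmp/capdemo/src/WebApp/web.config
webroot=$(dirname "$(find /tmp/capdemo -name web.config | sort | head -n 1)")
echo "$webroot"
```

So if your solution has several projects, make sure the one you want served is the one whose web.config gets found.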

Feedback appreciated, particularly on my terrible ruby code!

Friday 21 September 2012

Deploying ServiceStack to debian using capistrano with hosting in nginx and fastcgi

Over the last month we've started using ServiceStack for a couple of our api endpoints. We're hosting these projects on a debian squeeze vm using nginx and mono. We ran into various problems along the way. Here's a breakdown of what we found and how we solved the issues we ran into. Hopefully you'll find this useful.

Mono

We're using version 2.11.3. There are various bug fixes compared to the version that comes with squeeze, namely problems with the min pool size specified in a connection string. Rule of thumb: if there's a bug in mono, get the latest stable!

Nginx

We're using nginx and fastcgi to host the application. This has made life easier for automated deployments as we can just specify the socket file based on the incoming host header. There's a discussion about the best way to host ServiceStack on Linux over at stackoverflow. Our nginx config looks like this:
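(The embedded config didn't survive in this archive; below is a minimal sketch of the fastcgi-over-unix-socket setup described, with illustrative server names and paths rather than our actual values.)

```nginx
server {
    listen 80;
    server_name example.api.com;   # illustrative fqdn

    location / {
        root /var/www/example.api.com/latest_release;
        # socket file named after the incoming host header
        fastcgi_pass unix:/tmp/example.api.com.sock;
        include /etc/nginx/fastcgi_params;
    }
}
```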
When we deploy the application we nohup the mono fastcgi server from capistrano, where fqdn is the fully qualified domain name (i.e. the host you are requesting) and latest_release is the directory where the web.config is located.
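(The launch command itself is also missing from this archive; something along these lines — a hedged reconstruction using fastcgi-mono-server4, with placeholder paths, so check `fastcgi-mono-server4 --help` for the exact flags:)

```shell
# Illustrative: start the mono fastcgi server detached, bound to a
# socket named after the fqdn. Paths and names are placeholders.
nohup fastcgi-mono-server4 \
  /applications=/:/var/www/example.api.com/latest_release \
  /socket=unix:/tmp/example.api.com.sock \
  > /var/log/example.api.com.fastcgi.log 2>&1 &
```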

Capistrano

To get the files onto the server using capistrano we followed the standard deploy recipe. Our set up differs from the norm in that we have a separate deployment repository from the application. This allows us to re-deploy without re-building the whole application. Currently we use TeamCity to package up the application; we then unzip the packaged application into a directory inside the deployment repository.
We use deploy_via copy to get everything onto the server. This means you need to have authorized_keys set up on your server. We store our ssh keys in the git repository and pull all the keys into capistrano like this:
ssh_options[:keys] = Dir["./ssh/*id_rsa"]

No downtime during deployments ... almost

Most people deal with deployment downtime using a load balancer: take a server out of rotation, update it, bring it back in. The problem with this is it's slow and you need to wait for the server to finish what it's doing. Also, our application can have long running background tasks (up to 1000ms), so we didn't want to kill those off when we deployed. So we decided to take advantage of a rather nice feature of using sockets. When you start fastcgi using a socket, the process gets a handle to the file. This means that you can move the file! You can move your new site into production while a long running task carries on, leaving it running on the old code until it finishes. Amazing!
This is what we have so far:
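(The recipe itself is missing from this archive; here's a hedged sketch of the socket-swap idea as a capistrano task — the task names, variables, and hook point are invented for illustration, not our actual recipe:)

```ruby
# Illustrative capistrano (v2-era) tasks: start the new fastcgi process
# on a staging socket, then move the socket file over the live path.
# The old process keeps its handle on the moved file, so in-flight
# work finishes on the old code while new connections hit the new one.
namespace :deploy do
  task :start_fastcgi do
    run "nohup fastcgi-mono-server4 " \
        "/applications=/:#{latest_release} " \
        "/socket=unix:/tmp/#{fqdn}.sock.new > /dev/null 2>&1 &"
  end

  task :swap_socket do
    run "mv /tmp/#{fqdn}.sock.new /tmp/#{fqdn}.sock"
  end
end
```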
There's room for improvement; feedback appreciated.

Thursday 20 September 2012

ServiceStack the way I like it

Over the last month we've started using ServiceStack for a couple of our api endpoints. Here's a breakdown of how I configured ServiceStack to work the way I like it. Hopefully you'll find this useful.

Overriding the defaults

Some of the defaults for ServiceStack are, in my opinion, not well suited to writing an api. This is probably down to the framework's desire to be a complete web framework. Here's our current default implementation of AppHost. For me, the biggest annoyance was trying to find the DefaultContentType setting. I found some of the settings unintuitive to find, but it's not like you have to do it very often!

Timing requests with StatsD

As you can see, we've added a StatsD feature, which was very easy to add. It basically times how long each request took and logs it to StatsD. Here's how we did it:
It would have been nicer if we could wrap the request handler, but that kind of pipeline is foreign to the framework, so you need to subscribe to the begin and end messages instead. There's probably a better way of recording the time spent, but hey ho, it works for us.

RestServiceBase, Exceptions and Error Responses

One of my biggest bugbears with ServiceStack was the insistence on a separate request and response object, the presence of a special property, and a naming convention you must follow, all in the name of sending error and validation messages back to the client. It's explained at length on the wiki, and Demis was good enough to answer our question.

RestServiceBase

The simple RestServiceBase that comes with the framework provides an easy way of getting started, but there aren't many hooks you can use to change how it works. It would be nice if you could inject your own error response creator. We ended up inheriting from RestServiceBase and overriding how it works:
We basically chopped out the bits we're not using and changed the thing that creates the error response. This allows us to respond with an error no matter what the request type is or what response we are going to send back. It gives us extra flexibility above what is provided out of the box. In a nutshell, if there's an exception in the code we will always get a stack trace in the response when debug is on.

Validation

We had the same issue with the validation feature; if you don't follow the convention you don't get anything in the response body. So we followed the same practice: copied the ValidationFeature and tweaked it to how we wanted it.

Conclusion


I like ServiceStack; it's really easy to get up and running, and whilst it has its own opinions on how you should work, what framework doesn't?

Tuesday 11 September 2012

Run All sql files in a directory

Create a batch file and change the params accordingly to run all sql files in a directory against your db:
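(The batch file itself didn't survive in this archive; here's a minimal sketch of the idea, assuming SQL Server's sqlcmd — the server name, database name, and directory are placeholders to change for your setup:)

```bat
@echo off
REM Run every .sql file in a directory against a database.
REM SERVER, DATABASE and the directory are placeholders.
set SERVER=localhost
set DATABASE=MyDatabase
for %%f in (C:\scripts\*.sql) do (
    echo Running %%f
    sqlcmd -S %SERVER% -d %DATABASE% -E -i "%%f"
)
```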


Wednesday 1 August 2012

Stub responses with Nest.ElasticClient

We've started using ElasticSearch at work for some of our projects. When we started out, doing simple web requests was easy enough, but as the complexity of what we were doing grew it became obvious that we were starting to write our own DSL for elastic search. Surely someone else has already done this?

There are several options but we settled on NEST. Mainly because of the way the DSL was written. The first stumbling block I hit was unit testing. You could just stub out IElasticClient and then add some integration tests to cover yourself.

The problem with integration tests is they're slow and require, well something to integrate with, so I thought it would be good to stub the json response from ElasticSearch.

When using NEST it's not all that obvious, but here's how I did it:


The downside is you're testing the framework's assumptions. But it's good to know what the framework is actually doing under the hood, i.e. calling a web request with some parameters.

You also get the added bonus of being able to test that the response can be parsed into your objects.

Doing too much in one test?

Absolutely. In our code base it's two tests: one for parsing the response and another to assert the post parameters.

Sunday 22 July 2012

Cache Fluffer / Gorilla Caching / Cache Warmer

The relatively simple introduction of a cache fluffer can make a huge difference to performance, particularly at peak load. The idea is simple: keep the cache up to date so you don't have to go and get data when the user requests the site.

        // Serve from cache when possible, kicking off a background
        // refresh ("fluff") so popular items never expire.
        public T Get(Func<T> factory, string cacheKey)
        {
            if (cache.Contains(cacheKey))
            {
                Task.Factory.StartNew(() => FluffTheCache(factory, cacheKey));
                return cache.Get(cacheKey);
            }
            return Put(factory, cacheKey);
        }

        // Probabilistically refresh the item as it nears expiry; the
        // randomness stops everything refreshing at the same moment.
        public void FluffTheCache(Func<T> factory, string cacheKey)
        {
            var expiry = cache.GetExpiry(cacheKey);
            if (expiry.Subtract(new TimeSpan(0, 0, 10)).Second < new Random().Next(1, 60))
            {
                Put(factory, cacheKey);
            }
        }

        public T Put(Func<T> factory, string cacheKey)
        {
            var item = factory();
            cache.Set(item, cacheKey);
            return item;
        }
The beauty of this method is that popular items will always be in the cache, whereas less popular items will not be cached for unnecessary lengths of time.

You also get some randomization mixed in for free, meaning that items don't all drop out of the cache at the same time.

Tuesday 10 July 2012

Testing Async Behaviour - AutoResetEvent WaitOne

This little nugget helps you test async code.

https://github.com/7digital/SevenDigital.Api.Wrapper/blob/master/src/SevenDigital.Api.Wrapper.Unit.Tests/FluentAPITests.cs

Essentially you use a semaphore to signal that something has finished. In our case we're using AutoResetEvent. This allows you to wait up to a given amount of time for something to happen.

In our case we're waiting for an action to be fired and we want to ensure the result is true.

[Test]
public void AsyncTest()
{
    var autoResetEvent = new AutoResetEvent(false);
    new Something.Async(
        action =>
        {
            Assert.That(action.SomethingToTest, Is.True);
            autoResetEvent.Set();
        });

    var result = autoResetEvent.WaitOne(1000 * 60);
    Assert.That(result, Is.True, "Method Not Fired");
}
Make sure that you test the result from WaitOne. If you don't, you're not testing that the callback was actually fired.

Monday 9 July 2012

Samsung Chromebox with XBMC

What to do with a Chromebox?

At google IO 2012 every delegate got a free Samsung Chromebox. Personally I already have a laptop, desktop, tablet and smart phone. Why would I need a cut-down desktop? It would probably be great for your Nan, who has no idea what she's doing and just wants to email the grandkids.

So what should I do with this piece of hardware? Well, my xbox classic is struggling to play some high def media files (it is over 10 years old), so how about I use my free chromebox?

Enable Developer Mode

Essentially flip a switch and erase all user data. Here's how.

Install ChrUbuntu

I only had one problem following these instructions: me, because I didn't follow them. You must leave your box in developer mode, which means boot takes almost a minute because of the developer mode warning.

Install XBMC

XBMC has been accepted into debian.
sudo apt-get install xbmc
BOOM!

Install VNC and SSH

Invaluable, otherwise you'll be plugging in your keyboard and mouse every time you get a problem. 

Install SSH
Enable Remote Desktop Connections

Run xbmc on startup

I had a slight issue with xbmc starting just before the window manager had fully started, which caused xbmc to start in a windowed state. Instead I wrote a small script that pauses for a few seconds and then starts xbmc.

#!/bin/sh
sleep 5
xbmc
chmod +x the script and then add it to your startup programs.

Summary

In total it took me about 90 minutes. My main annoyance is the beep you get on startup. I connected it to my TV through the dvi port. It handled 720p files with no problems; I just need to test with some 1080p files.

Just waiting on my display port -> HDMI cable  so that I can give it a proper testing.

Would I recommend buying one explicitly for this purpose? No, unless the price tag is brought down significantly.

Wednesday 20 June 2012

Language Obsessed

Recently on the GOOS group discussion board there was a good debate about Acceptance Testing. Whilst much of the advice was good it amazed me how many people hadn't looked outside of their own language community for inspiration.

For instance, in the dot net world automated acceptance testing frameworks for web sites were fairly poor. However, the Ruby guys nailed this a while ago, so why not just use those frameworks? Take a look at WATIR for more information. When it comes to black box testing your application, why does it matter what language or framework you use? If anything, I'd say it's better to test it using a completely different language.

If you've written your application in C#, what's stopping you from using Ruby to test your website? Many people are scared of using something that isn't written in their favorite language. The problem with this is you don't expose yourself to new ideas and new ways of doing things.

Don't limit yourself to one ecosystem.

Monday 23 January 2012

Cucumber.js with zombie.js

I wanted to start looking at alternatives to our current set of cucumber feature tests. At the moment on the web team we're using FireWatir and Capybara, so I thought I'd take a look at what was available in Node.js. Many people think it's strange that a .Net shop would use something written for testing Ruby, or even consider something that isn't from the .Net community. Personally I think it's a benefit to truly look at something from the outside in. Should it matter what you're using to drive your end product or what language you're using to test it? Not really. So what are the motivations for moving away from Ruby, Capybara and FireWatir?

In a word: 'flaky'. We've had heaps of issues getting our feature tests, AATs and smoke tests reliable. When it comes to testing, consistency should be king; they should be as solid as your unit tests. If they fail you want to know for definite that you've broken something, rather than suspecting it's a problem with the webdriver.

It is with this aim in mind that I started looking at the following.

Cucumber.js is definitely in its infancy; there's lots of stuff missing, but there's enough there to get going.

Zombie.js is a headless browser. It claims to be insanely fast; no complaints here.

First up we got something working with the current implementation of cucumber-js: https://github.com/antonydenyer/zombiejsplayground. The progress formatter works fine, and the usual "you can implement step definitions for undefined steps" messages are a real help. Interestingly, rather than requiring zombie.js in our step definitions, we ended up going down the route of implementing our own DSL inside world.js. We could have used another DSL like capybara to protect us from changing the browser/driver we use. This is what we currently do with our Ruby implementation; the problem is that we've ended up implementing our own hacks to get round the limitations/flakiness of selenium/webdriver, and to date we have never 'just swapped out the driver' to see what happens when the tests run against chrome/ie. That said, should you be using cucumber tests to test the browser? I don't think you should. With that in mind we ended up implementing directly against zombie.js from our own DSL.

Extending cucumber-js: https://github.com/antonydenyer/cucumber-js

There are a lot of things yet to be implemented in cucumber.js; the one that gives me great satisfaction is the pretty formatter. Look, everything is green! It's nowhere near ready for production, but you do get a nice pretty formatter.

Thanks to Raoul Millais for helping out with command line parsing and general hand holding around JavaScript first steps.

Tuesday 10 January 2012

OpenRasta is opening up to the community

Last Thursday a few of us from 7digital had a meet up with Sebastien Lambla author of OpenRasta.

As some of you may know, we've been writing all our new API endpoints using OpenRasta. We have a vested interest in ensuring the success of this project and as such are responding to the rallying cry with gusto.

So what's going to happen? Essentially 7digital, along with Huddle and Neil Mosafi, will be jumping on board to help out with the development and maintenance of OpenRasta 2.x. The short term goal is to help people get started with OpenRasta; at the moment it's not particularly easy to get going with the 2.1 code. The first thing to get up and running is a build server, which is something that 7digital will be taking responsibility for. Our first aim is to build OpenRasta and publish _latest binaries with every push, making those binaries available in OpenWrap, NuGet and as binary downloads.

We're really looking forward to working with everyone on OpenRasta and can't wait to get stuck in.