According to Wikipedia, in some cultures the term professional is used as shorthand to describe a particular social stratum of well-educated workers who enjoy considerable work autonomy and who are commonly engaged in creative and intellectually challenging work. According to Google Analytics you probably come from one of these cultures.

I have to admit that when I started to hear the term Craftsmanship I was distrustful, maybe because some of the people who used it tried to use it as a label to place themselves on a higher level than the rest of us.

But lately I've been meeting more people who describe themselves as craftsmen, and who use the term to describe a set of values they care about in their professional lives.

If you want to know more about Craftsmanship, please read Sandro Mancuso's book. If you want a one-line summary, Craftsmanship could be summed up as "raising the bar of professionalism".

What's difficult for me to understand is that the software development community had to coin a new term to warn about the lack of professionalism in our profession. Xavi Gost gave a talk at Conferencia Agile Spain 2014 this Thursday in which he said: "We don't deserve an agile environment". And that's true. We are always complaining about not having an agile environment and, when we do have it, we don't live up to it. We keep writing crappy code that is difficult to maintain, difficult to evolve, that doesn't scale well, etc.

Friday afternoon I was having some beers with a friend who works in a big Spanish consultancy. He explained to me how they work, what kind of business decisions they make, how people behave, and how nobody pays attention to the improvements he is trying to make. I was scared. I was scared because when I was young I worked for a company like that. And I was scared because I know a lot of people working in companies like that.

What amazes me is that these kinds of companies earn a lot of money. There are clients who pay them to create crappy applications that are far away from the applications they want to have. Pedro Serrahima said in the closing keynote of the conference that clients hire this kind of company because they are scared of making decisions. They prefer to have a contingency plan rather than discover new ways of working. They prefer to work with a "big" company with big financial muscle than with a small company that makes great software.

Some days I think that I don't care. With luck, that kind of company will always work with bad clients and always employ bad professionals, like in a ghetto.

But I really do care. I care because most of these projects are public projects that you and I pay for with our taxes. I care because nowadays, when (at least in Spain) good jobs are scarce, there are a lot of good people held captive in those companies.

One day I went to a meeting in a public department in Spain. They explained to us how they work, their continuous integration system, and how the companies they hire work. It was terrible, unbelievable.

In my current job we are helping the UK government to create a new service. In the UK, some of the new projects under development are required to have their source code in a public GitHub account. I think this is a great measure. Increasing the transparency of our work immediately increases its quality. If all the companies that work for the government showed their code, I'm sure we would have better applications. Why not have a dashboard with the state of the build server as well?

As professionals we have to take ownership of our careers. As a profession we have to take ownership of our future.

In this article we will see how we can test-drive the routing configuration of an ASP.NET web application.

Let's start with our first test:

[TestMethod]
public void TestSimpleRoute()
{
    RouteCollection routes = new RouteCollection();
    RouteConfig.RegisterRoutes(routes);
    // Act - process the route
    RouteData result
        = routes.GetRouteData(CreateHttpContext("~/Admin/Index"));
    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("controller", result.Values["controller"]);
    Assert.AreEqual("action", result.Values["action"]);
}

private HttpContextBase CreateHttpContext(string targetUrl = null)
{
    var mockRequest = new Mock<HttpRequestBase>();

    mockRequest.Setup(m => m.AppRelativeCurrentExecutionFilePath)
        .Returns(targetUrl);
    mockRequest.Setup(m => m.HttpMethod).Returns("GET");

    var mockResponseBase = new Mock<HttpResponseBase>();
    mockResponseBase.Setup(m => m.ApplyAppPathModifier(It.IsAny<string>())).Returns<string>(s => s);

    var mockContext = new Mock<HttpContextBase>();
    mockContext.Setup(c => c.Request).Returns(mockRequest.Object);
    mockContext.Setup(c => c.Response).Returns(mockResponseBase.Object);

    return mockContext.Object;
}

As you can see, we are mocking the HttpContext. We just need to mock the returned HttpMethod and the function that converts an absolute URL into an application-relative one. As we pass an application-relative path in the test, we just return the URL passed as argument. In the test code we assert that the route values for controller and action are the ones we expect.

This test doesn't compile, as we haven't implemented the RegisterRoutes function yet. Let's implement it.

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.MapRoute("SimpleRoute", "{controller}/{action}");
    }
}

In a "real" application, this class will be in the App_Start folder of our web application project. In our case, we will put this code in our test project.

In our second test we will check the default values of a route, that is, the controller and action values we get when we don't provide them in the path.

[TestMethod]
public void TestDefaults()
{
    RouteCollection routes = new RouteCollection();
    RouteConfig.RegisterRoutes(routes);
    // Act - process the route
    RouteData result
        = routes.GetRouteData(CreateHttpContext("~/"));
    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("DefaultController", result.Values["controller"]);
    Assert.AreEqual("DefaultIndex", result.Values["action"]);
}

Let's make this test pass.

routes.MapRoute("SimpleRoute", "{controller}/{action}", 
    new { controller = "DefaultController", action = "DefaultIndex" });

As you can see, we are providing an anonymous object where we specify the default values for controller and action.

In our third test we will introduce static URL segments. Imagine that, before your controller and action, you want to specify some fixed segments, for example: http://yourdomain.com/Public/<controller>/<action>.

[TestMethod]
public void TestStaticUrlSegments()
{
    // Arrange - register routes
    RouteCollection routes = new RouteCollection();
    RouteConfig.RegisterRoutes(routes);
    // Act - process the route
    RouteData result
        = routes.GetRouteData(CreateHttpContext("~/Public/Admin/Index"));
    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", result.Values["controller"]);
    Assert.AreEqual("Index", result.Values["action"]);
}

Let's make the test pass:

routes.MapRoute("Public", "Public/{controller}/{action}");

Time to do some refactoring in our test code. Let's start by implementing a [TestInitialize] method.

RouteCollection routes;

[TestInitialize]
public void TestInitialize()
{
    routes = new RouteCollection();
    RouteConfig.RegisterRoutes(routes);
}

And extract a method to retrieve the route values:

private string GetRouteValueFor(RouteData result, string key)
{
    return result.Values[key].ToString();
}

Now, our tests look like this:

[TestMethod]
public void TestStaticUrlSegments()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/Public/Admin/Index"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
}

Let's test mixed segments, that is, having a "fixed" string prepended to the controller's name, like http://yourdomain.com/Mixed<controller>/<action>.

[TestMethod]
public void TestMixedSegments()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/MixedAdmin/Index"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
}

Let's make the test pass.

routes.MapRoute("MixedSegments", "Mixed{controller}/{action}");

We can specify an alias as well, that is, mapping a static route to a specific controller and action.

[TestMethod]
public void TestAlias()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/OldAdmin/OldIndex"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
}

And the code that passes the test:

routes.MapRoute("Alias", "OldAdmin/OldIndex", new { controller = "Admin", action = "Index" });

The next feature to implement is custom segments, the segments that aren't controller or action.

[TestMethod]
public void TestCustomSegment()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/Admin/Index/SomeId"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
    Assert.AreEqual("SomeId", GetRouteValueFor(result, "id"));
}

As you can see, we want a segment called id to have the value SomeId. Let's do it:

routes.MapRoute("CustomSegment", "{controller}/{action}/{id}");

We can have optional segments as well. In the route we specify that there may be a segment called id, but the caller can decide not to provide it.

[TestMethod]
public void TestOptionalSegment()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/OptionalAdmin/Index"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
    Assert.AreEqual(UrlParameter.Optional, result.Values["id"]);
}

[TestMethod]
public void TestOptionalSegmentWithValue()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/OptionalAdmin/Index/4"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
    Assert.AreEqual("4", GetRouteValueFor(result, "id"));
}

Let's make the tests pass:

routes.MapRoute("OptionalSegment", "Optional{controller}/{action}/{id}", new { id = UrlParameter.Optional });

Finally, we may need to specify a variable-length list of segments. We can do that, but we will be responsible for splitting those segments ourselves. It will be clearer in the test code:

[TestMethod]
public void TestVariableLengthRoute()
{
    RouteData result = routes.GetRouteData(CreateHttpContext("~/CatchAllAdmin/Index/SubIndex/Step1/Step2/Step3"));

    Assert.IsNotNull(result);
    Assert.AreEqual("Admin", GetRouteValueFor(result, "controller"));
    Assert.AreEqual("Index", GetRouteValueFor(result, "action"));
    Assert.AreEqual("SubIndex", GetRouteValueFor(result, "id"));
    Assert.AreEqual("Step1/Step2/Step3", GetRouteValueFor(result, "catchAll"));
}

And let's pass the last test:

routes.MapRoute("CatchAllSegment", "CatchAll{controller}/{action}/{id}/{*catchAll}");

In this article you have seen how you can test the routing configuration of an ASP.NET web application. You can find the code of this article at https://github.com/vgaltes/TestDrivingASPNetRouting. I've made a commit for each step.

See you soon!

I'm a big fan of the BDD and ATDD ways of developing software, although I'm not clear about the difference between them (you can find a kind of explanation here). So, now that I'm starting to learn Node.js and AngularJS, one of the first things I want to discover is how to write acceptance tests with these technologies. These tests can involve several things, like starting a Node server, filling a database with sample data, etc.

I'm following the fantastic Let's Code: Test-Driven JavaScript series by James Shore. In this series of tutorials he writes end-to-end tests too. He first uses PhantomJS directly, and after that he writes the same tests using CasperJS. In the end he prefers CasperJS, but the infrastructure for both kinds of tests is the same. Let's take a look. In the nodeunit tests you have to spawn a child process that calls the CasperJS test command, passing the files with the tests as a parameter.

var child_process = require("child_process");

var casperJsProcess = child_process.spawn("./node_modules/.bin/casperjs",
    ["test", "./src/features/shouldShowToDoItems.js"], {
        stdio: "inherit",
        env: { "PHANTOMJS_EXECUTABLE": "./node_modules/phantomjs/lib/phantom/bin/phantomjs" }
    });

casperJsProcess.on("exit", function(code) {
    test.equals(code, 0, "CasperJS test failures");
    test.done();
});

And write all the tests you want in the file you pass as a parameter. For example:

casper.test.begin("simple test", function(test) {
    casper.start("http://localhost:8090");
    test.assertTitle("Angular ToDo List", "The title is not the one expected");

    casper.waitForSelector('#todoItemsList', function() {
        test.assertEval(function() {
            return __utils__.findAll(".todoItem").length == 10;
        }, "There aren't 10 results");
    });

    casper.run(function() {
        test.done();
    });
});

The problem with this solution is that if you have a Node module to, for example, fill the database with some sample data, you have to call it from the nodeunit test file. That's because CasperJS is not a Node module. There is a library to drive CasperJS from Node.js, but its last update is 10 months old. So, if you need to do something with Node modules before your tests are launched, you need to do it outside the CasperJS test file. This implies that if you need to do something before each test, you have to have a separate file for each CasperJS test, and your code could become a mess very quickly.

I don't like this approach, so I tried Protractor, the end-to-end test framework for AngularJS applications. Protractor is a Node program, so it seems more suitable for what I want to do. If you follow the tutorial, you can see that you need a Selenium server running in order to run the tests. There has to be a way to automate this. Yes, the solution is gulp and gulp-protractor. Let's see what we have to do. First, install gulp, protractor and gulp-protractor in your project:

npm install --save-dev gulp gulp-protractor protractor

Now we need to install the WebDriver server. Let's do it with this command:

node_modules/protractor/bin/webdriver-manager update

Then, create a gulp task to run these tests. You have to create a file called gulpfile.js in your project root with these contents.

var gulp = require('gulp'),
    protractor = require('gulp-protractor').protractor;

gulp.task('acceptanceTests', function(){
   return gulp.src('src/features/protractor.js')
       .pipe(protractor({
           configFile: 'protractorConf.js'
       }))
       .on('error', function(e){throw e});
});

gulp.task('default', [], function () {
    gulp.start('acceptanceTests');
});

In this snippet you are telling Protractor which test files you want to run and which configuration file to use. As you can imagine, you need a protractorConf.js file. Create it with this content.

exports.config = {
    seleniumServerJar: 'node_modules/protractor/selenium/selenium-server-standalone-2.42.2.jar'
}

With this configuration you are telling Protractor to start the Selenium server automatically. The server will stop once your tests are done. That's great! And now we can write our tests. Something like this:

describe('simple test', function () {

    //1.- Fill database with data (recreate database or tables if needed)
    var databaseHelper = require('./helpers/databaseHelper');
    databaseHelper.ensureDataBaseIsFilledWithSampleData();

    it('should have a title', function () {
        browser.get('http://juliemr.github.io/protractor-demo/');

        expect(browser.getTitle()).toEqual('Super Calculator');
    });
});
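
The databaseHelper module itself isn't shown in the original post; a trivial stand-in, purely illustrative, could be something like this:

// helpers/databaseHelper.js -- a purely illustrative stand-in
exports.ensureDataBaseIsFilledWithSampleData = function () {
    // A real implementation would recreate the database or tables and
    // insert sample rows using your database driver of choice.
    console.log("Filling database with sample data...");
};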

Notice that we are requiring a Node module and using it. We cannot do this in a PhantomJS or CasperJS test (or, at least, I don't know how). To run the tests, just type this in your terminal.

gulp

And you will see your tests passing. That's great! I think I will use this approach in my projects. I will keep you up to date with any news. See you soon!


Imagine you "manage" a development team. Imagine you reach to have a product that your client is reasonably happy with it. Imagine that the architecture of the application is a mess. Imagine that you don't have any test. Imagine that the performance of the application is clearly improvable. Imagine that you can improve the user experience a lot. Imagine that you UI layer is made in a 14 years old technology. And now, your company gives you a bunch of money to spend on a kind of second version of the application. No new requirements, only do the same application in another way. What things you can do? Maybe you can train your developers in some different areas to be able to make a better application. Maybe you can refactor your application to have a better separation in layers. Maybe you can contract a company to do an architecture review and give you some guidance. Maybe you can add tests to your code (please, do it before refactoring). Maybe you can hire an UX expert to redesign the application and a designer to change the look and feel. ... or maybe you can change all your server code, migrate it to the new hipster technology with a team without any experience nor knowledge in it and without the habit to do any kind of testing. (WTF!!!!) Ok, you can do it if you want. You've talked with your client and he agreed to do this madness. For an inexplicable reason is very important to him to migrate to another technology. But if this is not your situation, please don't do it. You are killing yourself and your team. And you will give a worst product to your clients. There are better ways to invest your money.

Hi,

in this article I will show you how you can provision a MySQL server with Chef in a Vagrant environment. The Vagrant environment will be an Ubuntu virtual machine. I'm doing this tutorial on an OS X machine; if you follow it in a Windows environment, the commands could be slightly different.

First of all you will need to install Vagrant, Chef and VirtualBox. Please refer to their websites to find out how.

The first step is to create a directory for our environment. Let's call it UbuntuMySQL. We will create two directories inside it: environment and share.

mkdir -p UbuntuMySQL/{environment/,share/}

Initialize a git repository inside your new folder.

cd UbuntuMySQL
git init

Go to your environment folder and initialize Vagrant:

cd environment
vagrant init

Now you have a file called Vagrantfile with sample content. Take a look at it to get an idea of how many things you can configure. Then erase all the content and copy these lines.

[caption id="attachment_170" align="alignnone" width="1024"]First Vagrantfile First Vagrantfile[/caption]

(The url of config.vm.box_url is http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box)
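
The screenshot isn't available here, but from the description below a minimal Vagrantfile would look roughly like this (a sketch; the private-network IP is an assumption):

Vagrant.configure("2") do |config|
  config.vm.box = "opscode_ubuntu-14.04_chef-provisionerless"
  config.vm.box_url = "http://opscode-vm-bento.s3.amazonaws.com/vagrant/virtualbox/opscode_ubuntu-14.04_chef-provisionerless.box"

  # Private network so we can reach services in the VM (like MySQL) from the host.
  # The concrete IP is an assumption.
  config.vm.network "private_network", ip: "192.168.33.10"

  # Sync our ../share folder with the /share folder in the virtual machine.
  config.vm.synced_folder "../share", "/share"
end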

In these lines we are telling Vagrant to download that virtual machine, create a private network, and sync our ../share folder with the /share folder in the virtual machine.

Now we are ready to start our virtual machine. Just write vagrant up in your shell. If it's the first time you do this, have a book handy, because Vagrant will download the virtual machine and, depending on your broadband connection, that can be time-consuming.

The result of the operation is something similar to this.

[caption id="attachment_171" align="alignnone" width="1024"]virtual machine up virtual machine up[/caption]

Now you can write vagrant ssh to access your virtual machine. As configured in the Vagrantfile, the user used to connect will be "vagrant".

[caption id="attachment_172" align="alignnone" width="1024"]vagrant ssh vagrant ssh[/caption]

Ok, we have a brand new Ubuntu virtual machine, but we want a MySQL server installed on it. Do we have to do it manually? The answer is NO: we can use Chef to do the job for us.

Chef uses cookbooks to know how to install applications (or whatever else you want to do with Chef). So it seems logical that we need a MySQL cookbook to be able to install MySQL. Do we have to write it? Maybe, but there are loads of cookbooks already written. You can find them at https://supermarket.getchef.com/cookbooks. Let's download the MySQL cookbook to our repository. To do that, write this command in your shell from the repository root folder:

git submodule add https://github.com/opscode-cookbooks/mysql.git environment/cookbooks/mysql

By doing this, we clone the repository into the environment/cookbooks/mysql folder.

If you take a look at the recipes folder inside that folder, you will see more than one recipe:

[caption id="attachment_173" align="alignnone" width="1024"]MySQL recipes MySQL recipes[/caption]

The one we want to use is server.rb.

Let's edit our Vagrantfile to tell Vagrant to install MySQL using Chef.

[Screenshot in the original post: the Vagrantfile with the Chef provisioning configuration]
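
The screenshot is missing, but the provisioning block added to the Vagrantfile would be roughly along these lines (a sketch, assuming the chef_solo provisioner and default attributes):

config.vm.provision :chef_solo do |chef|
  # The cookbooks live next to this Vagrantfile, in environment/cookbooks.
  chef.cookbooks_path = "cookbooks"
  chef.add_recipe "mysql::server"
  # Depending on the cookbook version, you may also need to set attributes
  # (such as the MySQL root password) through chef.json.
end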

Stop the virtual machine using "vagrant halt" and run "vagrant up" one more time.

Oops! We have an error!

[caption id="attachment_175" align="alignnone" width="1024"]Error no Chef Error no Chef[/caption]

It seems that Chef isn't installed in the virtual machine. To fix that, we can use the Vagrant Omnibus plugin. Install it by typing this in your shell:

vagrant plugin install vagrant-omnibus

Now we can change our Vagrantfile to install Chef in the virtual machine.

[caption id="attachment_176" align="alignnone" width="780"]Install chef Install chef[/caption]

Let's try to provision our virtual machine. Write this in your shell:

vagrant provision

Oops! Another error!

[caption id="attachment_177" align="alignnone" width="1024"]No yum-mysql-cummunity No yum-mysql-cummunity[/caption]

It seems that we need another cookbook, in this case yum-mysql-community. Let's download it:

git submodule add https://github.com/opscode-cookbooks/yum-mysql-community.git environment/cookbooks/yum-mysql-community

Try to provision one more time and... oh no, another error!

[caption id="attachment_178" align="alignnone" width="1024"]No yum No yum[/caption]

We need to download another cookbook. Let's do it:

git submodule add https://github.com/opscode-cookbooks/yum.git environment/cookbooks/yum

And try to provision one more time. Oh, no errors! Looks promising!

To ensure MySQL is installed, enter the virtual machine

vagrant ssh

and try to access the MySQL server:

[caption id="attachment_179" align="alignnone" width="1024"]MySQL installed MySQL installed[/caption]

Great! And now let's try to connect to that server from our host machine. Open MySQL Workbench and connect to the server:

[caption id="attachment_180" align="alignnone" width="1024"]Connecting from host Connecting from host[/caption]

[caption id="attachment_181" align="alignnone" width="728"]Connected Connected[/caption]

That's great! We've installed a virtual machine (with no GUI) on our machine, installed a MySQL server on it, and connected to it. The greatest thing is that these steps are repeatable. We can push these files to our Git server, and a workmate can pull them, run vagrant up, and have exactly the same environment we have.

Thanks for reading, see you soon!