Last week I worked on a tiny library to test whether you have configured all the dependencies of your code in the IoC container. I borrowed the idea from this post by Juan María Hernández. The idea is simple, but it can save you some debugging time.

Installing the library is quite straightforward: just install the NuGet package you need, depending on the container you are using:

install-package IoCTesting.StructureMap
install-package IoCTesting.Unity

To be able to use the library you need to have a method that returns the container. If you use StructureMap, you must have something like this:

public IContainer CreateContainer()
{
    return new Container(x =>
    {
        x.For<IFoo>().Use<Foo>();
        x.For<IBar>().Use<Bar>();
        x.For<IBaz>().Use<Baz>();
    });
}

The method can be static. The library will scan the assembly looking for this method and will call it to initialize the container. After that, it will scan your assembly looking for classes whose constructors take abstract classes or interfaces as parameters, and it will query the container to see if there's a registration for each of those types.
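For instance, given a hypothetical class like the following (QuxConsumer and IQux are made-up names, just for illustration), the library would query the container for a registration of IQux and report an error if it's missing:

public class QuxConsumer
{
    private readonly IQux _qux;

    // The constructor takes an interface as a parameter, so the library
    // will check that IQux has a registration in the container.
    public QuxConsumer(IQux qux)
    {
        _qux = qux;
    }
}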

So, for example, you can have a call like this:

var structureMapTesting = new IoCTestingStructureMap();
var errors = structureMapTesting.CheckDependencies(RegisteringAssemblyPath, RegisteringMethodName, TestingAssemblyPath, NamespaceToScan);

As you can see, the method takes four parameters:

  • The full path of the assembly where the registration is performed.
  • The qualified name of the class that performs the registration.
  • The full path of the assembly you want to scan.
  • The root namespace you want to scan. Types whose namespace doesn't start with it won't be scanned.

It returns an IEnumerable<string> with all the classes or interfaces that are not registered. The library only supports constructor injection, not property injection.
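A natural way to use it is to wrap that call in a unit test, so a missing registration fails the build. A minimal sketch, where the paths and names are placeholders you would replace with your own:

[TestMethod]
public void AllDependenciesAreRegistered()
{
    var structureMapTesting = new IoCTestingStructureMap();

    // Placeholder values: point them at your registration assembly,
    // registration class, the assembly to scan and the root namespace.
    var errors = structureMapTesting.CheckDependencies(
        @"C:\MyApp\bin\MyApp.Bootstrap.dll",
        "MyApp.Bootstrap.ContainerFactory",
        @"C:\MyApp\bin\MyApp.dll",
        "MyApp");

    // errors is an IEnumerable<string>; Any() needs System.Linq.
    Assert.IsFalse(errors.Any(), string.Join(Environment.NewLine, errors));
}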

And that's all. I hope you find it useful. If you have any issues or comments, please raise them on the project page.

No, I'm not going to talk about CQRS, but the ideas behind both concepts are similar.

The Command-Query Separation principle was first introduced by Bertrand Meyer in his book Object-Oriented Software Construction. Mr Meyer states it as follows:

Functions should not produce abstract side effects.

Meyer differentiates between two kinds of functions when we design a class:

  • Commands: those functions which produce abstract side effects (change the observable state of the object).
  • Queries: those functions that don't produce any side effect and return some value.

And what is an abstract side effect? Meyer defines it as follows:

An abstract side effect is a concrete side effect that can change the value of a non-secret query.

Or, in other words, one that changes the observable state of the object. A concrete side effect is a change in the internal state of the object; if the object later restores its original state, or the change is only visible through secret (non-exported) queries, the side effect is not abstract.

A query should be idempotent; that is, it should return the same result whether we execute it once or one hundred times. Therefore, a query should not change the state of the object.

Imagine, for example, the Console.ReadLine() method of the .NET Framework. This is clearly an example of a query that changes state: we cannot call Console.ReadLine() one hundred times and expect the same behavior. According to this principle we should split this call into two: Fetch (a command) and GetLastLine (a query).
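A sketch of what that split could look like, using a hypothetical wrapper class around the console (Fetch and GetLastLine are the names suggested above; the rest is made up for illustration):

public class ConsoleReader
{
    private string _lastLine;

    // Command: reads a line from the console and stores it.
    // It changes state and returns nothing.
    public void Fetch()
    {
        _lastLine = Console.ReadLine();
    }

    // Query: returns the last line read. Calling it once or one
    // hundred times gives the same result and changes nothing.
    public string GetLastLine()
    {
        return _lastLine;
    }
}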

Let's see another simple example:

public class Account
{
    private double _balance;

    public Account(double initialBalance)
    {
        _balance = initialBalance;
    }

    public void Deposit(double amount)
    {
        _balance += amount;
    }

    public double Withdraw(double amount)
    {
        _balance -= amount;
        return _balance;
    }

    public double GetBalance()
    {
        return _balance;
    }
}

In this class we have two queries (GetBalance and Withdraw) and one command (Deposit). We can identify queries because they have a return value (although commands can return new objects as well). If we take a look at Withdraw, we can see that it changes the internal value of the balance and returns it. So, if we execute Withdraw several times, we will get different results, and that's something we don't expect from a query. In that case we should convert this query into a command, turning it into a procedure and removing the return statement.
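Applied to the class above, Withdraw would become something like this (my reworking of the example, not code from the original class):

public void Withdraw(double amount)
{
    // Command: change the state, return nothing.
    _balance -= amount;
}

A caller that needs the new balance just calls GetBalance afterwards, keeping the state change and the read as two separate operations.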

Adhering to this principle will increase the maintainability and extensibility of your codebase. Your code will be easier to explain and understand, because you won't have to dig into the class looking for side effects.

If you are developing a web site using ASP.NET MVC, it's possible that you are using Razor in your views. And it's possible that you have some presentation logic in those views. This code is hard to test. We love testing our code, so you'll probably end up writing some Selenium or CodedUI tests. As you may know, UI tests are more fragile, harder and slower to execute than unit tests, and therefore we run them less often. UI tests are good in some cases, but if we could replace some of them with a more reliable suite of tests, that would be great.

And here is where the RazorGenerator extension and NuGet packages (created by David Ebbo) come to the rescue. What does this extension do? It pre-compiles your views locally (the same code that IIS generates when you load a page is generated locally), so you mainly win three things:

  • The initial loading of your website will be faster because the view is already compiled.
  • You can test your views.
  • You don't need to deploy your cshtml files.

The first step is to install the extension, so go to the Extensions and Updates section of Visual Studio and search for RazorGenerator.

[Image: RazorGenerator extension]

With the extension installed you are already able to pre-compile your views, but if you want to use them in your site, you should install the RazorGenerator.Mvc NuGet package in your web project.
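If I remember correctly, the package adds a startup class under App_Start that registers a view engine able to serve the pre-compiled views and wires it up to run at application start. Roughly something like this (a simplified sketch; check the file the package actually generates in your project):

public static class RazorGeneratorMvcStart
{
    public static void Start()
    {
        // Serve the views that were compiled into this assembly.
        var engine = new PrecompiledMvcEngine(typeof(RazorGeneratorMvcStart).Assembly);

        // Give the pre-compiled views priority over the default engines.
        ViewEngines.Engines.Insert(0, engine);
    }
}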

To generate the code version of your view you must set RazorGenerator as the Custom Tool of each view.

[Image: RazorGenerator custom tool]

And that's all. The extension you've just installed will generate the compiled version of the view.

[Image: Pre-compiled view]
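The generated file is just a C# class that derives from WebViewPage, which is what makes it visible to the compiler and to your tests. Roughly, and depending on the generator version, it looks something like this (a simplified sketch, not the exact generated code):

[PageVirtualPath("~/Views/Home/UsingRoutes.cshtml")]
public partial class UsingRoutes : System.Web.Mvc.WebViewPage<dynamic>
{
    public override void Execute()
    {
        // The markup of the .cshtml file is turned into Write(...) calls here.
    }
}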

Now you have a code version of your view, so you can test it. Go to your test project and install the RazorGenerator.Testing package. A typical test could be this one:

[TestMethod]
public void TestUsingRoutes()
{
    // Instantiate the view directly
    var view = new UsingRoutes();

    // Set up the data that needs to be accessed by the view
    MvcApplication.RegisterRoutes(RouteTable.Routes);

    // Render it in an HtmlDocument
    var output = view.RenderAsHtml();

    // Verify that it looks correct
    var element = output.GetElementbyId("link-using-routes");
    Assert.AreEqual("/Home/UsingRoutes", element.Attributes["href"].Value);
}

Where UsingRoutes is the name of the view we want to test. As you can see, the package provides us with an extension method on the view called RenderAsHtml (and another one called RenderAsString). This method returns an HtmlDocument object from the HtmlAgilityPack library that we can easily query to verify that the view is rendered correctly.

It's possible that you will need to set up an HttpContext object (using mocks) to simulate some behavior. The library provides us with a simple builder called HttpContextBuilder that you can use for this purpose. Let's see an example:

[TestMethod]
public void TestMockHttpContext()
{
    // Instantiate the view directly
    var view = new MockHttpContext();

    // Set up the data that needs to be accessed by the view
    var mockHttpRequest = new Mock<HttpRequestBase>(MockBehavior.Loose);
    mockHttpRequest.Setup(m => m.IsAuthenticated).Returns(true);

    // Render it in an HtmlDocument
    var output = view.RenderAsHtml(new HttpContextBuilder().With(mockHttpRequest.Object).Build());

    // Verify that it looks correct
    var element = output.GetElementbyId("user-authenticated");
    Assert.IsNotNull(element);   
}

As you can see, we are setting up a mock of the HttpRequest object to specify that the user is authenticated. We then pass an HttpContext created with the builder as the first parameter of the RenderAsHtml call.

And that's all. Testing is not the main purpose of the RazorGenerator project, but it's a nice side effect.

Last Saturday (January 17th) the Bilbostack conference was held in Bilbao. It was the fourth edition of this conference and it was a complete success. I'm really glad to be part of the organisation of Bilbostack, sharing this responsibility with Ibon Landa, Asier Marqués and Fran Mosteiro.

I like to organize Bilbostack for many reasons:

  • It's "easy" to organize. We don't open a Call for Papers. We meet one day in a bar, have some beers and make a list of the people we would like to have speaking at the conference. Normally they say yes right after receiving our email.
  • We don't have to look for a venue. The Universidad de Deusto kindly lends us two great rooms to hold the conference, without us paying a single euro.
  • We don't have to organize any catering. It's a morning event, and our sponsors buy some bottles of water for the attendees.
  • We don't have to set up any payment platform. It's a free event and our budget is almost 0 euros.

And I like to attend Bilbostack for many reasons:

  • The sessions are always great. We know the speakers and we know they are great at the subjects they present.
  • It's only a morning.
  • The subjects are varied. From development to UX, from SEO to agile, from accessibility to beacons.
  • The networking after the event is really great. Pintxos and beers with friends, what else can you ask for?

This year we had more than 250 attendees and I think they enjoyed the conference. Next year will be even better, I promise.

According to Wikipedia, in some cultures the term professional is used as shorthand to describe a particular social stratum of well-educated workers who enjoy considerable work autonomy and who are commonly engaged in creative and intellectually challenging work. According to Google Analytics you probably come from one of these cultures.

I have to admit that when I started to hear the term Craftsmanship I was distrustful, maybe because some of the people who used it tried to use it as a label to put themselves on a higher level than the rest of us.

But lately I've met more people who describe themselves as craftsmen, and who use the term to describe a set of values they care about in their professional lives.

If you want to know more about Craftsmanship, please read Sandro Mancuso's book. If you want a one-line summary, Craftsmanship could be summed up as "raising the bar of professionalism".

What is difficult for me to understand is that the software development community had to coin a new term to warn about the lack of professionalism in our profession. Xavi Gost gave a talk at Conferencia Agile Spain 2014 this Thursday and he said: "We don't deserve an agile environment". And that's true. We are always complaining about not having an agile environment and, when we have it, we don't rise to it. We keep writing crappy code that is difficult to maintain, difficult to evolve, that doesn't scale well, etc.

Friday afternoon I was having some beers with a friend who works at a big Spanish consultancy. He explained to me how they work, what kind of business decisions they make, how people behave, and how nobody pays attention to the improvements he is trying to make. I was scared. I was scared because when I was young I worked for a company like that. And I was scared because I know a lot of people working in companies like that.

What strikes me is that these kinds of companies earn a lot of money. There are clients who pay them to create crappy applications that are far from the applications they actually want. Pedro Serrahima said in the closing keynote of the conference that clients hire these kinds of companies because they are scared to make decisions. They prefer to have a contingency plan rather than to discover new ways of working. They prefer to work with a "big" company with big financial muscle rather than with a small company that makes great software.

Some days I think that I shouldn't care. With luck, that kind of company will always work with bad clients and always have bad professionals, like in a ghetto.

But I really do care. I care because many of these projects are public projects that I (and you) pay for with our taxes. I care because nowadays, when (at least in Spain) there is a lack of good jobs, there are a lot of good people captive in those companies.

One day I went to a meeting at a public department in Spain. They explained to us how they work, their continuous integration system, and how the companies they hire work. It was terrible, unbelievable.

In my current job we are helping the UK government to create a new service. In the UK, some of the new projects under development are required to have their source code in a public GitHub account. I think this is a great measure. Increasing the transparency of our work immediately increases its quality. If all the companies that work for the government showed their code, I'm sure we would have better applications. Why not have a dashboard with the state of the build server as well?

As professionals we have to take ownership of our careers. As a profession we have to take ownership of our future.