Don’t use Regions in .NET

One thing that annoys me when reading other people’s C# or VB.NET code is the use of regions. The main reason anyone would wrap a piece of code in a region is that there is something inside that they don’t want you to see. Visual Studio’s default behavior is, as you know, to collapse all regions when you open a file. Here are some of the uses of regions I have seen:

Grouping of constructors

I can see the point of wrapping the constructors in a region: usually nothing interesting should happen in a constructor. But I have been surprised many times by the code people put in their constructors, so you cannot assume that constructor code is safe to ignore.

If you have so many constructors that you need to hide them, your class is too complicated.

Grouping of private methods

The same argument applies to private methods. If you have so many private methods that you want to hide them, your class is too big; there is another class in there just waiting to be released!

Grouping of field and property declarations

Here are some simple rules to avoid many lines of property declarations:

  1. Avoid properties if you can. They violate encapsulation.
  2. If you still need properties, at least use auto-implemented properties, so the backing fields stay implicit (see the sketch below).
  3. If you have many properties, your class is too big.
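
For reference, here is what that looks like: the classic pattern with an explicit backing field collapses into a single line with an auto-implemented property (C# 3.0 and later).

    // Classic property with an explicit backing field:
    private string name;
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    // The equivalent auto-implemented property; the compiler
    // generates the backing field for you:
    public string Name { get; set; }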

Inside methods

From time to time I see people dividing up methods with regions, usually with some hint of what the wrapped code is supposed to do, e.g. #region Do processing. This is the worst use of regions, in my opinion. A well-written method should rarely be longer than about ten lines, so there is not much room for regions, is there? Long methods are usually quite easy to refactor, especially with a refactoring tool; even Visual Studio can do Extract Method out of the box!
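
As a sketch with made-up names: instead of folding a block away, give it a name.

    // Before: a region hides part of a long method
    public void Import(Order order)
    {
        #region Do processing
        // ...twenty lines of validation and mapping...
        #endregion
    }

    // After Extract Method: the block is a self-describing call
    public void Import(Order order)
    {
        ValidateAndMap(order);
    }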

Conclusion

If you feel an urge to write #region in your code, you should refactor instead!

First contact with Refinery CMS

I am about to finish my first CMS project built on Refinery, an open source CMS framework based on Ruby on Rails. It has been a pleasure to work with, and in this post I will explain why.

I have some previous experience with the commercial CMS product EPiServer. It is a very polished product, packed with features. The problem is that it is developed to appeal to editors and IT management, not to developers: it looks really nice in demos, but the developer experience is far from perfect. I suspect that other commercial CMS systems share this problem. Open source products, on the other hand, are developed by developers for developers. This is certainly true for Refinery.

One thing I really like about Refinery is that it is designed “the Rails way”, which means that there is not much to learn if you’re already familiar with Rails.

Setup

Starting a new project is really simple: just follow the steps in the guide and you are ready to go. The first time you start your new application, you will be prompted to create the first user account.

Basic customization

Customizing the frontend is straightforward. There are Rake tasks for overriding defaults; the override task simply copies a file from the gem to your application directory. To override the default page view, for example, just issue the command:

rake refinery:override view=pages/show

You can override controllers, models and stylesheets in the same way. The override mechanism is not limited to the frontend; the admin interface can also be customized.
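
If I remember the task syntax correctly, overriding a controller or a stylesheet looks like this (the exact arguments vary between Refinery versions, so check rake -T refinery for what your version supports):

rake refinery:override controller=pages
rake refinery:override stylesheet=home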

Extending Refinery

Extensions in Refinery are based on Rails engines. A generator is provided that works much like the Rails scaffold generator, which makes it really easy to add your own functionality. This approach also makes it easy to reuse the extensions you build in other projects.
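
The invocation looks roughly like the sketch below, where the engine name and attributes are placeholders; check the Refinery guides for the exact syntax of your version:

rails generate refinery:engine product title:string description:text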

Deployment

Since your Refinery application is just a normal Rails app, you have the same deployment options as for any Rack-based app. That includes Heroku, which lets you get your app up and running in minutes. For a demo or test site you can probably use Heroku free of charge.
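
Assuming you have a Heroku account and the Heroku command line tools installed, the classic workflow is roughly the two commands below, followed by running your database migrations on Heroku:

heroku create
git push heroku master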

Conclusion

Refinery lacks some advanced features found in commercial CMS products; most notably it has no versioning support, and the globalization support is rudimentary. Despite this, I can really recommend it if you value these qualities:

  • Based on Ruby on Rails
  • Developer friendly
  • Easy to deploy
  • Easy to customize and extend

If you are new to Rails, don’t worry. Very little knowledge of Ruby and Rails is necessary, at least to build a basic web site.

Small Projects are no Excuse for Sloppy Process

In the past few years I have been involved in a few small development projects. By small I mean a project with two or maybe three developers and a timeframe of four to eight weeks. Some of the projects have been very successful and others have been more troubled.

In retrospect I found that in the more successful projects we applied a stricter agile process, while the most troubled projects were those run in a sloppier fashion.

In a small project with a tight budget it is essential that you:

  • Show continuous progress
  • Can respond to change quickly
  • Don’t waste time on things that don’t add value
  • Reduce defects

To be successful in small projects, in my experience, you should at least do the following:

  • Have a product backlog
  • Keep each product backlog item small enough to complete in one day
  • Use a task board
  • Demonstrate your software once a week
  • Do test-driven development
  • Practice Continuous Integration
  • Automate build and deployment
  • Hold retrospectives regularly
  • Hold daily stand-up meetings together with the customer
  • If you need estimates, use T-shirt sizes (S, M, L)
  • Pair-program as much as you can

A consequence of the last point is that you should never work alone, no matter how small the project is.

What can be left out?

In my experience you can, in most cases, safely skip the following practices:

  • Breaking features down into tasks
  • Detailed estimation
  • Burn-down charts
  • Velocity tracking

Migrating from Visual SourceSafe to Mercurial

If your organization uses Microsoft Visual SourceSafe as its version control tool, there are several good reasons to stop doing that, as plenty of blog posts have pointed out.

Which version control system should you use instead? There are a lot of tools to choose from, both free and commercial. In my opinion, the best choice for most organizations is a distributed version control system (DVCS). Popular tools are Git, Mercurial and Bazaar. They are all excellent; which one to choose is very much a matter of taste. In this post I will describe how I migrated a client from SourceSafe to Mercurial.

Migrating the repository

I haven’t found any reliable tool for migrating directly from SourceSafe to Mercurial, but there are many tools for migrating from SourceSafe to Subversion, and Mercurial can in turn import from Subversion.

I tried a few tools for the SourceSafe-to-Subversion conversion and ended up using Vss2Svn, a tool with a simple command line interface. Vss2Svn creates a Subversion dump file, which can then be loaded into a Subversion repository. The following commands migrate the VSS repository into a new Subversion repository:

vss2svn --vssdir <path to your VSS database>
svnadmin create C:\svn-repo
svnadmin load C:\svn-repo < vss2svn-dumpfile.dat

To migrate your new Subversion repo to Mercurial you can use Mercurial’s convert extension. If you have installed TortoiseHg, you already have it; just enable it from the Global Settings -> Extensions page.
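
If you run plain Mercurial without TortoiseHg, you can enable the extension by hand by adding these two lines to your Mercurial.ini (or ~/.hgrc):

[extensions]
convert =

With the extension enabled, start a local Subversion server with the command: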

svnserve -r C:\svn-repo -d

If you would like each of your existing VSS projects to end up in a separate Mercurial repository, you will have to convert each one with a separate command, like so:

hg convert svn://localhost/YourProject YourProject

Your project now has a nice, warm new home!

One thing that was lost in translation was the SourceSafe labels. I’m sure there is a way to keep them, but I didn’t have time to investigate it.

NHibernate Session Handling Revisited

In an old blog post I described how you can implement the Open Session in View pattern using contextual sessions. Since then I have discovered an even easier way to handle NHibernate sessions in an ASP.NET web application. The approach I have used recently is the one proposed by Ayende in this blog post: store the reference to the current session in the current HttpContext, and hook the session lifecycle management into the BeginRequest and EndRequest events.
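
To give you the flavor of it, here is a minimal sketch of that idea in Global.asax (my own condensation, not Ayende’s exact code; error handling and transaction policy are omitted):

    using System;
    using System.Web;
    using NHibernate;

    public class Global : HttpApplication
    {
        private const string SessionKey = "nhibernate.current_session";

        // Built once at application startup, e.g. in Application_Start.
        public static ISessionFactory SessionFactory;

        public static ISession CurrentSession
        {
            get { return (ISession)HttpContext.Current.Items[SessionKey]; }
        }

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // One session per request, parked in HttpContext.Items.
            HttpContext.Current.Items[SessionKey] = SessionFactory.OpenSession();
        }

        protected void Application_EndRequest(object sender, EventArgs e)
        {
            ISession session = HttpContext.Current.Items[SessionKey] as ISession;
            if (session != null)
            {
                session.Dispose();
                HttpContext.Current.Items.Remove(SessionKey);
            }
        }
    }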

Another approach, which I haven’t used yet but certainly will try in some upcoming project, is to let your IoC container manage the sessions. Here is an example of how to do that with StructureMap.

Code Metrics Statistics with TeamCity

Code metrics can be a very useful tool for monitoring some aspects of code quality. To get the most out of them you need to calculate the metrics regularly, so you can spot trends. Code quality tends to go down during intense phases of a development project, and also when a product is in low-intensity development, a.k.a. the maintenance phase. The obvious thing for a modern developer to do is to integrate the calculation of code metrics into the Continuous Integration process.

In this post I will demonstrate how you can calculate code metrics and graph their evolution over time as part of your Continuous Integration process. There are a few tools available for calculating code metrics in .NET; the most capable is without doubt NDepend. Here I will use another tool, SourceMonitor, which is free of charge and very lightweight. We use TeamCity at work, so that is the CI server I will use here too, but you could probably implement this idea in whatever CI server you are using.

One of the goals I had when I started experimenting with code metrics statistics was that it should be easy to add statistics to any project without changing anything in the project itself. All changes should live in the TeamCity configuration.

Step 1: Create Metrics project in VCS

The first step is to create a project in your version control system. This project should contain the following artifacts:

  • The SourceMonitor executable
  • A SourceMonitor command file, SourceMonitorCommands.xml
  • An MSBuild script file, SourceMonitor.proj
  • The MSBuild Community Tasks library

I put all the files in a directory called SourceMonitor.

Step 2: Create a SourceMonitor command file

I will not go into the details of working with SourceMonitor. If you are interested, you can read the documentation that is included in the download. Here is the command file that I used:

<?xml version="1.0" encoding="utf-8"?>
<sourcemonitor_commands>
  <write_log>true</write_log>
  <command>
    <project_file>MyProject.smp</project_file>
    <checkpoint_name>Baseline</checkpoint_name>
    <project_language>C#</project_language>
    <source_directory>..</source_directory>
    <source_subdirectory_list>
      <exclude_subdirectories>true</exclude_subdirectories>
      <source_subtree>bin\</source_subtree>
      <source_subdirectory>obj\</source_subdirectory>
    </source_subdirectory_list>
    <parse_utf8_files>True</parse_utf8_files>
    <ignore_headers_footers>True</ignore_headers_footers>
    <export>
      <export_file>SourceMonitor-details.xml</export_file>
      <export_type>2</export_type>
    </export>
  </command>
  <command>
    <project_file>MyProject.smp</project_file>
    <checkpoint_name>Baseline</checkpoint_name>
    <export>
      <export_file>SourceMonitor-summary.xml</export_file>
      <export_type>1</export_type>
    </export>
  </command>
</sourcemonitor_commands>

This command file creates two XML files: SourceMonitor-details.xml and SourceMonitor-summary.xml. It is from the latter that we will extract the values to publish to TeamCity.

Step 3: Create a build script

Here I have used MSBuild, but you can of course use NAnt, Rake or whatever you prefer. The build script does the following:

  • Runs SourceMonitor on your source files
  • Extracts the interesting values from the resulting XML file
  • Publishes these values to TeamCity

The MSBuild Community Tasks task XmlRead is used to extract the values from the XML file, and the TeamCity task TeamCityReportStatsValue is used to publish them. The community tasks have to be imported explicitly, but the TeamCity tasks are imported automatically when the script is run by TeamCity. Here is my MSBuild script:

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="3.5" DefaultTargets="Analyze"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MSBuildCommunityTasksPath>.</MSBuildCommunityTasksPath>
  </PropertyGroup>
  <Import Project="MSBuild.Community.Tasks.Targets"/>

  <Target Name="Analyze">
    <Exec Command="SourceMonitor.exe /C SourceMonitorCommands.xml"/>
    <XmlRead XPath="//*/metric[@id='M0']" XmlFileName="SourceMonitor-summary.xml">
      <Output TaskParameter="Value" PropertyName="NumberOfLines" />
    </XmlRead>
    <TeamCityReportStatsValue Key="NumberOfLines" Value="$(NumberOfLines)" />

    <XmlRead XPath="//*/metric[@id='M5']" XmlFileName="SourceMonitor-summary.xml">
      <Output TaskParameter="Value" PropertyName="MethodsPerClass" />
    </XmlRead>
    <TeamCityReportStatsValue Key="MethodsPerClass" Value="$(MethodsPerClass)" />

    <XmlRead XPath="//*/metric[@id='M7']" XmlFileName="SourceMonitor-summary.xml">
      <Output TaskParameter="Value" PropertyName="StatementsPerMethod" />
    </XmlRead>
    <TeamCityReportStatsValue Key="StatementsPerMethod" Value="$(StatementsPerMethod)" />

    <XmlRead XPath="//*/metric[@id='M10']" XmlFileName="SourceMonitor-summary.xml">
      <Output TaskParameter="Value" PropertyName="MaxComplexity" />
    </XmlRead>
    <TeamCityReportStatsValue Key="MaxComplexity" Value="$(MaxComplexity)" />

    <XmlRead XPath="//*/metric[@id='M14']" XmlFileName="SourceMonitor-summary.xml">
      <Output TaskParameter="Value" PropertyName="AvgComplexity" />
    </XmlRead>
    <TeamCityReportStatsValue Key="AvgComplexity" Value="$(AvgComplexity)" />
  </Target>
</Project>

The build script above will extract the following metrics and publish them:

  • Number of lines of code
  • Average number of methods per class
  • Average number of statements per method
  • Maximum cyclomatic complexity
  • Average cyclomatic complexity
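
As an aside, TeamCityReportStatsValue simply writes a TeamCity service message to standard output, so any build tool that can echo a line can publish a statistic. The message looks like this (the key and value here are just examples):

##teamcity[buildStatisticValue key='NumberOfLines' value='12345']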

Step 4: Create a build configuration in Team City

The next step is to create a new build configuration for the project that you want to analyze and display statistics for.

Attach your VCS project root and the SourceMonitor root as well. Your version control settings will look like the example below.

[Screenshot: Version control settings in TeamCity]

Configure TeamCity to use the MSBuild runner for your build and specify the path to the build script that runs SourceMonitor. Also specify the target, in my case “Analyze”.

[Screenshot: Build runner configuration in TeamCity]

Set up build triggering the way you like it, either as a dependent build or on a schedule, for example every night.

Step 5: Configure Team City to display statistics

TeamCity has built-in capabilities for displaying statistics in graphs. You can easily add your own graphs to a project by adding configuration to the file plugin-settings.xml, which is located in the folder (TeamCity path)\.BuildServer\config\(project name). Note that you have to change the buildTypeId value to the id of your Metrics build configuration; you can find the buildTypeId in the URL as a query parameter. Below is a sample plugin-settings file:

<?xml version="1.0" encoding="UTF-8"?>
<settings>
  <custom-graphs>
    <graph title="Number of Lines" defaultFilters="" hideFilters="showFailed">
      <valueType key="NumberOfLines" title="Number Of Lines" buildTypeId="bt61" />
    </graph>
    <graph title="Methods per Class" defaultFilters="" hideFilters="showFailed">
      <valueType key="MethodsPerClass" title="Methods per Class" buildTypeId="bt61" />
    </graph>
    <graph title="Statements per Method" defaultFilters="" hideFilters="showFailed">
      <valueType key="StatementsPerMethod" title="Statements per Method" buildTypeId="bt61" />
    </graph>
    <graph title="Complexity" defaultFilters="" hideFilters="showFailed">
      <valueType key="MaxComplexity" title="Max Complexity" buildTypeId="bt61" />
      <valueType key="AvgComplexity" title="Average Complexity" buildTypeId="bt61" />
    </graph>
  </custom-graphs>
</settings>

Whenever the Metrics build is run, the graphs will be updated with the latest values. You will find them on the Statistics tab on the project overview page. It will look something like this after the first successful run:

[Screenshot: Sample metrics graphs]

Final remarks

SourceMonitor only supports a handful of metrics, so if you want more advanced metrics, such as code cohesion, go for NDepend.

In this post I haven’t explained how to display detailed reports from SourceMonitor in TeamCity. Maybe that will come in a later post.

Good luck with your metrics!

Test-Driven Developers have more fun

I started using TDD a couple of months ago, and what strikes me now is how much more fun it is to develop in this radically different way. There are several reasons for this.

A common frustration when developing the traditional way is that progress is not immediately visible. This is particularly true in larger projects: you write a lot of code before you have something that works and can be demonstrated. With TDD, every new passing test is a clear sign of a small step forward.

Another common cause of frustration and dissatisfaction for a professional developer is that many times you are not confident that your code works as expected. If you apply TDD, almost 100% of your code will be covered by tests, and that will make you confident that the code is doing the right thing.

In the past I have sometimes suffered from “code writer’s block”, trying so hard to get the design right from the beginning that I couldn’t produce any code at all. With TDD you don’t have to get it right from the start: the tests allow you to try out different designs and refactor the code without breaking it.

Every now and then I try to write some code for a hobby project, but I rarely get more than an hour or two free for coding at home. A TDD round trip of red-green-refactor typically takes less than two hours to complete, which means that even with very limited time I can still make some progress by adding at least one passing test every time I get a chance to write code at home.

Conclusion

By applying TDD you will get greater job satisfaction as a developer. You will get feedback several times a day that you are making progress, you will be more confident that the code you deliver works as expected, and you will find it easier to experiment with the design, which gives you more freedom.

Feedback, confidence and freedom are factors that will make you enjoy your profession even more. Don’t miss the opportunity to have more fun at work and with your hobby projects – jump on the TDD bandwagon now!

NHibernate Session handling in ASP.NET – the easy way

EDIT: Take a look at a new blog post of mine on this subject.

When I first started using the popular ORM framework NHibernate, I was a bit confused about the best way to manage NHibernate sessions in a web application. The official documentation did not provide much guidance, but I did find out that the Open Session In View pattern was the way to go. The obvious question was: how do I implement this pattern in an ASP.NET application?

The implementation proposed in the otherwise great article NHibernate Best Practices seemed overly complicated; there had to be a simpler way. After reading the documentation again, I found that NHibernate has a feature called contextual sessions.

The basic idea of contextual sessions is that to get a reference to an ISession instance, you simply call the GetCurrentSession() method defined on the ISessionFactory interface. The call to GetCurrentSession() is delegated to a class specified by the configuration parameter hibernate.current_session_context_class; this class implements the interface NHibernate.Context.ICurrentSessionContext. Out of the box, NHibernate provides an implementation of this interface that tracks current sessions by HttpContext: NHibernate.Context.ManagedWebSessionContext. That is exactly what we want in order to implement the Open Session In View pattern.

To make use of ManagedWebSessionContext, we need to bind an open session to the current HttpContext at the beginning of each request and unbind the session from the context at the end of each request. The easiest way to do this is to use the Application_BeginRequest and Application_EndRequest event handlers in Global.asax. You could of course implement your own HttpModule and do the binding and unbinding there, if for some reason you don’t want to use Global.asax.

Now it is time to show you some code! First of all you have to implement a session manager. I have implemented the session manager as a lazy, lock-free, thread-safe singleton:

using NHibernate;
using NHibernate.Cfg;

namespace DataAccess
{
    // A lazy, lock-free, thread-safe singleton. The nested class trick
    // makes the CLR defer construction until the first access to
    // Instance, without any explicit locking.
    public class SessionManager
    {
        private readonly ISessionFactory sessionFactory;

        public static ISessionFactory SessionFactory
        {
            get { return Instance.sessionFactory; }
        }

        public static SessionManager Instance
        {
            get { return NestedSessionManager.sessionManager; }
        }

        public static ISession OpenSession()
        {
            return Instance.sessionFactory.OpenSession();
        }

        public static ISession CurrentSession
        {
            get { return Instance.sessionFactory.GetCurrentSession(); }
        }

        private SessionManager()
        {
            // Reads the NHibernate configuration and builds the
            // (expensive) session factory exactly once.
            Configuration configuration = new Configuration().Configure();
            sessionFactory = configuration.BuildSessionFactory();
        }

        class NestedSessionManager
        {
            internal static readonly SessionManager sessionManager =
                new SessionManager();
        }
    }
}

The binding and unbinding of the NHibernate session to the current HttpContext is done in Global.asax:

        protected void Application_BeginRequest(
            object sender, EventArgs e)
        {
            ManagedWebSessionContext.Bind(
                HttpContext.Current,
                SessionManager.SessionFactory.OpenSession());
        }

        protected void Application_EndRequest(
            object sender, EventArgs e)
        {
            ISession session = ManagedWebSessionContext.Unbind(
                HttpContext.Current, SessionManager.SessionFactory);
            if (session != null)
            {
                if (session.Transaction != null &&
                    session.Transaction.IsActive)
                {
                    session.Transaction.Rollback();
                }
                else
                    session.Flush();
                session.Close();
            }
        }

The rollback of an open transaction in Application_EndRequest ensures that the transaction is not committed in case of an uncaught exception.

Finally, we have to configure NHibernate to use ManagedWebSessionContext by setting the current_session_context_class parameter. There is a short name that can be used, “managed_web”. In your web.config file you add the following property:

<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
    <session-factory>
        ...
        <property name="current_session_context_class">managed_web</property>
        ...
    </session-factory>
</hibernate-configuration>

Now the property SessionManager.CurrentSession holds a reference to an open NHibernate session on every HTTP request. The usage is illustrated in the example code snippet below:

    ISession session = SessionManager.CurrentSession;
    session.BeginTransaction();
    Issue issue = new Issue();
    issue.Heading = txtHeading.Text;
    int id = (int)session.Save(issue);
    session.Transaction.Commit();
    lblCreated.Text = "Created issue with id: " + id;

As you can see, this code snippet is taken from a code-behind file. In a real application you would of course put the NHibernate-related code into a DAO or repository class, as sketched below.
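
As a minimal sketch (the repository name and method are just illustrations), such a class could build on the SessionManager above:

    public class IssueRepository
    {
        public int Save(Issue issue)
        {
            ISession session = SessionManager.CurrentSession;
            using (ITransaction tx = session.BeginTransaction())
            {
                // Dispose rolls the transaction back if Commit is never
                // reached, matching the safety net in Application_EndRequest.
                int id = (int)session.Save(issue);
                tx.Commit();
                return id;
            }
        }
    }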

Conclusion

What I have shown in this post is just one way of handling NHibernate sessions in an ASP.NET application. Another option, which I haven’t investigated yet, is to use the promising NHibernate Burrow framework. Maybe I will dig into that in another blog post.


Deploying ASP.NET Web Applications using NAnt

We have started implementing Continuous Integration at our company. We began with CruiseControl.NET but have decided to try TeamCity instead. One thing we would like to do is deploy ASP.NET web applications as part of our Continuous Integration process. We use the Web Application Project template in Visual Studio, and one problem we faced was: how can we copy all the necessary files to the web server? We didn’t want to specify the files explicitly, nor did we want to copy all files with certain file name extensions. Both approaches seemed very error-prone.

The first approach we used with CruiseControl.NET was the MSBuild target _CopyWebApplication, which is defined in Microsoft.WebApplication.targets. I was not very pleased with this solution, although it worked; my main concern was that we didn’t have any control over the process. When we moved to TeamCity it turned out that this approach no longer worked. The _CopyWebApplication target copies all files needed for the web application to a directory called _PublishedWebSites\<web application project name>. For some reason (probably a good one), TeamCity mangles the name of the project file, with the effect that the directory the web application files were copied to had a different name than expected.

So I decided to try another approach. Since I like NAnt much more than MSBuild, I explored whether NAnt could do this job. The main problem was that the valuable information about which content files are part of the application is buried in the project file. To get at that information, I used an XSL transformation that turns parts of the project file (which is an MSBuild script) into a NAnt script.

We want to select all files whose Build Action property is set to Content. These files are listed in the project file as Content elements, as in this example:

  <ItemGroup>
    <Content Include="Default.aspx" />
    ...
  </ItemGroup>

Hence they can be selected with the XPath expression /Project/ItemGroup/Content (namespace-qualified in the stylesheet below, since MSBuild project files use a default namespace).

What I wanted my XSLT script to do was to create a NAnt script with the following content:

  1. A fileset with all the content files included
  2. A target which copies the files in the fileset to some directory defined by a property.

This is what I came up with:

<?xml version="1.0" encoding="utf-8"?>
<xsl:stylesheet
    version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    exclude-result-prefixes="msxsl"
    xmlns:s="http://schemas.microsoft.com/developer/msbuild/2003">
    <xsl:output method="xml" indent="yes"/>
    <xsl:template match="/">
        <project
            xmlns="http://nant.sf.net/release/0.85/nant.xsd"
            name="GeneratedFromMsbuild">
            <fileset id="content.files"
                basedir="${{project.root}}">
                <xsl:for-each select="/s:Project/s:ItemGroup/s:Content">
                    <include name="{@Include}"/>
                </xsl:for-each>
            </fileset>
            <target name="copy.content">
                <copy todir="${{destination.dir}}">
                    <fileset refid="content.files"/>
                </copy>
            </target>
        </project>
    </xsl:template>
</xsl:stylesheet>

The following excerpt from our main NAnt script shows how the XSL transformation is invoked and how the resulting script is used to copy the content files:

<target name="deploy.web" depends="generatenant">
    <nant buildfile="${generated.file}" target="copy.content">
        <properties>
            <property name="project.root"
                value="${web.projectrootdir}"/>
            <property name="destination.dir"
                value="${web.rootdir}"/>
        </properties>
    </nant>
    <copy todir="${web.rootdir}\bin">
        <fileset basedir="${web.projectrootdir}\bin">
            <include name="*.*"/>
        </fileset>
    </copy>
</target>

<target name="generatenant">
    <delete file="${generated.file}" failonerror="false"/>
    <style style="GenerateNantFromMSBuild.xslt"
        in="${web.projectrootdir}/${web.projectfilename}"
        out="${generated.file}"/>
</target>

Closing remarks

Of course there are several other ways of accomplishing this, such as using a Web Deployment Project, but I personally like having full control over what is going on, and NAnt gives me that.

Here is my blog

After having thought about it for a long time, I finally decided to start my own blog.

I will blog about things that are related to my work as a .NET developer, such as:

  • Solutions to technical problems that I have encountered
  • Technical books, blog posts and articles I have read
  • Reports from conferences and training courses
  • Development tools and frameworks
  • Adoption of agile principles and practices

I hope you will enjoy this blog and find it useful.
