Tuesday, 14 December 2010

And It’s I-o Silver Lining

Will Io finally find a way to win me over, or will the silver lining just be that this is the last day of Io, and I can soon move on to the next language? Let’s find out.

In the opening paragraph of day 3 the author tells us that he found the first few days frustrating, but that after a couple of weeks it had started to click and helped change the way he thinks. Spurred on by these words I fired up the interactive console again, which took me straight back to the non-existent .io_history error message :( I think that fate doesn’t want me to like Io.

The book went on to describe how to create DSLs with Io. Following on from the previous day’s introduction of custom operators this seems like a good fit, but not so much that I was bubbling over with delight by the end of the section.

Following that we had a section on concurrency. It started with coroutines: a way of writing asynchronous code where the developer states exactly where control can be yielded to another thread, which should lead to fewer heisenbugs. This struck me as a nice design. Then came actors, and finally futures, which seem a lot like Tasks in .netland. The provided sample called URL to retrieve a page from the web, but this library doesn’t seem to be included by default, so I was left with an error message stating that Object does not respond to URL. Once again these issues left me feeling like I didn’t want to dive into the self study section, which dealt with expanding upon the DSL code from earlier in the chapter.
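For anyone who hasn’t met coroutines before, the explicit hand-off is easy to demo. This isn’t Io (given my luck running it, probably for the best) but a rough Ruby sketch using Fiber, which only switches control at the exact points you mark:

```ruby
steps = []

worker = Fiber.new do
  steps << :worker_started
  Fiber.yield              # control goes back to the caller, right here
  steps << :worker_resumed
end

worker.resume              # runs the fiber up to the first Fiber.yield
steps << :main_in_between  # the caller gets to do its own work
worker.resume              # the fiber carries on after its yield point

steps  # => [:worker_started, :main_in_between, :worker_resumed]
```

Because the yield points are explicit, there’s no scheduler deciding to switch mid-statement, which is where the “fewer heisenbugs” claim comes from.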

I’ve wanted to like Io. The intro to it had sparked some interest, but the constant niggles in getting anything to happen have sapped my enthusiasm too quickly. As he sums up the language, Bruce mentions how syntactic sugar is a matter of taste: Ruby has lots whereas Io has none, based on the preferences of their creators. Personally I find myself drawn to the Ruby style (even if I dislike the excessive use of the underscore in names). Add this to the problems that I’ve had getting anything to run and it just led me further away from embracing Io. Still, no-one said I need to end up loving them all.

Next up, Prolog.

I-o, I-o, It’s Back To Work I Go

It’s been a while since I last had the chance to do a bit more learning, but I’m back with it, so let’s see how day 2 of Io goes.

The chapter started off with standard control flow in the form of conditionals and loops, stuff that I would probably have expected to see in day 1 really. Nothing too remarkable here, and it quickly moved on to creating custom operators, which was pleasant, but I’m not sure how useful that is on a day to day basis. I can’t remember ever feeling the need to so much as override an operator in .net, let alone create a new one.

Next up was more about message passing. Io is all about messages, to the point that pretty much anything that isn’t a comment or a comma is a message. This allows you to create control structures in code that would typically need to be keywords implemented by the language creators. Messages are only evaluated when they are specifically required, rather than the typical “evaluate the parameters, push the values on the stack and call the method” approach that typifies most languages. That has quite a nice functional feel, and no need to start a call with “() =>”.
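To make the lazy-evaluation idea concrete without touching Io, here’s a hypothetical Ruby stand-in where the branches of a home-grown control structure are passed as lambdas. Like Io’s unevaluated message arguments, a branch only runs if something actually calls it:

```ruby
# A home-grown control structure: only one branch is ever evaluated.
def my_unless(condition, then_branch, else_branch)
  condition ? else_branch.call : then_branch.call
end

side_effects = []
result = my_unless(false,
                   -> { side_effects << :then_ran; "then" },
                   -> { side_effects << :else_ran; "else" })

result        # => "then"
side_effects  # => [:then_ran] - the else branch never executed
```

In Io you don’t even need the lambda wrapping, since every argument arrives as an unevaluated message by default.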

The chapter ends with a section on reflection. The supplied code here worked fine for me when run in a file, but trying to get it to work in the interactive mode was a non-starter sadly. Of course I spent a while trying to enter and run it before going the file route, so yet again I spent more time mucking around trying to get things to work with Io rather than learning about it. By the time I arrived at the self study page I’d pretty much decided that I was going to leave it off.

Still, the book continues to be engaging even if this language keeps throwing up obstacles to using it cleanly. I particularly enjoyed sending messages to princessButtercup, and the ambiguous animal noises.

Friday, 3 December 2010

Io day 1.5

A little time had gone by and I was ready to forgive Io and see what I could pick up as I continued on with the bulk of the chapter. Sadly Io still wasn’t ready to make friends. Trying to fire up the REPL I got an error proudly proclaiming “Exception: while loading history file '/Volumes/Data/User Data/tobz/.io_history', reason: No such file or directory”. I had a look and there was a file of that name there; I tried chmod-777-ing it to no avail, and with that my limited Unix skills were pretty much done. Fortunately simply removing the file seemed to do the trick.

Io day 1 then followed roughly the same pattern as Ruby day 1, but due to the sparse nature of its syntax it felt far less familiar. The tasks to find info online are tricky enough compared to Ruby just because of the difference in user base and the associated sites/posts about each; when you factor in how common the term “io” is, it becomes even worse. Got there in the end though.

So far I’m not feeling the love. All the pain just getting it up and running certainly left a bad taste in my mouth about the whole thing, and the unfamiliar feel of the syntax leaves me a bit disconnected. Still, I’ve only just scratched the surface; maybe once we get in a little deeper I’ll see something special that it does in a particularly nice way and start to warm to it. We shall see.

I should mention that there were some giggles to be had from the film references used to show off a few of the features. Even if the language has yet to grab me, at least the book continues to be good :)

Tuesday, 30 November 2010

I-o, I-o, It’s Off To Work We Go

Language number 2 of the 7 is Io. When Bruce Tate was picking the languages for inclusion in this book he discounted JavaScript due to its ubiquity. Io takes its place as a prototype based language, and in the intro Bruce tells us that learning Io helped his understanding of JS. In my web developmenty day job I tend to spend more time on the back end, so don’t hit JavaScript too much, but from time to time I have the odd job to do in it. I can muddle through, but I’d certainly not claim any major expertise with it and I’m always happy to get back server-side. So despite not having heard of Io prior to discovering this book, the promise of being able to apply some of the principles learnt here to my work has some allure. Additionally, on Io’s home website, http://iolanguage.com/, the overview mentions that it was inspired by Smalltalk, a language that seems to be the starting point of every good idea in the industry right now. Admittedly, plenty of other languages are probably inspired by it as well, but it’s nice to see the reference in black and white :)

At this point I screeched into my first blocker. There doesn’t seem to be a download of the binaries available. It’s an old-school, unix-style, build-it-yourself job. That wouldn’t be sooo bad, but I’ve not installed Xcode on this box yet, so I don’t have the tools installed to do so. So to run what is described as a teeny tiny little interpreter, I’ve now got a few gig of Xcode install running. Waiting…

…And there we go. With Xcode installed I now have the ability to run make. Unfortunately the instructions for installing Io also require cmake. Another download, more installing…

…And finally, I can build the app. The Readme.txt file instructs me to run the following:
   mkdir build && cd build
   cmake ..
   make install
which, of course, fails part way through the final step. A bit of googling and I found that a better command would be:
   sudo make install

…And at long last I can fire it up. Hoorah! The intro to the book states that we are on our own when it comes to installing the languages due to the wide variety of platforms etc. that readers might be using, which is fair enough. However, after all the hassle getting to this stage I decided to just run the hello world app as a quick test and take a break from it. Page 1 of Day 1 has taken me far longer than I feel like dealing with right now.

Saturday, 27 November 2010

Hooray For Me, It’s Ruby Day 3

I’ve finished the first of my 7 languages, and in just under a week. Go me. The final chapter on Ruby focuses on the metaprogramming aspects of the language, which are some of the things that Ruby fans seem to love.

We start with adding methods to existing classes. This is something that .net has had some ability to do since the introduction of extension methods, but Ruby really adds the methods to the class, unlike C# where it just adds different places to look for them, so it doesn’t look like you’ll need to include references to different assemblies and namespaces and whatnot to find the method.
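A quick Ruby illustration (the method here is my own invention, not one from the book): reopen String and every string everywhere responds to the new method, no assembly references or using statements required:

```ruby
# Reopen the core String class and genuinely add a method to it.
class String
  def shout
    upcase + "!"
  end
end

"hello".shout  # => "HELLO!"
```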

Next up is method_missing, which is a very nice technique indeed. Basically Ruby passes the name of the method being called as a parameter so that it can be dealt with in code. The example given was an API for Roman numerals, so you could enter Roman.XIV or Roman.VII, for example, and the numeral would be parsed and a number returned.
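A minimal sketch along those lines (my own rough parsing, so treat it as illustrative rather than the book’s actual code):

```ruby
class Roman
  DIGITS = { "I" => 1, "V" => 5, "X" => 10, "L" => 50,
             "C" => 100, "D" => 500, "M" => 1000 }.freeze

  # Any unknown class method name gets treated as a Roman numeral.
  def self.method_missing(name, *args)
    values = name.to_s.chars.map { |c| DIGITS.fetch(c) }
    values.each_with_index.sum do |value, i|
      nxt = values[i + 1]
      nxt && nxt > value ? -value : value  # subtractive notation, e.g. the I in XIV
    end
  end
end

Roman.XIV  # => 14
Roman.VII  # => 7
```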

The chapter ended with a section on modules. Unfortunately I felt that this was not as clear as what had come before it, though by the end of the section, and with a bit of fiddling, I was starting to get an idea about some of the things it was showing. Certainly enough to be interested in finding out more about the language.

The self study for this day only had one task, which was to extend the module that we’d written to handle CSV files with some metaprogrammed method_missing goodness. This went a lot quicker and smoother than day 2’s exercises, but again it would have been nice to see a completed version to compare against mine.
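My attempt ended up along these lines; a trimmed-down sketch (the header and data values are invented) rather than my full answer:

```ruby
class CsvRow
  def initialize(headers, fields)
    @headers = headers
    @fields = fields
  end

  # Turn each CSV header into a virtual accessor for its column.
  def method_missing(name, *args)
    index = @headers.index(name.to_s)
    index ? @fields[index] : super
  end

  def respond_to_missing?(name, include_private = false)
    @headers.include?(name.to_s) || super
  end
end

row = CsvRow.new(%w[one two], %w[lions tigers])
row.one  # => "lions"
row.two  # => "tigers"
```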

As an intro to Ruby it was short and sweet. I certainly can’t claim to have learned Ruby yet, but I’m undeniably further on than I was and keen to learn more in depth. I was able to pick up a copy of the pickaxe book from pragprog whilst they were running it on super special offer prices, so this will be happening sooner rather than later. Following IronRuby’s recent release of Visual Studio integration tooling, I’m looking forward to seeing how it can help my .net based apps.

Thursday, 25 November 2010

Finishing Ruby Day 2, After A Day Or Two…

Finding the time to finish day 2 took me a few sessions over the next few days, but I’ve finally wrapped it up. The lessons continued with writing our own classes. This started with a simple tree example, which I ended up getting a little carried away with extending. After that came mixins. These seem to address the multiple inheritance issues that .net uses interfaces for, but Ruby includes the implementation, which is nice. The chapter finished with a bit more general data manipulation.

The self study section starts with finding stuff again, but where day 1 was “find the docs” this is more focused on learning more about topics that we’ve touched in this chapter so I had to pay more attention to the content that I found.

It then moved to the exercises again. This time it felt a bit more like being thrown in at the deep end, requiring more googling and experimentation. This continued the theme of working harder to get more out of it, which is not without merit. The first set of exercises mentioned that there’d be answers at the back of the book, but this is not the case. Whilst with coding there is generally not a single correct answer, at this stage in my learning of the language I’d like to see what is considered a good solution to compare against mine. There are aspects of my code where I feel that I may be missing a better approach, but it does what I expect of it, so it can’t be tooooo bad.

Finding the time to get all the way through this chapter was tricky. However, this has a lot to do with the fact that I want to be at my computer to fiddle with the samples rather than just reading the book on my commute or something. That would have let me blast through things a goodly chunk faster but I’m trying to get the most out of it, so I’m happy with the progress I’m making through it.

Tuesday, 23 November 2010

Ruby Tuesday

The chapter for day 2 is a chunk longer than day 1, so with only a limited amount of time I did not manage to complete it. I did like what I saw though.

The first thing that jumped out at me was in array access. Given a plain old array like a=['first', 'second', 'third'] I’m entirely used to the concept of a[0] being ‘first’, a[1] ‘second’ etc., but have never seen the reverse indexing feature whereby a[-1] is ‘third’, and that delighted me.

I have seen snippets of Ruby like 3.times <blah blah blah> before and quite liked the flow that it led to, but being able to assign a number to a variable and then use it in the same manner, like:
>> b=5
=> 5
>> b.times {puts "rockin the b"}
rockin the b
rockin the b
rockin the b
rockin the b
rockin the b
was another pleasing experience, as it seems more useful than just being able to run it on numeric literals.

After this we moved on to a custom implementation of the times method, added to the Fixnum class, using the yield keyword to execute the blocks of code. I quite liked the introduction of extension methods in .net for the ability to add handy methods to classes, so seeing this part of the dynamic nature of Ruby was another good thing.
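The book’s version targets Fixnum; Fixnum has since been folded into Integer in modern Ruby, so a sketch from memory (not the book’s exact code) looks something like:

```ruby
# Reopen Integer (Fixnum in the book) and add our own times.
class Integer
  def my_times
    i = 0
    while i < self
      yield            # hand control to the caller's block
      i += 1
    end
  end
end

count = 0
3.my_times { count += 1 }
count  # => 3
```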

I’ve got a little way to go yet before I even reach the next set of exercises, but again I found myself expanding on the samples given in the book which is a good sign that I’m feeling interested and engaged in what’s happening, and helps to make that learning start to stick. I had been contemplating trying to squeeze some IronRuby into some personal projects that I have been planning, so this bodes well for that.

So far, the only thing that I don’t like about Ruby is that it seems to favour the use of the underscore in naming conventions. I’ve long been a fan of the camel/pascal case conventions used in .net land if only for the simpler typing and shorter names that it provides. Ho hum, it can’t all be hugs and puppies.

7 Languages

People keep batting around the statement that you should aim to learn a new language every year to be a good programmer. This tends to be attributed to the book “The Pragmatic Programmer” the authors of which went on to create their own publishing company and have released many more books on programming pragmatically. One of their most recent releases is “Seven Languages In Seven Weeks” and in my aims to be a better dev, I have decided to attempt to follow this book and see what I can learn. It takes a bunch of languages that have fundamental differences between them, such as dynamic, functional and logic based, aiming to give you a basic understanding of what makes that language interesting and useful compared to the others. Whether or not I get to use any of these languages in my day job, hopefully it will open my eyes to different ways of doing things, and make me better with C# and VB.net.

The first language up is Ruby. I’ve never had the chance to get involved in Ruby at work, but have heard plenty of good things about the language so have been interested in learning a bit about it. I’m sure that there are plenty of other devs in the same boat, so this was a great choice of opener in my opinion.

Day 1 starts with a simple intro to Ruby syntax: simple assignments, printing to the console, the fact that everything is an object unlike .net with its value types, the way that the if statement considers 0 to be true, and touching on the dynamic typing but without going into enough depth to really get a feel for how this is useful in the real world. I hope to see days 2 and 3 help to get this point across, as this is one of the key areas that I want to grok about Ruby coming from a static language like C#. There are plenty of small simple code examples in this segment that illustrate what is being taught as well as serving as a springboard for a little fiddling. They are all run in the irb REPL program. It’s quite nice to have such a lightweight way to run some simple code. Visual Studio can be awesome, but it does take a while to get you to the code from a cold start.

After the usual discussions about how to do the basics the chapter leaves off with some exercises for the reader to perform. First of all these are just finding some useful sources of documentation on the intarwebs, including ranges. (‘aa’..’bb’).to_a is a very interesting thing to see in action :) After this there were a few exercises, starting off with the classic “Hello, world”, some basic looping, running code from a file rather than in the REPL, and progressing through to a simple “guess the random number between 1 and 10 with a prompt to go higher or lower next go” game. I’m not sure that I’ll win any points for style, but these all came fairly simply. This last step of engaging us with some tasks based on, and stretching beyond, what has been given in the chapter works well for helping to solidify that learning.

This was a decent intro to the basics of the language and the approach of the book that has me looking forward to working through some more. For the record, and to open myself up to ridicule were anyone to bother with reading my little blog, the code I came up with for my guessing game was:

mynum = rand(10)
guess = -1
until guess == mynum
    puts "have a guess"
    guess = gets.to_i
    puts 'too low' if guess < mynum
    puts 'too high' if guess > mynum
end
puts 'woot'

I’m not sure what will happen if a non numeric character was entered, I don’t think Ruby is dynamic enough to come up with something that meaningful on its own ;)

Sunday, 14 November 2010

Testing and Tooling, I’ve got it covered

We’re currently going through the process of trying to get unit tests around a large legacy code base at work, most of which has been written in a rather test-proof manner. This means that we need to introduce seams as we work through different sections, and obviously this sort of code change can create new and subtle bugs if any small errors are made in the refactoring. As we’re adding the tests as we go, human error can mean that a method gets modified without the safety harness of tests to watch it. Having seen one such bug get onto our live webservers and cause a few issues when used in anger, it occurred to me that we ought to get some code coverage running so that we can tell exactly what code is tested, and see what we’ve missed.

I’ve dabbled with NCover in the past, but the open source version is getting a bit long in the tooth now, and I had no budget to jump straight in with the commercial version, so I found PartCover and after a bit of tweaking found that I could get useful metrics out of it, even if the interface currently feels rather clunky. At the high level, code coverage figures are a nice thing to know, but tell you very little about what is being tested, especially at this sort of stage of a project where we are at fractions of a percent. However, the ability to dig into different assemblies, classes, and even diving into the code in methods to see which lines had been executed suddenly gives us a great insight into what is going on. We can now see if there are any sections of the class that we are refactoring that aren’t covered by tests, and make sure that we rectify the situation before hitting the real world, like that implementation of IEquatable<T> that had tests for both equal and unequal objects, but not null. Oops.

Such tooling is less important in the modules that we’ve been able to code using TDD, but even there it is handy to see if we’ve been a bit overzealous with writing production code. We’re not ready to push code coverage into our CI system yet, as the high level statistics would give management a bigger stick to beat us with over the low number of tests that we currently have. However, as we get more and more tests in there, we shall end up with it in the automated build for the extra reassurance that it gives us.

 

On a slight tangent, I had a lovely experience whilst working on one of the new classes that I have been able to develop with TDD recently. I’d written a bunch of tests and the code to make them pass, and then went through the cycle again. All of a sudden my test runner lit up like a Christmas tree. I’d made some simple schoolboy error, like an off-by-one, or using < instead of >, or some such faux pas. Before I’d had a chance to think about anything else the system was already telling me I’d screwed up. It’s a nice warm and fuzzy feeling to know you have that kind of security in place :)

Friday, 5 November 2010

A Sink?

Yesterday evening saw Mads Torgersen and Lucian Wischik, from the C# and VB teams respectively, give a presentation to the London .net user group regarding the new Async functionality that has recently been released as a CTP addon to Visual Studio 2010. The guys are on tour with the same core presentation that Anders gave at PDC, but in the context of the user group setting there was a lot more audience participation, which pulled it in a different direction and made it a useful follow-on.

One of the main messages that keeps cropping up in any discussion of the new feature is that it is not about spinning up new threads and making use of all those cores that we find in machines these days. It is about the orchestration of asynchronous tasks, allowing us developers to write code as if it were nice easy synchronous code, without all the callbacks for completion or failure. Creating methods that actually do work in parallel is a whole different matter.

As someone with only a few dribs and drabs of experience running code asynchronously, the demos I’ve seen so far are very compelling. The code is far, far easier to write and understand using the new keywords than when manually wiring in all of the callbacks. Most of the emphasis has been on rich client code, whereas I spend most of my time in ASP.net at the moment, but the message coming from MS is that we’ll be able to help servers scale better by not blocking threads that are waiting on other services. Whether this is useful for relatively quick connections, like hitting a big database server over a super high speed LAN, as opposed to slow things such as going out to web services across the internet, will determine how useful this ends up being to me.

I’d like to quote a paragraph from Jon Skeet’s blog that seems to sum it all up well:

“It's important to note that it's not a free lunch, and doesn't try to be. It removes much of the error-prone mechanical drudgery of writing asynchronous code, but it doesn't attempt to magically parallelize everything you do. You still need to think about what makes sense to do asynchronously, and how to introduce parallelism where appropriate. That's a really good thing, in my view: it's about managing complexity rather than hiding it.”

Tuesday, 20 July 2010

The Data Retrier or how I crammed functional programming, generic methods & reflection into 1 routine

I had to work on an old batch job recently that needed to run against a remote database with a flakey VPN connection that kept causing ADO calls to timeout and fail at random. The technique that I came up with to let the job run all the way through was my Data Retrier of the title.

Essentially, all I needed to do was to find the calls that were failing (of which there were but a handful) and package the call up in a lambda, sending it through my new routine. That would then try to execute the call in a loop to allow it to cope with the odd failure here and there. Using reflection it is able to update the console with details about what it is trying to do, and generics let the method return the same type as the function that is passed in to it.

Function GetDataRetrier(Of T)(ByVal dataFunction _
      As Func(Of T)) As T
  Dim currentTry As Integer = 1
  Const maxTries As Integer = 3
  Dim lastException As Exception = Nothing
  While currentTry <= maxTries
    ' Everything from here to the try block is just for 
    ' outputting status, so is non-essential.
    Dim retType As Type = dataFunction.Method.ReturnType
    Dim retTypeOutput As String = retType.Name
    If retType.IsGenericType Then
      Dim genericArgs() As Type = retType.GetGenericArguments
      retTypeOutput += "<"
      For Each arg As Type In genericArgs
        retTypeOutput += arg.Name + ", "
      Next
      ' Trim the trailing separator before closing the bracket.
      retTypeOutput = retTypeOutput.TrimEnd(","c, " "c)
      retTypeOutput += ">"
    End If
    Console.WriteLine("Trying to get data " + _
      currentTry.ToString + "/" + maxTries.ToString + _
      " : " + retTypeOutput)
    Try
      Dim result As T = dataFunction.Invoke()
      Console.WriteLine("Got data on try " + currentTry.ToString)
      Return result
    Catch ex As Exception
      lastException = ex
    End Try
    currentTry += 1
  End While
  Throw lastException
End Function

The code in the job that used to get the data looked something like this:

Dim myData As List(Of BusinessyObject)
myData = dataService.GetRelevantBusinessyObjects()

To use the retrier the second line would just have to change like so:

myData = GetDataRetrier(Function() _
  dataService.GetRelevantBusinessyObjects())

The myData object gets the exact same content, but the call will now happily be retried against the database a couple of times if it fails. And with that change the app went from being almost impossible to run all the way through to working every time.

Wednesday, 23 June 2010

Secret Squirrel

This post is just for holding nice little SQL scripts that seem like they could be useful things to have in the toolbox.

First up we have a routine that can be used to search all the stored procedures in a database for the supplied string:

CREATE PROCEDURE Find_Text_In_SP
@StringToSearch varchar(100)
AS
   SET @StringToSearch = '%' +@StringToSearch + '%'
   SELECT DISTINCT SO.Name
   FROM sysobjects SO (NOLOCK)
   INNER JOIN syscomments SC (NOLOCK) ON SO.Id = SC.ID
   AND SO.Type = 'P'
   AND SC.Text LIKE @StringToSearch
   ORDER BY SO.Name
GO

And its close personal friend for searching the names of stored procs for the specified string:

CREATE PROCEDURE Find_SPName_With_Text
   @StringToSearch varchar(100)
AS
   SET @StringToSearch = '%' + @StringToSearch + '%'
   SELECT DISTINCT SO.NAME
   FROM SYSOBJECTS SO (NOLOCK)
   WHERE SO.TYPE = 'P'
   AND SO.NAME LIKE @StringToSearch
   ORDER BY SO.Name
GO

The above sprocs were shamelessly stolen from knowdotnet.com

Tuesday, 25 May 2010

UUHellcoding

I’m currently working on porting a site from Coldfusion to .Net, and have been tasked with making the new site capable of reading the old site’s cookies to auto log in users. The first step in this requires decoding the cookie string into binary data with a UUDecoder. I’ve trawled them there interwebs and found a load of “implementations” on various coding websites, none of which worked. I particularly enjoyed the one that took a string and turned it into a string. That seemed to be missing the point of a binary encoder to me.

Aaaanyway, the happy ending to my tale of woe came when I found this post on Szymon Kobalczyk's Blog. A great big thanks to Szymon for this little nugget of goodness.
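(As an aside, and with a tinge of irony given this is a .Net tale: Ruby, of earlier posts, can round-trip uuencoding out of the box via pack/unpack, which would have made for a much shorter evening. A quick sketch with a made-up payload:)

```ruby
# Uuencode a string and decode it back using Ruby's built-in
# "u" directive for Array#pack / String#unpack.
encoded = ["hello world"].pack("u")
decoded = encoded.unpack("u").first

decoded  # => "hello world"
```

Note that the decoder hands back binary data, which is exactly the point the string-to-string “implementations” missed.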

Monday, 12 April 2010

Slipping in a little alt.net under the radar

Sometimes it can be hard to get your employers to move forwards on a good idea. They may agree in principle that unit testing and dependency injection are good things, or that development effort would be reduced with a good ORM solution, but it just doesn’t fit in the project plan yet. “Soon. We’ll do that soon…”

My current employers are interested in moving to a more agile process, but the combination of deadlines, sprinkled with a little fear of change, can slow these things down. A good remedy can be to sneak in a few things without seeking permission from the start: things that can easily be seen to have a benefit without too much investment upfront. The best choices are going to be things that aren’t likely to cause sticking points for other devs.

Dropping StructureMap or NHibernate into the middle of a codebase is going to be a problem when another dev touches your code. Simply compiling it will fail without getting the libraries onto their machines, and even then, the instant that someone else needs to change something there are going to be problems when no-one else understands the APIs, or even the concepts behind them.

Unit tests are great, but building up a decent test suite when one doesn’t exist is not a quick task. It may be hard to get the all clear for taking TDD approaches everywhere, and the short term performance drop of writing all your tests could make you look bad if management hasn’t bought into the idea and you just forge ahead alone.

The easy starting point that I found was Continuous Integration. It isn’t going to suddenly bite other developers when they stumble across it, and other than taking a few hours to set up initially it won’t have a big impact on your visible productivity. Of course, without a decent test suite to run you’re not going to feel the full benefits, but if you are part of a team and find that getting latest out of source control results in a broken build more often than you’d like (a couple of times a week for me), it is a good place to start, and it has plenty of scope for enhancement with tests, coverage, static analysis etc. later on.

We currently have 3 important systems under development with a certain amount of code shared between all 3, and I am involved in a couple of tasks, about to start, that will merge some of the code changes that have happened in one back into the others. This is an area ripe for causing build problems. A simple change in one project that isn’t accommodated in the others could easily break a system that you don’t even know you’re modifying. With this in mind I set up CruiseControl.net to build these 3 projects on my own machine. With this well underway, I mentioned it to my team leader, extolling the virtues of quickly locating breaking changes. He liked the idea, so I was able to spread an email around all the devs introducing it, linking to CCTray and giving instructions on configuring it so everyone could see the build status. Had I been less confident about the sort of response the suggestion would get once everything was in place, I might have been more likely to keep quiet about it, knowing I had a safety net to catch me and the fellow open-minded devs I’d let in on the secret. As there are already plans for us to have continuous integration at some point in the future, it seemed like a positive thing to get out there once I had moved it to at least a decent proof of concept stage.

This is now running with at least some if not all of the devs watching it and has already caught a few broken build states quickly and easily. Following this, my team leader asked me to do a presentation at an upcoming team meeting to talk a little about what this gives us, so I hope to use this to get across the reasons for wanting to run CI, the benefits that it gives us currently, and how much more it could provide when mixed with a full test suite etc. With a bit of luck, this will help to ensure that everyone is on board with the process and make it a success.

Wednesday, 31 March 2010

F# pwns your language!!!

I've been spending a little time starting to get a feel for F# recently and just discovered that it has the keyword "pown" :)

Tuesday, 23 March 2010

TDD - Test Deriven* Development, or how I used reflection to subvert everything test-first stands for.

(* Ok, so deriven isn’t a proper word, but it flows better than derived in that context)

In my efforts to improve myself as a developer, one of the key techniques that I’ve picked up on has been that of test driven development. A short time working with it has shown me that it helps you to think about how best to design the interface to the code you are focussing on, and that by endeavouring to write testable code, you end up with looser coupling and better adherence to good object oriented programming techniques. However, there can be times when it’s not the right tool for the job.

I was recently tasked with creating the logging system for the project that I am working on. There are currently 26 different events that create log entries, all of which share a common base structure of mandatory fields, along with a selection of other fields specific to the event. Some events are closely related so share a number of these extra fields, and some of the fields only appear in one event. The design that I was given involved creating a base class to hold the common content and allow polymorphism in certain areas of handling the logs, and derived classes for all the actual events. Each class has a constructor containing all of the fields to ensure that they are set and nothing is accidentally left null as the compiler will catch such issues. In hindsight, I feel that this would have been an ideal project for code generating from some T4 templates or something like that, however I only read my first article on T4 after starting the task. D’oh. So, I ended up with a lot of similar classes and felt a little concern that having done soooo much boring manual labour I had probably made a few mistakes that might be tricky to catch.

This struck me as the sort of area that a decent set of unit tests would be ideal for, but manually crafting such a thing would be as error-prone as writing the classes in the first place. This is where reflection came in. By scanning my assembly for all of the classes deriving from the base, I was able to write a few simple tests to check for the sort of problems I was concerned about. The tests I settled on were: checking that each class has only one public constructor, and that each parameter in the constructor has a matching property on the class with the same name and data type, to catch missing fields, typos, and mismatches such as nullable ints becoming plain ints. Lo and behold, my classes had a bunch of these small problems floating around, which I was quickly able to correct, but which would have been a nightmare to find and fix manually.

Properly written unit tests should test only one thing, but that is obviously not possible with a reflection-based test. The compromise I settled on was for each test to check all classes for a single type of issue, with the failure message building up a list of every offending class. To do this I used the following methods as my mini reflection-testing framework. The core of the functionality is RunReflectionTest, which takes an action containing the specific check that will show up in the test runner. It simply loops through all of the classes, runs the provided action while accumulating an error list, then reports any problems at the end.

private void RunReflectionTest(Action<Type, List<string>> testMethod)
{
    var classes = GetAllEventClasses();
    var errorMessages = new List<string>();
    foreach (Type eventClass in classes)
    {
        // Run the supplied check, then add a blank line if it reported anything
        // so that failures for different classes are visually separated.
        int messageCount = errorMessages.Count;
        testMethod(eventClass, errorMessages);
        if (errorMessages.Count > messageCount)
            errorMessages.Add(Environment.NewLine);
    }
    DisplayErrors(errorMessages);
}

The DisplayErrors method simply checks whether there are any entries in the list of error messages and calls Assert.Fail if necessary. As this was a quick system knocked up just to test a single area, the tracking here is very simplistic: the count of error lines doesn’t map exactly to the number of errors, but it did the job well enough.

private void DisplayErrors(List<string> errorMessages)
{
    if (errorMessages.Count > 0)
    {
        string errorString = errorMessages.Count.ToString() + " lines" + Environment.NewLine;
        foreach (string error in errorMessages)
        {
            errorString += error + Environment.NewLine;
        }
        Assert.Fail(errorString);
    }
}

The reflection starts here. GetAllEventClasses finds all of the classes that inherit from my base class, specifically excluding the handy DebugEvent class that I use for quick hacks in the main system. GetConstructors is just a small helper that the actual tests need.

private IList<Type> GetAllEventClasses()
{
    var q = from t in Assembly.GetAssembly(typeof(LogEventBase)).GetTypes()
            where t.IsClass && t.IsSubclassOf(typeof(LogEventBase)) && !t.Name.Equals("DebugEvent")
            orderby t.Name
            select t;
    return q.ToList();
}

private IEnumerable<ConstructorInfo> GetConstructors(Type eventClass)
{
    return eventClass.GetConstructors();
}

With all this code in place, a unit test could then be as simple as:

[TestMethod]
public void EventClassesDontHaveMultipleConstructors()
{
    RunReflectionTest(CheckClassForMultipleConstructors);
}

private void CheckClassForMultipleConstructors(Type eventClass, List<string> errorMessages)
{
    if (GetConstructors(eventClass).Count() > 1)
    {
        errorMessages.Add(eventClass.Name + " has multiple constructors.");
    }
}

The test method just calls RunReflectionTest, passing in a delegate to the function that performs the check on a class-by-class basis. The testing code simply adds to the errorMessages list whenever it encounters a problem. I’m sure Uncle Bob won’t be asking me for the rights to put my code in his next book any time soon, but it certainly made my life a whole lot easier.
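For completeness, the constructor-parameter-to-property check mentioned earlier might look something like the sketch below. The method name and the assumption that parameters are camelCase versions of PascalCase property names are mine, not from the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

public static class ConstructorChecks
{
    // Checks that every public constructor parameter on eventClass has a
    // property with a matching (PascalCase) name and identical type.
    public static void CheckConstructorParametersMatchProperties(
        Type eventClass, List<string> errorMessages)
    {
        foreach (ConstructorInfo constructor in eventClass.GetConstructors())
        {
            foreach (ParameterInfo parameter in constructor.GetParameters())
            {
                // Assumes camelCase parameters map to PascalCase properties.
                string propertyName =
                    char.ToUpper(parameter.Name[0]) + parameter.Name.Substring(1);
                PropertyInfo property = eventClass.GetProperty(propertyName);
                if (property == null)
                    errorMessages.Add(eventClass.Name +
                        " has no property matching parameter " + parameter.Name);
                else if (property.PropertyType != parameter.ParameterType)
                    errorMessages.Add(eventClass.Name + "." + propertyName +
                        " type does not match its constructor parameter.");
            }
        }
    }
}
```

A mismatch such as a `int?` constructor parameter backed by an `int` property falls into the second branch, which is exactly the nullable-to-plain-int slip described above.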