Friday, 2 December 2011

Mmmm-Crunch

A few months back I started hearing about the concept of continuous testing, more specifically the tools Mighty Moose and AutoTest.Net. Unfortunately the Moose isn’t freely available, and AutoTest.Net, the open source project it builds on, isn’t as capable or polished. Fortunately I then heard of NCrunch, which is ready for anyone to download and start using, and seems to have similar goals: code coverage indicators, compiling only the smallest amount of changed code possible, and so on.

Obviously I was keen to install it, but given my machine’s love of hanging Visual Studio, and the size of the solutions I tend to find myself in, I left it in disabled mode for a while. With some recent work letting me home in on a smaller set of project files, I decided to give it a try. Initial impressions were a little mixed. It was looking pretty good for most of the projects in my solution, but a handful failed to build, sadly including the one I actually needed to work on. The NCrunch test runner stated that it couldn’t find the DLL that it was trying to test, but from what little I could see everything was where it should be. With work to do, I decided to turn NCrunch off and get on with it in the more traditional TDD style.

Eventually of course you move onto a different bit of work, and as that happened I realised that I could probably give NCrunch another go with the different assemblies I had moved on to. Second impressions were very favourable, with indicators telling me that code was failing then passing as I worked, and far less compilation happening than when using the built-in MSTest runner. The big red failure marks on the other project files were still bugging me though. I opened up all of the windows I could find in the NCrunch menu and saw a configuration pane which lets you tweak a few settings per project. One of these settings is CopyReferencedAssembliesToWorkspace; it carries warnings that it may negatively impact performance, but, joy of joys, it got one of my failing projects to build. Then it got 3 of them to build, with just 1 left failing. For the final failure I needed to set the same property again, but on the production code project rather than the test project itself.

Sadly, whilst you can specify which tests to run and which to ignore, NCrunch doesn’t seem to have any facility to tie that in to what we have set in the MSTest test list editor window and its associated vsmdi file. That is all still there to be run at will though, so I can leave it to the CI server and the odd local run when I check code in and out of source control, and stick to NCrunching whilst churning through the TDD cycle.

Tuesday, 22 November 2011

Grokking dynamic changes to an object’s type

One of the much touted features of dynamic languages like Ruby is the ability to change a class or object’s structure at run-time. Whilst this all sounds very clever and meta, I’ve not really seen anything that tells me why I’d want to do that. However, I’ve recently bumped into a couple of cases where I’ve found myself thinking that such an ability would be handy.

The key case is when I’m handed an object that I have no control over; generally this has been when interacting with a framework like ASP.Net, where I’m given an object it defines and so can’t tweak it. I was adding routes to a collection defined by the framework, but also wanted to add each route’s name to its DataTokens collection. This would have been an ideal place to override the collection’s Add method, but in C# and VB.Net this isn’t possible.

Sadly, just realising that I could do this sort of thing more easily with something like Ruby doesn’t help me deal with the problem. (In this case I wrote an add method that did both actions, and changed all of the collection.Add calls to use my new method. Not too hard, but it relies on no-one adding other routes without paying attention to that pattern in the future.) However, it is good to understand practical reasons for cunning language features. Knowledge is power and all that.
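For what it’s worth, the helper ended up along these lines. This is a sketch from memory with invented names: RouteCollection, Route and RouteValueDictionary are the real System.Web.Routing types, but AddNamedRoute and the "RouteName" token key are mine.

```csharp
using System.Web.Routing;

public static class RouteCollectionExtensions
{
    // Adds the route to the collection and records its name in the
    // route's DataTokens in one step, so the two can't drift apart.
    public static void AddNamedRoute(this RouteCollection routes, string name, Route route)
    {
        if (route.DataTokens == null)
        {
            route.DataTokens = new RouteValueDictionary();
        }
        route.DataTokens["RouteName"] = name;
        routes.Add(name, route);
    }
}
```

The extension method means callers still feel like they’re using the framework’s own collection, they just go through the one method that keeps the name and the tokens in sync.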

Tuesday, 1 November 2011

Subverted! Mergeinfo unsupported.

I was just trying to merge a feature branch back into the main dev branch on our work SVN server, whereupon I received the mightily unhelpful message “svn retrieval of mergeinfo unsupported by” followed by the path of my repo. With a bit of prodding around I noticed that I was able to use the “merge a range of revisions” option in TortoiseSVN; it was just the “reintegrate a branch” option that was failing. So by selecting the former option, then entering the URL for my branch, I was able to use the “show log” button to select all of the changes I’d made, giving me the revision range to merge, and let SVN do its thing.

Our SVN server setup has a mix of a couple of old versions, whereas my client is on a flavour of 1.7, so this seems to be a mismatch between the two. The workaround, whilst not as simple as the reintegrate option, seemed to do the job fine. Hurrah.
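For reference, the command-line equivalent of that workaround would look something like this (the revision numbers and repo URL here are invented for illustration):

```shell
# Merge a specific range of branch revisions into a working copy of the
# dev branch, instead of using the failing --reintegrate option:
svn merge -r 1234:1300 http://svnserver/repos/project/branches/my-feature .

# Then review and commit the merged changes as usual:
svn commit -m "Merged my-feature branch (r1234:1300) into dev"
```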

Friday, 30 September 2011

MVC model binding not setting values

As part of our push towards using the latest and greatest technologies, we have a couple of small web solutions using MVC rather than WebForms. A problem came up recently for one of our devs when trying to model bind a post from a form. The model being passed in to the action method on the controller was chock full of default values for the data types in question, rather than taking the values that the form was providing. He spent quite a bit of time staring at the code trying to figure out why it didn’t work. This was followed by quite a bit of time with me staring at his code. Googling for the issue didn’t seem to show up anything obvious either.

Wondering if he’d managed to break the routing setup or something else low-level and fundamental to MVC working correctly, I took a look at the log-on code in the template MVC project and saw that it worked just fine. After more staring I finally realised what the problem was. The following code shows essentially the same very, very basic model class, firstly with code that doesn’t model bind, secondly with code that does:

public class BadModel
{
    // A public field: the default model binder ignores this.
    public int MyValue;
}
public class GoodModel
{
    // A public property: this gets populated from the form values.
    public int MyValue { get; set; }
}

Pretty subtle huh? Trust me when I say that it is far subtler when you don’t have the good and bad code stacked together, where the blatant line length disparity makes the property stand out. Granted, best practice says we should always hide member variables behind property getters and setters, so you’d hope not to have such a situation arise. However, in day-to-day usage in VB or C#, property and field access do tend to feel identical. Yes, a property compiles to separate getter and setter methods in IL, but Visual Studio hides these from us, so it’s not hard for a dev to leave direct access to fields in place. So remember: public fields are bad, mkay.

Sunday, 4 September 2011

Dolby Prolog-ic? It's day 2.

So, to begin with, day 2 of Prolog still looks very much like the same logic engine rather than something that feels like a full programming language to me. We kick off with recursive rules for working out the ancestry of members of the Waltons. The recursion gives a nice flow to the whole process of defining such a rule, and it is a concept that I'm very used to from my day-to-day coding. In fact, I'm ashamed to say that when I attended J.P. Boodhoo's Nothin' But .Net course and was tasked with writing an app in our own time that could list the contents of a directory tree without using recursion, it took me a looooong time to figure out how to do it, because recursion is such a nice and natural fit.

Next up is a demonstration of lists and tuples. The list is obviously a key data structure in my daily .Net (ahhh, List&lt;T&gt; how I love thee), and tuples get mentioned from time to time in the more cutting edge corners of the blog-o-sphere, and I recall the term from my comp sci degree, but they're not something that I've had much use for. This was different to the usual examples in the book: rather than writing a small app with some pop-culture theme, it was just bashing values into Prolog to see what happens when you tell it (1, B, 3) = (A, 2, C). or [4, 5, 6] = [4 | [Head | Tail]].

Then we were back to the more traditional programming of writing a file and then using it. In this case it was using recursion to write routines to count the number of elements in a list, sum the elements in a list, and, using those two, work out the average value of a list. This is definitely different to the normal ways of coding, and it does feel a bit convoluted, especially in these post-Linq days where any old enumerable collection of data lets you get a count from an extension method, or even pre-Linq where a typical collection class would have a count that it could tell you easily.
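From memory, the counting and summing rules come out something like this (the predicate names are my own, not necessarily the book's):

```prolog
% An empty list has zero elements; otherwise count the tail and add one.
count([], 0).
count([_|Tail], Count) :- count(Tail, TailCount), Count is TailCount + 1.

% Summing has the same shape, accumulating the head into the total.
sum([], 0).
sum([Head|Tail], Total) :- sum(Tail, TailTotal), Total is Head + TailTotal.

% The average just leans on the two rules above.
average(List, Average) :- sum(List, Sum), count(List, Count), Average is Sum / Count.
```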

The last lesson of this section takes a look at the Prolog predicate "append" and then walks through writing it from scratch. This ends up as merely a two-line definition, however it does get a bit mind bending with its use of recursion. My initial reaction is that I understand it, but I wouldn't want to have to write it myself. The thing is, the next part of the chapter gets into more exercises, so I may have to...
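For reference, the two lines in question are essentially the classic definition (give or take variable names):

```prolog
% Appending the empty list to List just gives List back.
append([], List, List).
% Otherwise peel the head off the first list, append the tails,
% then stick the head back on the front of the result.
append([Head|Tail], List, [Head|Rest]) :- append(Tail, List, Rest).
```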

And I was right to be suspicious. The first of the exercises was to write a rule to reverse the elements of a list, which leads to a very similar definition. I actually managed to get to the right answer pretty quickly, but as a Prolog amateur I failed to put square brackets around a variable that needed them and the code didn't work. There was nothing obvious pointing to why it failed, so fixing it revolved around trial, error, and headbutting the desk. The first sample here doesn't work, the second one does. Simples.

% Broken: append expects a list, but Head here is a bare element.
rev([],[]).
rev([Head|Tail], Revd) :- rev(Tail, Revd2), append(Revd2, Head, Revd).

% Fixed: wrapping Head in square brackets makes it a one-element list.
rev([],[]).
rev([Head|Tail], Revd) :- rev(Tail, Revd2), append(Revd2, [Head], Revd).

The next exercise asked us to find the smallest element in a list. I couldn't think of a way to do this without employing a conditional statement, which doesn't really seem like idiomatic Prolog from what we've been shown so far, but a quick google led me to a helpful post on Stack Overflow with this snippet "( condition -> then_clause ; else_clause )", after which it all fell into place. The last exercise was to sort a list. I stared at this for a while, started thinking of ideas that might work but would quickly become convoluted and huge, and rapidly lost enthusiasm. I decided to have a quick google and saw that other people were finding this one a blocker too, and that the solutions they'd eventually come up with, or simply copied from elsewhere on the net, were as nasty as I'd expected, either longwinded or mindbending, so I decided to cease my pursuit of that final goal there.
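For completeness, my smallest-element rule ended up along these lines (the naming is mine; many Prolog implementations ship a built-in min_list, hence the different name):

```prolog
% The smallest element of a one-element list is that element.
smallest([X], X).
% Otherwise find the smallest of the tail, then compare it with the head
% using the if-then-else construct from that Stack Overflow snippet.
smallest([Head|Tail], Min) :-
    smallest(Tail, TailMin),
    ( Head =< TailMin -> Min = Head ; Min = TailMin ).
```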

This chapter was a pretty heavy load of info, so again it didn't have the fun vibe of some of the earlier writing, but it did flow well and didn't ram Americanisms down my throat the way the previous chapter did, so it is approaching a return to form. Hurrah.

As an aside, whilst I'm doing this all on my mac lappy with the terminal, I'm getting to play around with Vim. It always takes me typing a word or two into a new document before I remember that the stupid thing has different modes for navigation and text entry. Bah. But once I've been reminded by the distinct lack of text appearing, or the end of a word after a vowel suddenly flitting onto the screen, I do like it :) I've tried using ViEmu in Visual Studio, but found that you need to use the escape key way too much in Studio so it clashed with the Vi functionality. So getting to use it a bit here is fun :)

Thursday, 1 September 2011

Visual Slowdio 2010

Recently Visual Studio 2010 has been having something of a performance problem. We’re not just talking slow here; it would hang for seconds at a time when doing seemingly trivial things, like clicking into the app to change some code after just looking in a web browser, or switching to a different file in the solution. You know, simple every-day stuff.

The first thought when faced with such issues is to switch off any extensions that are installed and might be causing trouble. I’m a big CodeRush fan, so I’d rather keep that around for now. I’ve been playing with ViEmu, but turning that off had no effect. Next up I spotted an installation of TestDriven.Net that I’d installed purely for a demonstration to my team based on things I covered on the J.P. Boodhoo Nothin’ But .Net course earlier in the year. That seemed to have 2 different entries in the Add-in Manager, “TestDriven.Net 3.0 Personal” and “TestDriven.Net Reflector”. Turning both of these bad boys off has seeeeeemed to fix the problem.

I’m not trying to criticise TestDriven here; it may be a combination of MSTest, CodeRush’s test runner and TestDriven all getting into fights over my code. I’d had it installed and forgotten about for months and the problem only happened recently, so it may have been an update to CodeRush or Studio that sparked it off. Anyways, so far it looks like I have a somewhat more responsive IDE again. Hoorah :)

Whilst that has reduced the problems, there are still times when Studio decides to have a little nap, albeit less frequently. So I’m compiling a list of links with tips for speeding it all up here, as a bunch of resources to churn through when I have time:

Tuesday, 30 August 2011

Open Saucy

One day, I shall finish my procrastinating and finally write and release something into the open source world. And when that glorious day finally arrives, I may just have to release it under the WTFPL because it is pretty damn awesome:

DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE 
Version 2, December 2004 

Copyright (C) 2004 Sam Hocevar <sam@hocevar.net> 

Everyone is permitted to copy and distribute verbatim or modified 
copies of this license document, and changing it is allowed as long 
as the name is changed. 

DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE 
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 

0. You just DO WHAT THE FUCK YOU WANT TO.

/* This program is free software. It comes without any warranty, to
* the extent permitted by applicable law. You can redistribute it
* and/or modify it under the terms of the Do What The Fuck You Want
* To Public License, Version 2, as published by Sam Hocevar. See
* http://sam.zoy.org/wtfpl/COPYING for more details. */

Marvellous :)

Monday, 22 August 2011

Properlog

So, now we're doing it old school as I set off into chapter 1 of Prolog proper. It is sparking a few memories of the old computer science degree, but only vaguely, so it still feels pretty new to me.

Whilst I enjoyed a bunch of the fun examples in previous chapters, here I felt that I had to change some Americanisms: Velveeta, Jolt, soda, Twinkie &amp; dessert became cheddar, coke, drink, cake &amp; pudding. Plus, flavour and colouring have a 'u' in them. I can't help it, I'm a bit of a grammar and spelling nazi sometimes :)

Anyways, on to the tech stuff. It has its impressive moments, but so far it seems less like a programming language and more like a logic calculator, as you have to use it within the boundaries of its runtime interface. I dunno if we'll find ways to do other work around crafting a UI, or to harness it as a logic engine within a different language for some polyglottism, but I currently can't see how I could fit it into anything that I do.

One of the samples was a map colouring tool that could give combinations of colours to use so that adjacent states were different colours. The code had the rules different(mississippi, alabama) and different(alabama, mississippi), but all other pairings were only listed once; since just the one rule was declared in both directions, I think it was a bit of a typo, although not really a bug, as it merely duplicates a rule. I also found the definition of the colouring rule overly verbose, with the definition spelling out the full state names and the calling code repeating exactly the same. The rest of the rules about what has to be different were tied intimately to the given selection of states though, so a more generic rule wouldn't really be right; it just felt like it wasn't very DRY.

As usual, the chapter finished with a bit of homework: a few quick googling tasks to gather some resources, and a couple of simple apps to write. (Is apps the right word for Prolog code? I'm not sure; knowledge systems, maybe...) These were nice and simple to churn out without needing to lean on any of the resources we'd been asked to google for. First up was a list of books and their authors, with a query to find all books by a single author. Following that was a list of musicians, their instruments, and the genres that they play. We were asked to write a query to find all guitarists, and that was it. The genre part was ignored completely, so I made my query a little more complicated so that I could find all musos based on their instrument, or all people playing a certain genre, or a combination thereof. And due to the way that Prolog works, you could also find all instruments and genres that a musician plays. As we all know, Mr Eddie Van Halen is both the world's greatest guitarist and a fine keyboard player too ;)
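The musician part boiled down to a handful of facts plus queries along these lines (the facts here are invented stand-ins for what I actually typed):

```prolog
plays(eddie_van_halen, guitar, rock).
plays(eddie_van_halen, keyboard, rock).
plays(bb_king, guitar, blues).
plays(herbie_hancock, keyboard, jazz).

% All guitarists:                plays(Who, guitar, _).
% Everyone playing blues:        plays(Who, _, blues).
% Everything a given muso plays: plays(eddie_van_halen, Instrument, Genre).
```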

As mentioned, from what I've seen so far I can't imagine how Prolog can be used anywhere other than in Prolog, which feels limiting. But the code came nice and easy without all the roadblocks that Io put in my way, so I'm quite enjoying it and looking forward to seeing if it will be more useful than I can currently grok. I found the writing of this chapter less inspiring than others though. Bruce Tate's writing kept me interested through the dark days of Io, but here it is the language that is holding the Prolog chapter together for me. Hopefully that is just me being prejudiced against a chapter with more Americanised examples than the earlier ones, and not a sign that a little way into the book he started to get bored or complacent and let the quality drop. Let's see what awaits in day 2...

Thursday, 11 August 2011

Replace Conditional With Polymorphism Refactoring–A Wake-up Call

One of the aspects of good object oriented design that I keep seeing mentioned is to replace complex conditional statements with polymorphism, the best known source of this probably being Martin Fowler's Refactoring. It is usually called out on seeing large case statements or stacks of ifs. I’ve always had a problem understanding how polymorphic behaviour would help here, but I think that is because the only places I’ve used much in the way of conditionals like that have been just behind the UI layer when processing user input, and there isn’t really a chance to do anything polymorphic at that stage, as things are only just coming in to the system.

However, I recently inherited some complex business logic code that is chock full of ifs and cases, and all of a sudden I can see where the potential for polymorphising it comes in. This code is absolutely mission critical, arguably the core of our business, yet we currently have no unit test coverage whatsoever and only minimal automated integration tests that certainly won’t be covering all of the edge cases. Add to that our legacy issues of tightly coupled code and static methods, the facts that it does actually work and is suitably performant, plus medium term business plans to completely overhaul the way it all works, and I can’t see myself being in a position to do any major refactoring any time soon. But at last I have come across a great example of where that refactoring would apply, which also makes me feel more comfortable that the places where I have been tending to pile up conditionals are a valid use of them.
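As a contrived sketch of what the refactoring looks like (an invented delivery-cost domain, nothing like our real code):

```csharp
using System;

// Before: behaviour selected by a conditional over a type code.
public enum DeliveryType { Standard, NextDay, Collect }

public static class DeliveryCosts
{
    public static decimal GetDeliveryCost(DeliveryType type)
    {
        switch (type)
        {
            case DeliveryType.Standard: return 2.99m;
            case DeliveryType.NextDay:  return 7.99m;
            case DeliveryType.Collect:  return 0m;
            default: throw new ArgumentException("Unknown delivery type");
        }
    }
}

// After: each delivery type is a subclass that knows its own cost, so the
// conditional disappears and adding a type doesn't touch existing code.
public abstract class Delivery
{
    public abstract decimal Cost { get; }
}

public class StandardDelivery : Delivery { public override decimal Cost { get { return 2.99m; } } }
public class NextDayDelivery  : Delivery { public override decimal Cost { get { return 7.99m; } } }
public class CollectDelivery  : Delivery { public override decimal Cost { get { return 0m; } } }
```

Callers then just ask whatever Delivery they hold for its Cost, without caring which concrete type it is.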

Friday, 5 August 2011

Prologue

Next up in the 7 Languages book is Prolog, and the author points us towards the GNU gprolog package. Fortunately this time there is a prebuilt installer. Hooray, that’s a promising start.

Screeeech. As soon as I try to start the Prolog app I get a file not found type of error.

I am sadly lacking in unixy skills, but was able to figure out that the path wasn’t set. Navigating to the correct folder and typing gprolog still gave a command-not-found error, but ./gprolog worked fine. I added a symlink to gprolog in /usr/local/bin alongside the now defunct Io and was then able to launch it. The other option, which I tried on my laptop, was editing my bash profile to add to the PATH environment variable. Both work, but I don't know which would be considered the better approach.
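For the record, the two approaches look something like this (the gprolog install directory here is a guess; adjust to wherever yours actually landed):

```shell
# Hypothetical install location of the gprolog binary.
GPROLOG_BIN="/usr/local/gprolog/bin"

# Option 1: symlink the binary into a directory already on the PATH
# (this is what I did on the desktop; /usr/local/bin may need sudo):
#   ln -s "$GPROLOG_BIN/gprolog" /usr/local/bin/gprolog

# Option 2: extend PATH for the current shell; appending the same line
# to ~/.bash_profile makes it permanent (what I did on the laptop).
export PATH="$PATH:$GPROLOG_BIN"
```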

I have only done the first couple of simple examples to check that it is working properly, so I won't yet document my experiences with it, but I will say that after all these semicolons ending statements in C-based languages, using a full stop feels very civilised :)

Sunday, 31 July 2011

7 Language Lion

So, many moons ago I started working through the 7 languages in 7 weeks book. I saw signs of the loveliness that has won Ruby so many fans, and I battled with Io, eventually finishing the section ready to move on to another language. It was around that time that I ran out of steam. Tucking myself away in my room to learn about a new language doesn't fit well with trying to play family man, and installing a bunch of new languages on my mac when I don't have a good Unix background to know if I'm screwing my system up left me a little concerned too.

Fast forward a few months and Mac OS X Lion is released for a bargain price. I decided to set up my much neglected laptop with Apple's latest big cat and start using it for the development of my development. With this release Apple have relaxed their restrictions on running the OS in virtual machines, which struck me as an excellent opportunity. At the moment, VMware Fusion is still helping to enforce Apple's old restrictions by only letting you run server versions of OS X in its VMs, but with a little help from this excellent post I was able to create a hacked image that satisfies VMware.

That gives me a nicely virtualised sandbox where I can mess around to my heart's content without screwing up the main OS installation, with the added ability to roll back to an earlier snapshot if necessary, but the icing on the cake came with this next link. It shows how to set up Spaces in Lion so that Fusion can be restricted to a single screen, which allows me to easily switch back and forth between the main OS and the VM with a gesture. Marvellous.

The starting point for my dev VM has Lion, XCode, Git and Gitbox. Whilst Gitbox is available on the Mac App Store, getting it straight from the dev's site means there is a free download which allows me to use it with up to 3 repositories, rather than having to fork out £27.99, which is great for my light usage. My free private Git repo is provided by unfuddle. Github may be getting all the publicity, but I like being able to keep my code private, which usually costs monies, so kudos to unfuddle for that too. After snapshotting that for ready rollbackability, I've installed GNU Prolog, so I'm hoping to delve into the next chapter soon.

Monday, 25 July 2011

TDD justifications

In recent years, TDD has surged in popularity. If we look beyond pure TDD to writing unit tests in the same general timeframe as the production code, or having automated integration tests, then such practices are even more widespread. However, I think there is a bit of an echo chamber effect among the sort of devs who blog, tweet, go to conferences, talk on podcasts, post on Stack Overflow etc., which loses sight of how many people code for a living but don’t pay attention to what is happening in the world outside their office. These good practices aren’t nearly as widespread as some of us might like to think.

I was brought into my current job as part of a drive to improve the use of agile techniques and best practices. On a dev team of around 10 people there was only one other person writing unit tests, so it all started as something of an uphill battle. Being the new guy on a team and trying to change the processes can seem like you are just being critical and accusing the other developers of not being good enough. Introducing ideas with a good explanation of the benefits is vital to avoid causing an us-and-them rift, even when it is part of your job description. When testing has been viewed simply as a manual process that is the domain of the QA department, I think there are three stages of understanding to get across to the team: firstly the value of having automated tests at all, with the implication that the suite can include integration tests; secondly the point of having proper unit tests so that different components are tested in isolation; finally the reasons for writing your tests before the production code.

Why automate tests?

  • Regular feedback to show if changes have broken anything.
  • Confidence to refactor code from working but messy to working and clean.

Why unit tests?

  • Integration tests can be slow to run if there is network, database, filesystem access etc. Isolating your tests can provide huge speed ups with doubles providing required interactions. The regular feedback mentioned above can become even more regular.
  • Taking other components out of the equation makes it far more obvious which part of the code is at fault when something breaks; an integration test could implicate a huge stack of participating components, but a unit test will likely be a single public method call on a component, with just a few private methods to dig through. This can aid the refactoring point made earlier, although integration tests are essential if you decide to rework larger chunks of code.
  • Mocking out the dependencies of a component means that different devs or even teams or companies can be working on those layers independently without holding up consumers of that code, as long as a suitable interface has been designed and agreed upon up front. Or even if it will be done by the same dev they can stay focussed on what the current class deals with whilst that is in the forefront of their mind, and move on to the dependency and deal with that later.
  • Writing pure unit tests requires the code to be written with an eye to testability, meaning that code will have to be decoupled and conform to good design principles.

Why test first?

  • Having the test in place before production code helps to push the decoupling aspect harder than writing tests afterwards where code may need changing to become testable, or the odd integration test may be allowed to sneak into the suite.
  • You only write production code that a test has called for which helps you apply the YAGNI principle.
  • Although the last D in TDD officially stands for Development, test-first is really about Design: writing your test first means you are making a declaration of how you want to use that API, so it tends to lead towards a clear, usable component. This is more obvious with the newer test frameworks that use a variation on the word Specification in their name, rather than Unit or Test.
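To make the test-first flow concrete, here is a tiny, contrived MSTest example (BasketPricer and its API are invented for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Step 1: write the test first. It won't even compile until BasketPricer
// exists, which forces us to decide what the API should look like.
[TestClass]
public class BasketPricerTests
{
    [TestMethod]
    public void Total_SumsTheItemPrices()
    {
        var pricer = new BasketPricer();
        Assert.AreEqual(5.48m, pricer.Total(new[] { 1.99m, 3.49m }));
    }
}

// Step 2: write just enough production code to make the test pass.
public class BasketPricer
{
    public decimal Total(IEnumerable<decimal> prices)
    {
        return prices.Sum();
    }
}
```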

The first set is an easy win; the latter sections require more work, not to mention some fundamental changes in the way the team has to approach their code. Fortunately we’ve had a push from top level management to improve the number of tests we have, which is helping to nudge some of the more reluctant devs into adopting these practices, as well as giving us some scope to provide mentoring so that they can start doing TDD without being thrown in at the deep end.

Sadly we have some highly test-resistant code with lots of classes full of static methods that can’t easily be mocked out, so I shall post about the techniques that we’ve come up with for dealing with that soon.

I should probably blog more…

The road to hell is well known to be paved with good intentions. I set out with the intention of blogging plentifully to try and build my own personal brand, but as with many blogs the reality hasn’t been quite so impressive. However, today I have been watching Scott Hanselman’s presentations on the topic “Every Developer Needs a Blog”, Part 1 &amp; Part 2, and have been inspired.

There was much goodness packed into the talk, but the part that stood out to me was the suggestion that the alpha geeks in a company tend to be the ones who send emails around to the rest of the team with all the shiny dev news, tips and other such content, and that this could make for prime bloggable content. I certainly fall into this camp, and so I’ve decided to see if I can retrofit some of my team emails into this blog (after removing anything too company specific), and subsequently use it for such future messages. I don’t expect to be anywhere near the bleeding edge of progress in the development community, but based on my experience most companies aren’t, so I hope that it will be of some value.