Wednesday, September 3, 2008

Speaking At GANG

I'm going to be speaking at the Great Lakes Area .NET User Group in Southfield on September 17th.  I will be giving a talk about Rhino Mocks, and the how, when and why to use it. 

I am still putting the presentation together, so if you will be in the area and would like me to cover something specific, leave a comment.  Even if you won't be attending, let me know if you think there are points which would be helpful for the people there.

Monday, July 21, 2008

Why I Am Sick Of Hearing About Deferred Execution

Since the announcement of LINQ we've heard plenty about "deferred execution", a term that gets thrown around as if it's some sort of LINQ magic feature.  Personally, I think I need to come up with my own term and claim it's something awesome too.  I'm really tired of hearing about it.

On Wednesday, July 15th I went to a Great Lakes Area .NET Users Group talk by Bill Wagner where he was talking about Extension Methods and how to make proper use of them.  Now, don't get me wrong, I have a lot of respect for Bill.  I don't mean to criticize Bill in any way.  So Bill, if you read this, I really don't mean any disrespect by this.  It was simply your use of the term that made me recall my feelings on this topic.

Bill was doing a demo where he showed various LINQ extension methods and showed that by making use of these extension methods we were able to harness the power of DEFERRED EXECUTION! 

The first example Bill showed was Enumerable.Range(Int32, Int32), which returns an IEnumerable<Int32>.  Bill then showed that when he calls the Take() extension method it only iterates through the first x items in the range, not the full list of items identified by the range.  Ok yes, this is true.  We didn't have to create a new list and populate it with a million items just to pull the first 5 items.
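
Just to make that concrete, here is roughly the kind of call that was being described (a tiny sketch of my own, assuming the usual System and System.Linq namespaces are in scope); the range describes a million values, but only the first five are ever produced:

var firstFive = Enumerable.Range(0, 1000000).Take(5);

foreach (var i in firstFive)
{
    Console.WriteLine(i);   // prints 0 through 4; the remaining 999,995 values are never produced
}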

Bill later went on to discuss how if you use a LINQ query with variables, you can change those variables after you have defined the query.  His code looked something like the following:


var range = Enumerable.Range(0, 1000000);

var maxValue = 40;

var items = from r in range
            where r < maxValue
            select r;

var takenItems = items.Take(30);

maxValue = 20;

foreach (var i in takenItems)
{
    Console.WriteLine(i);
}



Output:

0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19

Now yes, you define your LINQ query, change your variable after the fact, and then consume that query.  Yes, it takes the change to your variable into account.  Yes, this occurs after you defined your query, so deferred execution is a term that makes sense.

Ok, I'll give in a bit: I'm ok with the term, but not the way it's talked about.  The magic isn't LINQ, and understanding what is going on is not just about understanding LINQ.  It's the fundamentals of how LINQ works which people should really understand.

I'm going to say this one more time before I move on: "Deferred Execution is not a LINQ feature."  It's a closure feature/implementation pattern.

First let me try to explain the implementation pattern piece by creating my own "Deferred Execution" code which works exactly the same way as the Range method Bill demonstrated.  (Note that this is not necessarily built with production quality in mind.)



public class MyRange : IEnumerable
{
    private class RangeEnumerator : IEnumerator
    {
        private int? _current;
        private bool _complete = false;
        private readonly int _minValue;
        private readonly int _maxValue;

        public void Dispose()
        {
        }

        public bool MoveNext()
        {
            if (_current == null)
            {
                _current = _minValue;
                return true;
            }

            if (_current < _maxValue)
            {
                _current += 1;
                return true;
            }
            else
            {
                _complete = true;
                return false;
            }
        }

        public void Reset()
        {
            _current = null;
            _complete = false;
        }

        public int Current
        {
            get
            {
                if (_current == null || _complete)
                {
                    throw new InvalidOperationException();
                }

                return _current.Value;
            }
        }

        object IEnumerator.Current
        {
            get
            {
                return Current;
            }
        }

        public RangeEnumerator(int minValue, int maxValue)
        {
            _minValue = minValue;
            _maxValue = maxValue;
        }
    }

    private readonly int _from;
    private readonly int _to;

    public IEnumerator GetEnumerator()
    {
        return new RangeEnumerator(_from, _to);
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public MyRange(int from, int to)
    {
        _from = from;
        _to = to;
    }
}

That's actually really simple code, isn't it?  There is nothing revolutionary in that code.  Any one of us could have implemented that in C# 1.0. 
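
For comparison, the C# 2.0 iterator syntax collapses all of that into a few lines.  Here's a rough sketch of my own (not the actual BCL code) that behaves like MyRange above, assuming System.Collections.Generic is in scope:

// A sketch of the same inclusive range written as an iterator. The compiler
// generates a state machine very much like MyRange on our behalf.
public static IEnumerable<int> Range(int minValue, int maxValue)
{
    for (int i = minValue; i <= maxValue; i++)
    {
        yield return i;   // nothing runs until a caller asks for the next value
    }
}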

Now, let's look at a case with closures.  LINQ internally uses closures (via lambda expressions) to perform its queries.  So let's say I write my own closure. 



var range = new MyRange(0, 1000000);

var maxValue = 40;

Func<int, bool> expression = i => i < maxValue;

maxValue = 20;

foreach (int i in range)
{
    if (!expression(i))
    {
        break;
    }
    else
    {
        Console.WriteLine(i);
    }
}


Output:

0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19

Huh, wouldn't you know it, it also shows this magical LINQ "deferred execution" behavior.

So what's the point of all this?  First, I'm probably too easily set off on topics like this.  Second, we shouldn't look at "deferred execution" as some sort of LINQ magic, but rather as a pattern that can provide many benefits in our own code.  Deferred execution allows us to enhance the performance and flexibility of our applications.  This is something we can all make use of in our algorithms, even if we aren't utilizing LINQ.

And in regard to Bill's extension method talk, I really enjoyed it.  It was simple enough for people to learn about the new C# 3.0 features, and it was well presented with good examples.  I'm just frustrated that people seem to write this stuff off as magic even though the underlying concepts are simple.  Plus, this term seemingly just appeared with LINQ even though the concept has been around for a long time.

Saturday, July 12, 2008

Subtle Bugs When Dealing With Threads

Pop quiz, what's wrong with the following code?


public void Unsubscribe()
{
    if (_request != null)
    {
        ThreadPool.Enqueue(() => _service.Unsubscribe(_request));
    }
}

public void Subscribe(string key)
{
    Unsubscribe();

    if (!String.IsNullOrEmpty(key))
    {
        _request = new Request(key, handler);
        ThreadPool.Enqueue(() => _service.Subscribe(_request));
    }
}


Does everyone see the issue? There is a critical bug in the above code which isn't always readily apparent.



Try to find it...



I actually wrote code like this today (same concept, different implementation) and immediately saw some serious defects. Honestly, I'm lucky the issues popped up immediately; these sorts of things tend not to appear right away, but jump up to bite you at a later point.


In this case the issue is the use of closures. A closure doesn't copy the value of _request into the lambda expression; it captures the variable itself (here, through the enclosing object's this reference), so my use of _request from within the lambda refers to whatever _request holds at the moment the lambda runs. So in the above case the Unsubscribe lambda gets executed on a new thread (from the pool), but by the time it actually executes, _request has already been changed.
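
Stripped of the threading, the capture behavior looks like this (a tiny illustration of my own, not the code from above):

int threshold = 40;
Func<int, bool> isSmall = x => x < threshold;   // captures the variable, not its value at this moment

threshold = 20;
Console.WriteLine(isSmall(30));   // prints False: the lambda reads threshold (now 20) when it runs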



In this case, you're actually unsubscribing from a request that most likely hasn't even been subscribed yet. And to top it off you haven't unsubscribed from the old request yet either. Obviously the above is a race condition where the exact output isn't guaranteed. There is a chance it works perfectly (though doubtful with a true thread pool). There is a chance the new request is subscribed first and then immediately unsubscribed as well.



The simplest way to resolve this issue is to change the captured variable from one that is shared between both closures to a local that is unique to each closure, as shown here:



public void Unsubscribe()
{
    if (_request != null)
    {
        var localRequest = _request;
        ThreadPool.Enqueue(() => _service.Unsubscribe(localRequest));
    }
}

public void Subscribe(string key)
{
    Unsubscribe();

    if (!String.IsNullOrEmpty(key))
    {
        _request = new Request(key, handler);
        var localRequest = _request;
        ThreadPool.Enqueue(() => _service.Subscribe(localRequest));
    }
}



However, what I really want, in order to solve this type of problem going forward, is to develop something which is process aware, much like the Saga in NServiceBus. Of course my goal is not a long-running, persistable process like the NServiceBus Saga, but the process portion is what I'm looking at.

Wednesday, June 18, 2008

StackOverflow.com, Uh oh?

So I mentioned in my last post that I have begun listening to podcasts.  I have a lot of respect for both Jeff Atwood (Coding Horror) and Joel Spolsky.  So when I saw they were working together on a new project and publishing their conversations regarding their new product, I figured I had to listen.  They've posted around nine episodes now, but I've only had a chance to listen to the first couple so far.

Honestly, I'm a bit concerned by what I heard.  In their first episode I felt like they gave Microsoft technology based developers a really bad name.  Now it may well be largely true (which is probably part of my concern), but I wish there were more resources to correct this bad name, rather than encouraging it.

Basically, during the podcast you will hear something along the lines that Microsoft technology developers resort to Google-copy-and-paste development.  Microsoft technology developers are called pragmatic in that they don't care what the right solution is, how clean it is, or how well it works, so long as it does work. 

Now, I'm not saying that using Google to find answers to interesting problems is a bad thing.  I'm not even saying that if you ever copy and paste code you're a bad developer, but ideally the developer is learning from the blog post instead of just finding something which seems to work and moving on about their business.  Honestly, the samples you typically find in blog posts are not thorough enough for a true production deployment.  The point of these postings should be to educate people about new concepts, not to do their job for them.

Stackoverflow.com, from what I gathered, wants to replace Google as the first place you search when doing your job.  Now, they stated their goal is to be the first hit on Google for all of your searches, but really I think they would be happier if you went straight there instead of to Google.

Now, Stackoverflow is not Microsoft specific; it is meant to appeal to developers on all platforms.  However, they seem to be looking at the Microsoft-centric market as their main target.  Honestly, I think these guys will be successful.  They both have large followings, and I think there is huge demand for systems that can essentially do developers' jobs for them.  I just wish it looked like a resource that would help developers grow, instead of just allowing them to get by.

This all being said, the podcast is worth checking out.  These are two extremely intelligent people, and by listening to these podcasts you essentially get a look inside their heads and how they think.  I don't have to agree with Stackoverflow.com, or the topics they discuss; I'm still able to learn from it while I listen.  To both Joel and Jeff, thank you for posting your phone conversations as podcasts; it has been a great learning experience for me.

Deep Fried Goodness

So I realize I'm going to look like a bit of a sellout based on my procrastinating, but I really meant to write this earlier.  With my newly purchased iPhone and my increased amount of travel, I've recently started listening to Podcasts.  I honestly never saw the point before.  I rarely get an hour where I can really listen to a podcast.  I have always thought of reading as being a simpler and more effective mechanism for learning.  However, while traveling (especially on a plane) I find that a properly timed podcast can provide a lot of information that otherwise I wouldn't be able to consume.

I saw that Keith Elder (and Chris Woodruff) had a new podcast called Deep Fried Bytes, and I figured I may as well see what it is.  I'm actually one of those people that first met Keith because I recognized his picture from his blog.  Not really knowing what was good for podcasts (besides the obligatory Hanselminutes and DNR), I figured it was worth a shot.

After listening to their episode on interview war stories I was really impressed.  They had some really intelligent people talking about interviewing.  This was a topic which, I have to admit, did not immediately pique my interest.  But what you find is that when many smart people sit down to have a talk, something good will result.  Now, after I picked myself up off the floor from hearing a C# MVP call the using keyword "Obsolete", I realized that they have a winning format.

Plus, honestly, Keith Elder is the kind of guy who doesn't even need good material.  The way he talks and presents himself can be entertaining almost regardless of the topic.  If you listen to podcasts I recommend you go try these guys out.  If you don't listen to podcasts, I recommend giving them a shot anyway.

As for my contribution to the topic at hand, I suppose I had a little bit of a war story.  From the interviewing side, I do remember talking to one guy whose resume really looked great.  He had all sorts of great items written down from projects he had worked on in the past.  While inquiring about these items it became more and more clear that this person really didn't understand the concepts he claimed to have previously implemented.  After a few questions trying to get the candidate to talk about items on his resume, he eventually answered that he had nothing to do with those tasks.  They were all completed by other people, and he didn't understand how they worked.  He then apologized for writing misleading (or factually incorrect) items on his resume, and we ended the interview.

As an interviewee, I just remember the Microsoft interview I had.  When I was graduating from Case Western Reserve University I had an on-campus interview with a representative from Microsoft.  I had wanted to be a programmer since I was a small child (maybe 12 or 13 years old), and working for Microsoft was always a dream for me.  I had seen their campus (my family lived in Portland, OR at the time, and we saw their campus while visiting the Seattle area), and everything seemed like the perfect opportunity for a young geek in love with software.  From my interview, I really only remember a single technical question which I was asked.  Now keep in mind I wasn't claiming to be an expert at C or any other language at the time.  I had some professional experience working with the VB.NET beta, as well as some experience developing relatively simple applications in C, C++, Java, PHP, Basic, Perl and the early versions of C#.

Anyways, he asked me, "What is the fastest way to reverse a string in C?"  Ok, well, I am familiar with C, and I'm familiar with how strings work in C.  I understand pointers and pointer arithmetic, and immediately I think this must be pointer arithmetic.  Well, before I could even start talking about my response he says, "Oh, and it doesn't use pointer arithmetic."  Uh oh, at that point I pretty much froze.  I didn't know what to do.  I'm not a C expert.  I hadn't written any C code in a while, let alone overly complex C code, and I need to know the fastest way to reverse a string in that language?  Well, let's just say the rest of the interview apparently didn't go over so well, and I wasn't asked any other technical questions.  I probably didn't handle the curve ball so well, but that was that.

I still remember how dumb I felt when I later learned just how many people from one of my classes landed jobs at Microsoft.  In a class my senior year I remember the professor asking who was going to work for Microsoft, and there must have been at least 30 hands in the room that went up.  The sad part to me was that I WAS the curve buster in that class.  I remember taking a test where the curve was so bad that a 68 became an A, yet I had scored a 98.  I was trying to figure out where I went wrong at that point.  Oh well, that's just how it goes.

Well, enough about me and my interviewing war stories, you need to go have a listen to Deep Fried Bytes.

--John Chapman

Friday, May 30, 2008

Been Gone

I just wanted to make a post so people knew I was still alive.  I've been extremely busy over the past two months.  I haven't forgotten about this blog, and I really want to get back to writing.  I really want to get further on the Sudoku series.

Lately I've been focusing on the architecture of a Model-View-Presenter based WPF application.  The current complexity being analyzed is that users will have many duplicate views open at the same time, each working on a different item.  So picture 40 or more edit windows open at the same time, some for different items, some for the same items.  Some have similar child widgets, some have different child widgets.  Screens still need to talk to each other, but they have to talk to the appropriate screen.  Oh, and by the way, performance is absolutely crucial on this app. 

The standard Model-View-Presenter samples you see really don't address these issues.  Typically there is one main form which acts as a shell, and there is one region where a given view can live.  Not here; that view can be in any number of places, and there can be any number of them.  It makes the problem a bit more complex.  We have some solutions, but I can't really discuss them here.

But after looking at the standard Model-View-Presenter samples, it makes me feel more strongly that I really need to find the time to get back to the sudoku series, and hopefully I can produce a well-documented series about why each choice is made.

Sorry for the delay.  I will be back.

--John Chapman 

Thursday, April 3, 2008

New Adventures

I was originally going to write this on Tuesday, but I realized that with it being April 1st, people may have thought I was joking.  I have decided to leave Ryder.  Tomorrow, Friday April 4th will be my last day.

I have found an opportunity to work as an independent contractor building an application dealing with a domain that I am very passionate about.  I can honestly say that the application I'll be working on is one that I would love to use myself.  How often do we get to say things like that as developers?  I figured this was an opportunity I had to take.

Ryder has served me very well.  I first started at Ryder at the end of 2004.  I've had the chance to work with some great people.  I've also had the chance to interact with people of all levels in the company.  There were people at Ryder that took good care of me, helping obtain 24" monitors (to replace 17" CRTs for most people) to increase productivity as well as state of the art development machines.  I'll miss working with them.

I worked on the LMS team at Ryder, which is probably the largest externally facing application at Ryder (don't quote me on that).  I do know that they are backfilling my position, so if you are looking for a new adventure yourself, send them your resume.

--John Chapman

Saturday, March 15, 2008

Sudoku Part 5: A Look At The UI Architecture

Part 5 has been a long time coming.  Originally, the plan was to present an entire UI in this post, which I've come to realize is simply not feasible. 

I have been attempting to learn WPF as part of this exercise.  Learning an entirely new UI framework while putting together this piece has proven a little bit difficult, and as a result the progress of the application has slowed significantly.  So for the rest of these pieces, I'll try to break down the UI further to show the choices I make while it's being developed and why.

Today we're going to take a look at the architecture I plan to use for the WPF sudoku game and why: my simple implementation of MVP (Model-View-Presenter), the advantages of an MVP architecture, and how it fits with our goal of utilizing Behavior Driven Development to build our application.

Architecture Description

In Part 1 (Defining The Solver Behavior) I mentioned that we would be using Castle Windsor in this project.  One of the great things about Windsor, and really any good inversion of control framework, is that you can minimize the invasion of container logic in your code via nested dependency injection.  What I mean by this is that you can use the container to resolve a single service and it will automatically populate all dependencies for all children.  From our perspective, this means that the container logic can be restricted to just the UI, resulting in one less dependency throughout the application.

In previous parts we never made explicit use of Windsor; rather, we defined our inner dependencies via our available constructors.  For example, the RecursiveGenerator we built in Part 4 takes an ISolver in its constructor, which instructs Windsor to automatically supply the appropriate implementation for our generator.  This process is known as Constructor Injection.
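
As a rough sketch of what that means in practice (the actual registration lives in the configuration file shown later in this post), the only place we touch the container directly is at the top of the object graph:

// Windsor builds the whole graph: resolving IGenerator constructs the
// configured RecursiveGenerator and supplies its ISolver dependency for us.
IWindsorContainer container = new WindsorContainer(new XmlInterpreter());
IGenerator generator = container.Resolve<IGenerator>();
Puzzle puzzle = generator.Generate();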

I mentioned that we'll be using a Model-View-Presenter based architecture for our Sudoku game.  The Model-View-Presenter obviously breaks down to three pieces.  There are many variations of the MVP pattern (See Jeremy Miller's Build your own CAB series for some awesome overviews of various techniques), but the way we will use it for our purposes can be described as follows:

  1. View - The portion of the application that actually displays data to the user and listens for user input, such as button presses or mouse movement.  For this application we will be using WPF for the view.
  2. Presenter - Handles the UI behavior logic.  For example, the view shows data to the user, but the presenter determines what data should be shown and how it should be formatted.  Also, while the view listens to see if a mouse moves or a button is clicked, how that action is handled belongs to the presenter.  So it would be possible for our game to have many "New Game" buttons on the view, but the view would defer to the presenter to determine what should happen when any of those buttons are pressed.
  3. Model - Holds the data which the presenter operates on.  This would typically represent the business domain being addressed by our application.  In our case it will represent the data needed to track the game being played, such as the puzzle being worked on and the current status of the solution the user is building, as well as supporting information such as the game clock (or at least the time the game was started).

So, why go through all of this, especially for an application which seems as simple as a sudoku game?  Well, first of all, we are doing this as an exercise, not necessarily because this exact application demands it.  Secondly, and far more importantly, this separation of concerns allows us to keep logic more closely aligned to a purpose, and allows us to have fully testable behavior.  By moving our presentation behavior from the XAML "code-behind" to a separate presenter object, we can test our presentation without the need for an actual UI.  This is a major advantage over the old-fashioned code-behind approach.  We can now have a lot more faith that our application behaves as expected via our automated testing.

Additionally, MVP does not force us to be tied to the WPF view.  It would be possible for us to re-use the same model and presenter in a WinForms application (or potentially Silverlight and other web-based frameworks) with no change, only a different view.  This will be considered out of scope for the time being. 

WPF Implementation

First off, I want to make a disclaimer. I am not a WPF expert.  If there is a better way to implement this in WPF, I would love to hear about it.  I'm learning WPF as I go with this sample application.

My first idea was to create a base class which would initialize the appropriate presenter for that view.  I quickly ran into a bit of a problem with WPF.  It's valid to create your own base class instead of Window, but it adds a lot of complexity to the XAML, and Visual Studio didn't seem to appreciate it too much. 

Since I only needed the base class to resolve the appropriate presenter via my container, I figured it would be ok to just have the convention of every view calling a static Initialize method.  This keeps the XAML simple, and only adds one extra line of code to all views.

Application Implementation

I chose to initialize Windsor in the Application class, which is where the static Initialize also lives.  The Application class looks like the following:

public partial class App : Application, IContainerAccessor
{
    public static IWindsorContainer Container { get; private set; }

    IWindsorContainer IContainerAccessor.Container
    {
        get
        {
            return Container;
        }
    }

    protected override void OnStartup(StartupEventArgs e)
    {
        InitializeContainer();
        base.OnStartup(e);
    }

    private void InitializeContainer()
    {
        Container = new WindsorContainer(new XmlInterpreter());
    }

    public static void InitializePresenter<T>(T view)
    {
        Presenter<T> presenter = Container.Resolve<Presenter<T>>();
        presenter.Wireup(view);
    }
}

Note that this is as far as our container needs to go into our sudoku application.  When the application first starts, we initialize our container based on the application configuration file.  The InitializePresenter method is what we expect every view to call when the view is initialized.  The type parameter (T) is the interface type which the view itself implements; that interface is what the appropriate presenter will use in order to interact with the view.  Presenter<T> will serve as the abstract base class for all presenters.  So in this case we're asking Windsor to locate and instantiate the appropriate presentation logic for our particular view.  Once the presenter is located we call the base class method Wireup, which tells the presenter to begin observing any events exposed by the view.  This would also be the time that the presenter could perform any initialization logic.  The Presenter base class looks like the following:


public abstract class Presenter<T>
{
    public T View { get; private set; }

    public void Wireup(T view)
    {
        View = view;
        Initialize();
    }

    protected abstract void Initialize();
}

The View implementation would then contain the following:


protected override void OnInitialized(EventArgs e)
{
    base.OnInitialized(e);
    App.InitializePresenter<IBoardView>(this);
}

Obviously, in this case the view implements the IBoardView interface.  The corresponding presenter implementation would be defined as:


public class BoardPresenter : Presenter<IBoardView>
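
To make the shape of these pieces a little more concrete, here is a rough sketch of what the view interface and presenter might look like.  The members shown (NewGameRequested, ShowPuzzle) are placeholders of my own invention, not a final API; the real behavior will be driven out by specs in a later iteration.

// Hypothetical members for illustration only.
public interface IBoardView
{
    event EventHandler NewGameRequested;   // raised by any of the view's "New Game" buttons
    void ShowPuzzle(Puzzle puzzle);        // render the puzzle to the user
}

public class BoardPresenter : Presenter<IBoardView>
{
    private readonly IGenerator _generator;

    public BoardPresenter(IGenerator generator)   // constructor injection, just like the generator and solver
    {
        _generator = generator;
    }

    protected override void Initialize()
    {
        View.NewGameRequested += (sender, e) => View.ShowPuzzle(_generator.Generate());
    }
}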

To wrap up our initial implementation we would need to configure our Windsor Container in our configuration file as follows:


<configuration>
  <configSections>
    <section name="castle"
             type="Castle.Windsor.Configuration.AppDomain.CastleSectionHandler, Castle.Windsor" />
  </configSections>

  <castle>
    <components>
      <component
        id="generator"
        service="Sudoku.Bll.Generator.IGenerator, Sudoku.Bll"
        type="Sudoku.Bll.Generator.RecursiveGenerator, Sudoku.Bll" />
      <component
        id="solver"
        service="Sudoku.Bll.Solver.ISolver, Sudoku.Bll"
        type="Sudoku.Bll.Solver.RecursiveSolver, Sudoku.Bll" />
      <component
        id="boardpresenter"
        lifestyle="transient"
        service="Sudoku.UI.Presenter`1[[Sudoku.UI.Views.IBoardView, Sudoku.UI]], Sudoku.UI"
        type="Sudoku.UI.Presenters.BoardPresenter, Sudoku.UI" />
    </components>
  </castle>
</configuration>

In this case our presenter would have an IGenerator parameter in its constructor, which tells Windsor to construct the defined generator, which we've already seen requires a solver implementation.  Therefore these three specific components are necessary to fully configure our Windsor container for this example.  One item to note is that we have specified a lifestyle of transient for our presenter.  This signifies that every resolution should produce a new presenter instance (since the state of the associated model matters), whereas our generator and solver services have no state of their own and can safely use the default lifestyle of singleton.


At this point we have the basics defined for a Model-View-Presenter application in WPF.  We are using Castle Windsor to manage our dependencies.  Our View has no knowledge of our presenter implementation, and the presenter has no knowledge of our view implementation.  All aspects are configurable and easily changeable.  Plus, since our presenter does not know our view implementation it has opened the door to potential porting to an alternative UI framework.


In the next iteration I hope to examine the needed behavior of our presenter and potentially the implementations which satisfy that behavior.


--John Chapman

Sunday, February 3, 2008

Sudoku Part 4: Implementing A Generator


In our last segment (Defining The Generator Behavior) we looked at the specifications required to generate new Sudoku puzzles. It turned out that from a specification standpoint, there were not many expectations that we placed on the generator. We basically just want a solvable sudoku puzzle. We didn't much care how it worked, or what the puzzle looked like, so long as the result was uniquely solvable.

Algorithm

Today, we're going to create our very first sudoku generator, using a technique similar to the one we used to actually solve the puzzle. We're going to develop what I call the RecursiveGenerator.

In order to generate our puzzle, we'll first fill all board pieces with a valid sudoku value. While we're filling in each piece we'll ensure that we never create an invalid board, meaning that at no time will we have a board with duplicate row, column or region values. We'll utilize a recursive algorithm very similar to the RecursiveSolver we developed previously.

Now that we have a fully populated sudoku board with no duplicate row, column or region values, we'll start removing values from random pieces. In order to do this we'll create a list of all pieces on the board, assign a random double to each, and sort the list by that value. We'll iterate over the list, removing each piece's value from the board, until our ISolver implementation reports that the puzzle is no longer uniquely solvable. At that point we'll put back the last value we removed and consider the puzzle generated.

Test Class Implementation

The test class for our generator implementation is only minimally more complex than the one we used for the solver in an earlier section. Because this generator has a dependency on an ISolver, we need to provide an implementation which we have already tested so that our generator can operate successfully. For this we're obviously going to use our already tested RecursiveSolver implementation. Note that while it would be possible to use a different type of solver (or notably a mock solver), doing so would most likely require significant changes to the tests we are running, and this is not an exercise we're yet willing to go through.


[TestClass]
public class When_RecursiveGenerator_Generates_A_Puzzle : When_A_New_Puzzle_Is_Generated
{
    public When_RecursiveGenerator_Generates_A_Puzzle()
        : base(new RecursiveGenerator(new RecursiveSolver()))
    {
    }
}


Notice that the only new twist to this code is that the constructor for RecursiveGenerator now takes an implementation of ISolver.

RecursiveGenerator Source


public class RecursiveGenerator : IGenerator
{
    private Random rand;
    private ISolver solver;

    public RecursiveGenerator(ISolver solver)
    {
        rand = new Random();
        this.solver = solver;
    }

    public Puzzle Generate()
    {
        Puzzle puzzle = new Puzzle();

        //fully fill the puzzle with valid pieces
        GeneratePiece(GetMinimumPossibilityPiece(puzzle), puzzle);

        //now remove random values until the puzzle is no longer
        //uniquely solvable.
        SortedList<double, Piece> sortedPieces = new SortedList<double, Piece>();
        foreach (Piece piece in puzzle.Pieces)
        {
            sortedPieces.Add(rand.NextDouble(), piece);
        }

        Piece lastPiece = null;
        Value? lastValue = null;

        try
        {
            foreach (double key in sortedPieces.Keys)
            {
                lastPiece = sortedPieces[key];
                lastValue = lastPiece.AssignedValue;
                lastPiece.AssignedValue = null;
                solver.Solve(puzzle);
            }
        }
        catch (DuplicateSolutionFoundException)
        {
            //when we reach the point of a duplicate solution
            //we need to put the last value we stripped off back.
            lastPiece.AssignedValue = lastValue;
        }

        return puzzle;
    }

    private bool GeneratePiece(Piece piece, Puzzle puzzle)
    {
        if (piece == null)
        {
            return true;
        }

        HashSet<Value> potentials = CalculatePotentials(piece);

        do
        {
            piece.AssignedValue = null;

            // if there are no potential values for this piece then
            // we have reached the end of the line.
            if (potentials.Count == 0)
            {
                return false;
            }

            piece.AssignedValue = potentials.PopRandomItem();
        } while (!GeneratePiece(GetMinimumPossibilityPiece(puzzle), puzzle));

        return true;
    }

    private Piece GetMinimumPossibilityPiece(Puzzle puzzle)
    {
        Piece minimumPiece = null;
        int minimumPossibilities = Enum.GetValues(typeof(Value)).Length + 1;

        foreach (Piece piece in puzzle.Pieces)
        {
            int possibilities = CalculatePotentials(piece).Count;
            if (!piece.AssignedValue.HasValue && possibilities < minimumPossibilities)
            {
                minimumPossibilities = possibilities;
                minimumPiece = piece;
            }
        }

        return minimumPiece;
    }

    private HashSet<Value> CalculatePotentials(Piece piece)
    {
        HashSet<Value> potentials = GetPossibleValues();

        potentials.IntersectWith(GetSolutionValues(piece.Column));
        potentials.IntersectWith(GetSolutionValues(piece.Row));
        potentials.IntersectWith(GetSolutionValues(piece.Region));

        return potentials;
    }

    private HashSet<Value> GetSolutionValues(IEnumerable<Piece> pieces)
    {
        HashSet<Value> values = GetPossibleValues();
        foreach (Piece piece in pieces)
        {
            if (piece.AssignedValue != null)
            {
                values.Remove(piece.AssignedValue.Value);
            }
        }

        return values;
    }

    private HashSet<Value> GetPossibleValues()
    {
        HashSet<Value> values = new HashSet<Value>();
        foreach (Value val in Enum.GetValues(typeof(Value)))
        {
            values.Add(val);
        }
        return values;
    }
}


Notice that a lot of this code is very similar to that of the solver. This could probably be refactored, but for now we'll leave it as is.

Again, what do we have to show for our hard work? Our green lights of course!

Notes

For those readers with a keen eye, something may seem less than optimal regarding the approaches we took in this section. We'll get into what those issues may be, and what we can do about them, in future segments after we have built a UI to show these generated Sudoku puzzles.

Sunday, January 27, 2008

Sudoku Part 3: Defining The Generator Behavior


In part 1 (Defining The Solver Behavior) we examined how Behavior Driven Development could assist us in defining the specifications of a small portion of our application. Today we're going to take the same approach, but instead we're going to tackle the Generation portion of a Sudoku puzzle instead of the solving of the puzzle.

We're going to resume our conversations with our domain expert, and see what types of behaviors this new piece of functionality should include.

Needed Behavior

Just like last time, we'll have simple questions and simple answers which will later drive how we create and define our behavior based unit tests.

  • What is the result when a new puzzle is generated?
    • No column should have duplicate values
    • No row should have duplicate values
    • No region should have duplicate values
    • The puzzle should be uniquely solvable.
Really, that's it. We don't care too much about how the puzzle is generated, or what it looks like (at least at this point), so long as the puzzle itself can be solved.

Notice that making the puzzle solvable would actually be quite a tricky requirement if we had not already built a Sudoku solver in part 2 (Implementing A Solver). Lucky for us, since we have a fully tested solver which we have proven works, we can plug that implementation into our behavior tests for the generator in order to verify the output. Note that really any implementation of ISolver will suffice for our purposes, since our behavior tests are written against the interface and not any particular implementation.

The BDD Style Specifications

We're going to write our test class (or specification) in much the same way we did before. We try to match the discussion with the domain expert as closely as we can, and then write relatively simple tests which we can use to ensure our implementations provide the correct behavior.


[TestClass]
public abstract class When_A_New_Puzzle_Is_Generated
{
    protected Puzzle puzzle;
    private IGenerator generator;
    private ISolver solver;

    protected When_A_New_Puzzle_Is_Generated(IGenerator generator)
    {
        this.generator = generator;
        this.solver = new RecursiveSolver();
    }

    [TestInitialize]
    public void Initialize()
    {
        puzzle = generator.Generate();
    }

    [TestMethod]
    public void No_Column_Should_Have_Duplicate_Values()
    {
        foreach (Column col in puzzle.Columns)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in col)
            {
                if (piece.AssignedValue.HasValue)
                {
                    Assert.IsFalse(foundValues.Contains(piece.AssignedValue.Value));
                    foundValues.Add(piece.AssignedValue.Value);
                }
            }
        }
    }

    [TestMethod]
    public void No_Row_Should_Have_Duplicate_Values()
    {
        foreach (Row row in puzzle.Rows)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in row)
            {
                if (piece.AssignedValue.HasValue)
                {
                    Assert.IsFalse(foundValues.Contains(piece.AssignedValue.Value));
                    foundValues.Add(piece.AssignedValue.Value);
                }
            }
        }
    }

    [TestMethod]
    public void No_Region_Should_Have_Duplicate_Values()
    {
        foreach (Region region in puzzle.Regions)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in region)
            {
                if (piece.AssignedValue.HasValue)
                {
                    Assert.IsFalse(foundValues.Contains(piece.AssignedValue.Value));
                    foundValues.Add(piece.AssignedValue.Value);
                }
            }
        }
    }

    [TestMethod]
    public void The_Puzzle_Should_Be_Uniquely_Solvable()
    {
        Assert.IsNotNull(solver.Solve(puzzle));
    }
}

Tuesday, January 15, 2008

Sudoku Part 2: Implementing A Solver


In the previous posting (Defining The Solver Behavior) we discussed the needed behavior in order to build a working Sudoku solver. Today we're going to build our first implementation of a solver which exhibits the previously described behaviors.

Algorithm

For the first solver I took a relatively simple approach. It's not quite a brute-force solver, but it is close. I call the first implementation the RecursiveSolver, since it solves the puzzle using a recursive algorithm.

In order to solve the puzzle we'll first analyze the board position to determine the number of candidate values for each puzzle piece. The set of candidate values will be the full list of values minus the set of values in the piece's row, minus the set of values in the piece's column, and then minus the set of values in the piece's region.

Once we have computed the candidate values for each piece, we'll record the piece with the fewest candidate values. We'll then loop over that piece's candidate values in a random order, assigning each one in turn, recalculating the piece with the minimum candidates given that move, and continuing the process recursively.

If we find a board position where there are no longer any pieces without an assigned value, we know we have found a solution. If we have already recorded a valid solution and we then find a second one, we know we have an invalid puzzle.

If we find a board position where there is a piece with 0 potential values, we know we have traversed a path which will not lead to a solution.

If we traverse all possible paths of the puzzle and cannot find a solution, then we know we have an unsolvable puzzle, as our algorithm will have examined every possible choice a solver could make.

Broken into steps, our algorithm looks like the following:
  1. Find the piece with the minimum number of candidate values.
  2. If a piece is found with no candidate values, return without a solution.
  3. If no unassigned pieces remain, return the solution; we are done.
  4. Loop over each candidate value.
  5. Assign the value to the solution.
  6. Return to step 1 given the new state.
  7. If we found a solution, verify that we have not previously found a solution.
  8. Loop to step 4 until no other candidates are available.
  9. Return the solution if it was found.
Test Class Implementations

In the last installment we created abstract behavior tests since we did not have a specific implementation of a solver. We had a set of behaviors we wanted all potential solver implementations to exhibit. So first we need to create concrete test classes which inherit from our abstract ones.

First, let's look at what we need to do for the case when a puzzle is solved:


[TestClass]
public class When_RecursiveSolver_Solves_A_Puzzle : When_A_Puzzle_Is_Solved
{
    public When_RecursiveSolver_Solves_A_Puzzle()
        : base(new RecursiveSolver())
    {
    }
}


That's pretty simple, isn't it? Now, this works fine with the MSTest tool; I have not actually verified that this technique works in other .NET unit testing frameworks, but I believe it does. We rely on inheritance to pull all of our expectations from our base test class.

We can continue this pattern for all other behavior cases as well. Take special note of the fact that we are constructing the instance of the solver we want to test in the constructor of the test class. This is needed since our base class takes an ISolver to use for its test cases.

Solver Source

Now let's look at the good stuff. What does the solver code look like?


public class RecursiveSolver : ISolver
{
    private Random rand;

    public RecursiveSolver()
    {
        this.rand = new Random();
    }

    #region ISolver Members

    public Solution Solve(Puzzle puzzle)
    {
        Solution solution = new Solution(puzzle);

        solution = SolvePiece(FindMinimumCandidatePiece(solution), solution);

        return solution;
    }

    #endregion

    private Solution SolvePiece(Piece piece, Solution solution)
    {
        //when the provided piece is null we know there are no remaining
        //pieces to be filled in, indicating that the puzzle is solved.
        if (piece == null)
        {
            return solution;
        }

        //we clone the current solution to ensure that we always provide
        //later steps with the same configuration. Otherwise once the item
        //recursed the values within the solution reference would have changed.
        solution = (Solution)solution.Clone();
        Solution returnSolution = null;
        Solution foundSolution = null;
        HashSet<Value> candidates = CalculateCandidates(piece, solution);

        //loop over all possible choices for this piece. If we finish
        //looping and no solution was found the earlier steps will have
        //a null solution indicating the path was incorrect.
        while (candidates.Count > 0)
        {
            solution.Values[piece] = candidates.PopRandomItem();
            returnSolution = SolvePiece(FindMinimumCandidatePiece(solution), solution);

            if (returnSolution != null)
            {
                if (foundSolution != null)
                {
                    throw new DuplicateSolutionFoundException("Provided puzzle is invalid.");
                }
                foundSolution = returnSolution;
            }
        }

        return foundSolution;
    }

    private Piece FindMinimumCandidatePiece(Solution solution)
    {
        Piece foundPiece = null;
        int minimumCandidates = 10;

        foreach (Piece piece in solution.Puzzle.Pieces)
        {
            if (!solution.Values.ContainsKey(piece))
            {
                HashSet<Value> candidates = CalculateCandidates(piece, solution);
                if (candidates.Count < minimumCandidates)
                {
                    minimumCandidates = candidates.Count;
                    foundPiece = piece;

                    //If we found a piece with only 1 candidate we can stop looking
                    //since 1 is the minimum possible number of candidates for a valid piece.
                    if (minimumCandidates == 1)
                    {
                        break;
                    }
                }
            }
        }

        return foundPiece;
    }

    private HashSet<Value> CalculateCandidates(Piece piece, Solution solution)
    {
        HashSet<Value> candidates = new HashSet<Value>((Value[])Enum.GetValues(typeof(Value)));

        candidates.ExceptWith(GetAssignedValues(piece.Column, solution));
        candidates.ExceptWith(GetAssignedValues(piece.Row, solution));
        candidates.ExceptWith(GetAssignedValues(piece.Region, solution));

        return candidates;
    }

    private HashSet<Value> GetAssignedValues(IEnumerable<Piece> pieces, Solution solution)
    {
        HashSet<Value> values = new HashSet<Value>();

        foreach (Piece piece in pieces)
        {
            if (solution.Values.ContainsKey(piece))
            {
                values.Add(solution.Values[piece]);
            }
        }

        return values;
    }
}


Notice that there is actually a lot of logic within this solver. While it could be argued that some of this logic deserves its own behavior class, at this time I'm going to consider it premature optimization, and leave it for future refactoring.

For example the ability to calculate potential values isn't something which is specific to a solver, or at least this solver. This could be usable elsewhere. If/When it becomes valuable to break it out we'll do it.

Also of note is the PopRandomItem method on the HashSet<>. PopRandomItem is obviously an extension method here, just don't tell anyone! This method randomly selects an item from the set and then removes it so it won't be selected next time. The implementation is as follows:


public static class HashSetExtension
{
    private static Random rand;

    static HashSetExtension()
    {
        rand = new Random();
    }

    public static T PopRandomItem<T>(this HashSet<T> set)
    {
        List<T> list = new List<T>(set);
        // Next's upper bound is exclusive, so any index in the list can be chosen.
        T item = list[rand.Next(list.Count)];
        set.Remove(item);
        return item;
    }
}


What do we have to show for all of our effort? Well a pretty screen with lots of green of course!



Notes

Note that we could have included additional behavior tests regarding how this particular solver should behave. At this time I don't deem it necessary, since our only real requirements are that the solver gives a valid solution when one is available, and reports when no solution is available.

As previously mentioned, we could break some pieces of this solver out into their own classes. We'll investigate those possibilities at a later point.

Look for the next section where we'll discuss our Sudoku puzzle generator!

-- John Chapman

Saturday, January 5, 2008

Sudoku Part 1: Defining The Solver Behavior


In the introduction I talked briefly about my goals for this series. I wanted to create a Sudoku generator which could generate puzzles for both myself and a theoretical automatic solver. There are many places to find such tools on the internet, but my goal was to use this as an exercise to show how someone could use various techniques and tools such as Behavior-Driven Development and the Castle Windsor project. Today, we'll start with Behavior-Driven Development.

Needed Behavior

What are we trying to accomplish here? Let's start with the Sudoku solver first. Let's pretend that I am having a conversation with a customer who is asking me to write this Sudoku solver for them. Let's also say that I am not familiar with sudoku puzzles. First the customer explains to me that a Sudoku puzzle is a 9x9 grid with nine 3x3 sub-regions, where each cell can hold a value from 1-9. He also explains that the puzzle begins with some of the cells (or pieces) already filled in for us, and the rest is for the solver to fill in.

So we have the following dialog:

  • What happens when the puzzle is solved?
    • All cells should be assigned a value.
    • No column should have duplicate values.
    • No row should have duplicate values.
    • No region should have duplicate values.
The customer explains to me that if the solution meets the provided criteria, we are guaranteed a valid solution for the given puzzle. But then I start thinking. Is it possible to be given a Sudoku puzzle which has many possible solutions? I think that if I don't place any pieces, clearly there would be many possible solutions. So this leads to the following:
  • What happens when a puzzle has multiple solutions?
    • The puzzle should be reported as invalid.
Ok, so now I know what happens if a puzzle has many solutions, but what if the puzzle has no solution?
  • What happens when a puzzle has no solution?
    • There is no solution for the puzzle.
Ok, so my customer gave me a pretty weird look on that one, but there is nothing wrong with asking, right?

Note that my customer has not told me any specifics about how he or she would like the puzzle to be solved, only that the puzzle should be solved and what the result of a valid solved puzzle would be.

Letting The Behavior Drive Our Development

Ok, so now I'm back in the office, ready to begin work on the Sudoku solver for my customer. Where do I begin? This is where Behavior-Driven Development (BDD) comes into play. Behavior-Driven Development is basically a Test-Driven Development (TDD) technique where your tests are designed around the needed behaviors of your software. This should result in tests which are far easier to refactor, since most changes result in the complete removal or replacement of tests instead of the piecemeal editing of method-based tests.

Additionally, by wording your tests in such a way that they portray the resulting behavior of the software, the test results become easy for our customers to read in order to understand how the software is working.

Let's look at the resulting unit tests to show what I'm talking about. First let's look at solving a valid puzzle. (*Warning* These tests are currently written with MSTest; I will most likely change to NUnit or MbUnit before releasing the entire source code.)


[TestClass]
public abstract class When_A_Puzzle_Is_Solved
{
    private Sudoku.Solver.ISolver solver;
    private Puzzle puzzle;
    private Solution solution;

    public When_A_Puzzle_Is_Solved(Sudoku.Solver.ISolver solver)
    {
        this.solver = solver;
    }

    [TestInitialize]
    public void Initialize()
    {
        CreateSolvablePuzzle();
        solution = solver.Solve(puzzle);
    }

    private void CreateSolvablePuzzle()
    {
        puzzle = new Puzzle();

        puzzle.Rows[0][1].AssignedValue = Value.Five;
        puzzle.Rows[0][2].AssignedValue = Value.Four;
        puzzle.Rows[0][7].AssignedValue = Value.Two;
        puzzle.Rows[1][0].AssignedValue = Value.Three;
        puzzle.Rows[1][3].AssignedValue = Value.Four;
        puzzle.Rows[2][0].AssignedValue = Value.Seven;
        puzzle.Rows[2][3].AssignedValue = Value.Eight;
        puzzle.Rows[2][6].AssignedValue = Value.Three;
        puzzle.Rows[2][7].AssignedValue = Value.Five;
        puzzle.Rows[3][1].AssignedValue = Value.Seven;
        puzzle.Rows[3][2].AssignedValue = Value.One;
        puzzle.Rows[3][5].AssignedValue = Value.Five;
        puzzle.Rows[3][8].AssignedValue = Value.Three;
        puzzle.Rows[4][0].AssignedValue = Value.Six;
        puzzle.Rows[4][3].AssignedValue = Value.Three;
        puzzle.Rows[4][5].AssignedValue = Value.Eight;
        puzzle.Rows[4][8].AssignedValue = Value.Nine;
        puzzle.Rows[5][0].AssignedValue = Value.Five;
        puzzle.Rows[5][3].AssignedValue = Value.Nine;
        puzzle.Rows[5][6].AssignedValue = Value.Four;
        puzzle.Rows[5][7].AssignedValue = Value.Seven;
        puzzle.Rows[6][1].AssignedValue = Value.Eight;
        puzzle.Rows[6][2].AssignedValue = Value.Five;
        puzzle.Rows[6][5].AssignedValue = Value.Four;
        puzzle.Rows[6][8].AssignedValue = Value.One;
        puzzle.Rows[7][5].AssignedValue = Value.Three;
        puzzle.Rows[7][8].AssignedValue = Value.Six;
        puzzle.Rows[8][1].AssignedValue = Value.Six;
        puzzle.Rows[8][6].AssignedValue = Value.Eight;
        puzzle.Rows[8][7].AssignedValue = Value.Nine;
    }

    [TestMethod]
    public void All_Pieces_Should_Have_A_Value()
    {
        foreach (Piece piece in puzzle.Pieces)
        {
            Assert.IsTrue(solution.Values.ContainsKey(piece));
        }
    }

    [TestMethod]
    public void No_Column_Should_Have_Duplicate_Values()
    {
        foreach (Column col in puzzle.Columns)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in col)
            {
                Assert.IsFalse(foundValues.Contains(solution.Values[piece]));
                foundValues.Add(solution.Values[piece]);
            }
        }
    }

    [TestMethod]
    public void No_Row_Should_Have_Duplicate_Values()
    {
        foreach (Row row in puzzle.Rows)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in row)
            {
                Assert.IsFalse(foundValues.Contains(solution.Values[piece]));
                foundValues.Add(solution.Values[piece]);
            }
        }
    }

    [TestMethod]
    public void No_Region_Should_Have_Duplicate_Values()
    {
        foreach (Region region in puzzle.Regions)
        {
            List<Value> foundValues = new List<Value>();
            foreach (Piece piece in region)
            {
                Assert.IsFalse(foundValues.Contains(solution.Values[piece]));
                foundValues.Add(solution.Values[piece]);
            }
        }
    }
}



Notice how closely these tests match the above dialog I had with the fictional customer. Anyone, including the customer, should be able to read that test fixture (especially as formatted in a proper runner) and verify that the behavior is correct. Plus, the test methods themselves are very short, easy to read and easy to verify.

Behavior-Driven Development works by first defining the scenario. The scenario becomes the test class itself, with the test initialization being the place where we put our objects into the appropriate state for the scenario. Each method then becomes a validation of what happens in that scenario. The method and class names are written out as words so it is easy to tell what behaviors are being tested.

Note that the initialization logic of this test builds a valid Sudoku puzzle and then asks the solver to solve it. As proof of a valid solution I have provided the puzzle and the associated solution in red below.


There are a few things which need to be explained in this code.

First, I chose to use an enumeration for puzzle values as a way to limit the values of the puzzle. This actually looks very lame when I read it, having the names of numbers represent the numbers themselves, and it may be refactored at a later point, but for now it helped me ensure no 0s or 10s showed up (although I have learned that enumerations are pretty lame and nothing stops you from assigning an invalid integer value to the enumeration, but that's a post for another day).
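
For reference, the enumeration itself is nothing more exotic than named values for 1 through 9; something along these lines (the explicit underlying numbers are my own guess):

public enum Value
{
    One = 1, Two, Three, Four, Five, Six, Seven, Eight, Nine
}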

Second, notice that I used an interface for my solver, not a specific solver. The reason for this is really simple. I don't know at this point how the puzzle will be solved, only what the result of a solver should be. It doesn't matter how the internal solver works at this point, provided it sticks to the interface and this root behavior.

I also made the choice to have a Puzzle class and a separate Solution class. Basically this allows a puzzle to remain free of solution information, and would theoretically allow someone to make Puzzle classes persistable without having to worry about updating a puzzle while creating a solution for it.
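
A rough sketch of that split, inferred from how the tests and the later solver use it (the real classes may differ in the details): the Puzzle keeps only the given pieces, while a Solution layers assigned values on top of it.

public class Solution : ICloneable
{
    public Puzzle Puzzle { get; private set; }
    public IDictionary<Piece, Value> Values { get; private set; }

    public Solution(Puzzle puzzle)
    {
        Puzzle = puzzle;
        Values = new Dictionary<Piece, Value>();

        // My guess: seed the solution with the puzzle's givens, since the
        // solver only consults Values when computing candidates.
        foreach (Piece piece in puzzle.Pieces)
        {
            if (piece.AssignedValue.HasValue)
            {
                Values[piece] = piece.AssignedValue.Value;
            }
        }
    }

    public object Clone()
    {
        Solution copy = new Solution(Puzzle);
        foreach (KeyValuePair<Piece, Value> pair in Values)
        {
            copy.Values[pair.Key] = pair.Value;
        }
        return copy;
    }
}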

Now that we have our behaviors defined for what happens with a valid puzzle, let's move on to the other cases which were discussed in the dialog with the customer.


[TestClass]
public abstract class When_A_Puzzle_Has_Multiple_Solutions
{
    private ISolver solver;
    private Puzzle puzzle;

    public When_A_Puzzle_Has_Multiple_Solutions(ISolver solver)
    {
        this.solver = solver;
    }

    [TestInitialize]
    public void Initialize()
    {
        CreateInvalidPuzzle();
    }

    private void CreateInvalidPuzzle()
    {
        puzzle = new Puzzle();

        puzzle.Rows[4][4].AssignedValue = Value.Five;
    }

    [TestMethod]
    [ExpectedException(typeof(DuplicateSolutionFoundException))]
    public void The_Solver_Should_Report_An_Invalid_Puzzle()
    {
        solver.Solve(puzzle);
    }
}


Note that in this scenario, many valid sudoku boards could be created when only one piece is filled in. As such we are saying that whenever a puzzle with many solutions is provided we expect any solver to throw an exception.

I'm not actually a big fan of this approach, but I did it anyway here. Basically, checking for duplicate solutions is something which I think would actually be useful for business logic. Therefore I foresee cases where I could wind up using this exception for flow control, which I am opposed to. However, since at this point I just need a failure, it works fine, and I can always refactor it later.

There is still one case remaining from the solver discussion:



[TestClass]
public abstract class When_A_Puzzle_Has_No_Solution
{
    private Puzzle puzzle;
    private ISolver solver;

    public When_A_Puzzle_Has_No_Solution(ISolver solver)
    {
        this.solver = solver;
    }

    [TestInitialize]
    public void Initialize()
    {
        puzzle = new Puzzle();

        puzzle.Rows[0][0].AssignedValue = Value.Five;
        puzzle.Rows[0][1].AssignedValue = Value.One;
        puzzle.Rows[0][2].AssignedValue = Value.Three;
        puzzle.Rows[0][3].AssignedValue = Value.Two;
        puzzle.Rows[0][4].AssignedValue = Value.Nine;
        puzzle.Rows[0][5].AssignedValue = Value.Four;
        puzzle.Rows[0][6].AssignedValue = Value.Eight;
        puzzle.Rows[0][7].AssignedValue = Value.Seven;
        puzzle.Rows[0][8].AssignedValue = Value.Six;
        puzzle.Rows[1][0].AssignedValue = Value.Eight;
        puzzle.Rows[1][1].AssignedValue = Value.Two;
        puzzle.Rows[1][2].AssignedValue = Value.Seven;
        puzzle.Rows[1][3].AssignedValue = Value.Five;
        puzzle.Rows[1][4].AssignedValue = Value.Six;
        puzzle.Rows[1][5].AssignedValue = Value.One;
        puzzle.Rows[1][6].AssignedValue = Value.Three;
        puzzle.Rows[1][7].AssignedValue = Value.Four;
        puzzle.Rows[1][8].AssignedValue = Value.Nine;
        puzzle.Rows[2][0].AssignedValue = Value.Nine;
        puzzle.Rows[2][1].AssignedValue = Value.Six;
        puzzle.Rows[2][2].AssignedValue = Value.Four;
        puzzle.Rows[2][3].AssignedValue = Value.Seven;
        puzzle.Rows[2][4].AssignedValue = Value.Eight;
        puzzle.Rows[2][5].AssignedValue = Value.Three;
        puzzle.Rows[2][6].AssignedValue = Value.One;
        puzzle.Rows[2][7].AssignedValue = Value.Two;
        puzzle.Rows[2][8].AssignedValue = Value.Five;
        puzzle.Rows[3][0].AssignedValue = Value.Six;
        puzzle.Rows[3][1].AssignedValue = Value.Five;
        puzzle.Rows[3][2].AssignedValue = Value.One;
        puzzle.Rows[3][3].AssignedValue = Value.Three;
        puzzle.Rows[3][4].AssignedValue = Value.Seven;
        puzzle.Rows[3][5].AssignedValue = Value.Nine;
        puzzle.Rows[3][6].AssignedValue = Value.Two;
        puzzle.Rows[3][7].AssignedValue = Value.Eight;
        puzzle.Rows[3][8].AssignedValue = Value.Four;
        puzzle.Rows[4][0].AssignedValue = Value.Two;
        puzzle.Rows[4][1].AssignedValue = Value.Eight;
        puzzle.Rows[4][2].AssignedValue = Value.Nine;
        puzzle.Rows[4][3].AssignedValue = Value.One;
        puzzle.Rows[4][4].AssignedValue = Value.Five;
        puzzle.Rows[4][5].AssignedValue = Value.Six;
        puzzle.Rows[4][6].AssignedValue = Value.Seven;
        puzzle.Rows[4][7].AssignedValue = Value.Three;
    }

    [TestMethod]
    public void No_Solution_Should_Be_Provided()
    {
        Assert.IsNull(solver.Solve(puzzle));
    }
}


Note that for this scenario I took the puzzle from Part 0 which proved to be unsolvable and used it as my test puzzle.

Hopefully this gives you a good idea of how to define your behaviors. In the next post we'll look at creating concrete tests for an actual implementation of a solver that exhibits the behaviors we discussed here. Note that we discussed the behavior unit tests first since when developing with Behavior-Driven Development, the behavior tests come first!

--John Chapman

Wednesday, January 2, 2008

CodeMash 2008 Here I Come!



Well, it's official: I've gone and registered for CodeMash 2008! I'm really looking forward to this conference. If anyone who reads this is going to attend, let me know; maybe we can meet up at some point.

These are just some of the interesting topics I'm looking forward to.

  1. LinqTo: Implementing IQueryProvider (Bill Wagner)
    • Has anyone out there taken a look at what it takes to implement your own LINQ provider? It's a major pain in the rear! Now, I don't know to what depths Bill will go, but any good overview would really be helpful. For this session I'm not really looking for a how-to-implement guide, since I doubt I'll ever work on my own custom LINQ provider; it's more to help me get a better grasp of how LINQ works under the covers so I can be better at consuming it!
  2. Putting the Fun into Functional with F# (Dustin Campbell)
    • Ok, so I've kind of been watching the boat sail on all of the popular dynamic languages. I've dabbled in the past with Python, but only very slightly. I've written a modest amount of JavaScript to get the hang of the ideas behind it, but I think a good solid introduction to the up-and-coming functional first-class citizen of .NET is in order. Let's get a good introduction to all of the fuss. I should know enough Python to follow along with the presentation. Again, this isn't something I plan to use on a day-to-day basis, but rather something to help me understand how the other half lives, and to help me understand why I make the choices that I do.
  3. Introduction To Behavior Driven Development (Andrew Glover)
    • This one concerns me a little bit with the introduction tag, but I'm starting to become a big fan of Behavior Driven Development (BDD). Any additional insight into the thought processes it takes to implement it properly would be beneficial to me. Plus, the fact that it is being shown with a Java implementation might help me to think a little bit outside the box while implementing BDD myself in the C# world.
  4. Story-Driven Testing (Jim Holmes)
    • This one basically belongs with the prior item. It's just an area which I want to learn more about. I believe this one should have a .NET focus (not that it is even necessary for the topic).
  5. Introducing Castle (Jay R. Wren)
    • Castle is one of those projects I've grown to enjoy. I still won't use some parts of the project (like ActiveRecord), but that is also the beauty of Castle: you don't have to. You take the pieces you want. I've grown to enjoy MicroKernel/Windsor, and while I've never actually used MonoRail, it actually makes a lot of sense to me. The Microsoft MVC framework actually helped me realize just how good an MVC framework already existed for the .NET platform. So, while the term Introducing may be a slight put-off, there is still a chance to see some items in a different light. Plus, I've met Jay in the past, and he's a sharp guy. I would like to see a full presentation on what he has to say about the subject.
  6. Introduction To Workflow Foundation (Keith Elder)
    • On this one I don't really mind the Introduction part. I really don't know much about implementing WF. I understand you need to run the engine yourself, and I've seen the GUI used to create workflows, but really I don't know much beyond that. It's something which has seemed like it would provide benefits to me and my projects in the past, but I haven't ever gotten enough expertise to know for sure if it was something to invest in or not. Hopefully this will help me down that path.
  7. Rails: A Peek Under The Covers (Brian Sam-Bodden)
    • I'm going a bit into the deep end on this one. I understand only the absolute minimum of Ruby, yet who hasn't heard of Ruby on Rails? This may give me better insight into why it has become such a popular framework. I'm familiar with the ideas behind what the Rails framework offers, but seeing how it works with Ruby should be interesting.
And this is just a short list. I haven't actually checked to see what the times are for these sessions; let's just hope I am able to get to all of them.

Hopefully I'll see you all at CodeMash!

--John Chapman
