Sunday, March 8, 2009

YAGNI Abuse

YAGNI "You Ain't Gonna Need It" is one of those principles I usually say I hate.  Truthfully, I suppose I don't hate the YAGNI principle, I think I typically hate when people try to pull it out in a discussion/argument.  I feel that sometimes people use YAGNI either as an excuse to write poor code or as an excuse for why they don't need to improve as a developer.

Somewhat recently, Jeremy Miller wrote a post titled "A Quick Example of YAGNI / Simplest Thing Possible In Action" which completely annoyed me.  I think it got under my skin so much because of how highly I regard Jeremy.  I spoke with some people about it after a recent Great Lakes Area .NET User Group meeting.  To be fair, after that post and after my discussions, Jeremy did write a second post titled "Update on the YAGNI Episode" in which he explains that he was wrong in the prior post.  I probably should have written this post earlier.

In Jeremy's original post he discusses an argument he had with another member of his team about how to build a simple web layout: a 2x2 arrangement of panels which would hold various content.  Jeremy used YAGNI to argue that the layout should be done with an HTML table with 2 rows and 2 columns.  It's relatively simple, it works, so why complicate things?  He argued that this was the simplest thing which would satisfy the current requirement.

To me, this shows that Jeremy is not an expert at HTML design, and was trying to use YAGNI to defend that.  I think there should be a new principle for Jeremy, and for others trying to use YAGNI this way.  Let's call it "Use What You Know"; maybe we could call it WYK or something like that.  Jeremy knows how to lay out web sites using tables, so if he's working on a project where he needs to lay out panels, that's probably what he should do.  However, when someone else on the team understands a superior approach, one which is more flexible and easier to replace in the future, that's what the team should use.

To me, YAGNI needs to be about features, not the technical approach.  The technical approach needs to be clean and easily replaceable.  This is a big reason why I'm such a big fan of the Single Responsibility, Open-Closed and Inversion of Control principles.  Used correctly, these principles (not meaning to exclude many other solid principles) result in maintainable and easily replaceable code.  Code which follows these principles is not necessarily hard to write (sometimes it does involve a different way of thinking), and it doesn't typically take any more time, but the benefits really pay off when you actually do "Need It" and have to change a piece of your software.  I have previously written extremely simple services which served only the current functionality, only to have to fully replace them with more advanced implementations later.  That was fine: swapping the old service for the new one was a simple switch which didn't require breaking the rest of the application.
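
For illustration, here's a minimal sketch of the kind of swap I mean.  The pricing service and all of its names are hypothetical, not from any real project:

// Hypothetical example: the pricing rule is the part likely to change,
// so it lives behind an interface.
public interface IPricingService
{
    decimal GetPrice(string sku);
}

// The simplest thing that satisfies today's requirement.
public class FlatPricingService : IPricingService
{
    public decimal GetPrice(string sku)
    {
        return 10m; // every item costs the same, for now
    }
}

// When the real requirement finally arrives, only this implementation is new;
// the rest of the application keeps depending on IPricingService.
public class TieredPricingService : IPricingService
{
    public decimal GetPrice(string sku)
    {
        return sku.StartsWith("PREMIUM") ? 25m : 10m;
    }
}

Swapping FlatPricingService for TieredPricingService is a one-line change wherever the service is constructed or registered; nothing else in the application has to break.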

That's how YAGNI should be used.  Only build the functionality that you need, but leave it easy to enhance or even replace. 

In many cases I feel that developers use YAGNI to argue against Inversion of Control or the Open-Closed principle.  They argue you don't need that flexibility; you don't need to replace your services easily; it's a simple application, it works without it, so why are you pushing this?  To me that's using YAGNI to fight continuous improvement.  As developers we should strive to constantly hone our craft, and we should always look for ways to do our jobs better.  It's the same argument I make all the time: doing things the "clean" way is usually no more work, and no more time consuming, than the "quick and dirty" way.  However, not understanding how to do it cleanly does make the "quick and dirty" way significantly easier and faster.

Getting back to the "Use What You Know" point: I'm not advocating that every time a problem comes up you research the cleanest way to solve it.  I'm advocating that you find time to learn new approaches to solving software development problems before you have to use them on the job.  Employers typically require fast turnaround on the problems you face, so deliver the solution that you know how to deliver; that gives the employer or customer a working solution fastest and at the least cost.  However, if we spend time in the preceding months learning new techniques and solutions, the employer receives superior products while developers get to enjoy working on a cleaner code base.

Ideally you find an employer that allows time up front for you to learn and continuously improve.  That way, when the problem comes, you are better suited to solve it with a larger toolbox.  Some employers do not allow time for this, and instead developers are forced to improve on their own time.  Regardless, I would advocate that we all put in the extra effort to learn on our own time.  Those with employers that pay for this time are lucky, but that doesn't let the rest of us off the hook.

Tuesday, March 3, 2009

Spoke at GANG

I finally gave my talk at GANG.  I was originally scheduled to speak in September (Speaking at GANG), but came down with food poisoning which prevented me from speaking.  I gave that same talk, an Introduction to Rhino Mocks, last month in February.


I really enjoyed giving the talk.  The crowd was great, and overall I had a good time.  I hope others enjoyed the talk and were able to learn about how Rhino Mocks may come in handy with their software testing needs.


At the end of the talk I said I would post my example application online.  I have decided to place the file on savefile.com.  You will have to put up with an ad in order to download the file.  Download the sample here.


In the near future, I plan to post in detail about some parts of the application.  If readers would like an explanation of any piece in particular, please leave a comment on this post.

Wednesday, September 3, 2008

Speaking At GANG

I'm going to be speaking at the Great Lakes Area .NET User Group in Southfield on September 17th.  I will be giving a talk about Rhino Mocks: the how, when, and why of using it.

I am still putting the presentation together, so if you will be in the area and would like me to cover something specific, leave a comment.  Even if you won't be attending, let me know if you think there are points which would be helpful for the people there.

Monday, July 21, 2008

Why I Am Sick Of Hearing About Deferred Execution

Since the announcement of LINQ we've heard plenty about "deferred execution", a term that gets thrown around like it's some sort of LINQ magic feature.  Personally, I think I need to come up with my own term and claim it's something awesome too.  I'm really tired of hearing about it.

On July 15th I went to a Great Lakes Area .NET Users Group talk by Bill Wagner about extension methods and how to make proper use of them.  Now, don't get me wrong, I have a lot of respect for Bill, and I don't mean to criticize him in any way.  So Bill, if you read this, I really don't mean any disrespect.  It was simply your use of the term that made me recall my feelings on this topic.

Bill was doing a demo where he showed various LINQ extension methods, and showed that by making use of them we were able to harness the power of DEFERRED EXECUTION!

The first example Bill showed was Enumerable.Range(Int32, Int32), which returns an IEnumerable<Int32>.  Bill then showed that when he called the Take() extension method it only iterated through the first x items in the range, not the full list of items identified by the range.  Ok yes, this is true.  We didn't have to create a new list and populate it with a million items just to pull the first 5 items.
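
That first example looked something like the following (my reconstruction, not Bill's exact code):

var firstFive = Enumerable.Range(0, 1000000).Take(5);

foreach (var i in firstFive)
{
    Console.WriteLine(i); // prints 0 through 4; the remaining values are never generated
}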

Bill later went on to discuss how if you use a LINQ query with variables, you can change those variables after you have defined the query.  His code looked something like the following:


var range = Enumerable.Range(0, 1000000);

var maxValue = 40;

var items = from r in range
            where r < maxValue
            select r;

var takenItems = items.Take(30);

maxValue = 20;

foreach (var i in takenItems)
{
    Console.WriteLine(i);
}



Output:

0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19

Now yes, you define your LINQ query, change your variable after the fact, and then consume that query.  Yes, it takes the change to your variable into account.  Yes, this occurs after you defined your query, so deferred execution is a term that makes sense.

Ok, I'll give in a bit: I'm ok with the term, but not the way it's talked about.  The magic isn't LINQ, and understanding what is going on is not just about understanding LINQ.  It's the fundamentals of how LINQ works which people should really understand.

I'm going to say this one more time before I move on: "Deferred Execution is not a LINQ feature."  It's a closure feature and an implementation pattern.

First let me try to explain the implementation pattern piece by creating my own "Deferred Execution" code which works exactly the same way as the Range method Bill demonstrated.  (Note that this is not necessarily built with production quality in mind.)



using System;
using System.Collections;
using System.Collections.Generic;

public class MyRange : IEnumerable<int>
{
    private class RangeEnumerator : IEnumerator<int>
    {
        private int? _current;
        private bool _complete = false;
        private readonly int _minValue;
        private readonly int _maxValue;

        public void Dispose()
        {
        }

        public bool MoveNext()
        {
            // Values are produced one at a time, only as the caller asks for them.
            if (_current == null)
            {
                _current = _minValue;
                return true;
            }

            if (_current < _maxValue)
            {
                _current += 1;
                return true;
            }
            else
            {
                _complete = true;
                return false;
            }
        }

        public void Reset()
        {
            _current = null;
            _complete = false;
        }

        public int Current
        {
            get
            {
                if (_current == null || _complete)
                {
                    throw new InvalidOperationException();
                }

                return _current.Value;
            }
        }

        object IEnumerator.Current
        {
            get
            {
                return Current;
            }
        }

        public RangeEnumerator(int minValue, int maxValue)
        {
            _minValue = minValue;
            _maxValue = maxValue;
        }
    }

    private readonly int _from;
    private readonly int _to;

    public IEnumerator<int> GetEnumerator()
    {
        // Nothing happens until this enumerator is actually iterated.
        return new RangeEnumerator(_from, _to);
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }

    public MyRange(int from, int to)
    {
        _from = from;
        _to = to;
    }
}

That's actually really simple code, isn't it?  There is nothing revolutionary in it; any one of us could have implemented the same idea back in C# 1.0 (this version happens to use generics and a nullable int, but neither is essential to the pattern).
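
As an aside, the C# 2.0 iterator syntax (yield return) will generate essentially this same kind of state machine for you.  Here's a minimal sketch of the same idea, with my own naming, not anything from Bill's talk:

public static IEnumerable<int> MyRangeIterator(int from, int to)
{
    // Nothing in this body runs until the caller starts iterating;
    // the compiler turns the method into a state machine much like
    // the RangeEnumerator above.
    for (var i = from; i <= to; i++)
    {
        yield return i;
    }
}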

Now, let's look at a case with closures.  LINQ internally uses closures (via lambda expressions) to perform its queries.  So let's say I write my own closure.



var range = new MyRange(0, 1000000);

var maxValue = 40;

Func<int, bool> expression = i => i < maxValue;

maxValue = 20;

foreach (var i in range)
{
    if (!expression(i))
    {
        break;
    }
    else
    {
        Console.WriteLine(i);
    }
}


Output:

0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19

Huh, wouldn't you know it: this also shows the magical LINQ "deferred execution" behavior.

So what's the point of all this?  First, I'm probably too easily set off on topics like this.  Second, we shouldn't look at "deferred execution" as some sort of LINQ magic, but rather as a pattern that can provide many benefits in our own code.  Deferred execution allows us to enhance the performance and flexibility of our applications.  This is something we can all make use of in our algorithms, even if we aren't utilizing LINQ.

And in regard to Bill's extension method talk: I really enjoyed it.  It was simple enough for people to learn about the new C# 3.0 features, and Bill covered it well with good examples.  I'm just frustrated that people seem to write this stuff off as magic even though the underlying concepts are simple.  Plus, the term seemingly just appeared with LINQ even though the concept has been around for a long time.

Saturday, July 12, 2008

Subtle Bugs When Dealing With Threads

Pop quiz, what's wrong with the following code?


public void Unsubscribe()
{
    if (_request != null)
    {
        ThreadPool.Enqueue(() => _service.Unsubscribe(_request));
    }
}

public void Subscribe(string key)
{
    Unsubscribe();

    if (!String.IsNullOrEmpty(key))
    {
        _request = new Request(key, handler);
        ThreadPool.Enqueue(() => _service.Subscribe(_request));
    }
}


Does everyone see the issue? There is a critical bug in the above code which isn't always readily apparent.



Try to find it...



I actually wrote code like this today (same concept, different implementation) and immediately saw some serious defects.  Honestly, I'm lucky the issues popped up right away; these sorts of things tend not to appear immediately, but jump up to bite you at a later point.


In this case the issue is the use of closures.  A lambda expression doesn't capture the value a variable or field holds when the lambda is created; it captures the variable itself (or, for a field like _request, the object that owns it), and the value is read when the lambda actually executes.  So the Unsubscribe lambda gets executed on a new thread (from the pool), but by the time it actually runs, _request may have already been changed.
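
Here's a tiny standalone example of that capture behavior, separate from the subscription code above:

var value = 1;
Action print = () => Console.WriteLine(value);

value = 2;
print(); // prints 2, not 1: the closure reads the variable when it runs, not when it was created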



In this case, you're actually unsubscribing from a request that most likely hasn't even been subscribed yet, and to top it off, you haven't unsubscribed from the old request either.  The above is obviously a race condition where the exact outcome isn't guaranteed.  There is a chance it works perfectly (though that's doubtful with a true thread pool).  There is also a chance the new request is subscribed first and then immediately unsubscribed.



The simplest way to resolve this issue is to change the captured variable from one which is shared between both closures to a local which is unique to each closure, as shown here:



public void Unsubscribe()
{
    if (_request != null)
    {
        var localRequest = _request;
        ThreadPool.Enqueue(() => _service.Unsubscribe(localRequest));
    }
}

public void Subscribe(string key)
{
    Unsubscribe();

    if (!String.IsNullOrEmpty(key))
    {
        _request = new Request(key, handler);
        var localRequest = _request;
        ThreadPool.Enqueue(() => _service.Subscribe(localRequest));
    }
}



However, what I really want in order to solve this type of problem going forward is to develop something which is process-aware, much like the Saga in NServiceBus.  Of course my goal is not a long-running, persistable process like the NServiceBus Saga; the process portion is what I'm after.

Wednesday, June 18, 2008

StackOverflow.com, Uh oh?

So I mentioned in my last post that I have begun listening to podcasts.  I have a lot of respect for both Jeff Atwood (Coding Horror) and Joel Spolsky, so when I saw they were working together on a new project and publishing their conversations about it, I figured I had to listen.  They've posted around 9 episodes now, but I've only had a chance to listen to the first couple so far.

Honestly, I'm a bit concerned by what I heard.  In their first episode I felt like they gave developers on Microsoft technologies a really bad name.  Now, it may well be largely true (which is probably part of my concern), but I wish there were more resources working to correct this bad name, rather than encouraging it.

Basically, during the podcast you will hear something along these lines: Microsoft technology developers basically resort to Google-copy-and-paste development.  Microsoft technology developers are called pragmatic in that they don't care what the right solution is, or how clean it is, or how well it works, so long as it does work.

Now, I'm not saying that using Google to find answers to interesting problems is a bad thing.  I'm not even saying that if you ever copy and paste code you're a bad developer.  But ideally the developer learns from the blog post instead of just finding something which seems to work and moving on about their business.  Honestly, the samples you find in blog posts are typically not thorough enough for a true production deployment.  The point of these postings should be to educate people about new concepts, not to do their jobs for them.

From what I gathered, Stackoverflow.com wants to replace Google as the first place you search when doing your job.  They stated their goal is to be the first hit on Google for all of your searches, but really I think they would be happier if you went straight to them instead of Google.

Now, Stackoverflow is not Microsoft specific; it is meant to appeal to developers on all platforms.  However, they seem to be looking at the Microsoft-centric market as their main target.  Honestly, I think these guys will be successful.  They both have large followings, and I think there is huge demand for systems that essentially do developers' jobs for them.  I just wish it appeared to be a resource that helps developers grow, instead of just letting them get by.

All this being said, the podcast is worth checking out.  These are two extremely intelligent people, and by listening you essentially get a look inside their heads and how they think.  I don't have to agree with Stackoverflow.com, or the topics they discuss, to learn from them while I listen.  To both Joel and Jeff: thank you for posting your phone conversations as podcasts; it has been a great learning experience for me.

Deep Fried Goodness

So I realize I'm going to look like a bit of a sellout given my procrastinating, but I really meant to write this earlier.  With my newly purchased iPhone and my increased amount of travel, I've recently started listening to podcasts.  I honestly never saw the point before: I rarely get an hour where I can really listen to one, and I have always thought of reading as a simpler and more effective mechanism for learning.  However, while traveling (especially on a plane) I find that a properly timed podcast can provide a lot of information that I otherwise wouldn't be able to consume.

I saw that Keith Elder (and Chris Woodruff) had a new podcast called Deep Fried Bytes, and I figured I might as well see what it was.  I'm actually one of those people who first met Keith because I recognized his picture from his blog.  Not really knowing what was good in podcasts (besides the obligatory Hanselminutes and DNR), I figured it was worth a shot.

After listening to their episode on interview war stories I was really impressed.  They had some really intelligent people talking about interviewing.  This was a topic which, I have to admit, did not immediately pique my interest.  But what you find is that when many smart people sit down to talk, something good will result.  Now, after I picked myself up off the floor from hearing a C# MVP call the using keyword "Obsolete", I realized that they have a winning format.

Plus, honestly, Keith Elder is the kind of guy who doesn't need to have anything good to say; the way he talks and presents himself is entertaining almost regardless of the topic.  If you listen to podcasts, I recommend you go try these guys out.  If you don't listen to podcasts, I recommend giving them a shot anyway.

As for my contribution to the topic at hand, I suppose I have a little bit of a war story.  From the interviewer's side, I remember talking to one guy whose resume really looked great.  He had all sorts of great items written down from projects he had worked on in the past.  While inquiring about these items it became more and more clear that this person really didn't understand the concepts he claimed to have implemented.  After a few questions trying to get the candidate to talk about items on his resume, he eventually answered that he had nothing to do with those tasks; they were all completed by other people, and he didn't understand how they worked.  He then apologized for writing misleading (or factually incorrect) items on his resume, and we ended the interview.

As an interviewee, I just remember the Microsoft interview I had.  When I was graduating from Case Western Reserve University I had an on-campus interview with a representative from Microsoft.  I had wanted to be a programmer since I was a small child (maybe 12 or 13 years old), and working for Microsoft was always a dream of mine.  I had seen their campus (my family lived in Portland, OR at the time, and we saw the campus while visiting the Seattle area), and everything seemed like the perfect opportunity for a young geek in love with software.  From my interview, I really only remember a single technical question.  Now, keep in mind I wasn't claiming to be an expert at C or any other language at the time.  I had some professional experience working with the VB.NET beta, as well as some experience developing relatively simple applications in C, C++, Java, PHP, Basic, Perl and the early versions of C#.

Anyway, he asked me "What is the fastest way to reverse a string in C?".  Ok, well, I am familiar with C, and I'm familiar with how strings work in C.  I understand pointers and pointer arithmetic, and immediately I think the answer must involve pointer arithmetic.  Well, before I could even start talking through my response he says "Oh, and it doesn't use pointer arithmetic.".  Uh oh.  At that point I pretty much froze.  I didn't know what to do.  I'm not a C expert, I hadn't written any C code in a while, let alone overly complex C code, and I need to know the fastest way to reverse a string in that language?  Let's just say the rest of the interview apparently didn't go over so well, and I wasn't asked any other technical questions.  I probably didn't handle the curve ball well, but that was that.

I still remember how dumb I felt when I later learned just how many people from one of my classes landed jobs at Microsoft.  In a class my senior year, I remember the professor asking who was going to work for Microsoft, and at least 30 hands in the room went up.  The sad part to me was that I WAS the curve buster in that class.  I remember taking a test where the curve was so bad that a 68 became an A, yet I had scored a 98.  I was left trying to figure out where I went wrong.  Oh well, that's just how it goes.

Well, enough about me and my interviewing war stories, you need to go have a listen to Deep Fried Bytes.

--John Chapman

Blogger Syntax Highlighter