IT consultancy companies should learn from used car dealers!

June 6, 2012 at 1:49 pm | Posted in Agile, Ramblings | Leave a comment

Let’s suppose that one day your old car suddenly stops working and is beyond repair. So now you are in the market for a new (or second-hand) car. You have a mental model of what you are looking for: it should be roomy enough for your family and 2 dogs, it should be safe, and of course your new car should have sufficient power. Together with your wife you decide that you want an MPV. So you walk into the nearest car dealership. To your surprise it is a rather nondescript building, and when you enter it the showroom is completely empty. Luckily there is a friendly car sales guy and you start to talk to him: “Hi, we are looking for a new car and …”. Before you can finish your sentence he raises his hand to stop you, smiles at you and walks back to the counter. He returns with a key and hands it over to you. “Here is the key to your car. If you just follow me, I will show it to you.” Completely flabbergasted, you and your wife are led to a parking lot behind the building, where he points to a Mazda MX-5 Miata. You start to protest and try to explain that your 3 children and 2 dogs are not going to fit into a convertible, but he ignores your protests. “I’m sorry Sir, as you can see this is the only car we have got. And I think it is just right for you. Good luck!”

Now the above scenario might sound a bit absurd, but this is what I see a lot in IT consulting: your current project has come to an end, and in some magic gathering that often goes by the name “Project Allocation Meeting” or “Resource Meeting” the sales guys and girls at a typical IT company have decided that the very first project that comes along is miraculously the perfect fit for you. And you can already start tomorrow. Sound familiar? It is that single car for sale in an otherwise empty dealership.

Let’s first have a look at the criteria for a good project allocation process:

  1. Limited waste because of non-billable hours. The business model for IT consulting is rather straightforward: revenue equals the number of billable hours multiplied by the hourly rate of a consultant, summed over all consultants. Neither parameter is very scalable, so it is tempting to maximize the number of billable hours by leaving no gaps between projects.
  2. The right fit: projects should be challenging enough that a consultant can develop his skills. Working below his level is eventually going to make him leave the company. Working far above his level will only result in a burn-out. In general: the assignment should be a good fit in the career path of the consultant. That will make him more valuable to customers, so you can charge higher rates later.
  3. Physical location: a project at a nearby customer will limit traveling time and will result in overall happiness. Long traveling times will make it unlikely that the consultant is going to put in some extra hours for either the customer or his own company.
  4. Other criteria, for example: does the customer’s culture fit that of the consultant? Putting a consultant who thrives on freedom into a restrictive, formal organization like an insurance company is not going to make him very happy.

You will notice that the first point is a rather short-term optimization. The next three points will have far more impact on an IT consulting company, but their effects are not immediately noticeable. Therefore an IT company has to make a careful trade-off between optimizing for billable hours (which is easy: a straightforward computer algorithm can do this for you, as the sketch below shows!) and long-term sustainability and profit. This is quite difficult, and it is one of the reasons that most consulting companies behave like the weird car dealership I introduced at the start of this blogpost.
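To make the first point concrete, here is an illustrative sketch (my own invention, not an actual planning tool) of such an algorithm: greedily assign every consultant to the first project that starts after their current one ends, ignoring fit, location and culture completely.

function allocate(consultants, projects) {
  // Earliest-starting projects first, so nobody sits on the bench for long.
  projects.sort(function(a, b) { return a.start - b.start; });
  consultants.forEach(function(consultant) {
    for (var i = 0; i < projects.length; i++) {
      var project = projects[i];
      if (!project.assignedTo && project.start >= consultant.availableFrom) {
        project.assignedTo = consultant.name;   // zero gap between projects...
        consultant.availableFrom = project.end; // ...and zero regard for fit
        break;
      }
    }
  });
}

That brings me to the conclusion.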

So we can choose the company we like to work for, the place we want to live, our own car. And yet in IT consulting, others decide which assignments fit us best. What I would like to propose is a very simple project allocation process with a minimum amount of overhead: all project information is always visible to every consultant, very similar to a car dealer that has many cars on display. So you can pick the right car, or in this case the right project, for you. That comes with the freedom of waiting for a better opportunity and passing on a project. Of course it also comes with the responsibility for balancing the number of non-billable hours. That could be done by putting a cap on that number or by introducing some kind of reward mechanism.

Most important is that an IT consultant can perfectly well make these trade-offs himself. Self-organization and responsibility at the lowest possible level instead of old Soviet-style planning!

Writing software as easy as installing laminate flooring

May 28, 2012 at 8:09 pm | Posted in Agile, Programming | 2 Comments

At parties I usually try to avoid mentioning that I work in IT. There are a couple of reasons for this. First, I often get a reaction like “Oh cool, I have this weird problem with Windows XP and my new printer. Since you are an expert, you can help me with that!” My answers range anywhere from polite (trying to explain what I do for a living, which is not fixing Windows problems) to a simple “No, I can’t help you”.

The second question I often get is more like a remark or even an accusation: “Why are software projects always late, expensive and unpredictable? By now everything is already available as standard libraries, so why is writing software not as simple as clicking existing components together?” These remarks mostly come from people whose software experience is limited to small 100-line Visual Basic programs or from hobbyists who have tinkered a bit with Excel. Often they add examples like building a new house or bridge, arguing that this is way more difficult than building a simple piece of software and yet at the same time very predictable.

Lately I have started telling those people (and others, when giving Scrum/Agile training) a story from my own experience, about installing laminate flooring in our house.

When I planned for this do-it-yourself activity, my first estimate was: 3 bedrooms and 1 hallway should take at most one weekend. Next I created an initial work breakdown structure (WBS):

  1. Remove old carpet and floor panels: 1 hour
  2. Install 50 m2 underlayment: 2 hours
  3. Install 50 m2 laminate flooring: 8 hours

Nice: if I worked hard enough I could finish this on a single Saturday and have the Sunday for relaxing, spending time with my wife and kids or even writing a blogpost! So I started, and everything went more or less according to plan: the first two steps took a little less time than planned, and in another hour I had installed the first 5 m2 of laminate. Since installing a floor is as easy as writing software, I figured I could linearly scale from 5 m2 to 50 m2, so that would take 10 hours instead of the planned 8. Well, not that bad.

But then I discovered that it would look way better if the laminate went slightly under the skirting boards, instead of against them, which would leave visible gaps. This was a bit of a setback, since it meant I probably couldn’t finish the job in one day. Luckily I still had the Sunday to finish the work. Then disaster hit: while removing the skirting boards I discovered they were quite old and nailed into the wall with really long nails. So two things happened: some of the boards broke, and part of the plaster fell off, damaging the walls.

Note: not my actual wall…

I decided to do a little bit of refactoring and reuse the old floor panels that I had removed from the first bedroom as skirting boards. Without going into all the details, my new WBS looked like this:

  1. Remove old carpet and floor panels: 1 hour
  2. Install 50 m2 underlayment: 2 hours
  3. Remove old skirting board: 2 hours
  4. Install 50 m2 laminate flooring: 10 hours
  5. Saw 40 m of floor panels into new skirting boards: 4 hours
  6. Grind 40 m of skirting boards: 2 hours
  7. Use a plunge router to add a nice profile to the skirting boards: 2 hours
  8. Paint the skirting boards twice: 8 hours
  9. Repair damaged walls: 2 hours
  10. Remove remaining nails: 1 hour
  11. Fix the new skirting boards to the walls: 4 hours
  12. Do some additional woodwork for the door posts: 8 hours

So my carefully planned 11 hours blew up to 46 hours: about 400 % of the original estimate! What’s worse, my initial lead time of 1 day ultimately became 6 months. This simple, seemingly predictable set of tasks behaved like a real software project after all, with lots of unforeseen problems and new functionality along the way.

If I had foreseen all those problems I might not have started at all. On the plus side, I ended up with skirting boards that are way more beautiful than the original ones. And what’s more, I learned how to operate a plunge router, making me a better craftsman, which will be useful in future projects.

Conclusion: writing software is indeed as predictable and easy as installing laminate flooring.

Generating release notes from JIRA with Google Docs

April 11, 2012 at 2:29 pm | Posted in Agile, Programming | Leave a comment

Recently I did a project using Scrum in short (one-week) iterations. The acceptance testers weren’t part of the team and asked for (well, actually demanded) release notes with every increment we shipped. We told them that wasn’t a problem at all, since we keep track of all our user stories and issues in JIRA. We had already created an account for them at the start of the project, so end of story, we thought. Almost.

This didn’t work out, because they found JIRA a bit too complicated and didn’t want to dig up all the information themselves every Friday. We realized that a basic introduction to JIRA might help, but that we could help them even more by defining a couple of filters. So I asked what they needed, created the filters to produce this information, showed them how to use them, and again concluded: end of story and back to real work. Well, almost.

They still preferred to have a document containing the release notes attached to the email that announced every new release. Mainly because they had always done it like that, and also because it was easier to print the document. We decided to take the path of least resistance, use our own JIRA filters, and waste one or two hours per release copying JIRA issues to Word, formatting them in tables, fighting to get the layout somewhat correct, etc. At least these are activities that give a project manager a reason for his existence. End of story. Almost.

Because as IT guys (and girls) we don’t like boring repetitive work, especially not when it has to be done late at night when we finally got that release shipped. And certainly not when we can’t see the added value of duplicating information from one format (JIRA) to another (Word). Our default solution is yak shaving: automating the work. So I came up with the following set-up, based on JIRA, Google Docs and some Google Apps Script:

The source is JIRA. In the past I have already written a couple of blog posts on how to import its data using Ruby (and SOAP), using JavaScript (directly into a Google Docs spreadsheet) or using ClojureScript (via REST).
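For completeness, a rough sketch of what such an import could look like with Apps Script’s UrlFetchApp; the URL, JQL query and credentials below are placeholders, not the actual project’s:

function fetchIssues() {
  // Placeholder JIRA instance and JQL query; adjust to your own project.
  var url = "https://jira.example.com/rest/api/2/search?jql=" +
            encodeURIComponent("project = DEMO AND fixVersion = latestReleasedVersion()");
  var response = UrlFetchApp.fetch(url, {
    headers: {Authorization: "Basic " + Utilities.base64Encode("user:password")}
  });
  return JSON.parse(response.getContentText()).issues;
}

The report itself is based on a template that I created in Google Docs. Here you can already include, for example, the company logos, the disclaimers, etc.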

Some sample code (note that this doesn’t use JIRA) to create a document from the template:

function createDocFromTemplate() {
  // Find the template by name, copy it and open the copy for editing.
  var files = DocsList.find("my-template");
  return DocumentApp.openById(files[0].makeCopy("release-notes").getId());
}

As you can see in the picture, I also use a Timed Trigger. This fires the script every Friday at, for example, 6:00 PM. I belong to the minority of people who think that distributing Word documents is not very professional (unless you want to co-author them with others), so I prefer to create a PDF. This is done in the next step. Some sample code on how to do this:

function createAttachments(doc) {
  // Render the document as a PDF blob, wrapped in the attachment
  // structure that MailApp expects.
  var mimeType = "application/pdf";
  var blob = doc.getAs(mimeType);
  return [{fileName:"release-notes.pdf", mimeType:mimeType, content:blob.getBytes()}];
}

And finally we have to send the release notes to the right people. For this I created a new group in Google Contacts. The next code snippet shows how I read the email addresses from this group and how I create an email with the PDF as an attachment:

function sendDocumentAsPdf(doc) {
  // Mail the generated PDF to every contact in the distribution group.
  var contacts = getRecipients();
  var attachments = createAttachments(doc);
  contacts.forEach(function(contact) {sendMail(contact, attachments);});
}

function getRecipients() {
  // All members of the "MyGroup" contact group receive the release notes.
  return ContactsApp.getContactGroup("MyGroup").getContacts();
}

function sendMail(contact, attachments) {
  // Use the contact's primary email address.
  var recipient = contact.getEmails()[0].getAddress();
  var subject = "Release notes";
  var body = "Please find the release notes as attachment.";

  MailApp.sendEmail(recipient, subject, body, {attachments:attachments});
}

This concludes my brief description of how to generate release notes. There is room for improvement. For example, right now the email is scheduled at a fixed time. To make your reports look more ‘genuine’ (as in: a lot of work to create) you could use a ClockTriggerBuilder to generate your own triggers that fire at a more or less random time, preferably of course Friday late at night.
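A hedged sketch of that idea: pick next Friday, add a random late-evening offset and create a one-shot trigger (sendReleaseNotes is a hypothetical wrapper around the functions above):

function scheduleRandomTrigger() {
  // Find next Friday (day 5), then pick a time between 21:00 and 23:59.
  var at = new Date();
  at.setDate(at.getDate() + ((5 - at.getDay() + 7) % 7 || 7));
  at.setHours(21 + Math.floor(Math.random() * 3));
  at.setMinutes(Math.floor(Math.random() * 60));

  // Schedule a one-shot clock trigger for that moment.
  ScriptApp.newTrigger("sendReleaseNotes").timeBased().at(at).create();
}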

Final remark: it is almost always better to include testers in your team. Even if that costs a lot of initial energy and frustration, the end result is worth it. The solution I described is only a patch for a very bad process.

Improving the Scrum standup questions

January 19, 2012 at 8:39 pm | Posted in Agile | 1 Comment

Anyone practicing Scrum will probably know the three questions that are asked during the daily standup:

  1. What have you done since yesterday?
  2. What are you planning to do today?
  3. Any impediments/stumbling blocks?

There are several problems with the way these questions are phrased. Firstly, team members might get defensive, because they feel they have to prove they really did something important since the last standup. So I regularly hear answers like “Well, I spent 8 hours on this really very difficult task. But I’m almost done. Will probably finish it today and there are no impediments. Next!” Wow, this developer managed to burn 8 hours during the last 8-hour working day. Impressive! There is no information at all in this answer for the Scrum Master or the other team members.

To tackle this problem some Scrum Masters rephrase the questions:

  1. What have you accomplished since yesterday?
  2. What are you planning to accomplish today?
  3. Any impediments/stumbling blocks to reaching your goal?

At first glance these questions look OK. You really force every team member to explain what progress he or she has made. Everyone is happy, apart from those poor developers who didn’t accomplish anything since yesterday. Maybe they needed more time to think about a difficult design, maybe they were tackling a nasty bug. In my opinion these questions can be very demotivating.

Of course we all learned that the standup is not meant to be a progress meeting. It is a coordination mechanism between team members. So can we do better than the two previous approaches?

Yes we can. I recently read a book that has become quite popular: “The Lean Startup” by Eric Ries. What I really liked is his message that a start-up company is all about learning, not about the perfect result. And that is exactly what can motivate Scrum teams: software developers are knowledge workers, not just people working at an assembly line, accomplishing small predictable tasks endlessly. So why not state the standup questions in terms of learning:

  1. What have you learned since yesterday?
  2. What are you planning to learn today?
  3. Any impediments/stumbling blocks keeping you from learning?

One of the benefits is that this will greatly improve knowledge exchange within the team. People might even feel comfortable spending a day doing hammock-driven development (a term coined by Rich Hickey, the inventor of Clojure) as long as they can explain what they have learned. And remember: learning that an approach didn’t work is still learning. Of course in the end a team will need to produce software, but I’m convinced that concentrating on learning instead of on accomplishing smaller tasks will get you there a lot faster.

Please give these questions a try and share the results here or in your favorite Scrum forum!

Why Scrum will never work

July 13, 2011 at 11:22 am | Posted in Agile, Programming | 76 Comments

With such a slightly provocative title I will probably have to start with the disclaimer first: what is written here is my own opinion and not necessarily that of my employer. That is, if I still have one after posting this blog. What’s more: I’m a big fan of Scrum and other Agile methods. It pays my bills. Uh wait, let me rephrase that a bit more accurately: I’m totally 100 % convinced that Scrum works for software development.

Now with these formalities behind me, let’s get a bit more serious: I really like Scrum. I have been using it for the last 5 years, I have given presentations about (distributed) Scrum at several conferences, I’ve written an article about it together with Scrum guru Jeff Sutherland, etc. However, now that the Agile Manifesto is 10 years old, I thought it would be fun to put on Edward de Bono‘s black hat and explain why Scrum is never ever going to work.

Reason 1: the cornerstone of Scrum is trusting people. Creating a safe environment so that we can be open to each other and learn from our mistakes. And all that other touchy-feely hippie back-to-the-60s stuff. That is not going to work! Did you notice my disclaimer when I started this blog? I had to put it there because quite a few people read my blog, including customers and my boss. There are a lot of pointy-haired bosses out there (oh no, another disclaimer: my boss isn’t one, he is a nice friendly guy, bla bla bla). This world is full of alpha males (and females?) who are not in the least interested in you, your process, the outcome, etc. Being open is only going to hurt your career. Room for mistakes, taking risks? Don’t be naive!

Reason 2: according to Scrum, ‘people do the best they can’ if you give them enough freedom. What the hell is this based upon? They don’t. They will probably do the least they can, because in general most software developers are underpaid, especially compared to their managers. That’s why they want to become managers or software architects as soon as possible: they can then still be lazy without anyone noticing, with the added bonus of better pay.

Reason 3: because of the previous reason we still have to put project management on top of Scrum teams, so that at least some output is produced. So this is probably going to be business as usual. Assigning tasks to team members, micromanaging developers, demanding progress reports, etc. All the usual actions to slow your team down as much as possible.

Reason 4: Scrum is just a process. I have seen many processes (like CMM, nowadays called CMMI) and I have seen them all fail and leave a lot of frustrated people behind. So if you are stuck (like most companies) with a bunch of average people, then nothing is going to change. Scrum doesn’t improve your software, good people do! And by definition you don’t have those good people, since you just have Joe Average (or worse) as a programmer, because your company doesn’t want to pay a decent salary.

Reason 5: Scrum delivers ‘business value’. Well no, actually it doesn’t. For many reasons. The guys and girls who know about business are not going to be involved in your project. They like to lunch with customers, not work on this weird thing called a backlog to explain to a bunch of introverted, nerdy software developers what to do. So your team ends up with a junior help desk employee as a product owner. And besides, your whole ICT department is a cost center anyhow. So don’t start about business value.

Reason 6: an Agile team is supposed to continuously improve. That is why Scrum has retrospectives: to see what went well, what can be improved, and to define actions. Now do you really think people want to improve? First they have to think of possible improvement actions. Next they may even have to execute them, which might well take them way out of their comfort zone. People resist change, and therefore improvement. Your old working habits may suck, but at least they kind of work, and they get you through the day.

Reason 7: the Product Owner focuses on the ‘what’ and ‘why’ questions; the development team decides ‘how’. Nicely separated, so the team can go for quality and thus high velocity in the long term. However, this is not going to work. Your product owner wants this functionality right now and doesn’t care in the least about software quality. Just deliver those features as fast as possible, because there is always a deadline, promises made to this important customer, etc. And don’t think you can blow away this junior product owner, because behind him is a business manager ranked high in the company’s hierarchy. You as a developer are just part of a cost center and probably going to be outsourced soon anyhow. Now how is that for motivation and trust?

Reason 8: my previous point was about quality. There is some evidence that pushing productivity lowers the quality of software. On the other hand, when you focus on quality, you will get higher productivity. However, Joe Average programmer doesn’t care about software quality. If the quality is poor, developing a piece of software takes more time, but why care? He is getting paid from 9 to 5. The project manager (or Scrum Master!) will take the blame for missed schedules. Even worse, if this developer is hired from another company, it is in his (and his company’s) interest to stay as long as possible. So this all means that your productivity with Scrum isn’t going to be the least bit higher than with any other method.

Reason 9: “yes, but if we only build the necessary features, then at least we will have a lower total cost, right?” It never ceases to amaze me how naive people are when they say something like that. You don’t build necessary features. Most of the time you are on a fixed-price contract for a major banking or insurance company. Or even worse: a government contract. They have selected you because you offered the cheapest bid (which is pretty naive in itself), but they are going to make sure that you deliver all the requirements they stated up-front. Of course at least 50 % of these requirements have no business value at all, but hey, you aren’t going to fool the project manager who handles the project from the customer side. He is an alpha male, and you are going to deliver that last bloody feature as well!

Wow, wearing that black hat was even more fun than I thought! Must…. take…. it…. off.. now…

Less code matters

June 16, 2011 at 9:31 pm | Posted in Agile, C#, GIMP, Programming | 11 Comments

One of Edsger Dijkstra‘s quotes I really like is: if we wish to count lines of code, we should not regard them as “lines produced” but as “lines spent”. This hasn’t changed since those days, despite almost infinite amounts of processing power, memory and powerful IDE support. In fact I would argue that thanks to this power we are now able to build huge systems, which makes carefully spending these precious lines of code even more important. The main reason still is that once you start producing code, you have to maintain it. And maintaining code takes effort and therefore costs money.

Over the years I have worked on a lot of code, both existing (as in maintenance projects) and while building new software. My personal statement was and still is that any non-trivial amount of code can easily be reduced to half its size while increasing readability at the same time. There are several ways to reduce code size. As an example I will take 18 revisions (from CVS) of a small GIMP# plug-in and show what I did to gradually reduce its size.

Let’s start by looking at the graph that depicts the code size:

This is a small plug-in that calculates the average color of all pixels in an image. Next the plug-in sets the color of all pixels in the image to this average color. This functionality is provided as the “Blur Average” filter in Photoshop, and I wrote it because it didn’t exist for the GIMP yet.

I took the size in lines of code from CVS. I included all blank lines and comment lines, apart from the standard 20 lines of GPL header at the start of every file. Next I used the maximum size (at revision 3) as 100 % and scaled the other revision sizes relative to this maximum. Now let’s see what happened over these 18 revisions:

  1. Revision 1.1. Checked in initial code. This was mainly the boilerplate code that comes with any plug-in written in the GIMP# framework. At that moment this was 50 LOC.
  2. Revision 1.2. Plug-in now fully functional. Wrote two straightforward loops, one to calculate the average, the other one to apply it to all pixels. Code has grown to 63 LOC.
  3. Revision 1.3. A GIMP# co-developer added code for i18n. Code now at its maximum size of 68 LOC. Still pretty small of course. Keep in mind that a similar basic plug-in written in C (the default GIMP programming language) takes about twice as much code.
  4. Revision 1.4. In revision 1.3 a key (from a key/value pair) was accidentally translated. Fixed that, which saved 2 lines. A nice start on my way to a smaller plug-in. 66 LOC left.
  5. Revision 1.5. The i18n initialization that happened in all plug-ins was moved to the constructor of the base class, removing another 3 lines. 63 LOC left.
  6. Revision 1.6. Until this revision I had to handle all pixels as arrays of 3 (or 4, including the alpha channel) bytes which held the RGB(A) values. I abstracted this into a Pixel class and wrote an iterator that calls a delegate for every pixel in the image. I added some operator-overloading magic for Pixel objects that allows me to add pixels and easily calculate the average. Of course this added code to the main GIMP# library, but my plug-in shrunk to 55 LOC.
  7. Revision 1.7. A bit of cleaning up; code size stayed at 55 LOC.
  8. Revision 1.8. Instead of returning a collection of all supported procedures (needed to register a plug-in within GIMP), this same function now uses the C# yield construct, saving another 3 lines. 52 LOC left.
  9. Revision 1.9. Minor clean-up, improving readability. Code still at 52 LOC.
  10. Revision 1.10. Improved the algorithm that calculates the average. In previous revisions I was updating a counter (that had to be initialized) inside the first iterator, so I could later divide the sum by this counter to calculate the average. Of course the number of pixels within an image (or selection) is already known and can be asked directly from the iterator class. This allowed me to remove another 2 lines, leaving the size at 50 LOC.
  11. Revision 1.11. Since the previous revision, the delegate to calculate the average was a short one-liner. No need anymore to spread that over 3 lines, including the curly braces. Inlining this delegate into the iterator call removed 3 lines. Code size at 47 LOC.
  12. Revision 1.12. Oh no! The code has grown to 48 LOC. The reason is that I tried anonymous function support in Mono, concluded that it didn’t work (yet), but left the updated line as a comment in the code.
  13. Revision 1.13. Still 48 LOC. Only minor textual changes to the code.
  14. Revision 1.14. Finally I realized that keeping commented-out code around is bad practice. Removed it, reducing the size to 47 LOC again.
  15. Revision 1.15. Mono 1.2.6 supported lambda functions. Did a bit of cheating and removed 1 empty line that divided 3 lines that logically belonged together. Size now at 46 LOC.
  16. Revision 1.16. A new C# 3.0 feature (object initializers) allowed me to remove another 2 lines. Code size at 44 LOC.
  17. Revision 1.17. Another 2 lines moved to the Plugin base class. 42 LOC left.
  18. Revision 1.18. Simplified GIMP# framework API a bit, allowing me to remove another line from almost all plug-ins. This is the most recent version which was checked in on June 10th, 2010. No changes since that time. Code size is 41 LOC.
As you can see, I went from a maximum of 68 LOC to the current size of 41 LOC. I didn’t manage to remove half of the code, but 40 % still isn’t bad for such a small amount of code. At least I don’t have to maintain those 27 removed lines anymore. At the same time the readability has improved a lot. In a next blog I will categorize all the methods that can be used to reduce your code size, based on my personal experience.
For completeness, the final code:
using System;
using System.Collections.Generic;

namespace Gimp.AverageBlur
{
  class AverageBlur : Plugin
  {
    static void Main(string[] args)
    {
      new AverageBlur(args);
    }

    AverageBlur(string[] args) : base(args, "AverageBlur")
    {
    }

    override protected IEnumerable<Procedure> ListProcedures()
    {
      yield return new Procedure("plug_in_average_blur",
                                 _("Average blur"),
                                 _("Average blur"),
                                 "Maurits Rijk",
                                 "(C) Maurits Rijk",
                                 "2006-2009",
                                 _("Average"),
                                 "RGB*, GRAY*")
        {MenuPath = "<Image>/Filters/Blur"};
    }

    override protected void Render(Drawable drawable)
    {
      var iter = new RgnIterator(drawable, _("Average"));

      var average = drawable.CreatePixel();
      iter.IterateSrc(pixel => average.Add(pixel));
      average /= iter.Count;

      iter.IterateDest(() => average);
    }
  }
}

A simulation to show the importance of backlog prioritization

June 8, 2011 at 6:36 pm | Posted in Agile, Uncategorized | 10 Comments

A successful agile software project depends on a prioritized backlog. You can have a perfectly functioning development team that efficiently finishes one feature after the other, but if those features hold no business value, it is still waste.

To show just how important this prioritization is, I did a Monte Carlo simulation using a Google Docs spreadsheet. Here are the assumptions I made to create the model:

  • the business value of a user story is randomly generated from a uniform distribution between 1 (very low value) and 100 (very high value).
  • the effort to implement this user story is generated by drawing from a standard set of Scrum poker cards with the values 0, 1/2, 1, 2, 3, 5, 8, 13, 20 and 40 (100 and ? are omitted). The drawing is done by first selecting an index from a normal distribution with mean 5 and standard deviation 1. This (zero-based) index is used to select a card from the range of poker cards. For example, index 5 selects the card with value 5; index 6 corresponds with the value 8, etc.
  • the assumption was made that there is no correlation between the business value and the effort needed to implement the corresponding story.
  • the whole backlog can be delivered in 10 sprints.
  • during the first 4 sprints the velocity increases in the ratio 1, 2, 3, 5. This means that the development team is 5 times as fast in the fourth sprint compared to the first. After that, the velocity stays constant at a value of 5.

With these assumptions I calculated the delivered business value (as a percentage of the business value of the whole backlog) using 4 different prioritization algorithms (a sketch of the whole simulation follows the list below):

  • no prioritization. The development team picks up the stories in the (random) order from the top of the backlog.
  • prioritization using the business value of the stories. Stories with high business value are built first.
  • prioritization using the needed effort to build a story. Stories with low effort are built first.
  • prioritization based on the ratio between business value and effort. Stories with a high ratio (high business value, low cost) are built first.
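The original simulation lived in a spreadsheet; what follows is a minimal JavaScript re-creation under exactly the assumptions above (the backlog size of 100 stories and all names are my own choices):

// Minimal re-creation of the simulation; not the original spreadsheet.
var CARDS = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40];

// Approximate a normal(mean, sd) draw by summing 12 uniform deviates.
function normal(mean, sd) {
  var sum = 0;
  for (var i = 0; i < 12; i++) sum += Math.random();
  return mean + sd * (sum - 6);
}

function randomStory() {
  var index = Math.min(9, Math.max(0, Math.round(normal(5, 1))));
  return {value: 1 + 99 * Math.random(), effort: CARDS[index]};
}

// Deliver the backlog in 10 sprints with velocities 1, 2, 3, 5, 5, ...;
// returns the cumulative delivered business value (in %) after each sprint.
function simulate(compare) {
  var backlog = [];
  for (var i = 0; i < 100; i++) backlog.push(randomStory());
  if (compare) backlog.sort(compare);

  var velocity = [1, 2, 3, 5, 5, 5, 5, 5, 5, 5];
  var velocitySum = velocity.reduce(function(a, b) { return a + b; }, 0);
  var totalEffort = backlog.reduce(function(a, s) { return a + s.effort; }, 0);
  var totalValue = backlog.reduce(function(a, s) { return a + s.value; }, 0);

  var percentages = [], delivered = 0, next = 0, capacity = 0;
  velocity.forEach(function(v) {
    capacity += totalEffort * v / velocitySum; // leftover capacity carries over
    while (next < backlog.length && backlog[next].effort <= capacity) {
      capacity -= backlog[next].effort;
      delivered += backlog[next].value;
      next++;
    }
    percentages.push(100 * delivered / totalValue);
  });
  return percentages;
}

// The four strategies; Math.max guards the ratio against the 0-point card.
var byValue  = function(a, b) { return b.value - a.value; };
var byEffort = function(a, b) { return a.effort - b.effort; };
var byRatio  = function(a, b) {
  return b.value / Math.max(b.effort, 0.5) - a.value / Math.max(a.effort, 0.5);
};

Calling simulate() once per strategy (for example simulate(byRatio)) and averaging the percentages over many runs produces curves like the ones discussed below.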

The results can be seen in the following graph:

As you can see from this graph, not prioritizing your backlog delivers business value quite late. Since there is no correlation between business value and a story’s position in the backlog, this will be a straight line after the first 3 iterations, in which we have a low velocity.

The other extreme is the backlog prioritized using the business value / effort ratio. This is the red line. Here you can see that after 5 iterations you have already delivered 80 % of the business value. For the backlog without prioritization this takes more than 8 iterations!

What looks a bit surprising initially is the difference between prioritization on business value (the yellow line) and prioritization based on effort (the green line). This is caused by the fact that in our simulation we have created a bias towards more expensive user stories: the number of user stories with effort lower than 5 is equal to the number of user stories with effort higher than 5, but remember that the Scrum poker card values are not symmetric: 0, 1/2, 1, 2, 3 versus 8, 13, 20 and 40. That is the reason that in this simulation it is better to prioritize on effort than on business value.

There is one more interesting aspect to investigate: what is the impact of the standard deviation that we use to draw from our poker cards? I repeated the simulation using the business value / effort ratio for the prioritization and different values for the standard deviation. The result:

The results are easy to explain: when using a higher value for the standard deviation you will get more extremes (either 0 or 40) for the effort to implement a user story. This means that there will be more user stories which require little effort and deliver good business value. These can be built first.

Bottom line: make sure your product owner works hard on a prioritized backlog. Let him quantify the business value of his user stories. As you can see from this simulation, half of the effort (and cost) needed to complete the whole project might already be sufficient to fulfil his business needs. This is a huge money saver!

Efficiency versus Effectiveness

January 30, 2011 at 3:44 pm | Posted in Agile | 10 Comments

I work as a consultant for companies that want to introduce Agile software development as a way to improve their software process. During an intake interview I always ask what they expect to gain from it, and more often than not (about 90 % of the time) the customer’s first answer is that they want to improve the productivity of their IT department. Second in line is almost invariably the need for higher quality. But let’s stick with higher productivity first, because that is where the fun (please don’t interpret this as disrespect for my work or customers) starts.

My first reaction is pretty straightforward: “So what is your current productivity?” I have never had an answer to that question apart from “we don’t know exactly, but it is not high enough”, followed by remarks like “You are the expert, maybe you can do a baseline measurement.” My second question is mostly along the lines of “It’s OK that you don’t know your current productivity, but can you explain what you mean by productivity?” And again I never get a clear answer. Sometimes a department has set goals like doing the same amount of work with fewer people: 20 % fewer people means a 20 % productivity improvement to them.

The interesting thing is that when you talk to customers about productivity, they almost always talk about improving efficiency: more work done with fewer people, projects within time, budget and scope, etc. In other words, they mostly tend to talk about the costs, not about the benefits. From a historical perspective this makes sense: IT departments are often seen as cost centers. So the cheaper you can run your IT department, the better. This attitude ignores a couple of issues. Firstly, IT is not just production work. It is about handling knowledge, interaction between people, etc. In his book ‘Slack: Getting Past Burnout, Busywork, and the Myth of Total Efficiency’, Tom DeMarco argues that it doesn’t make sense to drive people up to 100 % efficiency.

But equally important: efficiency really isn’t that relevant. What really matters is effectiveness. How much money do you really get out of every dollar or euro you invest in IT? Cranking out more code per person-month might not improve this at all. Productivity is all about the business value you can deliver. The interesting part is that there is a strong relationship between this definition of productivity (effectiveness) and efficiency. I have shown this for three different types of companies in the next graph:

If your IT department does standard production work, then you can aim at high efficiency. Just like McDonald’s has an optimized process to produce hamburgers, you might do the same thing with your software. For example, if you produce webshops in a limited number of variations, you might want to go for 90 % efficiency. Higher than this hardly makes sense, since you will always need some flexibility.

On the other side of the spectrum, if you are a start-up company or part of an R&D department, you need flexibility. By definition you need to trade a lot of your efficiency for the ability to handle risks (big opportunities and large risks go hand in hand). In this situation you should care about results, not so much about costs.

After the start-up phase, a company enters its maturing or consolidation phase. Efficiency can increase, but you still want to maintain agility. Most IT departments will have to handle this situation. Here you should go for 70 – 80 % efficiency and use the rest to handle risks and improve your effectiveness. In other words: concentrate on delivering business value at the cost of 20 – 30 % efficiency. An example of where I see this failing: many larger organizations have ‘optimized’ their software release process. Their IT department only releases twice a year because that minimizes the amount of work involved (never releasing would be even better, but that obviously doesn’t work 😉 ).

These graphs have helped me many times while discussing productivity with customers. By putting them in my blog I hope they will be useful for others as well. Please let me know if you have improvements, criticism or other feedback!

Effort estimates for software considered evil

February 24, 2010 at 10:23 am | Posted in Agile, Programming | 1 Comment

Recently I had the opportunity to look at effort estimates for a big software project (about 4000 function points). The interesting thing was that there were already quite elaborate specifications, so in theory those would be a good basis for an estimate. The estimates were done by three different parties:

  1. An external consultancy firm. They counted function points based on a document with use-cases.
  2. The internal business analysts.
  3. The developers. They split up the use-cases into user stories and assigned story points.

Both the business analysts and the developers made relative estimates instead of absolute estimates in, for example, hours. What I had expected was a reasonable correlation (0.90 or higher) between the three estimates. This turned out to be quite different. Let’s first have a look at a graph that plots the estimates from the consultancy firm against those of the business analysts:

Consultancy firm versus business analysts

On the horizontal axis you can see the estimates in function points. On the vertical axis are the estimates from the business analysts in an arbitrary (but relative) unit. As you can see from this graph, the correlation is pretty poor. In fact, when you do regression analysis, the correlation is only 0.73. This means that only about 50 % of the variance in the estimates of the business analysts (0.73² ≈ 0.53) can be explained by the estimates from the consultancy firm. The consultancy firm at the same time claims about 15 % accuracy in their estimates.
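For readers who want to check such numbers themselves, here is a small sketch of the calculation; the two data arrays are made up for illustration, not the project’s actual estimates:

// Pearson correlation between two estimate series; r * r is the share of the
// variance in ys that a linear fit on xs explains. The sample data is made up.
function pearson(xs, ys) {
  var n = xs.length;
  var mx = xs.reduce(function(a, b) { return a + b; }, 0) / n;
  var my = ys.reduce(function(a, b) { return a + b; }, 0) / n;
  var sxy = 0, sxx = 0, syy = 0;
  for (var i = 0; i < n; i++) {
    sxy += (xs[i] - mx) * (ys[i] - my);
    sxx += (xs[i] - mx) * (xs[i] - mx);
    syy += (ys[i] - my) * (ys[i] - my);
  }
  return sxy / Math.sqrt(sxx * syy);
}

var functionPoints = [12, 35, 8, 50, 22, 40];  // hypothetical estimates
var analystPoints  = [10, 20, 15, 45, 40, 25]; // hypothetical estimates
var r = pearson(functionPoints, analystPoints);
var explainedVariance = r * r; // with r = 0.73 this is roughly 0.53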

When we look at the difference between the business analysts and the software development team we see a similar picture:

Development team versus business analysts

There are a lot fewer data points here, because the development team only estimates the user stories that they will pick up in the next iteration of 2 weeks. Again the correlation between the two estimates is pretty poor: 0.71. So once again only about 50 % of the variance in the estimates of the development team can be explained by the estimates from the business analysts, assuming there is a linear relation between the two.

If we correlate the estimates from the development team with those of the consultancy firm we get similar numbers. What worries me most is that the approved budget and lead time will be based on the estimates of the consultancy firm, which apparently bear hardly any relation to the effort that the development team has to invest. The estimated effort could easily be 100 % or more wrong. Guess who is going to be held responsible for that…

How to select user stories – part 1

February 11, 2010 at 4:04 pm | Posted in Agile | 3 Comments

This is going to be a series of blogs (probably 4 or 5) about selecting user stories. According to Wikipedia, user stories are “used with Agile software development methodologies for the specification of requirements”. Pretty straightforward. Once you have a bunch of user stories selected, you can prioritize them. This list is called the product backlog and is one of the backbones of the Scrum process. With the backlog the software development team can plan their iterations, starting from the top with the stories with the highest priority or business value.

