
Saturday, March 22, 2008

Three ingredients to a better bug report

At work I've recently been going through lots of bug reports. It is part of my job to determine the priority of each defect that gets reported and whether we should still fix it in the upcoming release. With multiple testers trying to break the product in all possible ways and a deadline approaching rapidly, analyzing the newfound defects seems to take more and more time.


What did you see?


People entering defects can actually do a lot to reduce the time it takes to analyze their entries. You'd be amazed at the number of defects that say something along the lines of:

  • "when I do A, B happens"

There are of course cases when this is all there is to say about a defect. For example:

  • "when I click the Ok button, the browser shows an internal apache error"

Granted, it would be more useful if the report said a bit more about the error message. But it is at least clear that an internal error message should not be shown to the user.

What did you expect to see?


Unfortunately things are not always so clear:

  • "when I try to delete the last item from the list, I get a message saying at least one item is required"

When I get an error report like this, I'm not sure what to do with it. Most likely there is an internal rule in the program that this list may never be empty. And the program seems to enforce this rule by giving a message when you try to delete the last remaining item. So there is a "business rule" in the program and the developer wrote code to enforce that rule. Where is the defect?

In cases like these I ask the person who entered the defect why they think this behavior is wrong. Typically I get an answer like:

  • "if I can't delete the selected item, the delete button should be disabled"

This added information makes things a lot clearer. So the tester didn't disagree with the fact that there should always be at least one item in the list, they just didn't agree with the way it was handled.

Why do you think your expectation is better than the current behavior?


But the above leaves me with a problem. There is a clear rule in the program that the list must never be empty. The programmer implemented this one way, someone else thought it should have been implemented another way.

In cases like these I ask the tester (or whoever reported the defect) to explain why they think their expectation is better than the current behavior. In the example we've used so far, the reason could be something like:

  • "clicking the button when there is only one item in the list will always show an error message - the delete action will never be performed. Buttons that don't lead to an action being executed should be disabled."

This is a clear - albeit somewhat abstract - description of the reason why the person expected the behavior to be different.

Prior art



In this example I doubt whether anyone will disagree with the reasoning of the tester. But there are many cases where someone will disagree. Especially the developer that implemented the functionality will tend to defend the way it works.

That's why I normally prefer the defect to point to other places where similar functionality is available in the way the tester prefers it. So in the example defect:

  • "in screens B and C we have a similar list and there the delete button is disabled if there is only one item remaining in the list"

This type of argument works especially well when the functionality in screens B and C has already been in a released version of the product. The users of the product have experienced the functionality and they will expect the new screen to behave in the same way.

If no similar functionality is available in the application, I often look for other programs that have similar functionality. On Windows the notepad application is one of my favorite examples. Everybody has it and its functionality has not substantially changed for at least a decade. Of course the functionality your program has might not be in notepad. In those cases I often refer to programs like Microsoft Office, Outlook, Firefox or the Google home page. Not because I think these are perfect programs, but because they're so ubiquitous that most users accept them as a reference point for the behavior they exhibit.

Summary


So a bug report should at least contain the following ingredients:

  1. What did you see?

  2. What did you expect to see?

  3. Why do you think that 2 is better than 1?
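The three ingredients lend themselves to a simple checklist. Here's a minimal sketch in Python (the class and field names are my own invention, not taken from any real bug tracker) that treats a report as incomplete when one of the ingredients is missing:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    observed: str   # 1. What did you see?
    expected: str   # 2. What did you expect to see?
    rationale: str  # 3. Why do you think that 2 is better than 1?

    def is_complete(self) -> bool:
        # A report is only useful when all three ingredients are filled in.
        return all(s.strip() for s in (self.observed, self.expected, self.rationale))

report = BugReport(
    observed="Deleting the last item from the list shows an error message",
    expected="The delete button is disabled when only one item remains",
    rationale="Buttons that never lead to an action should be disabled",
)
print(report.is_complete())  # True
```

A report with an empty `rationale` would fail the check, which is exactly the kind of entry that costs the most analysis time.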


Now if everyone starts filing their bug reports like that, I will have to spend a lot less time on analyzing them and can get back to fixing those defects sooner. Who knows... maybe we'll make that deadline after all.

Saturday, February 2, 2008

Scrum: story points, ideal man days, real man weeks

My team completed its seventh sprint of a project. Once again all stories were accepted by the product owner.

While one of the team members was giving the sprint demo, I started looking in more detail at some of the numbers. With seven sprints behind us, we've gathered quite a lot of data on the progress from sprint to sprint. That's the velocity, for XP practitioners.

Looking at the data



So far we've had 131 story points of functionality accepted by the product owner, so that's an average of 18-19 per sprint. The distribution has not really been all that stable though. Here's a chart showing the number of accepted story points per sprint:


Although it is a bit difficult to see the trend in sprints 1 to 5, we seemed to be going slightly upward. This is in line with what you'd expect in any project: as the team gets more used to the project and to each other, performance increases a bit.

The jump from sprint 5 to sprint 6 however is very clearly visible. This jump should come as no surprise when I tell you that our team was expanded from 3 developers to 5 developers in sprints 6 and 7. And as you can clearly see, those additional developers were contributing to the team velocity right from the start.

But how much does each developer contribute? To see that we divide the number of accepted story points per sprint by the number of developers in that sprint:

Apparently we've been pretty consistently implementing 5 story points per developer per sprint. There was a slight drop in sprint 6, which is also fairly typical when you add more developers to a project. But overall you can say that our velocity per developer has been pretty stable.

Given this stability it suddenly becomes a simple (but still interesting) exercise to try and project when the project will be completed. All you need in addition to the data from the previous sprints, is an indication of the total estimate of all stories on the product backlog. We've been keeping track of that number too, so plotting both the work completed vs. the total scope gives the following chart:
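That projection is just the remaining scope divided by the average velocity. A quick sketch (the per-sprint numbers and the backlog total here are hypothetical stand-ins; only the 131 accepted points come from our real data):

```python
import math

# Hypothetical per-sprint story points, summing to the 131 accepted so far.
accepted_per_sprint = [15, 16, 17, 18, 15, 25, 25]
total_backlog = 145  # hypothetical total estimate of all stories on the backlog

completed = sum(accepted_per_sprint)             # 131
velocity = completed / len(accepted_per_sprint)  # roughly 18.7 points per sprint
remaining = total_backlog - completed            # 14

sprints_left = math.ceil(remaining / velocity)
print(sprints_left)  # 1
```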

So it looks like we will indeed be finished with the project after one more sprint. That is, of course, if the product owner doesn't all of a sudden change the scope. Or if we find out that our initial estimates for the remaining stories were way off. After all: it's an agile project, so anything can happen.

Story points vs. ideal man days vs. real man weeks



Whenever I talk about this "number of story points per developer per sprint" to people on other projects, they inevitably ask the same question. What is a story point? The correct Scrum answer would be that it doesn't matter what unit it is. It's a story point and we do about five story points per developer per sprint.

But of course there is a different unit behind the story points. When our team estimates its stories, we ask ourselves the question: if I were locked into a room with no phone or other disturbances and a perfect development setup, after how many days would I have this story completed? So a story point is a so-called "ideal man day".

From the results so far we can see that this is apparently a pretty stable way to estimate the work required. And stability is what matters most, way more than, for example, absolute correctness.

A classic project manager might take the estimate of the team (in ideal man days) and divide that by 5 to get to the ideal man weeks. Then divide by the number of people in the team to get to the number of weeks it should take the team to complete the work. And of course they'll add some time to the plan for "overhead", being the benevolent leaders that they are. This will give them a "realistic" deadline for the project. A deadline that somehow is never made, much to the surprise and outrage of the classic project manager.

I'm just a Scrum master on the project. So I don't set deadlines. And I don't get to be outraged when we don't make the deadline. All I can do is study the numbers and see what they tell me. And what they tell me for the current project is that the numbers are pretty stable. And that's the way I like it.

But there is a bit more you can do with the numbers. If you know that the developers in the team estimate in "ideal man days", you can also determine how many ideal man days fit into a real week. For that you need to know the length of a sprint.

Our team has settled on a sprint length of four weeks. That's the end-to-end time between the sprints. So four weeks after the end of sprint 3, we are at the end of sprint 4. In those four weeks, we have two "slack days". One of those is for the acceptance test and demo. The other is for the retro and planning of the next sprint.

So there are two days of overhead per sprint. But there is a lot more overhead during the sprint, so in calculations that span multiple sprints I tend to think of those two days as part of the sprint.

So a sprint is simply four weeks. And in a sprint a developer on average completes 5 story points, which is just another way of saying 5 ideal man days. So in a real week there is 1.25 ideal man days!
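The arithmetic above can be written out in a couple of lines (nothing here beyond the numbers already mentioned):

```python
points_per_dev_per_sprint = 5   # one story point == one "ideal man day"
sprint_length_weeks = 4         # end-to-end, slack days included

ideal_days_per_real_week = points_per_dev_per_sprint / sprint_length_weeks
print(ideal_days_per_real_week)  # 1.25
```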

I just hope that our managers don't read this post. Because their initial reaction will be: "What? What are you doing the rest of the time? Is there any way we can improve this number? Can't you people just work harder?"

Like I said before: I don't believe in that logic. It's classic utilization-focused project management. It suggests that you should try to have perfect estimates and account for all variables so that you can arrive at a guaranteed delivery date. The problem with that is that it doesn't work! If there's anything that decades of software engineering management should have taught us, it is that there are too many unknown factors to get any kind of certainty about the deadline. So until we get more control of those variables, I'd much rather have a stable velocity than a high utilization.

Sunday, August 19, 2007

Scrum: utilization vs. velocity

At work we've recently started using Scrum for running some projects. As expected, we're slowly learning our lessons. One of the things we've been having a lot of discussion about recently is the meaning of the focus factor. Let me begin by explaining what a focus factor is, at least in my company.

To determine how much work you can do in a sprint, you need to estimate the top stories. We estimate these stories in "ideal man days" using planning poker. This means that each developer answers the question: if we lock you into a room each day without any distractions, after how many days would you have this story finished?

After these estimates we determine people's availability for the project. After all, they might also be assigned to other projects, if only for part of their time. Even people that have no other projects tend to have other activities: answering questions from customer support or consultants, department meetings, company-wide meetings, job interviews with candidates or just playing a game of foosball, table tennis or bowling on the Wii. So basically nobody is available to a project 100% of the time. At most it's 80% - 90% and on average it seems to be about 60% - 70%.

So the first stab at determining how much work someone can complete is:

  • available hours = contract hours * availability
But when you're working on the project, you're not always going to be contributing towards the goals that you've picked up. Within Scrum there is the daily Scrum meeting. It lasts no more than 15 minutes, but those are minutes that nobody in the team is working towards the goal. And after the meeting a few team members always stick around to discuss some problem further. Such time is very well spent, but it probably wasn't included in the original estimate. So it doesn't bring the "remaining hours" down very much. I see all this meeting, discussion, coaching and tutoring as necessary work. But it's work that doesn't bring the team much closer to the goal of the sprint. I used to call this overhead, but that sounded like we were generating waste. So in line with the agile world I switched to using the term focus factor. So now we have:
  • velocity = contract hours * availability * focus factor
So the speed at which we get things done (velocity) is the time we're working, minus the time we lose to non-project work, minus the time we lose on work that doesn't immediately get us closer to the goal. In the past I probably would have included a few more factors, but in an agile world this is already accurate enough to get a decent indication of how long it will take us to get something done.
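As a small worked example of that formula (the percentages below are illustrative picks from the ranges mentioned earlier, not measurements from our team):

```python
contract_hours = 40.0   # hours per week on the contract
availability = 0.65     # fraction left after other projects and activities
focus_factor = 0.7      # fraction of project time spent moving toward the goal

velocity_hours = contract_hours * availability * focus_factor
print(round(velocity_hours, 1))  # 18.2
```

So a nominally full-time developer contributes roughly 18 "estimate hours" per week toward the sprint goal in this example, which is why dividing raw estimates by contract hours produces deadlines that are never made.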

If there's one thing I've learned from the agile movement and Scrum it's to focus on "when will it be done" instead of "how much time will it take". So to focus on velocity instead of utilization.

Utilization is the territory of classic project management. It's trying to make sure that every hour of every employee is fully accounted for. So if they're programming, they should have a time-writing slot for programming; if they're meeting, there's a slot for meeting; if they're reviewing designs, there's a slot for that and if they're drinking coffee or playing the Wii... you get the picture. Of course there's no project manager that wants all that level of detail. But in general they are focused on what you're spending your time on.

Agile thinkers see this really differently. They say: it doesn't really matter how much time you spend, what matters is when it is done. This sounds contradictory so let's see if a small example can make it clearer what I'm trying to say.

If I tell my boss that some feature he wants will be done at the end of next week, he is interested in only one thing: that it is done next week. If we get it done on time, he doesn't care whether I spent two hours per day on it or whether it was twelve hours per day. I care about it of course, because I don't want to work late every night. And there's also a limit to the amount of gaming I like to do during a day, so two hours per day will leave me bored quickly. But to my boss, all that matters is when I deliver, not how much effort it took.

This is why the focus for Scrum projects is on velocity and not on utilization. So in Scrum you want to know how many hours you still need to spend on a job, not how many you've already spent on it. A classic project manager might be really proud that you worked late all week and clocked in 50+ hours. An agile project manager will note that you reduced the "hours remaining" by 10 hours and nothing more. If you're looking for compliments on all your hard work, then Scrum might not be for you.


Thursday, July 12, 2007

My first game of planning poker

Yesterday I took part in the first sprint planning meeting of my life. We have started up a new development project and we decided to use Scrum as the process. The project is actually quite small, so we have just two developers (myself included) and a product owner for it.

The product owner had prepared nicely and had a quite extensive product backlog. He had even filled in a "how to demo" field for a lot of the stories, which I'm not sure he's supposed to do before the sprint planning. In any case, it wasn't very handy to have the "how to demo" already in place, as it makes it harder to discuss alternative solutions for the same functionality.

After the product owner had explained each story, we were to come up with an estimate of how much work (in ideal man days/story points) it would be to implement the story. I have done many of these estimation sessions before, but this time we decided to play a game of planning poker. Being the good scrum master that I am, I had brought two packs of (rather improvised) planning poker cards.

The other developer and I talked through the story, determining what it would take. We were basically already breaking the story down in tasks, which was a nice head start for the actual breaking down we planned to do later. After agreeing on the tasks, we would go into our poker deck and select the card matching our estimate. When we both had selected a card, we'd pull it out of the deck at the same time - revealing our estimate.

Now I must admit that I wasn't too impressed with the transparency that this estimating method brought. I guess -just as with real poker- you shouldn't play with just two players. There was actually only one story where we seemed to have a big difference in estimate: 8 vs 13 points. But as it turns out, our decks just didn't have any numbers in between 8 and 13. We had both wanted to select a 10, but since that wasn't there we just had to pick something slightly higher or lower. Being the planning pessimist that I am, I of course picked the 13. :-)

So there you have it: I played the game of planning poker. It wasn't anything special or extremely different from the ways I've done estimations before. But I guess that contrary to popular belief, being extremely different is not what Scrum is about. What is it about then, you ask? I'll let you know when I find out. Because if I answered that question now, I'd just be repeating the Scrum/Schwaber mantra.

Friday, June 1, 2007

This scrum? Or that scrum? Or that scrum?

This week I took a training that now officially makes me a certified scrum master.


Or wait... let's make it sound even more official: I'm now a Certified ScrumMaster! There, that looks a lot better. And geeky, with a camel-case word in it. But actually all it takes to become a certified scrum master is completing a two day training. Now the training wasn't bad, mind you. But I do feel that a certification should require some kind of exam. Especially a certification that claims you're a master in something.

One of the things that I really noticed during our training is the apparent difference between the way our trainer implements scrum in companies and the way I understood it from Ken Schwaber's books (on agile software development and on agile project management) and video (Google TechTalk on scrum).

I have the feeling that in Ken's approach the role of the Product Owner is much less involved than what we've been taught this week. And that the actual ScrumMaster role is a lot less intensive than what Ken suggests in his TechTalk.

The Product Owner role we've learned during our training was pretty close to what Toyota seems to call the Chief Engineer. And that's a term whose meaning we can probably all imagine. In my recent history it has been called: technical lead, technical project lead, technical team lead, principal developer and I'm probably still missing a few. The only real difference I see is that the Chief Engineer is empowered, meaning that he/she has the backing of management and is allowed to do whatever is necessary to get a product or project completed. But of course having a Chief Engineer somewhat dilutes the role of the scrum master, whose major responsibility is guarding that the team really follows the process. With such a simple process, that should never be a full-time job. Which Ken seems to suggest it often is.

Of course in scrum it doesn't really matter if companies implement it slightly differently. After all, this is an agile process. So what's a little flexibility between friends? And I actually believe that this flexibility is what makes scrum work for so many companies. Have a little faith in the ability of your people to somehow work towards the goals that you set out for them. It of course depends on the people, but then again... that's what I already said a few months ago.

What this training mostly showed me is that it matters a lot who helps you implement scrum in your organization. If Ken Schwaber had taught us, we would probably have come away with a "completely" different interpretation of scrum. Not necessarily better or worse, but definitely different. Now that is something you might want to consider before you book the first scrum training for your company.

Monday, May 14, 2007

Successfully implementing Scrum

At my company they're now introducing Scrum. Traditionally most of our projects were done using a sort-of waterfall approach. But in recent years it has become harder and harder to get product releases onto the market in a reasonable amount of time using this approach, so management has been looking for a solution.

I have actually led some projects in the past where we already used an iterative development process. Some of these projects have been immensely successful, while others have failed to deliver the intended result in a reasonable amount of time. I am curious to see whether following the more standard -and better defined- Scrum framework will result in more reliable success.

When I recently watched a Google Tech Talk where Ken Schwaber explains some of Scrum, he mentioned that only about 35% of the companies that try to implement Scrum actually succeed in doing so. And while everyone -including Schwaber of course- is very positive about Scrum's advantages, I haven't seen many details about what can make it fail. And more importantly: how to avoid the dangers that might make it fail.

In other words: how do we avoid becoming part of the 65% that doesn't succeed in implementing Scrum? Does anyone know?

Saturday, March 31, 2007

Some of us are more equal than the others

A few days ago I told you about the three forces that are often at work within a project: what, when and how. This is a follow-up to tell you what the discussion I had was about.

I feel strongly that there should be a balance between what is built, how it's built and when it will be done. You need to strike a balance between these three forces to make any project a success. That balance can be achieved in two really different ways: either make it all the responsibility of one person, or make each force the responsibility of a separate person.

Years ago when projects were often relatively simple, scope was relatively clear and deadlines were realistic, it was reasonable to have one person run an entire project. And even today there are projects and people that can make the "one responsible person" setup work. Mind you: that one person doesn't have to do all the work him or herself. But he or she is responsible for all three facets (what, when and how) at the same time and has to strike a balance between them all the time. Constantly balancing whether having this extra feature is really worth the extra time and risk. Or whether changing the code structure this close to the deadline is really worth the risk it introduces of not being done on time.

But most projects these days have grown way too complex for one person to handle all these responsibilities. These days it is more common to have three separate people fulfill the three separate roles: the functional guy, the technical guy and the "deadline guy". They're always fighting with each other and often you will find two of them ganging up on the third. But in the end they always have to find a compromise on which all three agree.

Now imagine what happens if one of these three is also made responsible for the project as a whole. What happens to the balance of power between the three then? They're all responsible for one aspect, but suddenly one of them is also responsible for the total result. This completely destroys the delicate balance of power that was so carefully introduced by having a different person be responsible for each of the factors. You might as well just have one person be responsible for all three powers. In fact, that would be better. Because then at least there is one leader, instead of three leaders of which one is a bit more important than the others.

Tuesday, March 27, 2007

The trias projectica


I had an interesting (that seems to be my pet word these days) discussion the other day about who is in charge of a project. Who is responsible for the timely and correct delivery of a project? As is often the case, "the other side" claimed the project manager bears final responsibility for the entire project. I disagree strongly with that view.



As far as I can see there are (at least) three key factors in a project:

  • what do we build?
  • how do we build it?
  • when do we deliver it?
These are known by many other names (like: features, quality, deadline), but I'd like to stick with what, how and when for now. The what is determined by a functional designer, or these days more often either a product manager or a "usability" expert. The how will normally be determined by the developers, the technical lead or maybe even an "architect". The when is the responsibility of the project manager, the resource planner or the "director".

As you might know or notice, there is a dependency between these factors. You can choose to create more functionality (what) in the same way (how), but that will certainly take more time (when). You can also say that you want more functionality and want it delivered in the same time, no matter how it's built. It will certainly result in lower quality (not to mention less motivation for your developers), but it can be done.

The important thing is that a change in one of the factors influences the others. And the trick to a successful project is balancing these three factors.

Tuesday, February 27, 2007

Agile is about the people

When Steve Yegge's article on the good and bad of agile development showed up in my feed reader a few months ago, I decided to pass on it. The summaries I saw sounded too much like Steve just needed to vent his frustration with all the agile hype. Which is fine by me, but doesn't make it onto my already overpopulated reading stack.

But today one of my many managers put the article on my desk, saying it was an interesting read for him. I know when not to argue, so I read it on my way home. Don't worry, I travel by public transport so didn't endanger anyone. Although I did draw some attention by nodding in approval and making snorting noises while reading.

Steve has some very valid points about the whole Agile Methodology hype. Luckily he also tells plenty about a place where development is done in a less than formal way very successfully: Google. We all hear the stories about why Google is such a great place to work. But Steve provides insight into why it actually pays for Google to be such a great place to work. If you're interested in that, just read Steve's article. I just want to talk a bit more about my experience with agile processes.

I've seen less formal development processes work and I've seen them fail. I'm still trying to figure out what causes success or failure. But I'm getting more and more convinced that it isn't caused by the project and it isn't caused by the process; it is caused by the people.

When agile processes worked for me, it was because the people took ownership and felt in control of their project.

The ownership wasn't imposed upon them from above by a manager. They took responsibility because they felt involved in the project. Simply by letting people make their own call in most cases, they took ownership of the project and thus took responsibility for its success. Which also means they took credit for the success of the project. I've never worked with an incentive system such as Google's, but even smaller rewards work wonders here.

The people also felt in control of the project. Progress was monitored (but not planned) by the project manager. And progress was even somewhat predictable. When unexpected problems got in the way of getting something done, it was clear what to do with the problem. Create a separate task/feature for it and get on with what you're working on. Sure the list of tasks would grow at times, much to the frustration of project managers that wanted to finalize the project. But at least it was there, out in the open and it couldn't be denied. And it was always clear for the developer what to do next: just look at the list.

But I've also seen the lack of a rigid process fail. In those cases people weren't taking ownership. And they certainly didn't feel in control of things. Like I said before, I'm still not entirely sure why this happened. But I have a feeling that the people themselves had something to do with it.

When someone isn't comfortable with the subject matter, he is not likely to take ownership. Solving this problem is actually quite easy: either he should get comfortable with the subject or he should not work on it.

The lack of the feeling of control is slowly becoming clearer to me, in large part thanks to insightful articles such as Steve's. We made people give estimates on tasks that were assigned to them. But often the people were not yet in control, so they couldn't give reliable estimates. This led to them not making their self-imposed deadlines, which many project managers like to point out in useless progress meetings. And if you hear a manager say "we're not making enough progress" week after week, you have to be pretty strong to keep your feeling of control.

That's why I like Steve's (or actually Google's) approach of not having estimates. Given enough tasks/features the size of the list will become enough to estimate by. If there were thirty items on the list a week ago and there are now twenty, a very simple estimate is that you'll need two more weeks to finish what's on the list. It might or might not be very accurate. But in my experience the same is true for estimates that take a lot more time to produce.
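The list-size estimate from that example is just a division; a quick sketch using the same numbers:

```python
items_last_week = 30
items_now = 20

burn_rate = items_last_week - items_now   # items finished per week
weeks_remaining = items_now / burn_rate   # extrapolate at the same pace
print(weeks_remaining)  # 2.0
```

Crude, certainly. But it costs nothing to produce, which is the whole point compared to hand-made estimates.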