Friday, September 17, 2010

Color-coding your scrum board

At my company we've been using Scrum for a few years now. And even though I'm quite sure we're not living up to Schwaber's and Sutherland's standards, we seem to have found a way to make the process work for us across many projects.

Like many companies we started out with a simple physical scrum wall:



Later we started putting the burndowns on our intranet so that everyone could see them. And we started using colored post-its to indicate different types of work.

In our case we use yellow for development tasks, pink for non-development tasks (i.e. testing and documentation) and orange for defects. There's no specific reason for these colors; they just happened to be the post-it colors we had available when we started. Since then my mind has gotten used to them and I haven't yet heard a reason to switch, although some of our tech writers frown at being "the guys who do the pink work".



As we've been growing our use of Scrum, we've also been using more and more tools. First came the online burndown charts. Then we started using a virtual scrum wall, so that we could share the same wall between distributed teams. It was a great boost for our distributed development, although most team members still sometimes long for the days when they had a physical wall. Nothing beats the satisfaction you get from moving a physical post-it across 5 inches of whiteboard space.

We've been using Scrum for Team System for a while now to keep track of our stories and tasks. And I thought we were quite happy with it. Sure, we lost the ability to color-code our task types. But the task board we use has nice colors too, although it uses them to code the columns. So "done" is green, etc.

But recently I noticed that it was difficult to quickly understand the status of a sprint by simply looking at the board:



So what is the status of this project? It's hard to say without looking at the details of each task. It could be that everything is going fine, or it could be that things are horribly wrong. There is no way to tell which one it is, unless you get closer to the board and read.

So over the weekend I re-encoded one of our boards with the colors we used back in our "physical days". Now look at the same board where we've used colored post-its:


Yellow = development tasks, pink = non-development tasks, orange = bugs

When we applied this color coding to our existing scrum board, I was shocked at the amount of insight it could add.

So I wrote a quick tool to generate these color-coded task boards for us. The process is purely automatic: read the data from Team System, apply a few simple rules (e.g. anything with the word "bug", "defect" or "issue" in it must be a defect) and output an HTML version of the data. But even with such a simple system we've been able to more easily get a feeling of where all projects stand by simply glancing at the board.
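To give an idea of the kind of rules involved, here's a rough sketch in Python. It is not the actual tool: the real version reads its work items from Team System, while this sketch uses a hard-coded list, and the field names and keyword lists are purely illustrative.

```python
# Rough sketch of the keyword-based color coding described above.
# The work items, their "title"/"state" fields and the keyword lists are
# illustrative assumptions; the real tool reads its data from Team System.

def classify(title):
    """Map a task title to a post-it color using simple keyword rules."""
    lowered = title.lower()
    if any(word in lowered for word in ("bug", "defect", "issue")):
        return "orange"   # defects
    if any(word in lowered for word in ("test", "document")):
        return "pink"     # non-development tasks
    return "yellow"       # development tasks

def render_board(items):
    """Render the items as colored post-its in a bare-bones HTML page."""
    cells = [
        '<div class="postit" style="background:%s">%s (%s)</div>'
        % (classify(item["title"]), item["title"], item["state"])
        for item in items
    ]
    return "<html><body>\n%s\n</body></html>" % "\n".join(cells)

if __name__ == "__main__":
    sample = [
        {"title": "Implement login screen", "state": "In progress"},
        {"title": "Write user documentation", "state": "To do"},
        {"title": "Fix issue with date parsing", "state": "Done"},
    ]
    print(render_board(sample))
```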

So what is your experience with scrum walls? Do you have any tricks you use to help visualize the data?

Thursday, March 4, 2010

Who estimates the stories in Scrum?

A few years ago we did a project with an outsourcing company. We gave them the current source code of a product, we gave them the backlog for the new version and we asked them to start implementing that backlog top to bottom.

Since the external team was new to both the product and the Scrum process, I decided to also do some rough estimates myself. Given my experience with both the product and the process, those estimates could serve as a rough baseline.

The progress


The remote team worked on the project for 5 sprints. During that time we saw an interesting trend.

As the team finished each sprint, some of their work was accepted and other work wasn't. Work was sometimes rejected because of its quality, but often it was rejected because it simply didn't do everything that the product owner had expected. In those cases the team put another story on the backlog for the missing parts and provided a fresh estimate for the "new" work. They also re-estimated the existing stories based on the newly gained insight of the previous sprint.

So over time the team's backlog grew in size. That is not uncommon, but look at the burnup chart:


burnup 1: based on team estimates

Based on this burnup chart, the project is in serious trouble. The team seems to be doing great, delivering more work with every sprint. But even though more and more work is delivered every sprint, the team is not getting closer to the goal.

The fifth sprint here is really disastrous: the team stopped delivering more and more work. Did they burn out? How are we ever going to get the project out the door?

For comparison, let's look at the data based on my estimates, which the team never saw and didn't commit to:


burnup 2: based on estimates by an external stakeholder

On my end, for every new story the team added to the backlog I checked whether I had considered that part in my original estimate. If so, I split the story points I had originally estimated over the two stories. If the new story was functionality I had not expected in my estimate, I provided a fresh estimate, which increased the backlog on my side too. As you can see that happened only once, but the increase was substantial (about 10%).

In my chart too the team seems to be burning out a bit in sprint 5. But it doesn't look half as bad as in the first burnup chart. They are still getting closer to the goal.

And in both charts the team seems to be about 20% done with the total amount of work.

Analysis


So what's the difference between the two charts and the two estimators? From my perspective there are two major differences: the stability of the progress and who made the estimates.

How stable is the progress?


The charts below show the velocity per sprint as derived from the burnup charts above:


velocity 1: velocity from sprint to sprint in burnup 1

velocity 2: velocity from sprint to sprint in burnup 2

The burnups don't really have a lot of data. But if you look at the first velocity chart, you can see that sprints 1 to 4 show a somewhat exponential growth in velocity (1.3x, 1.4x, 1.8x).
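To make that concrete, here's a tiny sketch with made-up numbers; it just shows how the per-sprint velocity and its growth factors fall out of a cumulative burnup series.

```python
# Sketch: deriving per-sprint velocity (and its sprint-over-sprint growth)
# from a cumulative burnup series. The numbers are made up for illustration.

delivered = [0, 6, 8, 11, 20, 21]  # cumulative points at the end of each sprint

velocity = [b - a for a, b in zip(delivered, delivered[1:])]
growth = [round(v2 / v1, 1) for v1, v2 in zip(velocity, velocity[1:]) if v1]

print("velocity per sprint:", velocity)
print("growth factors:", growth)
```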

The scope/goal line in the corresponding burnup chart (burnup 1) shows a constant growth, mostly because I don't have the exact data anymore.

So at some point the two lines in burnup 1 are going to intersect, but it is pretty difficult to determine where they'll intersect with the naked eye.

The second burnup doesn't have this growth in velocity and the scope increase is about 10% over 5 sprints.

It is a lot easier to see where this project is going to end. And isn't scrum all about transparency and easy-to-use charts and data?

Who provides the estimates?


The second question I posed above is who provides these estimates. With all the hype around lean development I've learned that one way to optimize output is to eliminate waste. Anything that isn't delivered to the customer is waste and should be eliminated.

Who needs these estimates? The customer? I don't think my customer cares about estimates. He cares about getting as much software as possible for his money. In fact, while the team was providing these estimates they could also have been building more software.

So is it the team that needs these estimates then? After all without those estimates, how do they know what they can commit to? Well... the team does need to know how big a story is before they can commit to it. But they only need to know that at the start of each sprint. And only for the stories that they think they might do in that sprint. So although the team needs to know the size of each story it commits to, it doesn't need to know the size of all stories at the start of the project. Nor do they need to re-estimate the stories (another form of waste).

So the only person who actually needs those estimates is the guy drawing the charts. In this case that was me, an external stakeholder who is not part of the team. In many cases it will be the scrum master, who needs those estimates to give his stakeholders some view of the progress towards the overall goal. In other cases it will be the product owner, since he is most interested in seeing his return on investment.

Conclusion


My suggestion: let the guy who needs the numbers come up with them. If you don't feel comfortable with that, make a reasonable guess. And if you don't even feel comfortable guessing, set all stories to the same size. In the end it doesn't really matter too much and you'll allow the team to focus on what really matters: building working software.

Saturday, February 2, 2008

Scrum: story points, ideal man days, real man weeks

My team completed its seventh sprint of a project. Once again all stories were accepted by the product owner.

While one of the team members was giving the sprint demo, I started looking in more detail at some of the numbers. With seven sprints behind us, we've gathered quite a bit of data on the progress from sprint to sprint. That's the velocity, for XP practitioners.

Looking at the data



So far we've had 131 story points of functionality accepted by the product owner, so that's an average of 18-19 per sprint. The distribution has not really been all that stable though. Here's a chart showing the number of accepted story points per sprint:


Although it is a bit difficult to see the trend in sprints 1 to 5, we seemed to be going slightly upward. This is in line with what you'd expect in any project: as the team gets more used to the project and to each other, performance increases a bit.

The jump from sprint 5 to sprint 6 however is very clearly visible. This jump should come as no surprise when I tell you that our team was expanded from 3 developers to 5 developers in sprint 6 and 7. And as you can clearly see, those additional developers were contributing to the team velocity right from the start.

But how much does each developer contribute? To see that we divide the number of accepted story points per sprint by the number of developers in that sprint:

Apparently we've been pretty consistently implementing 5 story points per developer per sprint. There was a slight drop in sprint 6, which is also fairly typical when you add more developers to a project. But overall you can say that our velocity per developer has been pretty stable.

Given this stability it suddenly becomes a simple (but still interesting) exercise to try and project when the project will be completed. All you need in addition to the data from the previous sprints is an indication of the total estimate of all stories on the product backlog. We've been keeping track of that number too, so plotting the work completed vs. the total scope gives the following chart:

So it looks like we will indeed be finished with the project after one more sprint. That is, of course, if the product owner doesn't all of a sudden change the scope. Or we find out that our initial estimates for the remaining stories were way off. After all: it's an agile project, so anything can happen.
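For what it's worth, the projection itself is little more than a division. In the sketch below, the 131 accepted story points, the roughly 5 points per developer per sprint and the team of 5 developers come from the numbers above; the total scope of 155 points is a made-up placeholder for the product backlog estimate.

```python
# Sketch: projecting the number of remaining sprints from a stable velocity.
# accepted_points, points_per_dev_per_sprint and developers come from this
# post; total_scope is a hypothetical placeholder.

import math

accepted_points = 131
total_scope = 155                # hypothetical total estimate of the backlog
points_per_dev_per_sprint = 5
developers = 5                   # team size in the most recent sprints

velocity = points_per_dev_per_sprint * developers
remaining = total_scope - accepted_points
print("sprints remaining:", math.ceil(remaining / velocity))  # -> 1
```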

Story points vs. ideal man days vs. real man weeks



Whenever I talk about this "number of story points per developer per sprint" to people on other projects, they inevitably ask the same question: what is a story point? The correct Scrum answer would be that it doesn't matter what unit it is. It's a story point and we do about five story points per developer per sprint.

But of course there is a different unit behind the story points. When our team estimates its stories, we ask ourselves the question: if I were locked into a room with no phone or other disturbances and a perfect development setup, after how many days would I have this story completed? So a story point is a so-called "ideal man day".

From the results so far we can see that apparently this is a pretty stable way to estimate the work required. And stability is most important, way more important than for example absolute correctness.

A classic project manager might take the estimate of the team (in ideal man days) and divide that by 5 to get to the ideal man weeks. Then divide by the number of people in the team to get to the number of weeks it should take the team to complete the work. And of course they'll add some time to the plan for "overhead", being the benevolent leaders that they are. This will give them a "realistic" deadline for the project. A deadline that somehow is never made, much to the surprise and outrage of the classic project manager.

I'm just a Scrum master on the project. So I don't set deadlines. And I don't get to be outraged when we don't make the deadline. All I can do is study the numbers and see what they tell me. And what they tell me for the current project is that the numbers are pretty stable. And that's the way I like it.

But there is a bit more you can do with the numbers. If you know that the developers in the team estimate in "ideal man days", you can also determine how many ideal man days fit into a real week. For that you need to know the length of a sprint.

Our team has settled on a sprint length of four weeks. That's the end-to-end time between the sprints. So four weeks after the end of sprint 3, we are at the end of sprint 4. In those four weeks, we have two "slack days". One of those is for the acceptance test and demo. The other is for the retro and planning of the next sprint.

So there are two days of overhead per sprint. But there is a lot more overhead during the sprint, so in calculations that span multiple sprints I tend to think of those two days as part of the sprint.

So a sprint is simply four weeks. And in a sprint a developer on average completes 5 story points, which is just another way of saying 5 ideal man days. So in a real week there are 1.25 ideal man days!
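As a quick sanity check on that arithmetic (the 60 ideal-man-day estimate and the team of 3 developers below are just example numbers, not from any real project):

```python
# A developer completes about 5 ideal man days per 4-week sprint, so:
ideal_days_per_real_week = 5 / 4          # 1.25

# Turning that around for a hypothetical estimate of 60 ideal man days
# with 3 developers:
estimate_ideal_days = 60
developers = 3
weeks = estimate_ideal_days / (developers * ideal_days_per_real_week)
print(weeks)  # 16.0 real weeks, i.e. four 4-week sprints
```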

I just hope that our managers don't read this post. Because their initial reaction will be: "What? What are you doing the rest of the time? Is there any way we can improve this number? Can't you people just work harder?"

Like I said before: I don't believe in that logic. It's classic utilization-focused project management. It suggests that you should try to have perfect estimates and account for all variables so that you can come to a guaranteed delivery date. The problem with that is that it doesn't work! If there's anything that decades of software engineering management should have taught us, it is that there are too many unknown factors to get any kind of certainty on the deadline. So until we get more control of those variables, I'd much rather have a stable velocity than a high utilization.

Saturday, August 25, 2007

Online burndown chart generator

One of the aspects of Scrum is its focus on transparency - getting all information out in the open. And one of the areas that enables the transparency is the burndown chart. It's a public posting of the progress of the team throughout its current sprint.

On the horizontal axis you see the days of this sprint. The vertical axis describes the amount of work. At the top of the vertical axis is the number of "ideal man hours" we committed to for this sprint. The straight diagonal line is the "ideal burndown" that we're aiming for. The slightly less straight line is our actual burndown. As you can see this chart is from somewhere during the third week of our four-week sprint and we're slightly above target. But things don't look as desperate as a few days before, thanks to some colleagues getting back from holidays (as it says in the small scribbling that you probably can't read).

As a Scrum master I like to post this information as publicly as I can. So just having it on the wall of our team room isn't good enough, since there are many people who never visit our team room. Ideally I'd like to have the burndown chart projected on a wall in the central hallway of our office, so everyone can see it first thing when they come in in the morning. But as a nice step along the way, I chose to publish the chart (and the rest of our product backlog) on our project wiki.

In the first sprints I did this by taking a photograph of the burndown chart every morning, right after updating it. I'd then upload the photo to our wiki. The only problem is... uploading them every day turned out to be too much of a hassle. So the wiki actually only got updated once a week. And that's not good for transparency of course.

So this time around we went searching for a simple tool that would lower the threshold of updating the burndown chart on our wiki. We searched for an extension to MediaWiki that allows you to create a chart by just entering the numbers in your wiki text. That turned out to be quite a challenge. There are many charting and drawing extensions for MediaWiki, but they either didn't do what I wanted or we couldn't get them to work on our wiki.

In the end I just gave up and wrote a simple web page that, when fed the right parameters, will return a PNG image of the burndown chart. You call the page like this:

  • burndown.jsp?days=1,2,3,6,7,8,9,10,13,14&work=200,170,165,150,125,95
And the page will return the following image:
So the days parameter indicates the day numbers shown on the bottom. I entered all of them for the entire sprint right away. The work parameter is the work remaining. I just entered the values that I know, which is why the green line stops halfway through.

The generated chart is really simple and not very pretty. But it is very easy to keep up to date and that's what counts most. I just add the remaining hours at the end of the URL every morning... and that's it.
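If you want to script that daily update, something along these lines will do; the sketch only uses the days and work parameters shown above, nothing else.

```python
# Sketch: building the burndown chart URL from its two documented parameters.

BASE = "http://apps.vanpuffelen.net/charts/burndown.jsp"

def burndown_url(days, work):
    """Return the chart URL for the given day numbers and remaining work."""
    return "%s?days=%s&work=%s" % (
        BASE,
        ",".join(str(d) for d in days),
        ",".join(str(w) for w in work),
    )

days = [1, 2, 3, 6, 7, 8, 9, 10, 13, 14]   # all sprint days, entered up front
work = [200, 170, 165, 150, 125, 95]       # remaining hours, one value per day so far

print(burndown_url(days, work))
# Each morning: append the new remaining-hours value to `work` and regenerate.
```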

Although I consider this generator a stopgap solution until I find something better, I imagine it might also be useful to other budding Scrum masters. For that reason I've put the page online for public use at http://apps.vanpuffelen.net/charts/burndown.jsp. Just click the link and you'll get some usage examples.

Let me know if this generator is useful to you in the comments section. Also let me know if there's something wrong with it and I'll do my best to fix it.

Update (January 1st, 2010): at my company we've created a custom version of this same tool and used it in many projects over the last few years. This public burndown generator has drawn over 60,000 charts in 2009 alone, so apparently we're not the only ones who use burndown charts. That's why I've now updated the tool with the best features that we've added over time at my company. Check the latest version at http://apps.vanpuffelen.net/charts/burndown.jsp for all the features and let me know what you think of them.

Sunday, August 19, 2007

Scrum: utilization vs. velocity

At work we've recently started using Scrum for running some projects. As expected we need to slowly learn the lessons. One of the things we've been having a lot of discussion about recently is the meaning of the focus factor. Let me begin by explaining what a focus factor is, at least in my company.

To determine how much work you can do in a sprint, you need to estimate the top stories. We estimate these stories in "ideal man days" using planning poker. This means that each developer answers the question: if we lock you into a room each day without any distractions, after how many days would you have this story finished?

After these estimates we determine people's availability for the project. After all, they might also be assigned to other projects, if only for part of their time. Even people who have no other projects tend to have other activities: answering questions from customer support or consultants, department meetings, company-wide meetings, job interviews with candidates, or just playing a game of foosball, table tennis or bowling on the Wii. So basically nobody is available to a project 100% of the time. At most it's 80%-90% and on average it seems to be about 60%-70%.

So the first stab at determining how much work someone can complete is:

  • available hours = contract hours * availability
But when you're working on the project, you're not going to always be contributing towards the goals that you've picked up. Within Scrum there is the daily Scrum meeting. It lasts no more than 15 minutes, but those are minutes during which nobody in the team is working towards the goal. And after the meeting a few team members always stick around to discuss some problem further. Such time is very well spent, but it probably wasn't included in the original estimate. So it doesn't bring the "remaining hours" down very much. I see all this meeting, discussion, coaching and tutoring as necessary work. But it's work that doesn't bring the team much closer to the goal of the sprint. I used to call this overhead, but that sounded like we were generating waste. So, following the agile world's lead, I switched to using the term focus factor. So now we have:
  • velocity = contract hours * availability * focus factor
So the speed at which we get things done (velocity) is the time we're working, minus the time we lose to non-project work, minus the time we lose on work that doesn't immediately get us closer to the goal. In the past I probably would have included a few more factors, but in an agile world this is already accurate enough to get a decent indication of how long it will take us to get something done.
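To put some example numbers on that formula (the percentages below are made up, but in the ranges mentioned earlier in this post):

```python
# Example numbers for: velocity = contract hours * availability * focus factor
contract_hours = 40     # hours per week
availability = 0.65     # share of time actually available to this project (made up)
focus_factor = 0.7      # share of project time spent on sprint work (made up)

velocity = contract_hours * availability * focus_factor
print(round(velocity, 1))  # 18.2 hours of estimated ("ideal") work per week
```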

If there's one thing I've learned from the agile movement and Scrum it's to focus on "when will it be done" instead of "how much time will it take". So to focus on velocity instead of utilization.

Utilization is the territory of classic project management. It's trying to make sure that every hour of every employee is fully accounted for. So if they're programming, they should have a time-writing slot for programming; if they're meeting, there's a slot for meetings; if they're reviewing designs, there's a slot for that; and if they're drinking coffee or playing the Wii... you get the picture. Of course no project manager wants quite that level of detail. But in general they are focused on what you're spending your time on.

Agile thinkers see this really differently. They say: it doesn't really matter how much time you spend; what matters is when it is done. This sounds contradictory, so let's see if a small example can make clearer what I'm trying to say.

If I tell my boss that some feature he wants will be done at the end of next week, he is interested in only one thing: that it is done next week. If we get it done on time, he doesn't care whether I spent two hours per day on it or whether it was twelve hours per day. I care about it of course, because I don't want to work late every night. And there's also a limit to the amount of gaming I like to do during a day, so two hours per day will leave me bored quickly. But to my boss, all that matters is when I deliver, not how much effort it took.

This is why the focus for Scrum projects is on velocity and not on utilization. So in Scrum you want to know how many hours you still need to spend on a job, not how many you've already spent on it. A classic project manager might be really proud that you worked late all week and clocked in 50+ hours. An agile project manager will note that you reduced the "hours remaining" by 10 hours and nothing more. If you're looking for compliments on all your hard work, then Scrum might not be for you.


Thursday, July 12, 2007

My first game of planning poker

Yesterday I took part in the first sprint planning meeting of my life. We have started up a new development project and we decided to use Scrum as the process. The project is actually quite small, so we have just two developers (myself included) and a product owner for it.

The product owner had prepared nicely and had a quite extensive product backlog. He had even filled in a "how to demo" field for a lot of the stories, which I'm not sure he's supposed to do before the sprint planning. In any case it wasn't very handy to have the "how to demo" in place, as it makes it harder to discuss alternative solutions for the same functionality.

After the product owner had explained each story, we were to come up with an estimate of how much work (in ideal man days/story points) it would be to implement the story. I have done many of these estimation sessions before, but this time we decided to play a game of planning poker. Being the good scrum master that I am, I had brought two packs of (rather improvised) planning poker cards.

The other developer and I talked through the story, determining what it would take. We were basically already breaking the story down in tasks, which was a nice head start for the actual breaking down we planned to do later. After agreeing on the tasks, we would go into our poker deck and select the card matching our estimate. When we both had selected a card, we'd pull it out of the deck at the same time - revealing our estimate.

Now I must admit that I wasn't too impressed with the transparency that this estimating method brought. I guess, just as with real poker, you shouldn't play with just two players. There was actually only one story where we seemed to have a big difference in estimate: 8 vs 13 points. But as it turns out, our decks just didn't have any numbers between 8 and 13. We had both wanted to select a 10, but since that wasn't there we just had to pick something slightly higher or lower. Being the planning pessimist that I am, I of course picked the 13. :-)

So there you have it: I played the game of planning poker. It wasn't anything special or extremely different from the ways I've done estimations before. But I guess that contrary to popular belief, being extremely different is not what Scrum is about. What is it about then, you ask? I'll let you know when I find out. Because if I answered that question now, I'd just be repeating the Scrum/Schwaber mantra.

Friday, June 1, 2007

This scrum? Or that scrum? Or that scrum?

This week I took a training that now officially makes me a certified scrum master.


Or wait... let's make it sound even more official: I'm now a Certified ScrumMaster! There, that looks a lot better. And geeky, with a camel-case word in it. But actually all it takes to become a certified scrum master is completing a two-day training. Now the training wasn't bad, mind you. But I do feel that a certification should require some kind of exam. Especially a certification that claims you're a master of something.

One of the things that I really noticed during our training is the apparent difference between the way our trainer implements scrum in companies and the way I understood it from Ken Schwaber's books (on agile software development and on agile project management) and video (Google TechTalk on scrum).

I have the feeling that in Ken's approach the role of the Product Owner is much less involved than what we've been taught this week. And that the actual ScrumMaster role is a lot less intensive than what Ken suggests in his TechTalk.

The Product Owner role we've learned during our training was pretty close to what Toyota seems to call the Chief Engineer. And that's a term whose meaning we can probably all imagine. In my recent history it has been called: technical lead, technical project lead, technical team lead, principal developer, and I'm probably still missing a few. The only real difference I see is that the Chief Engineer is empowered, meaning that he/she has the backing of management and is allowed to do the things necessary to get a product or project completed. But of course having a Chief Engineer somewhat dilutes the role of the scrum master, whose major responsibility is guarding that the team really follows the process. With such a simple process, that should never be a full-time job. Which Ken seems to suggest it often is.

Of course in scrum it doesn't really matter if companies implement it slightly differently. After all, this is an agile process. So what's a little flexibility between friends? And I actually believe that this flexibility is what makes scrum work for so many companies. Have a little faith in the ability of your people to somehow work towards the goals that you set out for them. It of course depends on the people, but then again... that's what I already said a few months ago.

What this training mostly showed me is that it really matters a lot who helps you implement scrum in your organization. If Ken Schwaber had taught us, we probably would have come away with a "completely" different interpretation of scrum. Not necessarily better or worse, but definitely different. Now that is something that you might want to consider before you book the first scrum training for your company.

Wednesday, May 30, 2007

Unused features might sell the product

During a Scrum training at my company, there was an interesting discussion between a few people.

As you might know Scrum is based on Lean thinking, which focuses on producing as little waste as possible. Our Scrum trainer presented a slide showing that 45% of the functionality in a product is never used, which makes it the equivalent of Lean's worst kind of waste: over-production. It should therefore, according to Lean thinking, be eliminated or avoided.

One of the attendees asked: "but what if those 45% contains features that helped sell the product?". Which is a valid question, especially in a market where the buyers of a product are often not the users of that product.

Unfortunately nobody knew a really good answer. The trainer's remark that such a feature set was not "sustainable" was true, but irrelevant. Does anyone else have an answer? What do you do with those features no one uses, but that effectively sell the product? Leave them in without maintenance? Just advertise them without properly implementing them? What do you do?

Monday, May 14, 2007

Successfully implementing Scrum

At my company they're now introducing Scrum. Traditionally most of our projects were done using a sort-of waterfall approach. But in recent years it has become harder and harder to get product releases onto the market in a reasonable amount of time using this approach, so management has been looking for a solution.

I have actually led some projects in the past where we already used an iterative development process. Some of these projects have been immensely successful, while others have failed to deliver the intended result in a reasonable amount of time. I am curious to see whether following the more standard (and better defined) Scrum framework will result in more reliable success.

When I recently watched a Google Tech Talk where Ken Schwaber explains some of Scrum, he mentioned that only about 35% of the companies that try to implement Scrum actually succeed in doing so. And while everyone, including Schwaber of course, is very positive about Scrum's advantages, I haven't seen many details about what can make it fail. And more importantly: how to avoid the dangers that might make it fail.

In other words: how do we avoid becoming part of the 65% that doesn't succeed in implementing Scrum? Does anyone know?

Tuesday, February 27, 2007

Agile is about the people

When Steve Yegge's article on the good and bad of agile development showed up in my feed reader a few months ago, I decided to pass on it. The summaries I saw sounded too much like Steve just needed to vent his frustration with all the agile hype. Which is fine by me, but doesn't make it onto my already overpopulated reading stack.

But today one of my many managers put the article on my desk, saying it was an interesting read for him. I know when not to argue, so I read it on my way home. Don't worry, I travel by public transport, so I didn't endanger anyone. Although I did draw some attention by nodding in approval and making snorting noises while reading.

Steve has some very valid points about the whole Agile Methodology hype. Luckily he also tells plenty about a place where development is done in a less than formal way very successfully: Google. We all hear the stories about why Google is such a great place to work. But Steve provides insight into why it actually pays for Google to be such a great place to work. If you're interested in that, just read Steve's article. I just want to talk a bit more about my experience with agile processes.

I've seen less formal development processes work and I've seen them fail. I'm still trying to figure out what causes success or failure. But I'm getting more and more convinced that it isn't caused by the project and it isn't caused by the process; it is caused by the people.

When agile processes worked for me, it was because the people took ownership and felt in control of their project.

The ownership wasn't imposed on them from above by a manager; they took responsibility because they felt involved in the project. Simply by letting people make their own call in most cases, they took ownership of the project and thus took responsibility for its success. Which also means they took credit for the success of the project. I've never worked with an incentive system such as Google's, but even smaller rewards work wonders here.

The people also felt in control of the project. Progress was monitored (but not planned) by the project manager. And progress was even somewhat predictable. When unexpected problems got in the way of getting something done, it was clear what to do with the problem: create a separate task/feature for it and get on with what you're working on. Sure, the list of tasks would grow at times, much to the frustration of project managers who wanted to finalize the project. But at least it was there, out in the open, and it couldn't be denied. And it was always clear for the developer what to do next: just look at the list.

But I've also seen the lack of a rigid process fail. In those cases people weren't taking ownership. And they certainly didn't feel in control of things. Like I said before, I'm still not entirely sure why this happened. But I have a feeling that the people themselves had something to do with it.

When someone isn't comfortable with the subject matter, he is not likely to take ownership. Solving this problem is actually quite easy: either he should get comfortable with the subject or he should not work on it.

The lack of a feeling of control is slowly becoming clearer to me, in large part thanks to insightful articles such as Steve's. We made people give estimates on tasks that were assigned to them. But often the people were not yet in control, so they couldn't give reliable estimates. This led to them not making their self-imposed deadlines, which many project managers like to point out in useless progress meetings. And if you hear a manager say "we're not making enough progress" week after week, you have to be pretty strong to keep your feeling of control.

That's why I like Steve's (or actually Google's) approach of not having estimates. Given enough tasks/features, the size of the list becomes enough to estimate by. If there were thirty items on the list a week ago and there are now twenty, a very simple estimate is that you'll need two more weeks to finish what's on the list. It might or might not be very accurate. But in my experience the same is true for estimates that take a lot more time to produce.
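Or, to spell out that last bit of arithmetic (using the thirty-to-twenty example above):

```python
# Sketch of the "estimate by list size" idea: the list shrank from 30 to 20
# items in one week, so a naive projection says two more weeks.
items_last_week = 30
items_now = 20

burn_rate = items_last_week - items_now   # items finished per week
print(items_now / burn_rate)              # 2.0 weeks left
```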