Agile Sarcasm Animated

I have to say that I really enjoy these sarcastic videos about misunderstood agile ideas.

I Need Agile Methodology

Another perspective on SCRUM

A Tale of Two Agilities

I want to run an agile project

Part 1

Part 2


On lean software development, using Prezi.com

I tried expressing my thoughts with Prezi. It is a presentation tool in which you have one very big canvas for your material instead of a pile of slides. I like that idea.

Overall, Prezi is a pretty nice tool, even if the user interface is a bit clumsy (unless you use only text). I have to say that creating a presentation with PowerPoint is faster and easier, or… perhaps I just tried to do something too cool and difficult.

Unfortunately, I was unable to embed the presentation here. Anyway, here is a link to it (if you have a slow internet connection, prepare to wait; the presentation is 16 MB):

Why I find lean software development so promising

Book review: Improving Software Estimates – The Slightly Wrong Answer

I’ve just read McConnell’s book “Software Estimation – Demystifying the Black Art”, published by Microsoft Press in 2004. I have mixed feelings about it. It contains a lot of good stuff: I’m especially impressed by the extensive collection of techniques for improving estimates and the (mostly) well-done background research. Unfortunately, the author makes a few assumptions that are probably wrong or unsound.

Overall rating

Content (rating 3/5, weight 30%): Many good ideas, but a few unsound assumptions.

Structure and writing style (rating 4/5, weight 20%): The structure is well thought out and clear, but fairly conventional. The author writes good, easy-to-read academic (or perhaps semi-academic) text.

Layout and editorial work (rating 3/5, weight 20%): The book contains a lot of nice, informative graphics. The layout is clear but a bit boring (like most other books with an academic flavor).

Value to business (rating 2/5, weight 30%): I disagree with a few key assumptions, so I cannot give a high rating here. However, I cannot deny that most of the techniques and conclusions are good and probably work very well.

Overall rating: 2.9

Goodies

Probably the best part of the book is the extensive collection of different estimation techniques. The author recommends using many of them side by side and underlines that not all techniques fit all projects or all phases of a project. E.g. using historical data from similar projects (chapters 8 and 11) combined with expert judgment (chapter 9) works well in the beginning, when you know very little. When the scope is a bit clearer, you might start counting individual small things (like the number of individual pages, or the connections between different kinds of systems) or estimating the impact of different technological choices. To get better accuracy, you can estimate in groups (chapter 13) or use software estimation tools (chapter 14).

The book also provides a nice set of nasty quantitative facts. My favorite is that approximately 80% of software projects fail at least partially, and 20% (of all projects) fail to deliver anything useful. The exact numbers vary. The statistics are based on extensive surveys: in 1994 the total failure rate was 30%, and less than 20% of projects were completed on schedule and within budget; in 2002 almost 30% of projects were successful and “only” 20% failed to deliver anything. (McConnell, 2004, pp. 24-25. I have not checked the source studies, but I have seen similar numbers in many other books and texts. According to Larman (2003) and Cohn (2009), agile projects succeed somewhat more often than those using a waterfall-based approach.)

Another nasty fact I like is the unavoidable error factor in estimates, i.e. the cone of uncertainty. Basically it states that in the earliest estimates the error factor you cannot beat is -75% to +400% or, in terms of magnitude, ±4x. That is, if you estimate that a piece of work takes one week, in reality it can take 1.25 days (= 5 days ÷ 4) as well as 4 weeks (1 week × 4). You can mis-estimate even more, but you cannot expect your estimates to be better than that. Before you have started the implementation, you cannot get a better error factor than ±0.1x (i.e. ±10%). McConnell claims that in order to give estimates with a ±0.1x error factor, you need a detailed design, so that you know exactly what you need to do. (See McConnell, 2004, chapter 4: Where Does Estimation Error Come From?)
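
To make the arithmetic concrete, here is a minimal sketch in Python. The error factors approximate the ones McConnell reports for the cone (2004, ch. 4); the phase names in the dictionary and the helper function are my own illustration, not the book’s.

```python
# A minimal sketch of the cone of uncertainty arithmetic above.
# The symmetric error factors approximate McConnell's cone;
# the phase names and the function are my own illustration.

CONE_ERROR_FACTOR = {
    "initial concept": 4.0,           # -75% .. +400%
    "approved product definition": 2.0,
    "requirements complete": 1.5,
    "user interface design complete": 1.25,
    "detailed design complete": 1.1,  # roughly +-10%
}

def estimate_range(nominal_days, phase):
    """Return the (best case, worst case) range for a nominal estimate."""
    factor = CONE_ERROR_FACTOR[phase]
    return nominal_days / factor, nominal_days * factor

# One week (5 working days) estimated at the very beginning:
low, high = estimate_range(5, "initial concept")
print(f"{low:.2f}-{high:.0f} days")  # 1.25-20 days, i.e. 1.25 days to 4 weeks
```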

The unavoidable, probabilistic error in all estimates has uneasy implications for offers in the software industry. See the diagram below (adapted from McConnell, 2004, p. 37; the “level of detail in an ordinary sales case” area added by me):

[Figure: the cone of uncertainty, with an area marking the level of detail in an ordinary sales case.]

In other words, in a typical sales case the error factor is somewhere between 4x and 2x. It is not uncommon to get a ten-bullet list that virtually specifies what the customer wants. Every now and then you need to give an estimate for the implementation only, and you have a “requirement specification”. Unfortunately, it is hardly ever complete, and often it does not depict what the customer really needs but rather what he may need; there are usually a lot of features that are not actually important to the customer but are listed just in case.

Thus, in my opinion, in most offers the error factor is between 4x and 2x. That means that a solution offered at 100 k€ might turn out to be a 200-400 k€ solution (or a 25-50 k€ one). Notice: I’m not talking about worst-case scenarios, just probabilistic variation. You simply cannot be more precise without compromising accuracy. In addition, McConnell presupposes that requirements are relatively stable; usually they are not. All in all, the real estimation error is potentially even bigger.

Since a 2x-4x error factor is unacceptable to most customers, the only realistic option seems to be to work out the details so that it is possible to implement the software within the constraints of the budget (and schedule). Agile and lean models suggest exactly this approach: keep the budget and the schedule, and compromise on the scope (see e.g. Rasmusson, 2010 and Cohn, 2009).

I like this fact because it clearly shows how insane it is to sell a project with a fixed price and a fixed, detailed scope (with no room to negotiate the details). It also illustrates how illusory the certainty provided by a heavy design-and-planning phase before implementation is. Even if you produce the best possible design, it is still fiction, and the estimates based on it will still have a ±0.1x error factor. Personally, I don’t think an error rate of ±0.1x is worth the enormous amount of work it requires.

Critique

In my opinion, the author makes one critical false assumption:

In order to keep the budget and schedule, you need better estimates (pp. 21, 27-28), and in order to have better estimates you need more and more detailed requirements, and the requirements must be relatively stable (p. 42; see also the way McConnell specifies the cone of uncertainty).

In short: non sequitur. You can keep the budget and schedule without that kind of better estimates.

In my opinion, McConnell hasn’t quite understood why and how an agile, iterative approach is able to keep the schedule and deliver on time without static requirements and a detailed, comprehensive specification. Accurate and precise estimates can be based on two completely different foundations: (1) detailed and stable requirements, or (2) reliable and frequent delivery of a usable (subset of the) software. McConnell completely ignores the latter foundation for good estimates.

I suppose that the root cause of this unsound assumption is the way McConnell seems to understand “better estimate”. According to McConnell, a better estimate is one that is as precise as possible while still accurate. To understand this you need to understand what he means by the “accuracy” and “precision” of an estimate. The accuracy of an estimate refers to a range inside which the real value will be: the estimate is accurate if the actualized value falls within its boundaries, and inaccurate if it falls outside the estimated range. The precision of an estimate is basically the narrowness of that range. E.g. 3 (meaning the range [2.5, 3.5[) is an accurate estimate for pi. It is less precise than, say, 3.18 (i.e. the range [3.175, 3.185[), but nevertheless more accurate, even though the latter value is closer to the correct one. [3.175, 3.185[ is an inaccurate estimate for pi, because the real value is not in the range. (See McConnell, 2004.)
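
McConnell’s distinction can be captured in a tiny sketch, using the pi example above; the helper functions and their names are mine, not the book’s.

```python
# A tiny sketch of the accuracy/precision distinction described above.
import math

def is_accurate(low, high, actual):
    """An estimate is accurate if the actual value lies in [low, high[."""
    return low <= actual < high

def width(low, high):
    """A narrower range means a more precise estimate."""
    return high - low

# "3" read as the range [2.5, 3.5[: accurate but imprecise.
print(is_accurate(2.5, 3.5, math.pi))          # True
# "3.18" read as [3.175, 3.185[: precise but inaccurate for pi.
print(is_accurate(3.175, 3.185, math.pi))      # False
print(width(2.5, 3.5) > width(3.175, 3.185))   # True: the first is less precise
```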

More precise but still accurate estimates may help you keep the budget and schedule, or they may not. Precision in estimates costs time and money, and that is something McConnell seems to forget. This is also the reason why many agile/lean models prefer imprecise estimates like small, medium and big pieces of work (see Rasmusson, 2010). Even though McConnell introduces relative estimates as one estimation technique, it is unclear how the concepts of accuracy and precision apply to them. Presumably “small” is an accurate estimate for an item if the item takes less time to get done than an average item estimated as medium. The boundaries are not clear; the question is rather “is this item more likely to be as big as this small piece of work or this medium one?”
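
As a toy illustration of that reading (my own interpretation, not the book’s or Rasmusson’s), a “small” estimate could be checked against the team’s average “medium” item:

```python
# A toy interpretation of "accuracy" for relative estimates: a "small"
# estimate counts as accurate if the item took less effort than the
# team's average "medium" item. The data here is hypothetical.

# Actual efforts, in ideal days, of past items estimated "medium".
medium_actuals = [5.0, 6.5, 4.5, 6.0]
avg_medium = sum(medium_actuals) / len(medium_actuals)  # 5.5 days

def small_was_accurate(actual_days):
    """Was a 'small' estimate accurate for an item that took actual_days?"""
    return actual_days < avg_medium

print(small_was_accurate(3.0))  # True: clearly smaller than an average medium
print(small_was_accurate(7.0))  # False: bigger than an average medium
```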

As far as I understand, accuracy and precision in agile are more about “the ability to deliver working software reliably” than “the ratio between a well-grounded guess and the actualization”. Thus, the cone of uncertainty can be drawn a bit differently:

[Figure: the cone of uncertainty redrawn for agile, where the error factor narrows with each delivered sprint rather than with the level of specification detail.]

I cannot say whether it really is so that once you have completed the first sprint your error factor is around 1.5x; this chart is just a draft that illustrates the idea. Obviously, if (1) you have repeatedly delivered approximately x units of working software within a certain timeframe and budget, and (2) you estimate that a new feature will take x units of work, then you have a good chance of delivering the new feature within the same timeframe and budget. (The importance of reliable delivery is emphasized widely in the literature; e.g. in Larman, 2003; Cohn, 2009; and Rasmusson, 2010.)
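
As a rough sketch of this second foundation, assuming made-up sprint velocities and function names of my own, an estimate grounded in demonstrated delivery might look like this:

```python
# A rough sketch of estimation grounded in demonstrated delivery:
# confidence comes from what the team has reliably shipped, not from
# detailed specs. The numbers here are illustrative, not from the book.

from statistics import mean, pstdev

# Story points actually delivered in past sprints (hypothetical data).
past_velocities = [21, 18, 23, 20, 19]

def forecast_sprints(feature_points):
    """Optimistic/pessimistic sprint count based on observed velocity."""
    v_mean = mean(past_velocities)
    v_dev = pstdev(past_velocities)
    optimistic = feature_points / (v_mean + v_dev)
    pessimistic = feature_points / (v_mean - v_dev)
    return optimistic, pessimistic

low, high = forecast_sprints(60)
print(f"{low:.1f}-{high:.1f} sprints")  # a narrow range, earned by delivery
```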

Whereas in McConnell’s model you need a certain amount of detail to give a better estimate, in the agile models you need a proven ability to deliver a certain amount of (estimated) work within a given budget and schedule.

I’m not saying that McConnell’s ideas and methods won’t work. I’m just saying that they are incomplete in a way that leads McConnell to make a few false assumptions. Those unsound assumptions lure the reader into thinking that we need more and more detailed specs in order to keep the budget and schedule. That is not true.

We need both a proper amount of detail (and estimates based on it) and a proven ability to deliver working software frequently. In my opinion, we should never compromise the latter (the proven ability for reliable delivery) for the former (the details needed for precise estimates). Focusing too much on well-grounded guesses endangers your ability to deliver reliably.

You should always trust the hard, undeniable facts given by frequent deliveries more than comprehensive, well-grounded, but still more or less fictional plans.

References

Cohn, Mike (2009): Succeeding with Agile – Software Development Using Scrum.

Larman, Craig (2003): Agile and Iterative Development – A Manager’s Guide.

McConnell, Steve (2004): Software Estimation – Demystifying the Black Art.

Rasmusson, Jonathan (2010): The Agile Samurai – How Agile Masters Deliver Great Software.