Non-Violent Communication and Its Nietzschean Foundation

I truly love Marshall B. Rosenberg’s Non-Violent Communication, not because of its apparent but deceptive, hippy “seek-compassion-and-love-everyone” tone, but because of its solid Nietzschean foundation.

Yes, the core of the techniques and practices of Non-Violent Communication lies in compassion and empathy. Nietzsche does not emphasize compassion and empathy; quite the contrary. Then again, the philosophy behind the praxis is similar: Rosenberg underlines the same underlying principles that are essential for Nietzsche as well: (i) intellectual honesty, (ii) complete but not blind affirmation of your needs and urges, and (iii) a vitalistic emphasis on positive, life-affirming thinking over negative, afterlife-centric thinking.

Non-Violent Communication (NVC) in a nutshell

To condense: the core idea of Non-Violent Communication is to reflect a bit longer than usual on what you see, feel and need, and why, before responding at all:

The ordinary way to think and communicate is based on a fast loop: the synthesis of observations and needs composes a feeling or feelings, and the feelings trigger an immediate reaction. A feeling emerges from a situation, but we do not know exactly how and why. Therefore our capacity for emotional control is not only impaired but also arbitrary and random.

For instance, a teammate says something wrong to you, and suddenly you feel angry and react immediately accordingly. Oftentimes you don’t even identify that you actually feel angry before you say “f*** you!” or do something similar. The loop from impression to reaction is often incredibly fast. “F*** you!” is a reaction to an impression that our teammate said something wrong. It is not a reaction to what actually happened or to what the other person actually intended to do.

According to NVC, it’s by default unclear “what my feelings are”, “what my needs are”, and “what is the actual situation I’m a part of”. They are all an inseparable and uncontrollable part of the impression of a situation. Sometimes we instinctively reflect on the situation, see things clearly and control ourselves; sometimes we just lose our temper. The chain of thought behind this is usually poorly seen. Thus, our behavior is more or less random.

NVC makes a small change to this mechanism: just like above, the observed situation produces feeling(s) as part of an impression, and the impression relates subconsciously to my needs. After this I should reflect on how needs, feelings and observations are related: What do I actually feel, apart from the impression (in image 1, NVC 1)? Why do I feel the way I feel, and how are the feelings produced from my needs (in image 1, NVC 3)? How does the observed situation activate the needs behind the feelings (in image 1, NVC 2)? This kind of reflection will extend and change the impression by making the related observations, needs and feelings clearer, more honest and more separate.

The key question in NVC is “do I control the impact the other’s messages and reactions cause in me?” (see image 2 below). Unlike in rationalism, control here does not mean devaluation of emotions and detachment from them. Rather, control means emotional honesty (i.e. what do I truly feel and need?) and emotional clarity (i.e. what really happened, and which parts of my impression are colored by the lenses of my needs and personality?). NVC does not propose emotional control in the sense that the rational mind should control the emotions! In NVC the point is that the emotional mind should not bang its head against a concrete wall but rather use doors and evade obstacles, not because it’s rational but because banging your head against a concrete wall is prima facie stupid; and actually, the rational mind is needed to make such a headache seem desirable.

Nietzsche and NVC

It is ironic that Nietzsche’s philosophy of power seems to be rather far from Rosenberg’s thinking, since their philosophical roots seem more or less the same.

I have a theory: the apparent difference is caused by the fact that Rosenberg focuses on feelings while Nietzsche focuses on values. In Nietzsche’s time, feelings were seen as second-class citizens compared to values. In the time of postmodern pluralism, the situation is almost the opposite: values are often considered mere opinions, while feelings, needs and observations are real neurophysiological facts. However, if you put NVC and Nietzsche’s ideas into the same puzzle, the pieces fit together seamlessly. Both clearly discuss a healthy and honest relation between consciousness, the surrounding world and one’s needs:

Example: Blame

The connection between NVC and Nietzsche becomes obvious as Rosenberg underlines (i) in chapter five the importance of accepting one’s own feelings and taking responsibility for them, and (ii) in chapter six the importance of clarity of thought.

When someone gives us a negative message we have four options: (1) blame ourselves, (2) blame others, (3) sense our own feelings and needs, and (4) sense the other’s feelings and needs. Nietzsche uses different terms for blaming ourselves: on the one hand, “blaming ourselves” is bad conscience, and on the other hand it is life-averse asceticism. Blaming others is resentment. Instead of attempting to face the other person, resentment re-frames the other as an evil or otherwise bad person. For Nietzsche, options 3 and 4 are something that happens beyond good and evil and are examples of what Nietzsche calls the revaluation of all values.

While Rosenberg claims that “vague use of language contributes to internal confusion”, Nietzsche seems to say exactly the same of values: vague, transcendental, higher values (like good, evil, wrong, justice, truthfulness, freedom, etc.) contribute to internal confusion.

In the later part of Rosenberg’s book, the connection gets stronger and stronger. In chapter nine Rosenberg criticizes self-judgment: “These speakers [who condemn themselves with statements like ‘That was dumb!’, ‘How could you do such a stupid thing?’, ‘That was selfish!’, etc.] had been taught to judge themselves in ways that imply that they did wrong or bad; their self-admonishment implicitly assumes that they deserve to suffer for what they’ve done. It is tragic that so many of us get enmeshed in self-hatred rather than benefit from our mistakes, which show us our limitations and guide us toward growth.”

Wow! That is exactly Nietzsche’s point: don’t hate yourself because of the values you’ve been given as a poisonous dowry. Rather, face everything as an opportunity to live the worthiest possible life.

Summing up

For the sake of brevity I won’t justify my claim more extensively. Instead, I recommend reading Nietzsche’s “On the Genealogy of Morals” or Gilles Deleuze’s excellent Nietzsche interpretation “Nietzsche and Philosophy” in order to understand the (incidentally) radical philosophical foundation of NVC more deeply.

Nietzsche writes from an eagle’s perspective, while Rosenberg’s perspective is that of a lion. They are both lonely, noble beasts rather than evangelists of consensus: honorable, open confrontation is more worthy than fearful avoidance of conflict and resentful submission. They both underline intellectual honesty and life-affirming vitalism. The only real difference is that Nietzsche is more likely to leave conflicts unresolved, while Rosenberg is more likely to leave the values behind the needs unarticulated.


The Lean Startup and Schopenhauer’s Pessimism


There is a lot in common in Eric Ries’ and Arthur Schopenhauer’s thinking. The first became rich by creating a virtual reality chat service (IMVU) and has written a bestseller on running successful startups (Ries, 2011, The Lean Startup). The second was a classical philosopher who lived in the 19th century, abandoned a merchant career in his father’s successful company and became famous for his pessimism. At first sight, they appear to be very different kinds of thinkers. However, there is surprisingly much in common between them, and a crucial difference.

Ries is puzzled by the question of why so many startups fail even if they have a marvelous strategy and clever people running them, and even if the customers have said that this is exactly what they want.

Schopenhauer struggles with a more general question: why is the world filled with frustration and pain, even when we are able to get what we want? Essentially, they both seem to reflect the very same phenomenon: satisfaction as the distance between desire and actualization.

Clever and capable people fail to get what they really want because they are deluded into thinking the wrong things worthy. So to speak, once they realize that the big red apple is rotten inside, it is often too late. Schopenhauer talks of self-satisfaction; Ries focuses on customer satisfaction. Yes, different domains, but still the same mechanism.

Impurity of goals/Will

For Schopenhauer, the foundation of frustration and pain is the impurity of our Will. There is a way to achieve a more tranquil state of consciousness: aesthetic perception. In this form of perception we lose ourselves in the object, forget about our individuality, and become the clear mirror of the object. (By the way, if you narrow the definition by replacing “object” with “task”, there is an obvious resemblance to flow experiences.)

For Ries, the foundation of customer dissatisfaction is the impurity of the goals. There is a way to achieve a more profound and clear conception of goals and needs: the minimum viable product combined with the quick feedback loop allowed by a short cycle time. The minimum viable product consists of only the features absolutely necessary to achieve the “pure” goal.

Schopenhauer studies how to make ourselves more apt for the joyful and tranquil flow by delimiting disturbing, unnecessary desires. Ries tries to develop principles to make the object-in-itself (the product or service as something purely useful or otherwise delightful) more desirable for the correct reasons and with the correct consequences.

Schopenhauer praises artistic geniuses and sees their work as salvation, musical geniuses especially. Ries praises entrepreneurial geniuses and sees their insight as the best possible way to make the world better, not only in business but in every area of life where people “create new products and services under extreme uncertainty”. In short, for both of them the most worthy job is to create new clarity and conceptual unity out of the chaotic world. They have a similar existentialist stance even if there is a difference in emphasis: Ries focuses on materialized forms of clarity and unity while Schopenhauer looks for more transcendental ones.

Analysis paralysis and distrust in rationalism

They both struggle with analysis paralysis and with excessive trust in rationalism. For Schopenhauer, the world is primarily a product of the Will and only secondarily seen in more rational terms. Ries expresses his arationalistic approach as follows:

“There are two ever-present dangers when entrepreneurs conduct market research and talk to customers. Followers of the just-do-it school of entrepreneurship are impatient to get started and don’t spend time analyzing their strategy. They’d rather start building immediately, often after just a few cursory customer conversations. Unfortunately, because customers don’t really know what they want, it’s easy for these entrepreneurs to delude themselves that they are on the right path.

Other entrepreneurs can fall victim to analysis paralysis, endlessly refining their plans. In this case talking to customers, reading research reports, and whiteboard strategizing are all equally unhelpful. The problem with most entrepreneurs’ plans is generally not that they don’t follow sound strategic principles but that the facts upon which they are based are wrong. […H]ow do entrepreneurs know when to stop analyzing and start building? The answer is a concept called the minimum viable product.” (Ries, 2011, pp. 90-91.)

In Ries’ thinking, the wrong facts seem to have a similar root cause as the frustration and pain have in Schopenhauer’s philosophy: they are consequences of our individual whims and biases rather than of the Platonic object-in-itself. Schopenhauer and Ries are both (transcendental) idealists in this respect: they both seem to think that somewhere out there is the pure essence of things (the idea of a thing), even if the essence is never fully apparent to us.

Scientific method vs. asceticism

The crucial difference is how they put their insight into practice. Schopenhauer chooses esoteric asceticism and the denial of the will-to-live. Ries chooses the scientific method:

“Despite the volumes written on business strategy, the key attributes of business leaders, and ways to identify the next big thing, innovators still struggle to bring their ideas to life. This was the frustration that led us to try a radical new approach at IMVU [Ries’ first truly successful startup], one characterized by an extremely fast cycle time, a focus on what customers want (without asking them), and a scientific approach to decision making.” (Ries, 2011, pp. 4-5.)

I’d like to underline here “what customers want (without asking them)”. The assumption behind this is similar to an insight behind Schopenhauer’s pessimism: what customers say is desirable is unlikely to be exactly the same thing as what makes them happy and willing to use the product. The second assumption is that the scientific method is the most workable (known) way to get past this obfuscation of introspection.

Don’t get this wrong! Ries never suggests that we shouldn’t listen to customers. Quite the opposite: we should observe how they really behave, think and feel in more general terms. We should listen to more than their words, and not only listen but also extrapolate. Talking is just one form of behavior, and it conveys some information; there are others that sometimes work better. For example, are customers in fact willing to pay for your service or product, and how do they actually use it? Are they more willing for us to make change A rather than change B?
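As a crude illustration (my own, not from the book), here is a minimal Python sketch of listening to behavior instead of words: compare what customers actually do under two changes rather than asking which one they say they prefer. All numbers are invented.

```python
# A made-up sketch: measure actual paying behavior under change A vs. change B.

shown_a, paid_a = 1000, 52   # users who saw change A, and how many actually paid
shown_b, paid_b = 1000, 31   # users who saw change B, and how many actually paid

rate_a = paid_a / shown_a
rate_b = paid_b / shown_b

print(f"Change A: {rate_a:.1%} paid")
print(f"Change B: {rate_b:.1%} paid")
print("Behavior favors A" if rate_a > rate_b else "Behavior favors B")
```

The point is not the arithmetic but the source of the data: what people did, not what they said.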

Actionable metrics vs. vanity metrics

[Be aware that I will next slightly strengthen Ries’ argument. Even though I truly like Ries’ The Lean Startup, it does not follow its own arguments to the end. I will strongly emphasize here the value of falsifiability, while Ries emphasizes more the value of quick validation of hypotheses.]

No matter what indicators and metrics you use, the main objective should be to put your hypotheses to the proof by restlessly attempting to falsify them.

The attempt to falsify hypotheses is a crucial difference from ordinary (product/service) design and strategic planning (and other similar domains of knowledge). Usually, neither design nor strategic planning truly attempts to falsify the vision as part of the process. They are rather done in order to find a vision and a solid strategy for proceeding toward success in the marketplace.

By underlining the importance of the quick validation of hypotheses, Ries ends up with a rather new way to define metrics for progress. Metrics should not be defined against goals but against hypotheses. That is, you should not define a growth target but rather a metric that indicates whether your hypotheses about the growth mechanisms in use are correct. If you define all the metrics against a set of goals, you cannot deduce whether the goals were correct or whether they should be adjusted. In addition, if the metrics look bad, you don’t know how to react. If an indicator shows that one of your hypotheses was wrong, you know immediately what to do: adjust the hypothesis, and the strategy accordingly.

Thus, the purpose of metrics is not to validate whether we are going toward the goal but to alert us when we need to change something. This is what Ries means by saying that metrics should be actionable. The opposite of an actionable metric is a vanity metric. Vanity metrics are dangerous, as they may affirm false assumptions and create a feeling of false security.

Example

Consider the following example (which could be from my work): a new intranet was launched at the beginning of the year. Management believes that the launch was a big success; employees find the intranet annoying and feel that they waste a lot of their time getting things done in it. There were two key metrics management followed closely: the ratio between active and passive users, and the number of visits to the front page. In Q1 the ratio between active and passive users was 0.5; in Q2 and Q3 it was 0.4 and 0.7. In Q1 there were 10 000 unique visits to the intranet front page, followed by 9 000 and 14 000 in Q2 and Q3. Was there a problem?

In Q2 things looked bad; something needed to be done to overcome the resistance. Thus, between Q2 and Q3, IT closed the file shares, and now there is no other way to get things done but to use the intranet. As a consequence the change eventually happened, and in Q3 the numbers looked great, didn’t they? Uh, not quite. Obviously, there is no good reason to think that after people learned to use the intranet they eventually adopted it and now find it highly useful. But the numbers look good. The initial exuberance was followed by chaos and disillusionment and then, eventually, adaptation. Wasn’t this just like any other change?

In this case, however, people probably use the intranet just because they have no alternative anymore; they use it more just because they cannot use it less. This was not quite the goal, but according to the metrics everything looks good. Management didn’t truly attempt to falsify their assumption that this is a beneficial change for the workers. Actually, they didn’t even explicitly postulate that assumption; perhaps it was too obvious. Thus, the need for a change of direction is not apparent, and now there is a waste generator in the middle of communication and work. It would have been more relevant to measure whether workers actually get things done faster or better.
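To make the difference concrete, here is a minimal Python sketch of my own (not Ries’ method) of how the previously implicit assumption could be framed as a falsifiable hypothesis with an actionable metric, in contrast to the vanity metrics above. The task-time numbers are hypothetical.

```python
# Vanity metrics: they grew, but once the file shares were closed they cannot
# falsify anything, because people simply have no alternative.
front_page_visits = {"Q1": 10_000, "Q2": 9_000, "Q3": 14_000}
active_passive_ratio = {"Q1": 0.5, "Q2": 0.4, "Q3": 0.7}

# Actionable metric: median minutes to complete a routine task (e.g. find a
# policy document, submit a travel claim), measured before and after launch.
median_task_minutes_before = 6.0   # hypothetical
median_task_minutes_after = 9.5    # hypothetical

# Hypothesis: the intranet makes routine work at least 20% faster.
hypothesis_holds = median_task_minutes_after <= 0.8 * median_task_minutes_before

if hypothesis_holds:
    print("Hypothesis survives for now: keep the current direction.")
else:
    print("Hypothesis falsified: change direction (the intranet slows work down).")
```

The value of the actionable metric is that a bad number tells you exactly which assumption to adjust.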

Summary

Sometimes listening carefully to what customers say is enough. That is rarely the situation when developing new products and services. Observing how they in fact behave, feel and think is far more important. In order to do that properly you need a good hypothesis and serious attempts to falsify your theory. If your hypothesis was false (which is rather common), it’s time to pivot and take another direction with a different hypothesis of success. That is the beef of Ries’ “The Lean Startup” – at least for me. Overall estimate: 4/5.

As for Schopenhauer, I’d like to add the scientific method to his pessimism. Without the scientific method, Schopenhauer’s pessimism is far too naïve and distressing for my taste. Here it is in a nutshell: (1) Create a hypothesis about your desires. (2) Experiment. (3) Spy on your behavior, feelings and thinking in order to falsify the original theory. Finally, (4) correct the original hypothesis if needed. Repeat. It’s not that important what you originally thought desirable. Rather, you should keep experimenting with your desires so that they match better with your behavior, feelings and thoughts.

Book review: Continuous integration in .NET

I’ve been too lazy to write anything here. I’ll try to write more from now on.

I’ve been reading quite a few books lately, and I have even completed some of them. Thus I’ve set myself a goal: to write a review of every software-related book I’ve read (or almost read) during the last year. Here is the first review:


Marcin Kawalerowicz and Craig Berntson (2011): Continuous integration in .NET


Overall estimate: 4/5. A good overview of the topic, but it doesn’t dive deep into the background and the theory.

The authors start and end by summarizing the idea of continuous integration. They don’t go deep into the theory or the motivation for doing continuous integration. That’s okay, as the main goal of the book seems to be to give a good overview of how to implement continuous integration in .NET, and the authors succeed in that. The book introduces tens of different tools. Many of them I knew and had used already, but there were a few new ones, and the book also deepened my knowledge of some of the tools I knew.

Since the overview of the various tools is the best part of the book, here is a listing of tools and areas of continuous integration with my short personal comments. (If the concept of ‘continuous integration’ (abbr. CI) is new to you, please check the basics on Wikipedia. I chose to be lazy and not explain them.)

(1) Source code repository. The authors mostly discuss SVN and TFVC (Team Foundation Server version control). Before the chapter on source control tools, they point out the significance of properly organized source code and tooling for CI. They mention distributed version control systems (DVCS) like Git but find DVCS a bit more difficult from the CI point of view.

Personally, I’m really glad that Git is now fully supported in Visual Studio and TFS. (Or actually, not quite yet: you have to install the Visual Studio 2012 Update 2 CTP (Community Technology Preview). So you still need to wait for the RTM version if you don’t want to use a beta. In addition, Git is not yet available for on-premises installations of TFS.)

I see distributed version control systems as a big opportunity, e.g. in the following cases: (1) simultaneous, geographically distributed maintenance of released code and development of new features, and (2) multi-vendor environments. Of course, you have to make design decisions you need not make with a centralized system, and, yes, DVCS is a slightly more difficult concept than a centralized version control system. Great power comes with great responsibility.

(2) Build process. According to the authors MSBuild is the superior tool; however, they also briefly discuss NAnt. I have never used NAnt, but it’s easy to believe that MSBuild is better. By the way, I was not aware of the MSBuild Community Tasks project. For example, a build task that zips certain files as part of the build sounds good.

(3) Build server and build agents. The book nicely compares Team Foundation Server, CruiseControl.NET and TeamCity. I’ve used CC.NET in one project and TFS in several. However, TeamCity is definitely something I need to study more. The main reason is the following: if not everyone using the build servers and version control has an MSDN Professional/Premium/Ultimate subscription, TFS is rather expensive (as far as I know).

Also, the tool you need to use to modify the TFS build process is, well, not that good. The people who designed the TFS build workflow user interface should check out how neatly SQL Server Integration Services (SSIS) visualizes workflows. Yes, I know that TFS uses Windows Workflow Foundation, but that’s no excuse: if the workflow is poorly visualized and the designer is difficult to use, then the workflow is poorly visualized and the designer difficult to use; the workflow framework makes it no better. SSIS is a good example that it need not be so.

(4) Test automation. The following tools were discussed: NUnit, MSTest, White, Selenium, FitNesse, and the Visual Studio built-in UI test framework. Unfortunately, the biggest pain I currently have with unit testing relates to faking framework APIs with an acceptable amount of work.

(5) Static code analysis. The tools: FxCop, StyleCop, NDepend and the TeamCity code analysis tools.

As a side note: learning is the main challenge in the software industry. The tools listed here may, and to some extent will, accelerate learning, but automating code analysis and relying only on it is not enough. The code must make sense within the problem domain, and no tool can ensure that. The danger I see here is more or less the same as with “best practices”: if you don’t understand the problem, focus on it, and keep it simple, formally perfect code is worth nothing. “Best practices” are not the goal, nor are they the starting point; they should be seen as an initial but potentially obsolete baseline and never anything more. Thus, always seek something better and simpler. The same applies to these tools. Probably no one claims that automated static code analysis is enough. No one claims that these tools are bad either. They are good as long as you understand what they are able to do and what they are not. I just wanted to underline the limits of automation in this area.

(6) Documentation generation. Just one tool was covered: Sandcastle.

Personally, I’m slightly skeptical about the usefulness of XML documentation, since you easily end up having more comments than code (or comments that add no value), and thus code readability suffers. I’m waiting for a great, out-of-the-box innovation in the area of code documentation. If anyone happens to know a Visual Studio plugin that automatically hides XML documentation but lets me easily check and write it when needed, please let me know.

Without a doubt, code needs to be well documented. However, XML documentation is a good solution only if you do not need to write, read and maintain the code yourself. Otherwise it is too verbose and distracting. The smallest possible improvement would be to change XML documentation so that it could also be placed at the beginning or the end of a method body, as in PowerShell v2. The goal: once you collapse the method, the XML documentation would be hidden as well.

Currently, I’m seeking the answer in BDD (or something alike): you first write a description and examples for your APIs in a more or less behavior-driven way. The API’s behavior description and the examples compile into the abstraction layers (abstract base classes and interfaces, plus things like enums and constants) and test containers. Then you write the tests and the code (side by side). In theory, F#’s custom type providers could do that job, but that’s a completely different story.

(7) Deploying code. In the book, Microsoft Installer (incl. WiX and ClickOnce) and MSDeploy were covered. I definitely need to learn to use MSDeploy better than I currently do.

(8) Continuous integration with databases. The authors start with RoundhousE and end up praising the Visual Studio built-in database tools.

I have used some of Visual Studio’s database tools only in small-scale demos, and they seem to be good. However, they won’t make databases open for extension and closed for modification (strictly speaking, you cannot apply the principles of object-oriented programming to databases, but you get the idea). So the root cause that makes SQL database CI difficult and laborious remains.

Anyway, if I needed to develop software with an extensive custom database, the Visual Studio built-in tools would be my initial choice.

Critique

As I mentioned, the authors don’t go deep. That is, if you are expecting solid arguments you can use to justify build automation to a customer or to management, choose another book. Unfortunately, you won’t find proper references to literature discussing the subject more deeply either. This book is a good starter and overview of the subject, that’s all. Actually, let me correct myself a bit: it is also a well-written and well-edited overview and starter.

Book review: Improving Software Estimates – The Slightly Wrong Answer

I’ve just read McConnell’s book “Software Estimation – Demystifying the Black Art”, published by Microsoft Press in 2004. I have dissonant feelings toward it. It contains a lot of good stuff: I’m especially impressed by the extensive collection of techniques for improving estimates and the (mostly) well-made background study. Unfortunately, the author makes a few assumptions that are probably wrong or unsound.

Overall rating

Criterion | Rating (1-5) | Weight | Comment
Content | 3 | 30 | Many good ideas, but a few unsound assumptions.
Structure and writing style | 4 | 20 | The structure is well thought out and clear, but pretty conventional. The author writes good, easy-to-read academic (or perhaps semi-academic) text.
Layout and editorial work | 3 | 20 | The book contains a lot of nice, informative graphics. The layout is clear but a bit boring (like most other books with an academic flavor).
Value to business | 2 | 30 | I disagree with a few key assumptions, so I cannot give a high rating here. However, I cannot deny that most of the techniques and conclusions are good and probably work very well.
Overall rating | 2.9 | |
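(The overall rating appears to be the weighted average of the rows, with the weights read as percentages: 0.30 × 3 + 0.20 × 4 + 0.20 × 3 + 0.30 × 2 = 2.9.)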

Goodies

Probably the best part of the book is the extensive collection of different estimation techniques. The author recommends using many of them side by side and underlines that not all techniques fit all projects or all phases of a project. For example, using historical data from similar projects (chapters 8 and 11) combined with expert judgment (chapter 9) works well in the beginning, when you know very little. When the scope is a bit clearer you might start counting individual small things (like the number of individual pages or connections between different kinds of systems) or estimating the impact of different technological choices. In order to get better accuracy you can estimate in groups (chapter 13) or use software estimation tools (chapter 14).

The book also provides a nice set of nasty quantitative facts. My favorite is that approximately 80% of software projects fail at least partially, and some 20% (of all projects) fail to deliver anything useful. The exact numbers vary. The statistics are based on extensive surveys: in 1994 the total failure rate was about 30%, and less than 20% of projects were completed on schedule and within budget; in 2002 almost 30% of projects were successful and “only” about 20% failed to deliver anything. (McConnell, 2004, pp. 24-25; I have not checked the source studies, but I have seen similar numbers in many other books and texts. According to Larman (2003) and Cohn (2009), agile projects succeed a bit more often than those using a waterfall-based approach.)

Another nasty fact I like is the unavoidable error factor in estimates, i.e. the cone of uncertainty. Basically it states that in the earliest estimates the error factor you cannot beat is -75% to +400%, or, in terms of magnitude, ±4x. That is, if you estimate that a piece of work takes one week, it can in reality take 1.25 days (= 5 days ÷ 4) as well as 4 weeks (1 week × 4). You can mis-estimate even more, but you cannot expect your estimates to be better than that. Before you have started implementation you cannot get a better error factor than ±0.1x (i.e. ±10%). McConnell claims that in order to give estimates with a ±0.1x error factor, you need a detailed design so that you know exactly what you need to do. (See McConnell, 2004, chapter 4: Where Does Estimation Error Come From?)
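As a tiny worked example of the same arithmetic (my own sketch, not McConnell’s notation): an estimate E with an error factor f spans the range [E ÷ f, E × f].

```python
# My own illustration of the error-factor arithmetic quoted above.

def estimate_range(estimate, factor):
    """Return the (low, high) range implied by a symmetric error factor."""
    return estimate / factor, estimate * factor

low, high = estimate_range(5, 4)   # 5 working days with a 4x error factor
print(f"1 week estimated -> between {low} and {high} working days (i.e. about 4 weeks)")
```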

The unavoidable probabilistic error in all estimates has uneasy implications for offers in the software industry. See the diagram below (adapted from McConnell, 2004, p. 37; the “level of detail in an ordinary sales case” area added by me):

[Figure: the cone of uncertainty, with the level of detail of an ordinary sales case marked on it]

That is, in a typical sales case the error factor is between 2x and 4x. It is not uncommon to get a ten-bullet list that supposedly specifies what the customer wants. Every now and then you need to give an estimate for the implementation only, and you have a “requirement specification”. Unfortunately, it is hardly ever complete, and often it does not depict what the customer really needs but rather what he may need; there are usually a lot of features that are actually not important to the customer but are still listed just in case.

Thus, in my opinion, in most offers the error factor is between 2x and 4x. That means that a solution offered at 100 k€ might turn out to be a 200-400 k€ solution (or a 25-50 k€ one). Notice: I’m not talking about worst-case scenarios but just probabilistic variation. You simply cannot be more precise without compromising accuracy. In addition, McConnell presupposes that requirements are relatively stable. Usually they are not. All in all, the real estimation error is potentially even bigger.

Since an error factor of 2x-4x is unacceptable for most customers, the only realistic option seems to be to keep the details negotiable so that it is possible to implement the software within the constraints of the budget (and schedule). Agile and lean models suggest exactly this approach: keep the budget and the schedule, compromise on the scope (see e.g. Rasmusson, 2010 and Cohn, 2009).

I like this fact because it clearly shows how insane it is to sell a project with a fixed price and a fixed, detailed scope (with no room for negotiating the details). It also illustrates how illusory the certainty provided by a heavy design and planning process before implementation is. Even if you do the best possible design, it is still fiction, and the estimates based on it will still have a ±0.1x error factor. Personally, I don’t think that an error factor of ±0.1x is worth the enormous amount of work you need to do to get it.

Critique

In my opinion, the author makes one critical false assumption:

In order to keep to the budget and schedule better you need better estimates (pp. 21, 27-28), and in order to have better estimates you need more and more detailed requirements, and the requirements must be relatively stable (p. 42; see also the way McConnell specifies the cone of uncertainty).

In short: non sequitur. You can keep to the budget and schedule without having that kind of better estimates.

In my opinion, McConnell hasn’t quite understood why and how an agile and iterative approach is able to keep the schedule and deliver on time without static requirements and a detailed, comprehensive specification. Accurate and precise estimates can be based on two completely different foundations: (1) detailed and stable requirements, or (2) reliable and frequent delivery of a usable (subset of the) software. McConnell completely ignores the latter foundation for good estimates.

I suppose that the root reason for this unsound assumption relates to the way McConnell seems to understand a “better estimate”. For McConnell, a better estimate is an estimate that is as precise as possible while still being accurate. In order to understand this you need to understand what he means by the “accuracy” and “precision” of an estimate. The accuracy of an estimate refers to a range inside which the real value will fall: the estimate is accurate if the actualized value is within its boundaries, and inaccurate if it falls outside the estimated range. The precision of an estimate is basically the narrowness of the range. For example, 3 (meaning the range [2.5, 3.5[) is an accurate estimate for pi. It is less precise than, say, 3.18 (i.e. the range [3.175, 3.185[), but nevertheless more accurate, even though the latter value is closer to the correct value: [3.175, 3.185[ is an inaccurate estimate for pi, because the real value is not in the range. (See McConnell, 2004.)
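To make the distinction concrete, here is a minimal Python sketch of my own (the function names are mine, not McConnell’s) that treats an estimate as a range and checks the pi example above:

```python
import math

def is_accurate(low, high, actual):
    """An estimate range is accurate if the actual value falls inside it."""
    return low <= actual < high

def precision(low, high):
    """Precision here is simply the narrowness (width) of the range."""
    return high - low

# "3" as an estimate for pi, read as the range [2.5, 3.5[
print(is_accurate(2.5, 3.5, math.pi))      # True  -> accurate, but wide (imprecise)
print(precision(2.5, 3.5))                 # 1.0

# "3.18" as an estimate for pi, read as the range [3.175, 3.185[
print(is_accurate(3.175, 3.185, math.pi))  # False -> precise, but inaccurate
print(precision(3.175, 3.185))             # 0.01 (approximately)
```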

More precise but still accurate estimates may help you keep to the budget and schedule, or they may not. Precision of estimates costs time and money, and that is something McConnell seems to forget. This is also the reason why many agile/lean models prefer imprecise estimates like small, medium and big (see Rasmusson, 2010). Even though McConnell introduces relative estimates as one estimation technique, it is unclear how you can apply the concepts of accuracy and precision to them. Probably, “small” is an accurate estimate for an item if it takes less time to get it done than an average item estimated as medium. The boundaries are not clear; the question is rather “is this item more likely to be as big as this small work or this medium work?”

As far as I understand, accuracy and precision in agile are rather about “the ability to deliver working software reliably” than about “the ratio between a well-grounded guess and the actualization”. Thus, the cone of uncertainty can be drawn a bit differently:

[Figure: a redrawn cone of uncertainty in which the error factor narrows with each reliable delivery (sprint)]

I cannot say whether it really is so that once you have done the first sprint your error factor is around 1.5x; the chart is just a draft that illustrates the idea. Obviously, (1) if you have repeatedly delivered approximately x units of working software within a certain timeframe and budget, and (2) you estimate that a new feature will take x units of work, you have a good chance of delivering the new feature within the same timeframe and budget. (The importance of reliable delivery is emphasized widely in the literature, e.g. in Larman, 2003; Cohn, 2009; and Rasmusson, 2010.)
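As a small illustration of foundation (2), here is a hedged Python sketch of forecasting from demonstrated delivery rather than from up-front detail. All numbers are invented.

```python
# Forecast from the demonstrated delivery rate (velocity) of past sprints.

completed_per_sprint = [21, 18, 24, 20]   # units of work actually delivered in past sprints
remaining_work = 60                       # estimated units of work still in the backlog

velocity = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_needed = remaining_work / velocity

print(f"Average velocity: {velocity:.1f} units per sprint")
print(f"Forecast: about {sprints_needed:.1f} sprints to deliver the remaining work")
```

The estimate gets better not because the requirements are more detailed, but because the delivery rate is demonstrated repeatedly.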

Whereas in McConnell’s model you need a certain amount of detail to give a better estimate, in the agile models you need a proven ability to deliver a certain amount of (estimated) work within a given budget and schedule.

I’m not saying that McConnell’s ideas and methods won’t work. I’m just saying that they are incomplete in a way that leads McConnell to make a few false assumptions. Those unsound assumptions lure the reader into thinking that we need more and more detailed specs in order to keep to the budget and schedule. That is not true.

We need both a proper amount of detail (and estimates based on it) and a proven ability to deliver working software frequently. In my opinion, we should never compromise the latter (a proven ability for reliable delivery) for the former (the details needed for precise estimates). Focusing too much on well-grounded guesses endangers your ability to deliver reliably.

You should always trust the hard, undeniable facts given by frequent deliveries more than the comprehensive and well-grounded, but still more or less fiction-based, plans.

References

Cohn, Mike (2009): Succeeding with Agile – Software Development Using Scrum.

Larman, Craig (2003): Agile and Iterative Development – A Manager’s Guide.

McConnell, Steve (2004): Software Estimation – Demystifying the Black Art.

Rasmusson, Jonathan (2010): The Agile Samurai – How Agile Masters Deliver Great Software.