So, there are reports that in France there is outrage amongst (right-wing, conservative) commentators that the current government of President Hollande (a socialist) has reconfirmed the orthographic changes proposed originally in 1990 and agreed by the government of President Chirac (right-wing, conservative), which—amongst other things—delete the circumflex from words where it makes no difference.
Some of these reports seem to follow the Wikipedia article on the circumflex in mentioning that in English, apart from loan words, the circumflex is not used today but once was: in the days when posting a letter was priced by weight, an ô was used to abbreviate ough, as in “thô” for “though”. This seems like a fine convention, and one that I intend to adopt in tweets and instant messages. Now that we can pretty much assume that both ends of any messaging-app conversation will have good Unicode support, we can do a range of interesting things.
For example, althô you can put newlines in tweets† it seems as if many messaging apps are designed on the assumption that no-one using them ever has two consecutive thoughts, and so interpret a [RETURN] as send. I’ve started using ¶ in messages. I wish it could be typed on an iPhone soft keyboard; for some reason § can be, which I think is no more obscure. Anyway, the pilcrow can be copied and pasted, as can ‘∀’ to mean “all” & ‘∃’ to mean “there’s a” or similar. I’d like to use ‘¬’ for “not” but that might be a step too far, althô I do see a lot of “!=” and “=/=” type of thing in my twitter stream. I also tend to use pairs of unspaced em-dashes for parenthetical remarks—like this—which saves two characters in a tweet vs. using actual parens (like this). The ellipsis comes in very handy in several ways… ¶ Over time I’m getting more relaxed about using ‘&’, which of course has a particularly long heritage, although not so long as is sometimes thôt.¶ What other punctuation can we revive, re-purpose or re-use?
Update: how do we feel about ‘þ’ or ‘ð’, both easily available from the Icelandic keyboard, for the?
† I’ve used this to sneak footnotes into tweets. Of course, this will all become a bit pointless if the managers at Twitter really do continue to force fit their brilliant ideas into the product, rather than continuing their previously successful strategy of paving cowpaths.
You may be pleased to learn that this is probably the penultimate thing I have to say here about #NoEstimates.
Anyway, it’s for these reasons…
It’s conceptually incoherent
From what I can gather from following twitter discussions, reading blogs and articles, and buying and reading the book, then, in #NoEstimates land, supposing that someone were to come and ask you “how long will it take to develop XYZ piece of software?”, any one of the answers below could be an acceptable #NoEstimates answer, depending on which advocate’s line of reasoning you follow:
1. Estimation is morally bankrupt and I shall have no part in continuing the evil of it. You are a monster! Get out of my office! But fund the endeavour anyway. Yes, I do mean an open-ended commitment to spend however much it turns out to take.
2. Estimation is impossible even in principle so there is no way to answer that question, however roughly. But do please still fund the endeavour. No, I can’t indicate a reasonable budget.
3. Estimation is impossible even in principle so there is no way to answer that question, and even if there were I still wouldn’t, because you can’t be trusted with such information. No, I can’t indicate a reasonable budget. It’ll be ok. Trust me. No, I don’t trust you; but trust me.
4. Estimation is so difficult and the results so vague that you’re better off just starting the work and correcting as you go. It’ll be ok. Trust me. No, I still don’t trust you.
5. Estimation is so difficult and the results so vague that you’re better off choosing to invest a small, but not too small, amount of money to do something and learn from it, and then decide if you’ve come to trust me and want to do some more (or not, which would be disappointing but OK). For your kind of thing, I suggest we start with $BUDGET_FOR_A_BIT, and expect to end up spending something in the range $TOP_DOWN_SYNTOPIC_ESTIMATED_SPEND_AS_A_RANGE.
6. Estimation is difficult to do with any useful level of confidence and the results of it are hard to use effectively. What would you do with an estimate if I did provide it? How could we meet that need some other way?
7. Here is a very rough estimate, heavily encrusted with caveats and hedges, of the effort required to deliver something of the size that experience suggests what you asked for will end up being. No, I will not convert that into a delivery date for you. Let me explain a better way to plan.
8. OK, OK, since you insist, here is a grudgingly made estimate of a delivery date in which I have little faith; I hope it makes you happy. Please don’t use it against me.
For the record: my preferred answer is some sort of combination of 5 and 6, with a hint of 4, and 7 as a backup. And I have turned down potentially lucrative work on the basis of those kinds of answer being rejected.
That’s a huge range of options, many subsets of which are fundamentally, conceptually, incompatible with other subsets. Which means that #NoEstimates doesn’t really seem to me as if it’s much help in deciding what to do.
Except…one good thing about #NE is that it does rule out this answer: “let me fire up MS Project and do a WBS and figure out the critical path and…” which is madness, for software development, but you still see people trying to do it.
Also for the record: In my opinion far too many teams spend far too much time estimating far too many things in far too much detail, and in ways that aren’t sufficiently smart or useful.
Even in an “Agile” setting you see this, and for that I blame Scrum, which from the beginning has had this weird obsession with estimating and re-estimating, and re-re-estimating again and again and again. I don’t do that. And I certainly don’t do, and do not recommend, task-level estimates (or even having tasks smaller than a story).
I can’t understand what anyone’s saying
It seems as if the “no” in #NoEstimates doesn’t mean no. Or maybe it does. Or it might mean: prefer not to but do if you have to. Or it might mean: I’d prefer that you didn’t, but if it’s working for you carry on.
And the “estimate” in #NoEstimates doesn’t mean estimate. It means: an involuntary commitment to an unsubstantiated wild-arsed guess that someone in authority later punishes you for not meeting§. Or it might mean estimate after all, but only if the estimate is based on no data. If there’s data, then that’s a “forecast”, which is both a kind of estimate and not an estimate.
“Control” seems to be a magic word for some #NE people. It’s said to them in the morally neutral, cybernetics sense but they hear it in some Theory X sense, as if it always has the prefix “Command and ”. This creates the impression that they have no interest in being in control of development, budgets, etc. Which might or might not be true. Who can say?
So not only are the #NoEstimates concepts all over the place, they’re discussed in something close to a private vocabulary—maybe more than one private vocabulary. This is not an effective approach to an engineering topic.
Nevertheless: it’s strong medicine and it’s being handled sloppily
…which, if you’ve ever taken strong medicine you’ll know is a poor policy.
In the contexts for software development that I’m familiar with† the idea of making estimates as an input to a budgetary process at the level of months, quarters, years and maybe (hopefully) beyond is really deeply baked in. Maybe this is part of why Scrum has managed to find a much better fit in corporate land than, say, XP ever did: a Scrum team can seem to still play that game.
For a development team to turn around and say even that estimates are too difficult to make useful, so let’s do something else instead, is very challenging to the conventions of the corporation. Conventions which I believe should be challenged, in principle. To turn around and say that estimation is (and always was) impossibly difficult and that management were doing bad things with the results of it is going to deeply challenge and upset many people in an unhealthy way. That’s not the way to effectively change organisational habits. We saw this before with Scrum.
Now, I happen to be of the opinion that estimation is hard, but can be done well, and you can learn to do it better, and the results of it are often misapplied. And I’ve come to the opinion that the most effective and/because least upsetting route to dealing with that is to re-educate managers to do their work in a better way such that they stop asking for estimates. I find that coaching managers to ask more helpful questions beats coaching programmers to give fewer unhelpful answers.
For the record, too: I agree that too many enterprises use developers’ estimates in a way that is invalid in theory, unhelpful in practice, and questionable in its effect on the long term health of the business (and the developers). But, also for the record, I do not agree that this is an inevitable consequence of some intrinsic problem with estimation.
But in the #NE materials that I’ve seen there’s not really much recognition of these organizational aspects of the proposed change. It seems mainly to be about convincing developers that they shouldn’t be producing estimates and explaining how misguided (at best) or evil (at worst) management are to ask for estimates in the first place.
We’re just not smart about this kind of thing
…and the treatment of #NoEstimates that I’ve seen fosters exactly the kind of not-smartness that can get us into a real mess.
The industry, and corporations within it, and teams within corporations have a tendency to lurch from one preposterous extreme to another, and to wildly mis-interpret very well intentioned recommendations. This is a particular problem when the recommendation is to do less of something that’s perceived as irksome.
eXtreme Programming offers a good example. When considering a proposed way of working I often find it useful to consider what it has arisen as a reaction to. On one hand, #NoEstimates seems to be partly a reaction against the very degenerate Scrum-and-Jira style of “Agile” development that many corporations are far too comfortable with. And on the other hand, it seems to be a reaction against some really terrible management behaviour* that’s connected with estimation.
eXtreme Programming can be usefully read as a reaction against the insane conditions** in large corporate software shops in the mid 1990s. People who really wanted to be programmers rushed to XP with joyous relief. As it happens I took some convincing, because I kind-of wanted the models thing—and not just boxes-and-arrows; I love, love, love me some ∃s and ∀s—to work. But it doesn’t. So, you know, I’m able to recognise that my ideas need to change, and I’m prepared to do the work.
Anyway, in part of the rush to XP we found that people abandoned the writing of design documents—these days they’d be condemned as muda, but that Lean/kanban vocabulary wasn’t so widespread then—but unfortunately the design work that we were meant to do instead in other ways didn’t really get picked up. Similarly, BRDs and Use Cases and all that stuff went out of the window but good requirements practices tended to vanish too. And the results were not pretty.
And so, over a long and painful period we had to invent things like Software Craftsmanship to try to re-inject design thinking, and we—that is, Jeff Patton—had to introduce things like Story Mapping to get back to a way to talk about big, complex scope.
I invite you to get back to me in, oh, say, 5 years and check on this…forecast: Either #NoEstimates will have burned out and no-one will really remember what it was, or…
- There will be a[t least one] subscription-taking membership organization devoted to #NoEstimates
- Leading members of that organization will make their living as #NoEstimates consultants and trainers, far removed from any actual development
- This organization will operate a multi-level marketing scheme in which the organization certifies, at great expense, senior trainers who train, at substantial expense, certified trainers who train, at not outrageous expense, certified #NoEstimates “practitioners”
- Adoption of #NoEstimates will turn out to lack some benefit of estimation that #NoEstimates advocates won’t see and can’t imagine, and some other practice will have to have been invented to get it back.
And then the circle will be complete. I don’t think that we’re collectively smart enough to avoid this dreary, self-defeating outcome.
Update—as if by magic, this twitter exchange appears within 12 hours of my post (NB I mean no criticism of either party and I thank them for holding this conversation in public):
Noel asks: What is the first thing one should consider when contemplating a move to NoEstimates?
And Woody replies:
#NoEstimates isn’t something you “move to”. It is about exploring our dependence on and use of estimates, and seeking better.
I expect that many conversations like this are taking place. And that’s how the subtle but valuable message fades away and the industry’s hunger to be told the right thing to do (and then, worse, do it) takes over.
Although I have worked at, as it were, newly-founded companies with few staff, little money and one big idea, working out of adapted loft space, I don’t characterise those as “startups”. That’s because the business model was not to throw fairly random things at the wall in the hope that one of them stuck long enough to pay the bills until the next funding round arrived; repeat until the exit strategy kicks in or you go broke—which is how I understand “startups” to work. That’s a world that I don’t understand very well because I haven’t done it.
So: some of the corporations I’ve worked for have been product companies, and some sold a service, and some were small and local and some were large and global, and plenty of variations on that. That’s the world I understand, and what I write here grows out of that understanding.
The F&A people wanted to know when, during an iterative, incremental development process they were supposed to create the intangible asset on the balance sheet representing the value-add on the CAPEX on building the system, so that they could start amortising it.
The HR people wanted to know how, with a cross-functional, self-organizing team in place, they could do performance management and, and I quote, figure out “who to pay a bonus to and who to fire”.
* I’ve recently heard of companies that link the “accuracy” (whatever that might mean…) of task estimates to bonus pay. And I agree with J.B. that it’s fucking disgusting. What I very much doubt is that fixing that state of affairs is primarily a bottom-up exercise in not estimating.
** Around that time I held—mercifully briefly—a job as a “Software Engineer” in which the ideal we were meant to strive for was that no code was created other than by forward-generation from a model in Rational Rose.
That is, in the technical sense used in Lean manufacturing, whose first two principles include:
- Specify value from the standpoint of the end customer by product family.
- Identify all the steps in the value stream for each product family, eliminating whenever possible those steps that do not create value.
The “steps that do not create value” are waste. If our product is, or contains a lot of, software, is the action of testing that software waste, that is, not creating value from the standpoint of the end customer?
At the time of writing I am choosing the carpet tiles for our new office. On the back of the sample book is a list of 11 test results for the carpet relating to various ISO, EN and BS standards, e.g. the EN 986* dimensional stability of these carpet tiles is < 0.2%—good to know! There are also the marks of Cradle to Cradle certification, GUT registration, BREEAM registration, a few other exotica and a CE mark. Why would the manufacturer go to all this trouble? Partly because of regulation: an office fitter would baulk at using carpet that did not meet certain mandatory standards. And partly because customers like to see a certain amount of testing.
Take a look around your home or office; I’ll bet you have a lot of small electrical items of various kinds. Low-voltage power supplies, in particular. Take a look at them. You will find on some the mark of Underwriters Laboratories, which indicates that the manufacturer has voluntarily had the product tested by UL for safety, and maybe for other things. If you’re in the habit of taking things apart, or building things, you might also be familiar with the UL’s “recognised component” mark for parts of products. On British-made goods† you might see the venerable British Standards Institution “Kite Mark”, or maybe on Canadian gear the CSA mark, on German kit one of the TÜV marks, and so on. These certifications are for the most part voluntary. Manufacturers will not be sanctioned for not obtaining these marks for their products, nor will—other than in some quite specialised cases—anyone be sanctioned for buying a product which does not bear these marks.
Sometimes a manufacturer will obtain many marks for a product, and sometimes fewer, and sometimes none. I invite you to do a little survey of the electrical items in your office or home: how many marks does each one have? Do you notice a pattern?
I’ll bet that the more high-end a device—in the case of power supplies, the more high-end what they drive—the more marks the device will bear, and the more prestigious those marks will be. Cheaper gear will have fewer, less prestigious marks—ones that make you say “uh?!”†† and the very cheapest will have none.
If testing is waste, why do manufacturers do this?
How does your answer translate to software development?
†† There are persistent rumours that some Chinese manufacturers of questionable business ethics have concocted a mark of their own which looks from a distance like the CE mark.
Well, this feels like a conversation from a long time ago. This presentation got tweeted about, which asserts that
Mocks kill TDD. [sic]
which seems bold. And also that
TDD = Design Methodology
which seems wrong. And also that
Test-first encourages you to design code well enough to test…and no further
which seems to have profoundly misunderstood TDD.
Just so we can all agree what we’re talking about, I think that TDD works like this:
repeat until done:
- write a little test, reflecting the next thing that your code needs to do, but doesn’t yet
- see it fail
- make all tests—including the new one—pass, as quickly and easily as possible
- refactor your working code to produce an improved design
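As a concrete (and entirely invented) illustration of one turn around that loop, here is a minimal sketch in Python; the function and test names are mine, not taken from any of the material under discussion:

```python
# Step 1: write a little test for the next thing the code needs to do.
def test_total_of_empty_basket_is_zero():
    assert total([]) == 0

def test_total_sums_the_item_prices():
    assert total([("apple", 30), ("tea", 170)]) == 200

# Step 2: run the tests and see the new one fail (total doesn't exist yet).

# Step 3: make all the tests pass as quickly and easily as possible.
def total(basket):
    return sum(price for _name, price in basket)

# Step 4: refactor the working code towards a better design, keeping
# every test green, then go round the loop again.
```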
I don’t see that as being a design methodology. It’s a small-scale process for making rapid progress towards done while knowing that you’ve not broken anything that was working, and which contains a publicly stated commitment to creating and maintaining a good design. There’s nothing there about what makes a good design—although TDD typically comes with guidance about well designed code being simple, well designed code lacking duplication and—often overlooked, this—well designed code being easy to change. I also often suggest that if the next test turns out to be hard to write, you should probably do some more refactoring.
Note that in TDD we don’t—or shouldn’t—test a design, that is, we shouldn’t come up with a design and then test for it. Instead we discover a design through writing tests. TDD doesn’t design for you, but it does give you a set of behaviours within which to do design. And I’m pretty sure that when followed strictly, TDD leads to designs that have measurably different properties than designs arrived at other ways. Which is why this blog existed in the first place (yes, I have been a bit lax about that stuff recently). UPDATE: a commentator on lobste.rs (no, me neither) quotes me saying that “TDD doesn’t design for you, but it does give you a set of behaviours within which to do design.” and asks: how is TDD not a design methodology, then?! And I answer: because it doesn’t provide a vocabulary of terms with which to talk about design, it doesn’t provide a goal for design, it doesn’t provide any criteria by which a design could be assessed, it doesn’t provide any guidance for doing design beyond this—do some, do it a little bit at a time, do it by improving the design of already working code. If that looks like a methodology to you, then OK.
But Ken does have a substantive objection to code that he’s seen written with mocks. Code which has tests like this:
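(The original test isn’t reproduced here; as a stand-in, here is a hypothetical Python sketch of the style being described, in which mock expectations reach out through other mocks—every name below is invented:)

```python
from unittest.mock import Mock

# A hypothetical function under test, wired to four collaborators.
def process_order(order_id, repo, gateway, logger, notifier):
    order = repo.load(order_id)
    gateway.charge(order.customer, order.total())
    logger.info("charged %s", order.customer.email)
    notifier.send(order.customer.email, "thanks")

def test_processes_an_order():
    # Far too many mocks, and expectations that refer to other mocks.
    customer = Mock(email="x@example.com")
    order = Mock(customer=customer)
    order.total.return_value = 100
    repo, gateway, logger, notifier = Mock(), Mock(), Mock(), Mock()
    repo.load.return_value = order

    process_order("order-42", repo, gateway, logger, notifier)

    # Brittle: every incidental interaction is pinned down exactly.
    repo.load.assert_called_once_with("order-42")
    gateway.charge.assert_called_once_with(customer, 100)
    logger.info.assert_called_once_with("charged %s", "x@example.com")
    notifier.send.assert_called_once_with("x@example.com", "thanks")
```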
and I certainly agree that this is a terrible test. There are far too many mocks in it, and their expectations are far too complex and far too specific. Worst of all, the expectations refer to other mocks. This is terrible stuff. You can’t tell what the hell’s going on, and this test will be extraordinarily brittle because it reaches out far too far into the system. It probably has a net negative value to the programmers who wrote it. That’s bad. Don’t do that.
Is this the fault of mocks? Not really. The code under test here wouldn’t be much different, I’ll bet, if it hadn’t been TDD’d—if this code even was TDD’d; I have my doubts, although people do do this sort of thing, I know. This confusing, brittle, unhelpful test has been written with mocks, but not because of mocks. One could speculate that it was written by someone who’d got far, far too carried away with the things that mock frameworks can do, and failed to apply good taste, common sense and any kind of design sensibility to what they were doing. Is that the fault of mocks? Not really. Show me a tool that can’t be abused and I’ll show you a tool that isn’t worth having.
Other Styles of Programming
Ken, of course, has an agenda, which is really to promote a functional style of programming in which mock objects are not much help in writing the tests. I think he’s right about that and it should be no surprise as mocks are about writing tests that have something to say about what method invocations happen in what order, and as you move towards a functional style that becomes less and less of a concern. So maybe Ken’s issue with mocks is that they don’t stop you from writing non-functional code—to which I say: that doesn’t mean that you have to.
If you can move to functional programming (spoiler: not everyone can) and if your problem is one that is best solved through a functional solution (spoiler: not all of them are), then off you go, and mocks will not be a big part of your world and fair enough and more power to you. But if not…
Now, I tweeted to this effect and that got Ron wondering about that kind of variation, and why it might be that Smalltalk programmers don’t use mocks when doing TDD. Ron kind-of conflates what he calls the “Detroit School” of TDD and “doing TDD in Smalltalk”, which is kind-of fair enough as Kent and he and the others developed their thinking about TDD in Smalltalk and that’s the style of TDD that was first widely discussed on the wiki and spread from there.
Ron says that he does use “test doubles” for:
“slow” operations, and operations across an interface to software that I don’t have control of
and of course mocks are very handy in those cases. But that’s not what they’re for. Ron says:
Perhaps our system relies on a slow operation, such as a database access […] When we TDD such a thing, we will often build a little object that pretends to be a database […] that responds instantly without actually exercising any of the real mechanism. This is dead center in the Mock Object territory,
Well, no. Again, you can use mocks for such tests, but you’ll only get much value from that if your test cares about, say, what the query to the database is (rather than merely using the result). And while it will make your tests go fast, that’s not the real motivation for the mock, handy as it may be.
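To make that difference concrete, a hypothetical Python sketch (the names and the query are mine): a stub merely stands in for the slow database so the test can use its result, while a mock proper lets the test assert on what the query to the database actually was.

```python
from unittest.mock import Mock

def premium_customers(db):
    return [c for c in db.query("SELECT * FROM customers")
            if c["spend"] > 1000]

# A stub: stands in for the slow database; the test only uses its result.
def test_filters_premium_customers_with_a_stub():
    stub_db = Mock()
    stub_db.query.return_value = [{"name": "A", "spend": 50},
                                  {"name": "B", "spend": 5000}]
    assert [c["name"] for c in premium_customers(stub_db)] == ["B"]

# A mock proper: the test cares what the query *is*, not just the result.
def test_issues_the_expected_query_with_a_mock():
    mock_db = Mock()
    mock_db.query.return_value = []
    premium_customers(mock_db)
    mock_db.query.assert_called_once_with("SELECT * FROM customers")
```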
A Brief History Lesson
Mocks were invented to solve a very specific problem: how to test Java objects which do not expose any state. Really not any. No public fields, no public getters. It was kind-of a whim of a CTO. And the solution was to pass in a collaborating object which would test the object which was the target of the test “from the inside” by expecting to be called with certain values (in a certain order, blah blah blah) by the object under test and failing the test otherwise.
A paper from 2001 by the originators of mocks describes the characteristics of a good mock very well:
A Mock Object is a substitute implementation to emulate or instrument other domain code. It should be simpler than the real code, not duplicate its implementation, and allow you to set up private state to aid in testing. The emphasis in mock implementations is on absolute simplicity, rather than completeness. […] We have found that a warning sign of a Mock Object becoming too complex is that it starts calling other Mock Objects – which might mean that the unit test is not sufficiently local. [emphasis added]
The object under test in a mock object test is surrounded by a little cloud of collaborating mocks which are simple, incomplete and local. UPDATE: Nat Pryce reminds me that process calculi, such as CSP, had an influence on the JMock approach to mocking.
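In that spirit, a hand-rolled mock can be very small indeed. A hypothetical Python sketch (my names, not the paper’s): it is simpler than any real mailer, doesn’t duplicate an implementation, and exposes just enough state to aid the test.

```python
# A hand-rolled mock: simple, incomplete, local. It records the calls
# made on it and lets the test assert on them afterwards.
class MockMailer:
    def __init__(self):
        self.sent = []          # private state set up to aid testing

    def send(self, to, body):   # emulates only what this test needs
        self.sent.append((to, body))

def remind(mailer, overdue_accounts):
    for account in overdue_accounts:
        mailer.send(account, "payment overdue")

def test_reminds_every_overdue_account():
    mailer = MockMailer()
    remind(mailer, ["a@example.com", "b@example.com"])
    assert mailer.sent == [("a@example.com", "payment overdue"),
                           ("b@example.com", "payment overdue")]
```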
Ron talks about Detroit/Smalltalk TDD-ers developing their test doubles by this means:
just code such a thing up […] Generally we’d build them up because we work very incrementally – I think more incrementally than London Schoolers often do – so it is natural for our mock objects to come into being gradually. [emphasis added]
I don’t know where he gets that impression about the “London School”. In my experience, in London and elsewhere, mocks made with frameworks also come into being gradually, one expectation or so at a time. How else? UPDATE: Rachel Davies reminds me that the originators of mocking had a background in Smalltalk programming anyway.
Ron speculates that mocks are likely to be more popular amongst programmers who work with libraries that they don’t control, and I expect so. Smalltalkers don’t do that much, almost everyone else does, lots. He speculates that mocks are likely to be more popular amongst programmers who work with distributed systems of various kinds, and I expect so. Smalltalkers don’t do that much, almost everyone else does, lots. Now, if we could all write our software in Smalltalk the world would undeniably be a better place, but…
In fact, I suspect that Smalltalkers write a lot of mocks, but that these tend to develop quite naturally into the real objects. The Smalltalk environment and tools afford that well. Almost everyone else’s environment and tooling fights against that every step of the way. And Smalltalkers won’t generally use a mocking framework, although there are some super cute ones, because they don’t have to overcome the stumbling blocks that languages like Java put in the way of anyone who actually wants to get anything done.
Anyway, there’s this thing about tools. Tools have affordances, and good tools strongly afford using them the right way and weakly—or not at all—afford using them the wrong way. And there are very special purpose tools, and there are tools that are very flexible. I read somewhere that the screwdriver is the most abused tool in the toolbox, because a steel rod that’s almost sharp at one end and has a handle at the other is just so damn useful. But that doesn’t mean that it’s a good idea to use one as a chisel. I grew up on a farm and I remember an old Ferguson tractor which was started by using a (very large) screwdriver to short between the starter motor solenoid and the engine block. Also not a good idea.
That we can do these things with them does not make screwdrivers bad. And the screwdriver does not encourage us to be idiots—it just doesn’t stop us. And so it is with mocks—they are enormously powerful and useful and flexible and will not stop us from being stupid. In particular, they will not stop us from doing our design work badly. And neither will TDD.
What I think they do do, in fact, is make the implementation of bad design conspicuously painful—remember that line about the next test being hard to write? But programmers tend to suffer from very bad target fixation when a tool becomes difficult to use and they put their head down and power through, when they should really stop and take a step back and think about what the hell they’re doing.
Jump to the TL;DR if you want.
What does #NoEstimates mean? It’s surprisingly difficult to tell. Depending on whom you ask, it might mean, as the name suggests, No Estimates! or it might mean Estimate All The Things—just don’t call it that! or it might mean something in between those, or it might mean something which has nothing to do with estimates at all, or it might be about questions not answers and it might be about just “starting a debate”1. A term that can mean so many things runs the risk of meaning nothing, or of just being the latest shiny buzzword to signal that you get it (not like those other silly folks, stuck in their ways).
When it comes to #NoEstimates what I’ve found is that the most concrete statement of what it might mean that anyone can point to is Vasco Duarte’s self–published book No Estimates: How to measure project success without estimating. (I’ll refer to it as “NE” in what follows, whereas the NoEstimates movement at large will be #NE.)
It’s a commendably brief book, and not so expensive. It’s also clearly a labour of love and I do respect that. The urge to share really cool ideas is a strong and respectable one. There’s a bit of a “business novel” style of story running through it, linking some tutorial style material, and this story tells the initially very sad tale of Carmen, a well-meaning but inexperienced project manager—even within 10 pages of the end of the book she still thinks that a Gantt chart is going to be of any use to her—and her profoundly idiotic and bullying boss, both of whom seem to work at a rather desperate and very old–fashioned outsource software development house, named Carlsson and Associates (I’ll call them “CA”).
The word “budget” occurs ten times in NE, variously in the contexts of: the difficulty of not exceeding one, the unreasonableness of demands made relative to them, the further unreasonableness of demanding that people conform to budgets that they neither determined nor can control, and so on. And those are all difficult and unreasonable things. However, a company like CA, which is taking part in a competitive bid in Carmen’s story is going to have to produce a proposal, containing a quote, a proposed budget for the work on the “Big Fish” government contract.
NE often focusses on the difference between an estimate, a commitment, and a forecast. Those are different things, but NE seems to want the distinction to hinge on whether you have data (that would be a “forecast”) or are just guessing (that’s an “estimate”). I’d like to suggest that amongst people who know what they are doing the distinction is much less clear-cut, and much of what NE calls forecasting looks a great deal like estimation to me.
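To make the point concrete: what NE would call a forecast is often something like resampling observed throughput, which to my eye is still estimation, just estimation done with data. A hypothetical sketch in Python (all names and numbers are mine; it assumes the observed weekly throughput contains at least one value greater than zero):

```python
import random

def forecast_weeks(remaining_stories, observed_weekly_throughput,
                   runs=10_000, seed=1):
    """Monte Carlo 'forecast': repeatedly replay history at random until
    the backlog is exhausted, then report the 50th and 85th percentile
    durations. Assumes at least one observed throughput value is > 0."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(runs):
        left, weeks = remaining_stories, 0
        while left > 0:
            left -= rng.choice(observed_weekly_throughput)
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[runs // 2], outcomes[int(runs * 0.85)]
```

Call the output an estimate or call it a forecast: either way it is a hedged statement about the future derived from judgement about which past observations count.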
But NE doesn’t seem to mention quotes (other than as in “what somebody said once”). Throughout the book there’s no indication that I can find of how exactly an #NE “practitioner” is supposed to produce a quote for a piece of work—which will be required at some point by anyone who isn’t working for an in–house team and needs to win a contract.
Update: I’ve been asked how quotes fit into an agile world. In my experience, if you are a supplier of development effort to clients then the quote is what gets you permission to start spending money. It’s not really “the budget”—although it might be described that way—it’s a starting point for an on-going conversation about value. Again, in my experience, a £60,000 proof–of–concept or a £120,000 Discovery activity can, through the establishment of a reputation for steady delivery of value, grow into a multi-million pound endeavour spanning several years without anyone ever deciding that this is what it should end up being. But sometimes you really do need to talk about the years and the millions, and if you can’t: no sale!
In the story Carmen sets about the estimation task (to produce the undeclared quote) in the worst way possible: she tries to construct a Work Breakdown Structure2, estimate the effort for the leaf nodes, and then roll that up into an estimate for the whole thing, which is madness. CA get the gig—after somehow having sight of their competitor’s bid, which suggests that the client is pretty sloppy. It also suggests that CA did a very common thing and priced their bid “to win”, that is, by producing a very low quote. It’s important to realise that the estimated effort (time/team size/cost, whatever…) to complete a piece of work is only one input to a quote. By quoting a price to win the work CA are following in the footsteps of many a supplier who has low balled an alleged “fixed–price” for a piece of work comfortable in the knowledge that the client will want to change their mind about the scope and can then be charged for change control for a very, very long time—which is where the unscrupulous supplier3 makes their profit. CA don’t seem to be even that smart, and Carmen’s boss seems to think that CA can somehow price to win with a fixed price and a fixed scope and then deliver against both. Carmen’s project is pre–doomed. Which can be a good thing. So long as everyone recognises that you have no chance of delivering, whatever you do, then it doesn’t matter what you do and all sorts of options which were previously unavailable can become plausible, because what the hell!
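One reason the roll-up approach is madness is worth spelling out: summing the “most likely” figures for each leaf node ignores the fact that individual task actuals rarely beat their estimates but often blow well past them. A quick simulation makes the point. This is my own hedged sketch, not anything from NE—the leaf figures and the lognormal error model are invented purely for illustration.

```python
import random

# Hypothetical leaf-node estimates (in days) from a Work Breakdown Structure.
# These numbers are invented for illustration.
leaf_estimates = [3, 5, 8, 2, 13, 5, 8]

def simulate_total(estimates, trials=10_000):
    """Simulate the project total, assuming each task's actual effort is its
    estimate scaled by a lognormal factor (rarely under, often well over)."""
    totals = []
    for _ in range(trials):
        total = 0.0
        for most_likely in estimates:
            # Crude model of estimation error: median overrun ~22%, long right tail.
            total += most_likely * random.lognormvariate(0.2, 0.5)
        totals.append(total)
    return sorted(totals)

totals = simulate_total(leaf_estimates)
naive = sum(leaf_estimates)                 # the roll-up Carmen would quote
p50 = totals[len(totals) // 2]              # simulated median outcome
p90 = totals[int(len(totals) * 0.9)]        # outcome you'd miss 1 time in 10
print(f"naive roll-up: {naive} days, simulated p50: {p50:.0f}, p90: {p90:.0f}")
```

Even under this mild error model the naive roll-up sits well below the simulated median, never mind the 90th percentile—which is roughly what happens to CA.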
Now, Carmen’s boss is an idiot but weirdly, on page 62, he suddenly asks a smart question, albeit in a stupid way and for the wrong reason:
“Carmen, we have a review of the project with the Client next week. How are things going, what kind of progress can we show them?” asked her boss.
“Good Morning sir. Yes, we do have the requirements delivery and the Earned Value Management reports that I showed you just yesterday.”
“That is great Carmen, but I was asking if we can show them working software. You know, to make a good impression.”
Turns out that there is no way to demonstrate any useful intermediate state of the implementation of the Big Fish system. Carmen’s project has become even more doomed than it was before CA won the gig. Although CA seem highly clueless, unfortunately Carmen’s situation is not so fictional as one might hope. But… and this, I think, speaks to the core of why #NE is so disappointing to so many people: CA have allowed their client to make them do stupid things and then CA have piled stupidity upon stupidity in how they respond to that. Competent suppliers just don’t behave the way that CA does, not these days.
Although all too plausible, the scenario in the story is also a sort of pastiche of what too many mainstream projects looked like more than ten years ago. I certainly saw projects like this when I started working in the industry in the early 90s. But these days, not so much… in between times, something changed.
BigFish is a government project and as NE explains, government projects are notoriously very expensive, very late, and often deliver almost nothing of any value. The astonishingly terrible UK project to build a new IT system for the NHS is cited. But, here’s the thing, that project came to a long, slow, shuddering halt, finally stopping altogether in 2013—and even governments can learn. Since 2011 new build projects in HM Government departments4 are run with oversight from the Government Digital Service, who know what they are doing. All GDS projects are iterative, incremental and evolutionary. Spending departments simply are not allowed to sign up for the kind of catastrophic deal with the Usual Suspects that led to those horror-story government IT projects of yore.
This was meant to be a chapter–by–chapter review of NE, but my eyes started to glaze over—which I realise is a poor trait in a book reviewer, but the reason why they did is interesting. Back to the story:
Carmen’s Big Fish project gets into exactly the sort of trouble that you’d expect, being driven by guesswork and wishful thinking, and she ends up appealing to the local #NE guru, Herman. In the charming illustrations by Ángel Medinilla this Herman is depicted as a portly, bearded, balding fellow. I certainly applaud the principle that portly, bearded, balding men are the fount of all wisdom. Anyway, Herman gives Carmen various items of good, commonplace and uncontroversial advice and between them they get the project back on track.
Now, through the first half of NE I’d been thinking: so far so unsurprising, when do we get to the new thing? And when Herman entered the story I thought: great! here comes the punchline. But it just doesn’t come.
Errata to NE
Perhaps these can be addressed in a later version of the book. They are found in the PDF of version 1.0.
p16 J. B. Rainsberger has made many fine contributions to the state of the art, but did not introduce the concept of distinguishing essential from accidental complexity in 2013 (although I’m happy to believe that he spoke about it that year). This distinction was introduced by Fred Brooks in his famous paper No Silver Bullet[pdf] — Essence and Accidents of Software Engineering. The distinction was part of the folklore of the industry when I started programming for money in the early 1990s, a long time before I met J.B.
p51 incorrectly characterises Set Based Concurrent Engineering[pdf] as the process of starting to build the production line for a product before you’ve finished developing it. It isn’t. Or rather, doing that is just (one part of) “Concurrent Engineering”. The “Set” is of alternative design choices and they are all developed (concurrently) to a surprisingly high level of refinement and each eliminated through a tournament until one remains which then goes into production. This SBCE process is followed in part to allow for the decision to go to production to be made as late as possible. Reinertsen, in his The Principles of Product Development Flow criticises this approach as too often delaying the decision too long, beyond the point where the economic return on further delay starts to decline.
p64 wrongly states that RUP5 is a linear process model. It’s not. Or rather, it’s not supposed to be. Philippe Kruchten, who was the brains of the operation, built RUP to be very flexible and highly configurable and the first thing any RUP project was supposed to do was tailor the process within some very broad parameters by creating a “Development Case”. The non–negotiable bits of a RUP-derived process were meant to be [emphasis added]:
- Develop iteratively, with [technical] risk as the primary iteration driver
- Manage requirements
- Employ a component-based architecture
- Model software visually
- Continuously verify quality
- Control changes
It’s important to note that in Kruchten’s idea of what a RUP project should look like, the implementation, testing and deployment to production of code happens in every iteration of every phase of the project. However, what a lot of people (every RUP project I ever saw, in the UK or the USA, certainly) did was to carry on doing whatever linear, phased process they were doing before but rename bits of it using RUP terminology. Thus, the requirements gathering phase was renamed “Inception” and so on, and this worked about as well as you’d expect: very, very badly. And so the reputation of RUP was destroyed.
The aspect of RUP—when done right—that most lean/agile folks would object to most these days is the scheduling of work by risk rather than by value: we believe that agile technical practices tame technical risk for us, whatever order we develop features in. They’d probably not be too keen on visual modelling (it is a mistake not to use visual modelling) nor on controlling changes (we embrace change, don’t we?).
I think it was the great philosopher Robert Anton Wilson who said that the secret of leadership is to find some people who are going somewhere and get in front of them. I feel as if #NE, certainly as described in NE, might be doing something very much like that. Which isn’t a bad thing, necessarily, so much as it is disingenuous. Maybe that makes the #NE folks sound too cynical—which I don’t think they are. But there’s a huge gulf between the sort of pre–doomed idiocy of the way CA run their project to begin with in the story and what competent suppliers working with the current good practice of iterative, incremental, evolutionary development (the only way that has ever worked in the general case, currently known as “Agile”) do today. And the gap6 between that and what #NE recommends, and what NE explains very well, is very small to non–existent.
At least this book is the first place I have seen all of those current good practices collected together with a semi-coherent story about how to use them all together on the same project. That’s a very useful artefact to have. But I might wish that the continual identification of these good practices as an approach distinct from the leading edge of mainstream lean/agile practice (which it is not) were dropped. The book would be greatly improved thereby and would, specifically, look a lot less like snake-oil salesmanship—which I don’t believe it is, but it looks like it, especially with all the charlatan hard–sell techniques you have to get past on the site to buy the thing.
So what is the substantive content of #NE (as revealed in NE)?
There is one specific practice, illustrated very well in the book, which may be unfamiliar to many people doing mainstream Agile: slicing stories until they are all about the same size7, at which point “velocity” becomes a count of stories completed, not the sum of estimates of stories completed. Note that this isn’t a new, nor particularly radical idea, merely unfamiliar to many.
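The arithmetic of that practice is simple enough to sketch. Once stories are sliced to roughly equal size, forecasting needs no per-story estimates at all, just the throughput of recent iterations. This is my own minimal illustration of the idea, not code from NE, and the throughput figures are invented.

```python
import math

# Stories finished per iteration over the last five iterations (invented data).
recent_throughput = [6, 8, 7, 5, 9]
stories_remaining = 42

# "Velocity" is now just a count of stories completed.
avg_pace = sum(recent_throughput) / len(recent_throughput)
worst_pace = min(recent_throughput)

# Range forecast: iterations needed at the average vs the worst observed pace.
best_case = math.ceil(stories_remaining / avg_pace)
cautious = math.ceil(stories_remaining / worst_pace)
print(f"likely {best_case} to {cautious} more iterations")
```

The point is that the estimation effort has moved entirely into the slicing: the forecast itself is a division, with the observed spread of throughput standing in for the uncertainty you would otherwise have tried to capture in per-story estimates.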
If you’ve drunk too much of the Scrum kool-aid (enough for the effects to become irreversible) then you will hold fast to the dictum that “Work may be of varying size, or estimated effort” [Scrum Guide, v1, p 9]. However, what might have slipped your mind is that the Scrum Guide says only this about how Sprint Planning works:
The input to this meeting is the Product Backlog, the latest product Increment, projected capacity of the Development Team during the Sprint, and past performance of the Development Team.
This allows for a great deal of latitude in how that goal is achieved—and the #NE proposition, as explained in NE, would seem to fit that fine, if you were so minded. My experience with Certified ScrumMasters and Professional Scrum Masters8, however, is that the actual courses they do lead them to have a fetishistic determination to estimate and, as the Scrum Guide says, “estimate” [9 occurrences] and “re-estimate” stories, and even “[make] more precise estimates […] based on the greater clarity and increased detail [available on items at the top of the backlog]”. I’ll admit that the obsession that Scrum seems to have with estimating and re-estimating has struck me as odd, ever since I myself became a Certified ScrumMaster back in the 2000s. But is doing estimation the root of all evil? No.
Who is this for, again?
So, NE and #NE take a specific view on this specific issue: don’t estimate stories, slice them. And this is pretty much the only difference I can see between what #NE recommends and what any of the Agile teams that I think of as “getting it” do—and since many of them do slicing, often there’s no difference. Now, the detailed material in NE explains with great subtlety and much appeal to thought experiments with probability distributions and what-not how not doing estimation is a waste–eliminating optimisation for your process—although they do not demonstrate that the effort of doing the slicing is actually less than the effort of doing the estimation, nor indeed that slicing is somehow value–adding and therefore not waste. But Carmen’s story is one of utter foolish disregard for intelligence in project management brought under control by an Agile process which just so happens to use slicing instead of estimation—and the story also just so happens to leave out how you’d do the activities (such as providing a quote) that really do need estimates. This leaves me at a loss as to who NE (and #NE) is for: is it a subtle optimisation for people who are basically doing everything pretty much right? Is it a wake-up call for those in the lengthy tail of very, very late adopters of Agile processes? I don’t know, and I can’t tell.
With some brutal editing to strip out all the propaganda, NE would actually be very useful both as a thing to use to introduce current good practice in Agile to newbies, and as an aide memoire for current practitioners. But it has this incessant drumbeat insistence that the techniques presented are New! and Different! and Radical! when they simply are not, which I think makes it of little use for either group.
I do strongly suspect that if v2.0 of NE had, instead of the story of Carmen and the chaos at CA, a protagonist working at a company that was operating current good practice in Agile development who then makes the switch to #NE, then the differences, and the story, would be much less compelling—but maybe more useful.
2 WBSs for software development are almost never valid. I have seen valid ones, but only in cases where a team is in almost a manufacturing mode, grinding out another instantiation of a very well-known product with only marginal changes from a bunch of other instantiations of it. This is dull, low-risk work and therefore low margin, and most of it is done by low–cost development shops in Farawayvia (or, as it may be, Distantistan). Anyone doing any remotely interesting software development work simply will not be able to construct a valid—never mind useful—WBS and should not even bother trying.
4 Full disclosure: my employer is a supplier to more than one department of HM Government, where we run projects as mandated by GDS and it works so well that we’ve started to use the same DABL framework on private sector projects.
5 RUP is the process that will not lie down dead. Amongst those people who don’t seem to be comfortable running a development project without a vast and incomprehensible wall chart to follow, parts of the re-animated corpse of RUP are currently lurching around in two flavours: SAFe and SEMAT.
6 There’s this diagram in NE which could have been copy-pasted out of one of my own project proposals—I don’t suggest plagiarism, nor any sort of influence either way, it’s just a nice illustration of how NE doesn’t contain much of anything new, and of how #NE doesn’t contain much of anything that many people aren’t just doing anyway. It’s the one on p116 of the PDF, where Herman explains how to explain to a client what of their backlog they will, might, and won’t get—as best we know.
You and I might imagine that constructing such a diagram might involve estimation… that’s certainly how I do mine. In fact, many of the techniques that Herman uses are estimation techniques, even though he insists otherwise, without really explaining why not. I think that this sort of thing is what leads Alistair Cockburn to conclude that #NE is a “bait–and–switch”: they spend far too much time explaining how they estimate stuff.
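For what it’s worth, here is roughly how I build that sort of will/might/won’t diagram myself. NE doesn’t give Herman’s technique as code, so this is purely my own sketch, with invented throughput history and an invented backlog; the only substantive idea is cutting the priority-ordered backlog at a pessimistic and an optimistic percentile of a simulated burn-down.

```python
import random

# Stories completed per past iteration, and iterations left before the date
# we're talking to the client about. All numbers invented for illustration.
history = [6, 8, 7, 5, 9]
iterations_left = 4
backlog = [f"story-{i}" for i in range(1, 41)]  # ordered by priority

def simulate(trials=10_000):
    """Resample historical throughput to simulate stories done by the date."""
    done_counts = []
    for _ in range(trials):
        done_counts.append(sum(random.choice(history) for _ in range(iterations_left)))
    return sorted(done_counts)

counts = simulate()
pessimistic = counts[int(len(counts) * 0.1)]  # ~90% chance of at least this many
optimistic = counts[int(len(counts) * 0.9)]   # only ~10% chance of more

will = backlog[:pessimistic]            # "you will get these"
might = backlog[pessimistic:optimistic] # "you might get these"
wont = backlog[optimistic:]             # "you won't get these—as best we know"
print(f"will: {len(will)}, might: {len(might)}, won't: {len(wont)}")
```

And notice: resampling historical throughput to project a range of outcomes is, whatever we call it, an estimation technique.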
8 “ScrumMaster” or “Scrum Master”? What are the semiotics of that interposed whitespace? Or is it simply a matter of not infringing intellectual property rights? A “ScrumMaster” was, originally, someone who had mastery of doing Scrum. A “Scrum Master” seems more like the master–of–the–Scrum…
Ron Jeffries and Steve McConnell have been discussing #NoEstimates.
Ron wants me to sign up to a Google group to comment, and who has time for that? Worse, Steve wants me to become a registered user of Construx. So, instead I’ll comment here. I’m still paying for this site, after all.
As you might imagine, world famous estimation guru McConnell isn’t so keen on #NoEstimates. Here’s Ron’s response to Steve’s response to Ron’s response to Steve’s video responding to the #NoEstimates thing.
One of the smartest things I ever read about estimation, and one that I quote freely is this: “The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them”—McConnell, 2006.
That was published about 10 years ago. In the context of the state of the art of software development ten years ago, this statement was quite radical—surprisingly many organisations today still don’t get it. In the ten years since then the state of the art has moved on to the point that some (not all, but some) development shops are now so good at controlling a project to meet its targets that creating an up-front determination of whether or not that can be done is really not so useful an exercise. Of course, part of that process has been to teach “the business” that they are wasting their time in trying to fix their targets far ahead into the future, because they will want to change them.
Another very smart thing, from only eight years ago: “strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you’re working on a project that’s striving to deliver something of relatively minor value.”—DeMarco, 2009
Very true. And since then that same progression in the state of the art has so reduced the cost of building working, tested software that the balance has moved further in the direction of not doing projects where the exact cost matters a lot. #NoEstimates is this pair of ideas carried to their natural conclusion.
It’s still not unusual to see IT departments tie themselves in knots over whether a project whose goal is to protect billions in revenue should have a budget of one million or one point five million. And to spend hundreds of thousands on trying to figure that out. The #NoEstimates message is that they don’t need to put themselves into that position.
It’s not for free, of course, that state of the art in development has to be present. But if it is, on we go.
In the video, Steve tries some rhetorical jiu-jitsu and claims that if we follow the Agile Manifesto value judgement and prefer to collaborate with our customers rather than to negotiate contracts with them, and they ask for estimates, why then we should, in a collaborative mood, produce estimates. That’s a bit like suggesting that if an alcoholic asks me for a drink, I should, in a cooperative and generous spirit, buy them one.
I’d like to suggest a root cause of the disagreement between Ron and Steve. I’m going to speculate about the sorts of people and projects that Ron works with and that Steve works with. Personally, I’ve worked in start-ups and in gigantic consultancies and I’ve done projects for blue-chip multinationals selling a service and for one-man-band product shops. My speculation is that in Steve’s world, IT is always and only a cost centre. It’s viewed by the rest of the business as a dark hole into which, for unclear reasons, a gigantic pile of money disappears every year. The organisation is of course very well motivated both to understand how big that hole is, and to try to make it smaller. Hence: estimation! In addition, Steve likes to present estimation as this coolly rational process of producing the best information we can from the meagre scraps of fact available, suitably and responsibly hedged with caveats and presented in a well-disciplined body of statistical inferences. And then the ugly political horse-trading of the corporation gets going. I think that believing this is a reasonable defence mechanism for a smart and thoughtful person caught in the essentially medieval form of life that exists inside large corporations (and, whisper it, all the more so in large American corporations). But it isn’t realistic. In those environments, estimation is political, always.
My speculation is that Ron, and many #NoEstimates advocates, work more in a world where the effort (and treasure) that goes into building some software is very clearly, and very closely in time and space, connected with the creation of value. And that this understanding of IT work as part of the value creation of the organisation and the quickness of the return leads to estimation being really not such a big deal. An overhead of limited utility. So why do that?
Your organisation, I’ll bet, falls somewhere between these two models, so you probably are going to have to do #AsMuchEstimationAsYouNeedWhenYouNeedItAndThatsLessThatYouThinkAndNotSoOftenAsAllThatReallyButjustGetOverIT
Please add your comments about the session to this post.