cumulativehypotheses

personal open science notebook

#AsMuchEstimationAsYouNeedWhenYouNeedItAndThatsLessThanYouThinkAndNotSoOftenAsAllThatReallyButJustGetOverIt

leave a comment »

Ron Jeffries and Steve McConnell have been discussing #NoEstimates.

Ron wants me to sign up to a Google group to comment, and who has time for that? Worse, Steve wants me to become a registered user of Construx. So, instead I’ll comment here. I’m still paying for this site, after all.

As you might imagine, world-famous estimation guru McConnell isn’t so keen on #NoEstimates. Here’s Ron’s response to Steve’s response to Ron’s response to Steve’s video responding to the #NoEstimates thing.

One of the smartest things I ever read about estimation, and one that I quote freely is this: “The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them”—McConnell, 2006.

That was published about 10 years ago. In the context of the state of the art of software development ten years ago, this statement was quite radical—surprisingly many organisations today still don’t get it. In the ten years since then the state of the art has moved on to the point that some (not all, but some) development shops are now so good at controlling a project to meet its targets that creating an up-front determination of whether or not that can be done is really not so useful an exercise. Of course, part of that process has been to teach “the business” that they are wasting their time in trying to fix their targets far ahead into the future, because they will want to change them.

Another very smart thing, from only eight years ago: “strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you’re working on a project that’s striving to deliver something of relatively minor value.”—DeMarco, 2009

Very true. And since then that same progression in the state of the art has so reduced the cost of building working, tested software that the balance has moved further in the direction of not doing projects where the exact cost matters a lot. #NoEstimates is this pair of ideas carried to their natural conclusion.

It’s still not unusual to see IT departments tie themselves in knots over whether a project whose goal is to protect billions in revenue should have a budget of one million or one point five million. And to spend hundreds of thousands on trying to figure that out. The #NoEstimates message is that they don’t need to put themselves into that position.

It’s not for free, of course, that state of the art in development has to be present. But if it is, on we go.

In the video, Steve tries some rhetorical jiu-jitsu and claims that if we follow the Agile Manifesto value judgement and prefer to collaborate with our customers rather than negotiate contracts with them, then when they ask for estimates we should, in a collaborative mood, produce estimates. That’s a bit like suggesting that if an alcoholic asks me for a drink, I should, in a cooperative and generous spirit, buy them one.

I’d like to suggest a root cause of the disagreement between Ron and Steve. I’m going to speculate about the sorts of people and projects that Ron works with and that Steve works with. Personally, I’ve worked in start-ups and in gigantic consultancies, and I’ve done projects for blue-chip multinationals selling a service and for one-man-band product shops. My speculation is that in Steve’s world, IT is always and only a cost centre. It’s viewed by the rest of the business as a dark hole into which, for unclear reasons, a gigantic pile of money disappears every year. The organisation is of course very well motivated both to understand how big that hole is and to try to make it smaller. Hence: estimation! In addition, Steve likes to present estimation as this coolly rational process of producing the best information we can from the meagre scraps of fact available, suitably and responsibly hedged with caveats and presented in a well-disciplined body of statistical inferences. And then the ugly political horse-trading of the corporation gets going. I think that believing this is a reasonable defence mechanism for a smart and thoughtful person caught in the essentially medieval form of life that exists inside large corporations (and, whisper it, all the more so in large American corporations). But it isn’t realistic. In those environments, estimation is political, always.

My speculation is that Ron, and many #NoEstimates advocates, work more in a world where the effort (and treasure) that goes into building some software is very clearly, and very closely in time and space, connected with the creation of value. And that this understanding of IT work as part of the value creation of the organisation and the quickness of the return leads to estimation being really not such a big deal. An overhead of limited utility. So why do that?

Your organization, I’ll bet, falls somewhere between these two models, so you probably are going to have to do #AsMuchEstimationAsYouNeedWhenYouNeedItAndThatsLessThanYouThinkAndNotSoOftenAsAllThatReallyButJustGetOverIt

Written by keithb

August 2, 2015 at 9:10 am

Posted in Uncategorized

TDD as if You Meant It at London Software Craftsmanship

leave a comment »

Please add your comments about the session to this post.

Written by keithb

August 29, 2012 at 4:10 pm

Posted in TDD

TDD as if You Meant It at XP Day London 2011

leave a comment »

Attendees, please add your thoughts, and links to your code repo if you wish, as comments to this post.

Thanks.

Written by keithb

November 21, 2011 at 11:56 am

Posted in conference, Raw results, TDD

Hiring…

with 2 comments

If you like the kind of work you see here, come join me in London. We’re hiring. Apply via LinkedIn or drop me a line.

Principal consultants take responsibility for particularly challenging solutions and in demanding organisational environments. They closely interact with senior project managers, customer representatives at all levels including senior management, and guide project teams. Together with the responsible project managers, they lead technical and strategic initiatives to success, ranging from critical consulting mandates to complex delivery projects. Together with business development and business unit managers, they actively expand Zuhlke’s business and develop new opportunities. This can involve taking the leading technical role in large bids.

Lead consultants take decisions and provide advice regarding complex technical systems. They closely liaise with the software development team, the project manager, and customer representatives, often with a technical background. They ensure that sound technical decisions are made and subsequently realised in state-of-the-art solutions by the project team. They can take the leading role in technical consulting assignments within their specialisation area.

The role is based in London and the majority of the work takes place in UK but on occasion training and consulting engagements may be delivered anywhere in the world.

The competitive package includes 20 days of professional development time per year.

Written by keithb

October 3, 2011 at 5:13 pm

Posted in Uncategorized

TDD as if You Meant it at Agile Cambridge 2011

with 4 comments

Attendees of the session at Agile Cambridge, please add links to your code in comments to this post.

Written by keithb

September 29, 2011 at 12:04 pm

Posted in Uncategorized

Iterative, Incremental Kanban

with 8 comments

There’s something about Kanban which worries me. Not kanban, which is a venerable technique used to great effect by manufacturing and service organizations the world over for decades, but “Kanban” as applied to software development. More specifically, the examples of Kanban boards that I see worry me.

What you do now

David Anderson gives guidance on applying Kanban to software development something like this:

  • Start with what you do now
  • Agree to pursue incremental, evolutionary change
  • Respect the current process, roles, responsibilities & titles

Which is fine. What worries me is that the published examples of Kanban that I see so often seem to come from a place where what they do now is a linear, phased, one-shot process and the current process, roles, responsibilities and titles are those of separate teams organized by technical specialism with handovers between them. Of course, there are lots of development organizations which do work that way.

But there are lots that do not. I’ve spent the last ten years and more as one of the large group of people promoting the idea that a linear, phased process with teams organized by technical specialism is a wasteful, high-risk, slow and error-prone way to develop software. And that a better alternative is an iterative, incremental, evolutionary process with a single cross–functional team. And that this has been known for literally as long as I’ve been alive, so it shouldn’t be controversial (although it still too often is).

Best case: Kanban will be picked up by those who are still doing linear, phased etc. etc. processes and will help them move away from that. A good thing. Worst case: the plethora of Kanban examples showing phases and technology teams undoes a lot of hard work by a lot of people by making linear, phased processes respectable again. After all, Kanban is the hot new thing! (And so clearly better).

Kanban boards

Take a look at the example boards that teams using Kanban have kindly published (note: I wish every one of those teams great success and am grateful that they have chosen to publish their results).  The overwhelming theme is columns read from left to right with a sequence of names like “Analysis”, “Design”, “Review”, “Code”, “Test”, “Deploy”. Do you see a problem with this?

Taken as a diagnostic instrument there is discussion of ideas like this: if lots of items queue up in and before the “Test” column then the testers are overloaded and the developers should rally round to help with testing. Do you see a problem with this?

There is a way of talking about /[Kk]anban/ which strongly invites the inference that each work item must pass through every column on the board exactly once, in order. This discussion of kanban boards as value stream maps, while very interesting in its own right, makes very explicit that in the view of its author the reason a work item might return from a later column to an earlier one is because it is “defective” or has been “rejected”. How is one to understand iterative development, in which we plan to re-work perfectly acceptable, high-quality work items, with such language?

Not Manufacturing

Iterative development plans to rework items. Not because they are of low quality, not because they are defective, not because they are unacceptable, but because we choose to limit the scope of them earlier so that we can get to learn something about them sooner. This is a product development approach. Kanban is mainly a manufacturing technique. Software development resembles manufacturing to a degree of approximately 0.0 so it’s a bit of a puzzle why this manufacturing technique has become quite so popular with software developers. Added to which the software industry has a catastrophically bad track record at adopting management ideas from manufacturing in an appropriate way. We in IT are perennially confused about manufacturing, product development and engineering, three related but very different kinds of activity.

An Example

So, what if “what you do now” is iterative and incremental? What if you don’t have named specialist teams? And yet you would like to obtain some of the unarguable benefits of visualising your work and limiting work in progress. What would your kanban board look like?

Here’s one possibility (click for full-size):

Iterative, Incremental kanban board

Some colleagues were working on a problem and their environment led to some very hard WIP limits: only two development workstations, only two test environments, only one deployment channel. But they are a cross-functional team, and they want to develop features iteratively. So, the column on the far left is a queue for new features and the column on the right holds things that are done (done recently; ancient history is in the space below). The circle in between is divided into three sectors, one for each of the three things that have WIP limits. Each sector has an inner and an outer part, to allow for two different kinds of activity: feature and integration. For example, both test environments might be in use, but one for integration testing of several features and one for iterative testing of one particular feature.

The sectors of the circle are unordered. Any story can be placed in any sector, and moved to any other sector (or back again), at any time, any number of times, always respecting the WIP limits.
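To make the mechanics concrete, here is a minimal sketch in Python of such a board. All the names here are mine, not from the board pictured above: three unordered sectors with hard WIP limits, a feature queue on the left, and a done column on the right. A move into a full sector is simply refused until capacity frees up.

```python
class KanbanBoard:
    """Sketch of an iterative, incremental kanban board: unordered
    sectors with hard WIP limits, rather than left-to-right phases."""

    def __init__(self, wip_limits):
        # e.g. {"development": 2, "test": 2, "deployment": 1}
        self.queue = []
        self.done = []
        self.sectors = {name: [] for name in wip_limits}
        self.limits = dict(wip_limits)

    def add_feature(self, feature):
        self.queue.append(feature)

    def move(self, feature, sector):
        """Move a card into a sector, from the queue or any other sector.
        Sectors are unordered: any move, in any direction, any number of
        times, is allowed -- provided the WIP limit is respected."""
        if len(self.sectors[sector]) >= self.limits[sector]:
            return False  # WIP limit reached; the move must wait
        self._take(feature)
        self.sectors[sector].append(feature)
        return True

    def finish(self, feature):
        self._take(feature)
        self.done.append(feature)

    def _take(self, feature):
        # Remove the card from wherever it currently sits.
        for cards in [self.queue, self.done, *self.sectors.values()]:
            if feature in cards:
                cards.remove(feature)
                return
        raise ValueError(f"{feature!r} is not on the board")
```

Note that, unlike the left-to-right boards discussed above, `move()` imposes no ordering at all: planned rework is just another move, not a “rejection”.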

Feedback

Why can’t I find more examples like this?

I expect that some Kanban experts are going to see this and comment that they don’t mean for groups using Kanban to adopt linear, phased processes and specialized teams. And I’m sure that many of them don’t. But that’s what the examples pretty much universally show—and we know that people have a tendency to treat examples (intended to be illustrative) as if they were normative.

I’d really like to hear more stories of whole–team iterative, incremental kanban. Please point some out.

Written by keithb

September 16, 2011 at 11:59 am

Posted in kanban


Distribution of Complexity in Hudson

with one comment

Suppose we were to take methods one by one, at random and without replacement, from the source code of Hudson 2.1.0. How would we expect the Cyclomatic Complexity of those methods to be distributed?

Here you will find some automation to discover the raw numbers, and here is a Mathematica Computable Document (get the free reader here) showing the analysis. If you have been playing along so far you might expect the distribution of complexity to follow a power law.

Result:

This evidence suggests that the Cyclomatic Complexity per method in this version of Hudson is not distributed according to a discrete power–law distribution (the hypothesis that it is, is rejected at the 5% level).

Probability of Complexity of Methods in Hudson

This chart shows the empirical probability of a given complexity in blue and that from the maximum–likelihood fitted power–law distribution in red. Solid lines show where the fitted distribution underestimates the probability of methods with a certain complexity occurring, dashed lines where it overestimates. As you can see, the fit is not great, especially in the tail.

Note that both scales are logarithmic.

Other long-tailed distributions (e.g. log-normal) can be fitted onto this data, but the hypothesis that they represent the data is rejected at the 5% level.
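For readers without Mathematica, the kind of fit described above can be sketched in plain Python. This is my own illustration, not the notebook’s code: a grid-search maximum-likelihood fit of a discrete power law p(k) ∝ k^−α (normalised by a truncated zeta sum), plus a Kolmogorov–Smirnov-style distance between the empirical and fitted CDFs. A full goodness-of-fit test would also bootstrap a p-value from many synthetic samples, which this sketch omits.

```python
import math
from collections import Counter

def fit_discrete_power_law(samples, alphas=None, kmax=10_000):
    """Grid-search maximum-likelihood estimate of alpha for
    p(k) = k^-alpha / Z, with Z approximated by a truncated zeta sum."""
    if alphas is None:
        alphas = [1.5 + 0.01 * i for i in range(200)]  # scan alpha in [1.5, 3.49]
    best_alpha, best_ll = None, float("-inf")
    for alpha in alphas:
        z = sum(k ** -alpha for k in range(1, kmax + 1))  # truncated zeta(alpha)
        log_z = math.log(z)
        ll = sum(-alpha * math.log(k) - log_z for k in samples)  # log-likelihood
        if ll > best_ll:
            best_alpha, best_ll = alpha, ll
    return best_alpha

def ks_statistic(samples, alpha, kmax=10_000):
    """Largest gap between the empirical CDF and the fitted
    power-law CDF (smaller means a better fit)."""
    z = sum(k ** -alpha for k in range(1, kmax + 1))
    counts = Counter(samples)
    n = len(samples)
    emp_cdf = fit_cdf = worst = 0.0
    for k in range(1, max(samples) + 1):
        emp_cdf += counts.get(k, 0) / n
        fit_cdf += k ** -alpha / z
        worst = max(worst, abs(emp_cdf - fit_cdf))
    return worst
```

To turn the KS distance into the 5%-level decision reported above, one would resample from the fitted distribution many times and reject if the observed distance exceeds the 95th percentile of the resampled distances.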

Written by keithb

August 31, 2011 at 8:23 pm
