cumulativehypotheses

mostly professional blather


Why I have such a strong negative reaction to #NoEstimates


TL;DR

You may be pleased to learn that this is probably the penultimate thing I have to say here about #NoEstimates.

Anyway, it’s for these reasons…

It’s conceptually incoherent

From what I can gather from following twitter discussions, and reading blogs, and articles, and buying and reading the book, in #NoEstimates land, supposing that someone were to come and ask you “how long will it take to develop XYZ piece of software?” then any one of the below could be an acceptable #NoEstimates answer, depending on which advocate’s line of reasoning you follow:

  1. Estimation is morally bankrupt and I shall have no part in continuing the evil of it. You are a monster! Get out of my office! But fund the endeavour anyway. Yes, I do mean an open-ended commitment to spend however much it turns out to take.
  2. Estimation is impossible even in principle so there is no way to answer that question, however roughly. But do please still fund the endeavour. No, I can’t indicate a reasonable budget.
  3. Estimation is impossible even in principle so there is no way to answer that question and even if there were I still wouldn’t because you can’t be trusted with such information. No, I can’t indicate a reasonable budget. It’ll be ok. Trust me. No, I don’t trust you; but trust me.
  4. Estimation is so difficult and the results so vague that you’re better off just starting the work and correcting as you go. It’ll be ok. Trust me. No, I still don’t trust you.
  5. Estimation is so difficult and the results so vague that you’re better off choosing to invest a small, but not too small, amount of money to do something and learn from it and then decide if you’ve come to trust me and want to do some more (or not, which would be disappointing but OK). For your kind of thing, I suggest we start with $BUDGET_FOR_A_BIT, and expect to end up spending something in $TOP_DOWN_SYNTOPIC_ESTIMATED_SPEND_AS_A_RANGE.
  6. Estimation is difficult to do with any useful level of confidence and the results of it hard to use effectively. What would you do with an estimate if I did provide it? How could we meet that need some other way?
  7. Here is a very rough estimate, heavily encrusted with caveats and hedges, of the effort required to deliver something of a size such as experience suggests that what you asked for will end up being. No, I will not convert that into a delivery date for you. Let me explain a better way to plan.
  8. OK, OK, since you insist, here is a grudgingly made estimate of a delivery date in which I have little faith, I hope it makes you happy. Please don’t use it against me.

For the record: my preferred answer is some sort of combination of 5 and 6, with a hint of 4, and 7 as a backup. And I have turned down potentially lucrative work on the basis of those kinds of answer being rejected.

That’s a huge range of options, many subsets of which are fundamentally, conceptually, incompatible with other subsets. Which means that #NoEstimates doesn’t really seem to me as if it’s much help in deciding what to do.

Except…one good thing about #NE is that it does rule out this answer: “let me fire up MS Project and do a WBS and figure out the critical path and…” which is madness, for software development, but you still see people trying to do it.

Also for the record: In my opinion far too many teams spend far too much time estimating far too many things in far too much detail, and in ways that aren’t sufficiently smart or useful.

Even in an “Agile” setting you see this, and for that I blame Scrum, which has had from the beginning this weird obsession with estimating and re-estimating, and re-re-estimating again and again and again. I don’t do that. And I certainly don’t do, and do not recommend, task-level estimates (or even having tasks smaller than a story).

I can’t understand what anyone’s saying

It seems as if the “no” in #NoEstimates doesn’t mean no. Or maybe it does. Or it might mean: prefer not to but do if you have to. Or it might mean: I’d prefer that you didn’t, but if it’s working for you carry on.

And the “estimate” in #NoEstimates doesn’t mean estimate. It means: an involuntary commitment to an unsubstantiated wild-arsed guess that someone in authority later punishes you for not meeting§. Or it might mean estimate after all, but only if the estimate is based on no data. If there’s data, then that’s a “forecast”, which is both a kind of estimate and not an estimate.

“Control” seems to be a magic word for some #NE people. It’s said to them in the morally neutral, cybernetics sense but they hear it in some Theory X sense, as if it always has the prefix “Command and ”. This creates the impression that they have no interest in being in control of development, budgets, etc. Which might or might not be true. Who can say?

So not only are the #NoEstimates concepts all over the place, they’re discussed in something close to a private vocabulary—maybe more than one private vocabulary. This is not an effective approach to an engineering topic.

Nevertheless: it’s strong medicine and it’s being handled sloppily

…which, if you’ve ever taken strong medicine you’ll know is a poor policy.

In the contexts for software development that I’m familiar with†, the idea of making estimates as an input to a budgetary process at the level of months, quarters, years and maybe (hopefully) beyond is really deeply baked in. Maybe this is part of why Scrum has managed to find so much better a fit in corporate land than, say, XP ever did, because a Scrum team can seem to still play that game.

For a development team to turn around and say even that estimates are too difficult to make useful, so let’s do something else instead, is very challenging to the conventions of the corporation. Conventions which I believe should be challenged, in principle. To turn around and say that estimation is (and always was) impossibly difficult and that management were doing bad things with the results of it is going to deeply challenge and upset many people in an unhealthy way. That’s not the way to effectively change organisational habits. We saw this before with Scrum.

Now, I happen to be of the opinion that estimation is hard, but can be done well, and you can learn to do it better, and the results of it are often misapplied. And I’ve come to the opinion that the most effective and/because least upsetting route to dealing with that is to re-educate managers to do their work in a better way such that they stop asking for estimates. I find that coaching managers to ask more helpful questions beats coaching programmers to give fewer unhelpful answers.

For the record, too: I agree that too many enterprises use developers’ estimates in a way that is invalid in theory, unhelpful in practice, and questionable in its effect on the long term health of the business (and the developers). But, also for the record, I do not agree that this is an inevitable consequence of some intrinsic problem with estimation.

But in the #NE materials that I’ve seen there’s not really much recognition of these organizational aspects of the proposed change. It seems mainly to be about convincing developers that they shouldn’t be producing estimates and explaining how misguided (at best) or evil (at worst) management are to ask for estimates in the first place.

We’re just not smart about this kind of thing

…and the treatment of #NoEstimates that I’ve seen fosters exactly the kind of not-smartness that can get us into a real mess.

The industry, and corporations within it, and teams within corporations have a tendency to lurch from one preposterous extreme to another, and to wildly mis-interpret very well intentioned recommendations. This is a particular problem when the recommendation is to do less of something that’s perceived as irksome.

eXtreme Programming offers a good example. When considering a proposed way of working I often find it useful to consider to what it has arisen as a reaction. On one hand, #NoEstimates seems to be partly a reaction against the very degenerate Scrum-and-Jira style of “Agile” development that many corporations are far too comfortable with. And on another hand, it seems to be a reaction against some really terrible management behaviour* that’s connected with estimation.

eXtreme Programming can be usefully read as a reaction against the insane conditions** in large corporate software shops in the mid 1990s. People who really wanted to be programmers rushed to XP with joyous relief. As it happens I took some convincing, because I kind-of wanted the models thing—and not just boxes-and-arrows, I love, love, love me some ∃s and ∀s—to work. But it doesn’t. So, you know, I’m able to recognise that my ideas need to change, and I’m prepared to do the work.

Anyway, in part of the rush to XP we found that people abandoned the writing of design documents—these days they’d be condemned as muda, but that Lean/kanban vocabulary wasn’t so widespread then—but unfortunately the design work that we were meant to do instead in other ways didn’t really get picked up. Similarly, BRDs and Use Cases and all that stuff went out of the window but good requirements practices tended to vanish too. And the results were not pretty.

And so, over a long and painful period we had to invent things like Software Craftsmanship to try to re-inject design thinking, and we—that is, Jeff Patton—had to introduce things like Story Mapping to get back to a way to talk about big, complex scope.

I expect the worst

I invite you to get back to me in, oh, say, 5 years and check on this…forecast: Either #NoEstimates will have burned out and no-one will really remember what it was, or…

  1. There will be a[t least one] subscription-taking membership organization devoted to #NoEstimates
    1. Leading members of that organization will make their living as #NoEstimates consultants and trainers, far removed from any actual development
    2. This organization will operate a multi-level marketing scheme in which the organization certifies, at great expense, senior trainers who train, at substantial expense, certified trainers who train, at not outrageous expense, certified #NoEstimates “practitioners”
  2. Adoption of #NoEstimates will turn out to lack some benefit of estimation that #NoEstimates advocates won’t see and can’t imagine, and some other practice will have to have been invented to get it back.

And then the circle will be complete. I don’t think that we’re collectively smart enough to avoid this dreary, self-defeating outcome.

Update—as if by magic, this twitter exchange appears within 12 hours of my post (NB I mean no criticism of either party and I thank them for holding this conversation in public):

Noel asks: What is the first thing one should consider when contemplating a move to NoEstimates?
And Woody replies that it isn’t something you “move to”. It is about exploring our dependence on and use of estimates, and seeking better.

I expect that many conversations like this are taking place. And that’s how the subtle but valuable message fades away and the industry’s hunger to be told the right thing to do (and then, worse, do it) takes over.


§ A terrible idea which I am happy to join in condemning

† Which is largely in corporations, by which I mean for-profit enterprises which intend to be consistently profitable over a long period.

Although I have worked at, as it were, newly-founded companies with few staff, little money and one big idea, working out of adapted loft space, I don’t characterise those as “startups”. That’s because the business model was not to throw fairly random things at the wall in the hope that one of them stuck long enough to pay the bills until the next funding round arrived; repeat until the exit strategy kicks in or you go broke—which is how I understand “startups” to work. That’s a world that I don’t understand very well because I haven’t done it.

So: some of the corporations I’ve worked for have been product companies, and some sold a service, and some were small and local and some were large and global, and plenty of variations on that. That’s the world I understand, and what I write here grows out of that understanding.

* Years ago I was asked to brief the management functions of a certain corporate entity about this new “Agile” thing that the technology division wanted to do. This was interesting.

The F&A people wanted to know when, during an iterative, incremental development process they were supposed to create the intangible asset on the balance sheet representing the value-add on the CAPEX on building the system, so that they could start amortising it.

The HR people wanted to know how, with a cross-functional, self-organizing team in place, they could do performance management and, and I quote, figure out “who to pay a bonus to and who to fire”.

I’ve recently heard of companies that link the “accuracy” (whatever that might mean…) of task estimates to bonus pay. And I agree with J.B. that it’s fucking disgusting. What I very much doubt is that fixing that state of affairs is primarily a bottom-up exercise in not estimating.

** Around that time I held—mercifully briefly—a job as a “Software Engineer” in which the ideal we were meant to strive for was that no code was created other than by forward-generation from a model in Rational Rose.


Written by keithb

December 12, 2015 at 7:32 pm


Is Testing “Waste”?


That is, in the technical sense used in Lean manufacturing, whose first two principles include:

  1. Specify value from the standpoint of the end customer by product family.
  2. Identify all the steps in the value stream for each product family, eliminating whenever possible those steps that do not create value.

The “steps that do not create value” are waste. If our product is, or contains a lot of, software, is the action of testing that software waste, that is, not creating value from the standpoint of the end customer?

At the time of writing I am choosing the carpet tiles for our new office. On the back of the sample book is a list of 11 test results for the carpet relating to various ISO, EN and BS standards, e.g. the EN 986* dimensional stability of these carpet tiles is < 0.2%—good to know! There are also the marks of Cradle to Cradle certification, GUT registration, BREEAM registration, a few other exotica and a CE mark. Why would the manufacturer go to all this trouble? Partly, because of regulation: an office fitter would baulk at using carpet that did not meet certain mandatory standards. And partly because customers like to see a certain amount of testing.

Take a look around your home or office: I’ll bet you have a lot of small electrical items of various kinds. Low-voltage power supplies, in particular. Take a look at them. You will find on some the mark of Underwriters Laboratories, which indicates that the manufacturer has voluntarily had the product tested by UL for safety, and maybe for other things. If you’re in the habit of taking things apart, or building things, you might also be familiar with the UL’s “recognised component” mark for parts of products. On British made goods you might see the venerable British Standards Institution “Kite Mark”, or maybe on Canadian gear the CSA mark, on German kit one of the TÜV marks, and so on. These certifications are for the most part voluntary. Manufacturers will not be sanctioned for not obtaining these marks for their products, nor will—other than in some quite specialised cases†—anyone be sanctioned for buying a product which does not bear these marks.

Sometimes a manufacturer will obtain many marks for a product, and sometimes fewer, and sometimes none. I invite you to do a little survey of the electrical items in your office or home: how many marks does each one have? Do you notice a pattern?

I’ll bet that the more high-end a device—in the case of power supplies, the more high-end what they drive—the more marks the device will bear, and the more prestigious those marks will be. Cheaper gear will have fewer, less prestigious marks—ones that make you say “uh?!”†† and the very cheapest will have none.

If testing is waste, why do manufacturers do this?

How does your answer translate to software development?


* BS EN 986:1995—Textile floor coverings. Tiles. Determination of dimensional changes due to the effects of varied water and heat conditions and distortion out of plane.

† tanks, missiles, land-mines, leg-irons, electric cattle-prods, that sort of thing.

†† There are persistent rumours that some Chinese manufacturers of questionable business ethics have concocted a mark of their own which looks from a distance like the CE mark.

Written by keithb

November 4, 2015 at 8:41 pm


Mocks


Well, this feels like a conversation from a long time ago. This presentation, which got tweeted about, asserts that

Mocks kill TDD. [sic]

which seems bold. And also that

TDD = Design Methodology

which seems wrong. And also that

Test-first encourages you to design code well enough to test…and no further

which seems to have profoundly misunderstood TDD.

TDD

Just so we can all agree what we’re talking about, I think that TDD works like this:
repeat until done:

  • write a little test, reflecting the next thing that your code needs to do, but doesn’t yet
  • see it fail
  • make all tests—including the new one—pass, as quickly and easily as possible
  • refactor your working code to produce an improved design

I don’t see that as being a design methodology. It’s a small-scale process for making rapid progress towards done while knowing that you’ve not broken anything that was working, and which contains a publicly stated commitment to creating and maintaining a good design. There’s nothing there about what makes a good design—although TDD typically comes with guidance about well designed code being simple, well designed code lacking duplication and—often overlooked, this—well designed code being easy to change. I also often suggest that if the next test turns out to be hard to write, you should probably do some more refactoring.

Note that in TDD we don’t—or shouldn’t—test a design, that is, we shouldn’t come up with a design and then test for it. Instead we discover a design through writing tests. TDD doesn’t design for you, but it does give you a set of behaviours within which to do design. And I’m pretty sure that when followed strictly, TDD leads to designs that have measurably different properties than designs arrived at other ways. Which is why this blog existed in the first place (yes, I have been a bit lax about that stuff recently). UPDATE: a commentator on lobste.rs (no, me neither) quotes me saying that “TDD doesn’t design for you, but it does give you a set of behaviours within which to do design.” and asks: how is TDD not a design methodology, then?! And I answer: because it doesn’t provide a vocabulary of terms with which to talk about design, it doesn’t provide a goal for design, it doesn’t provide any criteria by which a design could be assessed, it doesn’t provide any guidance for doing design beyond this—do some, do it little bit at a time, do it by improving the design of already working code. If that looks like a methodology to you, then OK.

But Ken does have a substantive objection to code that he’s seen written with mocks. Code which has tests like this:

[Screenshot: A Terrible Test which Happens to use Mocks]

and I certainly agree that this is a terrible test. There are far too many mocks in it, and their expectations are far too complex and far too specific. Worst of all, the expectations refer to other mocks. This is terrible stuff. You can’t tell what the hell’s going on, and this test will be extraordinarily brittle because it reaches out far too far into the system. It probably has a net negative value to the programmers who wrote it. That’s bad. Don’t do that.
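For the avoidance of doubt, the shape of the problem is something like this hypothetical sketch. The class names are invented and Mockito is assumed, rather than whatever framework the original screenshot used.

    // A hypothetical reconstruction of the anti-pattern, not the code in the screenshot.
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    interface CustomerRepository { Customer findById(String id); }
    interface Customer { Account account(); }
    interface Account { boolean isInGoodStanding(); void debit(int amount); }
    interface PricingService { int priceFor(String sku, Customer customer); }
    interface AuditLog { void record(String customerId, String sku, int price); }

    class OrderProcessor {
        private final CustomerRepository customers;
        private final PricingService pricing;
        private final AuditLog audit;

        OrderProcessor(CustomerRepository customers, PricingService pricing, AuditLog audit) {
            this.customers = customers;
            this.pricing = pricing;
            this.audit = audit;
        }

        void process(String customerId, String sku) {
            Customer customer = customers.findById(customerId);
            if (!customer.account().isInGoodStanding()) return;
            int price = pricing.priceFor(sku, customer);
            customer.account().debit(price);
            audit.record(customerId, sku, price);
        }
    }

    public class OrderProcessorTest {
        @Test
        public void processesAnOrder() {
            // Far too many mocks...
            CustomerRepository customers = mock(CustomerRepository.class);
            Customer customer = mock(Customer.class);
            Account account = mock(Account.class);
            PricingService pricing = mock(PricingService.class);
            AuditLog audit = mock(AuditLog.class);

            // ...with expectations that are far too specific and that refer to other mocks,
            // so the test reaches far out into the system and breaks whenever anything moves.
            when(customers.findById("cust-42")).thenReturn(customer);
            when(customer.account()).thenReturn(account);
            when(account.isInGoodStanding()).thenReturn(true);
            when(pricing.priceFor("SKU-1", customer)).thenReturn(99);

            new OrderProcessor(customers, pricing, audit).process("cust-42", "SKU-1");

            verify(audit).record("cust-42", "SKU-1", 99);
            verify(account).debit(99);
        }
    }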

Is this the fault of mocks? Not really. The code under test here wouldn’t be much different, I’ll bet, if it hadn’t been TDD’d—if this code even was TDD’d; I have my doubts, although people do do this sort of thing, I know. This confusing, brittle, unhelpful test has been written with mocks, but not because of mocks. One could speculate that it was written by someone who’d got far, far too carried away with the things that mock frameworks can do, and failed to apply good taste, common sense and any kind of design sensibility to what they were doing. Is that the fault of mocks? Not really. Show me a tool that can’t be abused and I’ll show you a tool that isn’t worth having.

Other Styles of Programming

Ken, of course, has an agenda, which is really to promote a functional style of programming in which mock objects are not much help in writing the tests. I think he’s right about that and it should be no surprise as mocks are about writing tests that have something to say about what method invocations happen in what order, and as you move towards a functional style that becomes less and less of a concern. So maybe Ken’s issue with mocks is that they don’t stop you from writing non-functional code—to which I say: that doesn’t mean that you have to.
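The contrast is easy to see in the small. Here is an invented illustration, nothing taken from Ken’s material: a test for a pure function asserts only on the value it returns, so there are no interactions for a mock to check.

    // A small illustration, invented for this post: a pure function needs no mocks,
    // because the test has nothing to say about which calls happen, or in what order.
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;
    import java.util.Arrays;
    import java.util.List;

    public class TotalTest {
        // A pure function: its result depends only on its argument.
        static int total(List<Integer> prices) {
            return prices.stream().mapToInt(Integer::intValue).sum();
        }

        @Test
        public void totalsPrices() {
            assertEquals(24, total(Arrays.asList(10, 5, 9)));
        }
    }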

If you can move to functional programming (spoiler: not everyone can) and if your problem is one that is best solved though a functional solution (spoiler: not all of them are), then off you go, and mocks will not be a big part of your world and fair enough and more power to you. But if not…

Now, I tweeted to this effect and that got Ron wondering about that kind of variation, and why it might be that Smalltalk programmers don’t use mocks when doing TDD. Ron kind-of conflates what he calls the “Detroit School” of TDD and “doing TDD in Smalltalk”, which is kind-of fair enough as Kent and he and the others developed their thinking about TDD in Smalltalk and that’s the style of TDD that was first widely discussed on the wiki and spread from there.

Ron says that he does use “test doubles” for:

“slow” operations, and operations across an interface to software that I don’t have control of

and of course mocks are very handy in those cases. But that’s not what they’re for. Ron says:

Perhaps our system relies on a slow operation, such as a database access […] When we TDD such a thing, we will often build a little object that pretends to be a database […] that responds instantly without actually exercising any of the real mechanism. This is dead center in the Mock Object territory,

Well, no. Again, you can use mocks for such tests, but you’ll only get much value from that if your test cares about, say, what the query to the database is (rather than merely using the result). And while it will make your tests go fast, that’s not the real motivation for the mock, handy as it may be.
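To illustrate the distinction, here is a sketch with invented names; Mockito is assumed for the second test, and none of this comes from Ron’s examples. The first test uses a hand-rolled stand-in purely for speed and isolation; only the second says anything about the interaction itself.

    // A sketch of the difference between a fast stand-in and a mock proper.
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.*;
    import org.junit.Test;

    interface CustomerDirectory {
        String emailFor(String customerId);
    }

    class Greeter {
        private final CustomerDirectory directory;
        Greeter(CustomerDirectory directory) { this.directory = directory; }
        String greetingFor(String customerId) {
            return "Hello " + directory.emailFor(customerId);
        }
    }

    public class GreeterTest {
        // Ron's case: a hand-rolled stand-in that answers instantly so the test runs fast.
        // The test only uses the result; it says nothing about how the directory was asked.
        @Test
        public void greetsUsingWhateverTheDirectorySays() {
            CustomerDirectory fakeDirectory = id -> "alice@example.com";
            assertEquals("Hello alice@example.com", new Greeter(fakeDirectory).greetingFor("42"));
        }

        // The mock-object case: the test cares about the interaction itself,
        // that is, which query the Greeter sends to its collaborator.
        @Test
        public void asksTheDirectoryForTheRightCustomer() {
            CustomerDirectory directory = mock(CustomerDirectory.class);
            when(directory.emailFor("42")).thenReturn("alice@example.com");

            new Greeter(directory).greetingFor("42");

            verify(directory).emailFor("42");
        }
    }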

A Brief History Lesson

Mocks were invented to solve a very specific problem: how to test Java objects which do not expose any state. Really not any. No public fields, no public getters. It was kind-of a whim of a CTO. And the solution was to pass in a collaborating object which would test the object which was the target of the test “from the inside” by expecting to be called with certain values (in a certain order, blah blah blah) by the object under test and failing the test otherwise.

A paper from 2001 by the originators of mocks describes the characteristics of a good mock very well:

A Mock Object is a substitute implementation to emulate or instrument other domain code. It should be simpler than the real code, not duplicate its implementation, and allow you to set up private state to aid in testing. The emphasis in mock implementations is on absolute simplicity, rather than completeness. […] We have found that a warning sign of a Mock Object becoming too complex is that it starts calling other Mock Objects – which might mean that the unit test is not sufficiently local. [emphasis added]

The object under test in a mock object test is surrounded by a little cloud of collaborating mocks which are simple, incomplete and local. UPDATE: Nat Pryce reminds me that process calculi, such as CSP, had an influence on the JMock approach to mocking.
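A sketch of what that original, hand-rolled style looked like, with invented names; this is not code from the paper. The object under test exposes no state at all, so the mock collaborator does the checking “from the inside” by expecting to be called with a particular value.

    // A hand-rolled mock in the original sense: the object under test has no public
    // fields and no getters, so its collaborator is the only place to observe it from.
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;
    import org.junit.Test;

    interface AuditTrail {
        void recorded(String event);
    }

    // No public fields, no getters: a Turnstile can only be observed through
    // what it tells its collaborator.
    class Turnstile {
        private final AuditTrail audit;
        private int entries = 0;
        Turnstile(AuditTrail audit) { this.audit = audit; }
        void coinInserted() {
            entries++;
            audit.recorded("entry " + entries);
        }
    }

    public class TurnstileTest {
        // A hand-rolled mock: simple, incomplete, local. It fails the test itself
        // if it is called with anything other than the expected value.
        static class MockAuditTrail implements AuditTrail {
            private final String expected;
            private boolean called = false;
            MockAuditTrail(String expected) { this.expected = expected; }
            public void recorded(String event) {
                assertEquals(expected, event);
                called = true;
            }
            void verify() { assertTrue("recorded() was never called", called); }
        }

        @Test
        public void reportsEachEntryToTheAuditTrail() {
            MockAuditTrail audit = new MockAuditTrail("entry 1");
            new Turnstile(audit).coinInserted();
            audit.verify();
        }
    }

Simple, incomplete and local, in exactly the paper’s sense: the mock knows about one expected value and nothing else.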

Ron talks about Detroit/Smalltalk TDD-ers developing their test doubles by this means:

just code such a thing up […] Generally we’d build them up because we work very incrementally – I think more incrementally than London Schoolers often do – so it is natural for our mock objects to come into being gradually. [emphasis added]

I don’t know where he gets that impression about the “London School”. In my experience, in London and elsewhere, mocks made with frameworks also come into being gradually, one expectation or so at a time. How else? UPDATE: Rachel Davies reminds me that the originators of mocking had a background in Smalltalk programming anyway.

Ron speculates that mocks are likely to be more popular amongst programmers who work with libraries that they don’t control, and I expect so. Smalltalkers don’t do that much, almost everyone else does, lots. He speculates that mocks are likely to be more popular amongst programmers who work with distributed systems of various kinds, and I expect so. Smalltalkers don’t do that much, almost everyone else does, lots. Now, if we could all write our software in Smalltalk the world would undeniably be a better place, but…

In fact, I suspect that Smalltalkers write a lot of mocks, but that these tend to develop quite naturally into the real objects. The Smalltalk environment and tools afford that well. Almost everyone else’s environment and tooling fights against that every step of the way. And Smalltalkers won’t generally use a mocking framework, although there are some super cute ones, because they don’t have to overcome the stumbling blocks that languages like Java put in the way of anyone who actually wants to get anything done.

Tools

Anyway, there’s this thing about tools. Tools have affordances, and good tools strongly afford using them the right way and weakly—or not at all—afford using them the wrong way. And there are very special purpose tools, and there are tools that are very flexible. I read somewhere that the screwdriver is the most abused tool in the toolbox, because a steel rod that’s almost sharp at one end and has a handle at the other is just so damn useful. But that doesn’t mean that it’s a good idea to use one as a chisel. I grew up on a farm and I remember an old Ferguson tractor which was started by using a (very large) screwdriver to short between the starter motor solenoid and the engine block. Also not a good idea.

That we can do these things with them does not make screwdrivers bad. And the screwdriver does not encourage us to be idiots—it just doesn’t stop us. And so it is with mocks—they are enormously powerful and useful and flexible and will not stop us from being stupid. In particular, they will not stop us from doing our design work badly. And neither will TDD.

What I think they do do, in fact, is make the implementation of bad design conspicuously painful—remember that line about the next test being hard to write? But programmers tend to suffer from very bad target fixation when a tool becomes difficult to use and they put their head down and power through, when they should really stop and take a step back and think about what the hell they’re doing.

Written by keithb

November 3, 2015 at 9:41 pm


#AsMuchEstimationAsYouNeedWhenYouNeedItAndThatsLessThatYouThinkAndNotSoOftenAsAllThatReallyButJustGetOverIt


Ron Jeffries and Steve McConnell have been discussing #NoEstimates.

Ron wants me to sign up to a Google group to comment, and who has time for that? Worse, Steve wants me to become a registered user of Construx. So, instead I’ll comment here. I’m still paying for this site, after all.

As you might imagine, world famous estimation guru McConnell isn’t so keen on #NoEstimates. Here’s Ron’s response to Steve’s response to Ron’s response to Steve’s video responding to the #NoEstimates thing.

One of the smartest things I ever read about estimation, and one that I quote freely is this: “The primary purpose of software estimation is not to predict a project’s outcome; it is to determine whether a project’s targets are realistic enough to allow the project to be controlled to meet them”—McConnell, 2006.

That was published about 10 years ago. In the context of the state of the art of software development ten years ago, this statement was quite radical—surprisingly many organisations today still don’t get it. In the ten years since then the state of the art has moved on to the point that some (not all, but some) development shops are now so good at controlling a project to meet its targets that creating an up-front determination of whether or not that can be done is really not so useful an exercise. Of course, part of that process has been to teach “the business” that they are wasting their time in trying to fix their targets far ahead into the future, because they will want to change them.

Another very smart thing, from only six years ago: “strict control is something that matters a lot on relatively useless projects and much less on useful projects. It suggests that the more you focus on control, the more likely you’re working on a project that’s striving to deliver something of relatively minor value.”—DeMarco, 2009

Very true. And since then that same progression in the state of the art has so reduced the cost of building working, tested software that the balance has moved further in the direction of not doing projects where the exact cost matters a lot. #NoEstimates is this pair of ideas carried to their natural conclusion.

It’s still not unusual to see IT departments tie themselves in knots over whether a project whose goal is to protect billions in revenue should have a budget of one million or one point five million. And to spend hundreds of thousands on trying to figure that out. The #NoEstimates message is that they don’t need to put themselves into that position.

It’s not for free, of course, that state of the art in development has to be present. But if it is, on we go.

In the video, Steve tries some rhetorical jiu-jitsu and claims that if we follow the Agile Manifesto value judgement and prefer collaborating with our customers over negotiating contracts with them, then when they ask for estimates we should, in a collaborative mood, produce estimates. That’s a bit like suggesting that if an alcoholic asks me for a drink, I should, in a cooperative and generous spirit, buy them one.

I’d like to suggest a root cause of the disagreement between Ron and Steve. I’m going to speculate about the sorts of people and projects that Ron works with and that Steve works with. Personally, I’ve worked in start-ups and in gigantic consultancies and I’ve done projects for blue-chip multinationals selling a service and for one-man-band product shops. My speculation is that in Steve’s world, IT is always and only a cost centre. It’s viewed by the rest of the business as a dark hole into which, for unclear reasons, a gigantic pile of money disappears every year. The organisation is of course very well motivated both to understand how big that hole is, and to try to make it smaller. Hence: estimation! In addition, Steve likes to present estimation as this coolly rational process of producing the best information we can from the meagre scraps of fact available, suitably and responsibly hedged with caveats and presented in a well-disciplined body of statistical inferences. And then the ugly political horse-trading of the corporation gets going. I think that believing this is a reasonable defence mechanism for a smart and thoughtful person caught in the essentially medieval form of life that exists inside large corporations (and, whisper it, all the more so in large American corporations). But it isn’t realistic. In those environments, estimation is political, always.

My speculation is that Ron, and many #NoEstimates advocates, work more in a world where the effort (and treasure) that goes into building some software is very clearly, and very closely in time and space, connected with the creation of value. And that this understanding of IT work as part of the value creation of the organisation and the quickness of the return leads to estimation being really not such a big deal. An overhead of limited utility. So why do that?

Your organization, I’ll bet, falls somewhere between these two models, so you probably are going to have to do #AsMuchEstimationAsYouNeedWhenYouNeedItAndThatsLessThatYouThinkAndNotSoOftenAsAllThatReallyButJustGetOverIt

Written by keithb

August 2, 2015 at 9:10 am


Hiring…


If you like the kind of work you see here, come join me in London. We’re hiring. Apply via LinkedIn or drop me a line.

Principal consultants take responsibility for particularly challenging solutions in demanding organisational environments. They interact closely with senior project managers and with customer representatives at all levels, including senior management, and they guide project teams. Together with the responsible project managers, they lead technical and strategic initiatives to success, ranging from critical consulting mandates to complex delivery projects. Together with business development and business unit managers, they actively expand Zuhlke’s business and develop new opportunities. This can involve taking the leading technical role in large bids.

Lead consultants take decisions and provide advice regarding complex technical systems. They closely liaise with the software development team, the project manager, and customer representatives, often with a technical background. They ensure that sound technical decisions are made and subsequently realised in state-of-the-art solutions by the project team. They can take the leading role in technical consulting assignments within their specialisation area.

The role is based in London and the majority of the work takes place in the UK, but on occasion training and consulting engagements may be delivered anywhere in the world.

The competitive package includes 20 days of professional development time per year.

Written by keithb

October 3, 2011 at 5:13 pm


TDD as if You Meant it at Agile Cambridge 2011


Attendees of the session at Agile Cambridge, please add links to your code in comments to this post.

Written by keithb

September 29, 2011 at 12:04 pm


What this blog is for


Over the years I have been doing some rather informal research into the relationship between Test-Driven Development and certain measurable properties of the resulting code. That older work is discussed on my Blogger blog. I want to put that research onto a firmer foundation and to do so in the style of “open science”.

I welcome your comments, criticisms and even contributions.

Thanks to Steve Freeman for help with the name.

Written by keithb

July 10, 2011 at 10:55 am
