cumulativehypotheses

mostly professional blather

Special–ism


If there is a common theme to this “Agile” journey that you’re almost certainly on if you’re reading this—and there might not be—then a good candidate for that could be a certain series of unpleasant, ego–challenging realisations.

It began with programmers, back in the late 90s, as eXtreme Programming took hold and the unpleasant, ego–challenging realisations dawned that: the common old–school programmers’ preferred mode of engagement with users—pretending that they don’t exist—and their preferred mode of engagement with their employers—surly passive–aggression leavened with withering cynicism—and their preferred mode of engagement with each other—isolation interspersed with bouts of one–up–manship—and their preferred mode of engagement with the subject matter at hand—wrestling it into submission via the effortful application of sheer individual intellectual brilliance—weren’t going to cut it any more.

Over time, the various other specialisms that go into most industrial–scale software development—and IT systems work more generally—have also had to come to terms with the idea that they aren’t that special. With the idea that locking themselves into their caves with a sign outside saying “Do Not Disturb, Genius at Work” and each finding a way to make their particular specialism into the one true bastion of intelligence—and the true single point of failure, with that not seen as a bad thing—is not going to cut it any more.

A cross–functional, self–organising team will of course contain, must contain, various individuals who have, through education, experience or inclination, a comparative advantage over their colleagues in some particular skill, domain, tool, or whatever. It would be perverse indeed for a cross–functional, self–organising team not to have the person who knows the most about databases look first at a data storage problem. And it would be foolish indeed to let that person do so by themselves—at least pair, maybe mob—so that the knowledge and experience is spread around. And it would be perverse indeed for a cross–functional, self–organising team to make a rule preventing any one member of the team from having a go at any particular problem merely because they aren’t the one with the strongest comparative advantage in that topic. And it would be foolish indeed for such a team to not create a physical, technical and psychological environment where that could be done safely. And so on.

Different disciplines have embraced their not–special–ism at different times and with different levels of enthusiasm. “Lean UX” represents the current batch, as designers get to grips with the idea that they—in common with every other discipline—turn out not to be the special and unique snowflakes uniquely crucial to the success of any development endeavour. Where is your discipline on this journey?

Written by keithb

May 18, 2016 at 2:47 pm


TDD as if You Meant It at Agile Mancs 2016


Please share your experience in comments to this blog post. I’d be most grateful if you also shared your code. Thanks.

Written by keithb

May 12, 2016 at 9:22 am


Corrigendum to Lightning Talk at Agile Mancs 2016


Wrong Theorem

We’ll see on the video, I guess, but I think I said “Mean Value” when I should have said “Central Limit”. My mistake. It had been a long day, and they were handing out beer.

On the β-distribution

If you are going to try that sort of Monte Carlo estimation for yourself you don’t need to wrestle with the full-blown β-distribution, which is a bit of a beast. You can approximate it well enough using a triangular distribution. The base of the triangle is the closed interval—time, money, whatever, I’ll talk about time—between, as Dan North suggested, the time such that if the work took less than that you’d be very, very surprised, and the time such that if the work took much longer than that you’d be very, very embarrassed. Somewhere in that interval—and very rarely in the middle—is the time that you think it’s most likely for the work to take* and the apex of the triangle lies above that point. The altitude of the apex must be such that the area of the triangle is 1, making it a probability density; that works out as 2 divided by the width of the interval.
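If you want to try this at a keyboard, here is a minimal sketch in Python. The three tasks and their numbers are entirely made up for illustration, and it quietly assumes the tasks are independent (which real work rarely is), but the shape is the thing: the standard library’s random.triangular draws from exactly the kind of triangle described above, and summing one draw per task, many thousands of times, gives a distribution of plausible totals rather than a single number.

```python
import random
import statistics

# Hypothetical tasks, in days: (surprised if it took less, most likely, embarrassed if it took more).
# All numbers invented for illustration.
tasks = [
    (2.0, 3.0, 8.0),
    (1.0, 4.0, 10.0),
    (3.0, 5.0, 15.0),
]

def one_possible_total(tasks):
    """One plausible total: a single draw from each task's triangular distribution, summed."""
    return sum(random.triangular(low, high, mode) for (low, mode, high) in tasks)

# Monte Carlo: repeat many times and look at the spread, not at a single number.
totals = sorted(one_possible_total(tasks) for _ in range(100_000))

def percentile(sorted_values, p):
    return sorted_values[int(p / 100 * (len(sorted_values) - 1))]

print(f"median total:          {statistics.median(totals):.1f} days")
print(f"80th percentile total: {percentile(totals, 80):.1f} days")
print(f"95th percentile total: {percentile(totals, 95):.1f} days")
```

Reading off, say, the 80th or 95th percentile of those totals is a rather more honest answer to “how long?” than any single point estimate.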

Have fun!

Getting to not having to do any of that

Being asked for an estimate is, it’s true, a sign that your interrogator doesn’t trust you.

Usually this is not because they are monsters who relax every evening with a well-thumbed copy of The 48 Laws of Power and a small glass of chilled kitten’s tears. Usually it’s because, firstly: they have suffered a long history of paying people to develop software for them and not getting much back in return, and secondly: they don’t know you from Adam or Eve.

In a better world, they would trust you, because your heart is pure and your intentions good and wouldn’t it be great if we could all just get along? And it would.

Until that world arrives we will have to deal with suspicious, hard-bitten CFOs who have a keen understanding of their fiduciary duty. They—or their subordinates—will ask you for an estimate. Some of them will then want to interpret your estimate as a commitment, perhaps with contractual remedies for failure to meet same, and so hold your feet to the fire to deliver exactly against those estimates. I strongly recommend not doing business with the ones that cleave tightly to such a position. They are setting themselves, and you, up for failure.

But many will be amenable to understanding that, as McConnell says, the point of estimation is not to predict outcomes but to see if you are in with a chance of managing your way to success.

How to break the cycle?

Deliver value. Early, often, reliably. Very early, very often.

With contemporary engineering practices and tooling this can now often be done very quickly. Even in those “heavily regulated” environments. When you have—and maintain—a reputation for delivering value then your conversations with the people who’re paying you to do that change dramatically. From: “how much will X cost?” To: “how much do I need to pay to have you keep on doing J, P, R, S, Q…?” That’s much more agreeable for all concerned.

Stepping back

YMMV, but I’ve found, while doing and managing software development work in and for VC-funded local startups, publicly traded global blue-chips, privately owned product companies, government departments and many other scenarios, on four continents, that when something about the scenario says “new”, whether it’s that you are a new supplier, or they are a new CFO, or whatever, then requests for estimates—maybe with some story about “just for budgetary purposes” attached—will come; but once a reputation for reliable delivery of value is established they fade away, until something changes that takes us back to “new”.

It would be nice if we didn’t have to erect this infrastructure of estimation every now and again. You can, if you wish, refuse to work under such terms—you may then find work unpleasantly hard to come by. Or you can invest in re-educating people not to ask those questions—but the first rule of consultancy is not to solve a problem you aren’t being paid to solve. Or you can roll with it, demonstrate the simple lack of necessity of estimation in an effective organisation, and wait for the next cycle, which will often be less severe than the previous one. And so on.

Good luck!


* Whenever you are asked for an estimate, for anything, you should reply with at least these three numbers—or their equivalent—and if that shape of answer, never mind the content, is not acceptable, then you have discovered that you were not being asked for an estimate.

Written by keithb

May 12, 2016 at 9:19 am

TDDaiYMI XPManchester


Please report on your experience with the session in comments to this post. And, if you’re happy to, please post a link to your code.

Written by keithb

April 14, 2016 at 4:36 pm


Some Mass Transport Metaphors


One day the software development industry will be grown-up enough to talk about what we do in its own terms, but for now we have, for some reason, to use metaphors. And we love, for some reason†, transport-related ones. A recent entry is the (Agile) Release Train. It’s a terrible metaphor. Here are some other mass–transportation metaphors for how you might organise the release schedule for your software development activities, ordered from less to more meritorious in respect of two figures of merit: batch size, and the cost of delay if you miss one.

In these metaphors, people stand for features.

  1. Ocean liner—several thousands of people move slowly at great expense once in a while. Historically, this has been the status quo for scheduling releases.
  2. Wide–body airliner—several hundreds of people move quickly, a couple of times a day
  3. Train—several hundreds of people move at moderately high speeds several times a day

Some failure modes of the train metaphor…

Although the intent of a “release train” is that it leaves on time no matter what, and you can get on or off at any time until then, and if you miss this one, another will be along in a little while, in practice we see attempts to either:

  • cram more and more people onto the train in a desperate attempt to avoid having to wait for the next one, à la rush-hour in many large cities, or
  • add more and more rolling stock to the train to avoid having to run another one

More generally, what a metaphor based on trains will mean to you may depend a lot on your personal experience of trains. For my colleagues in Zürich trains are frequent, swift, punctual, reliable, capacious & cheap. For my colleagues in London, not so much any of those things…

  4. Tram—several dozens of people move significantly faster than walking pace every few minutes

† Actually, that might be because for quite a lot of the industrial period, say from about 1829 to about 1969, various forms of transport were the really hot technological stuff of the day.

Written by keithb

March 29, 2016 at 11:22 am


TDD: giving in to get along


I like TDD. But if you’re reading this blog you probably knew that already.

“coding like a bastard” considered harmful

I can remember a time when I thought, because I’d been taught so and it seemed to make sense, that software—programs—were designed on paper, using ▭s and →s of various kinds and, in several of my earlier jobs, more than a few ∀s and ∃s, and that the goodness of a design was determined by printing it out in a beautifully formatted document† and having an older, wiser, better—well, older and therefore presumably wiser, and more senior so presumably better, anyway—designer write comments and suggestions all over it in red pen during a series of grotesquely painful “review” meetings, and then trying to fix it up until the older etc. designer was happy with it. After which came an activity known at the time as “coding like a bastard”, after which came the agony of integration, after which came the dismaying emotional wasteland of the “test and debug” activity, which took a duration essentially unbounded above, even in principle.

Hmm, now I come to write it down like that, it seems like that was a colossally idiotic way to proceed.

There were some guidelines about what made a good design. There were the design patterns. There were Parnas’s papers, such as On the Criteria… and there were all these textbook ideas about various kinds of coupling and cohesion and…ah, yes, the textbooks. Well, there was Pressman‘s, and there was Sommerville‘s and some more specialist volumes and some also–rans. When I returned to university after a fairly hair–raising time in my first job as a programmer, wanting to learn how to do this software thing properly, we used whatever edition of Sommerville was current at the time—it’s now in its 9th—as our main textbook for the “software engineering” component: project management, planning, risk, that sort of thing.

So, it’s a bit…startling, we might say, to see Ian’s writeup of his experiment with TDD dealt with by Uncle Bob in quite such…robust, we might say, terms. Startling even for someone as…forthright, we might say, as I usually am myself.

As described, doing TDD wrongly

Thing is, though, Ian is, as described, doing TDD wrongly. And the disappointments that he reports with it are those commonly experienced by… by… by people who are very confident—rightly or wrongly and in Ian’s case, probably rightly—in their ability to design software well. I used to be very confident—perhaps wrongly, but I don’t think so—of my ability to design software. I mean to say, I could produce systems in C++ which worked at all—mid 1990s C++, at that—and this is no mean feat.

Interestingly, at the time I first heard about TDD by reading and then conversing with Kent and Ron and those guys on the wiki—C2, I mean, the wiki—I was already firmly convinced of the benefits of comprehensive automated unit testing, having been made to do that by a previous boss—who had himself learned it long before that—but of course we wrote the tests after we wrote the code, or, to be more honest about it, while debugging. And, yes, even with that experience behind me, I thought that TDD sounded just crazy. Because to someone used to the ▭s and →s, and to design as an activity that goes on away from a keyboard, and especially to someone who does that well—or believes that they do—it does sound crazy.

And so a lot of the objections to TDD that Ian makes in his blog post seem eerily familiar to me. And not only because I’ve heard them often from others since I started embracing TDD.

Thorough, if unnecessarily harsh

Well, anyway, Bob’s critique of what Ian reports is pretty thorough, if unnecessarily harshly worded in places, but there are a few observations that I’d add.

Ian says:

Test-first or test-driven development (TDD) is an approach to software development where you write the tests before you write the program.

Apart from the fact that writing tests first is merely necessary, but very much not sufficient, for doing TDD, so far so good.

You write a program to pass the test, extend the test or add further tests and then extend the functionality of the program to pass these tests.

Mmmm.

You build up a set of tests over time that you can run automatically every time you make program changes.

That does happen, yes…

The aim is to ensure that, at all times, the program is operational and passes all the tests.

Yep. I especially like the distinction between merely passing all the tests and also being operational.

You refactor your program periodically to improve its structure and make it easier to read and change.

and, sadly, this last sentence misses a key practice of TDD and largely invalidates what comes before. Which practice is that you refactor your code with maniacal determination up to as frequently as after every green bar.

Every. Green. Bar.

Technically, we could claim to be doing something “periodically” if we did it every 29th of February or every millisecond, but I think that to say we do something “periodically” suggests a fairly low frequency. But in TDD we should be refactoring often. Very often. Many times an hour. To really do TDD requires that we spend quite a large proportion of all the time invested in any given programming exercise on refactoring. So, Ian has kind–of fallen at the first hurdle because he’s not really doing TDD right in the first instance.
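For the avoidance of doubt about what that micro-cycle looks like, here is a small, hypothetical sketch in Python. It has nothing to do with Ian’s actual exercise, and the leap-year example is invented purely for illustration, but it shows the shape of red, green, refactor and where “every green bar” fits.

```python
import unittest

# Red: write the smallest test that fails for the next bit of behaviour we want.
# Each test below was, in its day, the one failing test driving the next change.
class LeapYearTest(unittest.TestCase):
    def test_ordinary_year_is_not_leap(self):
        self.assertFalse(is_leap(2015))

    def test_year_divisible_by_four_is_leap(self):
        self.assertTrue(is_leap(2016))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap(1900))

    def test_fourth_century_is_leap(self):
        self.assertTrue(is_leap(2000))

# Green: the least code that passes. After the first test this might literally have been
# "return False"; it grew by one small step per new failing test.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Refactor: with the bar green, tidy names, remove duplication, simplify the expression,
# rerun the tests immediately, and only then go back to Red with the next test.

if __name__ == "__main__":
    unittest.main()
```

The point is the rhythm: each new test forces a tiny bit more behaviour, and every time the bar goes green the code gets tidied before the next test is written.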

Now, it used to be a frequent complaint about TDD advocates that we sounded like Communists: it was claimed that we would immediately respond to anyone who said that “I tried TDD and it doesn’t work” by claiming that they weren’t even doing TDD, really, in the same way that fans of Communism would contend that it had never really been tried properly so, hey, it might work, you don’t know.

Not a useful response

The thing is, though, a lot of people who dismiss TDD really haven’t tried it properly—and a lot who say that they do TDD aren’t doing it right either and are missing some benefits, but that’s another story—so of course they didn’t get the advertised effect. And by now we have lots of examples of people who really have tried TDD properly and the interesting and positive results they’ve obtained. Ian did not try doing TDD properly.

And then since that wasn’t going so well, he stopped even trying to:

[…] as I started implementing a GUI, the tests got harder to write and I didn’t think that the time spent on writing these tests was worthwhile.

Well, yes, we know that writing automated tests for GUIs is 1) hard and 2) relatively low value. But this:

So, I became less rigid (impure, perhaps) in my approach, so that I didn’t have automated tests for everything and sometimes implemented things before writing tests.

is not a useful response.

One useful response is to use something like MVC, or MVP, or ports-and-adaptors or one of the many other ways to make the GUI very, very thin, and do automated tests behind that and test the actual GUI by hand. But from this point on Ian has basically invalidated his own exercise in TDD because, although he wasn’t really doing it to begin with, he was at least trying, but it turned out to be tough and so he stopped trying. And also stopped learning. Which is a missed opportunity for him, and also for the rest of us. I encourage Ian to try again, maybe with some coaching, and see how that goes, because I would be genuinely interested to see how a seasoned software engineering academic gets on with that.
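To make that concrete, here is a minimal sketch in Python of the “humble view” idea, one possible shape and certainly not the only one; the invoice example and all the names are invented. Every decision lives in a presenter which a plain unit test can drive through a fake view, and the real on-screen view (Tkinter, Qt, a web page, whatever) is left with nothing clever enough to need automated tests.

```python
import unittest

# The "port": the only things the core ever asks a view to do. A real GUI class
# (Tkinter, Qt, a web page...) implements these same two methods and nothing cleverer.
class FakeView:
    def __init__(self):
        self.shown = None
        self.error = None

    def show_total(self, text):
        self.shown = text

    def show_error(self, text):
        self.error = text

# The presenter holds all of the logic we actually want automated tests around.
class InvoicePresenter:
    def __init__(self, view):
        self.view = view

    def add_lines(self, amounts):
        try:
            total = sum(float(a) for a in amounts)
        except ValueError:
            self.view.show_error("amounts must be numbers")
            return
        self.view.show_total(f"Total: {total:.2f}")

# Tests drive the presenter through the fake view; no GUI toolkit is involved.
class InvoicePresenterTest(unittest.TestCase):
    def test_sums_the_line_amounts(self):
        view = FakeView()
        InvoicePresenter(view).add_lines(["1.50", "2.25"])
        self.assertEqual(view.shown, "Total: 3.75")

    def test_reports_bad_input_instead_of_crashing(self):
        view = FakeView()
        InvoicePresenter(view).add_lines(["1.50", "oops"])
        self.assertEqual(view.error, "amounts must be numbers")

if __name__ == "__main__":
    unittest.main()
```

What remains for hand-testing is then just checking that the real view forwards events to the presenter and paints whatever strings it is given.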

Not your daddy’s COBOL compiler

Ian says:

Think-first rather than test-first is the way to go.

Well…

he also says:

I started programming at a time where computer time was limited and you had to spend time looking at and thinking about the program as a whole.

Yes. There’s a whole hour-long presentation that I have about this, but in brief: the microeconomics of programming have changed in quite a fundamental way over the last few decades. Even since I started working.

In my second job as a programmer I worked on a product written in C++ where, no joke, a full build was something you started on Friday lunchtime and went down the pub, hoping that it would be finished by the time you strolled in late on Monday morning. Even incremental builds on just the sub-system I was working on took “go have a cup of tea” amounts of time. Running our comprehensive automated unit test suite (written post hoc, as described above) took “go have lunch” amounts of time.

The time period that Ian is talking about was much worse even than that. In that era the rare and expensive resource was machine cycles, and they needed to be dedicated to doing the useful, revenue–earning thing. Programmer thinking time was, relatively, cheap and abundant, so the mode of working tended to use lots of that to avoid wasting machine cycles on code that was not strongly expected to be correct.

If you wanted to work the way we do now—for example, with approximately one computer per programmer—you had to be, say, NASA, and you had to have, say, basically unlimited resources because your project was, say, considered to be a matter of national survival. But for most programmers, their employer could not afford that. The entire organisation might have as few as one computer. Maybe one to a department.

The whole edifice of traditional software engineering can be seen as a perfectly reasonable attempt to deal with the constraint that you couldn’t afford to have a programmer use machine cycles to do programming with. So you needed to find ways to write programs away from a computer. That’s what the ▭s and →s were trying to do. The people who came up with that stuff meant well, but ended up creating that world of the colossally idiotic ways to proceed.

I was once sent on a COBOL programming course—it’s a long and dreary story—and on this course we worked within a simulation of those bad old good old days: programs were designed using what I later realised was Jackson Structured Programming, written out in pencil on pre-printed 80-column coding sheets, desk-checked, and then typed in to a COBOL development system. One PC for a class of about 20 students—before which we formed a queue—and we each only had three goes at the compiler. If it took more than three compile/test/debug episodes to get your program running you failed the course.

Today, we are awash with machine cycles. I have many billions of them available to me here right now every second and all I’m using them for is writing this blog post. John von Neumann* must be spinning in his grave.

Don’t play dumb

If I were programming right now, rather than doing this, then I could use those billions of cycles to get prompt, concrete feedback from a large body of tests and from other tools about my current position in a long series of small design decisions.

Rather than thinking in big, speculative lumps I could think in tiny, tiny increments—always with the ever important continual, frequent and determined refactoring.

There is a failure mode, though. Ian says:

[…] with TDD, you dive into the detail in different parts of the program and rarely step back and look at the big picture.

Don’t do that.

I don’t think that there’s anything in TDD that says not to step back and look at the big picture. There’s nothing that says to do that, it’s true, but why wouldn’t you? It’s disappointing to see a retired Professor of Software Engineering playing dumb like this—if he feels the need to step back and look at the big picture then he should. He shouldn’t not do that merely because he’s making an attempt to try out a technique that doesn’t say to do that. I mean, really!

Mighty thinking is not the winning strategy

Added to which, I don’t recall anyone ever saying that TDD is the only design technique—and it is a design technique—that anyone needs to use at any scale to produce a good system. What is said, by me for one, is that by using TDD to guide design thinking and most importantly, to make it quick, easy, cheap and safe to explore different design options, we can get to better results sooner and more reliably than we can by mighty thinking, which was previously the only economically viable method.

I understand that this can be discomforting to those whose thoughts tend to the mighty. It’s almost as if in contemporary** software development mighty thinking has turned out not to be the winning strategy, long term.

Neither for individuals in their careers nor for their employers, nor for their industry. It might be time to come to terms with that. And for a certain kind of very smart, very capable, very confident designer of programs that means letting go. Letting go of the code, of the design, letting go of a certain sense of control and gaining in return a safe way to explore design options that you were too smart to think up yourself.

And that’s not easy.


† we had to use professional quality document preparation systems to do that, because of all the ▭s and →s and ∀s and ∃s. Which was fun.

* He’s supposed to have responded to a demo of some tools written by a programmer to make programming easier by saying that “it is a waste of a valuable scientific computing instrument to use it to do clerical work”

** that is, since about 2006…

Written by keithb

March 20, 2016 at 8:40 pm

Posted in TDD

Circumflexual


So, there are reports that in France there is outrage amongst (right-wing, conservative) commentators that the current government of President Hollande (a socialist) has reconfirmed the orthographic changes proposed originally in 1990 and agreed by the government of President Chirac (right-wing, conservative) which—amongst other things—delete the circumflex from words where it makes no difference.

Some of these reports seem to follow the Wikipedia article on the circumflex in mentioning that in English, apart from loan words, the circumflex is not used today but was once: in the days when posting a letter was priced by weight, an ô was used to abbreviate ough. As in “thô” for “though”. This seems like a fine convention, and one that I intend to adopt in tweets and instant messages. Now that we can pretty much assume that both ends of any messaging app conversation will have good Unicode support we can do a range of interesting things.

For example, althô you can put newlines in tweets†, it seems as if many messaging apps are designed on the assumption that no–one using them ever has two consecutive thoughts, and so interpret a [RETURN] as send. I’ve started using ¶ in messages. I wish it could be typed on an iPhone soft keyboard. For some reason § can be, which I think is no more obscure. Anyway, the pilcrow can be copied and pasted, as can ‘∀’ to mean “all” & ‘∃’ to mean “there’s a” or similar. I’d like to use ‘¬’ for “not” but that might be a step too far, althô I do see a lot of “!=” and “=/=” type of thing in my Twitter stream. I also tend to use pairs of unspaced em–dash for parenthetical remarks—like this—which saves two characters in a tweet vs. using actual parens (like this). The ellipsis comes in very handy in several ways… ¶ Over time I’m getting more relaxed about using ‘&’ which of course has a particularly long heritage, although not so long as is sometimes thôt. ¶ What other punctuation can we revive, re-purpose or re-use?

Update: how do we feel about ‘þ’ or ‘ð’, both easily available from the Icelandic keyboard, for “the”?


† I’ve used this to sneak footnotes into tweets. Of course, this will all become a bit pointless if the managers at Twitter really do continue to force fit their brilliant ideas into the product, rather than continuing their previously successful strategy of paving cowpaths.

Written by keithb

February 20, 2016 at 2:19 pm
