Author: francis

Organisational muddiness

Big ball of mud

Way back in the dark mists of the early 2000s, design patterns were a new-ish idea. People went crazy for them; no project planning meeting was complete without several links to various design pattern architectures, and blah blah. Eventually people realised that the best way to write software was to write software, in an unconscious appeal to the original principles of the Agile Manifesto.

The tools we used, which created bazillions of class diagrams that were then translated into incomprehensible Java, were quietly put to one side by most people.

Some patterns persist and are useful, things like Factory, and sometimes refactoring to a known pattern can actively simplify how things work. There are also architectural patterns: Martin Fowler wrote an excellent book explaining them, and DHH used a lot of them when designing Rails. I think this is why Rails worked so well; it was opinionated software, and the opinions were the result of a lot of careful thought and deep experience.

The big ball of mud pattern is what you get when you don’t take steps to design your architecture and instead end up with something that works but is internally incoherent. You see this a lot with successful start-up companies. They ran really fast and didn’t try to follow long-term survival practices such as behaviour-driven development. If they are successful, after about 18 months everything comes to an abrupt halt, because the system is extremely hard to maintain and full of decisions which hindsight shows to be poor and inflexible.

I contend that there is also an organisational equivalent: you end up with the structure that’s obvious when confronted with the next problem, and suddenly you have dev teams that work on the same thing but differently, a devops team because everyone else has one, and sales teams using squads because it’s hip. Everyone’s in too much of a hurry to read Deming or Goldratt and properly think about what works, for them, now.

Conway’s law

This was originally meant as a joke, but it’s been seen in the wild so many times that it isn’t one any longer. Put simply, the architecture of the software you build and the way your organisation is structured will mirror each other. In essence, if your organisation is a pragmatic ball of mud with muddy communications, the software you produce will be too.

Whirlpool of mud

So you end up with a self-reinforcing pragmatic organisation that will continue to create muddy solutions with muddy architectures for muddy conceptions of what your customers want. You need to be as deliberate about your organisation’s structure as you are about the software you need to run it.

Successful systems

A system that works is built from multiple smaller systems that also work. This sounds obvious, but people often ignore broken or poorly performing subsystems and throw things together.

Poor process produces poor outcomes

It’s there in the section title: mud breeds mud, haste breeds waste, rushing breeds redoing, inconsistency breeds incoherence.


Not addressing the mud means you are just wishing that it will go away. This doesn’t generally work that well, but you often find it when you’re working with people who have read too many self-help books about building confidence through affirmations, and who confuse their desires with what causality makes possible.

Sales driven organisations can suffer really badly from this.

Solutions – a starter

Constancy of purpose

When I ran the Lean Teams consultancy, the very first thing I did in a workshop for my clients was a session about constancy of purpose. In a nutshell: can you write down what your organisation does in a sentence or two? When you make decisions, do they align with it or not? If not, don’t!

The other thing this helps you do is identify people’s needs: your customers’ and your organisation’s.

Note it all down

  • Identify teams
  • Define responsibilities
  • Define info flows and transformations
  • Use the documentation to create consensus

Be mindful of Conway’s law: is the shape of the organisation correct for what you’re trying to do? Also, noting down means using pictures; long screeds of text are hard to write accurately and follow precisely. Interaction diagrams, with notes where they help, are by far the best way of doing this.

Double Looped learning

  • Single looped learning is being able to turn a handle and get a result.
  • Double looped is knowing what the objective is behind the turning of the handle and maybe changing it.

You need to get to the single loop first. You will have a rough idea of what needs to be done and need to work out how to do it. Again, use pictures. Attending only to the single loop can make you very myopic, and changes outside the loop can remove the need for it, causing you to waste your time.


Good processes

Good processes have these qualities

  • Repeatability
  • Lightweight
  • Well defined interfaces with other processes
  • Learning
  • Less than ten steps

Lightweight means that a decent level of automation is also required – being able to repeat something that involves twenty manual steps, any of which could be forgotten, is neither scalable nor useful.

Putting it together

After writing it all down draw your processes end to end.

  • Do they match the constancy of purpose?
  • Can they be done within one team?
  • Do they pass through many teams?
  • Have you documented any data transformations needed when passing between teams or roles?

Once you have this you need to start building the second loop. Constancy of purpose should give you an idea of the value each process and step adds; lots of data transforms means you’re involving lots of different teams or functional areas – is that a good idea?

The next thing you need to do is work on a consensus about how things are done.

Then it’s time to start thinking about the second loop. If you do some googling you will find that there are even more loops beyond these two.

The end goal is not functionality

Many years ago I read an academic paper about why the Apple Lisa (you won’t remember it) failed. The paper put it down to the three Fs: functionality, functionality and functionality. This is quite a witty point, and indeed the number of new startup websites one sees that don’t actually say what the app is supposed to do, because they’re trying to persuade their confused potential customers to tell them what they want, is a testament to the idea.

There is a problem with this view, though. People who use your stuff don’t want functionality, they want capability. They have a problem they want to solve and you’re offering them a way to either get rid of it or manage it down to tolerability. The Lisa didn’t allow people to do things they needed to do. The Mac, later, was originally the only platform that did desktop publishing well and reasonably affordably (compared with the price of a conventional printing studio).

So, when you’re doing your interviews and looking for user pain you can meet, you need to understand that what eases the pain isn’t functionality, it’s capability. So even making a hard thing a little easier to do is worthwhile. This changes your perspective on what you can and should deliver.

As with a lot of engineering type problems, take a step back, think about what the wider picture is. It’s always a valuable exercise.

Mistakes are a good thing

If you make a bad mistake you’ve proved two things to the people you’re working with:

  1. You aren’t sitting around doing nothing
  2. You can learn

Many years ago I read a story about a man who had cost the company he worked for several million dollars. He managed to do some other things that mitigated the loss, but there was still a loss of a couple of hundred thousand bucks.

He glumly told his boss what had happened and how he had managed to claw back a good chunk of the cash.

He was very surprised when he wasn’t fired.

Why didn’t you fire me?

Why would I? I just spent 200k training you how to manage risk!

There we have it. If you can take a breath, look at what caused the mistake, and why, and how to reduce the chance of it happening again, you’ve come out ahead.

Don’t let the fear of mistakes paralyse you. If you’re junior you won’t have been put in a place where you can do any serious damage if you’re working for people with any sense. If you’re senior, well, there’s always something new you need to learn.

The newbie super power

If you’re new or junior in your job you can sometimes find yourself paralysed by how little you know. This can come from two different places:

  • Context. There are so many choices you don’t know where to start.
  • Fear. Your ignorance means you don’t know what to do to even try to start.

You can fix both of these by using your newbie super power:

Ask questions.

We were taught in school that you’re not supposed to cheat; you’re just supposed to know things, and teacher will be angry if you don’t. You were taught that cribbing from other people is bad.

But real life, in a real job, is not an exam. Everyone, selfishly, needs you to become productive and be able to make a contribution. The only way that can happen is if you ask what they need you to do. If you don’t understand, ask again. I would try something first, just so you have something to talk about and learn from, but asking is a super power.

Watch the people who are senior. They listen, and they ask. This is a superpower you need to master.

Rspec failing with no such method validate_uniqueness_of

This was fun, in a not-really-that-much-fun kind of way.

I am getting an old project ready to move to a different Heroku stack, which has involved a lot of running bundle and scratching my head.

So I fire up the tests … and some fail with

  undefined method `validate_uniqueness_of' for #<RSpec::ExampleGroups::Blah::Validation::.....>

So, let the debugging begin

  1. Is it a problem with Shoulda? Try everything, and discover that shoulda is indeed the source of that actual method. Also discover that there are some useful methods on models that shoulda calls by default, but none of them have the word unique in them.
  2. Have another project that works; go into the debugger and discover that it calls the matcher and doesn’t just look for a method.
  3. Stroke chin, then add type: :model to the spec declaration, because the other project needed it and it was worth a try. Problem goes away.
  4. Go diving into yet another project that has a spec helper that works without type: :model, and find that newer versions of RSpec have a cheeky config attribute (sketched below).
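
For reference, a minimal sketch of the setting I believe is meant here – rspec-rails’ infer_spec_type_from_file_location!, which tags anything under spec/models with type: :model so the shoulda matchers get mixed in:

# spec/rails_helper.rb (or spec_helper.rb)
RSpec.configure do |config|
  # infer type: :model, type: :controller, etc. from the directory a spec lives in
  config.infer_spec_type_from_file_location!
end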

I add this to my helper and all the tests pass

So, I thought I would put something here that the next mug might find if the same thing happens to them, so they won’t lose a morning to it.

This has been a community service announcement, have fun.

Today I rediscovered the Ruby double splat operator

Old Ruby hands will know that you can use the splat operator * to turn an array into a list of arguments:

def fred(x, y, z)
  puts x
  puts y
  puts z
end

argy_bargy = [1, 2, 42]

fred(*argy_bargy)

=> 1
=> 2
=> 42

There is also another trick you can do with this:

a = 1

[*a]
=> [1]

a = [1]

[*a]
=> [1]

Here the dereferencing lets you ensure you are always working with an array. This is thought to be a bit old fashioned now, and Rubyists instead use the Array() method, which does the same thing with less syntactic magic. Back in the days before Ruby 2 (or so) the splat was the only way to do it.
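
For comparison, here is Array() doing the same normalisation:

Array(1)    # => [1]
Array([1])  # => [1]
Array(nil)  # => [] ([*nil] gives [] too)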

I’m a great believer in trying to avoid the cardinal sin of Primitive Obsession – in Ruby this usually manifests itself as storing hashes everywhere and ending up with lots of code, full of several levels of square brackets, that manipulates them and ends up parked in the wrong place. Ironically, Rails is full of this, and you often find yourself peering through documentation looking for the options hash to see what you can ask an object to do.

So in an ideal world you might want to be able to create objects on the fly from hashes (say ones stored as JSONB in a Postgres database, for instance 😉) that have some intelligence in them beyond the simple, somewhat cumbersome, data carrier that Hash offers. So you could try something like this:

Thing =, :y)

harsh = { x: 1, y: 2 }

wild_thing =*harsh.values)

The only problem here is that you lose the useful semantics of the Hash: if it is initialised in a different order, or from a merge, then you’re screwed. The hash keys and the Struct’s keys having the same names is a mere semantic accident. This isn’t what you want. Instead, let’s use the double splat:

class Thing
  attr_reader :x, :y

  def initialize(x:, y:)
    @x, @y = x, y
  end
end

harsh = { x: 1, y: 2 }

wild_thing =**harsh)

You now have a way to create an object from your hash source – one you can start adding methods to – that is isolated from argument order changes. There is one last wrinkle, which is that JSONB keys are strings, so you need:

hash_from_json = { "x" => 1, "y" => 2 }

# symbolize_keys comes from ActiveSupport; plain Ruby: transform_keys(&:to_sym)
wild_thing =**hash_from_json.symbolize_keys)
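
Worth noting, as an aside rather than anything the original problem needed: on Ruby 2.5 or newer, Struct supports keyword_init, which also sidesteps the ordering trap if a Struct is really all you need:

Thing =, :y, keyword_init: true)

wild_thing =**{ y: 2, x: 1 })
# => #<struct Thing x=1, y=2> – whatever order the keys arrive in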

Now you’re almost ready to take chunks of structured JSON and quickly and reliably turn them into usable objects with things like the <=> operator in them, rather than a Hash and nothing more. You can even add methods that implement [] and []= to keep Hash-like capabilities with a bit of meta programming, perhaps for backward compatibility, if you really have to.
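
A minimal sketch of both ideas together – Comparable via <=>, plus Hash-style access (ordering by x is an arbitrary choice for illustration):

class Thing
  include Comparable
  attr_reader :x, :y

  def initialize(x:, y:)
    @x, @y = x, y
  end

  # Comparable only needs <=>; ordering by x is an arbitrary choice here
  def <=>(other)
    x <=> other.x
  end

  # Hash-like access, perhaps for backward compatibility
  def [](key)
    public_send(key)
  end

  def []=(key, value)
    instance_variable_set("@#{key}", value)
  end
end

things = [ 2, y: 0), 1, y: 0)]
things.sort.first[:x]  # => 1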

I have arrays of these; they map to diary entries in an app I’m working on. I’m also very, very lazy when I expect the computer to do the work: I want to solve problems once and have them stay solved, with as little fiddly, RSI-creating pressing of shift to get brackets and things as possible. So:

class DiaryEntry
  attr_reader :name, :start_time, :finish_time

  def initialize(name:, start_time:, finish_time:)
    @name, @start_time, @finish_time = name, start_time, finish_time
  end
end

def apply_schedule(schedule, offset:)
  make_entry = ->(entry_hash) {**entry_hash.symbolize_keys) }
  all_entries =
  # ...
end

Code has been split across multiple lines to ease understanding. I now have a nice array of proper objects that I can filter and map using methods instead of clumsy brackets everywhere. We can sort them and so on if we wish, too.

This last example also shows one of my favourite techniques, which is to remap arrays of things using simple lambda functions, or to use them with methods like filter inherited from Enumerable. A named lambda isn’t always necessary, but it follows the useful intention-revealing name pattern, which can make your code much clearer. Using lambdas like this also means they can be passed as arguments to methods for filtering or formatting, which can be very functional in style without having to go all object-oriented.
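
For instance (starts_before_nine and the nine o’clock rule are invented for this sketch):

starts_before_nine = ->(entry) { entry.start_time.hour < 9 }

early_entries =
early_names =

# named lambdas also pass around nicely as arguments
def entries_matching(entries, predicate)
end

entries_matching(all_entries, starts_before_nine)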

For extra marks, dear reader, there’s a post on Stack Overflow that uses a technique like this to create structs from hashes on the fly. See if you can find it and come up with some uses for it; I’m sure there are many.

Phoenix: It was nice while it lasted

We decided to pull the plug on using Phoenix.

The people we know who are using it are mostly well funded and have the time to learn to use it and find their way around things that are a struggle.

We are not.

I wanted to do something that would have taken me five minutes in Rails and Phoenix just wouldn’t do it. Or, at least, the way to do it wasn’t documented.

I also got burned, and wasted a lot of time, because the Phoenix commands have been renamed to be phx instead of phoenix.

I ended up creating a Phoenix 1.2 app by mistake because the old command wasn’t deleted.

It’s annoying. I like the Elixir language a lot. But it’s back to Ruby on Rails because I don’t have time or the dime.

I think in about a year I might come back to it because it will be a bit more mature.

Notes on my first Ember/Phoenix app

I hit a bit of a problem. I’m writing a quotation system for my company.

I have an entity called EstimateType – it is the driver for the rest of the setup. Basically it’s a name, and it’s used to group the pieces of the estimate together. So you have, say, small business or sole trader, and each may have the same parts of the quote but with different details (for example, sole traders are generally taught one to one and we charge a flat fee, not per delegate).

I built a prototype front end in Ember and used the mirage library.

Using Mirage I just sent the EstimateTypeId to the app and it worked.

The back end’s in Phoenix and I’m using a library called ja_serializer that’s supposed to support Ember out of the box. Having done some experimenting with hand-building stuff that can talk to Ember, I think this is a good idea and will save some time.

The code generated by this library puts the parent ID away in a different part of the JSON from the main data, in a place called relationships. This would be fine (I suppose), but the ID doesn’t end up getting saved either by the generated controllers or by the generated changesets (I had to add it in).

I’m really not convinced this is right.

10 Apr

The generator doesn’t do parent-child properly. It essentially generates the same set of models, tests and controllers that you would get if there were no parent. This is a bit useless and is what got me confused.

I added in some more methods to the tests that create the parent entity and then put it into the queries and structs used by the wrapper methods in the Estimates module (which is the main one for this part of the app).

I’m still a bit meh about having to put things into a module for different parts of the app, which I think came in with 1.3. It’s nice, but often those decisions at the beginning of a development or design run turn out to be not quite right, and then you get into the problem of asking yourself whether it’s worth moving things around. I’d far rather have the problem of deciding whether it was worth pushing things into domains and/or namespaces because my app had become overcomplex. It feels like adding an extra layer of indirection for its own sake, and I’ve had enough of that from the days when I used to write lots of Java.

Now I have a set of tests that run, and controllers that do parent-child correctly.

I did get somewhat hampered when my CI system was failing and not deploying even though the tests were running locally. I have since worked out that this was because

MIX_ENV=test mix ecto.reset

has running the seeds built into its alias. I’ve since added these aliases to my mix.exs file:

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.setup-noseed": ["ecto.create", "ecto.migrate"],
    "ecto.reset-test": ["ecto.drop", "ecto.setup-noseed"],
    "test": ["ecto.create --quiet", "ecto.migrate", "test"]
  ]
end

And now I do ecto.reset-test if I want to trash the test db. I still haven’t worked out how to tell mix to always run this with the test environment, but I’m not worrying about that now.

I’ve also added

 {:mix_test_watch, "~> 0.5", only: :test},
 {:ex_unit_notifier, "~> 0.1", only: :test},

to my deps, so that I can run the equivalent of guard. Test watch autosets the environment to test, but I added only: :test because I didn’t want the dep in my production setup. It does mean I need to put MIX_ENV=test onto the command line or it won’t compile and run, but it’s no great hardship.

Later the same day

I must have used the wrong generator commands before, because this now seems to at least attempt to create the parent records:

mix ja_serializer.gen.phx_api Estimates Estimate estimates company_name:string aspiration:string prepared_for:string email:string body:map estimate_type_id:references:estimate_types

The estimates tests now contain

estimate_type = Repo.insert!(%RosieEstimatesServer.Estimates.EstimateType{})

in the controller tests. The tests are still all busted, but at least there’s a starter for ten there now where there wasn’t before. I still had to set up an alias for Repo, though.

And another thing

The autogenerated changesets don’t have the parent ID in them – maybe they’re supposed to be used differently – but in the absence of any decent examples it’s a bit hard to get to the bottom of.

In all cases I’ve had to add estimate_type_id to the cast and validate_required clauses in the model files.

In addition

|> foreign_key_constraint(:estimate_type_id)

wasn’t put in automatically, which seems a bit weird.


In order to get errors returned in a format Ember likes, I needed to change views/changeset_view.ex so that it returned the errors in a compatible list:

def render("error.json", %{changeset: changeset}) do
  %{errors: errors_to_json_format(translate_errors(changeset))}

defp errors_to_json_format(errors) do
  errors |> {k, v} -> %{detail: List.first(v), source: %{ pointer: "data/attributes/#{k}"}} end)

As in, the old format isn’t supported any more. This code needs a bit more refactoring, but right now it works. Thanks to this guy for the tip.

Also, Ember pluralises the entity name, so the controller methods needed to be changed:

- def create(conn, %{"data" => _data = %{"type" => "estimate", "attributes" => estimate_params}}) do
+ def create(conn, %{"data" => _data = %{"type" => "estimates", "attributes" => estimate_params}}) do

As in, pluralise the type.

Happy days.

And …

import { underscore } from '@ember/string';

keyForAttribute(attr) {
  return underscore(attr);
}
In the serializer config – because Elixir wants underscores inbound and I lost patience with JSON API sez X pedantry 🙂

Book Review: Black Box Thinking: Matthew Syed

Black Box Thinking: The Surprising Truth About Success by Matthew Syed
My rating: 5 of 5 stars

I found the ideas in this book really resonated with my experience in a number of industries.

Syed’s thesis is that we live in a society that usually plays the blame game when things go wrong. He contrasts this with the ideas used by the aviation industry, where mistakes and errors are seen as problems with the system, not with individuals. Because he draws from this industry, the book uses the thinking behind the black box from aeroplanes as its central metaphor.

If something goes catastrophically wrong it’s not the pilot that’s to blame but instead the system as a whole that allowed the mistake to happen. This systems approach is why there are so few aviation accidents.

In the early parts of the book he contrasts aviation errors with medical errors. Doctors are trained not to admit to failure and be “experts”. So instead of learning from failure there’s a culture of “accepting the inevitable” and not trying to stop things happening again. Indeed there’s a really silly idea that doctors’ training makes them infallible.

Syed gives an account of one awful medical accident where a young mother ended up with brain damage because her throat closed up under anaesthetic and the doctors spent so long trying to insert a breathing tube that they didn’t do a tracheotomy in time. It turns out that under stress people lose track of time, and can end up going past hard limits (like how long someone can survive when they can’t breathe) with bad consequences. In this case the poor woman’s husband was an airline pilot and he didn’t accept the “one of those things” arguments.

Eventually, after a fight, the husband managed to get to the truth of what had happened. Instead of being bitter he shared what he had found in medical journals, and made sure that the simple things you can do to stop intense focus from making you blind became more widely known. For example, make sure that everyone involved can call for certain things; hierarchies need to be questioned. Two people work on a problem, but one of them stays out of the decision-making loop and the cognitive stress, so they can see what’s happening and call time.

This information has saved a lot of lives. The woman’s husband has been written to by many surgeons all over the world. There are examples of how this telescoping time phenomenon caused crashes of aircraft, but that industry changed the way it did crisis management to make the problem far less likely to occur.

It sounds simple, doesn’t it? Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.

Next he looks at experts. If there is no feedback loop after they qualify as experts then they do not improve. Without being able to measure your success you are stuck, probably making the same mistakes over and over again.

If we wish to improve the judgement of aspiring experts then, we shouldn’t just focus on conventional issues like motivation and commitment. In many cases, the only way to drive improvement is to find a way of ‘turning the lights on’. Without access to the ‘error signal’, one could spend years in training or in a profession without improving at all

And of course failure is necessary, as long as systems are in place to learn from it:

beneath the surface of success – outside our view, often outside our awareness – is a mountain of necessary failure

This goes much further than the old saw about learning from failure. Syed’s argument is that you must be systematic about it and not blame individuals for systemic failures. But it is also important that individuals take responsibility for what happens and own up when things go wrong. Without this there can be no learning.

Another extremely interesting thread later in the book is when he picks up on marginal gains. This is a way to find improvements and is used by teams like the British cycling team, who were so successful at the Rio Olympics. In short: everything matters. Every detail that can hold back success or performance is important, and when you address them all they compound to create an unstoppable chain of improvements. Small, marginal gains become the root of great success.

Marginal gains is not about making small changes and hoping they fly. Rather, it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.

They use the first test not to improve the strategy, but to create richer feedback. Only when they have a deeper understanding of all the relevant data do they start to iterate.

see weaknesses with a different set of eyes. Every error, every flaw, every failure, however small, is a marginal gain in disguise.

I heartily recommend this book, it’s easy to read and the stories make the examples stay in your head. I hadn’t heard of the marginal gains technique but will be using it myself.


Comparing velocity of agile teams is futile


This relates back to the somewhat thorny issue of estimating how long things will take so people can plan how to allocate resources.

As creators of value-giving stuff we need to be able to make promises to our friends and paymasters that they will get the things they need to be able to do their business. Enterprises of all stripes want to know that they can invest a specific amount of cash and have something that will make more cash. Preferably more than they invested, and in a timescale that isn’t too far in the future.

The problem is they can’t. When you are creating something new it is by nature an unpredictable beast. Instead what you have to do is take steps to minimise the cost of mistakes and maximise the results you get.

The cost of mistakes is also why we work in short iterations: we know we are human and will make them. This isn’t a bad thing, it just lets us experiment and learn without having to bet the farm. Because of this there is a disconnect right at the heart of what’s laughably called software engineering. Businesses invest to make money, and any software you create is an investment. However, you can’t guarantee the results, or even how long it will take to get somewhere useful, and people don’t like that.

The quest for agility

Agile was a response to this; it came from the makers of software, who said that they couldn’t reasonably commit to very large projects with any degree of certainty. It’s all about mitigating the risks by trusting people to think small and act small in well bounded contexts. So you eat an elephant one bite at a time – you develop large systems one small iteration at a time, and the small iterations allow the business to change its mind relatively easily and cheaply as well.

Development is a process and you discover things and questions you didn’t know you needed to answer as you follow it through. Small iterations let you answer these questions before they derail you.

Typically a large system is broken down into deliverable chunks, and the chunks into projects. The projects become a list of tasks that are small enough for one or two people to understand and deliver. These are then sorted into priority order, and a series of iterations is planned roughly.

Then we plan an iteration in detail. Iterations are time boxed. No task should take more than a couple of days at most, and we try and work out how many we can get done in each iteration.

This is where velocity comes in.

Velocity explained

The team get together and give each piece (or story, as it’s called in some schemes) an arbitrary amount of effort they think it will take, allocating it a number of points depending on the difficulty. It’s important to emphasise here that the key word is think. They don’t actually know, because if they did they’d be playing the lottery and living the high life. It takes little time to do this, and they work out what will fit into the next iteration, plus a couple of extra things that they could fit in if things go well. It’s also not effort, as such, but a yardstick that lets you work with the relative sizes of tasks.

Over time they get a feel for how many of these magical points they can get done in an iteration, and how many certain types of task may take.

A very strong concept to emphasise here is that there is no science in this: the points are what that team think for that specific iteration. The points measure nothing, and only give a yardstick of what can probably be fitted into a given iteration by that specific team.

So …

  • If you try and use velocity to compare teams you aren’t going to be able to say anything meaningful, and you’ll just piss people off.
  • If you say things like you only got 23 points done this iteration and 25 last one, you need to work harder to a specific team, they can easily fix it by adding some nonsense to the plan for the next iteration. You’re also demonstrating you just don’t get it.

To complete the process, when each iteration finishes the team traditionally check back with the business representative (often called the business owner) and make sure that what they’ve built fits with the business needs. This is known as the demo. If they’ve been doing technical stuff to support future iterations there may be nothing to show.

Good teams then review what they got done in the last iteration and look for ways to improve.

Velocity is how many points a given team gets done in an iteration. It’s worth measuring because you can start finding out whether or not:

  • The project is possible with the people and resources you have
  • The team find certain kinds of task difficult and maybe need some mentoring for them
  • The communication with the business is being properly managed
  • Some tasks in particular areas are taking a lot longer than first thought – you can go find out why before it kills everything

and you can find all this out very early, so you can fix it before you’ve wasted huge sums of money invisibly choking on the proverbial elephant.

So velocity is a simple rule of thumb that lets you make sure you can keep your promises and head off any problems early. It’s not a management tool to get more productivity. In fact it has little to do with it. For example, a team of very experienced people who’ve been working together a long time might well be able to deliver a lot more in an iteration, but could easily allocate the same number of points as a less effective team, because that’s what they’re comfortable with. If you were some clueless spreadsheet jockey working far away from the delivery you would have no way of knowing this.

The number is meaningless, except as a rule of thumb for a specific team at a specific time. It’s a very blunt instrument and changes anyway as the teams get better or change their practices.


You can’t create a pretty Gantt chart and say this will be ready by the 12th of November, because you don’t know for sure when things will finish. You can get a range of dates once you have some data. If it must be ready by a specific time you can either use the knowledge you gain to manage scope, or make sure that the parts of the project that must ship to make it workable by that date are done first.
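
A back-of-the-envelope sketch of what that range looks like, with all the numbers invented:

remaining_points = 180            # invented backlog size
recent_velocities = [20, 23, 26]  # invented last three iterations

iterations_needed = { |v| (remaining_points.to_f / v).ceil }
# => [9, 8, 7] – somewhere between 7 and 9 iterations: a range, not a date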

You can also get creative. For example, Basecamp wrote their billing module after they delivered their first fully featured version: nobody needed to be billed in the first 30 days, which gave them time to put more into the product up front and still left them 30 days to create the billing module.

In software there are no hard and fast rules, and you need to take the time to work out how to get the business what it needs, which is capability, not functionality. That’s a topic for a different post.