Author: francis

So, the Vicissitudes of Interviews?

I’ve recently gone through the process of finding another job. It was quite frustrating. I thought I’d write a few comments on the various processes I went through for fun and profit.

1. The code test that was an exercise in catching people out

For the record, I’ve taught classes on how to do Test Driven Development. I’ve been writing software professionally since my sandwich year in 1985, and started actual coding in 1983 (in those days we didn’t have many PCs or much Internet, so most people didn’t code until they got to university). I originally learned TDD in Ruby at the European Rails conference in 2008 from (if I recall) one of the people who first contributed to RSpec. I’ve also run classes and written how-tos in languages other than Ruby (Perl and PHP – which were both interesting). I think it’s fair to say I know what I’m doing.

So I get asked to develop something using TDD. I get comments back saying these are the worst tests the person has ever seen, plus some other fairly snarky stuff. I can only conclude that they have never seen anything developed with the “TDD like you mean it” approach. This means you start with a test that makes sure the class just loads and all of the libraries are in place, then you work outwards from there.
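For the unfamiliar, that first test is almost embarrassingly small – something like this (a sketch; the class name and path are invented for illustration):

# spec/word_finder_spec.rb – the "does it even load?" test
require_relative "../lib/word_finder" # hypothetical class under test

RSpec.describe WordFinder do
  it "exists and its dependencies resolve" do
    expect(described_class).to be
  end
end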

I did have the flu when I did this and may have misunderstood the brief. I did the infamous “one small change” just before I took myself to my bed, and the cut I sent to GitHub didn’t load – I was almost seeing double at the time, but it was a silly mistake.

The thing is – if I were interviewing me, I would have found myself wanting to have a conversation. If the brief had been misunderstood, I would want to know why, because that’s an interesting conversation in itself. If I had seen an unfamiliar way of constructing the tests, I would again have wanted to talk to the candidate, because there may be something to learn there. The software I wrote did indeed use the API and show stuff on the screen as requested – for some reason that wasn’t right, and I don’t care enough to find out why.

For the uninitiated, a lint program is one that goes through the code and looks for common errors. It sometimes formats things as well, but not always. I used the standard gem to do linting and checking and it found nothing. I have a habit, when I just hammer Ruby code into the editor, of not putting in new lines except between method definitions, and even then I sometimes don’t bother. So my Ruby code straight off the press, as it were, can sometimes look a bit dense. But you can press the return key a few times, right? You’re a grown-up. I should have done this and had a last tidy of the code, but as I said I could hardly focus on the screen at the time and I just wanted to close it out.

That hurrying, and not just asking for a couple more days, was a mistake and I own it. I don’t own the other faults the reviewer found. I also reject entirely calling the tests poor – I used to teach this stuff; I might know more than you and have a different approach. Your ignorance is showing. You could try talking to me – I’ve been doing this stuff for nearly 40 years. Have you read my CV?

After this long I can read code in a variety of languages that may not be formatted at all well and (to be honest) I don’t notice. I might run the reformat command in my editor of choice (because sometimes you see indentation that suggests one block structure while the actual syntax tells a different story).

The rude review I saw was moaning about the code not being linted – it was linted to death, buddy. You mean formatted, and you don’t know the difference. If you want a new line between definitions and before blocks, say so in your spec. Standard doesn’t care about that, and neither does RubyMine’s formatter. Your inexperience is showing and you probably aren’t even aware of it. I mean, provide a rubocop config file (Layout/EmptyLineBetweenDefs is the relevant cop) and be done with it.

I definitely feel I dodged a bullet: imagine if I had ended up working with the individual who wrote that review. Maybe they were having a bad day too, but I gave up an entire weekend, feeling rough as a bear’s bum and worse and worse as time passed, and a little bit of courtesy wouldn’t have gone amiss – from their comments they spent about 5 minutes and never even attempted to understand my approach. We would probably have had a falling out eventually if they turned out to be as arrogant as they appear.

2. That Gem you wrote is all wrong

I’ve recently been playing with word games on my phone and wanted a little tool that could give me a list of options. To that end I wrote the silly Ruby gem werds that lets you do this. It has a deliberate design choice: it loads the whole dictionary it uses into memory once, so it can be interrogated over and over again from the console.

Obviously, this won’t work that well in a web context where the gem might be loaded once per request and use up lots of memory that then has to be released. But for my uses and prototyping it was fine. Future versions of the gem might even load things up into a Postgres database and use regular expressions as queries.
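The shape of that choice is roughly this (a sketch from memory, not the gem’s actual code – the dictionary path and method names are mine):

module Werds
  # Pay the load cost once; every query after that is a fast in-memory scan.
  # Fine for a long-lived console session, wasteful once per web request.
  def self.dictionary
    @dictionary ||= File.readlines("/usr/share/dict/words", chomp: true)
  end

  def self.matching(pattern)
    dictionary.grep(pattern)
  end
end

Werds.matching(/\Ac.t\z/) # => ["cat", "cot", "cut"], give or take your dictionary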

I put this gem on my CV, because why not? Problem is, according to someone who looked at it, the gem was wrong. I had made a deliberate decision that I might change in future. I’m not an idiot. I’ve been doing this stuff for a very long time. But instead of anyone having a conversation with me about that choice, the gem was simply wrong. Err, no, it works the way it works for what I consider to be good reasons – reasons that may change later.

So they decided not to go on with the interview process after the first interview (which I really enjoyed, by the way). I started to have a conversation with the recruitment agent about this and see if I could rescue the process, but then decided I couldn’t be bothered. I don’t play games and the incident had soured things for me.

So, if I hadn’t had anything on GitHub at all, or hadn’t mentioned that gem, it might have gone forward.

Why play some game of gotcha and demonstrate you don’t know how to talk to someone with my level of experience? What was the benefit, there? Another bullet dodged, I think.

3. The one that didn’t get away

After the initial HR “has this guy only got one head and can he speak coherently” interview, I did a very interesting exercise with a couple of people where we talked through the potential faults and improvements we could make to some Rails controller code. It was fun and let me establish a human relationship with them.

The final stage was me doing a simple but hard-enough task using TDD – we did it together, with them watching while I talked through my process.

Can you see the difference here?

First, they talked to me and got to know how I think. Second, they watched my process and had me explain it to them while we worked together on something.

These people know what they’re doing and I’m really happy to be going forward with them.

If you treat interviews as some zero-sum game played at a distance, you will probably not be able to find candidates who have a depth of experience you’re not used to, an approach you’ve never thought of, or any one of a number of things that could make your team more diverse or deeper in experience. It’s not school, it’s not some sport, it’s work. You will end up with candidates who look and think like you do, because that’s what your rules will pull in.

Put away the childish things and talk to people. It may appear to take longer, but when there are very few people with the skills you need, you don’t want to turn down a candidate just because you haven’t got a clue why they approached a problem in a way you either don’t understand or perhaps don’t agree with. I’ve been doing this stuff for some small while now, and I never do anything without a reason I can back up.

My approach to these exercises is always that code is a conversation. If you end up not wanting to talk to me, I’m more than happy not to – the chances are we won’t work together well.

I will own the lesson here: if you’re not feeling well, ask for more time and try later. That one was on me.

Organisational muddiness

Big ball of mud

Way back in the dark mists of the early 2000s design patterns were a new-ish idea. People went crazy for them, no project planning meeting was completed without several links to various design pattern architectures and blah blah. Eventually people realised that the best way to write software was to write software, in an unconscious appeal to the original principles in the Agile Manifesto.

The tools we used that created bazillions of class diagrams that were then translated into incomprehensible Java were quietly put to one side by most people.

Some patterns persist and are useful, things like Factory, and sometimes refactoring to a known pattern can actively simplify how things work. There are also architectural patterns – Martin Fowler wrote an excellent book explaining them (Patterns of Enterprise Application Architecture) and DHH used a lot of them when designing Rails. I think this is why Rails worked so well: it was opinionated software, and the opinions were the result of a lot of careful thought and deep experience.

The big ball of mud pattern is what you get when you don’t take steps to design your architecture and instead end up with something that works but is internally incoherent. You see this a lot with successful start-up companies. They ran really fast and didn’t try to follow long-term survival practices such as behaviour driven development. If they are successful, then after about 18 months everything comes to an abrupt halt, because the codebase is extremely hard to maintain and full of decisions which hindsight shows to be poor and inflexible.

I contend that there is also an organisational equivalent: you end up with whatever structure is obvious when confronted with the next problem, and suddenly you have dev teams that work on the same thing but differently, a devops team because everyone else has one, and sales teams using squads because it’s hip – and everyone’s in too much of a hurry to read Deming or Goldratt and properly think about what works, for them, now.

Conway’s law

This was originally meant as a joke, but it’s been seen in the wild so many times that it isn’t one any longer. Put simply, the architecture of the software you build and the way your organisation is structured will mirror each other. In essence, if you are a pragmatic ball of mud with muddy communications, then the software you produce will be one too.

Whirlpool of mud

So you end up with a self-reinforcing pragmatic organisation that will continue to create muddy solutions with muddy architectures for muddy conceptions of what your customers want. You need to be as deliberate about your organisation’s structure as you are about the software you need to run it.

Successful systems

A system that works is built from multiple smaller systems that also work. This sounds obvious, but people often ignore broken or poorly performing subsystems and throw things together.

Poor process produces poor outcomes

It’s there in the section title, mud breeds mud, haste breeds waste, rushing breeds redoing, inconsistency breeds incoherence.

Wishing

Not addressing the mud means you are just wishing it will go away. This doesn’t generally work that well, but you often find it when you’re working with people who have read too many self-help books about building confidence through affirmations, and who confuse their desires with what causality makes possible.

Sales driven organisations can suffer really badly from this.

Solutions – a starter

Constancy of purpose

When I ran the Lean Teams consultancy, the very first thing I did in a workshop for my clients was a session about constancy of purpose. In a nutshell: can you write down what your organisation does in a sentence or two? When you make decisions, do they align with it or not? If not, don’t!

The other thing this helps you do is identify people’s needs – both the customers’ and the organisation’s.

Note it all down

  • Identify teams
  • Define responsibilities
  • Define info flows and transformations
  • Use the documentation to create consensus

Be mindful of Conway’s law: is the shape of the organisation correct for what you’re trying to do? Also, noting down means using pictures; long screeds of text are hard to write accurately and follow precisely. Interaction diagrams, with notes where that works, are by far the best way of doing this.

Double Looped learning

  • Single looped learning is being able to turn a handle and get a result.
  • Double looped is knowing what the objective is behind the turning of the handle and maybe changing it.

You need to get to single loop first. You will have a rough idea of what needs to be done and need to work out how to do it. Again, use pictures. Only attending to the single loop can make you very myopic and changes outside the loop can remove the need for it, causing you to waste your time.


Good processes

Good processes have these qualities:

  • Repeatability
  • Lightweight
  • Well defined interfaces with other processes
  • Learning
  • Less than ten steps

Lightweight means that a decent level of automation is also required – being able to repeat something that involves twenty manual steps, any of which could be forgotten, is neither scalable nor useful.

Putting it together

After writing it all down, draw your processes end to end.

  • Do they match the constancy of purpose?
  • Can they be done within one team?
  • Do they pass through many teams?
  • Have you documented any data transformations needed when passing between teams or roles?

Once you have this you need to start building the second loop. Constancy of purpose should be giving you an idea of the value each process and step adds, lots of data transforms means you’re involving lots of different teams or functional areas – is that a good idea?

The next thing you need to do is work on a consensus about how things are done.

Then it’s time to start thinking about the second loop. If you do some googling you will find that there are even more loops beyond the second.

The end goal is not functionality

Many years ago I read an academic paper that talked about why the Apple Lisa (you won’t remember it) failed. The paper put it down to the three Fs: functionality, functionality and functionality. This is quite a witty point, and indeed the number of new startup websites one sees that don’t actually say what the app is supposed to do – because they’re trying to persuade their confused potential customers to tell them what they want – is a testament to the idea.

There is a problem with this view, though. People who use your stuff don’t want functionality, they want capability. They have a problem they want to solve and you’re offering them a way to either get rid of it or manage it down to tolerability. The Lisa didn’t allow people to do things they needed to do. The Mac, later, was originally the only platform that did desktop publishing well and reasonably affordably (compared with the price of a conventional printing studio).

So, when you’re doing your interviews and looking for user pain you can address, you need to understand that the remedy isn’t functionality, it’s capability. Even making a hard thing a little easier to do is worthwhile. This changes your perspective on what you can and should deliver.

As with a lot of engineering-type problems, take a step back and think about what the wider picture is. It’s always a valuable exercise.

Mistakes are a good thing

If you make a bad mistake you’ve proved two things to the people you’re working with:

  1. You aren’t sitting around doing nothing
  2. You can learn

Many years ago I read a story about a man who had cost the company he worked for several million dollars. He managed to do some other things that mitigated the loss, but there was still a loss of a couple of hundred thousand bucks.

He glumly told his boss what had happened and how he had managed to claw back a good chunk of the cash.

He was very surprised when he wasn’t fired.

“Why didn’t you fire me?”

“Why would I? I just spent $200k training you how to manage risk!”

There we have it. If you can take a breath, look at what caused the mistake, and why, and how to reduce the chance of it happening again, you’ve come out ahead.

Don’t let the fear of mistakes paralyse you. If you’re junior you won’t have been put in a place where you can do any serious damage if you’re working for people with any sense. If you’re senior, well, there’s always something new you need to learn.

The newbie super power

If you’re new or junior in your job you can sometimes find yourself paralysed by how little you know. This can come from two different places:

  • Context. There are so many choices you don’t know where to start.
  • Fear. Your ignorance means you don’t know what to do to even try to start.

You can fix both of these by using your newbie super power:

Ask questions.

We’re taught in school that you’re not supposed to cheat – you’re just supposed to know things, and teacher will be angry if you don’t. You were taught that cribbing from other people is bad.

But real life, in a real job, is not an exam. Everyone, selfishly, needs you to become productive and be able to make a contribution. The only way that can happen is if you ask what they need you to do. If you don’t understand, ask again. I would try something first, just so you have something to talk about and learn from, but asking is a super power.

Watch the people who are senior. They listen, and they ask. This is a super power you need to master.

RSpec failing with no such method validate_uniqueness_of

This was fun, in a not-really-that-much-fun kind of way.

I am getting an old project ready to move to a different Heroku stack, which has involved a lot of running bundle and scratching my head.

So I fire up the tests … and some fail with

NoMethodError:
  undefined method `validate_uniqueness_of' for #<RSpec::ExampleGroups::Blah::Validation::.....>

So, let the debugging begin:

  1. Is it a problem with Shoulda? Try everything, and discover that shoulda really is looking for that actual method. Also discover that there are some useful methods on models that shoulda calls by default, but none of them have the word unique in them.
  2. Have another project that works; go into the debugger and discover that it calls the matcher and doesn’t just look for a method.
  3. Stroke chin, then add type: :model to the spec declaration, because the other project needed it and it was worth a try. Problem goes away.
  4. Go diving into yet another project that has a spec helper that works without type: :model – newer versions of RSpec have a cheeky config attribute:
config.infer_spec_type_from_file_location!

I add this to my helper and all the tests pass.
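For anyone following along, the setting lives inside the configure block (spec_helper.rb here; rails_helper.rb in newer rspec-rails setups):

# spec_helper.rb
RSpec.configure do |config|
  # Tag specs under spec/models with type: :model automatically, which is
  # what makes shoulda's model matchers available in those example groups.
  config.infer_spec_type_from_file_location!
end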

So, I thought I would put something here that the next mug might find if the same thing happens to them, so they don’t lose a morning to it.

This has been a community service announcement, have fun.

Today I rediscovered the Ruby double splat operator

Old Ruby hands will know that you can use the splat operator * to turn an array into a list of arguments:

def fred(x, y, z)
  puts x
  puts y
  puts z
end

argy_bargy = [1, 2, 42]

fred(*argy_bargy)
# prints:
# 1
# 2
# 42

There is also another trick you can do with this:

a = 1

[*a] # => [1]

a = [1]

[*a] # => [1]

Here the splat lets you ensure you are always working with an array. This is thought to be a bit old-fashioned now, and Rubyists use the Array() method, which does the same thing but with less syntactic magic. Back in the days before Ruby 2 (or so) it was the only way to do it.
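For comparison, here’s Array() doing the same normalisation (with the bonus that it treats nil sensibly):

Array(1)   # => [1]
Array([1]) # => [1]
Array(nil) # => [] (and [*nil] gives [] as well)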

I’m a great believer in trying to avoid the cardinal sin of Primitive Obsession – in Ruby this usually manifests itself as storing hashes everywhere, so you end up with lots of code, parked in the wrong place, full of several levels of square brackets manipulating them. Ironically, Rails is full of this, and you often find yourself peering through documentation looking for the options hash to see what you can ask an object to do.

So in an ideal world you might want to be able to create objects on the fly from hashes (say, ones stored as JSONB in a Postgres database 😉) that have some intelligence in them beyond the simple, somewhat cumbersome, data carrier that Hash offers. So you could try something like this:

Thing = Struct.new(:x, :y)

harsh = { x: 1, y: 2 }

wild_thing = Thing.new(*harsh.values)

The only problem here is that you lose the useful semantics of the Hash: if it is initialised in a different order, or built from a merge, then you’re screwed – { y: 2, x: 1 }.values gives you [2, 1], and suddenly x is 2. The hash keys and the Struct’s keys having the same names is a mere semantic accident. This isn’t what you want. Instead, let’s use the double splat:

class Thing
  attr_reader :x, :y

  def initialize(x:, y:)
    @x, @y = x, y
  end
end

harsh = { x: 1, y: 2 }

wild_thing = Thing.new(**harsh)

You now have a way to create an object you can start adding methods to from your hash source, one that is isolated from argument order changes. There is one last wrinkle: JSONB keys are strings, so you need to symbolize them first (symbolize_keys comes from ActiveSupport; plain Ruby’s transform_keys(&:to_sym) does the same job):

hash_from_json = { "x" => 1, "y" => 2 }

wild_thing = Thing.new(**hash_from_json.symbolize_keys)

Now you’re almost ready to take chunks of structured JSON and quickly and reliably turn them into usable objects with things like the <=> operator in them, rather than

instead["of"]["quoted"]["nightmare"]

which is a Hash and nothing more. You can even add methods that implement [] and []= to keep Hash-like capabilities with a bit of metaprogramming, perhaps for backward compatibility, if you really have to.
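A minimal sketch of that backward-compatibility shim (note it widens attr_reader to attr_accessor so []= has writers to call):

class Thing
  attr_accessor :x, :y # writers needed for []= below

  def initialize(x:, y:)
    @x, @y = x, y
  end

  # Hash-style reads for callers that still expect a Hash.
  def [](key)
    public_send(key)
  end

  # Hash-style writes, delegating to the attr_accessor writers.
  def []=(key, value)
    public_send("#{key}=", value)
  end
end

thing = Thing.new(x: 1, y: 2)
thing[:x]     # => 1
thing[:y] = 3

Whether that indirection is worth it is a judgement call; usually callers can just learn to use the readers.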

I have arrays of these, mapping to diary entries in an app I’m working on. I’m also very, very lazy when I expect the computer to do the work: I want to solve problems once and have them stay solved, with as little fiddly, RSI-inducing shift-pressing for brackets and things as possible. So:

class DiaryEntry
  attr_reader :name, :start_time, :finish_time

  def initialize(name:, start_time:, finish_time:)
    @name, @start_time, @finish_time = name, start_time, finish_time
  end
end

def apply_schedule(schedule, offset:)
  make_entry = ->(entry_hash) {
    DiaryEntry.new(**entry_hash.symbolize_keys)
  }
  all_entries = schedule.map(&make_entry)
  # ...
end

Code has been split across multiple lines to ease understanding. I now have a nice array of proper objects that I can filter and map using methods instead of clumsy brackets everywhere. We can sort them and so on if we wish, too.

This last example also shows one of my favourite techniques: remapping arrays of things using simple lambdas, or using them with methods like filter inherited from Enumerable. A named lambda isn’t always necessary, but it follows the useful intention revealing name pattern that can make your code much clearer. Lambdas used like this can also be passed as arguments to methods for filtering or formatting, which can be very functional in style without having to go all object-oriented.
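A contrived sketch of that style, with names invented for illustration:

# Intention-revealing lambdas: they compose with Enumerable methods
# and can be passed around as arguments like any other object.
long_enough = ->(word) { word.length >= 5 }
shout       = ->(word) { word.upcase }

def transform(items, mapper)
  items.map(&mapper)
end

words = %w[cat cattle dog dogged]
transform(words.filter(&long_enough), shout) # => ["CATTLE", "DOGGED"]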

For extra marks, dear reader, there’s a post on stack overflow that uses a technique like this to create structs from hashes on the fly. See if you can find it and come up with some uses for it; I’m sure there are many.

Phoenix: It was nice while it lasted

We decided to pull the plug on using Phoenix.

The people we know who are using it are mostly well funded and have the time to learn to use it and find their way around things that are a struggle.

We are not.

I wanted to do something that would have taken me five minutes in Rails and Phoenix just wouldn’t do it. Or, at least, the way to do it wasn’t documented.

I also got burned, and wasted a lot of time, because the Phoenix commands have been renamed to be phx instead of phoenix.

I ended up creating a Phoenix 1.2 app by mistake because the old command wasn’t deleted.

It’s annoying. I like the Elixir language a lot. But it’s back to Ruby on Rails because I don’t have time or the dime.

I think in about a year I might come back to it because it will be a bit more mature.

Notes on my first Ember/Phoenix app

I hit a bit of a problem. I’m writing a quotation system for my company.

I have an entity called EstimateType – it is the driver for the rest of the setup. Basically it’s a name, used to group the pieces of the estimate together. So you have, say, small business or sole trader, and each may have the same parts of the quote but with different details (for example, sole traders are generally taught one to one and we charge a flat fee, not per delegate).

I built a prototype front end in Ember and used the Mirage library.

Using Mirage I just sent the EstimateTypeId to the app and it worked.

The back end’s in Phoenix and I’m using a library called ja_serializer that’s supposed to support Ember out of the box. Having done some experimenting with hand building stuff that can talk to Ember I think this is a good idea and will save some time.

The code generated by this library puts the parent away in a different part of the JSON from the main data, in a place called relationships. This would be fine (I suppose), but the parent’s ID doesn’t end up getting saved, either by the generated controllers or by the generated changesets (I had to add it in).

I’m really not convinced this is right.

10 Apr

The generator doesn’t do parent-child properly. It essentially generates the same set of models, tests and controllers that you would get if there were no parent. This is a bit useless and is what got me confused.

I added in some more methods to the tests that create the parent entity and then put it into the queries and structs used by the wrapper methods in the Estimates module (which is the main one for this part of the app).

I’m still a bit meh about having to put things into a module for different parts of the app, which I think came in with 1.3. It’s nice, but often those decisions at the beginning of a development or design run will seem not quite right, and then you get into the problem of asking yourself if it’s worth moving things around. I’d far rather have the problem of deciding if it was worth pushing things into domains and/or namespaces because my app had become overcomplex. It feels like adding an extra layer of indirection for its own sake, and I’ve had enough of that from the days I used to write lots of Java.

Now I have a set of tests that run, and controllers that do parent-child correctly.

I did get somewhat hampered when my CI system was failing and not deploying even though the tests were running locally. I have since worked out that this was because

MIX_ENV=test mix ecto.reset

has running the seeds built into its alias. I’ve since added these aliases to my mix.exs file:

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.setup-noseed": ["ecto.create", "ecto.migrate"],
    "ecto.reset-test": ["ecto.drop", "ecto.setup-noseed"],
    "test": ["ecto.create --quiet", "ecto.migrate", "test"]
  ]
end

And now I do ecto.reset-test when I want to trash the test db. I still haven’t worked out how to tell mix to always run this in the test environment, but I’m not worrying about that now.

I’ve also added

 {:mix_test_watch, "~> 0.5", only: :test},
 {:ex_unit_notifier, "~> 0.1", only: :test},

to my deps, so that I can run the equivalent of guard. Test watch auto-sets the environment to test, but I added only: :test because I didn’t want the deps in my production setup. It does mean I need to put MIX_ENV=test onto the command line or it won’t compile and run, but it’s no great hardship.

Later the same day

I must have used the wrong generator commands before, because this now seems to at least attempt to create the parent records:

mix ja_serializer.gen.phx_api Estimates Estimate estimates company_name:string aspiration:string prepared_for:string email:string body:map estimate_type_id:references:estimate_types

The estimates tests now contain

estimate_type = Repo.insert!(%RosieEstimatesServer.Estimates.EstimateType{})

in the controller tests. The tests are still all busted, but at least there’s a starter for 10 there now, where there wasn’t before. I still had to set up an alias for Repo, though.

And another thing

The autogenerated changesets don’t have the parent id in them – maybe they’re supposed to be used differently – but in the absence of any decent examples it’s a bit hard to get to the bottom of.

In all cases I’ve had to add estimate_type_id to the cast and validate_required clauses in the model files.

In addition

|> foreign_key_constraint(:estimate_type_id)

wasn’t put in automatically, which seems a bit weird.

Meh

In order to get errors returned in a format Ember likes, I needed to change views/changeset_view.ex so that it returned the errors in a compatible list:

def render("error.json", %{changeset: changeset}) do
  %{errors: errors_to_json_format(translate_errors(changeset))}
 end

defp errors_to_json_format(errors) do
  errors |> Enum.map(fn {k, v} -> %{detail: List.first(v), source: %{ pointer: "data/attributes/#{k}"}} end)
end

as in, the old format isn’t supported any more. This code needs a bit more refactoring, but right now it works. Thanks to this guy for the tip.

Also, Ember pluralises the entity name, so the controller methods needed to be changed:

- def create(conn, %{"data" => _data = %{"type" => "estimate", "attributes" => estimate_params}}) do
+ def create(conn, %{"data" => _data = %{"type" => "estimates", "attributes" => estimate_params}}) do

As in, pluralise the type.

Happy days.

And …

import { underscore } from '@ember/string';
...

keyForAttribute(attr) {
  return underscore(attr);
}

in the serializer config – because the Elixir side wants underscores inbound, and I lost patience with “JSON API sez X” pedantry 🙂

Book Review: Black Box Thinking: Matthew Syed

Black Box Thinking: The Surprising Truth About Success by Matthew Syed
My rating: 5 of 5 stars

I found the ideas in this book really resonated with my experience in a number of industries.

Syed’s thesis is that we live in a society that usually plays the blame game when things go wrong. He contrasts this with the approach taken by the aviation industry, where mistakes and errors are seen as problems with the system, not with individuals. Because he draws from this industry, the book uses the black box from aeroplanes as its central metaphor.

If something goes catastrophically wrong it’s not the pilot that’s to blame but instead the system as a whole that allowed the mistake to happen. This systems approach is why there are so few aviation accidents.

In the early parts of the book he contrasts aviation errors with medical errors. Doctors are trained not to admit to failure and to be “experts”. So instead of learning from failure there’s a culture of “accepting the inevitable” and not trying to stop things happening again. Indeed, there’s a really silly idea that doctors’ training makes them infallible.

Syed gives an account of one awful medical accident where a young mother ended up with brain damage because her throat closed up under anaesthetic and the doctors spent so long trying to insert a breathing tube that they didn’t do a tracheotomy in time. It turns out that under stress people lose track of time, and can end up going past hard limits (like how long someone can survive when they can’t breathe) with bad consequences. In this case the poor woman’s husband was an airline pilot and he didn’t accept the “one of those things” arguments.

Eventually, after a fight, the husband managed to get to the truth of what had happened. Instead of being bitter he shared what he had found in medical journals, and made sure the simple things you can do to check that intense focus hasn’t made you blind became more widely known. For example, everyone involved should be able to call for certain actions, and hierarchies need to be questioned. Two people work on a problem, but one of them stays out of the decision-making loop and the cognitive stress, so they can see what’s happening and call time.

This information has saved a lot of lives, and surgeons from all over the world have written to the woman’s husband. There are examples of how this telescoping-time phenomenon caused aircraft crashes too, but that industry changed the way it did crisis management to make the problem far less likely to occur.

It sounds simple, doesn’t it? Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.

Next he looks at experts. If there is no feedback loop after they qualify as experts then they do not improve. Without being able to measure your success you are stuck, probably making the same mistakes over and over again.

If we wish to improve the judgement of aspiring experts then, we shouldn’t just focus on conventional issues like motivation and commitment. In many cases, the only way to drive improvement is to find a way of ‘turning the lights on’. Without access to the ‘error signal’, one could spend years in training or in a profession without improving at all

And of course failure is necessary, as long as systems are in place to learn from it:

beneath the surface of success – outside our view, often outside our awareness – is a mountain of necessary failure

This goes much further than the old saw about learning from failure. Syed’s argument is that you must be systematic about it and not blame individuals for systemic failures. But it is also important that individuals take responsibility for what happens and own up when things go wrong. Without this there can be no learning.

Another extremely interesting thread later in the book picks up on marginal gains. This is a way of finding improvements, used by teams like the British Cycling team, who were so successful at the Rio Olympics. In short: everything matters. Every detail that can hold back success or performance is important, and when you address them all they compound to create an unstoppable chain of improvements. Small gains – marginal gains – become the root of great success.

Marginal gains is not about making small changes and hoping they fly. Rather, it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.

They use the first test not to improve the strategy, but to create richer feedback. Only when they have a deeper understanding of all the relevant data do they start to iterate.

see weaknesses with a different set of eyes. Every error, every flaw, every failure, however small, is a marginal gain in disguise.

I heartily recommend this book: it’s easy to read and the stories make the examples stay in your head. I hadn’t heard of the marginal gains technique before, but I will be using it myself.
