Rspec failing with no such method validate_uniqueness_of

This was fun, in a not really that much fun kind of way

I am getting an old project ready to move to a different Heroku stack, which has involved a lot of running bundle and scratching my head

So I fire up the tests … and some fail with

  undefined method `validate_uniqueness_of' for #<RSpec::ExampleGroups::Blah::Validation::.....>

So, let the debugging begin

  1. Is it a problem with Shoulda? Try everything, and discover that shoulda is looking for that actual method. Also discovered that there are some useful methods on models that shoulda calls by default, but none of them have the word unique in them.
  2. Have another project that works, go into the debugger and discover that it calls the matcher and doesn’t just look for a method.
  3. Stroke chin, then add type: :model to the spec declaration, because the other project needed it and it was worth a try. Problem goes away.
  4. Go diving into the spec helper of the project that works; newer versions of RSpec have a cheeky config attribute

I add this to my helper and all the tests pass
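For the record, the attribute in question is infer_spec_type_from_file_location!, which arrived with rspec-rails 3: it derives type: :model (and friends) from where the spec file lives, which is what gets the shoulda matchers mixed in. Roughly this in the spec helper (the surrounding block is just the standard configure boilerplate):

```ruby
# spec_helper.rb (or rails_helper.rb in newer setups)
RSpec.configure do |config|
  # Derive the spec type (:model, :controller, ...) from the file's
  # location under spec/, instead of needing type: on every describe
  config.infer_spec_type_from_file_location!
end
```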

So, I thought I would put something here that the next mug might find if the same thing happens to them and they won’t lose a morning to it.

This has been a community service announcement, have fun.

Today I rediscovered the Ruby double splat operator

Old Ruby hands will know that you can use the splat operator * to turn an array into a list of arguments:

def fred(x, y, z)
  puts x
  puts y
  puts z
end

argy_bargy = [1, 2, 42]

fred(*argy_bargy)

=> 1
=> 2
=> 42

There is also another trick you can do with this:

a = 1

p [*a]

=> [1]

a = [1]

p [*a]

=> [1]

Here the splat lets you ensure you are always working with an array. This is thought to be a bit old-fashioned now, and Rubyists tend to use the Array() method, which does the same thing with less syntactic magic. Back in the days before Ruby 2 (or so) the splat was the only way to do it.
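For comparison, Kernel#Array normalises in the same way, and also handles nil tidily:

```ruby
# Kernel#Array does the same normalisation as [*a], plus a tidy nil case
p Array(1)      # => [1]
p Array([1, 2]) # => [1, 2]
p Array(nil)    # => []
```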

I’m a great believer in trying to avoid the cardinal sin of Primitive Obsession – in Ruby this usually manifests itself as storing hashes everywhere, and ending up with lots of code, full of several levels of square brackets, that manipulates them and ends up parked in the wrong place. Ironically Rails is full of this, and you often find yourself peering through documentation looking for the options hash to see what you can ask an object to do.

So in an ideal world you might want to be able to create objects on the fly from hashes (say ones stored as JSONB in a Postgres database, for instance 😉) that have some intelligence in them beyond the simple, somewhat cumbersome, data carrier that Hash offers. So you could try something like this:

Thing = Struct.new(:x, :y)

harsh = { x: 1, y: 2 }

wild_thing = Thing.new(*harsh.values)

The only problem here is you lose the useful semantics of the Hash, if it is initialised in a different order, or from a merge, then you’re screwed. The hash keys and the Struct’s keys having the same names is a mere semantic accident. This isn’t what you want. Instead let’s use the double splat:

class Thing
  attr_reader :x, :y

  def initialize(x:, y:)
    @x, @y = x, y
  end
end

harsh = { x: 1, y: 2 }

wild_thing = Thing.new(**harsh)

You now have a way to create an object you can start adding methods to from your hash source that is isolated from argument order changes. There is one last wrinkle, which is that JSONB keys are strings, so you need:

hash_from_json = { "x" => 1, "y" => 2 }

wild_thing = Thing.new(**hash_from_json.symbolize_keys)
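One caveat worth noting: symbolize_keys comes from ActiveSupport, so it’s only there inside Rails. In plain Ruby (2.5 onwards) transform_keys does the same job:

```ruby
# Plain-Ruby equivalent of ActiveSupport's Hash#symbolize_keys
hash_from_json = { "x" => 1, "y" => 2 }
symbolized = hash_from_json.transform_keys(&:to_sym)
# symbolized is now { x: 1, y: 2 }
```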

Now you’re almost ready to take chunks of structured JSON and quickly and reliably turn them into usable objects, with things like the <=> operator in them.


That’s a Hash and nothing more. You can even add methods that implement [] and []= to keep Hash-like capabilities with a bit of meta programming, perhaps for backward compatibility, if you really have to.
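A minimal sketch of that backward-compatibility shim (the method bodies are my guess at the simplest approach, not taken from any real app):

```ruby
class Thing
  attr_reader :x, :y

  def initialize(x:, y:)
    @x, @y = x, y
  end

  # Hash-like reads: thing[:x] delegates to the reader method
  def [](key)
    public_send(key)
  end

  # Hash-like writes: thing[:y] = 9 pokes the instance variable directly
  def []=(key, value)
    instance_variable_set("@#{key}", value)
  end
end

thing = Thing.new(x: 1, y: 2)
thing[:x]     # => 1
thing[:y] = 9
thing.y       # => 9
```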

I have arrays of these; they map to diary entries in an app I’m working on. I’m also very, very lazy when I can expect the computer to do the work: I want to solve problems once and have them stay solved, with as little fiddly, RSI-creating pressing of shift to get brackets and things as possible. So:

class DiaryEntry
  attr_reader :name, :start_time, :finish_time

  def initialize(name:, start_time:, finish_time:)
    @name, @start_time, @finish_time = name, start_time, finish_time
  end
end

def apply_schedule(schedule, offset:)
  make_entry = ->(entry_hash) { DiaryEntry.new(**entry_hash.symbolize_keys) }
  all_entries = schedule.map(&make_entry)
  # ...
end

Code has been split across multiple lines to ease understanding. I now have a nice array of proper objects that I can filter and map using methods instead of clumsy brackets everywhere. We can sort them and so on if we wish, too.

This last example also shows one of my favourite techniques, which is to remap arrays of things using simple lambda functions, or to use them with methods like filter inherited from Enumerable. A named lambda isn’t always necessary, but it follows the useful pattern of the intention-revealing name, which can make your code much clearer. Using lambdas like this also means they can be passed as arguments to methods for filtering or formatting, which can be very functional in style without having to go all object-oriented.
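To make that concrete, here is a small sketch; the entry data and the long_entry predicate are made up for illustration, and Struct with keyword_init stands in for the DiaryEntry class above:

```ruby
# Struct stands in for the DiaryEntry class; keyword_init (Ruby 2.5+)
# lets us build it from keyword arguments / double-splatted hashes
DiaryEntry = Struct.new(:name, :start_time, :finish_time, keyword_init: true)

entries = [
  DiaryEntry.new(name: "standup",  start_time: 0, finish_time: 15),
  DiaryEntry.new(name: "workshop", start_time: 0, finish_time: 120)
]

# An intention-revealing named lambda, reusable with any Enumerable method
long_entry = ->(e) { e.finish_time - e.start_time > 60 }

p entries.select(&long_entry).map(&:name) # => ["workshop"]
```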

For extra marks, dear reader, there’s a post on Stack Overflow that uses a technique like this to create Structs from hashes on the fly. See if you can find it and come up with some uses for it; I’m sure there are many.

Phoenix: It was nice while it lasted

We decided to pull the plug on using Phoenix.

The people we know who are using it are mostly well funded and have the time to learn to use it and find their way around things that are a struggle.

We are not.

I wanted to do something that would have taken me five minutes in Rails and Phoenix just wouldn’t do it. Or, at least, the way to do it wasn’t documented.

I also got burned, and wasted a lot of time, because the Phoenix commands have been renamed to be phx instead of phoenix.

I ended up creating a Phoenix 1.2 app by mistake because the old command wasn’t deleted.

It’s annoying. I like the Elixir language a lot. But it’s back to Ruby on Rails because I don’t have time or the dime.

I think in about a year I might come back to it because it will be a bit more mature.

Notes on my first Ember/Phoenix app

I hit a bit of a problem. I’m writing a quotation system for my company.

I have an entity called EstimateType – it is the driver for the rest of the setup. Basically it’s a name and it’s used to group the pieces of the estimate together, so you have, say, small business, or sole trader, and they each may have the same parts of the quote but the details will be different (for example sole traders are generally taught one to one and we charge a flat fee not per delegate).

I built a prototype front end in Ember and used the mirage library.

Using Mirage I just sent the EstimateTypeId to the app and it worked.

The back end’s in Phoenix and I’m using a library called ja_serializer that’s supposed to support Ember out of the box. Having done some experimenting with hand building stuff that can talk to Ember I think this is a good idea and will save some time.

The code generated by this library tucks the parent away in a different part of the JSON from the main data, in a place called relationships. This would be fine (I suppose) but the ID doesn’t end up getting saved either by the generated controllers or by the generated changesets (I had to add it in).

I’m really not convinced this is right.

10 Apr

The generator doesn’t do parent child properly. It essentially generates the same set of models, tests and controllers that you would get if there were no parent. This is a bit useless and is what got me confused.

I added in some more methods to the tests that create the parent entity and then put it into the queries and structs used by the wrapper methods in the Estimates module (which is the main one for this part of the app).

I’m still a bit meh about having to put things into a module for different parts of the app, which I think came in with 1.3. It’s nice, but often those decisions at the beginning of a development or design run will seem not quite right, and then you get into the problem of asking yourself if it’s worth moving things around. I’d far rather have the problem of deciding if it was worth pushing things into domains and/or namespaces because my app had become overcomplex. It feels like adding an extra layer of indirection for its own sake, and I’ve had enough of that from the days I used to write lots of Java.

Now I have a set of tests that run, and controllers that do parent child correctly.

I did get somewhat hampered when my CI system was failing and not deploying even though the tests passed locally. I have since worked out that this was because

MIX_ENV=test mix ecto.reset

has running the seeds built into its alias. I’ve since added these aliases to my mix.exs file:

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.setup-noseed": ["ecto.create", "ecto.migrate"],
    "ecto.reset-test": ["ecto.drop", "ecto.setup-noseed"],
    "test": ["ecto.create --quiet", "ecto.migrate", "test"]
  ]
end

And now I do ecto.reset-test if I want to trash the test db. I still haven’t worked out how to tell mix to always run this with the test environment, but I’m not worrying about that now.

I’ve also added

 {:mix_test_watch, "~> 0.5", only: :test},
 {:ex_unit_notifier, "~> 0.1", only: :test},

to my deps, so that I can run the equivalent of guard. Test watch auto-sets the environment to test, but I added only: :test because I didn’t want the dep in my production setup. It does mean I need to put MIX_ENV=test onto the command line or it won’t compile and run, but it’s no great hardship.

Later the same day

I must have used the wrong generator commands before, because this one does at least attempt to create the parent records:

mix ja_serializer.gen.phx_api Estimates Estimate estimates company_name:string aspiration:string prepared_for:string email:string body:map estimate_type_id:references:estimate_types

The estimates tests now contain

estimate_type = Repo.insert!(%RosieEstimatesServer.Estimates.EstimateType{})

in the controller tests. The tests are still all busted, but at least there’s a starter for ten there now where there wasn’t before. I still had to set up an alias for Repo, though.

And another thing

The autogenerated changesets don’t have the parent id in them – maybe they’re supposed to be used differently – but in the absence of any decent examples it’s a bit hard to get to the bottom of.

In all cases I’ve had to add estimate_type_id to the cast and validate_required clauses in the model files.

In addition

|> foreign_key_constraint(:estimate_type_id)

Wasn’t put in automatically, which seems a bit weird.


In order to get errors returned in a format Ember likes, I needed to change views/changeset_view.ex so that it returned the errors in a compatible list:

def render("error.json", %{changeset: changeset}) do
  %{errors: errors_to_json_format(translate_errors(changeset))}
end

defp errors_to_json_format(errors) do
  errors
  |> Enum.map(fn {k, v} ->
    %{detail: List.first(v), source: %{pointer: "data/attributes/#{k}"}}
  end)
end

As in, the old format isn’t supported any more. This code needs a bit more refactoring, but right now it works. Thanks to this guy for the tip.

Also, Ember pluralises the entity name, so the controller methods needed to be changed:

- def create(conn, %{"data" => _data = %{"type" => "estimate", "attributes" => estimate_params}}) do
+ def create(conn, %{"data" => _data = %{"type" => "estimates", "attributes" => estimate_params}}) do

As in, pluralise the type.

Happy days.

And …

import { underscore } from '@ember/string';

keyForAttribute(attr) {
  return underscore(attr);
}
in the serializer config – because Elixir inbound wants underscores and I lost patience with JSON API sez X pedantry 🙂

Book Review: Black Box Thinking: Matthew Syed

Black Box Thinking: The Surprising Truth About Success by Matthew Syed
My rating: 5 of 5 stars

I found the ideas in this book really resonated with my experience in a number of industries.

Syed’s thesis is that we live in a society that usually plays the blame game when things go wrong. He contrasts this with the approach of the aviation industry, where mistakes and errors are seen as problems with the system, not with individuals. Because he draws from this industry, he uses the black box from aeroplanes as the central metaphor.

If something goes catastrophically wrong it’s not the pilot that’s to blame but instead the system as a whole that allowed the mistake to happen. This systems approach is why there are so few aviation accidents.

In the early parts of the book he contrasts aviation errors with medical errors. Doctors are trained not to admit to failure and be “experts”. So instead of learning from failure there’s a culture of “accepting the inevitable” and not trying to stop things happening again. Indeed there’s a really silly idea that doctors’ training makes them infallible.

Syed gives an account of one awful medical accident where a young mother ended up with brain damage because her throat closed up under anaesthetic and the doctors spent so long trying to insert a breathing tube that they didn’t do a tracheotomy in time. It turns out that under stress people lose track of time, and can end up going past hard limits (like how long someone can survive when they can’t breathe) with terrible consequences. In this case the poor woman’s husband was an airline pilot and he didn’t accept the “one of those things” argument.

Eventually, after a fight, the husband managed to get to the truth of what had happened. Instead of being bitter he shared what he had found in medical journals, and made sure that the simple things you can do to check that your intense focus hasn’t made you blind were more widely known. For example, you can make sure that all the people involved can call for certain things; hierarchies need to be questioned. Two people work on a problem, but one of them stays out of the decision-making loop and the cognitive stress, so they can see what’s happening and call time.

This information has saved a lot of lives. The woman’s husband has been written to by many surgeons all over the world. There are examples of how this telescoping time phenomenon caused crashes of aircraft, but that industry changed the way it did crisis management to make the problem far less likely to occur.

It sounds simple, doesn’t it? Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.

Next he looks at experts. If there is no feedback loop after they qualify as experts then they do not improve. Without being able to measure your success you are stuck, probably making the same mistakes over and over again.

If we wish to improve the judgement of aspiring experts then, we shouldn’t just focus on conventional issues like motivation and commitment. In many cases, the only way to drive improvement is to find a way of ‘turning the lights on’. Without access to the ‘error signal’, one could spend years in training or in a profession without improving at all

And of course failure is necessary, as long as systems are in place to learn from it:

beneath the surface of success – outside our view, often outside our awareness – is a mountain of necessary failure

This goes much further than the old saw about learning from failure. Syed’s argument is that you must be systematic about it and not blame individuals for systematic failures. But also it is important that individuals take responsibility for what happens and own up when things go wrong. Without this there can be no learning.

Another extremely interesting thread later in the book is when he picks up on marginal gains. This is a way to find improvements, used by teams like the British cycling team, who were so successful at the Rio Olympics. In short: everything matters; every detail that can hold back success or performance is important, and when you address them all they compound to create an unstoppable chain of improvements. Small gains, marginal gains, become the root of great success.

Marginal gains is not about making small changes and hoping they fly. Rather, it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.

They use the first test not to improve the strategy, but to create richer feedback. Only when they have a deeper understanding of all the relevant data do they start to iterate.

see weaknesses with a different set of eyes. Every error, every flaw, every failure, however small, is a marginal gain in disguise.

I heartily recommend this book, it’s easy to read and the stories make the examples stay in your head. I hadn’t heard of the marginal gains technique but will be using it myself.

View all my reviews

Comparing velocity of agile teams is futile


This relates back to the somewhat thorny issue of estimating how long things will take so people can plan how to allocate resources.

As creators of value-giving stuff we need to be able to make promises to our friends and paymasters that they will get the things they need to be able to do their business. Enterprises of all stripes want to know that they can invest a specific amount of cash and have something that will make more cash. Preferably more than they invested, and in a timescale that isn’t too far in the future.

The problem is they can’t. When you are creating something new it is by nature an unpredictable beast. Instead what you have to do is take steps to minimise the cost of mistakes and maximise the results you get.

The cost of mistakes is also why we work in short iterations: we know we are human and will make them. This isn’t a bad thing, it just lets us experiment and learn without having to bet the farm. Because of this there is a disconnect right at the heart of what’s laughably called software engineering. Businesses invest to make money, and any software you create is an investment. However, you can’t guarantee the results, or even how long it will take to get somewhere useful, and people don’t like that.

The quest for agility

Agile was a response to this. It came from the makers of software, who said that they couldn’t reasonably commit to very large projects with any degree of certainty. It’s all about mitigating the risks by trusting people to think small and act small in well-bounded contexts. So you eat an elephant one bite at a time: you develop large systems one small iteration at a time, and the small iterations allow the business to change its mind relatively easily and cheaply as well.

Development is a process and you discover things and questions you didn’t know you needed to answer as you follow it through. Small iterations let you answer these questions before they derail you.

Typically a large system is broken down into deliverable chunks, and the chunks into projects. The projects become a list of tasks that are small enough for one or two people to understand and deliver. These are then sorted into priority order and a series of iterations is planned roughly.

Then we plan an iteration in detail. Iterations are time-boxed. No task should take more than a couple of days at most, and we try to work out how many tasks we can get done in each iteration.

This is where velocity comes in.

Velocity explained

The team get together and give each piece (or story, as it’s called in some schemes) an arbitrary amount of effort they think it will take, allocating it a number of points depending on the difficulty. It’s important to emphasise here that the key word is think. They don’t actually know, because if they did they’d be playing the lottery and living the high life. It takes little time to do this, and they work out what will fit into the next iteration, plus a couple of extra things they could fit in if things go well. It’s also not effort, as such, but a yardstick that lets you work with the relative sizes of tasks.

Over time they get a feel for how many of these magical points they can get done in an iteration, and how many points certain types of task may take.

A very strong concept to emphasise here is that there is no science in this; the points are what that team think for that specific iteration. The points measure nothing, and only give a yardstick of what can probably be fitted into a given iteration by that specific team.

So …

  • If you try to use velocity to compare teams you aren’t going to be able to say anything meaningful, and you’ll just piss people off.
  • If you say things like you only got 23 points done this iteration and 25 last one, you need to work harder to a specific team, they can easily fix it by adding some nonsense to the plan for the next iteration. You’re also demonstrating that you just don’t get it.

To complete the process, when each iteration finishes the team traditionally check back with the business representative (often called the business owner) and make sure that what they’ve built fits the business needs. This is known as the demo. If they’ve been doing technical stuff to support future iterations there may be nothing to show.

Good teams then review what they got done in the last iteration and look for ways to improve.

Velocity is how many points a given team gets done in an iteration. It’s worth measuring because you can start finding out whether or not:

  • The project is possible with the people and resources you have
  • The team find certain kinds of task difficult and maybe need some mentoring for them
  • The communication with the business is being properly managed
  • Some tasks in particular areas are taking a lot longer than first thought – you can go and find out why before it kills everything

and you can find all this out very early, so you can fix it before you’ve wasted huge sums of money invisibly choking on the proverbial elephant.

So velocity is a simple rule of thumb that lets you make sure you can keep your promises and head off any problems early. It’s not a management tool to get more productivity; in fact it has little to do with productivity. For example, a team of very experienced people who’ve been working together a long time might well deliver a lot more in an iteration, but could easily allocate the same number of points as a less effective team, because that’s what they’re comfortable with. If you were some clueless spreadsheet jockey working far away from the delivery you would have no way of knowing this.

The number is meaningless, except as a rule of thumb for a specific team at a specific time. It’s a very blunt instrument, and it changes anyway as the team gets better or changes its practices.


You can’t create a pretty Gantt chart and say this will be ready by the 12th of November, because you don’t know for sure when things will finish. You can get a range of dates once you have some data. If it must be ready by a specific time you can either use the knowledge you gain to manage scope, or make sure that the parts of the project that must ship to make it workable by that date are done first.

You can also get creative. For example, Basecamp wrote their billing module after they delivered their first fully featured version, because doing so gave them more time to put into the product and still left them 30 days to create the billing module.

In software there are no hard and fast rules, and you need to take the time to work out how to get the business what it needs, which is capability, not functionality. That’s a topic for a different post.

Can’t install postgis on PostgreSQL 9.4 after running apt-get postgis

I found this question on stack overflow and answered it – then the pedantic site moderators deleted it. So here is the answer again.

On a clean install of Postgres and PostGIS you will see this error because the postgis package doesn’t have the dependency to pull in the shared library. When you go into psql, the command

my_database=# CREATE EXTENSION postgis;


ERROR: could not access file “$libdir/postgis-2.1”: No such file or directory

I found I also needed

sudo apt-get install postgresql-9.4-postgis-2.1

I found it by looking at

apt-cache search postgres | grep gis

Review of Antifragile: Things That Gain from Disorder

Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb
My rating: 5 of 5 stars

I enjoyed this book but suspect others might find it a hard read.

Taleb points out that this is the last in the trilogy that includes Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets and The Black Swan: The Impact of the Highly Improbable. He also says you could read the chapters from all 3 in any order if you want to. I got a lot out of the three books but reading them is a labour of love. If you want a quick textbook style explanation go looking for his other more technical works.

You first have to get to grips with his literary, raconteur (although he would prefer flâneur) style. It’s not a textbook, and in fact the heavier, number-based, more academic arguments are in other documents you can get from his website. Some readers find the style hard to get to grips with, but I like it.

He also makes words up, like Antifragile itself, sometimes for effect and sometimes because he doesn’t feel there is a word that works. I like this playing with words, it amuses me, I play with words a lot myself.

The core idea in Antifragile comes from the ones he explores in the other books. In essence we live in a world that isn’t dominated by the comforting shape of the normal distribution. There are events that are rare but will happen, and they completely drown out the rest of the things you come into contact with the other 99% of the time. This is why the Black-Scholes equation is bunk: without perfect knowledge of the future you can’t take a derivative across a catastrophic disconnect, so the risk figure it gives is useless. It does work when things are stable, for example I’ve seen it used in estimating risks in queues in development processes, but as soon as you are open to catastrophic black swan events the figures it gives are meaningless, in fact dangerous.

If you have antifragility then you can take advantage of these sharp disconnects to make you richer, stronger or happier. He uses as examples systems that become stronger when challenged. Of course these are used mostly metaphorically, to show that it does happen out there in the real world.

The bit that had me laughing out loud was his description of the “Soviet-Harvard illusion”, where people assume that things that happen together have some connection in reality. He gives the example of a Harvard professor going and lecturing birds on how to fly, concluding that they wouldn’t be able to do it without the series of lectures, and growing his or her own sense of importance because of it. This is his beef with academic theorists: none of their ideas have weight in the real world, as you can see if you look at how we actually do things and what the real risks are once you take black swan events into account.

I also liked the barbell concept: put most of your risk into very conservative places, and a small amount into very high risk (the risk profile is shaped like a barbell, yes?). If the high risk pays off all is good, but you’ve not lost much if it doesn’t pan out. On the other hand, most of us go for “medium” risk, which is in fact not medium at all because of the propensity of the economy to have black swan events. This is actually the riskiest long-term strategy, and we’ve all bought into it because it’s been mis-sold and feels safest when times are calm. It isn’t, because in the long term times are not calm and never will be.

Similarly, take things like climate change, or fracking. The onus isn’t on the people who worry about them to prove there is a problem. Put simply, if you start doing something novel or unusual you must prove it doesn’t change things for the worse: the onus is on the new to prove its safety. We already understand that the old works fine. Again, this is about unknown black swans waiting for you.

So, if you want to meet Fat Tony and a host of interesting characters who live in this place, read the books. But remember – they aren’t text books, but a literary exploration of some interesting ideas and you have to be prepared to walk a while with Taleb while he tells you his stories.

View all my reviews

Review of The Tyranny of Experts by William Easterly

The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor by William Easterly

My rating: 5 of 5 stars

Easterly is a well-known economist, who used to be one of the people he characterises as “Development Economists” in the book. His central thesis is that experts think the world’s poor don’t worry about their rights; they’re far more worried about their poverty, and must be “helped” by the experts’ expertise to get out of it. Only then do their rights matter.

Easterly demonstrates with masterful strokes how, in fact, respecting rights is the cornerstone of sustainable growth. You won’t put effort into something if the government can arbitrarily turn up with a truck full of soldiers and take it away from you. If a king can just confiscate what you make you won’t make much, or trade with other people, because what’s the point?

Once the individual’s rights are respected, only then, can growth happen.

He goes right back to Adam Smith’s invisible hand from The Wealth of Nations and gives a far more nuanced reading of Smith than the current dogma about markets would lead you to believe is the case. For example, Smith would have been appalled by the monopolists’ cartels that run much of our economy. The invisible hand is, instead, people working in a self-interested way with the limited knowledge at their disposal, with each other, to create an economy that works for them. There is no expert saying how it should work in an abstract sense. There is no way an “expert” can possibly have all of the knowledge needed to create an economy, or have a deep understanding of people’s individual needs. It’s simply too big a problem. The knowledge needed is in no single head, and creates a different structure with a different history depending on what the individuals knew or discovered when they collaborated with each other. Of course, there can’t be any miracles caused by some anointed leader either.

His other target is what he calls the “blank slate” approach. Experts and the dictators that appoint them start from the assumption that whatever poor country they are about to blight is a blank slate, with no history, no already operating, particular, invisible hand that gets things done. So they proceed to impose a way of doing things on people instead of letting them find it out for themselves, and also trample on the rights of those people in the process “for their own good”.

He also discusses at length the works of Friedrich Hayek and Gunnar Myrdal. Hayek has been somewhat hijacked by later thinkers such as Milton Friedman, but in The Road to Serfdom he outlines why the old state-socialist vision of experts telling us how to live our lives is deeply flawed, if not fascistic, and he also defended the right of the individual not to have their lives decided for them by the state. In contrast, Myrdal’s vision of removing children from their families and having them brought up by more “efficient” state-controlled organisations is frankly terrifying. Myrdal’s vision for what we now call the third world suffered from the benevolent expert illusion. He wrote a huge treatise, Asian Drama: An Inquiry into the Poverty of Nations, without once mentioning anything to do with the history and culture of the place or the needs of the people there, instead talking of them like they are children. Myrdal’s work invented the so-called science of development economics that Easterly’s book is a polemic against.

Easterly makes the point that local custom and democratic tradition meant Myrdal’s ideas about reorganising the family never got started in his native Sweden. People wouldn’t let it happen, because democracy and basic rights mean such harebrained ideas can’t take root. In the non-democratic, less developed world, however, they can, due to the lack of individual rights and the dominance of dictators. Of course, custom and tradition are the product of many generations of trial and error; they won’t be perfect, but they are the best people have come up with so far. They will almost certainly be better than something just made up in the mind of a Myrdal, because they have been tested by the people living with them.

There is a detailed discussion debunking the myth of dictators promoting growth, along with a mountain of evidence pointing the other way. The evidence of growth itself is also called into question. In the long run you will get periods of apparently high growth, for purely statistical reasons, followed by low or average growth, but our human propensity for seeing patterns will credit the high periods to an individual or government because it makes a better story, even when the dates don’t overlap. A far better measure is to look at the average growth for a region, see if a country matches that of its neighbours, and then ask whether any significant difference might be caused by the dictator. This explains the miracle of Singapore far better than any story about Lee Kuan Yew being responsible for it.

One of the most striking examples Easterly covers is that of Korea. People living where the land was awful for growing food, and where “experts” might have spent a lot of time and energy getting the crop yield from terrible to merely bad, instead gained skills servicing motor cars and traded those skills with the people whose land could grow food. This became the motor for the massive technology companies in Korea; had the experts arrived, those people would all still be subsistence farmers, growing slightly more food than they might otherwise have done on marginal land.

One of the most telling arguments about the abuse of statistics comes when the Gates Foundation is taken to task for claiming it reduced infant mortality in Ethiopia. In essence the figures are made up from guesses at what might have happened and have no rigour. The Ethiopian government are also rights abusers on a grand scale, and use British aid to drive people off their land and pay for their political prisons, as well as hold people to ransom (vote for us or starve) over political reform. The book opens with the story of some farmers in the USA being forced off their land and moved to model villages so a British company can grow wood there. Of course, this could not happen in a democratic country like the USA, but it did happen in Ethiopia, and Easterly uses it to make the point that individual rights against the government are paramount if you want economic progress. They are not a nice-to-have, to be granted at some undefined point in the future. I am personally very angered that my government’s much-lauded ethical foreign policy was a smokescreen for this. Of course the government has changed since then, but I am sure the same ignorant, condescending, rights-ignoring view holds.

I have used some quite emotional language writing this review, but in fact Easterly is scrupulous in letting the evidence speak for itself and does not make polemical points the way I have here for brevity’s sake. He also goes into some depth about an area in New York, now one of the most desirable places to live, that was spared the zealous bureaucrats through a combination of accident and protest, and how it was transformed because it was left alone while the invisible hand found a better use for it. This is fascinating, and also calls into question the current zeal for tearing everything down and evicting people from perfectly good houses because of some grand plan.

To sum up, this is a well written, engaging book. It recasts some writers who have been unjustly hijacked by some of the more extreme political views of the last half century and lets their ideas breathe. The central thesis, that people find excellent solutions themselves when not interfered with or stolen from by the state, is valid. It also calls into question the grip the monopolists have on our economy: to create a metaphor of my own, the invisible hand has become a strangler’s, and Adam Smith would have had no truck with it.


Survivor bias: an experiment

I write a simple computer program that randomly picks one of two outcomes for a stock: up or down. I send out letters telling people I can pick stocks and shares that will go up. About 50% of the people I write to receive the correct prediction, think I know what I’m doing, and might buy subscriptions to my stock picking service.

I do the same thing with the 50% that got the right answer, each time making an ever smaller group think I’m a stock picking genius.

Eventually I will run out of people, but right up to the last two it will appear to them that I really knew what I was doing, when I was effectively tossing a coin.
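The arithmetic of the trick is easy to sketch. Here is a minimal Ruby illustration (the method name and the starting pool of 1024 recipients are my own invention, not from any real scheme): each round, half the remaining recipients get an “up” letter and half get a “down” letter, so roughly half always receive the correct call and stay in the pool.

```ruby
# Count how many rounds of coin-flip "predictions" it takes to whittle
# an initial pool of letter recipients down to the last two people,
# who by then have seen me be "right" every single time.
def rounds_until_last_believers(recipients)
  rounds = 0
  while recipients > 2
    recipients /= 2 # only the half that got the correct call keeps listening
    rounds += 1
  end
  rounds
end

rounds_until_last_believers(1024) # => 9
```

Nine consecutive correct calls look like genius to the two people left at the end, yet the program never knew anything at all.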

In the meantime quite a few of them may have subscribed to my stock picking service. Good for me, not so much for them.

The people at the end of this chain of probabilities will think that they are kings of the world, when they are only survivors of a simple process that could have picked anyone from the original group of stock buyers.

I recently attended a virtual course on complexity and discovered the fun simulation language NetLogo. For historical reasons the little actors displayed on the screen are called turtles. I could just as easily build a program with a population that halves every turn. Is the lone turtle left blinking on the screen at the end a special turtle? Did it somehow avoid the grim reaper against the odds? No, obviously not.

What if that turtle were instead a person? The story it had to tell would be one the rest of us would want to emulate, because it was a survivor. This is why the biographies of the heroic entrepreneurs are often not that surprising, and why their opinions quite often don’t differ much from those of others in the same cohort.

I used to work for Oracle and read the unauthorised biography of Larry Ellison – it’s an entertaining read and certainly not very complimentary. One of the things that comes through is luck: Oracle came close to going under several times, literally so close that a single large order from Japan saved it. I was talking about Richard Branson with someone recently. If I remember it right he originally had the idea of opening his record stores near tube stations, open late at night so people travelling home could buy records, and he also had an aunt who lent him the money to open them. Great idea, but he didn’t have to go to a bank and get laughed at. Again, when Steve Jobs’ early Apple got some venture capital so they could make things happen – whoa, things started happening! (Thanks to Tim Spencer for this one.)

There is an unknown population of other people and businesses that didn’t make the cut, or that stayed small service companies that are still around but not mega corporations. Probably 99.99% of them. Jobs himself acknowledged this with the famous analogy that the dots only join up when you look back and make a story of them. In my last post I talked about pareidolia, the human tendency to invent patterns where none exist. We can all make up stories like this: if I hadn’t answered a job advert in the Independent I wouldn’t have met my wife, my kids wouldn’t exist and, like, wow man (sarcasm off). We make a causal chain of events but forget the massive part that chance plays in what happens to us. If the University had placed the ad in a different paper on a different day, who knows?

Survivor bias makes the winners’ stories compelling, but there are another 9999 (or many more) stories of other people we never hear. Remember this next time someone tells you that you must emulate this or that hero of theirs. Success needs an element of luck, and the same person may not be lucky twice. Napoleon used to ask of a general, “is he lucky?” He was no fool. We all love 37signals’ (now Basecamp’s) story, but there was luck there. Lots of people have tried their formula without getting what they have, or have even lost everything.

This isn’t meant to sound like a counsel of doom and gloom, far from it. What it is saying is that you need to find your own way, one that works for you. You also need to take that opportunity when it falls into your lap and do something with it. But you aren’t any more special than anyone else. It doesn’t matter. What matters is being clear about what you want and sticking to it.

© 2020 Francis Fish
