
Phoenix: It was nice while it lasted

We decided to pull the plug on using Phoenix.

The people we know who are using it are mostly well funded and have the time to learn to use it and find their way around things that are a struggle.

We are not.

I wanted to do something that would have taken me five minutes in Rails and Phoenix just wouldn’t do it. Or, at least, the way to do it wasn’t documented.

I also got burned, and wasted a lot of time, because the Phoenix commands have been renamed from phoenix to phx.

I ended up creating a Phoenix 1.2 app by mistake because the old command wasn’t deleted.
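For anyone hitting the same thing: the installer task was renamed in Phoenix 1.3, and the old archive keeps answering to the old name. A rough sketch of the fix, using the commands from the Phoenix installation docs (double-check the archive names against your setup):

```shell
mix phoenix.new my_app            # old archive: scaffolds a Phoenix 1.2 app
mix archive.uninstall phoenix_new # remove the old installer archive
mix archive.install hex phx_new   # install the renamed 1.3+ installer
mix phx.new my_app                # scaffolds a current app
```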

It’s annoying. I like the Elixir language a lot. But it’s back to Ruby on Rails because I don’t have time or the dime.

I think in about a year I might come back to it because it will be a bit more mature.

Notes on my first Ember/Phoenix app

I hit a bit of a problem. I’m writing a quotation system for my company.

I have an entity called EstimateType – it is the driver for the rest of the setup. Basically it’s a name, used to group the pieces of the estimate together. So you have, say, small business or sole trader, and each may have the same parts of the quote but with different details (for example, sole traders are generally taught one to one and we charge a flat fee rather than per delegate).

I built a prototype front end in Ember and used the mirage library.

Using Mirage I just sent the EstimateTypeId to the app and it worked.

The back end’s in Phoenix and I’m using a library called ja_serializer that’s supposed to support Ember out of the box. Having done some experimenting with hand building stuff that can talk to Ember I think this is a good idea and will save some time.

The code generated by this library puts the parent’s ID away in a different part of the JSON from the main data, in a place called relationships. This would be fine (I suppose) but the ID doesn’t end up getting saved either by the generated controllers or by the generated changesets (I had to add it in).

I’m really not convinced this is right.

10 Apr

The generator doesn’t do parent-child relationships properly. It essentially generates the same set of models, tests and controllers that you would get if there were no parent. This is a bit useless, and is what got me confused.

I added in some more methods to the tests that create the parent entity and then put it into the queries and structs used by the wrapper methods in the Estimates module (which is the main one for this part of the app).

I’m still a bit meh about having to put things into a module for different parts of the app, which I think came in with 1.3. It’s nice, but often those decisions at the beginning of a development or design run will seem not quite right, and then you get into the problem of asking yourself if it’s worth moving things around. I’d far rather have the problem of deciding if it was worth pushing things into domains and/or namespaces because my app had become overcomplex. It feels like adding an extra layer of indirection for its own sake, and I’ve had enough of that from the days I used to write lots of Java.

Now I have a set of tests that run, and controllers that handle parent-child relationships correctly.

I did get somewhat hampered when my CI system was failing, and not deploying, while the tests were passing locally. I’ve since worked out that this was because

MIX_ENV=test mix ecto.reset

has the seed run built into its alias. I’ve since added these aliases to my mix.exs file:

defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.setup-noseed": ["ecto.create", "ecto.migrate"],
    "ecto.reset-test": ["ecto.drop", "ecto.setup-noseed"],
    "test": ["ecto.create --quiet", "ecto.migrate", "test"]
  ]
end

And now I do ecto.reset-test if I want to trash the test db. I still haven’t worked out how to tell mix to always run this in the test environment, but I’m not worrying about that now.
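For future me: Mix’s preferred_cli_env project setting looks like it does exactly this – a sketch of what it would be in mix.exs, untested in this app:

```elixir
def project do
  [
    # ... existing settings (app, version, deps, etc.) ...
    aliases: aliases(),
    # run this alias in the :test environment when MIX_ENV isn't set
    preferred_cli_env: ["ecto.reset-test": :test]
  ]
end
```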

I’ve also added

 {:mix_test_watch, "~> 0.5", only: :test},
 {:ex_unit_notifier, "~> 0.1", only: :test},

to my deps, so that I can run the equivalent of guard. Test watch sets the environment to test automatically, but I added only: :test because I didn’t want the dep in my production setup. It does mean I need to put MIX_ENV=test on the command line or it won’t compile and run, but it’s no great hardship.

Later the same day

I must have used the wrong generator commands, because this now seems at least to attempt to create the parent records:

mix ja_serializer.gen.phx_api Estimates Estimate estimates company_name:string aspiration:string prepared_for:string email:string body:map estimate_type_id:references:estimate_types

The generated controller tests now contain:

estimate_type = Repo.insert!(%RosieEstimatesServer.Estimates.EstimateType{})

The tests are still all busted, but at least there’s a starter for ten there now where there wasn’t before. I still had to set up an alias for Repo, though.

And another thing

The autogenerated changesets don’t have the parent id in them – maybe they’re supposed to be used differently – but in the absence of any decent examples it’s a bit hard to get to the bottom of.

In all cases I’ve had to add estimate_type_id to the cast and validate_required clauses in the model files.

In addition

|> foreign_key_constraint(:estimate_type_id)

wasn’t put in automatically, which seems a bit weird.
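Putting those two fixes together, the changeset in the model file ends up looking something like this (a sketch built from the fields in the generator command above – adjust the validate_required list to taste):

```elixir
def changeset(%Estimate{} = estimate, attrs) do
  estimate
  # estimate_type_id has to be added by hand; the generator leaves it out
  |> cast(attrs, [:company_name, :aspiration, :prepared_for, :email, :body,
                  :estimate_type_id])
  |> validate_required([:company_name, :estimate_type_id])
  # also added by hand, so a bad parent id gives a changeset error
  # rather than a database exception
  |> foreign_key_constraint(:estimate_type_id)
end
```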

Meh

In order to get errors returned in a format Ember likes, I needed to change views/changeset_view.ex so that it returned the errors in a compatible list:

def render("error.json", %{changeset: changeset}) do
  %{errors: errors_to_json_format(translate_errors(changeset))}
end

defp errors_to_json_format(errors) do
  errors
  |> Enum.map(fn {k, v} ->
    %{detail: List.first(v), source: %{pointer: "data/attributes/#{k}"}}
  end)
end

As in, the old format isn’t supported any more. This code needs a bit more refactoring but right now it works. Thanks to this guy for the tip.

Also, Ember pluralises the entity name, so the controller methods needed to be changed:

- def create(conn, %{"data" => _data = %{"type" => "estimate", "attributes" => estimate_params}}) do
+ def create(conn, %{"data" => _data = %{"type" => "estimates", "attributes" => estimate_params}}) do

As in, pluralise the type.

Happy days.

And …

import { underscore } from '@ember/string';
// ...
keyForAttribute(attr) {
  return underscore(attr);
}

In the serializer config – because the Elixir side wants underscores inbound, and I lost patience with “JSON API says X” pedantry 🙂

Book Review: Black Box Thinking: Matthew Syed

Black Box Thinking: The Surprising Truth About Success by Matthew Syed
My rating: 5 of 5 stars

I found the ideas in this book really resonated with my experience in a number of industries.

Syed’s thesis is that we live in a society that usually plays the blame game when things go wrong. He contrasts this with the ideas used by the aviation industry, where mistakes and errors are seen as problems with the system, not with individuals. Because he draws from this industry, the book uses the black box from aeroplanes as its central metaphor.

If something goes catastrophically wrong it’s not the pilot that’s to blame but instead the system as a whole that allowed the mistake to happen. This systems approach is why there are so few aviation accidents.

In the early parts of the book he contrasts aviation errors with medical errors. Doctors are trained not to admit to failure and to be “experts”. So instead of learning from failure there’s a culture of “accepting the inevitable” and not trying to stop things happening again. Indeed, there’s a really silly idea that doctors’ training makes them infallible.

Syed gives an account of one awful medical accident where a young mother ended up with brain damage because her throat closed up under anaesthetic and the doctors spent so long trying to insert a breathing tube that they didn’t do a tracheotomy in time. It turns out that under stress people lose track of time, and can end up going past hard limits (like how long someone can survive when they can’t breathe) with bad consequences. In this case the poor woman’s husband was an airline pilot and he didn’t accept the “one of those things” arguments.

Eventually, after a fight, the husband managed to get to the truth of what had happened. Instead of being bitter, he shared what he had found in medical journals and made sure the simple things you can do to stop intense focus making you blind became more widely known. For example, make sure that everyone involved can call for certain actions; hierarchies need to be questioned. Two people work on a problem, but one of them stays out of the decision-making loop and the cognitive stress, so they can see what’s happening and call time.

This information has saved a lot of lives; surgeons all over the world have written to the woman’s husband. There are examples of how this telescoping-time phenomenon caused aircraft crashes, but that industry changed the way it did crisis management to make the problem far less likely to occur.

It sounds simple, doesn’t it? Learning from failure has the status of a cliché. But it turns out that, for reasons both prosaic and profound, a failure to learn from mistakes has been one of the single greatest obstacles to human progress. Healthcare is just one strand in a long, rich story of evasion. Confronting this could not only transform healthcare, but business, politics and much else besides. A progressive attitude to failure turns out to be a cornerstone of success for any institution.

Next he looks at experts. If there is no feedback loop after they qualify as experts then they do not improve. Without being able to measure your success you are stuck, probably making the same mistakes over and over again.

If we wish to improve the judgement of aspiring experts then, we shouldn’t just focus on conventional issues like motivation and commitment. In many cases, the only way to drive improvement is to find a way of ‘turning the lights on’. Without access to the ‘error signal’, one could spend years in training or in a profession without improving at all

And of course failure is necessary, as long as systems are in place to learn from it:

beneath the surface of success – outside our view, often outside our awareness – is a mountain of necessary failure

This goes much further than the old saw about learning from failure. Syed’s argument is that you must be systematic about it and not blame individuals for systemic failures. But it is also important that individuals take responsibility for what happens and own up when things go wrong. Without this there can be no learning.

Another extremely interesting thread later in the book is when he picks up on marginal gains. This is a way to find improvements, used by teams like the British cycling team, who were so successful at the Rio Olympics. In short: everything matters. Every detail that can hold back success or performance is important, and when you address them all they compound to create an unstoppable chain of improvements. Small, marginal gains become the root of great success.

Marginal gains is not about making small changes and hoping they fly. Rather, it is about breaking down a big problem into small parts in order to rigorously establish what works and what doesn’t.

They use the first test not to improve the strategy, but to create richer feedback. Only when they have a deeper understanding of all the relevant data do they start to iterate.

see weaknesses with a different set of eyes. Every error, every flaw, every failure, however small, is a marginal gain in disguise.

I heartily recommend this book; it’s easy to read and the stories make the examples stay in your head. I hadn’t heard of the marginal gains technique but will be using it myself.

View all my reviews

Comparing velocity of agile teams is futile

Background

This relates back to the somewhat thorny issue of estimating how long things will take so people can plan how to allocate resources.

As creators of value-giving stuff we need to be able to make promises to our friends and paymasters that they will get the things they need to do their business. Enterprises of all stripes want to know that they can invest a specific amount of cash and have something that will make more cash. Preferably more than they invested, and in a timescale that isn’t too far in the future.

The problem is they can’t. When you are creating something new it is by nature an unpredictable beast. Instead what you have to do is take steps to minimise the cost of mistakes and maximise the results you get.

The cost of mistakes is also why we work in short iterations: we know we are human and will make them. This isn’t a bad thing; it lets us experiment and learn without having to bet the farm. Because of this there is a disconnect right at the heart of what’s laughably called software engineering. Businesses invest to make money, and any software you create is an investment. However, you can’t guarantee the results, or even how long it will take to get somewhere useful, and people don’t like that.

The quest for agility

Agile was a response to this. It came from the makers of software, who said that they couldn’t reasonably commit to very large projects with any degree of certainty. It’s all about mitigating the risks by trusting people to think small and act small in well-bounded contexts. So you eat an elephant one bite at a time – you develop large systems one small iteration at a time, and the small iterations allow the business to change its mind relatively easily and cheaply as well.

Development is a process and you discover things and questions you didn’t know you needed to answer as you follow it through. Small iterations let you answer these questions before they derail you.

Typically a large system is broken down into deliverable chunks, and the chunks into projects. The projects become a list of tasks that are small enough for one or two people to understand and deliver. These are then sorted into priority order and a series of iterations is planned roughly.

Then we plan an iteration in detail. Iterations are time boxed. No task should take more than a couple of days at most, and we try and work out how many we can get done in each iteration.

This is where velocity comes in.

Velocity explained

The team get together and give each piece (or story, as it’s called in some schemes) the arbitrary amount of effort they think it will take, allocating it a number of points depending on the difficulty. It’s important to emphasise here that the key word is think. They don’t actually know, because if they did they’d be playing the lottery and living the high life. It takes little time to do this, and they work out what will fit into the next iteration, plus a couple of extra things they could fit in if things go well. It’s also not effort as such, but a yardstick that lets you work with the relative sizes of tasks.

Over time they get a feel for how many of these magical points they can get done in an iteration, and how many certain types of task may take.

A very strong concept to emphasise here is that there is no science in this: the points are what that team think for that specific iteration. The points measure nothing, and only give a yardstick of what can probably be fitted into a given iteration by that specific team.

So …

  • If you try and use velocity to compare teams you aren’t going to be able to say anything meaningful, and you’ll just piss people off.
  • If you say things like “you only got 23 points done this iteration and 25 last one, you need to work harder” to a specific team, they can easily fix it by adding some nonsense to the plan for the next iteration. You’re also demonstrating you just don’t get it.

To complete the process when each iteration finishes the team traditionally check back with the business representative (often called the business owner) and make sure that what they’ve built fits with the business needs. This is known as the demo. If they’ve been doing technical stuff to support future iterations there may be nothing to show.

Good teams then review what they got done in the last iteration and look for ways to improve.

Velocity is how many points a given team gets done in an iteration. It’s worth measuring because you can start finding out whether or not:

  • The project is possible with the people and resources you have
  • The team find certain kinds of task difficult and maybe need some mentoring for them
  • The communication with the business is being properly managed
  • Some tasks in particular areas are taking a lot longer than first thought – you can go find out why before it kills everything

and you can find this all out very early so you can fix it before you’ve wasted huge sums of money invisibly choking on the proverbial elephant.

So velocity is a simple rule of thumb that lets you make sure you can keep your promises and head off any problems early. It’s not a management tool to get more productivity; in fact it has little to do with it. For example, a team of very experienced people who’ve been working together a long time might well be able to deliver a lot more in an iteration, but could easily allocate the same number of points as a less effective team because that’s what they’re comfortable with. If you were some clueless spreadsheet jockey working far away from the delivery you would have no way of knowing this.

The number is meaningless, except as a rule of thumb for a specific team at a specific time. It’s a very blunt instrument and changes anyway as the teams get better or change their practices.

Consequences

You can’t create a pretty Gantt chart and say “this will be ready by the 12th of November”, because you don’t know for sure when things will finish. You can get a range of dates once you have some data. If it must be ready by a specific time you can either use the knowledge you gain to manage scope, or make sure that the parts of the project that must ship to make it workable by that date are done first.

You can also get creative. For example Basecamp wrote their billing module after they delivered their first fully featured version because it gave them short term time to put more into the product and left them 30 days to create the billing module.

In software there are no hard and fast rules and you need to take the time to work out how to get the business what it needs, which is capability, not functionality. That’s a topic for a different post.

Can’t install postgis on PostgreSQL 9.4 after running apt-get postgis

I found this question on Stack Overflow and answered it – then the pedantic site moderators deleted it. So here is the answer again.

On a clean install of postgres and postgis you will see this error because postgis doesn’t have the dependency to pull in the shared library. When you go into psql the command

my_database=# CREATE EXTENSION postgis;

returns:

ERROR: could not access file "$libdir/postgis-2.1": No such file or directory

I found I also needed:

sudo apt-get install postgresql-9.4-postgis-2.1

I found it by looking at

apt-cache search postgres | grep gis

Review of Antifragile: Things That Gain from Disorder

Antifragile: Things That Gain from Disorder by Nassim Nicholas Taleb
My rating: 5 of 5 stars

I enjoyed this book but suspect others might find it a hard read.

Taleb points out that this is the last in the trilogy that includes Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets and The Black Swan: The Impact of the Highly Improbable. He also says you could read the chapters from all three in any order if you want to. I got a lot out of the three books, but reading them is a labour of love. If you want a quick, textbook-style explanation, go looking for his other, more technical works.

You first have to get to grips with his literary, raconteur (although he would prefer flâneur) style. It’s not a textbook and in fact the heavy number based, more academic, arguments are in other documents you can get from his website. Some readers find the style hard to get to grips with, but I like it.

He also makes words up, like Antifragile itself, sometimes for effect and sometimes because he doesn’t feel there is a word that works. I like this playing with words, it amuses me, I play with words a lot myself.

The core idea in Antifragile comes from the ones he explores in the other books. In essence, we live in a world that isn’t dominated by the comforting shape of the normal distribution. There are events that are rare but will happen, and they completely drown out the rest of the things you come into contact with the other 99% of the time. This is why the Black-Scholes equation is bunk: you can’t take a derivative of a catastrophic disconnect, so without perfect knowledge of the future the risk it gives is useless. It does work when things are stable – for example, I’ve seen it used to estimate risks in queues in development processes – but as soon as you are open to catastrophic black swan events the figures it gives are meaningless, in fact dangerous.

If you have antifragility then you can take advantage of these sharp disconnects to make you richer, stronger or happier. He uses as examples where systems become stronger when challenged. Of course these are used mostly metaphorically to show that it does happen out there in the real world.

The bit that had me laughing out loud was his description of the “Soviet-Harvard illusion”, where people assume that things that happen together have some connection in reality. He gives the example of a Harvard professor lecturing birds on how to fly, as if they wouldn’t be able to without the series of lectures, and growing his or her own sense of importance because of it. This is his beef with academic theorists: none of their ideas have weight in the real world once you look at how we actually do things and what the real risks are when you take black swan events into account.

I also liked the barbell concept: put most of your money into very conservative places and a small amount into very high risk (as in, the risk profile is shaped like a barbell). If the high risk pays off all is good, but you’ve not lost much if it doesn’t pan out. On the other hand, most of us go for “medium” risk, which is in fact not medium at all because of the economy’s propensity for black swan events. This is in fact the riskiest long-term strategy, and we’ve all bought into it because it’s been mis-sold and feels safest when times are calm. It isn’t, because long term, times are not calm and never will be.

Similarly, take things like climate change, or fracking. The onus isn’t on the people who worry about it to prove there is a problem. Put simply if you start doing something novel or unusual you must prove it doesn’t change things for the worse – the onus is on the new to prove its safety. We already understand the old works fine. Again, this is about unknown black swans waiting for you.

So, if you want to meet Fat Tony and a host of interesting characters who live in this place, read the books. But remember – they aren’t text books, but a literary exploration of some interesting ideas and you have to be prepared to walk a while with Taleb while he tells you his stories.

View all my reviews

Review of The Tyranny of Experts by William Easterly

The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor by William Easterly

My rating: 5 of 5 stars

Easterly is a well-known economist who used to be one of the people he characterises as a “Development Economist” in the book. His central thesis is that experts think the world’s poor don’t worry about their rights; they’re far more worried about their poverty and must be “helped” by the experts’ expertise to get out of poverty. Only then do their rights matter.

Easterly demonstrates with masterful strokes how, in fact, respecting rights is the cornerstone of sustainable growth. You won’t put effort into something if the government can arbitrarily turn up with a truck full of soldiers and take it away from you. If a king can just confiscate what you make you won’t make much, or trade with other people, because what’s the point?

Once the individual’s rights are respected, only then, can growth happen.

He goes right back to Adam Smith’s invisible hand from the wealth of nations and gives a far more nuanced reading of Smith than the current dogma about markets would lead you to believe is the case. For example, Smith would have been appalled by the monopolists cartels that run much of our economy. The invisible hand is, instead, people working in a self-interested way with the limited knowledge at their disposal, with each other, to create an economy that works for them. There is no expert saying how it should work in an abstract sense. There is no way an “expert” can possibly have all of the knowledge needed to create an economy, or have a deep understanding of people’s individual needs. It’s simply too big a problem. The knowledge needed is in no single head, and creates a different structure with a different history depending on what the individuals knew or discovered when they collaborated with each other. Of course, there can’t be any miracles caused by some anointed leader either.

His other target is what he calls the “blank slate” approach. Experts and the dictators that appoint them start from the assumption that whatever poor country they are about to blight is a blank slate, with no history, no already operating, particular, invisible hand that gets things done. So they proceed to impose a way of doing things on people instead of letting them find it out for themselves, and also trample on the rights of those people in the process “for their own good”.

He also discusses at length the works of Friedrich Hayek and Gunnar Myrdal. Hayek has been somewhat hijacked by later thinkers such as Milton Friedman, but in The Road to Serfdom he outlines why the old state-socialist vision of experts telling us how to live our lives is deeply flawed, if not fascistic; he also defended the right of the individual not to have their lives decided for them by the state. In contrast, Myrdal’s vision of removing children from their families and having them brought up by more “efficient” state-controlled organisations is frankly terrifying. Myrdal’s vision for what we now call the third world suffered from the benevolent expert illusion. He wrote a huge treatise, Asian Drama: An Inquiry into the Poverty of Nations, without once mentioning anything to do with the history and culture of the place or the needs of the people there, instead talking of them as if they were children. Myrdal’s work invented the so-called science of development economics that Easterly’s book is a polemic against.

Easterly makes the point that local custom and democratic tradition meant Myrdal’s ideas about reorganising the family never got started in his native Sweden. People wouldn’t let it happen, because democracy and basic rights mean such harebrained ideas can’t take root. However, in the non-democratic, less developed world they can, due to the lack of individual rights and the dominance of dictators. Of course, custom and tradition are the product of many generations of trial and error; they won’t be perfect, but they are the best people have come up with so far. They will almost certainly be better than something just made up in the mind of a Myrdal, because they have been tested by the people living with them.

There is a detailed discussion debunking the myth of dictators promoting growth, with a mountain of evidence pointing the other way. The evidence of growth itself is also called into question. In the long run you will get periods of apparently high growth, for purely statistical reasons, followed by low or average growth, but our human propensity for seeing patterns will credit it to an individual or government because it makes a better story, even when the dates don’t overlap. A far better measure is to look at the average growth for a region and see if it matches that of neighbouring countries, then see if there is a significant difference that might be caused by the dictator. This explains the miracle of Singapore far better than any story about Lee Kuan Yew being responsible for it.

One of the most striking examples Easterly covers is that of Korea. People living in an area where the land was awful for growing food – and where “experts” might have spent a lot of time and energy getting the crop yield from terrible to merely bad – instead gained skills servicing motor cars, and swapped those skills with the people who had land that could grow food. This became the motor for the massive technology companies in Korea; if the experts had arrived, they would all still be subsistence farmers growing slightly more food than they might otherwise have done on marginal land.

One of the most telling arguments about the abuse of statistics comes when the Gates Foundation are taken to task for claiming they reduced infant mortality in Ethiopia. In essence the figures are made up from guesses at what might have happened and have no rigour. The Ethiopian government are also rights abusers on a grand scale, and use British aid to drive people off their land and pay for their political prisons, as well as to hold people to ransom (vote for us or starve) over political reform. The book opens with the story of some farmers in the USA being forced off their land and moved to model villages so a British company can grow wood there. Of course, this could not happen in a democratic country like the USA, but it did happen in Ethiopia, and Easterly uses it to make the point that individual rights against the government are paramount if you want economic progress. They are not a nice-to-have for some undefined point in the future. I am personally very angered that my government’s much-lauded ethical foreign policy was a smokescreen for this. Of course the government has changed since then, but I am sure the same ignorant, condescending, rights-ignoring view holds.

I have used some quite emotional language writing this review, but in fact Easterly is scrupulous in letting the evidence speak for itself and does not make polemical points the way I have here for brevity’s sake. He also goes into some depth on an area of New York, now one of the most desirable places to live, that was left alone by the zealous bureaucrats through a process of accident and prevention by protest, and that was transformed because it was left alone while the invisible hand found a better use for it. This is fascinating, and it calls into question the current zeal for tearing everything down and evicting people from perfectly good houses because of some grand plan.

To sum up, this is a well-written, engaging book. It recasts some writers who have been unjustly hijacked by some of the more extreme political views of the last half century and lets their ideas breathe. The central thesis – that people find excellent solutions themselves when not interfered with or stolen from by the state – is valid. It also calls into question the grip the monopolists have on our economy; to create a metaphor of my own, the invisible hand has become a strangler’s, and Adam Smith would have had no truck with it.

View all my reviews

Survivor bias: an experiment

I write a simple computer program that randomly selects one of two outcomes. I send out letters telling people I can pick stocks and shares that will go up, with half the letters predicting one outcome and half the other. About 50% of the people I write to therefore see a correct prediction, think I know what I’m doing, and might buy subscriptions to my stock-picking service.

I do the same thing with the 50% that got the right answer, each round making an ever smaller group think I'm a stock-picking genius.

Eventually I will run out of people, but right up to the last two it will appear to them that I really knew what I was doing, when I was effectively tossing a coin.

In the meantime quite a few of them may have subscribed to my stock-picking service. Good for me, not so much for them.

The people at the end of this chain of probabilities will think that they are kings of the world, when they are only survivors of a simple process that could have picked anyone from the original group of stock buyers.
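The arithmetic of the scam is easy to sketch. Here is a minimal Ruby version; the starting audience of 1,024 is my own hypothetical figure, not from the text. Each round, whichever way the stock moves, exactly half the remaining audience received the right call.

```ruby
# Survivor-bias stock-picking scam, sketched in Ruby.
# The starting audience size is a made-up example figure.
recipients = 1024
rounds = 0

# Each round: tell half the audience "up", half "down".
# Whatever the stock does, half of them saw me get it right.
while recipients > 1
  recipients /= 2
  rounds += 1
  puts "Round #{rounds}: #{recipients} people have only ever seen me be right"
end

puts "#{rounds} rounds of coin tossing left #{recipients} true believer"
```

After ten rounds one person has watched me call the market correctly ten times in a row, purely by construction.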

I recently attended a virtual course on complexity and discovered the fun simulation language NetLogo. For historical reasons the little actors displayed on the screen are called turtles. I could just as easily build a program with a population that halves every turn. Is the lone turtle left blinking on the screen at the end a special turtle? Did it somehow avoid the grim reaper against the odds? No, obviously not.

But if that turtle were a person, the story it had to tell would be one the rest of us would want to emulate, because it was a survivor. This is why the biographies of heroic entrepreneurs are often not that surprising, and why their opinions quite often don't differ much from those of others in the same cohort.

I used to work for Oracle and read the unauthorised biography of Larry Ellison. It's an entertaining read and certainly not very complimentary. One of the things that comes through is luck: Oracle came close to going under several times, literally so close that a single large order from Japan saved it. I was talking about Richard Branson with someone recently. If I remember it right, he originally had the idea of opening his record stores near tube stations late at night so people travelling home could buy records, and he had an aunt who lent him the money to open them. Great idea, but he didn't have to go to a bank and get laughed at. Again, when Steve Jobs' early Apple got some venture capital so they could make things happen – whoa, things started happening! (Thanks to Tim Spencer for this one.)

There is an unknown population of other people and businesses that didn't make the cut, or that stayed as small service companies that are still around but never became mega-corporations. Probably 99.99% of them. Jobs himself acknowledged this with his famous observation that the dots only join into a story when you look back at them. In my last post I talked about pareidolia, the human tendency to invent patterns where none exist. We can all make up stories like this: if I hadn't answered a job advert in the Independent I wouldn't have met my wife, my kids wouldn't exist and, like, wow man (sarcasm off). We build a causal chain of events but forget the massive part that chance plays in what happens to us. If the university had placed the ad in a different paper on a different day, who knows?

Survivor bias makes the winners' stories compelling, but there are another 9,999 (or many more) stories of people we never hear. Remember this next time someone tells you that you must emulate this or that hero of theirs. Success needs an element of luck, and the same person may not be lucky twice. Napoleon used to ask of a general, "Is he lucky?" He was no fool. We all love 37signals' (now Basecamp's) story, but there was luck there too. Lots of people have tried their formula without getting what they have, or have even lost everything.

This isn't meant to sound like doom and gloom, far from it. What it is saying is that you need to find your own way, one that works for you. You also need to take the opportunity when it falls into your lap and do something with it. But you aren't any more special than anyone else. It doesn't matter. What matters is being clear about what you want and sticking to it.

Generic is the wrong answer

Dear reader, cast your mind back to the time when you first wanted to put some information on a website.

You had to trust the crusty programmer types, and if you wanted changes they couldn't necessarily make them at any great speed. This was fine in the stone age, around the year 2000 or probably earlier. Then, one day, the technology started to catch up with the aspirations and everything had to be done at internet speed. So the crusty types developed less crusty tools that let things happen quite a bit quicker.

We suddenly had content that people created for themselves. Never to be outdone, and ever in search of a simple generic solution for everything, the companies that made the very expensive PC-based document management tools had a go at creating something. Some of their original big corporate customers kept on with them, but these systems don't work for the internet, except when you are creating some kind of library, which is the metaphor they started with.

The more considered approach was to treat managing content as a solved problem. The content management system (CMS) was born (or at least became more mainstream) and allowed you to manage your catalogue in one place so you could, you know, sell your stuff to people who wanted it.

Then we fell victim to the illusion of best practice. Problems look similar enough from a distance that we (software engineers, academics and other people who should know better) develop generic solutions. I mean, content is just abstract stuff that we let those tricky user types create so they can pay us. We can just pull something open source or off the shelf, tell them off you go, and get back to important stuff like arguing over which 20-year-old text editor is best.

But here's the thing. The pattern user -> catalogue -> put catalogue on the interwebs in front of customers is the same, but the user / business / enterprise is most definitely not. Most (not all) open source CMSs were created for a small community to share information, and they do that really well. However, a commercial company that sells white goods does not have even remotely the same needs as one that sells online holidays, where the stock is a multilingual database of hotels it has relationships with. They both need content management, the thing, but not in the same way, and the OSS solution is probably completely wrong. Also, you don't want your site to look exactly like your competitors' or you are competing on price, which is suicide.

Human beings are subject to pareidolia: seeing patterns in the data that aren't there. This is related to the clustering, confirmation and congruence biases. When it meant running away from wind in the grass because you thought it was a lion, that was fine. Now we look for levers to fix and change things, and those levers often don't exist. Eric Ries calls this out in The Lean Startup with a story about robots. US manufacturers couldn't work out how the Japanese were getting such massive productivity gains, so they went to see, and saw robots. After a robot-installing frenzy the US was still far behind. The Japanese had spent a lot of energy understanding how they did things first, and put the automation in where it made sense for their specific needs, gradually, always testing the results. Good practice, but hard to emulate without a complete culture change. Pareidolia (and the other biases) put robots inside processes that were still broken. In knowledge work this is even easier, because adopting the latest fad only costs a couple of books and some (more) training, until your organisation fails catastrophically. Danger, Will Robinson.

Best practice only works in very simple environments where all of the problems are completely understood. Buying a CMS from someone assumes that they understand your needs properly. They probably don’t. It’s dangerous.

Good practice means using the skills of analysing the problem properly and divining the principles you should follow. A company should control its own content; after all, that's what it's selling. This means layout, search, everything. However, you might use a markup language and a standard group of layout elements to make it easy for non-web heads to create that content. Find levers for the easy stuff, for the things that do have generic solutions that work.

Startups don't want to spend money on something like a CMS tailored to their needs and will tend to go for the OSS solution. I think they need to start small and find out what those needs are, just like everything in the lean approach. So don't buy one, and don't slavishly pull in an OSS one that will bite you and turn out to be an expensive mistake. Create something as simple as possible that meets your needs; then you will know what those needs are and where the pain is. Then you can use good practice to get where you want to be, find the metaphor that works for you, and let your competitors try to shoehorn someone else's into places it doesn't work.

Best practice implies generic – generic means cheapest and also (face it) BORING.

Good practice is searching for the right metaphors and being patient enough to wait for them to appear. And, of course, the metaphors that work for you might not be right for someone else.

Why I hate todo list apps

I recently had a Twitter discussion about why I don't like using Basecamp.

It's a bit weird: I love Rails, which originated from the need for a stable framework on which to build Basecamp. DHH and his colleagues were generous enough to share it with the rest of us and I've been using it for years to great effect. I admire DHH and the people at the company and have read their books. But I hate their product. Using it makes me feel depressed.

I can't see the flexibility others see, anywhere. Basecamp is a poster child for great design and easy UI, yet it's pretty dull and obvious; maybe that's what the fuss is about. I've used it off and on for years and it doesn't seem to have substantively changed in all that time, even though changes and improvements are allegedly happening constantly. It's also boring, even more boring than the bazillion sites built on top of Twitter Bootstrap, because those usually have some metaphor the coders are exploring, and Bootstrap is a quick way to make it presentable while they experiment.

But the main thing is I can’t stand todo list apps.

This comes out of the way I work and think. I will write todo lists on paper and cross items off, but for anything bigger than just me and a handful of items I need to visualise the work and look at its flow and commitment. You can't do this with static text-based lists without doing a lot of counting in your head. As far as I can work out, you can't create a different visualisation in Basecamp, say a private list drawn from other lists of the things you care about today, in the order that matters to you, without creating another list that isn't linked to the original tasks, so you'd have to close everything twice. It hasn't changed in years.

In the physical world you can do this with sticky notes or file cards and a bit of Blu-Tack, turning each piece of work into something that can be visualised and moved about. It doesn't have to be complex, but by nicking ideas from Scrum and Kanban, allocating points to pieces of work and limiting your work in progress (WIP), you can come up with something that lets you look at your pipeline and your commitments, so you can start to have grown-up conversations with other people about priorities and schedules. Limiting your WIP also gives you a far better chance of actually getting something worthwhile done. This is a big win over boring lists, but harder to understand at first.

Todo lists are, I think, a great tool for linear thinking and controlling.*

The metaphor lists give you is the linear instruction manual: all you can do is change the priority of items and create more of them. Lists also suffer from being unbounded, so you can just keep adding to them and never finish. There's a very interesting time management book by Mark Forster called Do It Tomorrow, in which I found the idea of a closed list. This is a list you don't add to, and you work on it until it's done. He found this makes you more productive, and that you start tackling the tasks you really don't want to do, because you can't close the list off until you've done them. It's a great antidote to procrastination. It's also limiting your WIP; in fact it's a different take on the same idea as a personal productivity booster.
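Forster's closed list is simple enough to sketch in code. This is my own toy illustration, not anything from the book, and the class and method names are invented: the list refuses new work until everything already on it is done.

```ruby
# A toy "closed list" (names are my own, not Forster's):
# once created, the list accepts no new tasks until it is finished.
class ClosedList
  def initialize(tasks)
    @open = tasks.dup
    @done = []
  end

  # New work is refused while the list still has open tasks.
  def add(task)
    raise "List is closed - put #{task.inspect} on tomorrow's list" unless finished?
    @open << task
  end

  def complete(task)
    @done << @open.delete(task)
  end

  def finished?
    @open.empty?
  end
end

today = ClosedList.new(["write report", "phone supplier", "fix invoice bug"])
today.complete("phone supplier")

begin
  today.add("shiny new idea")   # refused: two tasks are still open
rescue RuntimeError => e
  puts e.message
end
```

The ordinary unbounded todo list has the opposite failure mode: adding always succeeds, so the list never finishes.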

So, in essence, for me the list metaphor stifles your ability to think and visualise; breaking out of the mental straitjackets it puts you in takes a lot of effort. There are better metaphors that let you easily move work about and discuss it properly with stakeholders. Use lists a little for small tasks, but for the big stuff use bigger metaphors. I also think Gantt charts cause similar cognitive blindness when you use them for anything other than checking whether you are still on track.


* Aside: I was going to use the phrase command and control. The original meaning of this was simply that military operations have a designated officer who is responsible for seeing the mission through; how that officer does it is not defined. Some systems thinkers, for example John Seddon, use the term to mean enterprises where the people doing the work are not expected to show any initiative and are told what to do all the time, so everything costs a fortune and is done badly by people who could do a much better job if left alone to do it properly. Military operations don't work like this: people are trusted to do what their training tells them so they can achieve their objectives (see commander's intent). Seddon's meaning seems to have triumphed in systems thinking debates, however.