The Suddenness of Trouble

Sometimes things go wrong in projects, no matter how carefully we work to prevent them. I’ve noticed that when a crisis hits, people often ask (and I always feel) “why didn’t you notice earlier?”. Typically, in hindsight, there have been multiple signs that things were not going in the right direction, but nobody took any forceful action to correct them. And suddenly we’re in crisis mode, with everybody doing everything possible to fix what looks likely to become a disaster.

It’s a bit like what happens when you slowly pour grains of sand onto a pile. The pile keeps getting higher, then suddenly something breaks and there’s an avalanche.

I wonder if this is good, bad or neither. I think it is probably a little bit bad: it must be better to be able to detect and react to the signs of trouble while they are still signs, as opposed to actual trouble.

More to the point, though, I wonder if the avalanches are avoidable. I guess the only way to avoid them is to treat any kind of problem as a mini-crisis and act forcefully to deal with it. The cost of doing that – much of the time one would be chasing ghost problems – is probably too high. So my best guess is that having ‘step’ reactions to problems and feeling that ‘I should have realised sooner’ is pretty much the way things should be. Hopefully, with experience, the grains of sand will line up earlier so the avalanches are smaller.

  1. #1 by jonas jepson on February 23, 2010 - 12:25

    I think of this post as a reflection on risks that occur. There also seem to be risks that one has been unaware of. The solution is of course not to act on every risk just because it might appear. The solution (at least for me) is to be aware that the risk might happen. Setting aside time to reflect on and communicate risks that might occur is my bet. This isn’t easy, because people tend not to want to listen to an updated risk assessment; it pretty much destroys the positive excitement that usually comes with developing something. Making yourself aware is the first and easy step, and most disasters will be foreseen and you will feel wise. Making other people aware is the hard part, because they generally don’t want to be.
    Just my humble opinion…

    • #2 by pettermahlen on February 23, 2010 - 19:07

      I guess it is a reflection on risk. It’s definitely a reflection on how a risk that occurs but wasn’t foreseen feels like it should have been noticed much earlier, and how, once it does occur, there were almost always signs of it coming. Taken by themselves, these signs have often been things I noticed, but they were too small for me to take any action until their number or seriousness reached some sort of threshold. I’ve not had this type of experience with a risk that was identified in a proper risk assessment sense. Identifying it beforehand would probably have been better. :)

  2. #3 by oskar on February 25, 2010 - 12:38

    I think the avalanches are avoidable if you prioritize avoiding them. This seems to require a relatively special state where speed has priority over ambition. The speed is needed for validating or invalidating hypotheses about value. Something that seems rather common in technologically heavy projects is that hypotheses about value tend to be focused on technical risk, while in most cases it is the risk presented by the market that causes the avalanche effect.

    When I have tried to study what more experienced people say about this problem, I have heard the term “shadow belief” used to describe it. (I think that was Eric Ries, btw.)

    A typical shadow belief presents symptoms such as statements like: “we know that the market really wants X, so the solution to the problem is the technical implementation of X.”

    It seems as if this is statistically wrong, and that what you actually get as a result of building X is a new hypothesis about what the market wants.

    A strategic position that takes on this problem begins with the technical challenge of validating, or rather invalidating, the largest possible number of hypotheses before you run out of money. The practical solution that seems to have won acceptance over the last few years is to invest in the technology required to enable a “continuous deployment” pipeline.

    As a musician this makes perfect sense to me. The more you need to “attract an audience”, the faster you need to iterate on your product. As I write this I also get a distinct feeling that the more niche your product is, the faster you need to iterate against the audience. If you want to make a hit in the “vampire metal ballads” genre, you had better start by playing live before you write the songs. Otherwise you will probably not even find an audience, regardless of how well you write the songs.

    Mixing up niche and mainstream will pretty much guarantee a risk avalanche when the product needs to meet its market.
