Posts Tagged Refactoring

Fluent APIs in Java

In our inventory management system at Shopzilla we’ve got a reporting tool that we’ve built using Wicket. A couple of months ago (this post has languished in draft status for a while), I was going to add a new page and took a look at a previous one to see what I could copy/paste. What I found was this wall of text for creating a table:

List<IColumn<AggregatedFeedRunSummary, String>> columns = Lists.newArrayList();
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Name"), "merchant.merchantName", "merchant.merchantName"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Merchant Id"), "merchant.merchantId", "merchant.merchantId"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Feed Id"), "feed.feedId", "feed.feedId"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Country"), "merchant.countryCode", "merchant.countryCode"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Account Manager"), "merchant.accountManagerName", "merchant.accountManagerName"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Tier"), "merchant.keyStatus", "merchant.keyStatus"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Revenue Status"), "merchant.revStatus", "merchant.revStatus"));
 columns.add(new IndexablePropertyColumn<AggregatedFeedRunSummary, String>(new Model("Indexable"), "feed.indexable"));

 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Proteus Last Run"), PROTEUS, "start_time")
    .isDate().isLinkTo(RunDetailsPage.class, FEEDRUN_PARAM));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Feed Changed"), PROTEUS, "feed_changed"));

 columns.add(new AggregatedFeedRunPropertyColumn(new Model("FQS Last Run"), FQS, "start_time")
    .isDate().isLinkTo(RunDetailsPage.class, FEEDRUN_PARAM));

 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Character encoding"), FQS, "encodingUsed"));
 columns.add(new CharacterEncodingSourceColumn(new Model("CE Source"), FQS));

 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Total Raw Offers"), FQS, "total"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Invalid"), FQS, "invalid"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Duplicate"), FQS, "duplicate"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Valid"), FQS, "valid"));

 columns.add(new AggregatedFeedRunPropertyColumn(new Model("DI Last Run"), DI, "start_time")
    .isDate().isLinkTo(RunDetailsPage.class, FEEDRUN_PARAM));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Total"), DI, "total"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Created"), DI, "created"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Updated"), DI, "updated"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Deleted"), DI, "deleted"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Unchanged"), DI, "unchanged"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Resent"), DI, "resent"));
 columns.add(new AggregatedFeedRunPropertyColumn(new Model("Failed"), DI, "Failed"));

 columns.add(new DatePropertyColumn<AggregatedFeedRunSummary, String>(new Model("Last DP Snapshot"), "lastSnapshotDate"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Viable"), "viable", "viable"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Non-viable"), "nonViable", "nonViable"));
 columns.add(new PropertyColumn<AggregatedFeedRunSummary, String>(new Model("Deleted"), "deleted", "deleted"));

As an aside, calling it a wall of text should in no way be interpreted as criticism of the people who wrote it – on the contrary, I think creating the wall of text was a great engineering decision. More on that later.

The code above is a clear example of what a lot of people really don’t like about Java (me included, though I do like Java as a language overall). The signal-to-noise ratio is horrible, mostly due to all the type parameters. There’s work being done on the language to reduce that, such as the diamond operator, but I think in this case the problem is not really the language. Saying Java sucks on the basis of the code above is a case of blaming the tools for a mistake that we made ourselves. Here are some things that are wrong with the code above:

  1. First, there’s a lot of stuff that’s not really relevant. We don’t care about the exact data types that are used as column entries in the Wicket view, but we do care about what the contents of each column are and how they are displayed to the user. So “new AggregatedFeedRunPropertyColumn(new Model(”, etc., is mostly noise, except insofar as it tells us where to look for the information we’re after.
  2. A very similar point is that a lot of the information that is there is not very clear. The fact that the type of a column is ‘PropertyColumn<AggregatedFeedRunSummary, String>’ doesn’t convey much in the way of information unless you’re very familiar with Wicket and the specific data types we’re using.
  3. There’s also missing information: as an example, the string “merchant.merchantName” appears twice, and it’s not clear why. There is in fact a reason: one occurrence indicates which property value to display in a particular table cell, and the other indicates which property value to use when sorting on this column.

Using a fluent API, we can express the code above like so:

return FrvColumns.columnsFor(AggregatedFeedRunSummary.class, String.class)
 .add("Name").sourceProperty("merchant.merchantName").sortable("merchant.merchantName")
 .add("Merchant Id").sourceProperty("merchant.merchantId").sortable("merchant.merchantId")
 .add("Feed Id").sourceProperty("feed.feedId").sortable("feed.feedId")
      .link(FeedRunHistoryPage.class)
        .param("merchantId").sourceProperty("merchant.merchantId")
        .param("feedId").sourceProperty("feed.feedId")
 .add("Country").sourceProperty("merchant.countryCode").sortable("merchant.countryCode")
 .add("Account Manager").sourceProperty("merchant.accountManagerName").sortable("merchant.accountManagerName")
 .add("Tier").sourceProperty("merchant.keyStatus").sortable("merchant.keyStatus")
 .add("Revenue Status").sourceProperty("merchant.revStatus").sortable("merchant.revStatus")
 .add("Indexable").sourceProperty("feed.indexableAsString")

 .add("Proteus Last Run").sourceProperty("properties.proteus.start_time").epoch()
    .link(RunDetailsPage.class)
       .param(FEEDRUN_PARAM).sourceProperty("feedRunId.id")
       .param("component").value(PROTEUS.toString())
 .add("Feed Changed").sourceProperty("properties.proteus.feed_changed")

 .add("FQS Last Run").sourceProperty("properties.fqs.start_time").epoch()
    .link(RunDetailsPage.class)
       .param(FEEDRUN_PARAM).sourceProperty("feedRunId.id")
       .param("component").value(FQS.toString())
 .add("Character encoding").sourceProperty("encodingInfo.encoding")
 .add("CE Source").sourceProperty("encodingInfo.source")
 .add("Total Raw Offers").sourceProperty("properties.fqs.total")
 .add("Invalid").sourceProperty("properties.fqs.invalid")
 .add("Duplicate").sourceProperty("properties.fqs.duplicate")
 .add("Valid").sourceProperty("properties.fqs.valid")

 .add("DI Last Run").sourceProperty("properties.di.start_time").epoch()
    .link(RunDetailsPage.class)
       .param(FEEDRUN_PARAM).sourceProperty("feedRunId.id")
       .param("component").value(DI.toString())
 .add("Total").sourceProperty("properties.di.total")
 .add("Created").sourceProperty("properties.di.created")
 .add("Updated").sourceProperty("properties.di.updated")
 .add("Deleted").sourceProperty("properties.di.deleted")
 .add("Unchanged").sourceProperty("properties.di.unchanged")
 .add("Resent").sourceProperty("properties.di.resent")
 .add("Failed").sourceProperty("properties.di.failed")

 .add("Last DP Snapshot").sourceProperty("lastSnapshotDate").date()
 .add("Viable").sourceProperty("viable").sortable("viable")
 .add("Non-viable").sourceProperty("nonViable").sortable("nonViable")
 .add("Deleted").sourceProperty("deleted").sortable("deleted")
 .build();

The key improvements, in my opinion, are:

  • Less text. It’s not exactly super-concise, and there’s a lot of repetition of long words like ‘sourceProperty’, and so on. But there’s a lot less text, meaning a lot less noise. And pretty much everything that’s there carries relevant information.
  • More clarity in what the various strings represent. I think in both the original and updated versions, most people would guess that the first string is the title of the column. But the fluent API version is much clearer about why some columns have the same property string twice – the repetition indicates that the list should be sortable by that column, and which property of the list items to use for sorting.
  • Easier modification – rather than having to figure out what class to use to display a correctly formatted Date or epoch-based timestamp, you just tack on the right transformation method (.date() or .epoch() – see the sketch after this list). Since this is available at your fingertips via command-space, or whatever your favourite IDE uses for completion, it’s very easy to find out what your options are.
  • More transparency. The primary example is the AggregatedFeedRunPropertyColumn, which behind the scenes looks for property values in ‘properties.%s.%s’, and adds ‘component=%s’ as a parameter if the thing is a link. This behaviour is visible if you look in the class, but not if you look at the configuration of the columns. With the fluent API, both those behaviours are explicit.
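
To make that concrete, here’s a minimal, self-contained sketch of how .date() and .epoch() can simply record a rendering strategy that is acted on when the column is finally built. The names and the Format enum below are mine, for illustration – this is not the actual FrvColumns implementation:

import java.text.SimpleDateFormat;
import java.util.Date;

class ColumnRenderingSketch {
  enum Format { PLAIN, DATE, EPOCH }

  private Format format = Format.PLAIN;

  ColumnRenderingSketch epoch() { format = Format.EPOCH; return this; }
  ColumnRenderingSketch date()  { format = Format.DATE;  return this; }

  // Called when building the column: turns the raw property value into display text.
  String render(Object rawValue) {
    switch (format) {
      case EPOCH: return format(new Date((Long) rawValue));
      case DATE:  return format((Date) rawValue);
      default:    return String.valueOf(rawValue);
    }
  }

  private String format(Date date) {
    return new SimpleDateFormat("yyyy-MM-dd HH:mm").format(date);
  }
}

The caller only ever sees the two fluent methods; the formatting machinery stays hidden in the builder.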

So, in this case a fluent API both reduces noise and adds signal that was missing before. The cost, of course, is that it takes some effort to implement the fluent API: that’s why I think it was a good engineering decision not to do so immediately. The investment in the API will pay off only if there’s enough volume in terms of pages and column definitions that can use it. What’s more, not only is it hard to know up front whether you should invest in a fluent API; it also won’t be clear what the API should be until you have a certain volume of pages that use it.

Other big factors that determine whether an investment in a clearer API is going to pay off are the rate of change of the system in question, and whether it’s a core component that everyone working on the system knows well, or a more peripheral one that many developers can be expected to touch with only a vague understanding of how it works. In this case, it’s the latter – reporting is the sort of thing that you need to change often to reflect changes in business logic. Also, since you need to report on all features in the system, it’s normally the case that people will need to be able to modify the report views without being experts in how the tool works. That means that ease of understanding is crucial, and that in turn means it’s crucial that it be obvious how to make changes, so as to avoid having everyone come up with their own solution for the same type of problem.

The standard pattern I tend to use when building a fluent API is to have a Builder class – or, in more complex cases like the one above, a set of them. In the column example, this is how it starts:


public static <T, S> ColumnsBuilder<T, S> columnsFor(Class<T> rowClass, Class<S> sortClass) {
   return new ColumnsBuilder<T, S>();
}

public static class ColumnsBuilder<T, S> implements IClusterable {
   private final List<SingleColumnBuilder<T, S>> builders = Lists.newArrayList();

   public SingleColumnBuilder<T, S> addColumn(IColumn<T, S> column) {
      SingleColumnBuilder<T, S> columnBuilder = new SingleColumnBuilder<T, S>(this, column);
      builders.add(columnBuilder);
      return columnBuilder;
   }

   public SingleColumnBuilder<T, S> add(String label) {
      SingleColumnBuilder<T, S> columnBuilder = new SingleColumnBuilder<T, S>(this, label);

      builders.add(columnBuilder);

      return columnBuilder;
   }

   // Builds the final, immutable column list by asking each column builder
   // for its finished column.
   public List<IColumn<T, S>> build() {
      return ImmutableList.copyOf(
        Lists.transform(builders, new Function<SingleColumnBuilder<T, S>, IColumn<T, S>>() {
           @Override
           public IColumn<T, S> apply(SingleColumnBuilder<T, S> input) {
              return input.buildColumn();
           }
      }));
   }
}

In this case, the builder hierarchy matches the resulting data structure: the ColumnsBuilder knows how to add a new column to the table (TableBuilder might have been a better name), the SingleColumnBuilder knows how to build a single column(!), and then further down the hierarchy, there are LinkBuilders, ParamBuilders, etc.
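
For illustration, here’s a sketch of what SingleColumnBuilder might look like. The details are my guesses rather than the actual implementation (and it ignores the addColumn() variant and the link/param builders), but it shows the central trick: the child builder keeps a reference to its parent and forwards add() and build() to it, which is what lets the call chain read linearly:

public static class SingleColumnBuilder<T, S> implements IClusterable {
   private final ColumnsBuilder<T, S> parent;
   private final String label;
   private String sourceProperty;
   private S sortProperty; // null means this column is not sortable

   SingleColumnBuilder(ColumnsBuilder<T, S> parent, String label) {
      this.parent = parent;
      this.label = label;
   }

   public SingleColumnBuilder<T, S> sourceProperty(String property) {
      this.sourceProperty = property;
      return this;
   }

   public SingleColumnBuilder<T, S> sortable(S property) {
      this.sortProperty = property;
      return this;
   }

   // Forwarding methods: starting the next column or building the table is
   // delegated to the parent builder, so the chain never has to 'pop back'.
   public SingleColumnBuilder<T, S> add(String label) {
      return parent.add(label);
   }

   public List<IColumn<T, S>> build() {
      return parent.build();
   }

   IColumn<T, S> buildColumn() {
      return sortProperty != null
            ? new PropertyColumn<T, S>(new Model<String>(label), sortProperty, sourceProperty)
            : new PropertyColumn<T, S>(new Model<String>(label), sourceProperty);
   }
}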

I sometimes let a single Builder class implement multiple interfaces, if I want to enforce a particular ‘grammar’ to the API. So, for instance, if I want code to look like this:

 ActionChain actions = Actions.first().jump().onto(platform).then().climb().onto(couch).build();

I might go with something like this:


public interface NeedsVerb {
  public NeedsTarget jump();
  public NeedsTarget climb();
}

public interface NeedsTarget {
  public CanBeBuilt onto(Target target);
}

public interface CanBeBuilt {
  public NeedsVerb then();
  public ActionChain build();
}

public class Actions {
  public static NeedsVerb first() {
    return new Builder();
  }

  static class Builder implements NeedsVerb, NeedsTarget, CanBeBuilt {
    private final List<Action> actions = new LinkedList<Action>();
    private Verb verb = null;

    @Override
    public NeedsTarget jump() {
       verb = Verb.JUMP;
       return this;
    }

    // .. same for climb()

    @Override
    public CanBeBuilt onto(Target target) {
       actions.add(new Action(verb, target));
       return this;
    }

    @Override
    public NeedsVerb then() {
       verb = null; // whatever, this is just an example ;)
       return this;
    }

    @Override
    public ActionChain build() {
       return new ActionChain(actions);
    }
  }
}
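
The supporting types (Verb, Target, Action, ActionChain) are elided above; for completeness, here is a minimal sketch of what they might look like – just enough to make the example compile:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

enum Verb { JUMP, CLIMB }

class Target {
  private final String name;

  Target(String name) { this.name = name; }
}

class Action {
  private final Verb verb;
  private final Target target;

  Action(Verb verb, Target target) {
    this.verb = verb;
    this.target = target;
  }
}

class ActionChain {
  private final List<Action> actions;

  ActionChain(List<Action> actions) {
    // unlike the builder, this object is shared after construction,
    // so it takes a defensive, immutable copy
    this.actions = Collections.unmodifiableList(new ArrayList<Action>(actions));
  }
}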

Some points to note:

  • I usually don’t bother too much with things like immutability and thread-safety in the Builders. They’re “internal”, and generally only used in single-threaded, initialisation-type contexts. I would worry very much about those things in the Action and ActionChain classes.
  • I usually spend more time on consistency checks and error-reporting than in the example above (see the sketch below). The point of building a fluent API is usually to make life easier for somebody who is not an expert (and shouldn’t have to be) in the internal workings of the exposed functionality. So making it very easy to set things up correctly, and to understand what is wrong when things haven’t been set up correctly, is key.
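
As an example of the kind of checking I mean, onto() in the Builder above might validate its input and explain the fix in the error message – illustrative only, but it shows the flavour:

    @Override
    public CanBeBuilt onto(Target target) {
       if (target == null) {
          throw new IllegalArgumentException(
             "onto() requires a non-null target - check that the platform or couch "
             + "you are passing in has been initialised");
       }
       actions.add(new Action(verb, target));
       return this;
    }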

In summary, fluent APIs are, in my opinion, often a great choice for improving code readability – and this is of course language independent, though it’s particularly noticeable in a verbose language like Java. The main drawback is the up-front implementation overhead, but since developers spend much more time trying to understand old code than writing new code, that implementation cost is often insignificant compared to the improvement in reading time. This is particularly true when a) the code calling the API changes very often, b) many developers need to use the API, and c) the code behind the API is outside of developers’ main focus. As always, one of the best sources of great code is Guava, so for some first-class fluent API implementations, take a look at, for instance, FluentIterable and CacheBuilder. I think fluent APIs are underused in ‘normal business logic’; they don’t need to be solely the domain of snazzy library developers.


Product Development vs Productivity Development

A couple of years ago, the team I was in started to use a concept we called gradual refactoring, whereby we would allocate a fixed percentage of our capacity to refactoring in order to keep technical debt at bay. That worked pretty well, and has been refined over the years into something we now tend to call the Productivity Backlog, in contrast to the normal Product Backlog. The way we use this idea boils down to reserving a more-or-less fixed percentage of overall capacity for improving that capacity. Typically that means refactoring or cleaning up old, hard-to-change code, or building and improving tools that help with something that slows us down. Normally those tools automate some process, making it less error-prone and less labour-intensive.

This works really well. Initially, I wanted to keep productivity efforts as part of the normal backlog, because I wanted a global prioritisation order. “All we need is to get really good at weighing medium to long-term benefits of productivity improvements vs the shorter term benefits from the product improvements”. That has turned out to be even harder in practice than I had thought. It’s just easier to keep things separate, for the following reasons:

  • It’s an easier sell to the non-technical parts of the organisation and the people in them. “Continuous improvement” is something almost all organisations would like to do, and keeping a Productivity Backlog is an easy way to formalise that at a team level.
  • It means that, as a technical person, you don’t have to try to sell the benefits of intangible things like “refactoring” or “eliminating technical debt” each time you need to untangle something particularly troublesome in your code base.
  • It means that, as a business person, you don’t need to try to understand the reasons why some code is poorly suitable to some task or other, freeing you up to focus more clearly on figuring out what features and changes will make the product more successful in the future.
  • It simplifies prioritisation both on the product and the productivity side, because a) you are comparing apples with apples (or at least fruits with fruits, if you know what I mean) on both backlogs and b) you can have different owners who can structure and order their own backlog without a need to coordinate needs and requirements from the other one.
  • It provides an easy knob to turn as needs change – if you urgently need to get some product feature out, you can easily turn the productivity backlog down to 10% or even 5%. And if you’re suffering acutely from time wasted on inefficient deployment procedures, you can turn it up to 20-25% until you’ve solved that crisis. If you do things on a case-by-case basis, you instead have to weigh the benefit of saving 3 minutes each time somebody needs to deploy a particular build of something somewhere against that of trying out a new front page to improve conversion – a comparison that is hard to make and requires people with different types of expertise to understand each other’s problems and agree on a compromise.

Some of the issues we’ve had in using this concept have been around figuring out what types of changes should go on it. Initially, we called it the ‘tech backlog’, because it was owned by ‘tech’ and dealt very much with issues in implementation. That led to attempts at adding things like performance problems/measurement, or bug fixes driven more by technology than by the business side of things (such as things getting logged that made real problems hard to find even though there was no obvious end-user impact). But those things are product features – quality and performance are clearly features of the product. Productivity is about the team’s ability to deliver more product features over time. That means it can be about fixing bugs, if those bugs, for instance, mean that deployments occasionally fail. But most of the time, it’s about removing obstacles and frustrations that keep us from delivering the product improvements that we get our job satisfaction from.

Productivity backlogs, compared to product backlogs, are more reactive than proactive. You feel the pain of some faulty process or code, and when it grows unbearable, you add fixing it near the top of your productivity backlog. There’s very little point in having long-term goals for a productivity backlog.

A separate productivity backlog makes life simpler for both the business and the delivery side of things, is a great way to formalise continuous improvement, and helps improve job satisfaction in delivery teams – partly by removing obstacles to productivity, and partly by providing knowledge (or at least hope) that the muck you’re having to wade through at the moment will at some point be cleared out. Everybody should have one!


Composition vs Inheritance

One of the questions I often ask when interviewing candidate programmers is “what is the difference between inheritance and composition?”. Some people don’t know, and they’re obviously out right then and there. Most people explain that “inheritance represents an is-a relationship and composition a has-a”, which is correct but insufficient. Very few people have a good answer to the question that I’m really asking: what impact does the choice of one or the other have on your software design? In the last few years, there seems to have been a trend towards seeing inheritance as over-used and composition as correspondingly under-used. I agree, and will try to show some concrete reasons why that is the case. (As a side note, the main trigger for this blog post is that about a year ago, I introduced a class hierarchy as part of a refactoring story, and it was immediately obvious that it was an awkward solution. However, it (and the other changes) represented a big improvement over what went before, and there wasn’t time to fix it back then. It’s bugged me ever since, and thanks to our gradual refactoring, we’re getting closer to the time when we can put something better in place, so I started thinking about it again.)

One guy I interviewed the other day made a pretty good attempt at describing when one should choose one or the other: it depends on which is closest to the real-world situation that you’re trying to model. That has the ring of truth to it, but I’m not sure it is enough. As an example, take the classically hierarchical model of the animal kingdom: Humans and Chimpanzees are both Primates, and, skipping a bunch of steps, they are Animals just like Kangaroos and Tigers. This is clearly a set of is-a relationships, so we could model it like so:


public class Animal {}

public class Primate extends Animal {}

public class Human extends Primate {}

public class Chimpanzee extends Primate {}

public class Kangaroo extends Animal {}

public class Tiger extends Animal {}

Now, let’s add some information about their various body types and how they walk. Both Humans and Chimpanzees have two arms and two legs, and Tigers have four legs. So we could add two legs and two arms at the Primate level, and four legs at the Tiger level. But then it turns out that Kangaroos also have two legs and two arms, so this solution doesn’t work without duplication. Perhaps a more general concept involving limbs is needed at the animal level? There would still be duplication of the ‘legs + arms’ classification in both Kangaroo and Primate, not to mention what would happen if we introduce a RattleSnake into our Eden (or an EarthWorm if you prefer an animal that has never had limbs at any stage of its evolutionary history).

A similar problem is shown by introducing a walk() method. Humans do a bipedal walk, Chimpanzees and Tigers walk on all fours, and Kangaroos hop using their rear feet and tail. Where do we locate the logic for movement? All the animals in our sample hierarchy can walk on all fours (although humans tend to do so only when young or inebriated), so having a quadrupedWalk() method at the Animal level that is called by the walk() methods of Tiger and Chimpanzee makes sense. At least until we add a Chicken or Halibut, because they will now have access to a type of locomotion they shouldn’t.
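
A sketch of what that attempt looks like in code: reusing quadrupedWalk() from Animal works nicely for Tiger and Chimpanzee, but every future subclass inherits it too.

public class Animal {
  // shared so that Tiger and Chimpanzee can reuse it...
  protected void quadrupedWalk() { /* move diagonal limb pairs, etc. */ }
}

public class Tiger extends Animal {
  public void walk() { quadrupedWalk(); }
}

public class Chimpanzee extends Primate {
  public void walk() { quadrupedWalk(); }
}

public class Halibut extends Animal {
  // ...but nothing stops a Halibut from calling quadrupedWalk()
}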

The root problem here is of course that the model is bad even though it seems like a natural one. If you think about it, body types and locomotion are not defined by the various creatures’ positions in the taxonomy – there are both mammals and fish that swim, and both fish, lizards and snakes can be legless. I think it is natural for us to think in terms of templates and inherited properties from a more general class (Douglas Hofstadter said something very similar, I think in Gödel, Escher, Bach). If that kind of thinking is a human tendency, that could explain why it seems so common to initially think that a hierarchical model is the best one. It is often a mistake to assume that the presence of an is-a relationship in the real world means that inheritance is right for your object model.

But even if, unlike the example above, a hierarchical model is a suitable one to describe a certain real-world scenario, there are issues. A big one, in my opinion, is that the classes that make up a hierarchical model are very tightly coupled. You cannot instantiate a class without also instantiating all of its superclasses. This makes them harder to test – you can never ‘mock’ a superclass, so each test of a leaf in the hierarchy requires you to do all the setup for all the classes above. In the example above, you might have to set up the limbs of your RattleSnake and the data needed for the quadrupedWalk() method of the Halibut before you could test them.

With inheritance, the smallest testable unit is larger than it is when composition is used, and at the very least you increase code duplication in your tests. In this way, inheritance is similar to the use of static methods – it removes the option to inject different behaviour when testing. The more code you try to reuse from the superclass(es), the more redundant test code you will typically get, and all that redundancy will slow you down when you modify the code in the future. Or, worse, it will make you not test your code at all because it’s such a pain.

There is an alternative model, where each of the beings above have-a BodyType and possibly also have-a Locomotion that knows how to move that body. Probably, each of the animals could be differently configured instances of the same Animal class.


public interface BodyType {}

public class TwoArmsTwoLegs implements BodyType {}

public class FourLegs implements BodyType {}

public class NoLimbs implements BodyType {}

public interface Locomotion<B extends BodyType> {
  void walk(B body);
}

public class BipedWalk implements Locomotion<TwoArmsTwoLegs> {
  public void walk(TwoArmsTwoLegs body) {}
}

public class Slither implements Locomotion<NoLimbs> {
  public void walk(NoLimbs body) {}
}

public class Animal {
   private final BodyType body;
   private final Locomotion<?> locomotion;

   public Animal(BodyType body, Locomotion<?> locomotion) {
      this.body = body;
      this.locomotion = locomotion;
   }
}

Animal human = new Animal(new TwoArmsTwoLegs(), new BipedWalk());

This way, you can very easily test the bodies, the locomotions, and the animals in isolation from each other. You may or may not want to do some automated functional testing of the way you wired up your Human to ensure that it behaves like you want it to. I think composition is almost universally preferable to inheritance for code reuse – I’ve tried to think of cases where the only natural model is a hierarchical one and you can’t use composition instead, but it seems to be beyond me.
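
As a sketch of what that isolation buys you (assuming JUnit, and the NoLimbs body type defined above), Slither can be exercised directly, with no Animal instance and no superclass setup:

import org.junit.Test;

public class SlitherTest {
  @Test
  public void slitherCanMoveALimblessBody() {
    Locomotion<NoLimbs> slither = new Slither();
    slither.walk(new NoLimbs()); // assertions on observable effects would go here
  }
}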

In addition to the arguments here, there’s the well-known fact that inheritance breaks encapsulation by exposing subclasses to implementation details in the superclass. This makes the use of subclassing as a means to integrate with a library a fragile solution, or at least one that limits the future evolution of the library.
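
That fragility is easy to demonstrate with a variant of the well-known InstrumentedHashSet example from Effective Java: a subclass that tries to count insertions ends up double-counting bulk adds, purely because HashSet’s addAll() happens to be implemented on top of add().

import java.util.Collection;
import java.util.HashSet;

public class CountingHashSet<E> extends HashSet<E> {
  private int additions = 0;

  @Override
  public boolean add(E e) {
    additions++;
    return super.add(e);
  }

  @Override
  public boolean addAll(Collection<? extends E> c) {
    additions += c.size(); // each element is then counted again in add()
    return super.addAll(c);
  }

  public int additions() {
    return additions;
  }
}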

So, there it is – that summarises what I think are the differences between composition and inheritance, and why one should tend to prefer composition over inheritance. So, if you’re reading this because you googled me before an interview, this is one answer you should get right!


Gradual Refactoring

We’re right now in the middle of something that feels a little like an unplanned experiment in code management. Unplanned in the sense that I didn’t expect us to work the way we do, not so much in the sense that I think it is particularly risky. It started not quite a year ago with my colleague Mateusz suggesting that we should dedicate a fixed percentage of the story points in each sprint to what he called maintenance or technical stories. That soon crystallised into a backlog owned by me, where we schedule stories that are aimed at somehow improving our productivity as opposed to our product. So Paul, the ‘normal’ product owner, is in charge of the main backlog that deals with improving the products, and I am the owner of the maintenance backlog, where we improve the efficiency of how we work by improving processes and tools and removing technical debt. We’re aiming at spending about 15% of our time on technical backlog stories, and we more or less do.

Typical examples of stories that have gone onto our technical backlog are:

  • A tool that allows QA to specify that outgoing service calls matching certain regular expressions should return mock data specified in a file rather than actually calling the service. This makes us a lot more productive when verifying site behaviour in certain hard-to-recreate and data-dependent cases.
  • Improvements to our performance monitoring systems and tools that make it easier for us to figure out where we have performance problems when we do.
  • Auditing and optimising the QA server allocation in order to speed up especially our automated test scripts.
  • Various refactoring stories that clean up code where functional evolution has led to the original design no longer being suitable.

That has worked out pretty much as expected: we’ve gained benefits from the productivity improvements, and we continue to spend 5-6 times more effort on money-making product improvements than on engineering-driven platform-building.

The unexpected thing that has happened is that we’re heading towards a situation where we have different design generations that solve similar problems. As an example, the original pattern we used to create Spring MVC controllers has broken down, so we’ve come up with a new one that is better, though not yet perfect. In order to have stories small enough to complete within one iteration, we’ve had to apply this pattern on a controller-by-controller basis – each refactoring has taken about 2 weeks of calendar time so far, except the first one, which took about 4, so the effort isn’t trivial. There are nearly 40 controller implementations in our site code, and about 95% of the incoming traffic is handled by 4 of those. We’re now at a stage where we’ve refactored three of those four controllers. Given that the other 35 or so controllers a) don’t serve a lot of traffic, so don’t have a lot of business value, and b) don’t get changed a lot, because the functions they provide aren’t ones that we need to innovate in, I don’t feel like refactoring all of them. In fact, the next refactoring story in the backlog is aimed at a different area of the site, where a repeated pattern has broken down in a similar way.

My initial gut reaction was that we should apply any new pattern for a commonly occurring situation across the board, then tackle the next similar situation. That keeps the code clean and makes it easy to find your way around. But the point of refactoring is that it must be an investment that you can recoup, and if we haven’t spent more than 4 hours working on a particular controller in the last year, what are the chances of ever recouping an investment of a man-week? Even assuming the refactoring made us twice as productive, it would save only about 2 of those 4 hours per year, so paying back a roughly 40-hour man-week would mean keeping the same controller for at least 20 years – and that doesn’t seem likely to happen. Given that, it seems like the best option is to focus refactoring efforts where they give a return on investment, which is those parts of the code that you do most of your work in.

I think we’re more or less permanently stuck with two different generations of controller patterns. But it will be interesting to see what will happen over the next year or two – can this super-pattern of having annual growth rings of standardised solution patterns handle more than two generations? I’m not sure, but I believe it can.
