
DCI Better with DI?

I recently posted some thoughts about DCI, and although I mostly thought it was great, I had two points of criticism in how it was presented: first, that using static class composition was broken from a layering perspective, and second, that it seemed like class composition in general could be replaced by dependency injection. Despite getting a fair amount of feedback on the post, I never felt that those claims were properly refuted, which I took as an indication that I was probably right. But although I felt, and still feel, confident about the first claim – the one about static class composition and layering – I was and am less sure about the second. There was a distinction being made between class-oriented and object-oriented programming that I wasn’t sure I understood. So I decided to follow James Coplien’s advice that I should read up on Trygve Reenskaug’s work in addition to his own. Maybe that way I could understand whether there was a more subtle distinction between objects and classes than the one I called trivial in my post.

Having done that, my conclusion is that I had already understood about objects, and the distinction is indeed that which I called trivial. So no epiphany there. But what was great was that I did understand two new things. The first was something that Jodi Moran, a former colleague, mentioned more or less in passing. She said something like “DI (Dependency Injection) is of course useful as it separates the way you wire up your objects from the logic that they implement”. I had to file that away for future reference as I only understood it partially at the time – it sounded right, but what, exactly, were the benefits of separating out the object graph configuration from the business logic? Now, two years down the line, and thanks to DCI, I think I fully get what she meant, and I even think I can explain it. The second new thing I understood was that there is a benefit of injecting algorithms into objects (DCI proper) as opposed to injecting objects into algorithms (DCI/DI). Let’s start with the point about separating your wiring up code from your business logic.

An Example

Explanations are always easier to follow if they are concrete, so here’s an example. Suppose we’re working with a web site that is a shop of some kind, and that we’re running this site in different locations across the world. The European site delivers stuff all over Europe, and the one based in Wisconsin (to take a random US state) sells stuff all over the US but not outside it. And let’s say that when a customer has selected some items, there’s a step before displaying the order for confirmation when we calculate the final cost:


// omitting interface details for brevity
public interface VatCalculator { }

public interface Shipper { }

public class OrderProcessor {
   private VatCalculator vatCalculator;
   private Shipper shipper;

   public void finaliseOrder(Order order) {
      vatCalculator.addVat(order);
      shipper.addShippingCosts(order);
      // ... probably other stuff as well
   }
}

Let’s add some requirements about VAT calculations:

  • In the US, VAT is based on the state where you send stuff from, and Wisconsin applies the same flat VAT rate to any item that is sold.
  • In the EU, you need to apply the VAT rules in the country of the buyer, and:
    • In Sweden, there are two different VAT rates: one for books, and another for anything else.
    • In Poland, some items incur a special “luxury goods VAT” in addition to the regular VAT. These two taxes need to be tracked in different accounts in the bookkeeping, so must be different posts in the order.

VAT calculations in the above countries may or may not work a little bit like that, but that’s not very relevant. The point is just to introduce some realistic complexity into the business logic.
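To make the sketches below self-contained, here is what the assumed supporting types might look like – the VAT role interface the country classes implement, and a minimal Order. The names and signatures here are my own simplifications; a real Order would carry items, a proper Money type, and so on:

```java
import java.util.ArrayList;
import java.util.List;

// Assumed role interface: each country-specific VAT rule implements this.
interface VAT {
   void addVat(Order order);
}

// Minimal Order stand-in: a total amount plus the VAT lines added to it.
class Order {
   private final double totalAmount;
   private final List<String> vatLines = new ArrayList<>();

   Order(double totalAmount) {
      this.totalAmount = totalAmount;
   }

   double getTotalAmount() {
      return totalAmount;
   }

   void addVat(double amount, String label) {
      vatLines.add(label + ": " + amount);
   }

   List<String> getVatLines() {
      return vatLines;
   }
}
```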

Here’s a couple of classes that sketch implementations of the above rules:

public class WisconsinVAT implements VAT {
   public void addVat(Order order) {
      order.addVat(RATE * order.getTotalAmount(), "VAT");
   }
}

public class SwedenVAT implements VAT {
   public void addVat(Order order) {
      Money bookAmount = sumOfBookAmounts(order.getItems());
      Money nonBookAmount = order.getTotalAmount() - bookAmount;
      order.addVat(BOOK_RATE * bookAmount + RATE * nonBookAmount, "Moms");
   }
}

public class PolandVAT implements VAT {
   public void addVat(Order order) {
      Money luxuryAmount = sumOfLuxuryGoodsAmounts(order.getItems());

      // Two VAT lines on this order
      order.addVat(RATE * order.getTotalAmount(), "Podatek VAT");
      order.addVat(LUXURY_RATE * luxuryAmount, "Podatek akcyzowy");
   }
}

Right – armed with this example, let’s see how we can implement it without DI, with traditional DI and with DCI/DI.

VAT Calculation Before DI

This is a possible implementation of the OrderProcessor and VAT calculator without using dependency injection:

public class OrderProcessor {
   private VatCalculator vatCalculator = new VatCalculatorImpl();
   private Shipper shipper = new ShipperImpl();

   public void finaliseOrder(Order order) {
      vatCalculator.addVat(order);
      shipper.addShippingCosts(order);
      // ... probably other stuff as well
   }
}

public class VatCalculatorImpl implements VatCalculator {
   private WisconsinVAT wisconsinVAT = new WisconsinVAT();
   private Map<Country, VAT> euVATs = new HashMap<>();

   public VatCalculatorImpl() {
      euVATs.put(SWEDEN, new SwedenVAT());
      euVATs.put(POLAND, new PolandVAT());
   }

   public void addVat(Order order) {
      switch (GlobalConfig.getSiteLocation()) {
         case US:
            wisconsinVAT.addVat(order);
            break;
         case EUROPE:
            VAT actualVAT = euVATs.get(order.getCustomerCountry());
            actualVAT.addVat(order);
            break;
      }
   }
}

The same classes that implement the business logic also instantiate their collaborators, and the VatCalculatorImpl accesses a singleton implemented using a public static method (GlobalConfig).

The main problems with this approach are:

  1. Only leaf nodes (yes, the use of the term ‘leaf’ is sloppy when talking about a directed graph – ‘sinks’ is probably more correct) are unit testable in practice. So while it’s easy to instantiate and test the PolandVAT class, instantiating a VatCalculator forces the instantiation of four other classes: all the VAT implementations plus the GlobalConfig, which makes testing awkward. Nobody describes these problems better than Miško Hevery, see for instance this post. Oh, and unit testing is essential not primarily as a quality improvement measure, but for productivity and as a way to enable many teams to work on the same code.
  2. As Trygve Reenskaug describes, it is in practice impossible to look at the code and figure out how objects are interconnected. Nowhere in the OrderProcessor is there any indication that it will not only eventually access the GlobalConfig singleton, but also needs the getSiteLocation() method to return a useful value, and so on.
  3. There is no flexibility to use polymorphism and swap implementations depending on the situation, making the code less reusable. The OrderProcessor algorithm is actually general enough that it doesn’t care exactly how VAT is calculated, but this doesn’t matter, since the wiring up of the object graph is intermixed with the business logic. So there is no easy way to change the wiring without also risking inadvertent changes to the business logic, and if we wanted to launch sites in, for instance, African or Asian countries with different rules, we might be in trouble.
  4. A weaker argument, or at least one I am less sure of, is: because the object graph contains all the possible execution paths in the application, automated functional testing is nearly impossible. Even for this small, simple example, most of the graph isn’t in fact touched by a particular use case.

Countering those problems is a large part of the rationale for using Dependency Injection.

VAT Calculation with DI

Here’s what an implementation could look like if Dependency Injection is used.


public class OrderProcessor {
   private VatCalculator vatCalculator;
   private Shipper shipper;

   public OrderProcessor(VatCalculator vatCalculator, Shipper shipper) {
      this.vatCalculator = vatCalculator;
      this.shipper = shipper;
   }

   public void finaliseOrder(Order order) {
      vatCalculator.addVat(order);
      shipper.addShippingCosts(order);
      // ... probably other stuff as well
   }
}

public class FlatRateVatCalculator implements VatCalculator {
   private VAT vat;

   public FlatRateVatCalculator(VAT vat) {
      this.vat = vat;
   }

   public void addVat(Order order) {
      vat.addVat(order);
   }
}

public class TargetCountryVatCalculator implements VatCalculator {
   private final Map<Country, VAT> vatsForCountry;

   public TargetCountryVatCalculator(Map<Country, VAT> vatsForCountry) {
      this.vatsForCountry = ImmutableMap.copyOf(vatsForCountry);
   }

   public void addVat(Order order) {
      VAT actualVAT = vatsForCountry.get(order.getCustomerCountry());
      actualVAT.addVat(order);
   }
}

// actual wiring is better done using a framework like Spring or Guice, but
// here's what it could look like if done manually

public OrderProcessor wireForUS() {
   VatCalculator vatCalculator = new FlatRateVatCalculator(new WisconsinVAT());
   Shipper shipper = new WisconsinShipper();

   return new OrderProcessor(vatCalculator, shipper);
}

public OrderProcessor wireForEU() {
   Map<Country, VAT> countryVats = new HashMap<>();
   countryVats.put(SWEDEN, new SwedenVAT());
   countryVats.put(POLAND, new PolandVAT());

   VatCalculator vatCalculator = new TargetCountryVatCalculator(countryVats);
   Shipper shipper = new EuShipper();

   return new OrderProcessor(vatCalculator, shipper);
}

This gives the following benefits compared to the version without DI:

  1. Since you can now instantiate single nodes in the object graph, and inject mock objects as collaborators, every node (class) is testable in isolation.
  2. Since the wiring logic is separated from the business logic, the business logic classes get a clearer purpose (business logic only, no wiring) and are therefore simpler and more reusable.
  3. Since the wiring logic is separated from the business logic, wiring is defined in a single place. You can look in the wiring code or configuration and see what your runtime object graph will look like.
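As a concrete illustration of the first benefit, here is a rough sketch of testing OrderProcessor in isolation using hand-rolled fakes. The classes are restated in simplified form so the snippet stands on its own, and names like RecordingVatCalculator are invented for the example:

```java
// Simplified restatements: only the single method exercised by
// finaliseOrder is declared on each interface.
class Order {}

interface VatCalculator { void addVat(Order order); }
interface Shipper { void addShippingCosts(Order order); }

class OrderProcessor {
   private final VatCalculator vatCalculator;
   private final Shipper shipper;

   OrderProcessor(VatCalculator vatCalculator, Shipper shipper) {
      this.vatCalculator = vatCalculator;
      this.shipper = shipper;
   }

   void finaliseOrder(Order order) {
      vatCalculator.addVat(order);
      shipper.addShippingCosts(order);
   }
}

// Hand-rolled fakes (invented for this example) that record being invoked.
class RecordingVatCalculator implements VatCalculator {
   boolean called;
   public void addVat(Order order) { called = true; }
}

class RecordingShipper implements Shipper {
   boolean called;
   public void addShippingCosts(Order order) { called = true; }
}
```

A test then just constructs OrderProcessor with the fakes and asserts on them – no VAT implementations, no GlobalConfig.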

However, there are still a couple of problems:

  1. At the application level, there is a single object graph, wired together the same way, that needs to handle every use case. Since object interactions are frozen at startup time, objects need conditional logic (not well illustrated in this example, I’m afraid) to deal with variations. This complicates the code and means that any single use case will most likely touch only a small fragment of the execution paths in the graph.
  2. Since in a real system, the single object graph to rule them all will be very large, functional testing – testing of the wiring and of object interactions in that particular configuration – is still hard or impossible.
  3. There’s too much indirection – note that the VatCalculator and VAT interfaces define what is essentially the same method.

This is where the use-case specific context idea from DCI comes to the rescue.

DCI+DI version

In DCI, there is a Context that knows how to configure a specific object graph – defining which objects should be playing which roles – for every use case. Something like this:


public class OrderProcessor {
   private VAT vat;
   private Shipper shipper;

   public OrderProcessor(VAT vat, Shipper shipper, ...) {
      this.vat = vat;
      this.shipper = shipper;
   }

   public void finaliseOrder(Order order) {
      vat.addVat(order);
      shipper.addShippingCosts(order);
      // ... probably other stuff as well
   }
}

public class UsOrderProcessingContext implements OrderProcessingContext {
   private WisconsinVAT wisconsinVat; // injected
   private Shipper shipper; // injected

   public OrderProcessor setup(Order order) {
      // in real life, would most likely do some other things here
      // to figure out other use-case-specific wiring
      return new OrderProcessor(wisconsinVat, shipper);
   }
}

public class EuOrderProcessingContext implements OrderProcessingContext {
   private Map<Country, VAT> vatsForCountry; // injected
   private Shipper shipper; // injected

   public OrderProcessor setup(Order order) {
      // in real life, would most likely do some other things here
      // to figure out other use-case-specific wiring
      VAT vat = vatsForCountry.get(order.getCustomerCountry());

      return new OrderProcessor(vat, shipper);
   }
}

Note that dependency injection is being used to instantiate the contexts as well, and that it is possible to mix ‘normal’ static object graphs with dynamic, DCI-style graphs. In fact, I’m pretty sure the contexts should be part of a static graph of objects wired using traditional DI.
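For completeness, here is a sketch of the OrderProcessingContext interface implied by the snippets above, together with a caller that asks the context to wire a graph per order. All types are simplified stand-ins of my own:

```java
// Illustrative minimal types, so the sketch compiles on its own.
class Order {
   final String customerCountry;
   Order(String customerCountry) { this.customerCountry = customerCountry; }
}

interface VAT { void addVat(Order order); }
interface Shipper { void addShippingCosts(Order order); }

class OrderProcessor {
   private final VAT vat;
   private final Shipper shipper;

   OrderProcessor(VAT vat, Shipper shipper) {
      this.vat = vat;
      this.shipper = shipper;
   }

   void finaliseOrder(Order order) {
      vat.addVat(order);
      shipper.addShippingCosts(order);
   }
}

// The context interface implied by the examples: given the use case trigger
// (here, the order), wire up the object graph for exactly that use case.
interface OrderProcessingContext {
   OrderProcessor setup(Order order);
}

// A caller only needs a context; the per-use-case wiring stays inside it.
class OrderService {
   private final OrderProcessingContext context;

   OrderService(OrderProcessingContext context) { this.context = context; }

   void process(Order order) {
      context.setup(order).finaliseOrder(order);
   }
}
```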

Compared to normal DI, we get the following advantages:

  1. Less indirection in the business logic, because the decision about which path to take has already been made in the context – note that the VatCalculator interface and its implementations are gone; their only reason for existing was to select the correct implementation of VAT. Simpler and clearer business logic is great.
  2. The object graph is tight, every object in the graph is actually used in the use case.
  3. Since the object graphs for each use case contain subsets of all the objects in the application, it should be easier to create automated functional tests that actually cover a large part of the object graph.

The main disadvantage I can see without having tested this in practice is that the wiring logic is now complex, which is normally a no-no (see for instance DIY-DI). There might also be quite a lot of contexts, depending on how many and how different the use cases are. On the other hand, it’s not more complex than any other code you write, and it is simple to increase your confidence in it through unit testing – which is something that is hard with for instance wiring done using traditional Spring config files. So maybe that’s not such a big deal.

So separating out the logic for wiring up your application from the business logic has the advantage of simplifying the business logic by making it more targeted, and of making the business logic more reusable by removing use-case-specific conditional logic and/or indirection from it. It also clarifies the program structure by making it specific to a use case and explicitly defined, either dynamically in contexts or statically in DI configuration files or code. Good stuff!

A Point of Injecting Code into Objects

The second thing I understood came from watching Trygve Reenskaug’s talk at Øredev last year. He demoed the Interactions perspective in BabyIDE, and showed how the roles in the arrows example interact. That view was pretty eye-opening for me (it’s about 1h 10mins into the talk), because it showed how you could extract the code that objects run to execute a distributed algorithm and look at only that code in isolation from other code in the objects. So the roles defined in a particular context are tightly tied in with each other, making up a specific distributed algorithm. Looking at that code separately from the actual objects that will execute it at runtime means you can highlight the distributed algorithm and make it readable.

So, clearly, if you want to separate an algorithm into pieces that should be executed by different objects without a central controlling object, injecting algorithms into objects gives you an advantage over injecting objects into an algorithm. Of course, the example algorithm from the talk is pretty trivial and could equally well be implemented with a central controlling object that draws arrows between the shapes that play the roles. Such an algorithm would also be even easier to overview than the fragmented one in the example, but that observation may not always be true. The difficult thing about architecture is that you have to see how it works in a large system before you know if it is good or not – small systems always look neat.

So with DCI proper, you can do things you can’t with DCI/DI – it is more powerful. But without an IDE that can extract related roles and highlight their interactions, I think that understanding a system with many different contexts and sets of interacting roles could get very complex. I guess I’m not entirely sure that the freedom to more easily distribute algorithm pieces to different objects is an unequivocally good thing in terms of writing code that is simple. And simple is incredibly important.

Conclusion

I still really like the thought of doing DCI but using DI to inject objects into algorithms instead of algorithms into objects. I think it can help simplify the business logic as well as make the system’s runtime structure easier to understand. I think DCI/DI does at least as well as DCI proper in addressing a point that Trygve Reenskaug keeps coming back to – the GoF statement that “code won’t reveal everything about how a system will work”. Compared to DCI proper, DCI/DI may be weaker in terms of flexibility, but it has a couple of advantages, too:

  • You can do it even in a language that doesn’t allow dynamically adding/removing code to objects.
  • Dynamically changing which code an object has access to feels like it may be an area that contains certain pitfalls, not least from a concurrency perspective. We’re having enough trouble thinking about concurrency as it is, and adding “which code can I execute right now” to an object’s mutable state seems like it could potentially open a can of worms.

I am still not completely sure that I am right about DI being a worthy substitute for class composition in DCI, but I have increased my confidence that it is. And anyway, it probably doesn’t matter too much to me personally, since dynamic, context-based DI looks like an extremely promising technique compared to the static DI that I am using today. I really feel like trying DCI/DI out in a larger context to see if it keeps its promises, but I am less comfortable about DCI proper due to the technical and conceptual risks involved in dynamically injecting code into objects.


DCI Architecture – Good, not Great, or Both?

A bit more than a week ago, my brother sent me a couple of links about DCI. The vision paper, a presentation by James Coplien and some other stuff. There were immediately some things that rang true to me, but there were also things that felt weird – in that way that means that either I or the person who wrote it has missed something. Since then, I’ve spent a fair amount of time trying to understand as much as I can of the thoughts behind DCI. This post, which probably won’t make a lot of sense unless you’ve read some of the above materials, is a summary of what I learned and what I think is incomplete about the picture of DCI that is painted today.

The Good

I immediately liked the separation of data, context and roles (roles are not part of the acronym, but are stressed much more than interactions in the materials I’ve found). It was the sort of concept that matches your intuitive thinking but that you haven’t been able to phrase – perhaps you hadn’t even grasped that there was a concept there to formulate. I have often felt that people attach too much logic to objects that should be dumb (probably in an attempt to avoid anemic domain models – something that, apparently like Mr Coplien, I have never quite understood the horror of). Thinking of objects as either containing data or playing some more active role in a context makes great sense, and makes it easier to figure out which logic should go where. And I second the opinion that algorithms shouldn’t be scattered in a million places – at least in the sense that at a given level of abstraction, you should be able to get an overview of the whole algorithm in some single place.

Another thing I think is great is looking at the mapping between objects in the system and things in the user’s mental model. That mapping should be intuitive, so that it is easy to understand what to do in order to achieve a certain result. Of course, in any system, there are a lot of objects that are completely uninteresting for – and hopefully invisible to – end users, but very central to another kind of software user: programmers. Thinking about DCI has made me realise that two mappings are central: the one between the end user’s mental model and the objects s/he needs to interact with, and the one between the programmer’s mental model and the objects s/he needs to interact with and understand. And DCI talks about dumb data objects, which I think is right from the perspective of creating an intuitive mapping. I think most people tend to think of some things as more passive and others as active. So a football can bounce, but it won’t do so of its own accord. It will only move if it is kicked by a player, an actor. It probably won’t even fall unless pulled downwards by gravity, another active object. It’s not exactly dumb – bouncing is pretty complicated – it just doesn’t do much unless nudged, and certainly not in terms of coordinating the actions of other objects around it. The passive objects are the Data in DCI, and the active objects are the algorithms that coordinate and manipulate the passive objects.

The notion of having a separate context for each use case, and ‘casting’ objects to play certain roles in that context, feels like another really good idea. What seems to happen in most of the systems I’ve seen is that you do some wiring of objects either based on the application scope or, in a web service, occasionally the request scope. Then you don’t get more fine-grained with how you wire up your object graph, so typically, there are conditional paths of logic within the various objects that allow them to behave differently based on user input. So for every incoming request, say, you have the exact same object graph that needs to be able to handle it differently based on the country the user is in, whether he has opted to receive email notifications, etc. This is far less flexible and testable than tailoring your object graph to the use case at hand, connecting exactly the set of objects that are needed to handle it, and leaving everything else out.

The Not Great

The thing that most clearly must be broken in the way that James Coplien presents DCI is the use in some languages of static class composition. That is, at compile time, each domain object declares which roles it can play. This doesn’t scale to large applications or systems of applications because it creates a tight coupling between layers that should be separated. Coplien talks about the need for separating things that change rarely from those that change often when he discusses ‘shear layers’ in architecture, but doesn’t seem to notice that the static class composition actually breaks that. You want to have relatively stable foundations at the bottom layers of your systems, with a low rate of change, and locate the faster-changing bits higher up where they can be changed in isolation from other parts of the system. Static class composition means that if two parts of the system share a domain class, and that class needs to play a new role in one, the class will have to change in the other as well. This leads to a high rate of change in the shared code, which is problematic. Dynamic class composition using things like traits in Scala or mixins feels like a more promising approach.

But anyway, is class composition/role injection really needed? In the examples that are used to present DCI – the spell checker and the money transfer – it feels like they are attaching logic to the wrong thing. To me, as a programmer with a mental model of how things happen in the system, it feels wrong to have a spell checking method in a text field, or a file, or a document, or a selected region of text. And it feels quite unintuitive that a source account should be responsible for a money transfer. Spell checking is done by a spell checker, of course, and that spell checker can process text in a text field, a file, a document or a selected region. A spell checker is an active or acting object, an object that encodes an algorithm, as opposed to a passive one. Similarly, an account doesn’t transfer money from itself to another, or from another to itself. That is done by something else (a money transferrer? probably more like a money transaction handler), again an active object. So while I see the point of roles and contexts, I don’t see the point of making a dumb, passive, data object play a role that is cut out for an active, smart object.

My Thoughts

First, I haven’t yet understood how Dependency Injection, DI, is ‘one of the blind men that only see a part of the elephant’. To me, possibly still being blind, it seems like DI can work just as well as class composition. Here’s what the money transfer example from the DCI presentation at QCon in London this June looks like with Java and DI:


interface SourceAccount {...}

interface DestinationAccount {...}

class SavingsAccount implements SourceAccount, DestinationAccount {...}

class PhoneBill implements DestinationAccount {...}

public class MoneyTransactionHandler {
   private final GUI gui;

   public MoneyTransactionHandler(GUI gui) {
       this.gui = gui;
   }

   public void transfer(Currency amount,
                        SourceAccount source,
                        DestinationAccount destination) {
      beginTransaction();

      if (source.balance() < amount) {
         throw new InsufficientFundsException();
      }

      source.decreaseBalance(amount);
      destination.increaseBalance(amount);
      source.updateLog("Transfer out", new Date(), amount);
      destination.updateLog("Transfer in", new Date(), amount);

      gui.displayScreen(SUCCESS_DEPOSIT_SCREEN);

      endTransaction();
   }
}

This code is just as readable and reusable as the code in the example, as far as I can tell. I don’t see the need for class composition/role injection where you force a role that doesn’t naturally fit it upon an object.

Something that resonates well with some thoughts I’ve had recently around how we should think about the way we process requests is using contexts as DI scopes. At Shopzilla, every page that is rendered is composed of the results of maybe 15-20 service invocations. These are processed in parallel, and rendering of the page starts as soon as service results for the top of the page are available. That is great, but there are problems with the way that we’re dealing with this. All the data that is needed to render parts of the page (we call them pods) needs to be passed as method parameters as pods are typically singleton-scoped. In order to get a consistent pod interface, this means that the pods take a number of kitchen sink objects – essentially maps containing all of the data that was submitted as a part of the URL, anything in the HTTP session, the results of all the service invocations, and things that are results of logic that was previously executed while processing the request. This means that everything has access to everything else and that we get occasional strange side effects, that things are harder to test than they should be, etc. If we would look at the different use cases as different DI scopes, we could instead wire up the right object graph to deal with the use case at hand and fire away, each object getting exactly the data it needs, no more, no less. It seems like it’s easy to get stuck on the standard DI scopes of ‘application’, ‘request’, and so on. Inventing your own DI scopes and wiring them up on the fly seems like a promising avenue, although it probably comes with some risk in terms of potentially making the wiring logic overly complex.
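To sketch the contrast (the actual Shopzilla pod interfaces obviously look nothing like this): instead of every pod receiving a kitchen-sink map, use-case-specific wiring code can extract exactly the inputs a pod needs, making its dependencies explicit. All names here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Kitchen-sink style: every pod receives a map of everything in the request.
interface MapBasedPod {
   String render(Map<String, Object> everything);
}

// Use-case-scoped style: the pod states exactly what it needs, so it is
// trivial to test and cannot depend on anything else by accident.
class PricePod {
   private final double price;

   PricePod(double price) {
      this.price = price;
   }

   String render() {
      return "Price: " + price;
   }
}

class PricePodFactory {
   // Invented wiring helper: pull only what PricePod needs out of the scope.
   PricePod create(Map<String, Object> requestScope) {
      return new PricePod((Double) requestScope.get("price"));
   }
}
```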

In some of the material produced by DCI advocates, there is a strong emphasis on the separation between ‘object-oriented’ and ‘class-oriented’ programming, with class-oriented being bad. This is a distinction I don’t get. Or rather, the distinction I do see feels too trivial to be made into a big deal. Of course objects are instances of a class, and of course the system’s runtime state in terms of objects and their relationships is the thing that makes it work or not. But who ever said anything different? There must be something that I’m missing here, so I will have to keep digging at it to figure out what it is.

DCI + DI

I think the spell-checker example that James Coplien uses in his presentations can be used in a great way to show how DCI and DI together make a harmonious combo. If you need to do spell checking, you probably need to be able to handle different languages. So you could have a spell checker for English, one for French and one for Swedish. Exactly which one you would choose for a given file, text field, document or region of text is something that you can decide in many ways: based on the language of the document or text region, based on some application setting, or based on the output of a language detector. A very naive implementation would make the spell checker a singleton and let it figure out which language to use. This ties the spell checker tightly to application-specific things such as configuration settings, document metadata, etc., and obviously doesn’t aid code reuse. A less naive solution might take a language code as a parameter to the singleton spell checker, and let the application figure out what language to pass. This becomes problematic when you realise that you may want to spell check English (US) differently from English (UK); language alone isn’t enough, you need country as well – and in the real world, there are often many additional parameters like this to an API. By making the spell checker object capable of selecting the particular logic for doing spell checking, it has been tied to application-specific concerns. Its API will frequently need to change when application-level use cases change, so it becomes hard to reuse the spell-checker code.

A nicer approach is to have the three spell checkers as separate objects, and let a use-case-specific context select which one to use (I’m not sure if the context should figure out the language itself or take it as a parameter – probably contexts should be simple, but I can definitely think of cases where you might want some logic there). All the spell checker needs is some text. It doesn’t care whether it came from a file or a selected region of text, and it doesn’t care what language it is in. All it does is check the spelling according to the rules it knows and return the result. For this to work, there need to be a few application-specific things: the trigger for a spell checking use case, the context in which the use case executes (the casting or wiring up of the actors to play different roles), and the code for the use case itself. In this case, the ‘use case’ code is probably something like:


public class SpellCheckingUseCase {
   private final Text text;
   private final SpellChecker spellChecker;
   private final Reporter reporter;

   // constructor injection ...

   public void execute() {
      SpellCheckingResult result = spellChecker.check(text);
      reporter.display(result);
   }
}

The reporter role could be played by something that knows how to render squiggly red lines in a document or print out a list of misspelled words to a file or whatever.
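A context for this use case might cast the actors roughly like this – a sketch with invented names, using plain strings in place of the Text and SpellCheckingResult types, and with the language passed in as a parameter (one of the options I mentioned above):

```java
import java.util.Map;

// Illustrative role interfaces; strings stand in for Text and
// SpellCheckingResult to keep the sketch self-contained.
interface SpellChecker {
   String check(String text);
}

interface Reporter {
   void display(String result);
}

class SpellCheckingUseCase {
   private final String text;
   private final SpellChecker spellChecker;
   private final Reporter reporter;

   SpellCheckingUseCase(String text, SpellChecker spellChecker, Reporter reporter) {
      this.text = text;
      this.spellChecker = spellChecker;
      this.reporter = reporter;
   }

   void execute() {
      reporter.display(spellChecker.check(text));
   }
}

// The context casts the right checker for the requested language; the
// checkers and the reporter would themselves be wired in via traditional DI.
class SpellCheckingContext {
   private final Map<String, SpellChecker> checkersByLanguage; // injected
   private final Reporter reporter; // injected

   SpellCheckingContext(Map<String, SpellChecker> checkersByLanguage, Reporter reporter) {
      this.checkersByLanguage = checkersByLanguage;
      this.reporter = reporter;
   }

   SpellCheckingUseCase setup(String text, String language) {
      return new SpellCheckingUseCase(text, checkersByLanguage.get(language), reporter);
   }
}
```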

In terms of layering and reuse, this is nice. The contexts are use-case specific and therefore at the very top layer. They can reuse the same use case execution code, but the use case code is probably too specific to be shared between applications. The spell checkers, on the other hand, are completely generic and can easily be reused in other applications, as is the probably very fundamental concept of a text. The Text class and the spell checkers are unlikely to change frequently because language doesn’t change that much. The use case code can be expected to change, and the contexts that trigger the use case will probably change very frequently – need to check spelling when a document is first opened? Triggered by a button click? A whole document? A region, or just the last word that was entered? Use different spell checkers for different parts of a document?

I think this is a great example of how DCI helps you get the right code structure. So I like DCI on the whole – but I still don’t get the whole classes vs objects discussion and why it is such a great thing to do role injection. Let the algorithm-encoding objects be objects in their own right instead.
