Saturday 16 March 2019

Command Handling Pattern Part 4

Decorators


Now that I have a command dispatcher I have decoupled the issuing of the command from the handling of the command. This means that I can change how and where a command is handled without affecting clients.

This led me to the idea of using decorators to add functionality to the handling of a command, and the implementation of cross-cutting concerns is an obvious starting point.

There are various cross-cutting concerns that should be implemented outside of the core handler, and the most obvious one to implement is logging.

Logging


The ability to log the execution of a command without any code in the handler itself is really useful, and that's exactly what the following class does (the Log... methods just write to the _logger field):

public class LoggingDecorator<T> : ICommandHandler<T> where T : ICommand
{
    private readonly ICommandHandler<T> _wrappedHandler;
    private readonly ILogger<ICommandHandler<T>> _logger;

    public LoggingDecorator(
        ICommandHandler<T> wrappedHandler,
        ILogger<ICommandHandler<T>> logger)
    {
        _wrappedHandler = wrappedHandler;
        _logger = logger;
    }

    public async Task<CommandResult> HandleAsync(T command)
    {
        LogEntry(command);
        var result = await _wrappedHandler.HandleAsync(command);
        if (!result.IsSuccess)
        {
            LogFailure(command, result);
        }
        LogExit(command);

        return result;
    }
}
When the command dispatcher is about to send a command to the appropriate handler, it creates a LoggingDecorator instance to wrap it, so the execution of every command gets logged, along with any errors.
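As a sketch of how that wrapping might look inside the dispatcher from Part 3 (the member names here are assumptions, not the actual implementation):

```csharp
// Hypothetical sketch: the dispatcher resolves the core handler from the
// IoC container, then wraps it in a LoggingDecorator before executing.
public async Task<CommandResult> ExecuteAsync<T>(T command) where T : ICommand
{
    var coreHandler = _serviceProvider.GetService<ICommandHandler<T>>();
    var logger = _serviceProvider.GetService<ILogger<ICommandHandler<T>>>();

    // The logging decorator becomes the outermost handler in the pipeline.
    var pipeline = new LoggingDecorator<T>(coreHandler, logger);

    return await pipeline.HandleAsync(command);
}
```

Because the decorator implements the same ICommandHandler&lt;T&gt; interface, further decorators can be layered on in exactly the same way.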


Exception Handling


Another basic cross-cutting concern is exception handling. The command handler (or command-handling pipeline) sits at the boundary of the domain, so it sometimes makes sense to prevent exceptions from bubbling up to outer layers. An exception-handling decorator can therefore be useful as one of the outer handlers.

public class ExceptionHandlingDecorator<T> : ICommandHandler<T> where T : ICommand
{
    private readonly ICommandHandler<T> _wrappedHandler;

    public ExceptionHandlingDecorator(
        ICommandHandler<T> wrappedHandler)
    {
        _wrappedHandler = wrappedHandler;
    }

    public async Task<CommandResult> HandleAsync(T command)
    {
        try
        {
            return await _wrappedHandler.HandleAsync(command);
        }
        catch (Exception exception)
        {
            return CommandResult.Error(exception.ToString());
        }
    }
}
Any exception that occurs anywhere during the handling of a command is caught and returned as a CommandResult, so clients always get a known type of result.

It's obviously useful to log exceptions as they're caught, so either the LoggingDecorator can be the outermost handler (and will therefore log exceptions as the error contained within the CommandResult object), or the ExceptionHandlingDecorator can take a logger as a dependency to log the exception in the catch block.
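A sketch of that second option might look like this (a variant of the class above, with the logging call and message being my own assumption):

```csharp
// Variant of ExceptionHandlingDecorator that takes a logger as a dependency
// and logs the exception itself before converting it to a CommandResult.
public class ExceptionHandlingDecorator<T> : ICommandHandler<T> where T : ICommand
{
    private readonly ICommandHandler<T> _wrappedHandler;
    private readonly ILogger<ICommandHandler<T>> _logger;

    public ExceptionHandlingDecorator(
        ICommandHandler<T> wrappedHandler,
        ILogger<ICommandHandler<T>> logger)
    {
        _wrappedHandler = wrappedHandler;
        _logger = logger;
    }

    public async Task<CommandResult> HandleAsync(T command)
    {
        try
        {
            return await _wrappedHandler.HandleAsync(command);
        }
        catch (Exception exception)
        {
            // Log here so the failure is recorded even if no logging
            // decorator sits outside this one.
            _logger.LogError(exception, "Unhandled exception while handling {CommandType}", typeof(T).Name);
            return CommandResult.Error(exception.ToString());
        }
    }
}
```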


Validation


Validation is something that almost every handler needs to perform on the incoming command. What if this could be extracted out into another decorator so that each core handler can just focus on the actual business logic to be executed?

This isn't as simple as logging, as validation is obviously different for every command. So there's an interface to be implemented for any command requiring validation, and then a decorator that takes an instance of that interface. Like this:

public interface ICommandValidator<T>
{
    Task<CommandResult> ValidateAsync(T command);
}

public class ValidatingDecorator<T> : ICommandHandler<T> where T : ICommand
{
    private readonly ICommandHandler<T> _wrappedHandler;
    private readonly ICommandValidator<T> _commandValidator;

    public ValidatingDecorator(
        ICommandHandler<T> wrappedHandler,
        ICommandValidator<T> commandValidator)
    {
        _wrappedHandler = wrappedHandler;
        _commandValidator = commandValidator;
    }

    public async Task<CommandResult> HandleAsync(T command)
    {
        var validationResult = await _commandValidator.ValidateAsync(command);
        if (!validationResult.IsSuccess)
        {
            return validationResult;
        }

        return await _wrappedHandler.HandleAsync(command);
    }
}
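As an illustrative example (not from the original posts), a validator for the EditCustomerName command from Part 2 might look like this:

```csharp
// Hypothetical validator for the EditCustomerName command.
public class EditCustomerNameValidator : ICommandValidator<EditCustomerName>
{
    public Task<CommandResult> ValidateAsync(EditCustomerName command)
    {
        var errors = new List<string>();

        if (command.CustomerId <= 0)
        {
            errors.Add("CustomerId must be a positive integer.");
        }
        if (string.IsNullOrWhiteSpace(command.NewName))
        {
            errors.Add("NewName must not be empty.");
        }

        // Validation here is synchronous, so wrap the result in a Task.
        var result = errors.Count > 0
            ? CommandResult.Error(errors.ToArray())
            : CommandResult.Success();

        return Task.FromResult(result);
    }
}
```

The core EditCustomerNameHandler can then assume it only ever receives valid commands.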
 

What's Next?


There are a few more decorators that I've used to good effect, so the next blog post will be about two of them: a pre-processing decorator and a post-processing decorator. Til next time!

Saturday 9 March 2019

Command Handling Pattern Part 3

A Command Dispatcher

 

The approach outlined in the previous post is fine; however, there are a few limitations. One is that clients that need to issue several commands will have a proliferation of dependencies. For example, even in a simple CRUD application, a controller for a particular entity will likely have to Create, Update and Delete instances of that entity. Each of these operations will be a command, so the controller will need a handler for each one. For example:
public class CustomersController : Controller
{
    private readonly ICommandHandler<AddCustomer> _addCustomerHandler;
    private readonly ICommandHandler<EditCustomer> _editCustomerHandler;
    private readonly ICommandHandler<DeleteCustomer> _deleteCustomerHandler;

    public CustomersController(
        ICommandHandler<AddCustomer> addCustomerHandler,
        ICommandHandler<EditCustomer> editCustomerHandler,
        ICommandHandler<DeleteCustomer> deleteCustomerHandler)
    {
        // assign fields
    }

    // More code here
}

This can quickly get out of hand, and when looking at the dependencies of a particular class it's enough to know that it issues commands, without knowing the specific commands that are issued. It's usually clear from the context which entity is being operated on anyway.

So a single class that can accept any command and dispatch it to the appropriate handler seemed like a good idea. That would make the above controller code much more concise:
public class CustomersController : Controller
{
    private readonly ICommandDispatcher _commandDispatcher;

    public CustomersController(ICommandDispatcher commandDispatcher)
    {
        _commandDispatcher = commandDispatcher;
    }

    // More code here
}
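An action method on that controller then just builds the command and hands it off. This sketch is my own illustration (the action name and response mapping are assumptions):

```csharp
// Hypothetical action method: the controller constructs the command,
// dispatches it, and maps the CommandResult to an HTTP response.
[HttpPost]
public async Task<IActionResult> Add([FromBody] AddCustomer command)
{
    var result = await _commandDispatcher.ExecuteAsync(command);

    return result.IsSuccess
        ? (IActionResult)Ok()
        : BadRequest(result.Errors);
}
```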

Here is an initial implementation of the command dispatcher itself. Note it uses the application's IoC container to get the correct command handler (I'm using the IServiceProvider that comes with AspNetCore). This may be an anti-pattern to some (Service Locator anyone?) but the dispatcher needs to be able to instantiate any handler in the system.
public class CommandDispatcher : ICommandDispatcher
{
    private readonly IServiceProvider _serviceProvider;

    public CommandDispatcher(IServiceProvider serviceProvider)
    {
        _serviceProvider = serviceProvider;
    }

    public async Task<CommandResult> ExecuteAsync<T>(T command) where T : ICommand
    {
        var handler = _serviceProvider.GetService<ICommandHandler<T>>();
        // Null checks

        return await handler.HandleAsync(command);
    }
}

The dispatcher simply gets the required handler from the IoC container and passes it the command to be handled. If this were all it did, I would probably class it as an anti-pattern and consign it to the dustbin. However, now that all clients depend on the dispatcher rather than individual handlers, additional processing can be introduced outside of the handler and run with every request. This is particularly useful for cross-cutting concerns like logging and exception handling, and for me is the major attraction of this approach. I'll spend the next couple of posts going through some examples.
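For the dispatcher to resolve handlers, each one needs to be registered against its closed ICommandHandler&lt;T&gt; interface. A registration sketch might look like this (the handler class names are my assumptions; AspNetCore's built-in container is assumed, as in the dispatcher above):

```csharp
// Hypothetical Startup.ConfigureServices registrations: the dispatcher can
// only resolve ICommandHandler<T> if each closed type is registered.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddTransient<ICommandDispatcher, CommandDispatcher>();

    // One registration per command; handler names are illustrative.
    services.AddTransient<ICommandHandler<AddCustomer>, AddCustomerHandler>();
    services.AddTransient<ICommandHandler<EditCustomer>, EditCustomerHandler>();
    services.AddTransient<ICommandHandler<DeleteCustomer>, DeleteCustomerHandler>();
}
```

Assembly scanning (e.g. with a library such as Scrutor) could remove the per-command boilerplate, but explicit registrations keep the example self-explanatory.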

Monday 4 March 2019

Command Handling Pattern Part 2

The Simple Approach

 

Commands


A Command is a class that contains the data required to perform a business process. For example, you might have an EditCustomerName Command that contains the ID of the Customer to be edited and their new name.
public class EditCustomerName : ICommand
{
    public int CustomerId { get; set; }
    public string NewName { get; set; }
}

Handlers


Each Command is handled by a (you guessed it) Command Handler. To handle the above Command we might have something like this:
public class EditCustomerNameHandler : ICommandHandler<EditCustomerName>
{
    public async Task<CommandResult> HandleAsync(EditCustomerName command)
    {
        // Do stuff

        return CommandResult.Success();
    }
}
This leads to much more focused code, as each handler is responsible for just one business process. So if there are seven different operations that can be performed on a Customer, there will be seven different Handlers, each responsible for its own Command. In the service approach, the CustomerService would have had seven (unrelated) methods, which leads to hard-to-understand (and therefore hard-to-maintain) code.


Results


Strict interpretations of the Command Pattern often say that you shouldn't return anything from the processing of a Command. I find this to be unrealistic as it's often necessary to communicate (at the very least) the success or failure of the operation. So for the simple approach the return object (including convenience factory methods) I used was this:
public class CommandResult
{
    public bool IsSuccess { get; }
    public IEnumerable<string> Errors { get; }

    public CommandResult(bool isSuccess, params string[] errors)
    {
        IsSuccess = isSuccess;
        Errors = errors;
    }

    public static CommandResult Success()
    {
        return new CommandResult(true);
    }

    public static CommandResult Error(params string[] errors)
    {
        return new CommandResult(false, errors);
    }
}
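As a usage sketch, a handler body might use the two factory methods like this (the repository and its methods are hypothetical, purely for illustration):

```csharp
// Hypothetical handler body demonstrating the CommandResult factory methods.
public async Task<CommandResult> HandleAsync(EditCustomerName command)
{
    var customer = await _customerRepository.GetByIdAsync(command.CustomerId);
    if (customer == null)
    {
        // Failure: return an error result with a descriptive message.
        return CommandResult.Error($"Customer {command.CustomerId} not found.");
    }

    customer.Name = command.NewName;
    await _customerRepository.SaveAsync(customer);

    // Success: nothing further to communicate.
    return CommandResult.Success();
}
```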

Summing Up


Pulling together the ideas above we can define a Command Handler interface as follows:
public interface ICommandHandler<T> where T : ICommand
{
    Task<CommandResult> HandleAsync(T command);
}
This simple approach allows each business process to be handled by its own class, and for any errors to be communicated back to the client.

There are however a number of limitations, such as the difficulty of composing different Command Handlers together and no way to communicate data back to the client. I'll look to address these limitations in the next few posts.

Thursday 28 February 2019

Command Handling Pattern Part 1

The Problem to be Solved


I've spent a lot of time over the past few years writing line-of-business applications in various industries, and I never felt entirely comfortable with how I structured business logic. I generally used entity-based services - e.g. a Product Service that operates on a Product entity - and the problem that I saw again and again with this approach is that the service ends up a bloated, unfocused mess: the 'God' service that knows and does everything.

So I went searching for alternatives and decided to start using a Command Handling Pattern. The central idea is that a business process that needs to be performed (i.e. writes not reads) is represented by a Command (this is similar to, but not exactly the same as, the traditional Command Pattern).

I'm planning on writing a series of posts to explore how my ideas evolved from a fairly simple pattern to a more complex solution taking ideas from the Chain of Responsibility Pattern.

Next time I'll go into the first step on this journey. Watch this space!

Saturday 8 December 2018

How Many Projects in a .NET Solution?

The number of projects I use in a .NET solution has changed a lot over recent months.

I used to create separate projects for just about every concern in my application. For example, a typical 3-tier application might have the following projects:
  • MVC or Web API
  • Data (e.g. Entity Framework)
  • Data.Contracts (repository interfaces)
  • Business (core business logic classes)
  • Models (Entity Framework model classes)
  • Dtos (cross-cutting DTO classes)
  • Common (common logic and data such as shared constants and extension methods)
  • Common.Logging (usually some kind of abstraction over a logging library such as NLog)
  • Test projects for each 'implementation' project above

I'm now of the opinion that this is massive overkill. It may appear to give separation of concerns, but I think this is an illusion. The projects can all reference each other (with the obvious exception of circular references), so all you get is additional complexity and longer build times.

My last few applications (all APIs) have had the following projects:
  • Api (very thin project containing the Web API controllers and not much else)
  • Core (the bulk of the code is in here; it's everything - domain logic, etc. - that marks my application out as unique)
  • Infrastructure (any communication with things outside my application - e.g. file system, database - is in here; generally contains implementations of interfaces defined in the Core project)
  • Single Tests project

Why fewer projects? Faster build times, for one. And a simpler structure (I certainly find it easier to find things now). Areas of the application can still be logically separated with namespaces; you don't need the physical separation you get by building them as different DLLs.

The choice of four projects is not arbitrary. It's based on ideas from the Ports & Adapters pattern. Very briefly, Ports live on the boundary of your domain and handle inputs and outputs to/from the domain. Adapters are actors outside of the domain that interact with it in some way. Primary Adapters make requests to the domain; Secondary Adapters receive requests from the domain. For example, a Web API controller would be a Primary Adapter, whereas a database access layer of some kind would be a Secondary Adapter.

So rephrasing the project explanations in terms of Ports & Adapters:
  • Api (Primary Adapter for Web API technology)
  • Core (Ports and core domain)
  • Infrastructure (Secondary Adapters)
  • Tests (Primary Adapter for tests)

As you can see, there is essentially a project per Primary Adapter. This is because each Primary Adapter is generally a deployable or executable program based on a specific technology or framework (e.g. Web API or WPF).

The same approach could be taken with Secondary Adapters (a single project per Adapter), however I find this quickly gets out of hand, as you will generally have more Secondary Adapters than Primary Adapters, and integration with them is usually achieved with a simple class library. So my Infrastructure project splits the Adapters out into folders instead.

The only other thing to mention is the organisation of the Core project. The bulk of the code is in here so it's important to have a structure that makes sense. I generally start out with a set of standard top-level folders (Common, Models, Dtos) and then add additional folders/sub-folders as required. I also have a Contracts folder containing interfaces to be implemented by one or more Adapters.
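As an illustrative sketch (only Common, Models, Dtos and Contracts are named above; the feature folder is just an example of one added as required), a Core project might start out like this:

```
Core/
    Common/       (shared constants, extension methods)
    Contracts/    (interfaces implemented by the Adapters)
    Dtos/
    Models/
    Customers/    (feature folder, added as required)
```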

Thursday 10 November 2016

Beyond TDD....to BDD?

It's been a while since I posted anything here - I've moved jobs recently so the last few months have been taken up with settling into the new role.

I've been thinking a lot about test-driven development recently and whether it's always the best approach, or if there are variations on it that are better suited to some applications. When I first started using TDD I was writing software to run risk and cost-benefit models, which were generally perfect for a test-driven approach. The calculation could be broken down into sub-calculations, each of which would be implemented by a single method, with related calculations grouped into classes. Each sub-calculation could then be unit tested, and the whole thing usually worked out beautifully. They were almost textbook cases of when to employ TDD.

Recently I've been developing much more standard line-of-business applications, heavy on CRUD, with a small amount of complex business logic, and doing TDD just feels like going through the motions, with little benefit gained from it.

What I've decided to do is start researching behaviour-driven development (BDD) as I think it will be better suited to the kind of systems I'm developing now. Plus it will be fun to learn something new!

My companion on this adventure is the Manning book BDD In Action, which I'll be reading on my commute to and from work. I'm also going to document my progress right here! It will be an opportunity to order some of my thoughts, plus a great way to measure my progress.

Watch this space for the first instalment!!

Thursday 4 February 2016

Creating WCF Client Channels from Configuration

I'm going to write today about WCF: specifically about how to create client channel instances when your config is held somewhere other than the app.config or web.config file of your application. I should point out that I'm not a huge WCF fan, and try not to use it where possible, but sometimes you have no choice.

I've recently been working on a project where some parts of the application don't have access to the web.config file, so their configuration had to be held in a database (there are very good reasons for this, but I won't go into them).

So rather than have the client config in the web.config file and use the generated client proxy classes to make calls to the service, the solution was to use the ConfigurationChannelFactory<T> class provided in the System.ServiceModel.Configuration namespace. I created a WcfConfiguration class (more on this later) and used it to create the channel factory as follows:
private static ConfigurationChannelFactory<TChannel> CreateChannelFactory(WcfConfiguration wcfConfiguration)
{
    return new ConfigurationChannelFactory<TChannel>(
        wcfConfiguration.EndpointConfiguration.Name,
        wcfConfiguration.Config,
        new EndpointAddress(wcfConfiguration.EndpointConfiguration.Address));
}

The ConfigurationChannelFactory<T> object needs various configuration in the form of a System.Configuration.Configuration object, plus some endpoint configuration information. One big restriction of the System.Configuration.Configuration class is that, AFAIK, it can only be instantiated from a config file on the file system. So what I did was take the <system.serviceModel> configuration section that was stored in the database as a string and write it out to a file, which is then used to create the Configuration object. A bit roundabout, but there you go!!
private System.Configuration.Configuration CreateTempConfigFile(string filename, string rawConfig)
{
    var configFilename = $"{filename}.config";
    var configFilepath = Path.Combine(_configDirectory, configFilename);
    File.WriteAllText(configFilepath, string.Format(ConfigFormat, rawConfig));
    
    var virtualDirectoryMapping = new VirtualDirectoryMapping(_configDirectory, false, configFilename);
    var fileMap = new WebConfigurationFileMap();
    fileMap.VirtualDirectories.Add(VirtualDirectoryName, virtualDirectoryMapping);
    var webSiteName = HostingEnvironment.SiteName;
    var configuration = WebConfigurationManager.OpenMappedWebConfiguration(fileMap, VirtualDirectoryName, webSiteName);

    return configuration;
}
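The ConfigFormat constant used above isn't shown; my assumption is that it's a template wrapping the raw section in a valid config document, something like:

```csharp
// Assumed template: {0} is replaced with the raw <system.serviceModel>
// section retrieved from the database.
private const string ConfigFormat =
    "<?xml version=\"1.0\" encoding=\"utf-8\"?><configuration>{0}</configuration>";
```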

The System.Configuration.Configuration object can now be used to create an instance of my WcfConfiguration class, which is just a data bucket for the objects it's given:
private WcfConfiguration CreateWcfConfiguration(System.Configuration.Configuration configuration, string rawConfig)
{
    var serviceModelSectionGroup = ServiceModelSectionGroup.GetSectionGroup(configuration);
    if (serviceModelSectionGroup == null)
    {
        throw new System.Configuration.ConfigurationErrorsException("The WCF client configuration does not contain a 'system.serviceModel' section.");
    }
    if (serviceModelSectionGroup.Client == null)
    {
        throw new System.Configuration.ConfigurationErrorsException("The WCF client configuration does not contain a 'client' section.");
    }
    if (serviceModelSectionGroup.Client.Endpoints == null || serviceModelSectionGroup.Client.Endpoints.Count == 0)
    {
        throw new System.Configuration.ConfigurationErrorsException("The WCF client configuration does not contain any endpoints in the 'client' section.");
    }

    var endpointConfig = serviceModelSectionGroup.Client.Endpoints[0];
    var wcfConfiguration = new WcfConfiguration(rawConfig, configuration, endpointConfig);

    return wcfConfiguration;
}

The ConfigurationChannelFactory<T> is created as shown above and cached. The cached object is used to create any channels that are needed. This means that, although it's a pain having to write out a physical config file, at least it only needs to be done once.
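The caching itself could be as simple as the following sketch (a thread-safe dictionary keyed by endpoint name; the member names are assumptions, and the enclosing class is assumed to be generic in TChannel, matching CreateChannelFactory above):

```csharp
// Hypothetical caching sketch: channel factories are expensive to create,
// so keep one per endpoint and reuse it for all subsequent channels.
private static readonly ConcurrentDictionary<string, ConfigurationChannelFactory<TChannel>> FactoryCache =
    new ConcurrentDictionary<string, ConfigurationChannelFactory<TChannel>>();

private static ConfigurationChannelFactory<TChannel> GetOrCreateFactory(WcfConfiguration wcfConfiguration)
{
    // GetOrAdd only invokes the factory delegate when the key is absent,
    // so the temp config file is written (at most) once per endpoint.
    return FactoryCache.GetOrAdd(
        wcfConfiguration.EndpointConfiguration.Name,
        _ => CreateChannelFactory(wcfConfiguration));
}
```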

If anyone has a better solution to this problem, or ideas on how this can be improved, then please let me know!