Can AOP help fix bad architectures?

I recently posted a question on Stack Overflow on the feasibility of using IL rewriting frameworks to rectify bad design after the fact. The confines of the answer comment area were too small to give the subject proper treatment, so I thought a new blog post was in order. Here’s the original question:

I’ve recently been playing around with PostSharp, and it brought to mind a problem I faced a few years back: A client’s developer had produced a web application, but they had not given a lot of thought to how they managed state information – storing it (don’t ask me why) statically on the Application instance in IIS. Needless to say the system didn’t scale and was deeply flawed and unstable. But it was a big and very complex system and thus the cost of redeveloping it was prohibitive. My brief at the time was to try to refactor the code-base to impose proper decoupling between the components.

At the time I tried using some kind of abstraction mechanism to allow me to intercept all calls to the static resource and redirect them to a component that would properly manage the state data. The problem was there were about 1,000 complex references to be redirected (and I didn’t have a lot of time to do it in). Manual coding (even with R#) proved to be just too time consuming – we scrapped the code base and rewrote it properly. It took over a year to rewrite.

What I wonder now is – had I had access to an assembly rewriter and/or an Aspect Oriented Programming system (such as PostSharp), could I have easily automated the refactoring process of finding the direct references and converting them to interface references that could be redirected automatically and supplied by factories?

Has anyone out there used PostSharp or similar systems to rehabilitate pathological legacy systems? How successful were the projects? Did you find after the fact that the effort was worth it? Would you do it again?

I subsequently got into an interesting (though possibly tangential) discussion with Ira Baxter on program transformation systems, AOP and the kind of intelligence a program needs to have in order to be able to refactor a system, preserving the business logic whilst rectifying any design flaws. There wasn’t space to do the ideas justice there, so I want to expand the discussion here.

The system I was referring to had a few major flaws:

  1. The user session information (of which there was a lot) was stored statically on a specific instance of IIS. This necessitated the use of sticky sessions to ensure the relevant info was around when user requests came along.
  2. Session information was lost every time IIS recycled the app pool, thus causing the users to lose call control (the app was phone-related).
  3. State information was glommed into a tight bolus of data that could not be teased apart, so refactoring the app was an all-or-nothing thing.

As you can guess, tight coupling/lack of abstraction and direct resource dispensation were key flaws that prevented the system from being able to implement fail-over, disaster recovery, scaling, extension and maintenance.

This product is in a very competitive market and needs to be able to innovate to stay ahead, so the client could ill afford to waste time rewriting code while others might be catching up. My question asked, in hindsight, whether one could retroactively fix the problems without having to track down, analyse and rectify each tightly coupled reference between client code and state information, and within the state information itself.

What I needed at the time was some kind of declarative mechanism whereby I could confer the following properties on a component:

  1. location independence
  2. intercepted object creation
  3. transactional persistence

Imagine that we could do it with a mechanism like PostSharp’s multicast attributes:

[assembly: DecoupledAndSerialized(    
    AspectPriority = -1,
    AttributeTargetAssemblies = "MyMainAssembly",
    AttributeTargetTypes = "MainAssembly.MyStateData",
    AttributeTargetMembers = "*")]

What would this thing have to do to be able to untie the knots that people get themselves into?

  1. It would have to intercept every reference to MainAssembly.MyStateData and replace the interaction with one that got the instance from some class factory that could go off to a database or some distant server instead – that is, introduce an abstraction layer and some IoC framework.
  2. It must ensure that the component managing object persistence checked data into and out of the database appropriately (atomically), and that all contention over that data was properly managed.
  3. It must ensure that all session-specific data was properly retrieved and disposed of for each request.
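The first item, in particular, can be made concrete. Here’s a hedged sketch of the kind of abstraction layer and factory the rewriter would have to introduce – all of the names (IStateStore, StateStoreFactory, InMemoryStateStore) are illustrative, not taken from the actual system:

```csharp
using System;
using System.Collections.Generic;

// Illustrative only: the abstraction a rewriter might introduce in place of
// direct static references to the state data.
public interface IStateStore
{
    object Get(string key);
    void Put(string key, object value);
}

public static class StateStoreFactory
{
    // Swappable provider: an in-proc store for dev, a database- or
    // server-backed store in production.
    public static Func<IStateStore> Provider = () => new InMemoryStateStore();

    public static IStateStore Create()
    {
        return Provider();
    }
}

public class InMemoryStateStore : IStateStore
{
    private readonly Dictionary<string, object> data = new Dictionary<string, object>();

    public object Get(string key)
    {
        object value;
        data.TryGetValue(key, out value);
        return value;
    }

    public void Put(string key, object value)
    {
        data[key] = value;
    }
}
```

In production the Provider delegate would be swapped for one returning a database- or server-backed implementation; client code compiled against IStateStore never notices the difference.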

This is not a pipe dream by any means – there are frameworks out there that are designed to place ‘shims’ between layers of a system to allow the shim to be expanded out into a full-blown proxy that can communicate through some inter-machine protocol, or just devolve to plain old in-proc communication while in a dev environment. The question is, can you create an IL rewriter tool that is smart enough to work out how to insert the shims based on its own knowledge of good design principles? Could I load it up with a set of commandments graven in stone, like “never store user session data in HttpContext.Application”? If it found a violation of such a cardinal sin, could it insert a proxy that would redirect the flow away from the violated resource, exploiting some kind of resource allocation mechanism that wasn’t so unscalable?

From my own experience, these systems require you to be able to define service boundaries to allow the system to know which parts to make indirect and which parts to leave as-is. Juval Lowy made a strong case for the idea that every object should be treated as a service, and all we lack is a language that will seamlessly allow us to work with services as though they were objects. Imagine if we could do that, providing an ‘abstractor tool’ as part of the build cycle. While I found his case compelling, my experience of WCF (which was the context of his assertions) is that configuring the blasted thing would be more onerous than refactoring all those references. But if I could just instruct my IL rewriter to do the heavy lifting, then I might be more likely to consider the idea.

Perhaps PostSharp has the wherewithal to address this problem without us having to resort to extremes or refactor a single line? PostSharp has two incredible features that make it a plausible candidate for such a job. The first is the multicast attribute feature, which would allow me to designate a group of APIs declaratively as the service boundary of a component. The original code is never touched, but by using an OnMethodInvocationAspect you could intercept every call to the API, turning them into remote service invocations. The second part of the puzzle is compile-time instantiation of an aspect – in this system your aspects are instantiated at compile time, given an opportunity to perform some analysis, and then to serialize the results of that analysis for runtime use when the aspect is invoked as part of a call-chain. The combination of these two allows you to perform an arbitrary amount of compile-time analysis prior to generating a runtime proxy object that could intercept those method calls necessary to enforce the rules designated in the multicast attribute. The aspect could perform reflective analysis and comparison against a rules base (just like FxCop), but with an added collection of refactorings it could take to alleviate the problem. The user might still have to provide hints about where to get and store data, or how the app was to be deployed, but given high-level information, perhaps the aspect could self-configure?

Now that would be a useful app – a definite must-have addition to any consultant’s toolkit.

PostSharp Laos – Beautiful AOP.

I’ve recently been using PostSharp 1.5 (Laos) to implement various features such as logging, tracing, API performance counter recording, and repeatability on the softphone app I’ve been developing. Previously, we’d either been using hand-rolled code-generation systems to augment the APIs with IDisposable-style wrappers, or hand-coding the wrappers within the implementation code. The problem was that by the time we’d added all of the above, there were hundreds of lines of code to maintain around the few lines that actually provided a business benefit.

Several years ago, when I worked for Avanade, I worked on a very large scale project that used the Avanade Connected Architecture (ACA.NET) – a proprietary competitor for PostSharp. We found Aspect Oriented Programming (AOP) to be a great way to focus on the job at hand and reliably dot all the ‘i’s and cross all the ‘t’s in a single step at a later date.

ACA.NET, at that time, used a precursor of the P&P Configuration Application Block and performed a form of post-build step to create external wrappers that instantiated the aspect call chain prior to invoking your service method. That was a very neat step that allowed configurable specification of applicable aspects. It allowed us to develop the code in a very naive in-proc way, and then augment the code with top-level exception handlers, transactionality etc. at the same time that we changed the physical deployment architecture. Since then, I’ve missed having such a tool, so it was a pleasure to finally acquaint myself with PostSharp.

I’d always been intending to introduce PostSharp here, but I’d just never had time to do it. Well, I finally found the time in recent weeks and was able to do that most gratifying thing – remove and simplify code, improve performance and code quality, reduce maintenance costs and increase the ease with which I introduce new code policies, all in a single step. And all without even scratching the surface of what PostSharp is capable of.

Here’s a little example of the power of AOP using PostSharp, inspired by Elevate’s memoize extension method. We try to classify as many of our APIs as possible as Pure or Impure. Those that are impure get database locks, retry handlers and so on. Those that are pure in a functional sense can be cached, or memoized. Those that are not pure in the functional sense – those that, while not saving any data, are still not one-to-one between arguments and result – sadly make up most of mine (it’s a distributed, event-driven app).

[Serializable]
public class PureAttribute : OnMethodInvocationAspect
{
    Dictionary<int, object> PreviousResults = new Dictionary<int, object>();

    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        int hashcode = GetArgumentArrayHashcode(eventArgs.Method, eventArgs.GetArgumentArray());
        if (PreviousResults.ContainsKey(hashcode))
        {
            // cached: skip the call and return the previous result
            eventArgs.ReturnValue = PreviousResults[hashcode];
        }
        else
        {
            eventArgs.Proceed();
            PreviousResults[hashcode] = eventArgs.ReturnValue;
        }
    }

    public int GetArgumentArrayHashcode(MethodInfo method, params object[] args)
    {
        StringBuilder result = new StringBuilder(method.GetHashCode().ToString());
        foreach (object item in args)
            result.Append(item.GetHashCode());
        return result.ToString().GetHashCode();
    }
}

I love what I achieved here, not least because it took me no more than about 20 lines of code. But that’s not the real killer feature, for me. It’s the fact that PostSharp Laos has MulticastAttributes, which let me apply the advice to numerous methods in a single instruction – or to numerous classes, or even to every method of every class in an assembly. I can specify what to attach the aspects to using regular expressions or wildcards. Here’s an example that applies an aspect to all public methods in class MyServiceClass.

[assembly: Pure(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyAssembly",
    AttributeTargetTypes = "UiFacade.MyServiceClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeTargetMembers = "*")]

Here’s an example that uses a wildcard to NOT apply the aspect to those methods that end in “Impl”.

[assembly: Pure(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyAssembly",
    AttributeTargetTypes = "UiFacade.MyServiceClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeExclude = true,
    AttributeTargetMembers = "*Impl")]
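Stripped of the aspect plumbing, the caching idea at the heart of [Pure] is just a higher-order function. Here’s a minimal plain-C# sketch of it (my own illustration – PostSharp’s value is precisely that you don’t have to wrap every call site like this by hand):

```csharp
using System;
using System.Collections.Generic;

public static class Memoizer
{
    // Minimal sketch: caches the results of a single-argument pure function.
    public static Func<T, R> Memoize<T, R>(Func<T, R> f)
    {
        var cache = new Dictionary<T, R>();
        return x =>
        {
            R result;
            if (!cache.TryGetValue(x, out result))
            {
                result = f(x);
                cache[x] = result;
            }
            return result;
        };
    }
}
```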

Do you use AOP? What aspects do you use, other than the usual suspects above?

Wanted: Volunteers for .NET semantic web framework project

LinqToRdf* is a full-featured LINQ** query provider for .NET written in C#. It provides developers with an intuitive way to make queries on semantic web databases. The project has been going for over a year and it’s starting to be noticed by semantic web early adopters and semantic web product vendors***. LINQ provides a standardised query language and, through LinqToRdf, a platform enabling any developer to work with systems built on semantic web technologies. It will help those who don’t have the time to ascend the semantic web learning curve to become productive quickly.

The project’s progress and momentum needs to be sustained to help it become the standard API for semantic web development on the .NET platform. For that reason I’m appealing for volunteers to help with the development, testing, documentation and promotion of the project.

Please don’t be concerned that all the best parts of the project are done. Far from it! It’s more like the foundations are in place, and now the system can be used as a platform to add new features. There are many cool things that you could take on. Here are just a few:

Reverse engineering tool
This tool will use SPARQL to interrogate a remote store to get metadata to build an entity model.

Tutorials and Documentation
The documentation desperately needs the work of a skilled technical writer. I’ve worked hard to make LinqToRdf an easy tool to work with, but the semantic web is not a simple field. If it were, there’d be no need for LinqToRdf after all. This task will require an understanding of the LINQ, ASP.NET, C#, SPARQL, RDF, Turtle, and SemWeb.NET systems. It won’t be a walk in the park.


Supporting SQL Server
The SemWeb.NET API has recently added support for SQL Server, which has not been exploited inside LinqToRdf (although it may be easy to do). This task would also involve thinking about robust, scalable architectures for semantic web applications in the .NET space.


Porting LinqToRdf to Mono
LINQ and C# 3.0 support in Mono is now mature enough to make this a desirable prospect. Nobody’s had the courage yet to tackle it. Clearly, this would massively extend the reach of LinqToRdf, and it would be helped by the fact that some of the underlying components are developed for Mono by default.


SPARQL Update (SPARUL) Support
LinqToRdf provides round-tripping only for locally stored RDF. Support of SPARQL Update would allow data round-tripping on remote stores. This is not a fully ratified standard, but it’s only a matter of time.


Demonstrators using large scale web endpoints
There are now quite a few large scale systems on the web with SPARQL endpoints. It would be a good demonstration of LinqToRdf to be able to mine them for useful data.


These are just some of the things that need to be done on the project. I’ve been hoping to tackle them all for some time, but there’s just too much for one man to do alone. If you have some time free and you want to learn more about LINQ or the Semantic Web, there is not a better project on the web for you to join. If you’re interested, reply to this post letting me know how you could contribute, or what you want to tackle. Alternatively, join the LinqToRdf discussion group and reply to this message there.




Andrew Matthews





Announcing LinqToRdf v0.8

I’m very pleased to announce the release of version 0.8 of LinqToRdf. This release is significant for a couple of reasons. Firstly, because it provides a preview release of RdfMetal and secondly because it is the first release containing changes contributed by someone other than yours truly. The changes in this instance being provided by Carl Blakeley of OpenLink Software.

LinqToRdf v0.8 has received a few major chunks of work:

  • New installers for both the designer and the whole framework
    WIX was proving to be a pain, so I downgraded to the integrated installer generator in Visual Studio.
  • A preview release of RdfMetal. I brought this release forward a little, on Carl Blakeley’s request, to coincide with a post he’s preparing on using OpenLink Virtuoso with LinqToRdf, so RdfMetal is not as fully baked as I’d planned. But it’s still worth a look. Expect a minor release in the next few weeks with additional fixes/enhancements.

I’d like to extend a very big thank-you to Carl for the work he’s done in recent weeks to help extend and improve the mechanisms LinqToRdf uses to represent and traverse relationships. His contributions also include improvements in representing default graphs and referencing multiple ontologies within a single .NET class. He also provided fixes around the quoting of URIs and some other fixes in the ways LinqToRdf generates SPARQL for default graphs. Carl also provided an interesting example application using OpenLink Virtuoso’s hosted version of Musicbrainz that is significantly richer than the test ontology I created for the unit tests and manuals.

I hope that Carl’s contributions represent an acknowledgement by OpenLink that not only does LinqToRdf support Virtuoso, but that there is precious little else in the .NET space that stands a chance of attracting developers to the semantic web. .NET is a huge untapped market for semantic web product vendors. LinqToRdf is, right now, the best way to get into semantic web development on .NET.

Look out for blog posts from Carl in the next day or two, about using LinqToRdf with OpenLink Virtuoso.

State Machines in C# 3.0 using T4 Templates

UPDATE: The original code for this post, that used to be available via a link on this page, is no longer available. I’m afraid that if you want to try this one out, you’ll have to piece it together using the snippets contained in this post. Sorry for the inconvenience – blame it on ISP churn.

Some time back I wrote about techniques for implementing non-deterministic finite automata (NDFAs) using some of the new features of C# 3.0. Recently I’ve had a need to revisit that work to provide a client with a means to generate a bunch of really complex state machines in a lightweight, extensible and easily understood model. VS 2008 and C# 3.0 are pretty much the perfect platform for the job – they combine partial classes and methods, lambda functions and T4 templates, making it a total walk in the park. This post will look at the prototype system I put together. It’s a very code-intensive post – sorry about that, but it’s late and apparently my eyes are very red, puffy and panda-like.

State machines are at the core of many applications – yet we often find people hand-coding them with nested switch statements and grisly mixtures of state control and business logic. It’s a nightmare scenario, making code completely unmaintainable for anything but the most trivial applications.

The key objective for a dedicated application framework that manages a state machine is to provide a clean way to break out the code that manages the state machine from the code that implements the activities performed as part of the state machine. C# 3.0 has a nice solution for this – partial types and methods.

Partial types and methods

A partial type is a type whose definition is not confined to a single code module – it can have multiple modules. Some of those can be written by you, others can be written by a code generator. Here’s an example of a partial class definition:

public partial class MyPartialClass{}

By declaring the class to be partial, you say that other files may contain parts of the class definition. The point of this kind of structure is that you might have pieces of code that you want to write by hand, and others that you want driven from a code generator – stuff that gets overwritten every time the generator runs. If your code got erased every time you ran the generator, you’d get bored very quickly. You need a way to chop out the bits that don’t change. Typically, these will be framework or infrastructure stuff.

Partial classes can also have partial methods. Partial methods allow you to define a method signature in case someone wants to define it in another part of the partial class. This might seem pointless, but wait and see – it’s nice. Here’s how you declare a partial method:

// the code-generated part
public partial class MyPartialClass
{
    partial void DoIt(int x);
}

You can then implement it in another file like so:

// the hand-written part
partial class MyPartialClass
{
    partial void DoIt(int x)
    {
        throw new NotImplementedException();
    }
}

This is all a little abstract, right now, so let’s see how we can use this to implement a state machine framework. First we need a way to define a state machine. I’m going to use a simple XML file for this:

<?xml version="1.0" encoding="utf-8" ?>
<StateModels>
  <StateModel ID="My" start="defcon1">
    <States>
      <State ID="defcon1" name="defcon1"/>
      <State ID="defcon2" name="defcon2"/>
      <State ID="defcon3" name="defcon3"/>
    </States>
    <Inputs>
      <Input ID="diplomaticIncident" name="diplomaticIncident"/>
      <Input ID="assassination" name="assassination"/>
      <Input ID="coup" name="coup"/>
    </Inputs>
    <Transitions>
      <Transition from="defcon1" to="defcon2" on="diplomaticIncident"/>
      <Transition from="defcon2" to="defcon3" on="assassination"/>
      <Transition from="defcon3" to="defcon1" on="coup"/>
    </Transitions>
  </StateModel>
</StateModels>

Here we have a really simple state machine with three states (defcon1, defcon2 and defcon3) as well as three kinds of input (diplomaticIncident, assassination and coup). Please excuse the militarism – I just finished watching a season of 24, so I’m all hyped up. This simple model also defines three transitions, forming a cycle: defcon1 → defcon2 → defcon3 → defcon1.

Microsoft released the Text Template Transformation Toolkit (T4) system with Visual Studio 2008. This toolkit has been part of GAT and DSL tools in the past, but this is the first time that it has been available by default in VS. It allows an ASP.NET syntax for defining templates. Here’s a snippet from the T4 template that generates the state machine:

<#@ template language="C#" #>
<#@ assembly name="System.Xml.dll" #>
<#@ import namespace="System.Xml" #>
<#
    XmlDocument doc = new XmlDocument();
    // (loading the XML model file is elided here)
    XmlElement xnModel = (XmlElement)doc.SelectSingleNode("/StateModels/StateModel");
    string ns = xnModel.GetAttribute("ID");
    XmlNodeList states = xnModel.SelectNodes("descendant::State");
    XmlNodeList inputs = xnModel.SelectNodes("descendant::Input");
    XmlNodeList trns = xnModel.SelectNodes("descendant::Transition");
#>
using System;
using System.Collections.Generic;

namespace <#=ns#> {
public enum <#=ns#>States : int{
<#
string sep = "";
foreach(XmlElement s in states)
{
    Write(sep + s.GetAttribute("ID"));
    WriteLine(@"// " + s.GetAttribute("name"));
    sep = ",";
}
#>
} // end enum <#=ns#>States

public enum <#=ns#>Inputs : int{
<#
sep = "";
foreach(XmlElement s in inputs)
{
    Write(sep + s.GetAttribute("ID"));
    WriteLine(@"// " + s.GetAttribute("name"));
    sep = ",";
}
#>
} // end enum <#=ns#>Inputs

public partial class <#=ns#>StateModel {

    public <#=ns#>StateModel()

Naturally, there’s a lot in the template, but we’ll get to that later. First we need a representation of a state. You’ll see from the template that an enum called <#=ns#>States gets generated. Here’s what it looks like for the defcon model.

public enum MyStates : int {
    defcon1,  // defcon1
    defcon2,  // defcon2
    defcon3   // defcon3
} // end enum MyStates

This is still a bit too bare for my liking. I can’t attach an event model to these states, so here’s a class that can carry around one of these values:

public class State
{
    public int Identifier { get; set; }

    public delegate void OnEntryEventHandler(object sender, OnEntryEventArgs e);
    // ...
    public event OnEntryEventHandler OnEntryEvent;
    // ...
}

There’s a lot left out of this, but the point is that as well as storing an identifier for a state, it has events for both entry into and exit from the state. This can be used by the event framework of the state machine to provide hooks for your custom state transition and entry code. The same model is used for transitions:

public class StateTransition
{
    public State FromState { get; set; }
    public State ToState { get; set; }

    public event OnStateTransitioningEventHandler OnStateTransitioningEvent;
    public event OnStateTransitionedEventHandler OnStateTransitionedEvent;
}

Here’s the list of inputs that can trigger a transition between states:

public enum MyInputs : int {
    diplomaticIncident,  // diplomaticIncident
    assassination,       // assassination
    coup                 // coup
} // end enum MyInputs

The template helps to define storage for the states and transitions of the model:

public static Dictionary<<#= ns#>States, State> states = new Dictionary<<#= ns#>States, State>();
public static Dictionary<string, StateTransition> arcs = new Dictionary<string, StateTransition>();
public State CurrentState { get; set; }

which, for the model above, will yield the following:

public static Dictionary<MyStates, State> states = new Dictionary<MyStates, State>();
public static Dictionary<string, StateTransition> arcs = new Dictionary<string, StateTransition>();
public State CurrentState { get; set; }

Now we can create entries in these tables for the transitions in the model:

private void SetStartState()
{
    CurrentState = states[<#= ns#>States.<#=xnModel.GetAttribute("start")#>];
}

private void SetupStates()
{
<#
foreach(XmlElement s in states)
{
    WriteLine("states[" + ns + "States." + s.GetAttribute("ID") + "] = new State { Identifier = (int)" + ns + "States." + s.GetAttribute("ID") + " };");
    WriteLine("states[" + ns + "States." + s.GetAttribute("ID") + "].OnEntryEvent += (x, y) => OnEnter_" + s.GetAttribute("ID") + "();");
    WriteLine("states[" + ns + "States." + s.GetAttribute("ID") + "].OnExitEvent += (x, y) => OnLeave_" + s.GetAttribute("ID") + "();");
}
#>
}

private void SetupTransitions()
{
<# foreach(XmlElement s in trns) { #>
    arcs["<#=s.GetAttribute("from")#>_<#=s.GetAttribute("on")#>"] = new StateTransition
    {
        FromState = states[<#= ns#>States.<#=s.GetAttribute("from")#>],
        ToState = states[<#= ns#>States.<#=s.GetAttribute("to")#>]
    };
    arcs["<#=s.GetAttribute("from")#>_<#=s.GetAttribute("on")#>"].OnStateTransitioningEvent += (x, y) => MovingFrom_<#=s.GetAttribute("from")#>_To_<#=s.GetAttribute("to")#>();
    arcs["<#=s.GetAttribute("from")#>_<#=s.GetAttribute("on")#>"].OnStateTransitionedEvent += (x, y) => MovedFrom_<#=s.GetAttribute("from")#>_To_<#=s.GetAttribute("to")#>();
<# } #>
}

This is where the fun starts. First, notice that we create a new state for each state in the model and attach a lambda to the entry and exit events of each state. For our model, that looks like this:

private void SetupStates()
{
    states[MyStates.defcon1] = new State { Identifier = (int)MyStates.defcon1 };
    states[MyStates.defcon1].OnEntryEvent += (x, y) => OnEnter_defcon1();
    states[MyStates.defcon1].OnExitEvent += (x, y) => OnLeave_defcon1();

    states[MyStates.defcon2] = new State { Identifier = (int)MyStates.defcon2 };
    states[MyStates.defcon2].OnEntryEvent += (x, y) => OnEnter_defcon2();
    states[MyStates.defcon2].OnExitEvent += (x, y) => OnLeave_defcon2();

    states[MyStates.defcon3] = new State { Identifier = (int)MyStates.defcon3 };
    states[MyStates.defcon3].OnEntryEvent += (x, y) => OnEnter_defcon3();
    states[MyStates.defcon3].OnExitEvent += (x, y) => OnLeave_defcon3();
}

For the Transitions the same sort of code gets generated, except that we have some simple work to generate a string key for a specific <state, input> pair. Here’s what comes out:

private void SetupTransitions()
{
    arcs["defcon1_diplomaticIncident"] = new StateTransition {
        FromState = states[MyStates.defcon1],
        ToState = states[MyStates.defcon2]
    };
    arcs["defcon1_diplomaticIncident"].OnStateTransitioningEvent += (x, y) => MovingFrom_defcon1_To_defcon2();
    arcs["defcon1_diplomaticIncident"].OnStateTransitionedEvent += (x, y) => MovedFrom_defcon1_To_defcon2();

    arcs["defcon2_assassination"] = new StateTransition {
        FromState = states[MyStates.defcon2],
        ToState = states[MyStates.defcon3]
    };
    arcs["defcon2_assassination"].OnStateTransitioningEvent += (x, y) => MovingFrom_defcon2_To_defcon3();
    arcs["defcon2_assassination"].OnStateTransitionedEvent += (x, y) => MovedFrom_defcon2_To_defcon3();

    arcs["defcon3_coup"] = new StateTransition {
        FromState = states[MyStates.defcon3],
        ToState = states[MyStates.defcon1]
    };
    arcs["defcon3_coup"].OnStateTransitioningEvent += (x, y) => MovingFrom_defcon3_To_defcon1();
    arcs["defcon3_coup"].OnStateTransitionedEvent += (x, y) => MovedFrom_defcon3_To_defcon1();
}

You can see that for each state and transition event I’m adding lambdas that invoke methods that are also code-generated. These are the partial methods described earlier. Here’s the generator:

<#
foreach(XmlElement s in states)
{
    WriteLine("partial void OnLeave_" + s.GetAttribute("ID") + "();");
    WriteLine("partial void OnEnter_" + s.GetAttribute("ID") + "();");
}
foreach(XmlElement s in trns)
{
    WriteLine("partial void MovingFrom_" + s.GetAttribute("from") + "_To_" + s.GetAttribute("to") + "();");
    WriteLine("partial void MovedFrom_" + s.GetAttribute("from") + "_To_" + s.GetAttribute("to") + "();");
}
#>

Which gives us:

partial void OnLeave_defcon1();
partial void OnEnter_defcon1();
partial void OnLeave_defcon2();
partial void OnEnter_defcon2();
partial void OnLeave_defcon3();
partial void OnEnter_defcon3();
partial void MovingFrom_defcon1_To_defcon2();
partial void MovedFrom_defcon1_To_defcon2();
partial void MovingFrom_defcon2_To_defcon3();
partial void MovedFrom_defcon2_To_defcon3();
partial void MovingFrom_defcon3_To_defcon1();
partial void MovedFrom_defcon3_To_defcon1();

The C# 3.0 spec states that if you don’t choose to implement one of these partial methods then the effect is similar to attaching a ConditionalAttribute to it – it gets taken out and no trace is left of it ever having been declared. That’s nice, because for some state models you may not want to do anything other than make the transition.
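A tiny illustration of that elision rule (my own example, not from the state machine code): a partial method with no implementation, and every call to it, is erased by the compiler, while an implemented one runs as normal.

```csharp
using System.Collections.Generic;

public partial class Quiet
{
    public readonly List<string> Log = new List<string>();

    partial void OnImplemented();
    partial void OnUnimplemented();

    public void Poke()
    {
        OnImplemented();    // runs: an implementation exists in the other part
        OnUnimplemented();  // erased entirely: no implementation anywhere
    }
}

public partial class Quiet
{
    partial void OnImplemented() { Log.Add("ran"); }
}
```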

We now have a working state machine with masses of extensibility points that we can use as we see fit. Say we decided to implement a few of these methods like so:

public partial class MyStateModel {
    partial void OnEnter_defcon1()
    {
        Debug.WriteLine("Going Into defcon1.");
    }
    partial void OnEnter_defcon2()
    {
        Debug.WriteLine("Going Into defcon2.");
    }
    partial void OnEnter_defcon3()
    {
        Debug.WriteLine("Going Into defcon3.");
    }
}

Here’s how you’d invoke it:

MyStateModel dfa = new MyStateModel();
dfa.ProcessInput((int) MyInputs.diplomaticIncident);
dfa.ProcessInput((int) MyInputs.assassination);
dfa.ProcessInput((int) MyInputs.coup);
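ProcessInput itself isn’t shown in this post; here’s a plausible, self-contained sketch of the runtime side. The body of ProcessInput is my assumption, and the dictionary keys are simplified (integer-based, standing in for the generated “defcon1_diplomaticIncident”-style strings):

```csharp
using System;
using System.Collections.Generic;

// Simplified sketch of the runtime classes; the real generated model wires
// these tables up from the T4 template.
public class State
{
    public int Identifier { get; set; }
    public event EventHandler OnEntryEvent;
    public event EventHandler OnExitEvent;
    public void Enter() { if (OnEntryEvent != null) OnEntryEvent(this, EventArgs.Empty); }
    public void Exit() { if (OnExitEvent != null) OnExitEvent(this, EventArgs.Empty); }
}

public class StateTransition
{
    public State FromState { get; set; }
    public State ToState { get; set; }
}

public class StateModel
{
    public Dictionary<int, State> States = new Dictionary<int, State>();
    public Dictionary<string, StateTransition> Arcs = new Dictionary<string, StateTransition>();
    public State CurrentState { get; set; }

    // Assumed implementation: look up the arc for the current <state, input>
    // pair, fire the exit/entry events, and move to the destination state.
    public void ProcessInput(int input)
    {
        string key = CurrentState.Identifier + "_" + input;
        StateTransition arc = Arcs[key];
        arc.FromState.Exit();
        CurrentState = arc.ToState;
        arc.ToState.Enter();
    }
}
```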

And here’s what you’d get:

Going Into defcon2.
Going Into defcon3.
Going Into defcon1.

There’s a lot you can do to improve the model I’ve presented (like passing context info into the event handlers, and allowing some of the event handlers to veto state transitions). But I hope it shows how partial-method support in conjunction with T4 templates makes light work of this perennial problem. It could easily save you from writing thousands of lines of tedious and error-prone boilerplate code. That, for me, is a complete no-brainer.

What I like about this model is the ease with which I was able to get code generation. I just added a file with extension ‘.tt’ to VS 2008 and it immediately started generating C# from it. All I needed to do at that point was load up my XML file and feed it into the template. I like the fact that the system is lightweight: there isn’t a mass of framework taking over the state management, it’s infinitely extensible, and it allows a very quick turnaround on state-model changes.

What do you think? How would you tackle this problem?

Søren on DBC

Recently, Søren Skovsbøll wrote an excellent follow-up to a little post I did a while back on using C# 3.0 expression trees for representing predicates in design by contract. The conclusion of that series was that C# was inadequate in lots of ways for the task of doing design by contract. Having said that, you can still achieve a lot using serialisation of object states and storage of predicates for running before and after a scope.

Søren was not happy with the format of errors being reported, nor the potential for massive serialisation blowout. Rather than comment on the blog, he went away and did something about it. And it’s pretty good! Go take a look, and then pick up the baton from him. Your challenge is to extract the parameter objects from the expression trees of the predicates and take lightweight snapshots of the objects referred to. You also need a “platform independent” way to serialize objects for this scheme (i.e. one that doesn’t depend on XmlSerialisation or WCF data contracts).
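On the extraction part of that challenge: it mostly comes down to walking the predicate’s expression tree to its ConstantExpressions, because locals closed over by a lambda surface as fields of a compiler-generated closure object held in a ConstantExpression. A rough sketch of the traversal (my own, assuming nothing about Søren’s actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class CaptureWalker
{
    // Rough sketch: collects the constant values referenced by a predicate
    // expression. Closed-over locals appear as fields on compiler-generated
    // closure objects, which show up here as ConstantExpressions.
    public static List<object> CapturedObjects(Expression e)
    {
        var found = new List<object>();
        Walk(e, found);
        return found;
    }

    static void Walk(Expression e, List<object> found)
    {
        if (e == null) return;
        switch (e.NodeType)
        {
            case ExpressionType.Constant:
                found.Add(((ConstantExpression)e).Value);
                break;
            case ExpressionType.MemberAccess:
                Walk(((MemberExpression)e).Expression, found);
                break;
            case ExpressionType.Lambda:
                Walk(((LambdaExpression)e).Body, found);
                break;
            default:
                var b = e as BinaryExpression;
                if (b != null) { Walk(b.Left, found); Walk(b.Right, found); }
                var u = e as UnaryExpression;
                if (u != null) Walk(u.Operand, found);
                break;
        }
    }
}
```

A real implementation would also have to handle method-call arguments, conditionals and so on, and then take a lightweight snapshot of each object found rather than serialising the whole graph.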

Think you can do it? Apply here! :P