Month: October 2009

www.SemanticOverflow.com – the Web 2.0 Q&A site for all things Web 3.0.

www.SemanticOverflow.com is a new site based on the hugely popular StackOverflow.com, devoted to Q&A on anything related to the semantic web. The site is very new (created today) and I’m trying to get as many people to visit as I can, so please come and post your questions and together we’ll create a thriving community dedicated just to the semantic web. It’s free to join, free to ask questions, and most important of all – free to see the answers. You don’t even have to sign up to view, ask or answer questions. If you do join, all you get is kudos.
StackOverflow has been an overnight sensation because it combines many of the traditional features of wikis, newsgroups and social media platforms with a reputation system that promotes community involvement and high-quality discussion. I’m sure that as the Semantic Web starts to go exponential, Semantic Overflow will be a useful forum for more than just technical questions. I hope you’ll use it for speculation about the directions of the field itself. Unlike StackOverflow, non-technical discussions won’t be moderated out, provided they are on-topic. I think that in such an interesting and emerging space, discussion and dissemination of knowledge is critical. If you agree, please come and join us for a question or two.
Please also tell your friends, colleagues, relatives, acquaintances, pets and neighbors and ask them all to visit the site. If you have any suggestions about other places where I could promote the site, feel free to provide an answer here.

Can AOP help fix bad architectures?

I recently posted a question on Stack Overflow on the feasibility of using IL rewriting frameworks to rectify bad design after the fact. The confines of the answer comment area were too small to give the subject proper treatment, so I thought a new blog post was in order. Here’s the original question:

I’ve recently been playing around with PostSharp, and it brought to mind a problem I faced a few years back: A client’s developer had produced a web application, but they had not given a lot of thought to how they managed state information – storing it (don’t ask me why) statically on the Application instance in IIS. Needless to say the system didn’t scale and was deeply flawed and unstable. But it was a big and very complex system and thus the cost of redeveloping it was prohibitive. My brief at the time was to try to refactor the code-base to impose proper decoupling between the components.

At the time I tried using some kind of abstraction mechanism to allow me to intercept all calls to the static resource and redirect them to a component that would properly manage the state data. The problem was there were about 1,000 complex references to be redirected (and I didn’t have a lot of time to do it in). Manual coding (even with R#) proved to be just too time consuming – we scrapped the code base and rewrote it properly. It took over a year to rewrite.

What I wonder now is – had I had access to an assembly rewriter and/or an aspect-oriented programming system (such as PostSharp), could I have easily automated the refactoring process of finding the direct references and converting them to interface references that could be redirected automatically and supplied by factories?

Has anyone out there used PostSharp or similar systems to rehabilitate pathological legacy systems? How successful were the projects? Did you find after the fact that the effort was worth it? Would you do it again?

I subsequently got into an interesting (though possibly irrelevant) discussion with Ira Baxter on program transformation systems, AOP and the kind of intelligence a program needs in order to refactor a system, preserving the business logic whilst rectifying any design flaws. There wasn’t space there to discuss the ideas properly, so I want to expand the discussion here.

The system I was referring to had a few major flaws:

  1. The user session information (of which there was a lot) was stored statically on a specific instance of IIS. This necessitated the use of sticky sessions to ensure the relevant info was around when user requests came along.
  2. Session information was lost every time IIS recycled the app pool, thus causing the users to lose call control (the app was phone-related).
  3. State information was glommed into a tight bolus of data that could not be teased apart, so refactoring the app was an all-or-nothing thing.

As you can guess, tight coupling/lack of abstraction and direct resource dispensation were key flaws that made fail-over, disaster recovery, scaling, extension and maintenance all but impossible.
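
To make the first of those flaws concrete, here’s a minimal sketch of the shape of the problem. The names (CallState, CallInfo, ICallStateStore) are invented for illustration, not taken from the real code base:

using System.Collections.Generic;

// The anti-pattern: session state parked statically in the web tier,
// binding every caller to one IIS worker process (hence sticky sessions,
// and hence total loss of call control on every app-pool recycle).
public static class CallState
{
    public static Dictionary<string, CallInfo> ActiveCalls =
        new Dictionary<string, CallInfo>();
}

public class CallInfo { /* per-call session data */ }

// What we wanted instead: an abstraction that a factory could satisfy
// with an in-proc store, a database, or a distributed cache.
public interface ICallStateStore
{
    CallInfo Get(string sessionId);
    void Put(string sessionId, CallInfo info);
}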

This product is in a very competitive market and needs to be able to innovate to stay ahead, so the client could ill afford to waste time rewriting code while others might be catching up. My question asked, in hindsight, whether one could retroactively fix those problems without having to track down, analyse and rectify each tightly coupled reference between client code and state information, and within the state information itself.

What I needed at the time was some kind of declarative mechanism whereby I could confer the following properties on a component:

  1. location independence
  2. intercepted object creation
  3. transactional persistence

Imagine that we could do it with a mechanism like PostSharp’s multicast attributes:

[assembly: DecoupledAndSerialized(
    AspectPriority = -1,
    AttributeTargetAssemblies = "MyMainAssembly",
    AttributeTargetTypes = "MainAssembly.MyStateData",
    AttributeTargetMembers = "*")]

What would this thing have to do to be able to untie the knots that people get themselves into?

  1. It would have to be able to intercept every reference to MainAssembly.MyStateData and replace the interaction with one that got the instance from some class factory that could go off to a database or some distant server instead – that is, introduce an abstraction layer and some IoC framework (see the sketch after this list).
  2. It must ensure that the component managing object persistence checked data into and out of the database appropriately (atomically) and that all contention over that data was properly managed.
  3. It must ensure that all session specific data was properly retrieved and disposed for each request.
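
To ground item 1, this is roughly the transformation I’d have wanted the rewriter to perform at each of those thousand call sites. The factory name is hypothetical, reusing the illustrative types from the earlier sketch:

// Hypothetical illustration of one rewritten call site.
public class CallLookup
{
    // Before rewriting: a direct, tightly coupled reference to the static store.
    public CallInfo LookupBefore(string sessionId)
    {
        return CallState.ActiveCalls[sessionId];
    }

    // After rewriting: the same interaction, routed through the abstraction and
    // an IoC-style factory that decides where the state actually lives.
    public CallInfo LookupAfter(string sessionId)
    {
        ICallStateStore store = StateStoreFactory.Current; // hypothetical factory
        return store.Get(sessionId);
    }
}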

This is not a pipe dream by any means – there are frameworks out there that are designed to place ‘shims’ between the layers of a system, allowing a shim to be expanded out into a full-blown proxy that communicates through some inter-machine protocol, or to devolve to plain old in-proc communication in a dev environment. The question is, can you create an IL rewriter tool that is smart enough to work out how to insert the shims based on its own knowledge of good design principles? Could I load it up with a set of commandments graven in stone, like “never store user session data in HttpContext.Application”? If it found a violation of such a cardinal sin, could it insert a proxy that would redirect the flow away from the violated resource, exploiting some kind of resource allocation mechanism that wasn’t so unscalable?
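
Here’s what such a shim might look like – a sketch only, assuming a hypothetical StateStoreFactory that reads its topology from configuration (the two store implementations are invented stubs):

using System;
using System.Collections.Generic;
using System.Configuration;

// In a dev environment the shim devolves to plain old in-proc communication;
// in production it expands into a full-blown proxy speaking some
// inter-machine protocol to a state server.
public static class StateStoreFactory
{
    public static ICallStateStore Current
    {
        get
        {
            string mode = ConfigurationManager.AppSettings["stateStore"];
            return mode == "remote"
                ? (ICallStateStore)new RemoteStateStoreProxy()
                : new InProcStateStore();
        }
    }
}

// Hypothetical local store: fine for dev, fatal (as we learned) in production.
public class InProcStateStore : ICallStateStore
{
    private readonly Dictionary<string, CallInfo> calls = new Dictionary<string, CallInfo>();
    public CallInfo Get(string sessionId) { return calls[sessionId]; }
    public void Put(string sessionId, CallInfo info) { calls[sessionId] = info; }
}

// Hypothetical proxy: would marshal calls over the wire; elided here.
public class RemoteStateStoreProxy : ICallStateStore
{
    public CallInfo Get(string sessionId) { throw new NotImplementedException(); }
    public void Put(string sessionId, CallInfo info) { throw new NotImplementedException(); }
}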

From my own experience, these systems require you to be able to define service boundaries so that the system knows which parts to make indirect and which parts to leave as-is. Juval Lowy made a strong case for the idea that every object should be treated as a service, and that all we lack is a language that will seamlessly allow us to work with services as though they were objects. Imagine if we could do that, providing an ‘abstractor tool’ as part of the build cycle. Compelling as his case was, my experience of WCF (which was the context of his assertions) is that it would be more onerous to configure the blasted thing than it would be to refactor all those references. But if I could just instruct my IL rewriter to do the heavy lifting, then I might be more likely to consider the idea.
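
For a flavour of that onerousness, here is roughly the minimum WCF ceremony for treating just one object as a service – the contract and address are invented for illustration:

using System.ServiceModel;

[ServiceContract]
public interface ICallStateService
{
    [OperationContract]
    CallInfo Get(string sessionId);
}

// Client side: every 'object' now needs a binding, an address and a channel.
public static class CallStateClient
{
    public static ICallStateService Connect()
    {
        var factory = new ChannelFactory<ICallStateService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://stateserver/callstate")); // hypothetical address
        return factory.CreateChannel();
    }
}

Multiply that by every object in the system – plus the server-side hosting and config – and refactoring a thousand references starts to look cheap by comparison.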

Perhaps PostSharp has the wherewithal to address this problem without us having to resort to extremes or to refactor a single line? PostSharp has two incredible features that make it a plausible candidate for such a job. The first is the multicast attribute feature, which would allow me to designate a group of APIs declaratively as the service boundary of a component. The original code is never touched, but by using an OnMethodInvocationAspect you could intercept every call to the API, turning those calls into remote service invocations. The second part of the puzzle is compile-time instantiation of aspects – in this system your aspects are instantiated at compile time, given an opportunity to perform some analysis, and then serialize the results of that analysis for runtime use when the aspect is invoked as part of a call-chain. The combination of these two allows you to perform an arbitrary amount of compile-time analysis prior to generating a runtime proxy object that could intercept the method calls necessary to enforce the rules designated in the multicast attribute. The aspect could perform reflective analysis and comparison against a rules base (just like FxCop), but with an added collection of refactorings it could apply to alleviate the problem. The user might still have to provide hints about where to get and store data, or how the app was to be deployed, but given high-level information, perhaps the aspect could self-configure?
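
Here’s a sketch of how those two features might combine. I’m recalling PostSharp 1.5’s Laos API from memory (OnMethodInvocationAspect, CompileTimeInitialize, Proceed), and the rules base and redirect target are pure invention:

using System;
using System.Reflection;
using PostSharp.Laos; // PostSharp 1.5

[Serializable]
public sealed class RedirectStateAccessAttribute : OnMethodInvocationAspect
{
    // Computed once inside the post-compiler and serialized into the assembly.
    private string memberKey;

    public override void CompileTimeInitialize(MethodBase method)
    {
        memberKey = method.DeclaringType.FullName + "." + method.Name;
        // A real implementation could run its FxCop-style rules analysis here
        // and bake the chosen refactoring into the serialized aspect state.
    }

    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        if (StateRules.IsViolation(memberKey)) // hypothetical rules base
        {
            // Redirect flow away from the violated resource: serve the state
            // from a properly managed store instead of the static instance.
            eventArgs.ReturnValue =
                StateStoreFactory.Current.Get(SessionKey.Current); // hypothetical
        }
        else
        {
            eventArgs.Proceed(); // untouched code paths behave exactly as before
        }
    }
}

Multicast that one attribute across the declared service boundary, as in the assembly-level example above, and a single class would silently re-route every matching call.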

Now that would be a useful app – a definite must-have addition to any consultant’s toolkit.

Fowler’s Technical Debt Quadrant – Giving the co-ordinates where Agile is contraindicated.

Martin Fowler’s bliki has a very good post on what he calls the ‘technical debt quadrant’. This post succinctly sums up the difference between those who produce poor designs in ways that are contrary to their best interests, and those who do so knowingly and reluctantly.

In the past I’ve noted that many of those who profess to be agile are really just defaulters in the bank of technical debt. They confuse incurring reckless, inadvertent technical debt with being Agile. The YAGNI (You Ain’t Gonna Need It) maxim is an example of something that, when used prudently and knowingly, is a sensible (temporary) acceptance of short-term technical debt. In the hands of those who recklessly and inadvertently incur technical debt, it becomes a waiver for all sorts of abuses of good software development practice. It’s not uncommon in small cash-strapped start-ups to hear people invoke the YAGNI maxim as they contemplate key architectural issues, confusing cost-cutting with architectural parsimony. I think it’s safe to say that those who live in the dark waters of the bottom left of the quadrant need the formality of a more ‘regulated’ process to support them as they grope their way into the light.

Does formality have to mean expensive? I don’t think so. I think in most cases it’s a matter of perception – people assume that ‘formality’ implies back-breaking bureaucracy. It needn’t; it just means creating a framework that helps people stop, take a breath and look around them. Without that, people often lose their way and drive headlong into crippling technical debt. I’ve seen large companies that consciously avoided up-front design subsequently pay the cost of that debt for years afterwards in maintenance. One notable example would regularly burn through its OpEx budget within the first six months of each year – they had a product that was so badly designed, inconsistent and hard to maintain that they lost all forward momentum, threw away their monopoly and effectively lost their place in the market. All through lack of up-front design and architecture.

For all companies where stakeholders are either so stressed or so inexperienced that they opt for the lower left of Fowler’s quadrant, Agile is contraindicated.

Eeebuntu

Perhaps the day has finally arrived when GNU/Linux seems like a viable option. Every six months or so I try out the latest GNU/Linux distros to see how they’re progressing. I look at them from the usual jaundiced perspective of the professional programmer. Not from the naive perspective of teenage rebellion. Normally I end up wandering off in disgust at the unfinished feel of the whole ensemble.

All I want is an environment that doesn’t fight back when I use it. I only care whether it is possible to run all the usual tools with a minimum of fuss and bother. And, believe it or not, that’s fairly true of Eeebuntu. (The sole exception is that MonoDevelop is still a very long way behind VS 2010 or even SharpDevelop, but you can still do real C# development, and that’s so very cool.)

I guess I’m lucky because I chose a distro specifically designed for my brand of machine, so everything is tuned and provided and properly configured. But it still amazes me that I am up and running in minutes (including dev tools) where previously on GNU/Linux that would have taken weeks of frustration.

I love the package management systems – they are (so far) flawless, as was the OS installer. Google Chrome, MonoDevelop, Mono, XSP2 and even ASP.NET MVC 1.0 are all available directly from the synaptic package manager and all install and run completely without hitches.

MonoDevelop even has settings allowing it to default to using VS 2010 project file formats! I’m impressed. Really. No piss taking at all. Well done, all involved.