
Less Intrusive Visitors

Forgive the recent silence – I’ve been in my shed.

Frequently, I need some variation on the Visitor or HierarchicalVisitor patterns
to analyse or transform an object graph. Recent work on a query builder
for an old-skool query API sent my thoughts once again to the Visitor pattern. I
normally hand roll these frameworks based on my experiences with recursive
descent compilers, but this time I thought I’d produce a more GoF-compliant
implementation.

The standard implementation of the visitor looks a lot like the code example below. First you
define some sort of domain model (often following the composite pattern).
This illustration doesn’t bother with a composite; I’ll show one later on, with an
accompanying HierarchicalVisitor implementation.

abstract class BaseElement {
  public abstract void Accept(IVisitor v);
}
class T1 : BaseElement {
  public override void Accept(IVisitor v) {
    v.Visit(this);
  }
}
class T2 : BaseElement {
  public override void Accept(IVisitor v) {
    v.Visit(this);
  }
}
class T3 : BaseElement {
  public override void Accept(IVisitor v) {
    v.Visit(this);
  }
}
interface IVisitor {
  void Visit(T1 t1);
  void Visit(T2 t2);
  void Visit(T3 t3);
}

Here’s an implementation of the visitor. Normally you’d give default
implementations via an abstract base class; I’ll show how that’s done later.

class MyVisitor : IVisitor {
  public void Visit(T1 t1) {
    // do something here
  }
  public void Visit(T2 t2) {
    // do something here
  }
  public void Visit(T3 t3) {
    // do something here
  }
}
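Using it is then classic double dispatch – the virtual Accept call selects the element’s concrete type, which in turn selects the right Visit overload. A two-line sketch:

BaseElement element = new T2();
element.Accept(new MyVisitor()); // ends up in MyVisitor.Visit(T2)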

The Accept methods are on the domain model entities themselves. What if I have a
composite graph of objects that are not conveniently derived from some abstract
class or interface? What if I want to iterate or navigate
the structures in alternate ways? What if I don’t want to (or can’t) pollute
my domain model with visitation code?

I thought it might be cleaner to factor out the responsibility for the
dispatching into another class – a Dispatcher. I provide the Dispatcher from my
client code and am still able to visit each element in turn. Surprisingly, the
result is slightly cleaner than the standard implementation of the pattern,
sacrificing nothing, but gaining a small increment in applicability.

Let’s contrast this canonical implementation with one that uses anemic objects
for the domain model. First we need to define a little composite pattern to
iterate over. This time, I’ll give the abstract base class for the entities
and for the visitors and show a composite pattern as well.

abstract class AbstractBase {
  public string Name {get;set;}
}
class Composite : AbstractBase {
  public string NonTerminalIdentifier { get; set; }
  public Composite(string nonTerminalIdentifier) {
    Name = "Composite";
    NonTerminalIdentifier = nonTerminalIdentifier;
  }
  public List<AbstractBase> SubParts = new List<AbstractBase>();
}
class Primitive1 : AbstractBase {
  public Primitive1() {
    Name = "Primitive1";
  }
}
class Primitive2 : AbstractBase {
  public Primitive2() {
    Name = "Primitive2";
  }
}

A composite class plus a couple of primitives. Next, let’s look at the visitor
interface.

interface IVisitor {
  void Visit(Primitive1 p1);
  void Visit(Primitive2 p2);
  bool StartVisit(Composite c);
  void EndVisit(Composite c);
}

According to the discussions at the Portland Pattern Repository, this could be
called the HierarchicalVisitor pattern, but I suspect most applications of
visitor are over hierarchical object graphs, and they mostly end up like this,
so I won’t dwell on the name too much. True to form, it provides mechanisms to
visit each type of element allowed in our object graph. Next comes the Dispatcher,
which controls the navigation over the object graph. This is the departure from the
canonical model: a conventional implementation of visitor places this code in
the composite model itself, which seems unnecessary. Accept overloads are
provided for each type in the domain model.

class Dispatcher {
  public static void Accept<TV>(Primitive1 p1, TV visitor)
    where TV : IVisitor {
    visitor.Visit(p1);
  }
  public static void Accept<TV>(Primitive2 p2, TV visitor)
    where TV : IVisitor {
    visitor.Visit(p2);
  }
  public static void Accept<TV>(Composite c, TV visitor)
    where TV : IVisitor {
    if (visitor.StartVisit(c)) {
      foreach (var subpart in c.SubParts) {
        if (subpart is Primitive1) {
          Accept(subpart as Primitive1, visitor);
        }
        else if (subpart is Primitive2) {
          Accept(subpart as Primitive2, visitor);
        }
        else if (subpart is Composite) {
          Accept(subpart as Composite, visitor);
        }
      }
      visitor.EndVisit(c);
    }
  }
}

The dispatcher’s first parameter is the object graph element
itself. This provides the context that was implicit in the conventional
implementation. This is a trade-off. On the one hand, you cannot access any
private object state inside the dispatch code. On the other hand, you can
have multiple different dispatchers for different tasks. Another drawback of
an ‘external’ dispatcher is the need for old-fashioned type-switch
statements in the Composite acceptor. The Composite stores its sub-parts as
references to the AbstractBase class, so the dispatcher must decide for itself
which Accept overload should handle the sub-part in question.
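Incidentally, that if/else chain is not the only option. Here’s a sketch of my own (TableDispatcher is an invented name) that keys the dispatch on the runtime type instead:

using System;
using System.Collections.Generic;

class TableDispatcher {
  // Maps each concrete element type to the code that forwards it to the
  // right Visit overload.
  static readonly Dictionary<Type, Action<AbstractBase, IVisitor>> Handlers =
    new Dictionary<Type, Action<AbstractBase, IVisitor>> {
      { typeof(Primitive1), (e, v) => v.Visit((Primitive1)e) },
      { typeof(Primitive2), (e, v) => v.Visit((Primitive2)e) },
      { typeof(Composite),  (e, v) => AcceptComposite((Composite)e, v) }
    };

  public static void Accept(AbstractBase element, IVisitor visitor) {
    Handlers[element.GetType()](element, visitor);
  }

  static void AcceptComposite(Composite c, IVisitor visitor) {
    if (visitor.StartVisit(c)) {
      foreach (var subpart in c.SubParts)
        Accept(subpart, visitor);
      visitor.EndVisit(c);
    }
  }
}

The trade-off is a cast per entry in the table, but adding a new element type becomes a one-line change.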

Implementing a visitor is much the same as in the conventional pattern.
A default implementation of the visit methods is given that
does nothing. To implement a HierarchicalVisitor, the
default StartVisit must return true to allow iteration of the
subparts of a Composite to proceed.

class BaseVisitor : IVisitor {
  public virtual void Visit(Primitive1 p1) { }
  public virtual void Visit(Primitive2 p2) { }
  public virtual bool StartVisit(Composite c) {
    return true;
  }
  public virtual void EndVisit(Composite c) { }
}

Here’s a Visitor that simply records the name of who gets visited.

class Visitor : BaseVisitor {
  public override void Visit(Primitive1 p1) {
    Debug.WriteLine("p1");
  }
  public override void Visit(Primitive2 p2) {
    Debug.WriteLine("p2");
  }
  public override bool StartVisit(Composite c) {
    Debug.WriteLine("->c");
    return true;
  }
  public override void EndVisit(Composite c) {
    Debug.WriteLine("<-c");
  }
}

Given an object graph of type Composite, it is simple to use this little framework.

Dispatcher.Accept(objGraph, new Visitor());
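For example, the following builds a small graph and walks it (a quick sketch using the classes defined above):

var root = new Composite("root");
root.SubParts.Add(new Primitive1());
var inner = new Composite("inner");
inner.SubParts.Add(new Primitive2());
root.SubParts.Add(inner);

Dispatcher.Accept(root, new Visitor());
// output: ->c  p1  ->c  p2  <-c  <-c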

I like this way of working with visitors more than the conventional
implementation – it makes it possible to provide a good visitor implementation on
third-party frameworks (yes, I’m thinking of LINQ expression trees). It is no
more expensive to extend with new visitors, and it has the virtue that you can
navigate an object graph in any fashion you like.
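As a taster of that last point, here’s a minimal sketch of my own (not part of the framework above) of an external dispatcher over LINQ expression trees – types we have no way of modifying:

using System;
using System.Linq.Expressions;

interface IExpressionVisitor {
  bool StartVisit(BinaryExpression b);
  void EndVisit(BinaryExpression b);
  void Visit(ConstantExpression c);
  void Visit(ParameterExpression p);
}

class ExpressionDispatcher {
  // The dispatcher owns the navigation, so the BCL's expression
  // classes need no Accept methods of their own.
  public static void Accept(Expression e, IExpressionVisitor visitor) {
    if (e is ConstantExpression)
      visitor.Visit(e as ConstantExpression);
    else if (e is ParameterExpression)
      visitor.Visit(e as ParameterExpression);
    else if (e is BinaryExpression) {
      var b = e as BinaryExpression;
      if (visitor.StartVisit(b)) {
        Accept(b.Left, visitor);
        Accept(b.Right, visitor);
        visitor.EndVisit(b);
      }
    }
    // other node types elided
  }
}

Given Expression<Func<int, int>> f = x => x + 1;, a call to ExpressionDispatcher.Accept(f.Body, visitor) walks the addition without the BCL ever having heard of our visitor interface.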

Can AOP help fix bad architectures?

I recently posted a question on Stack Overflow on the feasibility of using IL rewriting frameworks to rectify bad design after the fact. The confines of the answer comment area were too small to give the subject proper treatment, so I thought a new blog post was in order. Here’s the original question:

I’ve recently been playing around with PostSharp, and it brought to mind a problem I faced a few years back: A client’s developer had produced a web application, but they had not given a lot of thought to how they managed state information – storing it (don’t ask me why) statically on the Application instance in IIS. Needless to say the system didn’t scale and was deeply flawed and unstable. But it was a big and very complex system and thus the cost of redeveloping it was prohibitive. My brief at the time was to try to refactor the code-base to impose proper decoupling between the components.

At the time I tried to use some kind of abstraction mechanism to allow me to intercept all calls to the static resource and redirect them to a component that would properly manage the state data. The problem was there were about 1000 complex references to be redirected (and I didn’t have a lot of time to do it in). Manual coding (even with R#) proved to be just too time consuming – we scrapped the code base and rewrote it properly. It took over a year to rewrite.

What I wonder now is – had I had access to an assembly rewriter and/or Aspect oriented programming system (such as a PostSharp) could I have easily automated the refactoring process of finding the direct references and converted them to interface references that could be redirected automatically and supplied by factories.

Has anyone out there used PostSharp or similar systems to rehabilitate pathological legacy systems? How successful were the projects? Did you find after the fact that the effort was worth it? Would you do it again?

I subsequently got into an interesting (though possibly irrelevant) discussion with Ira Baxter on program transformation systems, AOP, and the kind of intelligence a program needs in order to refactor a system, preserving the business logic whilst rectifying any design flaws. There was no space there to do the ideas justice, so I want to expand the discussion here.

The system I was referring to had a few major flaws:

  1. The user session information (of which there was a lot) was stored statically on a specific instance of IIS. This necessitated the use of sticky sessions to ensure the relevant info was around when user requests came along.
  2. Session information was lost every time IIS recycled the app pool, thus causing the users to lose call control (the app was phone-related).
  3. State information was glommed into a tight bolus of data that could not be teased apart, so refactoring the app was an all-or-nothing thing.

As you can guess, tight coupling/lack of abstraction and direct resource dispensation were key flaws that prevented the system from being able to implement fail-over, disaster recovery, scaling, extension and maintenance.

This product is in a very competitive market and needs to be able to innovate to stay ahead, so the client could ill afford to waste time rewriting code while others might be catching up. My question was directed in hindsight to the problem of whether one could retroactively fix the problems, without having to track down, analyse and rectify each tightly coupled reference between client code and state information and within the state information itself.

What I needed at the time was some kind of declarative mechanism whereby I could confer the following properties on a component:

  1. location independence
  2. intercepted object creation
  3. transactional persistence

Imagine that we could do it with a mechanism like PostSharp’s multicast attributes:

[assembly: DecoupledAndSerialized(
    AspectPriority = -1,
    AttributeTargetAssemblies = "MyMainAssembly",
    AttributeTargetTypes = "MainAssembly.MyStateData",
    AttributeTargetMembers = "*")]

What would this thing have to do to be able to untie the knots that people get themselves into?

  1. It would have to be able to intercept every reference to MainAssembly.MyStateData and replace the interaction with one that got the instance from some class factory that could go off to a database or some distant server instead – that is, introduce an abstraction layer and some IoC framework (see the sketch after this list).
  2. It must ensure that the component managing object persistence checked into and out of the database appropriately (atomically) and that all contention over that data was properly managed.
  3. It must ensure that all session-specific data was properly retrieved and disposed of for each request.
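To make the first item concrete, the rewriter would in effect be performing a transformation like this (a hypothetical before/after – UserSession, ISessionStore and SessionStoreFactory are invented names):

using System;
using System.Web;

public class UserSession { /* the state data */ }

public interface ISessionStore
{
    UserSession Get(string userId);
    void Put(string userId, UserSession session);
}

public static class SessionStoreFactory
{
    // Resolves the store from configuration or an IoC container; a
    // database- or state-server-backed implementation in production.
    public static ISessionStore Create() { throw new NotImplementedException(); }
}

public class CallController
{
    // Before: welded to static, per-server state.
    public UserSession GetSessionDirectly(string userId)
    {
        return (UserSession)HttpContext.Current.Application["session:" + userId];
    }

    // After: the same lookup, routed through the abstraction.
    public UserSession GetSessionIndirectly(string userId)
    {
        return SessionStoreFactory.Create().Get(userId);
    }
}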

This is not a pipe dream by any means – there are frameworks out there that are designed to place ‘shims’ between layers of a system, allowing the shim to be expanded out into a full-blown proxy that can communicate through some inter-machine protocol, or just devolve to plain old in-proc communication while in a dev environment. The question is, can you create an IL rewriter tool that is smart enough to work out how to insert the shims based on its own knowledge of good design principles? Could I load it up with a set of commandments graven in stone, like “never store user session data in HttpContext.Application”? If it found a violation of such a cardinal sin, could it insert a proxy that would redirect the flow away from the violated resource, exploiting some kind of resource allocation mechanism that wasn’t so unscalable?

From my own experience, these systems require you to be able to define service boundaries so that the system knows which parts to make indirect and which parts to leave as-is. Juval Lowy made a strong case for the idea that every object should be treated as a service, and that all we lack is a language that will seamlessly allow us to work with services as though they were objects. Imagine if we could do that, providing an ‘abstractor tool’ as part of the build cycle. Compelling as his case was, my experience of WCF (which was the context of his assertions) is that it would be more onerous to configure the blasted thing than it would be to refactor all those references. But if I could just instruct my IL rewriter to do the heavy lifting, then I might be more likely to consider the idea.

Perhaps PostSharp has the wherewithal to address this problem without us having to resort to extremes or refactor a single line? PostSharp has two incredible features that make it a plausible candidate for such a job. The first is attribute multicasting, which would allow me to designate a group of APIs declaratively as the service boundary of a component. The original code is never touched, but by using an OnMethodInvocationAspect you could intercept every call to the API, turning them into remote service invocations. The second part of the puzzle is compile-time instantiation of an aspect – in this system your aspects are instantiated at compile time, given an opportunity to perform some analysis, and then serialize the results of that analysis for runtime use when the aspect is invoked as part of a call chain. The combination of the two allows you to perform an arbitrary amount of compile-time analysis prior to generating a runtime proxy object that could intercept those method calls necessary to enforce the rules designated by the multicast attributes. The aspect could perform reflective analysis and comparison against a rules base (just like FxCop), but with an added collection of refactorings it could take to alleviate the problem. The user might still have to provide hints about where to get and store data, or how the app was to be deployed, but given high-level information, perhaps the aspect could configure itself?
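Here’s a sketch of what the boundary-interception half might look like, reusing only the Laos aspect API I show later in this blog (ServiceBoundaryAttribute, RunningInProc and RemoteInvoke are invented names; the transport is waved away):

using System;
using System.Reflection;
using PostSharp.Laos;

[Serializable]
public class ServiceBoundaryAttribute : OnMethodInvocationAspect
{
    // Hypothetical switch: stay in-proc in a dev environment,
    // cross the wire in production.
    public static bool RunningInProc = true;

    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        if (RunningInProc)
        {
            // Devolve to a plain old in-proc call.
            eventArgs.Proceed();
        }
        else
        {
            // Placeholder for whatever the shim expands into
            // (WCF, remoting, a message bus...).
            eventArgs.ReturnValue = RemoteInvoke(
                eventArgs.Method, eventArgs.GetArgumentArray());
        }
    }

    private static object RemoteInvoke(MethodInfo method, object[] args)
    {
        throw new NotImplementedException("transport goes here");
    }
}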

Now that would be a useful app – a definite must-have addition to any consultant’s toolkit.

Eeebuntu

Perhaps the day has finally arrived when GNU/Linux seems like a viable option. Every six months or so I try out the latest GNU/Linux distros to see how they’re progressing. I look at them from the usual jaundiced perspective of the professional programmer. Not from the naive perspective of teenage rebellion. Normally I end up wandering off in disgust at the unfinished feel of the whole ensemble.

All I want is an environment that doesn’t fight back when I use it. I only care if it is possible to run all the usual tools with a minimum of fuss and bother. And, believe it or not, that’s fairly true of Eeebuntu. (With the sole exception of the fact that monodevelop is still a very long way behind VS 2010 or even sharpdevelop, but you can still do real C# development, and that’s so very cool).

I guess I’m lucky because I chose a distro specifically designed for my brand of machine, so everything is tuned and provided and properly configured. But it still amazes me that I am up and running in minutes (including dev tools) where previously on GNU/Linux that would have taken weeks of frustration.

I love the package management systems – they are (so far) flawless, as was the OS installer. Google Chrome, MonoDevelop, Mono, XSP2 and even ASP.NET MVC 1.0 are all available directly from the synaptic package manager and all install and run completely without hitches.

MonoDevelop even has settings allowing it to default to using VS 2010 project file formats! I’m impressed. Really. No piss taking at all. Well done, all involved.

PostSharp Laos – Beautiful AOP.

I’ve recently been using PostSharp 1.5 (Laos) to implement various features such as logging, tracing, API performance counter recording, and repeatability on the softphone app I’ve been developing. Previously, we’d either been using hand-rolled code generation systems to augment the APIs with IDisposable-style wrappers, or hand-coding the wrappers within the implementation code. The problem was that by the time we’d added all of the above, there were hundreds of lines of code to maintain around the few lines of code that actually provided a business benefit.

Several years ago, while at Avanade, I worked on a very large scale project that used the Avanade Connected Architecture (ACA.NET) – a proprietary competitor to PostSharp. We found Aspect Oriented Programming (AOP) to be a great way to focus on the job at hand and reliably dot all the ‘i’s and cross all the ‘t’s in a single step at a later date.

ACA.NET, at that time, used a precursor of the P&P Configuration Application Block and performed a form of post-build step to create external wrappers that instantiated the aspect call chain prior to invoking your service method. That was a very neat approach that allowed configurable specification of the applicable aspects. It let us develop the code in a very naive in-proc way, and then augment it with top-level exception handlers, transactionality and so on at the same time as we changed the physical deployment architecture. Since then, I’ve missed having such a tool, so it was a pleasure to finally acquaint myself with PostSharp.

I’d always been intending to introduce PostSharp here, but I’d just never had time to do it. Well, I finally found the time in recent weeks and was able to do that most gratifying thing – remove and simplify code, improve performance and code quality, reduce maintenance costs and increase the ease with which I introduce new code policies, all in a single step. And all without even scratching the surface of what PostSharp is capable of.

Here’s a little example of the power of AOP using PostSharp, inspired by Elevate’s memoize extension method. We try to classify as many of our APIs as possible as Pure or Impure. Those that are impure get database locks, retry handlers etc. Those that are pure in a functional sense can be cached, or memoized. Those that are not pure in the functional sense are the ones that, while not saving any data, are still not one-to-one between arguments and result; sadly that’s most of mine (it’s a distributed, event-driven app).

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Text;
using PostSharp.Laos;

[Serializable]
public class PureAttribute : OnMethodInvocationAspect
{
    // Cache of previous return values, keyed on a hash of the method
    // and its argument values.
    Dictionary<int, object> PreviousResults = new Dictionary<int, object>();

    public override void OnInvocation(MethodInvocationEventArgs eventArgs)
    {
        int hashcode = GetArgumentArrayHashcode(eventArgs.Method, eventArgs.GetArgumentArray());
        if (PreviousResults.ContainsKey(hashcode))
            eventArgs.ReturnValue = PreviousResults[hashcode]; // cache hit: skip the call
        else
        {
            eventArgs.Proceed(); // cache miss: invoke the real method
            PreviousResults[hashcode] = eventArgs.ReturnValue;
        }
    }

    public int GetArgumentArrayHashcode(MethodInfo method, params object[] args)
    {
        // Combine the method's hashcode with the string forms of the
        // arguments; crude, but adequate for a demonstration.
        StringBuilder result = new StringBuilder(method.GetHashCode().ToString());

        foreach (object item in args)
            result.Append(item);
        return result.ToString().GetHashCode();
    }
}
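Applying the aspect to a single method is just a matter of decorating it (assuming, of course, that the method really is deterministic):

public class RateCalculator
{
    [Pure]
    public decimal CalculatePremium(int age, string productCode)
    {
        // Expensive but deterministic: same arguments, same answer,
        // so the aspect can safely serve repeat calls from its cache.
        return age * 1.05m + productCode.Length;
    }
}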

I love what I achieved here, not least because it took me no more than about 20 lines of code to do it. But that’s not the real killer feature for me. It’s the fact that PostSharp Laos has multicast attributes, which allow me to apply the advice to numerous methods in a single instruction – even to numerous classes, or to every method of every class in an assembly. I can specify what to attach the aspects to by using regular expressions or wildcards. Here’s an example that applies an aspect to all public methods in class MyServiceClass.

[assembly: Pure(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyAssembly",
    AttributeTargetTypes = "UiFacade.MyServiceClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeTargetMembers = "*")]

Here’s an example that uses a wildcard to NOT apply the aspect to those methods that end in “Impl”.

[assembly: Pure(
    AspectPriority = 2,
    AttributeTargetAssemblies = "MyAssembly",
    AttributeTargetTypes = "UiFacade.MyServiceClass",
    AttributeTargetMemberAttributes = MulticastAttributes.Public,
    AttributeExclude = true,
    AttributeTargetMembers = "*Impl")]

Do you use AOP? What aspects do you use, other than the usual suspects above?

Does this seem nice to you?

After years of recoiling at the sight of code like this, am I supposed now to embrace it in a spirit of reconciliation?

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            dynamic blah = GetTheBlah();
            Console.WriteLine(blah);
        }

        private static dynamic GetTheBlah()
        {
            if (DateTime.Now.Millisecond % 3 == 0)
                return 0;
            else
                return "hello world!";
        }
    }
}

need to wash my hands.

New Resources for LinqToRdf

John Mueller recently sent through a link to a series of articles on working with RDF. As well as being a useful introduction to working with RDF, they use LinqToRdf for code examples.

They provide information on hosting RDF files as well as querying them using LinqToRdf. They show how easy it is to get semantic web applications up and running on .NET in no time at all. Please read the articles and share the links around.

John also told me about his new book LINQ for Dummies, which has a section on LinqToRdf. I’ve not had a chance to read it yet. I would welcome any feedback, which I’ll pass through to John. I understand that the content is broadly similar to the articles on DevSource.com, placing more emphasis on LINQ than RDF.  Again, please take a look and let me know what you think.

Not another mapping markup language!

Kingsley Idehen has again graciously given LinqToRdf some much needed link-love. He mentioned it in a post that was primarily concerned with the issues of mapping between the ontology, relational and object domains. His assertion is that LinqToRdf, being an offshoot of an ORM-related initiative, is reversing the natural order of mappings. He believes that in the world of ORM systems, the emphasis should be on mapping from the relational to the object domain.

I think that he has a point, but not for the reason he’s putting forward. I think that the natural direction of mapping stems from the relative richness of the domains being mapped. The impedance mismatch between the relational and object domains stems from (1) the implicitness of meaning in the relationships of relational systems, (2) the representation of relationships, and (3) type mismatches.

If the object domain has greater expressiveness and explicit meaning in relationships, it has a ‘larger’ language than that expressible using relational databases. Relationships are still representable in the relational domain, but their meaning is implicit. For that reason you have to confine your mappings to those that can be represented in the target (relational) domain. In that sense you get a priority inversion that forces the lowest-common-denominator language to control what gets mapped.

The same form of inversion occurs between the ontological and object domains, only this time it is the object domain that is the lowest common denominator. OWL is able to represent such things as restriction classes, multiple inheritance and sub-properties, which are hard or impossible to represent in languages like C# or Java. When I heard of the RDF2RDB working group at the W3C, I suggested (to thunderous silence) that they direct their attention to coming up with a general-purpose mapping ontology that could be used for performing any kind of mapping.

I felt that it would have been extremely valuable to have a standard language for defining mappings. Just off the top of my head I can think of the following places where it would be useful:

  1. Object/Relational Mapping Systems (O/R or ORM)
  2. Ontology/Object Mappings (such as in LinqToRdf)
  3. Mashups (merging disparate data sources)
  4. Ontology Reconciliation – finding intersects between two sets of concepts
  5. Data cleansing
  6. General purpose data access layer automation
  7. Data export systems
  8. Synchronization Systems (i.e. keeping systems like CRM and AD in sync)
  9. Mapping objects/tables onto UIs
  10. Etc.

You can see that most of these are perennial real-world problems that programmers are ALWAYS having to contend with. Having a standard language (and API?) would really help with all of these cases.

I think such an ontology would be a nice addition to OWL or RDF Schema, allowing a much richer definition of equivalence between classes (or groups or parts of classes). Right now one can define a one-to-one relationship using the owl:equivalentClass property. It’s easy to imagine that two ontology designers might approach a domain from such orthogonal directions that they find it hard to define any conceptual overlap between entities in their ontologies. A much more complex language is required to allow the reconciliation of widely divergent models.

I understand that by focusing their attentions on a single domain they increase their chances of success, but what the world needs from an organization like the W3C is the kind of abstract thinking that gave rise to RDF, not another mapping markup language!


Here’s a nice picture of how LinqToRdf interacts with Virtuoso (thanks to Kingsley’s blog).

How LINQ uses LinqToRdf to talk to SPARQL stores

Semantic Development Environments

The semantic web is a GOOD THING by definition – anything that enables us to create smarter software without also having to create Byzantine application software must be a step in the right direction. The problem is – many people have trouble translating the generic term “smarter” into a concrete idea of what they would have to do to achieve that palladian dream. I think a few concrete ideas might help to firm up people’s understanding of how the semantic web can help to deliver smarter products.

Software Development as knowledge based activity

In this post I thought it might be nice to share a few ideas I had about how OWL and SWRL could help to produce smarter software development environments. If you want to use the ideas to make money, feel free to do so, just consider them as released under the creative commons attribution license. Software development is the quintessential knowledge based activity. In the process of producing a modern application a typical developer will burn through knowledge at a colossal rate. Frequently, we will not reserve headspace for a lot of the knowledge we acquire to solve a task. Frequently, we bring together the ideas, facts, standards, API skills and problem requirements needed to solve a problem then just as quickly forget it all. The unique combination is never likely to arise again.

I’m sure we could make a few comments about how it’s more important to know where the information is than to know what it is – a fact driven home to me by my Computer Science lecturer John English, who seemed to be able to remember the contents page of every copy of the Proceedings of the ACM back to the ’60s. You might also be forgiven for thinking this wasn’t true, given the current obsession with certifications. We could also comment on how some information is more lasting than other information, but my point is that every project these days seems to combine a mixture of ephemera, timeless principles and those bits that lie somewhere between the two (called ‘Best Practice’ in current parlance ;).

Requires cognitive assistance
Software development, then, is a knowledge intensive activity that brings together a variety of structured and unstructured information to allow the developer to produce a system that they endeavor to show is equivalent to a set of requirements, guidelines, nuggets of wisdom and cultural mores that are defined or mandated at the beginning of the project. Doesn’t this sound to you like exactly the environment for which the semantic web technology stack was designed?

Incidentally, the following applications don’t have much to do with the web, so perhaps they demonstrate that the term ‘Web 3.0’ is limiting and misleading. It’s the synergy of the complementary standards in the semantic web stack that makes it possible to deliver smarter products and to boost your viability in an increasingly competitive market place.

Documentation

OK, so the extended disclaimer/apology is now out of the way and I can start to talk about how the semantic web could help to improve the lives of developers. The first place I’ll look is documentation. There are many types of documentation used in software development. In fact, there is a different form of documentation defined for each stage of the software lifecycle, from the conception of an idea through to its realization in code (and beyond). Each of these forms is more or less formally structured, with different kinds of information related to the documents and other deliverables that came before and after. This kind of documentation is frequently ambiguous and verbose; often it gets written for the sake of compliance, then filed away, never to see the light of day again. Documentation for software projects needs to be precise, terse, rich and, most of all, useful.

Suggestion 1.

Use ontologies (perhaps standardised by the OMG) for the production of requirements. Automated tools could be used to convert these ontologies into human-readable reports or tools could be used to answer questions about specific requirements. A reasoner might be able to deduce conflicts or contradictions from a set of requirements. It might also be able to offer suggestions about implementations that have been shown to fulfill similar requirements in other projects. Clearly, the sky’s the limit in how useful an ontology, reasoner and rules language could be. It should also help documentation to be much more precise and less verbose. There is also scope for documentation reuse, specialization and for there to be diagramming and code generation driven off of documentation.

Documentation is used heavily inside the source code that developers write, too. It serves to explain the purpose of a software component and how to use it, to provide change notes, to generate API documentation web-sites, and even to store to-do list items or apologies for later reference. In .NET and Java, and now many other programming languages, it is common to use formal languages (like XML markup) to carry commonly used information. An ontology might be helpful in providing a rich and extensible language for representing code documentation. The use of URIs to represent unique entities means that the documentation can be the subject of other documents and can reach out to the wider ecology of data about the system.

Suggestion 2.

Provide an extensible ontology to allow the linkage of code documentation with the rest of the documentation produced for a software system. Since all parts of the software documentation process (being documented in RDF) will have unique URIs, it should be easy to link the documentation for a component to the requirements, specifications, plans, elaborations, discussions, blog posts and other miscellanea generated. Providing semantic web URIs to individual code elements helps to integrate the code itself into other semantic systems like change management and issue tracking systems. Use of URIs and ontologies within source code helps to provide a firm, rich linkage between source code and the documentation that gave rise to it.
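As a purely illustrative fragment of what that linkage might look like in C# (RequirementAttribute and the URI are invented for the example):

using System;

// A hypothetical attribute tying a code element to the URI of the
// requirement it fulfills; tooling could dereference the URI into the
// project's RDF documentation store.
[AttributeUsage(AttributeTargets.All, AllowMultiple = true)]
public class RequirementAttribute : Attribute
{
    public RequirementAttribute(string uri) { Uri = uri; }
    public string Uri { get; private set; }
}

[Requirement("http://example.com/myproject/requirements#REQ-042")]
public class CallRouter
{
    // ...
}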

Suggestion 3.

Boosting code with richer, extensible markup to represent its meaning and wider documentation environment means that traditional intellisense can be augmented with browsers that provide access to all the other pertinent documentation related to a piece of code. Imagine hovering over an object reference and getting links not only to a web site generated from the code commentary but to all the requirements that the code fulfills, to automated proofs demonstrating that the code matches the requirements, to blog posts written by the dev team, and to MP3s taken during the brainstorming and design sessions in which this component was conceived.

It doesn’t take much imagination to see that some simple enhancements like these can provide a ramp for the continued integration of the IDE, allowing smoother cooperation between teams and their stakeholders. Making documentation more useful to all involved would probably increase the chances that people would give up Agile in favour of something less like the emperor’s clothes.

Suggestion 4.

Here are some other suggestions about how documentation in the IDE could be enriched:
  • Guidelines on where devs should focus their attention when learning a new API
  • SPARQL endpoints exposed by code publishers, providing a means to publish documentation online
  • Automatic publishing of DOAP documents to an enterprise or online registry, enabling software registries

Dynamic Systems

Augmenting the source code of a system with URIs that can be referenced from anywhere opens the semantic artifacts inside an application to analysis and reference from outside. Companies like Microsoft have already described their visions for the production of documentation systems that allow architects to describe how a system hangs together. This information can be used by other systems to deploy, monitor, control and scale systems in production environments.

I think that their vision barely glimpses what could be achieved through the use of automated inference systems, rich structured machine-readable design documentation, and systems that are, for the first time, white boxes. I think that DSI-style declarative architecture documents are a good example of what might be achieved through the use of smart documentation. There is more, though.

Suggestion 5.

Reflection and other analysis tools can gather information about the structure, inter-relationships and external dependencies of a software system. Such data can be fed to an inference engine to allow it to reason about the runtime behavior of a production system. Rules of inference can help it to determine the consequences of violating a rule derived from the architect’s or developers’ documentation. Perhaps it could detect when the system is misconfigured, or configured in a way that will force it to struggle under load. Perhaps it can find explanations for errors and failures. Rich documentation systems should allow developers to indicate deployment guidelines (i.e. this component is thread safe, or is location independent and scalable). Such documentation can be used to predict failure modes, to direct testing regimes and to predict optimal deployment patterns for specific load profiles.
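The gathering side of that suggestion is mundane enough to sketch (DependencyHarvester is an invented name, and the string triples are simplistic placeholders for real RDF):

using System;
using System.Reflection;

public class DependencyHarvester
{
    // Emit crude subject/predicate/object facts about an assembly's
    // structure, ready to be asserted into an inference engine.
    public static void Harvest(Assembly assembly, Action<string, string, string> assert)
    {
        foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
            assert(assembly.FullName, "dependsOn", reference.FullName);

        foreach (Type type in assembly.GetTypes())
            if (type.BaseType != null)
                assert(type.FullName, "subClassOf", type.BaseType.FullName);
    }
}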

Conclusions

I wrote this post because I know I’ll never have time to pursue these ideas, but I would dearly love to see them come to pass. Why don’t you get a copy of LinqToRdf, crack open a copy of Coco/R, and see whether you can implement some of these suggestions? And if you find a way to get rich doing it, then please remember me in your will.

Wanted: Volunteers for .NET semantic web framework project

LinqToRdf* is a full-featured LINQ** query provider for .NET written in C#. It provides developers with an intuitive way to make queries on semantic web databases. The project has been going for over a year and it’s starting to be noticed by semantic web early adopters and semantic web product vendors***. LINQ provides a standardised query language and, via LinqToRdf, a platform enabling any developer to understand systems using semantic web technologies. It will help those who don’t have the time to ascend the semantic web learning curve to become productive quickly.

The project’s progress and momentum need to be sustained to help it become the standard API for semantic web development on the .NET platform. For that reason I’m appealing for volunteers to help with the development, testing, documentation and promotion of the project.

Please don’t be concerned that all the best parts of the project are done. Far from it! It’s more like the foundations are in place, and now the system can be used as a platform to add new features. There are many cool things that you could take on. Here are just a few:

Reverse engineering tool
This tool will use SPARQL to interrogate a remote store to get metadata to build an entity model.
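The interrogation might start with queries as simple as these (the SPARQL is standard; the surrounding class is just a sketch):

public static class StoreInterrogator
{
    // Which classes does the store contain instances of?
    public const string ClassQuery =
        "SELECT DISTINCT ?class WHERE { ?instance a ?class . }";

    // Which properties appear on instances of each class?
    public const string PropertyQuery =
        "SELECT DISTINCT ?class ?property WHERE { " +
        "  ?instance a ?class . ?instance ?property ?value . }";
}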

Tutorials and Documentation
The documentation desperately needs the work of a skilled technical writer. I’ve worked hard to make LinqToRdf an easy tool to work with, but the semantic web is not a simple field. If it were, there’d be no need for LinqToRdf after all. This task will require an understanding of the LINQ, ASP.NET, C#, SPARQL, RDF, Turtle, and SemWeb.NET systems. It won’t be a walk in the park.


Supporting SQL Server
The SemWeb.NET API has recently added support for SQL Server, which has not yet been exploited inside LinqToRdf (although it may be easy to do). This task would also involve thinking about robust, scalable architectures for semantic web applications in the .NET space.


Porting LinqToRdf to Mono
LINQ and C# 3.0 support in Mono is now mature enough to make this a desirable prospect. Nobody’s had the courage yet to tackle it. Clearly, this would massively extend the reach of LinqToRdf, and it would be helped by the fact that some of the underlying components are developed for Mono by default.


SPARQL Update (SPARUL) Support
LinqToRdf provides round-tripping only for locally stored RDF. Support of SPARQL Update would allow data round-tripping on remote stores. This is not a fully ratified standard, but it’s only a matter of time.


Demonstrators using large scale web endpoints
There are now quite a few large scale systems on the web with SPARQL endpoints. It would be a good demonstration of LinqToRdf to be able to mine them for useful data.


These are just some of the things that need to be done on the project. I’ve been hoping to tackle them all for some time, but there’s just too much for one man to do alone. If you have some time free and you want to learn more about LINQ or the Semantic Web, there is not a better project on the web for you to join.  If you’re interested, reply to this letting me know how you could contribute, or what you want to tackle. Alternatively join the LinqToRdf discussion group and reply to this message there.


Thanks,


Andrew Matthews


* http://code.google.com/p/linqtordf

** http://msdn.microsoft.com/en-us/netframework/aa904594.aspx

*** http://virtuoso.openlinksw.com/Whitepapers/html/linqtordf/linqtordf1.htm

Announcing LinqToRdf v0.8

I’m very pleased to announce the release of version 0.8 of LinqToRdf. This release is significant for a couple of reasons: firstly, because it provides a preview release of RdfMetal, and secondly because it is the first release containing changes contributed by someone other than yours truly – the changes in this instance being provided by Carl Blakeley of OpenLink Software.

LinqToRdf v0.8 has received a few major chunks of work:

  • New installers for both the designer and the whole framework
    WIX was proving to be a pain, so I downgraded to the integrated installer generator in Visual Studio.
  • A preview release of RdfMetal. I brought this release forward a little, on Carl Blakeley’s request, to coincide with a post he’s preparing on using OpenLink Virtuoso with LinqToRdf, so RdfMetal is not as fully baked as I’d planned. But it’s still worth a look. Expect a minor release in the next few weeks with additional fixes/enhancements.

I’d like to extend a very big thank-you to Carl for the work he’s done in recent weeks to help extend and improve the mechanisms LinqToRdf uses to represent and traverse relationships. His contributions include improvements in representing default graphs and in referencing multiple ontologies within a single .NET class, as well as fixes around the quoting of URIs and the way LinqToRdf generates SPARQL for default graphs. Carl also provided an interesting example application, using OpenLink Virtuoso’s hosted version of MusicBrainz, that is significantly richer than the test ontology I created for the unit tests and manuals.

I hope that Carl’s contributions represent an acknowledgement by OpenLink that not only does LinqToRdf support Virtuoso, but that there is precious little else in the .NET space that stands a chance of attracting developers to the semantic web. .NET is a huge untapped market for semantic web product vendors. LinqToRdf is, right now, the best way to get into semantic web development on .NET.

Look out for blog posts from Carl in the next day or two, about using LinqToRdf with OpenLink Virtuoso.