Using RDF and C# to create an MP3 Manager – Part 2

I’ve been off the air for a week or two – I’ve been hard at work on the final stages of a project at work that will go live next week. I’ve been on this project for almost 6 months now, and next week I’ll get a well earned rest. What that means is I get to do some dedicated Professional Development (PD) time which I have opted to devote to Semantic Web technologies. That was a hard sell to the folks at Readify, what with Silverlight and .NET 3 there to be worked on. I think I persuaded them that consultancies without SW skills will be at a disadvantage in years to come.

Anyway, enough of that – onto the subject of the post, which is the next stage of my mini-series about using semantic web technologies in the production of a little MP3 file manager.

At the end of the last post we had a simple mechanism for serialising objects into a triple store, with a set of services for extracting relevant information out of an object and tying it to predicates defined in an ontology. In this post I will show you the other end of the process: we need to be able to query against the triple store and get a collection of objects back.

The query I’ll show you is very simple, since the main task for this post is object deserialisation; once we can shuttle objects in and out of the triple store, we can focus on beefing up the query process.

Querying the triple store

For this example I just got a list of artists for the user and allowed them to select one. That artist was then fed into a graph match query in SemWeb, to bring back all of the tracks whose artist matches the one chosen.

The query works in the usual way – get a connection to the data store, create a query, present it and reap the result for conversion to objects:

private IList<Track> DoSearch()
{
  MemoryStore ms = Store.TripleStore;
  ObjectDeserialiserQuerySink<Track> sink = new ObjectDeserialiserQuerySink<Track>();
  string qry = CreateQueryForArtist(artists[0].Trim());
  Query query = new GraphMatch(new N3Reader(new StringReader(qry)));
  query.Run(ms, sink);
  return tracksFound = sink.DeserialisedObjects;
}

We’ll get to the ObjectDeserialiserQuerySink in a short while. The process of creating the query is really easy, given the simple reflection facilities I created last time. I’m using the N3 format for the queries for the sake of simplicity – we could just as easily have used SPARQL. We start with a prefix string to give us a namespace to work with, then we enumerate the persistent properties of the Track type. For each property we insert a triple meaning “whatever track is selected – get this property of it as well”. Lastly, we add the artist name as a known fact, allowing us to specify exactly which tracks we are talking about.

private static string CreateQueryForArtist(string artistName)
{
  string queryFmt = "@prefix m: <http://aabs.purl.org/ontologies/2007/04/music#> .\n";
  foreach (PropertyInfo info in OwlClassSupertype.GetAllPersistentProperties(typeof(Track)))
  {
    queryFmt += string.Format("?track <{0}> ?{1} .\n", OwlClassSupertype.GetPropertyUri(typeof(Track), info.Name), info.Name);
  }
  queryFmt += string.Format("?track <{0}> \"{1}\" .\n", OwlClassSupertype.GetPropertyUri(typeof(Track), "ArtistName"), artistName);
  return queryFmt;
}

Having created a string representation of the query we’re after, we pass it to a GraphMatch object, which is a kind of query where you specify a graph that acts as a prototype for the structure of the desired results. I also created a simple class called ObjectDeserialiserQuerySink:

public class ObjectDeserialiserQuerySink<T> : QueryResultSink where T : OwlClassSupertype, new()
{
  public List<T> DeserialisedObjects
  {
    get { return deserialisedObjects; }
  }
  private List<T> deserialisedObjects = new List<T>();

  public ObjectDeserialiserQuerySink()
  {
  }

  // Called once for each solution the query engine finds.
  public override bool Add(VariableBindings result)
  {
    T t = new T();
    foreach (PropertyInfo pi in OwlClassSupertype.GetAllPersistentProperties(typeof(T)))
    {
      try
      {
        // The query variables are named after the persistent properties,
        // so each binding maps straight onto a property of the new instance.
        string vn = OwlClassSupertype.GetPropertyUri(typeof(T), pi.Name).Split('#')[1];
        string vVal = result[pi.Name].ToString();
        pi.SetValue(t, Convert.ChangeType(vVal, pi.PropertyType), null);
      }
      catch (Exception e)
      {
        Debug.WriteLine(e);
        return false;
      }
    }
    DeserialisedObjects.Add(t);
    return true;
  }
}

For each match that the reasoner is able to find, a call gets made to the Add method of the deserialiser with a set of VariableBindings. Each of the variable bindings corresponds to a solution for the free variables defined in the query. Since we generated the query from the persistent properties of the Track type, the free variables matched will also correspond to the persistent properties of the type. That means it is a straightforward job to deserialise a set of VariableBindings into an object.

That’s it. We now have a simple triple store that we can serialise objects into and out of, with an easy persistence mechanism. But there’s a lot more that we need to do. Of the full CRUD behaviour I have implemented Create and Retrieve. That leaves Update and Delete. As we saw in one of my previous posts, that will be a mainly manual programmatic task, since semantic web ontologies are to a certain extent static. What that means is that they model a domain as a never changing body of knowledge about which we may deduce more facts, but where we can’t unmake (delete) knowledge.
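To give a flavour of what Delete might eventually look like, here is a minimal sketch (it is not part of the posts’ working code) that mirrors the Add extension method from Part 1: it selects every statement whose subject is the instance’s URI and removes each one from the store.

public static class MemoryStoreDeletionExtensions
{
  // Hypothetical sketch only - neither post implements Delete yet.
  public static void Delete(this MemoryStore ms, OwlInstanceSupertype oc)
  {
    Entity subject = new Entity(oc.InstanceUri);
    // Find every statement with this instance as its subject...
    foreach (Statement s in ms.Select(new Statement(subject, null, null)).ToArray())
    {
      // ...and remove it from the store.
      ms.Remove(s);
    }
  }
}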

The static nature of ontologies seems like a bit of a handicap to one who deals more often than not with transactional data, since it means we need more than one mechanism for dealing with data: deductive reasoning and transactional processing. With the examples I have given up till now I have been dealing with in-memory triple stores, where the SemWeb API is the only easy means of updating and deleting data. When we are dealing with a relational database as our triple store, we will have the option to exploit SQL as another tool for managing data.


Using RDF and C# to create an MP3 Manager – Part 1

This article follows on from the previous post about semantic web applications in C#. I’ll be using the SemWeb framework again, but this time I chose to demonstrate the capabilities of RDF by producing a simple MP3 file manager. I haven’t completed it yet, and I’ll be working on that over the next few days to show you just how easy RDF/OWL is to work with in C# these days.

The program is pretty simple – I was inspired to write it by the RDF-izers web site, where you can find conversion tools for producing RDF from a variety of different data sources. While I was playing with LINQ I produced a simple file tagging system – I scanned the file system, extracting as much metadata as I could from the files that I found and adding it to a tag database that I kept in SQL Server. Well, this isn’t much different. I just extract ID3 metadata tags from the MP3 files I find and store them in Track objects. I then wrote a simple conversion system to extract RDF URIs from the objects I’d created for insertion into an in-memory triple store. All-up it took about 3-4 hours, including finding a suitable API for ID3 reading. I won’t show (unless demanded to) the code for the test harness or for iterating the file system. Instead I’ll show you the code I wrote for persisting objects to an RDF store.

First up, we have the Track class. I’ve removed the vanilla implementation of the properties for the sake of brevity.

[OntologyBaseUri("file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/")]
[OwlClass("Track", true)]
public class Track : OwlInstanceSupertype
{
  [OwlProperty("title", true)]
  public string Title /* ... */
  [OwlProperty("artistName", true)]
  public string ArtistName /* ... */
  [OwlProperty("albumName", true)]
  public string AlbumName /* ... */
  [OwlProperty("year", true)]
  public string Year /* ... */
  [OwlProperty("genreName", true)]
  public string GenreName /* ... */
  [OwlProperty("comment", true)]
  public string Comment /* ... */
  [OwlProperty("fileLocation", true)]
  public string FileLocation /* ... */

  private string title;
  private string artistName;
  private string albumName;
  private string year;
  private string genreName;
  private string comment;
  private string fileLocation;

  public Track(TagHandler th, string fileLocation)
  {
    this.fileLocation = fileLocation;
    title = th.Track;
    artistName = th.Artist;
    albumName = th.Album;
    year = th.Year;
    genreName = th.Genere;
    comment = th.Comment;
  }
}

Nothing of note here except for the presence of a few all-important attributes, which are used to give the persistence engine clues about how to generate URIs for the class, its properties and their values. Obviously this is a rudimentary implementation, so we don’t have lots of extra information about XSD types, versions and so on. But for the sake of this illustration I’m sure you get the idea: we can do for RDF pretty much what LINQ to SQL does for relational databases.

The attribute classes are also very simple:

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct | AttributeTargets.Property)]
public class OwlResourceSupertypeAttribute : Attribute
{
  public string Uri
  {
    get { return uri; }
  }
  private readonly string uri;

  public bool IsRelativeUri
  {
    get { return isRelativeUri; }
  }
  private readonly bool isRelativeUri;

  public OwlResourceSupertypeAttribute(string uri)
    : this(uri, false) { }

  public OwlResourceSupertypeAttribute(string uri, bool isRelativeUri)
  {
    this.uri = uri;
    this.isRelativeUri = isRelativeUri;
  }
}

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct | AttributeTargets.Property)]
public class OwlClassAttribute : OwlResourceSupertypeAttribute
{
  public OwlClassAttribute(string uri)
    : base(uri, false) { }

  public OwlClassAttribute(string uri, bool isRelativeUri)
    : base(uri, isRelativeUri) { }
}

[AttributeUsage(AttributeTargets.Property)]
public class OwlPropertyAttribute : OwlResourceSupertypeAttribute
{
  public OwlPropertyAttribute(string uri)
    : base(uri, false) { }

  public OwlPropertyAttribute(string uri, bool isRelativeUri)
    : base(uri, isRelativeUri) { }
}

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct)]
public class OntologyBaseUriAttribute : Attribute
{
  public string BaseUri
  {
    get { return baseUri; }
  }
  private string baseUri;

  public OntologyBaseUriAttribute(string baseUri)
  {
    this.baseUri = baseUri;
  }
}

OwlResourceSupertypeAttribute is the supertype of any attribute that can be related to a resource in an ontology – that is, anything that has a URI. As such it has a Uri property, and in addition an IsRelativeUri property which indicates whether the URI is relative to a base URI defined elsewhere. Although I haven’t implemented my solution that way yet, this is intended to allow the resources to reference a base namespace definition in the triple store or in an RDF file. The OwlClassAttribute extends OwlResourceSupertypeAttribute, restricting its usage to classes or structs. You use this (or the parent type if you want) to indicate the OWL class URI that the type will be persisted to. So for the Track class we have an OWL class of “Track”. In an ontology that Track will be relative to some URI, which I have defined using the OntologyBaseUriAttribute. That attribute defines the URI of the ontology that the class and property URIs are relative to in this example (i.e. “file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/“).

For each of the properties of the class Track I have defined another subclass of OwlResourceSupertypeAttribute called OwlPropertyAttribute that is restricted solely to Property members. Another simplification I have introduced is that I am not distinguishing between ObjectProperties and DatatypeProperties, which OWL does. That would not be hard to add, and I’m sure I’ll have to over the next few days.
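The reflection helpers that the rest of the code leans on – OwlClassSupertype.GetAllPersistentProperties and OwlClassSupertype.GetPropertyUri – aren’t listed in these posts. Given the attributes above (and using System.Reflection and System.Collections.Generic), they might look roughly like the following sketch; it is an illustration only, not the actual implementation:

public abstract class OwlClassSupertype
{
  // A property is persistent if it carries an [OwlProperty] attribute.
  public static IEnumerable<PropertyInfo> GetAllPersistentProperties(Type t)
  {
    foreach (PropertyInfo pi in t.GetProperties())
    {
      if (pi.GetCustomAttributes(typeof(OwlPropertyAttribute), true).Length > 0)
        yield return pi;
    }
  }

  // Resolve a property's predicate URI: relative URIs are prepended with the
  // class's [OntologyBaseUri]; absolute URIs are used as-is.
  public static string GetPropertyUri(Type t, string propertyName)
  {
    PropertyInfo pi = t.GetProperty(propertyName);
    OwlPropertyAttribute pa =
      (OwlPropertyAttribute)pi.GetCustomAttributes(typeof(OwlPropertyAttribute), true)[0];
    if (!pa.IsRelativeUri)
      return pa.Uri;
    OntologyBaseUriAttribute ba =
      (OntologyBaseUriAttribute)t.GetCustomAttributes(typeof(OntologyBaseUriAttribute), true)[0];
    return ba.BaseUri + pa.Uri;
  }
}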

So, I have now annotated my class to tell the persistence engine how to produce statements that I can add to the triple store. These annotations can be read by the engine and used to construct appropriate URIs for statements. We still need a way to construct instances in the ontology. I’ve done that in a very simple way – I just keep a counter in the scanner, and I create an instance URI out of the Class Uri by adding the counter to the end. So the first instance will be “file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/Track_1” and so on. This is simple, but would need to be improved upon for any serious applications.
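The post doesn’t list the two helpers used in the scanning loop below – GenTrackName, which mints the counter-based instance URIs just described, and GetAllTracks, which walks the file system. Purely as an illustration, they might look something like this (LoadTagHandler is a hypothetical stand-in for whatever ID3 library is being used):

private int trackCounter;

// Mint an instance URI by appending a running counter to the class URI,
// e.g. .../Mp3ToRdf/Track_1, .../Mp3ToRdf/Track_2 and so on.
private string GenTrackName(Track t)
{
  return string.Format(
    "file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/Track_{0}", ++trackCounter);
}

// Walk the directory tree, yielding a Track for every MP3 file found.
private IEnumerable<Track> GetAllTracks(string rootDirectory)
{
  foreach (string file in Directory.GetFiles(rootDirectory, "*.mp3", SearchOption.AllDirectories))
  {
    TagHandler th = LoadTagHandler(file); // hypothetical wrapper around the ID3 reader
    yield return new Track(th, file);
  }
}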

Next I need to reflect over an instance of class Track to get a set of statements that I can add to the triple store. For this I have exploited the extension method feature of C# 3.0 (May CTP), which allows me to write code like this:
foreach (Track t in GetAllTracks(txtFrom.Text))
{
  t.InstanceUri = GenTrackName(t);
  store.Add(t);
}

The triple store is called store; GetAllTracks is an iterator that filters the files under the directory indicated by whatever is in txtFrom.Text, and GenTrackName creates the instance URI for the track instance. I could have used a more sophisticated scheme using hashes from the track location or somesuch, but I was in a rush ;-). The code for the persistence engine is easy as well:

public static class MemoryStoreExtensions
{
  public static void Add(this MemoryStore ms, OwlInstanceSupertype oc)
  {
    Debug.WriteLine(oc.ToString());
    // Persist every property that carries the [OwlProperty] attribute.
    Type t = oc.GetType();
    PropertyInfo[] pia = t.GetProperties();
    foreach (PropertyInfo pi in pia)
    {
      if (IsPersistentProperty(pi))
      {
        AddPropertyToStore(oc, pi, ms);
      }
    }
  }

  private static bool IsPersistentProperty(PropertyInfo pi)
  {
    return pi.GetCustomAttributes(typeof(OwlPropertyAttribute), true).Length > 0;
  }

  private static void AddPropertyToStore(OwlInstanceSupertype track, PropertyInfo pi, MemoryStore ms)
  {
    // Subject: the instance URI; predicate: the property URI; object: the property value as a literal.
    Add(track.InstanceUri, track.GetPropertyUri(pi.Name), pi.GetValue(track, null).ToString(), ms);
  }

  public static void Add(string s, string p, string o, MemoryStore ms)
  {
    if (!Empty(s) && !Empty(p) && !Empty(o))
      ms.Add(new Statement(new Entity(s), new Entity(p), new Literal(o)));
  }

  private static bool Empty(string s)
  {
    return (s == null || s.Length == 0);
  }
}

Add is the extension method, which iterates the properties on the class OwlInstanceSupertype. OwlInstanceSupertype is a supertype of all classes that can be persisted to the store. As you can see, it gets all properties and checks each one to see whether it has the OwlPropertyAttribute. If it does, then it gets persisted using AddPropertyToStore. AddPropertyToStore creates URIs for the subject (the track instance in the store), the predicate (the property on class Track) and the object (a string literal containing the value of the property). That statement gets added by the string-based Add overload, which just mirrors the Add API on the MemoryStore itself.

And that’s it. Almost. The quick and dirty ontology I defined for the music tracks looks like this:

@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix xsdt: <http://www.w3.org/2001/XMLSchema#> .
@prefix : <file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/> .

:ProducerOfMusic a owl:Class.
:SellerOfMusic a owl:Class.
:NamedThing a owl:Class.
:TemporalThing a owl:Class.
:Person a owl:Class;
  rdfs:subClassOf :NamedThing.
:Musician rdfs:subClassOf :ProducerOfMusic, :Person.
:Band a :ProducerOfMusic.
:Studio a :SellerOfMusic, :NamedThing.
:Label = :Studio.
:Music a owl:Class.
:Album a :NamedThing.
:Track a :NamedThing.
:Song a :NamedThing.
:Mp3File a owl:Class.
:Genre a :NamedThing.
:Style = :Genre.
:title
  rdfs:domain :Track;
  rdfs:range xsdt:string.
:artistName
  rdfs:domain :Track;
  rdfs:range xsdt:string.
:albumName
  rdfs:domain :Track;
  rdfs:range xsdt:string.
:year
  rdfs:domain :Album;
  rdfs:range xsdt:integer.
:genreName
  rdfs:domain :Track;
  rdfs:range xsdt:string.
:comment
  rdfs:domain :Track;
  rdfs:range xsdt:string.
:isTrackOn
  rdfs:domain :Track;
  rdfs:range :Album.
:fileLocation
  rdfs:domain :Track;
  rdfs:range xsdt:string.

When I run it over my podcasts, the output persisted to N3 looks like this:

<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/Track_1> <file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/title> "History 5 | Fall 2006 | UC Berkeley" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/artistName> "Thomas Laqueur" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/albumName> "History 5 | Fall 2006 | UC Berkeley" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/year> "2006" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/genreName> "History 5 | Fall 2006 | UC Berkeley" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/comment> " (C) Copyright 2006, UC Regents" ;
<file:///C:/dev/prototypes/semantic-web/src/Mp3ToRdf/fileLocation> "C:\\Users\\andrew.matthews\\Music\\hist5_20060829.mp3" .

You can see how the URIs have been constructed from the base URI, and the properties are all attached to instance Track_1. Next on the list will probably be a bit of cleaning up to use a prefix rather than this longhand URI notation, then I’ll show you how to query the store to slice and dice your music collections every which way.

A simple semantic web application in C#

The latest update of the SemWeb library from Josh Tauberer includes a C# implementation of the Euler reasoner. This reasoner is able to go beyond simplistic RDFS reasoning (navigating class and property relationships) to make use of rules. The ontology I’ve been using to get used to coding in the framework models a simple state machine. The ontology couldn’t be simpler. Here’s an N3 file that declares the classes and their relationships.


@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix : <http://aabs.purl.org/ontologies/2007/04/states/states.rdf#> .

#Classes
:State a owl:Class;
daml:comment "states the system can be in";
daml:disjointUnionOf ( :S1 :S2 :S3 ).

:InputToken a owl:Class;
daml:comment "inputs to the system";
daml:disjointUnionOf ( :INil :I1 :I2 ).

:Machine a owl:Class.
:System a owl:Class.

#properties
:isInState
rdfs:domain :Machine;
rdfs:range :State;
owl:cardinality "1".

:hasInput
rdfs:domain :System;
rdfs:range :InputToken;
owl:cardinality "1".

#Instances
:Machine1
a :Machine;
:isInState :S1.

:This a :System;
:hasInput :INil.

As with any deterministic finite state machine, there are two key classes at work here: :State and :InputToken. :State is a disjoint union of :S1, :S2 and :S3. That means that an :S1 is not an :S2 or an :S3. If you don’t specify such disjointness, the reasoner cannot assume it – if there is no rule that says the states are disjoint, it won’t be able to assume they’re different. Just because Machine1 is in state :S1 doesn’t mean it isn’t potentially in state :S2 as well; you have to tell it that an :S1 is not an :S2. Pedantry is all-important in ontology design, and while I have gained a fair measure of it over the years as a defensive programmer, I was shocked at how little semantic support I get from the programming languages I use. OWL provides you with a much richer palette to work with, though with far less conventional support. It is kind of liberating to be designing class libraries in OWL vs. OO languages. Kind of like when you go from DOS to Bash.

Anyway, the rules for this little ontology define a transition table for the state machine:


@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix : <http://aabs.purl.org/ontologies/2007/04/states/states.rdf#> .

# (:S1, :I1) ~> (:S1, :INil)
{ :Machine1 :isInState :S1. :This :hasInput :I1. }
=>
{ :Machine1 :isInState :S1. :This :hasInput :INil. }.

# (:S1, :I2) ~> (:S2, :INil)
{ :Machine1 :isInState :S1. :This :hasInput :I2. }
=>
{ :Machine1 :isInState :S2. :This :hasInput :INil. }.

# (:S1, :I3) ~> (:S3, :INil)
{ :Machine1 :isInState :S1. :This :hasInput :I3. }
=>
{ :Machine1 :isInState :S3. :This :hasInput :INil.}.

I got into problems initially, since I thought about the problem from an imperative programming perspective. I designed it like I was assigning values to variables. That’s the wrong approach – treat this as adding facts to what is already known. So, rather than saying “if X, then do Y”, think of it as “if I know X, then I also know Y”. The program to work with it looks like this:


internal class Program
{
  private static readonly string ontologyLocation =
    @"C:\dev\prototypes\semantic-web\ontologies\2007\04\states\";

  private static string baseUri =
    @"file:///C:/dev/prototypes/semantic-web/ontologies/2007/04/states/states.rdf#";
  private static MemoryStore store = new MemoryStore();
  private static Entity Machine1 = new Entity(baseUri + "Machine1");
  private static Entity Input1 = new Entity(baseUri + "I1");
  private static Entity Input2 = new Entity(baseUri + "I2");
  private static Entity theSystem = new Entity(baseUri + "This");
  private static string hasInput = baseUri + "hasInput";
  private static string isInState = baseUri + "isInState";

  private static void Main(string[] args)
  {
    InitialiseStore();
    DisplayCurrentStates();
    SetNewInput(Input2);
    DisplayCurrentStates();
  }

  private static void DisplayCurrentStates()
  {
    // List every state the reasoner currently believes Machine1 to be in.
    SelectResult ra = store.Select(new Statement(Machine1, new Entity(isInState), null));
    Debug.Write("Current states: ");
    foreach (Statement resource in ra.ToArray())
    {
      Debug.Write(resource.Object.Uri);
    }
    Debug.WriteLine("");
  }

  private static void InitialiseStore()
  {
    // Import the state machine facts, then attach the Euler reasoner loaded with the transition rules.
    string statesLocation = Path.Combine(ontologyLocation, "states.n3");
    string rulesLocation = Path.Combine(ontologyLocation, "rules.n3");
    Euler engine = new Euler(new N3Reader(File.OpenText(rulesLocation)));
    store.Import(new N3Reader(File.OpenText(statesLocation)));
    store.AddReasoner(engine);
  }

  private static void SetNewInput(Entity newInput)
  {
    // Swap the current input fact for the new one, then replace the state fact
    // with whatever state the reasoner now infers the machine to be in.
    Resource[] currentInput = store.SelectObjects(theSystem, hasInput);
    Statement input = new Statement(theSystem, hasInput, Input1);
    store.Remove(new Statement(theSystem, hasInput, currentInput[0]));
    store.Add(new Statement(theSystem, hasInput, newInput));
    Resource[] subsequentState = store.SelectObjects(Machine1, isInState);
    Statement newState = new Statement(Machine1, isInState, subsequentState[0]);
    store.Replace(new Statement(Machine1, isInState, null), newState);
  }
}

The task was simple – I wanted to set the state machine up in state :S1 with input :INil, then feed in input :I2 and see the state change from :S1 to :S2. In doing this I am trying to do something that is a little at odds with the expressed intention of ontologies. They are static declarations of a body of knowledge rather than a specification for a dynamically changing body of facts. What that means is that they are additive – the frameworks and reasoners allow you to add to a body of knowledge. That makes reuse and trust possible on the semantic web[1, 2]. If I can take your ontology and change it to mean something other than what you intended, then no guarantees can be made about the result. The ontology should stand alone – if you want to base some data on it, that’s up to you, and you will have to manage it. In practical terms, that means you have to manually change the entries for the input and the states as they change. What the ontology adds is a framework for representing the data, and a set of rules for working out what the next state should be. That’s still powerful, but I wonder how well it would scale.

What to notice

There are a few things in here that you should pay very close attention to if you want to write a semantic web application of your own using SemWeb. Firstly, the default namespace definition in the ontology and rules definition files. Generally, the examples of N3 files on the W3C site use the following format to specify the default namespace of a file:

@prefix : <#>.

Unfortunately that leaves a little too much room for manoeuvring within SemWeb, and the actual URIs that it will use are a little unpredictable. Generally they are based upon the location that SemWeb got the files from. Instead, choose a URL like so:

@prefix : <http://aabs.purl.org/ontologies/2007/04/states/states.rdf#>.

This was not the location of the file – it was just a URL I was using prior to switching to the N3 format. The point is that you need to give an unambiguous URL so that SemWeb and its reasoner can distinguish resources properly when you ask questions. I used the same URL in the rules.n3 file, since most of what I was referring to was defined in the namespace above. I could just as easily have defined a new prefix for states.n3 and prepended all the elements in the rules with that prefix. The point is to have a non-default URL so that SemWeb is in no doubt about the URIs of the resources you are referring to.

Next, remember that you will have to dip into the instances in the store to make manual changes to their state – this is no different from any relational database application. I was disconcerted at first, though, because I had hoped that the reasoner would make any changes I needed for me. Alas, that is not in the spirit of the semantic web apparently, so be prepared to manage the system closely.

I think that OWL ontologies would be very easy to use for question-answering applications, but there may be a little more work required to place a reasoner at the core of your system. Of course, I could be wrong about that, and it could be my incorrigible procedural mindset that is to blame – I will keep you posted. I’ll be posting more on this over the next few weeks, so if there are things you want me to write about, or questions you want answered, please drop me a line or comment here, and I’ll try to come up with answers.

2000 Year Old Computer Recreated

Archaeologists from the University of Cardiff have finally worked out how to put together a complex mechanism that was found in the 2000-year-old wreck of a sunken boat off the coast of Greece. The announcement was impressive because this mechanical calculating device had defied researchers’ attempts to understand it for over a hundred years. The device was used to calculate the positions of both the moon and the sun during their orbits, and to calculate eclipses with remarkable accuracy. It is also believed that the device was able to predict the positions of the planets as well. This mechanism is orders of magnitude more complicated than anything we thought the ancients were capable of. We now know that the Greeks were not just great philosophers, but were technologists as well – they were able to turn their cosmological ideas into working models of the solar system. That’s no mean feat in both scientific and engineering terms.

The findings have forced researchers to reassess their ideas about just how technically advanced the ancient Greeks were. If they were able to calculate orbits with such accuracy it implies a level of sophistication that has only been rediscovered in the last few centuries. It makes me consider just how far we fell in the intervening Christian period. The dark ages were not just a period where civilisation took a back-step into an age of ignorance – they were an age so obsessed with religious orthodoxy that no possibility existed for societies to raise themselves out of their benighted state. This discovery highlights just how much was lost during that period. The Greeks had a better understanding of how the world worked than anyone up to the time of Newton and the enlightenment. What could have caused us to lose what must have been a fantastically useful body of knowledge?

The lack of a printing press made it hard to disseminate ideas, but knowledge that valuable wouldn’t have existed in a vacuum. The techniques employed would have been arrived at by groups of artisans or engineers over a long period of research – the knowledge would have been in the possession of many people, not few. So how was it lost? We’ll probably never know. My guess would be that it was at odds with some piece of religious orthodoxy, and was therefore actively ignored or suppressed. But one thing is sure – orthodoxy of any kind (especially the religious kind) is anathema to progress, and can easily lead to a dark age of thousands of years.

We should not be complacent about our secular society’s stability and progress. This ancient computer makes it all too clear that what has been gained can even more easily be lost.

Is it really impossible to choose between LINQ and Stored Procedures?

For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. … The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers:
We must know,
We shall know.

It’s that time of the month again – when all of the evangelically inclined mavens of Readify gather round to have the traditional debate. Despite the fact that they’ve had similar debates for years, they tend to tackle the arguments with gusto, trying to find a new angle of attack from which to sally forth in defence of their staunchly defended position. You may (assuming you never read the title of the post 🙂) be wondering what it is that could inspire such fanatical and unswerving devotion. What is it that could polarise an otherwise completely rational group of individuals into opposing poles that consider the other side completely mad?

What is this Lilliputian debate? I’m sure you didn’t need to ask, considering it is symptomatic of the gaping wound in the side of modern software engineering. This flaw in software engineering is the elephant in the room that nobody talks about (although they talk an awful lot about the lack of space).

The traditional debate is, of course:

What’s the point of a database?

And I’m sure that there’s a lot I could say on the topic (there sure was yesterday) but the debate put me in a thoughtful mood. The elephant in the room, the gaping wound in the side of software engineering is just as simply stated:

How do we prove that a design is optimal?

That is the real reason we spend so much of our time rehearsing these architectural arguments, trying to win over the other side. Nobody gets evangelical about something they just know – they only evangelise about things they are not sure about. Most people don’t proclaim to the world that the sun will rise tomorrow. But like me, you may well devote a lot of bandwidth to the idea that the object domain is paramount, not the relational. As an object oriented specialist that is my central creed and highest article of faith. The traditional debate goes on because we just don’t have proof on either side. Both sides have thoroughly convincing arguments, and there is no decision procedure to choose between them.

So why don’t we just resolve it once and for all? The computer science and software engineering fraternity is probably the single largest focussed accumulation of IQ points gathered in the history of mankind. They all focus intensively on issues just like this. Surely it is not beyond them to answer the simple question of whether we should put our business logic into stored procedures or use an ORM product to dynamically generate SQL statements. My initial thought was “We Must Know, We Will Know” or words to that effect. There is nothing that can’t be solved given enough resolve and intelligence. If we have a will to do so, we could probably settle on a definitive way to describe an architecture so that we can decide what is best for a given problem.

Those of you who followed the link at the top of the post will have found references there to David Hilbert, and that should have given you enough of a clue to know that there’s a chance that my initial sentiment was probably a pipe dream. If you are still in the dark, I’m referring to Hilbert’s Entscheidungsproblem (or the Decision Problem in English) and I beg you to read Douglas Hofstadter’s magnificent Gödel, Escher, Bach – An eternal golden braid. This book is at the top of my all-time favourites list, and among the million interesting topics it covers, the decision problem is central.

The Decision Problem – a quick detour

One thing you’ll notice about the Entscheidungsproblem and Turing’s Halting Problem is that they are equivalent. They seem to be asking about different things, but at a deeper level the problems are the same. The decision problem asks whether there is a mechanical procedure to determine the truth of any mathematical statement. At the turn of the century they might have imagined some procedure that cranked through every derivation of the axioms of mathematical logic till it found a proof of the statement, returning true. The problem with that brute-force approach is that mathematics allows a continual complexification and simplification of statements – it is non-monotonic. The implication is that just because you have applied every combination of the construction rules on all of the axioms up to a given length, you can’t know whether there are new statements of the same length that could be found by the repeated application of growth and shrinkage rules that aren’t already in your list. That means that even though you may think you have a definitive list of all the true statements of a given length, you may be wrong, so you can never answer ‘false’; you can only continue till you either find a concrete proof or a disproof.

Because of these non-monotonic derivation rules, you will never be sure that no answer from your procedure means an answer of false. You will always have to wait and see. This is the equivalence between the Entscheidungsproblem and Alan Turing’s Halting Problem. If you knew your procedure would not halt, then you would just short-circuit the decision process and immediately answer false. If you knew that the procedure would halt, then you would just let it run and produce whatever true/false answer it came up with – either way, you would have a decision procedure. Unfortunately it’s not that easy, because the halting decision procedure has no overview of the whole of mathematics either, and can’t give an answer to the halting question. Ergo there is no decision procedure either. Besides, Kurt Gödel proved that there were undecidable problems, and the quest for a decision procedure was doomed to fail. He showed that even if you came up with a more sophisticated procedure than the brute-force attack, you would still never get a decision procedure for all mathematics.

The Architectural Decision Problem

What has this got to do with deciding on the relative merits of two software designs? Is the issue of deciding between two designs also equivalent to the decision problem? Is it a constraint optimisation problem? You could enumerate the critical factors, assign a rank to them, and then sum the scores for each design. That is exactly what I did in one of my recent posts entitled “The great Domain Model Debate – Solved!” Of course the ‘Solved!‘ part was partly tongue-in-cheek – I just provided a decision procedure for readers to distinguish between the competing designs of domain models.

One of the criticisms levelled at my offering for this problem was that my weights and scores were too subjective. I maintained that although my heuristic was flawed, it held the key to solving these design issues because there was the hope that there are objective measures of the importance of design criteria for each design, and it was possible to quantify the efficacy of each approach. But I’m beginning to wonder whether that’s really true. Let’s consider the domain model approach for a moment to see how we could quantify those figures.

Imagine that we could enumerate all of the criteria that pertained to the problem. Each represents an aspect of the value that architects place in a good design. In my previous post I considered such things as complexity, density of data storage, performance, maintainability etc. Obviously each of these figures varies in just how subjective it is. Complexity is a measure of how easy it is to understand. One programmer may be totally at home with a design whereas another may be confused. But there are measures of complexity that are objective that we could use. We could use that as an indicator of maintainability – the more complex a design is, the harder it would be to maintain.

This complexity measure would be more fundamental than any mere subjective measure, and would be tightly correlated with the subjective measure. Algorithmic complexity would be directly related to the degree of confusion a given developer would experience when first exposed to the design. Complexity affects our ability to remember the details of the design (as it is employed in a given context) and also our ability to mentally visualise the design and its uses. When we give a subjective measure of something like complexity, it may be due to the fact that we are looking at it from the wrong level. Yes, there is a subjective difference, but that is because of an objective difference that we are responding to.

It’s even possible to prove that such variables exist, so long as we are willing to agree that a subjective dislike that is completely whimsical is not worth incorporating into an assessment of a design’s worth. I’m thinking of such knee-jerk reactions like ‘we never use that design here‘ or ‘I don’t want to use it because I heard it was no good‘. Such opinions whilst strongly felt are of no value, because they don’t pertain to the design per-se but rather to a free-standing psychological state in the person who has them. The design could still be optimal, but that wouldn’t stop them from having that opinion. Confusion on the other hand has its origin in some aspect of the design, and thus should be factored in.

For each subjective criterion that we currently use to judge a design, there must be a set of objective criteria that cause it. If there are none, then we can discount it – it contributes nothing to an objective decision procedure – it is just a prejudice. If there are objective criteria, then we can substitute all occurrences of the subjective criterion in the decision procedure with the set of objective criteria. If we continue this process, we will eventually be left with nothing but objective criteria. At that point are we in a position to choose between two designs?

Judging a good design

It still remains to be seen whether we can enumerate all of the objective criteria that account for our experiences with a design, and its performance in production. It also remains for us to work out ways to measure them, and weigh their relative importance over other criteria. We are still in danger of slipping into a world of subjective opinions over what is most important. We should be able to apply some rigour because we’re aiming at a stationary target. Every design is produced to fulfil a set of requirements. Provided those requirements are fulfilled we can assess the design solely in terms of the objective criteria. We can filter out all of the designs that are incapable of meeting the requirements – all the designs that are left are guaranteed to do the job, but some will be better than others. If that requires that we formally specify our designs and requirements then (for the purposes of this argument) so be it. All that matters is that we are able to guarantee that all remaining designs are fit to be used. All that distinguishes them are performance and other quality criteria that can be objectively measured.

Standard practice in software engineering is to reduce a problem to its component parts, and attempt to then compose the system from those components in a way that fulfils the requirements for the system. Clearly there are internal structures to a system, and those structures cannot necessarily be weighed in isolation. There is a context in which parts of a design make sense, and they can only be judged within that context. Often we judge our design patterns as though they were isolated systems on their own. That’s why people sometimes decide to use design patterns before they have even worked out if they are appropriate. The traditional debate is one where we judge the efficacy of a certain type of data access approach in isolation of the system it’s to be used in. I’ve seen salesmen for major software companies do the same thing – their marks have decided they are going to use the product before they’ve worked out why they will do so. I wonder whether the components of our architectural decision procedure can be composed in the same way that our components are.

In the context in which they’re to be used, will all sub-designs have a monotonic effect on the quality of a system? Could we represent the quality of our system as a sum of scores of the various sub-designs in the system, like this: (Q1 + Q2 + … + Qn)? That would assume that the quality of the system is the sum of the quality of its parts, which seems a bit naive to me – some systems will work well in combination, others will limit the damage of their neighbours, and some will exacerbate problems that would have lain dormant in their absence. How are we to represent the calculus of software quality? Perhaps the answer lies in the architecture itself. If you were to measure the quality of each unique path through the system, then you would have a way to measure the quality of that path as though it were a sequence of operations with no choices or loops involved. You could then sum the quality of each of these paths, weighted in favour of frequency of usage. That would eliminate all subjective bias, and the impact of each sub-design would be proportional to the centrality of its role within the system as a whole. In most systems data access plays a part in pretty much all paths through a system, hence the disproportionate emphasis we place on it in the traditional debates.
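To make that path-weighted idea a little more concrete, one (purely illustrative) way of writing it down is to let f1 … fn be the relative frequencies with which the unique paths p1 … pn are exercised:

Q(system) = (f1·Q(p1) + f2·Q(p2) + … + fn·Q(pn)) / (f1 + f2 + … + fn)

which makes heavily used paths dominate the overall score, in line with the weighting described above.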

Scientific Software Design?

Can we work out what these criteria are? If we could measure every aspect of the system (data that gets created, stored, communicated, the complexity of that data etc) then we have the physical side of the picture – what we still lack is all of those thorny subjective measures that matter. Remember though that these are the subjective measures that can be converted into objective measures. Each of those measures can thus be added to the mix. What’s left? All of the criteria that we don’t know to ask about, and all of the physical measurements that we don’t know how to make, or don’t even know we should make. That’s the elephant in the room. You don’t know what you don’t know. And if you did, then it would pretty immediately fall to some kind of scientific enquiry. But will we be in the same situation as science and mathematics was at the dawn of the 20th Century? Will we, like Lord Kelvin, declare that everything of substance about software architecture is known and all the future holds for us is the task of filling in the gaps?

Are these unknown criteria like the unknown paths through a mathematical derivation? Are they the fatal flaw that unhinges any attempt to assess the quality of a design, or are they the features that turn software engineering into a weird amalgam of mathematics, physics and psychology? There will never be any way for us to unequivocally say that we have found all of the criteria that truly determine the quality of a design. Any criteria that we can think of we can objectify – but it’s the ones we can’t or don’t think of that will undermine our confidence in a design and doom us to traditional debates. Here’s a new way to state Hilbert’s 10th Problem:

Is there a way to fully enumerate all of the criteria that determine the quality of a software design?

Or to put it another way

Will we know when we know enough to distinguish good designs from bad?

The spirit of the enlightenment is fading. That much is certain. The resurgence of religiosity in all parts of the world is a backward step. It pulls us away from that pioneering spirit that Kant called a maturing of the human spirit. Maturity is no longer needing authority figures to tell us what to think. He was talking about the grand program to roll back the stifling power of the church. In software design we still cling to the idea that there are authority figures that are infallible. When they proclaim a design as sound, then we use it without further analysis. Design patterns are our scriptures, and traditional wisdom the ultimate authority by which we judge our designs. I want to see the day when scientific method is routinely brought to bear on software designs. Only then will we have reached the state of maturity where we can judge each design on its objective merits. I wonder what the Readify Tech List will be like then?

Domain Modeling and Ontology Engineering


The semantic web is poised to influence us in ways that will be as radical as the early days of the Internet and World Wide Web. For software developers it will involve a paradigm shift, bringing new ways of thinking about the problems that we solve, and more importantly bringing us new bags of tricks to play with.

One of the current favourite ways to add value to an existing system is through the application of data mining. Amazon is a great example of the power of data mining; it can offer you recommendations based on a statistical model of purchasing behaviour that are pretty accurate. It looks at what the other purchasers of a book bought, and uses that as a guide to make further recommendations.

What if it were able to make suggestions like this: We recommend that you also buy book XYZ because it discusses the same topics but in more depth. That kind of recommendation would be incredible. You would have faith in a recommendation like that, because it wasn’t tainted by the thermal noise of purchaser behaviour. I don’t know why, but every time I go shopping for books on computer science, Amazon keeps recommending that I buy Star Trek books. It just so happens that programmers are suckers for schlock sci-fi books, so there is always at least one offering amongst the CompSci selections.

The kind of domain understanding I described above is made possible through the application of Ontology Engineering. Ontology Engineering is nothing new – it has been around for years in one form or another. What makes it new and exciting for me is the work being done by the W3C on semantic web technologies. Tim Berners-Lee has not been resting on his laurels since he invented the World Wide Web. He and his team have been producing a connected set of specifications for the representation, exchange and use of domain models and rules (plus a lot else besides). This excites me, not least because I first got into Computer Science through an interest in philosophy. About 22 years ago, in a Sunday supplement newspaper a correspondent wrote about the wonderful new science of Artificial Intelligence. He described it as a playground of philosophers where for the first time hypotheses about the nature of mind and reality could be made manifest and subjected to the rigours of scientific investigation. That blew my mind – and I have never looked back.

Which brings us to the present day. Ontology engineering involves the production of ontologies, which are an abstract model of some domain. This is exactly what software developers do for a living, but with a difference. The Resource Description Framework (RDF) and the Web Ontology Language (OWL) are designed to be published and consumed across the web. They are not procedural languages – they describe a domain and its rules in such a way that inference engines can reason about the domain and draw conclusions. In essence the semantic web brings a clean, standardised, web enabled and rich language in which we can share expert systems. The magnitude of what this means is not clear yet but I suspect that it will change everything.

The same imperatives that drove the formulation of standards like OWL and RDF are at work in the object domain. A class definition is only meaningful in the sense that it carries data and its name has some meaning to a programmer. There is no inherent meaning in an object graph that can allow an independent software system to draw conclusions from it. Even the natural language labels we apply to classes can be vague or ambiguous. Large systems in complex industries need a way to add meaning to an existing system without breaking backwards compatibility. Semantic web applications will be of great value to the developer community because they will allow us to inject intelligence into our systems.

The current Web2.0 drive to add value to the user experience will eventually call for more intelligence than can practically be got from our massive OO systems. A market-driven search for competitiveness will drive the software development community to more fully embrace the semantic web as the only easy way to add intelligence to unwieldy systems.

In many systems the sheer complexity of the problem domain has led software designers to throw up their hands in disgust, and opt for data structures that are catch-all buckets of data. Previously, I have referred to them as untyped associative containers because more often than not the data ends up in a hash table or equivalent data structure. For the developer, the untyped associative container is pure evil on many levels – not least from performance, readability, and type-safety angles. Early attempts to create industry mark-up languages foundered on the same rocks. What was lacking was a common conceptual framework in which to describe an industry. That problem is addressed by ontologies.

In future, we will produce our relational and object oriented models as a side effect of the production of an ontology – the ontology may well be the repository of the intellectual property of an enterprise, and will be stored and processed by dedicated reasoners able to gather insights about users and their needs. Semantically aware systems will inevitably out-compete the inflexible systems that we are currently working with, because they will be able to react to the user in a way that seems natural.

I’m currently working on an extended article about using semantic web technologies with .NET. As part of that effort I produced a little ontology in the N3 notation to model what makes people tick. The ontology will be used by a reasoner in the travel and itinerary planning domain.

:Person a owl:Class .
:Need a owl:Class .
:PeriodicNeed rdfs:subClassOf :Need .
:Satisfier a owl:Class .
:need rdfs:domain :Person;
  rdfs:range :Need .
:Rest rdfs:subClassOf :Need .
:Hunger rdfs:subClassOf :Need .
:StimulousHunger rdfs:subClassOf :Need .
:satisfies rdfs:domain :Satisfier;
  rdfs:range :Need .
:Sleep a owl:Class;
  rdfs:subClassOf :Satisfier ;
  :satisfies :Rest .
:Eating a owl:Class;
  rdfs:subClassOf :Satisfier;
  :satisfies :Hunger .
:Tourism a owl:Class;
  rdfs:subClassOf :Satisfier;
  :satisfies :StimulousHunger .

In the travel industry, all travel agents – even online ones – are routed through centralised bureaus that give flight times, take bookings etc. The only way that an online travel agency can distinguish itself is if it is smarter and easier to use. They are tackling the latter problem these days with AJAX, but they have yet to find effective ways to be smarter. An ontology that understands people a bit better is going to help them target their offerings more ‘delicately’. I don’t know about you, but I am tired of portal sites that present countless sales pitches on the one page: endless checkboxes for extra services, and links to product partners that you might need something from. As the web becomes more interconnected, this is going to become more and more irritating. The systems must be able to understand that the last thing a user wants after a 28-hour flight is a guided tour of London, or tickets to the planetarium.

The example ontology above is a simple kind of upper ontology. It describes the world in the abstract to provide a kind of foundation off which to build more specific lower ontologies. This one just happens to model a kind of Freudian drive mechanism to describe how people’s wants and desires change over time (although the changing over time bit isn’t included in this example). Services can be tied to this upper ontology easily – restaurants provide Eating, which is a satisfier for hunger. Garfunkle’s restaurant (a type of Restaurant) is less than 200 metres from the Cecil Hotel (a type of Hotel that provides sleeping facilities, a satisfier of the need to rest) where you have a booking. Because all of these facts are subject to rules of inference, the inference engines can deduce that you may want to make a booking to eat at the hotel when you arrive, since it will have been 5 hours since you last satisfied your hunger.

The design of upper ontologies is frowned upon mightily in the relational and object oriented worlds – it smacks of over-engineering. For the first time we are seeing a new paradigm that will reward deeper analysis. I look forward to that day.


The Great Domain Model Debate – Continued.

I recently got this excellent comment from Jeff Grigg on my domain model debate article. I started to reply with a comment, but the reply got to the point where it became a bit of a blog post in its own right, so I thought it worth giving it first class status. Jeff wrote this:

Interesting concept: I particularly like the concept of using numeric weights and rankings, rather than just claims of preference. But we need to realize that we can use numbers to express our level of subjective feelings, and that doing so doesn’t make the results objective.

I notice that, according to your numbers, the top two recommended approaches are: (#1) Anemic Domain Model, and (#2) Rich Domain Model, with all others trailing behind these. The difference between the top two is about 10% of the difference between the top and bottom (best and worst) approaches. So I’d say that even you’re numbers aren’t a strong argument against using RDM; they only show a moderate advantage of ADM, while showing that RDM is still better than anything else (other than ADM).

About 46% of the difference between ADM and RDM is that you feel that ADM is twice as “encapsulatable.” I think it would be helpful to have a better idea of what you mean by the values, like “encapsulatable.” When I consider how well a class encapsulates its fields, I use the information hiding principles, concluding that an ADM object that exposes all fields with getters and setters is not well encapsulated, while a RDM object that hides its fields and exposes only operations is very well encapsulated.

For about 23% of the difference between ADM and RDM, you feel that ADM is nearly twice as maintainable. My experience has generally been the opposite: For example, when interpretation and enforcement of codes and states is distributed throughout all the procedural modules that use a record, rather than centralized and encapsulated in the class that represents that record, then ensuring that all actions conform to the rules and changing the rules can be difficult, costly and risky.

(For completeness, the remaining differences are: technological intrusiveness (15%), complexity of programming model (11%), and ease of extensibility (5%).)

One last thought about the numbers: I find it interesting that “naked objects” gets a much different and much worse score than either ADM or RDM; only three others scored worse. I find this interesting because the Naked Objects Framework, which you link to in your article (see http://nakedobjects.org/ for more info) is based heavily on the assumption that you will build a Rich Domain Model: With Naked Objects, there are no additional layers of logic between the GUI and the domain model, so unless all operations the user is interested in are in the domain model, the user will not be able to invoke them. And unless the operations are directly on the relevant business domain classes, the system will probably be difficult and confusing to use.

Overall, I found this article and in particular your approach interesting and informative. I like the ideas; I just don’t agree with the conclusions.

Thanks for the thoughtful reply!

I acknowledge that this article is primarily a numerical statement of my bias, and that some of the frameworks I compared I had a lot less experience with than ADM, which was already my preferred choice.

I didn’t want the model to be interpreted as objective truth, rather as a means for me to determine whether ADM really was the best choice for me (on the kinds of projects I’ve been working with). I thought it possible that I might have scored some other technique higher, and thus should abandon ADM as my preferred choice. I guess this was a long shot, but until I gave it sufficiently systematic thought I wasn’t going to be able to say with any confidence that I thought that ADM is what you ought to be using.

I also wanted to make it clear that (for any given designer) it is possible to express opinions in a way that (as you so clearly demonstrated) others can easily highlight deficiencies in. Blogging is such a linear medium that I would have had to write an endless succession of posts to cover the kind of material I did in those two images! I think that would have taxed even the most patient of my readers. Being able to clearly express my thoughts in terms of a numerical weight means that you can easily compare the performance or maintainability of two techniques and take issue with my assessment. If your arguments are persuasive, then I can at the very least update my spreadsheet. For example, I will revisit the Naked Objects framework, and give it another try, since it’s clear I didn’t get the whole picture when I first read about it. It’s possible that the framework has undergone a lot of refinement in the time since I read about it, and is now more worthy of attention than it was. I can’t think of a better way (short of blogging to excess on minutiae) to get that kind of broad picture of a person’s experiences with object modelling than for them to score a model like this.

Given a broadly agreed set of benchmarks, it might be possible to add a level of objectivity to the figures. It must be said that most of the papers I’ve ever read on the efficacy of certain software engineering practices were little more than folklore. That is especially true in the case of the maintainability of software systems. If a design doesn’t interfere with the efforts of the maintenance programmers assigned to it, does that mean that (1) the design was good, (2) the maintainers were the original developers, or (3) the maintainers were very experienced with such designs? It’s clear that most information we have about the efficacy of programming and design techniques is for the most part anecdotal (even when dressed up in academic guise) and is of little more worth than our own personal experience of working with the code of others. We could really benefit from a degree of objectivity in these issues – there are so many orthogonal criteria on which a design can be judged. The problem is that measuring how well a design performs on each criterion is largely a subjective exercise. How can we find a way to get objectivity out of a comparison of design methods? Those “do a cook-off, then interview the developers afterwards” studies are not very useful. And of course you can’t just get the same developers to do the project again using different techniques – they’d find the project too easy the second time round because all the problems would be familiar.

What I wanted was to create a simple way to help people to think for themselves about the domain model debate. I’ve recently been listening to a great podcast about the enlightenment by Thomas Laqueur, and one particular idea stuck with me which I think pertains to the current issue. Kant said that the enlightenment movement was a maturing of the consciousness of the population. Maturity for him meant not looking to an authority figure for guidance, but instead working solutions out for yourself. In his case he was referring to religion and statecraft, but I think the principle applies equally well now. We should seek ways to make our own minds up, or at least have a truly solid, objectively measurable argument in favour of a design principle, rather than just an endorsement by famous designers. Just because designer XYZ disparages ADM, does that mean we should abandon it? Not necessarily; we should first look at his or her reasons and what kind of projects they were using it for.

It sometimes seems to me that opinioneers are what they are because we require it of them. They have to have an opinion, and as time goes by those opinions have to get more extreme to be heard above the background noise. That is not currently as true in the object oriented design community as it is in the arena of politics, but the more people blindly follow design guidance the more it will become so.

Follow the tradition of the enlightenment – make your own mind up, and feel free to use or expand on my spreadsheet to do so. And if you can’t see my spreadsheet properly, then blame WordPress for only putting out narrow stylesheets.