
Relational Modeling? Not as we know it!

Marcello Cantos commented on my recent post about the ways in which RDF can transcend the object-oriented model. He posed the question of what things RDF can represent more easily than the relational model. I know Marcello is a very high calibre software engineer, so it’s not just an idle question from a relational dinosaur, but a serious question from someone who can push the envelope far with a relational database.

Since an ontology is most frequently defined (in computer science) as a specification of a conceptualization, a relational model is a kind of ontology. That means a relational model is, by definition, a knowledge representation system. That would be my answer if I just wanted to sidestep the real thrust of his question: is the relational model adequate to do what can be done by RDF?

That’s a more interesting question, and I’d be inclined to say that everything I said in my previous post about the shortcomings of object-oriented programming languages applies equally to the relational model. But let’s take another look at the design features of RDF that make it useful for representing ‘knowledge’.

○ URI based
○ Triple format
○ Extensible
○ Layered
○ Class based
○ Meta-model

URI Based

By using URIs as tokens of identification and definition, and by making those identifications and definitions readable, interchangeable and reusable, the designers of RDF exposed the conceptualisation of the ontology to the world at large. Could you imagine defining a customer in your database as ‘everything in XYZ company’s CRM’s definition of a customer, plus a few special fields of our own’? It is not practical. Perhaps you might want to say ‘everything in their database, less some fields that we’re not interested in’. Again, not possible. Relational models are not as flexible as the concepts they need to represent. That is also the real reason why interchange formats never caught on: they were just not able to adapt to the ways that people needed to use them. RDF is designed from the outset to be malleable.
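To make that concrete, here is a rough C# sketch using SemWeb (the RDF library used for the LINQ work later in this blog). The URIs are invented for illustration and the SemWeb calls are written from memory, so treat it as a sketch rather than gospel. The point is that our notion of a customer can be declared simply by referring to XYZ’s published definition by its URI and then adding properties of our own:

using SemWeb;

public class CustomerSchemaSketch
{
    // Sketch only: the URIs are invented and the SemWeb API calls are from memory.
    public static MemoryStore BuildSchema()
    {
        MemoryStore store = new MemoryStore();
        Entity ourCustomer = new Entity("http://ourcompany.example.com/schema#Customer");
        Entity xyzCustomer = new Entity("http://xyz.example.com/crm/schema#Customer");
        Entity subClassOf  = new Entity("http://www.w3.org/2000/01/rdf-schema#subClassOf");
        Entity rdfsDomain  = new Entity("http://www.w3.org/2000/01/rdf-schema#domain");

        // 'everything in XYZ company's CRM's definition of a customer...'
        store.Add(new Statement(ourCustomer, subClassOf, xyzCustomer));

        // '...plus a few special fields of our own'
        Entity loyaltyTier = new Entity("http://ourcompany.example.com/schema#loyaltyTier");
        store.Add(new Statement(loyaltyTier, rdfsDomain, ourCustomer));

        return store;
    }
}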

Triple Format

At their foundation, all representations make statements about the structure or characteristics of things. All statements take (or can be transformed into) the form of a triple: (subject, predicate, object). The relational model strictly defines the set of triples that can be expressed about a thing. For example, imagine a table ‘Star’ with some fields:

CREATE TABLE Star (
	StarId INT PRIMARY KEY,
	CommonName NVARCHAR(256),
	Magnitude DECIMAL NOT NULL,
	RA DECIMAL NOT NULL,
	[DEC] DECIMAL NOT NULL,
	Distance DECIMAL NOT NULL,
	SpectralType NVARCHAR(64)
	)

Now if we had a row

(123, 'Deneb', 1.25, 300.8, 45.2, 440, 'A2Ia')

That would be equivalent to a set of triples represented in N3 like this:

[]
  StarId 123;
  CommonName "Deneb";
  Magnitude "1.25"^^xsd:decimal;
  RA "300.8"^^xsd:decimal;
  DEC "45.2"^^xsd:decimal;
  Distance "440"^^xsd:decimal;
  SpectralType "A2Ia" .

Clearly there’s a great deal of overlap between these two systems, and the one is convertible into the other. But what happens when we launch a new space probe capable of measuring some feature of the star that was never measurable before? Or what happens when we realise that to plot our star’s position far into the future we need to store radial velocity, proper motion and absolute magnitude? We don’t have fields for those, and there’s no way to add them without extensive modifications to the database schema.

RDF triple stores (or runtime models, or files for that matter) have no particular dependence on the data conforming to a prescribed format. More importantly, class membership and instance-hood are decoupled, so a ‘thing’ can exist without automatically being in a class. In OO languages you MUST have a type, just as in an RDBMS a row MUST come from some table. In RDF we can define an instance that has all of the properties defined in table ‘Star’, plus a few others gained from the Hipparcos catalog and a few more gleaned from the Tycho-1 catalog. It does not break the model, nor invalidate the instance’s membership of the ‘Star’ class, to have this extra information; it just happens that we know more about Deneb in our database than we do about some other stars.

This independent, extensible, free-form, standards-based language is capable of accommodating any knowledge that you can gather about a thing. If you add meta-data about the thing then more deductions can be made about it, but its absence doesn’t stop you from adding or using the data in queries.
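In code, with SemWeb (the C# RDF library used for the LINQ work later in this blog), accommodating those extra Hipparcos and Tycho-1 facts about Deneb is just a matter of asserting more statements about the same subject; nothing about the ‘Star’ class has to change. The URIs and values below are placeholders and the API calls are written from memory:

using SemWeb;

public class DenebSketch
{
    // Sketch only: invented URIs, placeholder values, SemWeb calls from memory.
    public static void AddExtraFacts(Store store)
    {
        Entity deneb = new Entity("http://example.org/stars#Deneb");

        // A property the relational 'Star' table also knows about.
        store.Add(new Statement(deneb,
            new Entity("http://example.org/stars#Magnitude"),
            new Literal("1.25")));

        // Extra facts from other catalogs: no schema change, no broken model.
        store.Add(new Statement(deneb,
            new Entity("http://example.org/hipparcos#parallax"),
            new Literal("0.0")));   // placeholder value
        store.Add(new Statement(deneb,
            new Entity("http://example.org/tycho1#btMagnitude"),
            new Literal("0.0")));   // placeholder value
    }
}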

Extensible, Layered, Class Based with Meta-model

Being extensible, in the case of RDF, means a few things. It means that RDF supports OO-style multiple inheritance relationships, and as my previous post showed, that is only the tip of the iceberg for RDF class membership. That post went into more detail about how class membership is not based on some immutable Type property that, once assigned, can never be removed. Instead, it can be based on more or less flexible criteria.

Extensibility in RDF also means providing a way to make complex statements about the modelling language itself. Once the base RDF language defines the structure of triples (and the URIs that can appear as subjects, predicates or objects), RDF has a way to define more complex relationships on top of it. The language was extended with RDF Schema, which in turn was extended by the several layers of OWL, which will in turn be extended by yet more abstract layers.

Is there a mechanism for self-reference in SQL? I can’t think of a way of defining one structure in a database in terms of the structure of another, nor of being explicit about the nature of the relationship between two entities. Is there a way for you to state in your relational model a fact like this:

{?s CommonName ?c.} => {?s Magnitude ?m. ?m lessThan 6.}

i.e. if it has a common name then it must be visible to the naked eye (apparent magnitude less than about 6). I guess you’d do that with a relational view, so that you could query whether the view ‘nakedEyeStars’ contains star 123. Of course, CommonName could apply to botanical entities (plants) as well as to stars, but I imagine you’d struggle to create a view that merged data from the plant table and the star table.

So, in conclusion, there are plenty of ways in which RDF specifically addresses the problems it seeks to address – data interchange, standards definition, knowledge representation, mashups – in a distributed, web-wide way. RDBMSs address the problems faced by programmers at the coal face in the 60s and 70s: efficient, standardized, platform-independent data storage and retrieval. The imperative that created the need for RDBMSs in the 60s is not going away, so I doubt databases will be going away any time soon either. In fact they can be exposed to the world as triples without too much trouble. The problem is that developers need more than just data storage and retrieval; they need intelligent data storage and retrieval.

Not another mapping markup language!

Kingsley Idehen has again graciously given LinqToRdf some much-needed link-love. He mentioned it in a post that was primarily concerned with the issues of mapping between the ontology, relational and object domains. His assertion is that LinqToRdf, being an offshoot of an ORM-related initiative, is reversing the natural order of mappings. He believes that in the world of ORM systems the emphasis should be on mapping from the relational domain to the object domain.

I think he has a point, but not for the reason he puts forward. I think the natural direction of mapping stems from the relative richness of the domains being mapped. The impedance mismatch between the relational and object domains stems from (1) the implicitness of meaning in the relationships of relational systems, (2) the way relationships are represented, and (3) type mismatches.

The object domain, with its greater expressiveness and explicit meaning in relationships, has a ‘larger’ language than that expressible using relational databases. In the relational domain relationships are still representable, but their meaning is implicit. For that reason you would have to confine your mappings to those that can be represented in the target (relational) domain. In that sense you get a priority inversion that forces the lowest-common-denominator language to control what gets mapped.

The same form of inversion occurs between the ontological and object domains, only this time it is the object domain that is the lowest common denominator. OWL is able to represent such things as restriction classes, multiple inheritance and sub-properties that are hard or impossible to represent in languages like C# or Java. When I heard of the RDB2RDF work at the W3C, I suggested (to thunderous silence) that they direct their attentions to coming up with a general-purpose mapping ontology that could be used for performing any kind of mapping.

I felt that it would have been extremely valuable to have a standard language for defining mappings. Just off the top of my head I can think of the following places where it would be useful:

  1. Object/Relational Mapping Systems (O/R or ORM)
  2. Ontology/Object Mappings (such as in LinqToRdf)
  3. Mashups (merging disparate data sources)
  6. Ontology Reconciliation – finding the intersection between two sets of concepts
  5. Data cleansing
  6. General purpose data access layer automation
  7. Data export systems
  8. Synchronization Systems (i.e. keeping systems like CRM and AD in sync)
  9. mapping objects/tables onto UIs
  10. etc

You can see that most of these are perennial real-world problems that programmers are ALWAYS having to contend with. Having a standard language (and API?) would really help with all of these cases.

I think such an ontology would be a nice addition to OWL or RDF Schema, allowing a much richer definition of equivalence between classes (or groups or parts of classes). Right now one can define a one-to-one relationship using the owl:equivalentClass property. It’s easy to imagine that two ontology designers might approach a domain from such orthogonal directions that they find it hard to define any conceptual overlap between entities in their ontologies. A much more complex language is required to allow the reconciliation of widely divergent models.

I understand that by focusing their attentions on a single domain they increase their chances of success, but what the world needs from an organization like the W3C is the kind of abstract thinking that gave rise to RDF, not another mapping markup language!


Here’s a nice picture of how LinqToRdf interacts with Virtuoso (thanks to Kingsley’s blog).

How LINQ uses LinqToRdf to talk to SPARQL stores

Creating A LINQ Query Provider

As promised last time, I have extended the query mechanism of my little application with a LINQ Query Provider. I based my initial design on the method published by Bart De Smet, but have extended that framework, cleaned it up and tied it in with the original object deserialiser for SemWeb (a semantic web library by Joshua Tauberer).

In this post I’ll give you some edited highlights of what was involved. You may recall that last post I provided some unit tests that I was working with. For the sake of initial simplicity (and to make it easy to produce queries with SemWeb’s GraphMatch algorithm) I restricted my query language to conjunction and equality. Here’s the unit test that I worked with to drive the development process. What I produced last time was a simple scanner that went through my podcasts extracting metadata and creating objects of type Track.

[TestMethod]
public void QueryWithProjection()
{
    CreateMemoryStore();
    IRdfQuery<Track> qry = new RdfContext(store).ForType<Track>();
    var q = from t in qry
            where t.Year == 2006 &&
                  t.GenreName == "History 5 | Fall 2006 | UC Berkeley"
            select new {t.Title, t.FileLocation};
    foreach (var track in q)
    {
        Trace.WriteLine(track.Title + ": " + track.FileLocation);
    }
}

This method queries the Tracks collection in an in-memory triple store loaded from a file in N3 format. It searches for any UC Berkeley podcasts produced in 2006, and performs a projection to create a new anonymous type containing the title and location of the files.

I took a leaf out of the book of LINQ to SQL to create the query object. In LINQ to SQL you indicate the type you are working with using a Table<T> class. In my query context class, you identify the type you are working with using a ForType<T>() method. This method instantiates a query object for you, and (in future) will act as an object registry to keep track of object updates.

The RdfContext class is very simple:

public class RdfContext : IRdfContext
{
    public Store Store
    {
        get { return store; }
        set { store = value; }
    }

    protected Store store;

    public RdfContext(Store store)
    {
        this.store = store;
    }

    public void AcceptChanges()
    {
        throw new NotImplementedException();
    }

    public IRdfQuery<T> ForType<T>()
    {
        return new RdfN3Query<T>(store);
    }
}

As you can see, it is pretty bare at the moment. It maintains a reference to the store, and instantiates query objects for you. But in future this would be the place to create transactional support, and perhaps maintain connections to triple stores. By and large, though, this class will be pretty simple in comparison to the query class that is to follow.

I won’t repeat all of what Bart De Smet said in his excellent series of articles on the production of LINQ to LDAP. I’ll confine myself to this implementation, and how it works. So we have to start by creating our Query Object:

public class RdfN3Query<T> : IRdfQuery<T>
{
    public RdfN3Query(Store store)
    {
        this.store = store;
        this.originalType = typeof(T);
        parser = new ExpressionNodeParser<T>();
    }

First, it stores a reference to the triple store for later use. In a more real-world implementation this might be a URL or a connection string, but for the sake of this implementation we can be happy with the memory store used in the unit test. Next, we keep a record of the original type that is being queried against. This is important because later on you may also be dealing with a new anonymous type created by the projection, and that type will not carry any of the Owl*Attribute classes with which to work out URIs for properties and to perform deserialisation.
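For illustration, the Track class used in the unit test might be decorated something like the following. The attribute types (OwlClass, OwlProperty) and the ontology URIs here are placeholders of my own, not the real LinqToRdf attribute set; the point is simply that each property carries the URI that OwlClassSupertype.GetPropertyUri and the deserialiser can later look up:

using System;

// Hypothetical sketch: these attribute types and URIs are placeholders,
// not necessarily what LinqToRdf actually defines.
[AttributeUsage(AttributeTargets.Class)]
public class OwlClassAttribute : Attribute
{
    public string Uri;
    public OwlClassAttribute(string uri) { Uri = uri; }
}

[AttributeUsage(AttributeTargets.Property)]
public class OwlPropertyAttribute : Attribute
{
    public string Uri;
    public OwlPropertyAttribute(string uri) { Uri = uri; }
}

[OwlClass("http://example.org/music#Track")]
public class Track
{
    [OwlProperty("http://example.org/music#title")]
    public string Title { get; set; }

    [OwlProperty("http://example.org/music#year")]
    public int Year { get; set; }

    [OwlProperty("http://example.org/music#genreName")]
    public string GenreName { get; set; }

    [OwlProperty("http://example.org/music#fileLocation")]
    public string FileLocation { get; set; }
}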

The two most important methods in IQueryable<T> are CreateQuery and GetEnumerator. CreateQuery is the place where LINQ feeds you the expression tree that it has built from your initial query. You must parse this expression tree and store the resultant query somewhere for later use. I created a string called query to keep it in, and created a class called ExpressionNodeParser to walk the expression tree and build the query string. This is equivalent to the stage where the SQL SELECT statement gets created in DLINQ. My CreateQuery looks like this:

public IQueryable<TElement> CreateQuery<TElement>(Expression expression)
{
    RdfN3Query<TElement> newQuery = new RdfN3Query<TElement>(store);
    newQuery.OriginalType = originalType;
    newQuery.Project = project;
    newQuery.Properties = properties;
    newQuery.Query = Query;
    newQuery.Logger = logger;
    newQuery.Parser = new ExpressionNodeParser<TElement>(
        new StringBuilder(parser.StringBuilder.ToString()));
    MethodCallExpression call = expression as MethodCallExpression;
    if (call != null)
    {
        switch (call.Method.Name)
        {
            case "Where":
                Log("Processing the where expression");
                newQuery.BuildQuery(call.Parameters[1]);
                break;
            case "Select":
                Log("Processing the select expression");
                newQuery.BuildProjection(call);
                break;
        }
    }
    return newQuery;
}

You create a new query because you may be doing a projection, in which case the type you are enumerating over will not be the original type that you put into ForType<T>(); it may instead be the anonymous type from the projection. You transfer the vital information over to the new query object, and then handle the expression that has been passed in. I am handling two methods here: Where and Select. There are others I could handle, such as OrderBy or Take, but that will have to wait for a future post.

Where is passed the expression representing the query itself; Select is passed the tree representing the projection (if there is one). The work is passed off to BuildQuery and BuildProjection accordingly. These names were gratefully stolen from LINQ to LDAP.

BuildQuery in LINQ to LDAP is a fairly complicated affair, but in LINQ to RDF I have pared it right down to the bone.

private void BuildQuery(Expression q)
{
    StringBuilder sb = new StringBuilder();
    ParseQuery(q, sb);
    Query = Parser.StringBuilder.ToString();
    Trace.WriteLine(Query);
}

We create a StringBuilder that can be passed down into the recursive-descent tree walker to gather the fragments of the query as each expression gets parsed. The result is then stored in the Query property of the query object. BuildProjection looks like this:

private void BuildProjection(Expression expression)
{
    LambdaExpression le = ((MethodCallExpression)expression).Parameters[1] as LambdaExpression;
    if (le == null)
        throw new ApplicationException("Incompatible expression type found when building a projection");
    project = le.Compile();
    MemberInitExpression mie = le.Body as MemberInitExpression;
    if (mie != null)
        foreach (Binding b in mie.Bindings)
            FindProperties(b);
    else
        foreach (PropertyInfo i in originalType.GetProperties())
            properties.Add(i.Name, null);
}

Much of it is taken directly from LINQ to LDAP. I have adapted it slightly because I am targeting the May 2006 CTP of LINQ. I’ve done this only because I have to use VS 2005 during the day, so I can’t use the March 2007 version of Orcas.

ParseQuery is used by BuildQuery to handle the walking of the expression tree. Again that is very simple since most of the work is now done in ExpressionNodeParser. It looks like this:

private void ParseQuery(Expression expression, StringBuilder sb)
{
    Parser.Dispatch(expression);
}

Parser.Dispatch is a gigantic switch statement that passes off the expression tree to handler methods:

public void Dispatch(Expression expression)
{
    switch (expression.NodeType)
    {
        case ExpressionType.Add:
            Add(expression);
            break;
        case ExpressionType.AddChecked:
            AddChecked(expression);
            break;
        case ExpressionType.And:
            And(expression);
            break;
        case ExpressionType.AndAlso:
            AndAlso(expression);
            //...

Each handler method then handles the root of the expression tree, breaking it up and passing on what it can’t handle itself. For example, the method AndAlso just takes the left and right side of the operator and recursively dispatches them:

public void AndAlso(Expression e)
{
    BinaryExpression be = e as BinaryExpression;
    if (be != null)
    {
        Dispatch(be.Left);
        Dispatch(be.Right);
    }
}

The equality operator is the only operator that currently gets any special effort.

public void EQ(Expression e)
{
    BinaryExpression be = e as BinaryExpression;
    if (be != null)
    {
        MemberExpression me = be.Left as MemberExpression;
        ConstantExpression ce = be.Right as ConstantExpression;
        QueryAppend(tripleFormatStringLiteral,
            InstancePlaceholderName,
            OwlClassSupertype.GetPropertyUri(typeof(T), me.Member.Name),
            ce.Value.ToString());
    }
    MethodCallExpression mce = e as MethodCallExpression;
    if (mce != null && mce.Method.Name == "op_Equality")
    {
        MemberExpression me = mce.Parameters[0] as MemberExpression;
        ConstantExpression ce = mce.Parameters[1] as ConstantExpression;
        QueryAppend(tripleFormatStringLiteral,
            InstancePlaceholderName,
            OwlClassSupertype.GetPropertyUri(typeof(T), me.Member.Name),
            ce.Value.ToString());
    }
}

The equality expression can be formed either through the use of a binary expression with NodeType.EQ or as a MethodCallExpression on op_Equality for type string. If the handler for the MethodCallExpression spots op_Equality it passes the expression off to the EQ method for it to render instead. EQ therefore needs to spot which type of Node it’s dealing with to know how to get the left and right sides of the operation. In a BinaryExpression there are Right and Left properties, whereas in a MethodCallExpression these will be found in a Parameters collection. In our example they get the same treatment.

You’ll note that we assume that the left operand is a MemberExpression and the right is a ConstantExpression. That allows us to form clauses like this:

where t.Year == 2006

but it would fail on all of the following:

where t.Name.ToUpper() == "SOME STRING"
where t.Name == t.Other
where t.Year.ToString() == "2006"

Each of these cases will have to be handled individually, so the number of cases we need to handle can grow. As Bart De Smet pointed out, some of the operations might have to be performed after retrieval of the results since semantic web query languages are unlikely to have complex string manipulation and arithmetic functions. Or at least, not yet.

QueryAppend forms an N3 triple out of its parameters and appends it to the StringBuilder that was passed to the parser initially. At the end of the recursive tree walk, this StringBuilder is harvested and preprocessed to make it ready to pass to the triple store. In my previous post I described an ObjectDeserialisationsSink that was passed to SemWeb during the query process to harvest the results. This has been reused to gather the results of the query from within our query class.
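QueryAppend itself isn’t shown in this post, so here is a minimal sketch of what such a helper might look like, sitting on the expression parser alongside its StringBuilder property. The format string and member names are assumptions of mine, not the actual LinqToRdf source:

// Sketch only, not the real LinqToRdf code: a helper that renders one N3 triple
// pattern into the StringBuilder gathered during the expression-tree walk.
private const string tripleFormatStringLiteral = "{0} <{1}> \"{2}\" .\n";

private void QueryAppend(string tripleFormat, params object[] args)
{
    // e.g. QueryAppend(tripleFormatStringLiteral, "?t", yearPropertyUri, "2006")
    // appends the line:  ?t <http://example.org/music#year> "2006" .
    this.StringBuilder.AppendFormat(tripleFormat, args);
}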

I mentioned earlier that the GetEnumerator method was important to IQueryable. An IQueryable is a class that can defer execution of its query till someone attempts to enumerate its results. Since that’s done using GetEnumerator the query must be performed in GetEnumerator. My implementation of GetEnumerator looks like this:

IEnumerator<T> IEnumerable<T>.GetEnumerator()
{
    if (result != null)
        return result.GetEnumerator();
    query = ConstructQuery();
    PrepareQueryAndConnection();
    PresentQuery(query);
    return result.GetEnumerator();
}

result is the List<TElement> variable where I cache the results for later use. What that means is that the query only gets run once. Next time the GetEnumerator gets called, result is returned directly. This reduces the cost of repeatedly enumerating the same query. Currently the methods ConstructQuery, PrepareQueryAndConnection, and PresentQuery are all fairly simple affairs that exist more as placeholders so that I can reuse much of this code for a LINQ to SPARQL implementation that is to follow.

As you’ve probably worked out, there is a huge amount of detail that has to be attended to, but the basic concepts are simple. The reason more people haven’t written LINQ query providers before now is simply the fact that there is no documentation about how to do it. When you try, though, you may find it easier than you thought.

There is a great deal more to do to LINQ to RDF before it is ready for production use, but as a proof of concept that semantic web technologies can be brought into the mainstream it serves well. The reason we use ORM systems such as LINQ to SQL is to help us overcome the impedance mismatch that exists between the object and relational domains. An equally large mismatch exists between the object and semantic domains. Tools like LINQ to RDF will have to overcome that mismatch in order to be used beyond basic domain models.

Is it really impossible to choose between LINQ and Stored Procedures?

For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. … The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers:
We must know,
We shall know.

It’s that time of the month again – when all of the evangelically inclined mavens of Readify gather round to have the traditional debate. Despite the fact that they’ve had similar debates for years, they tackle the arguments with gusto, trying to find a new angle of attack from which to sally forth in defence of their staunchly held positions. You may (assuming you never read the title of the post :) be wondering what it is that could inspire such fanatical and unswerving devotion. What is it that could polarise an otherwise completely rational group of individuals into opposing camps, each of which considers the other completely mad?

What is this Lilliputian debate? I’m sure you didn’t need to ask, considering it is symptomatic of the gaping wound in the side of modern software engineering. This flaw in software engineering is the elephant in the room that nobody talks about (although they talk an awful lot about the lack of space).

The traditional debate is, of course:

What’s the point of a database?

And I’m sure that there’s a lot I could say on the topic (there sure was yesterday) but the debate put me in a thoughtful mood. The elephant in the room, the gaping wound in the side of software engineering is just as simply stated:

How do we prove that a design is optimal?

That is the real reason we spend so much of our time rehearsing these architectural arguments, trying to win over the other side. Nobody gets evangelical about something they just know – they only evangelise about things they are not sure about. Most people don’t proclaim to the world that the sun will rise tomorrow. But like me, you may well devote a lot of bandwidth to the idea that the object domain is paramount, not the relational. As an object oriented specialist that is my central creed and highest article of faith. The traditional debate goes on because we just don’t have proof on either side. Both sides have thoroughly convincing arguments, and there is no decision procedure to choose between them.

So why don’t we just resolve it once and for all? The computer science and software engineering fraternity is probably the single largest focussed accumulation of IQ points gathered in the history of mankind. They all focus intensively on issues just like this. Surely it is not beyond them to answer the simple question of whether we should put our business logic into stored procedures or use an ORM product to dynamically generate SQL statements. My initial thought was “We Must Know, We Will Know” or words to that effect. There is nothing that can’t be solved given enough resolve and intelligence. If we have a will to do so, we could probably settle on a definitive way to describe an architecture so that we can decide what is best for a given problem.

Those of you who followed the link at the top of the post will have found references there to David Hilbert, and that should have given you enough of a clue to know that there’s a chance that my initial sentiment was probably a pipe dream. If you are still in the dark, I’m referring to Hilbert’s Entscheidungsproblem (or the Decision Problem in English) and I beg you to read Douglas Hofstadter’s magnificent Gödel, Escher, Bach – An eternal golden braid. This book is at the top of my all-time favourites list, and among the million interesting topics it covers, the decision problem is central.

The Decision Problem – a quick detour

One thing you’ll notice about the Entscheidungsproblem and Turing’s Halting Problem is that they are equivalent. They seem to be asking about different things, but at a deeper level the problems are the same. The decision problem asks whether there is a mechanical procedure to determine the truth of any mathematical statement. At the turn of the century mathematicians might have imagined some procedure that cranked through every derivation from the axioms of mathematical logic until it found a proof of the statement, returning true. The problem with that brute-force approach is that mathematics allows the continual complexification and simplification of statements – it is non-monotonic. Just because you have applied every combination of the construction rules to all of the axioms up to a given length, you can’t know whether there are new statements of the same length that could still be reached by the repeated application of growth and shrinkage rules. That means that even though you may think you have a definitive list of all the true statements of a given length, you may be wrong, so you can never answer false; you can only continue until you find a concrete proof or disproof.

Because of these non-monotonic derivation rules, you can never be sure that no answer from your procedure means an answer of false. You always have to wait and see. This is the equivalence between the Entscheidungsproblem and Alan Turing’s Halting Problem. If you knew your procedure would not halt, you could short-circuit the decision process and immediately answer false. If you knew that the procedure would halt, you could just let it run and produce whatever true/false answer it came up with. Either way, you would have a decision procedure. Unfortunately it’s not that easy, because the halting decision procedure has no overview of the whole of mathematics either, and can’t answer the halting question; ergo there is no decision procedure. Besides, Kurt Gödel proved that there were undecidable problems, so the quest for a decision procedure was doomed to fail. He showed that even if you came up with a more sophisticated procedure than the brute-force attack, you would still never get a decision procedure for all of mathematics.

The Architectural Decision Problem

What has this got to do with deciding on the relative merits of two software designs? Is the issue of deciding between two designs also equivalent to the decision problem? Is it a constraint-optimisation problem, where you could enumerate the critical factors, assign a rank to each, and then sum the scores for each design? That is exactly what I did in one of my recent posts, entitled “The great Domain Model Debate – Solved!” Of course the ‘Solved!‘ part was partly tongue-in-cheek – I just provided a decision procedure for readers to distinguish between the competing designs of domain models.

One of the criticisms levelled at my offering for this problem was that my weights and scores were too subjective. I maintained that although my heuristic was flawed, it held the key to solving these design issues because there was the hope that there are objective measures of the importance of design criteria for each design, and it was possible to quantify the efficacy of each approach. But I’m beginning to wonder whether that’s really true. Let’s consider the domain model approach for a moment to see how we could quantify those figures.

Imagine that we could enumerate all of the criteria that pertain to the problem. Each represents an aspect of the value that architects place on a good design. In my previous post I considered such things as complexity, density of data storage, performance, maintainability and so on. Obviously each of these varies in just how subjective it is. Complexity is a measure of how hard a design is to understand: one programmer may be totally at home with a design whereas another may be confused. But there are objective measures of complexity that we could use, and we could use complexity as an indicator of maintainability – the more complex a design is, the harder it will be to maintain.

This complexity measure would be more fundamental than any mere subjective measure, and would be tightly correlated with the subjective measure. Algorithmic complexity would be directly related to the degree of confusion a given developer would experience when first exposed to the design. Complexity affects our ability to remember the details of the design (as it is employed in a given context) and also our ability to mentally visualise the design and its uses. When we give a subjective measure of something like complexity, it may be due to the fact that we are looking at it from the wrong level. Yes, there is a subjective difference, but that is because of an objective difference that we are responding to.

It’s even possible to prove that such variables exist, so long as we are willing to agree that a subjective dislike that is completely whimsical is not worth incorporating into an assessment of a design’s worth. I’m thinking of knee-jerk reactions like ‘we never use that design here‘ or ‘I don’t want to use it because I heard it was no good‘. Such opinions, whilst strongly felt, are of no value, because they don’t pertain to the design per se but rather to a free-standing psychological state in the person who holds them. The design could still be optimal, and that wouldn’t stop them from having that opinion. Confusion, on the other hand, has its origin in some aspect of the design, and thus should be factored in.

For each subjective criterion that we currently use to judge a design, there must be a set of objective criteria that cause it. If there are none, then we can discount it – it contributes nothing to an objective decision procedure – it is just a prejudice. If there are objective criteria, then we can substitute all occurrences of the subjective criterion in the decision procedure with the set of objective criteria. If we continue this process, we will eventually be left with nothing but objective criteria. At that point are we in a position to choose between two designs?

Judging a good design

It still remains to be seen whether we can enumerate all of the objective criteria that account for our experiences with a design, and its performance in production. It also remains for us to work out ways to measure them, and weigh their relative importance over other criteria. We are still in danger of slipping into a world of subjective opinions over what is most important. We should be able to apply some rigour because we’re aiming at a stationary target. Every design is produced to fulfil a set of requirements. Provided those requirements are fulfilled we can assess the design solely in terms of the objective criteria. We can filter out all of the designs that are incapable of meeting the requirements – all the designs that are left are guaranteed to do the job, but some will be better than others. If that requires that we formally specify our designs and requirements then (for the purposes of this argument) so be it. All that matters is that we are able to guarantee that all remaining designs are fit to be used. All that distinguishes them are performance and other quality criteria that can be objectively measured.

Standard practice in software engineering is to reduce a problem to its component parts, and attempt to then compose the system from those components in a way that fulfils the requirements for the system. Clearly there are internal structures to a system, and those structures cannot necessarily be weighed in isolation. There is a context in which parts of a design make sense, and they can only be judged within that context. Often we judge our design patterns as though they were isolated systems on their own. That’s why people sometimes decide to use design patterns before they have even worked out if they are appropriate. The traditional debate is one where we judge the efficacy of a certain type of data access approach in isolation of the system it’s to be used in. I’ve seen salesmen for major software companies do the same thing – their marks have decided they are going to use the product before they’ve worked out why they will do so. I wonder whether the components of our architectural decision procedure can be composed in the same way that our components are.

In the context in which they’re to be used, will all sub-designs have a monotonic effect on the quality of the system? Could we represent the quality of our system as a sum of the scores of the various sub-designs, like this: (Q1 + Q2 + … + Qn)? That would assume that the quality of the system is the sum of the quality of its parts, which seems a bit naive to me – some sub-designs will work well in combination, others will limit the damage done by their neighbours, and some will exacerbate problems that would have lain dormant in their absence. How are we to represent the calculus of software quality? Perhaps the answer lies in the architecture itself. If you were to measure the quality of each unique path through the system, then you could treat each path as a sequence of operations with no choices or loops involved, and measure its quality in isolation. You could then sum the quality of each of these paths, weighted in favour of frequency of usage. That would eliminate all subjective bias, and the impact of each sub-design would be proportional to the centrality of its role within the system as a whole. In most systems data access plays a part in pretty much every path through a system, hence the disproportionate emphasis we place on it in the traditional debates.
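Stated as a formula (my own notation, purely illustrative), that path-weighted proposal amounts to something like the following, where P is the set of unique paths through the system, Q_p is the measured quality of path p, and f_p is the frequency with which path p is exercised:

Q_{\mathrm{system}} = \frac{\sum_{p \in P} f_p \, Q_p}{\sum_{p \in P} f_p}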

Scientific Software Design?

Can we work out what these criteria are? If we could measure every aspect of the system (data that gets created, stored, communicated, the complexity of that data etc) then we have the physical side of the picture – what we still lack is all of those thorny subjective measures that matter. Remember though that these are the subjective measures that can be converted into objective measures. Each of those measures can thus be added to the mix. What’s left? All of the criteria that we don’t know to ask about, and all of the physical measurements that we don’t know how to make, or don’t even know we should make. That’s the elephant in the room. You don’t know what you don’t know. And if you did, then it would pretty immediately fall to some kind of scientific enquiry. But will we be in the same situation as science and mathematics was at the dawn of the 20th Century? Will we, like Lord Kelvin, declare that everything of substance about software architecture is known and all the future holds for us is the task of filling in the gaps?

Are these unknown criteria like the unknown paths through a mathematical derivation? Are they the fatal flaw that unhinges any attempt to assess the quality of a design, or are they the features that turn software engineering into a weird amalgam of mathematics, physics and psychology? There will never be any way for us to say unequivocally that we have found all of the criteria that truly determine the quality of a design. Any criteria that we can think of we can objectify – but it’s the ones we can’t or don’t think of that will undermine our confidence in a design and doom us to the traditional debates. Here’s a new way to state Hilbert’s 10th Problem:

Is there a way to fully enumerate all of the criteria that determine the quality of a software design?

Or to put it another way

Will we know when we know enough to distinguish good designs from bad?

The spirit of the enlightenment is fading. That much is certain. The resurgence of religiosity in all parts of the world is a backward step. It pulls us away from that pioneering spirit that Kant called a maturing of the human spirit. Maturity is no longer needing authority figures to tell us what to think. He was talking about the grand program to roll back the stifling power of the church. In software design we still cling to the idea that there are authority figures that are infallible. When they proclaim a design as sound, then we use it without further analysis. Design patterns are our scriptures, and traditional wisdom the ultimate authority by which we judge our designs. I want to see the day when scientific method is routinely brought to bear on software designs. Only then will we have reached the state of maturity where we can judge each design on its objective merits. I wonder what the Readify Tech List will be like then?

AudioFiler – My Search is {over/just begun}*

* delete as appropriate

I was cruising around the ID3.org site looking for a .NET ID3 tag API when I came upon AudioFiler. AudioFiler is a great application, and one I’ve been searching for for years. Perhaps I should explain. I started creating and collecting MP3s back in about 1997, and since then my collection has grown a little. Back then there was a cool tool called TrackManager, by Nick de Jong. It was the perfect collection browser: it allowed filtering and multidimensional views, and it let you keep collections in multiple places (i.e. CD-ROMs) and then merge their contents to let you know where to find something you’re after. All in all it was a great application. It was ugly as hell, but that didn’t bother me much, because it started up Winamp when you wanted to play something.

The problem was that it started to get a bit flaky once your collection got above 500 songs (which happened pretty quickly), so I had to abandon it. Since then I have tried the media managers from a thousand different players, including iTunes, and found that they didn’t help when I tried to visualise the whole collection from different angles. At last AudioFiler provides a search engine comparable to TrackManager. It’s ugly, and it invokes Winamp, but I think it may be just what I need. Perhaps now I’ll rediscover all that stuff I know I’ve got but can never find. Anyone for some Riot Grrl?

A gaping hole in my education – Dimensional Databases

I love to be educated. I also love the fact that I can absorb new ideas continuously without ever sating my thirst for data. And where better to do so than Wikipedia, which has obviously been lavished with the attentions of a database expert in recent days? I have to admit that I have never heard of, let alone worked on, a dimensional database, so I am intrigued to know more.