Never leave your laptop unattended. If you do, your son will convert your code into something rather less compilable.
My son is a living embodiment of the second law of thermodynamics, and he is attracted to buttons and keyboards in the same way that a moth is attracted to a flame.
Thankfully, he doesn’t know how to hit the save button yet.
I should have brought the code up to date weeks ago – but other things got in the way. Still – all the unit tests are in the green, and the code has been minimally converted over to the new .NET 3.5 framework. I say ‘minimally’ because with the introduction of beta 2 there is now an IQueryProvider interface that acts as a dispenser for objects that support IQueryable. I suspect that with IQueryProvider there is now a canonical architecture recommended by the LINQ team. That will probably mean moving more responsibility into the RDF&lt;T&gt; class and away from the QuerySupertype. Time (and more documentation from MS) will tell.
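For the curious, the provider pattern in beta 2 looks roughly like this – a minimal sketch only, where the `SparqlQueryProvider` and `SparqlQuery<T>` names are hypothetical stand-ins, not the classes in this project; the interfaces themselves are the real .NET ones:

```csharp
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// IQueryProvider is the "dispenser" that hands out IQueryable<T> objects;
// each IQueryable<T> carries its expression tree plus a reference back to
// the provider that created it, so standard operators like Where() can
// route new queries back through the same provider.
class SparqlQueryProvider : IQueryProvider
{
    public IQueryable<T> CreateQuery<T>(Expression expression)
    {
        return new SparqlQuery<T>(this, expression);
    }

    public IQueryable CreateQuery(Expression expression)
    {
        Type elementType = expression.Type.GetGenericArguments()[0];
        return (IQueryable)Activator.CreateInstance(
            typeof(SparqlQuery<>).MakeGenericType(elementType), this, expression);
    }

    // Execute is where the expression tree would be translated into SPARQL
    // and run against a triple store; translation is omitted in this sketch.
    public T Execute<T>(Expression expression)
    {
        throw new NotImplementedException("expression-to-SPARQL translation goes here");
    }

    public object Execute(Expression expression)
    {
        throw new NotImplementedException("expression-to-SPARQL translation goes here");
    }
}

class SparqlQuery<T> : IQueryable<T>
{
    public SparqlQuery(IQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression ?? Expression.Constant(this);
    }

    public Type ElementType { get { return typeof(T); } }
    public Expression Expression { get; private set; }
    public IQueryProvider Provider { get; private set; }

    public IEnumerator<T> GetEnumerator()
    {
        // Enumerating the query triggers execution via the provider.
        return Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

The point of the split is that the queryable object is dumb – it just holds an expression tree – while all the interesting work (translation, execution) lives in the provider.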
There are several new expression types that are not yet supported (such as the coalescing operator on nullable types) – it remains to be seen whether they are supportable in SPARQL at all. Further research required. The solution doesn’t currently support WiX – I’m not sure whether WiX 3 will work with Visual Studio 2008 yet. Again, more research required. What that means is that there will not be any MSI releases produced till WiX supports the latest drop of VS.NET.
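To make the coalescing case concrete: the operator surfaces in a LINQ expression tree as its own node type, which a query translator has to recognise explicitly before it can emit anything for it. A small sketch (the `withDefault` name is purely illustrative):

```csharp
using System;
using System.Linq.Expressions;

// The null-coalescing operator appears in an expression tree as a
// distinct node type (ExpressionType.Coalesce), so the translator must
// handle that node before any SPARQL equivalent can be produced.
Expression<Func<int?, int>> withDefault = age => age ?? 0;
var body = (BinaryExpression)withDefault.Body;
Console.WriteLine(body.NodeType);  // prints "Coalesce"
```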
Enjoy – and don’t forget to give us plenty of feedback on your experiences.
To go to the Google Code project, click here.
It’s clear that the unofficial policy of both Microsoft and Google is “we don’t believe in the semantic web”. It may not be clear why. The answer is unsurprising when you give it some thought, though: Big Business. Semantic search holds out the hope of users being able to compose meaningful queries and get relevant results. The price is that someone somewhere has to write down the answers in a meaningful way.
Meaning is a tricky word to play with, but here I mean complex structured data designed to adequately describe some domain. In other words – someone has to write and populate an ontology for each domain that the users want to ask questions about. It’s painstaking, specialized work that not just anyone can do. Not even a computer scientist – whilst they may have the required analysis and design skills, they don’t have the domain knowledge or the data. Hence the pace of forward progress has been slow as those with the knowledge are unaware of the value of an ontology or the methods to produce and exploit it.
Compare this to the modus operandi of the big search companies. Without fail they all use some variant on full-text indexing. It should be fairly clear why as well – they require no understanding of the domain of a document, nor do their users get any guarantees of relevance in the result sets. Users aren’t even particularly bothered when they get spurious results. It just goes with the territory.
Companies that hope or expect to maintain a monopoly in the search space have to use a mechanism that provides broad coverage across any domain, even if that breadth is at the expense of accuracy or meaningfulness. Clearly, the semantic web and monolithic search engines are incompatible. Not surprising then that for the likes of Microsoft and Google the semantic web is not on their radar. They can’t do it. They haven’t got the skills, time, money or incentive to do it.
If the semantic web is to get much of a toehold in the world of search engines it is going to have to be as a confederation of small search engines produced by specialized groups that are formed and run by domain experts. In a few short years Wikipedia has come to rival the likes of Encyclopedia Britannica. The value of its crowd-sourced content is obvious. This amazing resource came about through the distributed efforts of thousands across the web, with no thought of profit. Likewise, it will be a democratized, decentralized, grass-roots movement that will yield up the meaningful information we all need to get a better web experience.