
Quantum Reasoners Hold Key to Future Web

Last year, a company called D-Wave Systems announced their quantum computer (the ‘Orion’) – another milestone on the road to practical quantum computing. Their controversial claims seem worthy in their own right, but they are particularly significant to the semantic web (SW) community: their quantum computer solved problems akin to those targeted by Grover’s algorithm, which speeds up searches of unstructured databases.

Semantic web databases are not (completely) disorderly, and there are many ways to optimize the search for triples matching a graph pattern. What strikes me is that the larger the triple store, the more compelling the case for using some kind of quantum search algorithm to find matches. D-Wave are currently trialling 128-qubit processors, and they claim their systems can scale, so I (as a layman) can see no reason why such computers couldn’t be used to improve the performance of queries over massive triple stores.
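
As a back-of-the-envelope illustration (my own sketch, not anything D-Wave has published): Grover’s algorithm needs on the order of √N oracle queries to find a match among N unsorted items, against roughly N/2 expected probes for a classical linear scan, so the gap widens as the store grows.

```python
import math

# Rough comparison of expected lookups: classical linear scan vs. Grover search.
# Purely illustrative arithmetic -- real triple stores are indexed, and real
# quantum hardware has overheads that this ignores.
for n_triples in (10**6, 10**9, 10**12):
    classical = n_triples / 2                      # expected probes for a linear scan
    grover = (math.pi / 4) * math.sqrt(n_triples)  # roughly the optimal number of Grover iterations
    print(f"{n_triples:>16,d} triples: classical ~{classical:,.0f} lookups, "
          f"Grover ~{grover:,.0f} ({classical / grover:,.0f}x fewer)")
```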

What I wonder is:

  1. what kind of indexing schemes can be used to impose structure on the triples in a store?
  2. how can one adapt a B-tree to index each element of a triple rather than just a single primary key? Three separate indexes seem extravagant (a toy sketch of one common answer follows this list).
  3. are there quantum algorithms that can beat the best of these schemes?
  4. is there a place for quantum superposition in a graph matching algorithm (to simultaneously find matching triples, then cancel out any that don’t match all of the basic graph patterns)?
  5. if DWave’s machines could solve NP-Complete problems, does that mean that we would then just use OWL-Full?
  6. would the speed-ups then be great enough to consider linking everyday app data to large scale-upper ontologies?
  7. is a contradiction in a ‘quantum reasoner’ (i.e. a reasoner that uses a quantum search engine) something that can never occur because it simply cancels out and never appears in the returned triples? Would any returned conclusion be necessarily true (relative to the axioms of the ontology)?
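
For what it’s worth, one common answer to questions 1 and 2 in existing triple stores is to keep several sorted permutations of the triples (SPO, POS, OSP) so that any pattern with a bound leading term becomes a range scan. Here is a toy sketch of my own (not any particular store’s implementation):

```python
from bisect import bisect_left

class ToyTripleStore:
    """Keeps three sorted permutations so that any bound leading term becomes a range scan."""

    def __init__(self, triples):
        self.spo = sorted(triples)                           # (subject, predicate, object)
        self.pos = sorted((p, o, s) for s, p, o in triples)  # (predicate, object, subject)
        self.osp = sorted((o, s, p) for s, p, o in triples)  # (object, subject, predicate)

    @staticmethod
    def _scan(index, prefix):
        # Binary search to the first entry >= prefix, then walk forward while it still matches.
        i = bisect_left(index, prefix)
        while i < len(index) and index[i][:len(prefix)] == prefix:
            yield index[i]
            i += 1

    def match(self, s=None, p=None, o=None):
        """Yield triples matching the pattern; None acts as a wildcard."""
        pattern = (s, p, o)
        if s is not None:                       # use SPO, prefix on s (and p if bound)
            prefix = (s,) if p is None else (s, p)
            candidates = self._scan(self.spo, prefix)
        elif p is not None:                     # use POS, prefix on p (and o if bound)
            prefix = (p,) if o is None else (p, o)
            candidates = ((s_, p_, o_) for p_, o_, s_ in self._scan(self.pos, prefix))
        elif o is not None:                     # use OSP, prefix on o
            candidates = ((s_, p_, o_) for o_, s_, p_ in self._scan(self.osp, (o,)))
        else:
            candidates = iter(self.spo)
        # Post-filter whatever the prefix scan couldn't pin down (e.g. s and o bound, p free).
        return (t for t in candidates
                if all(b is None or b == v for b, v in zip(pattern, t)))

store = ToyTripleStore([("alice", "knows", "bob"), ("bob", "age", "42")])
print(list(store.match(p="knows")))             # [('alice', 'knows', 'bob')]
```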

Any thoughts?

UPDATE
D-Wave are now working with Google to help improve some of Google’s machine learning algorithms. I wonder whether there will be other research into the practicality of using D-Wave quantum computing systems in conjunction with inference engines? This could, of course, open up whole new vistas of services that could be provided by Google (or their competitors). Either way, it gives me a warm feeling to know that every time I do a search, I’m getting the results from a quantum computer (no matter how indirectly). Nice.

Object Modeling is Vocabulary Design

Andrew Cantos raised some interesting philosophical points the other day in reply to my partially tongue-in-cheek post The Great Domain Model Debate – Solved. As ever, my short reply turned into a blog post, and this is it. Andrew’s point was that there is a metaphorical link between objects in a domain model and elementary particles in some physical system. The ability of these elements to take part in the wider system is often a function of their sheer simplicity rather than their being loaded with complex properties. He used oxygen as an example of something that can take part in many reactions, but which does not define or characterize those reactions. I extended the metaphor to observe that the same holds true when comparing Anemic Domain Models with their Rich brethren.

I like his metaphor. The one I tend to use when I think about this issue is drawn from human languages. Words are poor carriers of meaning on their own, in the same sense that rich objects are poor carriers of business functionality. A word’s specific value emerges within the dynamic context of a sentence; that is, its exact meaning and value can only be resolved when it is composed with others in a richer context.

Likewise, the same happens in an OO system – the analogue of the ‘sentence’ here is the thread of execution, or the transaction script, or what have you. They give meaning to the data carried by the anemic object; without that context the object is worthless. What an RDM seeks to do is carry with the object the full set of possible contexts, while also restricting that set of contexts to something manageable.
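
To make the contrast concrete, here is a minimal sketch of my own (not code from the original debate): the anemic object is pure data whose meaning comes from whatever script drives it, while the rich version carries its permitted ‘contexts’ around as methods and invariants.

```python
from dataclasses import dataclass

# Anemic style: the object is pure data; meaning lives in the transaction script.
@dataclass
class AnemicAccount:
    balance: float

def transfer(source: AnemicAccount, target: AnemicAccount, amount: float) -> None:
    """Transaction script -- the 'sentence' that gives the data its meaning."""
    if amount <= 0 or source.balance < amount:
        raise ValueError("invalid transfer")
    source.balance -= amount
    target.balance += amount

# Rich style: the object restricts how it may be used, carrying its contexts with it.
class RichAccount:
    def __init__(self, balance: float) -> None:
        self._balance = balance

    @property
    def balance(self) -> float:
        return self._balance

    def transfer_to(self, target: "RichAccount", amount: float) -> None:
        if amount <= 0 or self._balance < amount:
            raise ValueError("invalid transfer")
        self._balance -= amount
        target._balance += amount
```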

I can sympathize with that aim – ADMs do little to guarantee that they get used correctly; RDMs do. However, I think that, as with a science, an enterprise system needs a commonly agreed shared vocabulary. With that, a greater richness of communication becomes possible. If, however, you were restricted in the ways you could use these words, you might have greater precision, but communication would become a chore, and you probably wouldn’t bother.

You can extend this whole ‘enterprise vocabulary’ metaphor even further. If you look at a typical, poorly coded or governed system, you will often see a multitude of little DTOs, all of which contain roughly the same data, but only those fields needed to service a given page or screen. This is analogous to an immature science in which many words are freighted with slightly different meanings, and confusion results when the speaker intends something different from what the listener understands. So too in software, where the lack of a commonly agreed object model adds confusion to the development process and increases the likelihood of errors creeping into a system.

What does this imply? It seems to me that the right approach (assuming the metaphor holds true) is that there ought to be a well-defined, definitive and widely shared object model within an organization. All systems should use it, and the organization should mint new classes with great care and forethought. This of course ties in with the efforts of various groups in the Semantic Web area, who are attempting to do just that in domains as widely flung as life sciences and disaster relief. The fact that these efforts are inter-organizational means there will be less tolerance for ill-advised ‘pragmatism’.

Which can only be a good thing in the long run. Right?

Wireless Power Transmission

I spent years bemoaning the fact that the laws of physics didn’t allow the wireless transmission of power. Now it seems I bemoaned prematurely. Researchers at MIT have found a way to use inductive coupling to power a 60W light bulb (think of a power rating similar to a laptop’s) from a distance of up to 2 metres.

I know it doesn’t sound like much, but as I look at my desk I can’t help thinking that a short-range power transmitter would help. I’m sure it beats quartz crystals as a source of magic moonbeams as well.

[image: cables]

An Ominous Blog Post

Normally I have a very optimistic outlook, especially when it comes to technological breakthroughs. But this morning I was given pause for thought. MAKE magazine carried a news article today about a highly accurate DNA replicator for $10. I am fully convinced that such breakthroughs can be used to tackle the issues of world poverty, but I’ve just finished reading Tomorrow’s War by David Shukman. It was written over 10 years ago, and it was gloomy even then about our chances of controlling the proliferation of expertise in the production of WMD – and that was before 9/11.

At this rate, the techniques and the resources for biological weapons development will be freely available, but the skills needed to combat them will not. I just think of the irresponsibility of computer hackers and virus writers – who often wreak havoc without any thought of the costs or consequences. If such power can be unleashed in the real world, then we are in way more danger than we ever were during the cold war.

That is a doomsday scenario, if you ask me.

Science Headlines of the Day

Some of the low and high points of today’s science headlines:

Smoking and Caffeine inversely associated with Parkinson’s disease

Arsenic in chicken feed may pose health risk to humans

Guns in homes strongly associated with higher rates of suicide

Good behaviour, Religiousness may be genetic

Why small dogs are small

Study links propensity toward worry to early death

No, Einstein *wasn’t* religious

Segment of a “Quantum Repeater” demonstrated

Solar for 1/10th the cost: Use Haemoglobin

A Plastic wrapper today could be fuel tomorrow

The cost of anti GM-food “green” scaremongering

Belief in Reincarnation Tied to Memory Errors

Patient has wrong testicle removed

Christians being persecuted in Oregon

An imminent Crisis in Astronomy

Diplomatically pissing into the well of truth

Zealots gain the upper hand

The surge of religiosity in the US continues unabated. A survey from NEWSWEEK is the current cause of distress – apparently over 90% of Americans profess to be religious, and almost half reject the findings of science. Alarmingly, 35% of college graduates (the most educated of Americans) accept the bible as fact and believe that mankind sprang mysteriously into existence less than 10,000 years ago.

Fewer than one in ten Americans was willing to own up to being an atheist – I suspect that, as at other times when religiosity was in the ascendant, like the dark ages, there is a danger of civic censure and communal intolerance of the atheist. It is already an established fact that government office depends on public displays of piety, and government policy is also dictated by various religious or ethnically religious groups.

It fills me with dread to think of the single largest military power on the planet being held under the sway of religious fundamentalists. Of course, for those of us in the west, Christianity is (laughably) regarded as the religion of progress, multicultural tolerance and democracy. Those from christian cultures feel threatened by the firebrand religious fanatics of Islam. Those of us who are atheists regard any kind of christian ascendancy as equally undesirable.

Even the spell checker on this god-damned word-processor wants to capitalise the word ‘christian’.

What can we do to stem the tide of this religious anti-renaissance?

Is it really impossible to choose between LINQ and Stored Procedures?

For the mathematician there is no Ignorabimus, and, in my opinion, not at all for natural science either. … The true reason why [no one] has succeeded in finding an unsolvable problem is, in my opinion, that there is no unsolvable problem. In contrast to the foolish Ignorabimus, our credo avers:
We must know,
We shall know.

It’s that time of the month again – when all of the evangelically inclined mavens of Readify gather round to have the traditional debate. Despite the fact that they’ve had similar debates for years, they tackle the arguments with gusto, trying to find a new angle of attack from which to sally forth in defence of their entrenched positions. You may (assuming you never read the title of the post :) be wondering what it is that could inspire such fanatical and unswerving devotion. What is it that could polarise an otherwise completely rational group of individuals into opposing camps, each convinced the other is completely mad?

What is this Lilliputian debate? I’m sure you didn’t need to ask, considering it is symptomatic of the gaping wound in the side of modern software engineering. This flaw in software engineering is the elephant in the room that nobody talks about (although they talk an awful lot about the lack of space).

The traditional debate is, of course:

What’s the point of a database?

And I’m sure that there’s a lot I could say on the topic (there sure was yesterday) but the debate put me in a thoughtful mood. The elephant in the room, the gaping wound in the side of software engineering is just as simply stated:

How do we prove that a design is optimal?

That is the real reason we spend so much of our time rehearsing these architectural arguments, trying to win over the other side. Nobody gets evangelical about something they just know – they only evangelise about things they are not sure about. Most people don’t proclaim to the world that the sun will rise tomorrow. But like me, you may well devote a lot of bandwidth to the idea that the object domain is paramount, not the relational. As an object oriented specialist that is my central creed and highest article of faith. The traditional debate goes on because we just don’t have proof on either side. Both sides have thoroughly convincing arguments, and there is no decision procedure to choose between them.

So why don’t we just resolve it once and for all? The computer science and software engineering fraternity is probably the single largest focussed accumulation of IQ points gathered in the history of mankind. They all focus intensively on issues just like this. Surely it is not beyond them to answer the simple question of whether we should put our business logic into stored procedures or use an ORM product to dynamically generate SQL statements. My initial thought was “We Must Know, We Will Know” or words to that effect. There is nothing that can’t be solved given enough resolve and intelligence. If we have a will to do so, we could probably settle on a definitive way to describe an architecture so that we can decide what is best for a given problem.

Those of you who followed the link at the top of the post will have found references there to David Hilbert, and that should have given you enough of a clue to suspect that my initial sentiment was probably a pipe dream. If you are still in the dark, I’m referring to Hilbert’s Entscheidungsproblem (the Decision Problem in English), and I beg you to read Douglas Hofstadter’s magnificent Gödel, Escher, Bach: An Eternal Golden Braid. This book is at the top of my all-time favourites list, and among the million interesting topics it covers, the decision problem is central.

The Decision Problem – a quick detour

One thing you’ll notice about the Entscheidungsproblem and Turing’s Halting Problem is that they are equivalent. They seem to be asking about different things, but at a deeper level the problems are the same. The decision problem asks whether there is a mechanical procedure to determine the truth of any mathematical statement. At the turn of the twentieth century, one might have imagined a procedure that cranked through every derivation from the axioms of mathematical logic until it found a proof of the statement, then returned true. The problem with that brute-force approach is that mathematics allows continual complexification and simplification of statements – it is non-monotonic. Just because you have applied every combination of the construction rules to all of the axioms up to a given length, you can’t know whether there are further statements of the same length that could be reached by repeated application of growth and shrinkage rules and that aren’t already in your list. So even though you may think you have a definitive list of all the true statements of a given length, you may be wrong, and you can never return false – you can only keep going until you find a concrete proof or disproof.

Because of these non-monotonic derivation rules, you can never be sure that no answer from your procedure means an answer of false; you always have to wait and see. This is the equivalence between the Entscheidungsproblem and Alan Turing’s Halting Problem. If you knew your procedure would not halt, you could short-circuit the decision process and immediately answer false. If you knew it would halt, you could just let it run and take whatever true/false answer it came up with – either way, you would have a decision procedure. Unfortunately it’s not that easy, because the halting decision procedure has no overview of the whole of mathematics either, and can’t answer the halting question. Ergo there is no decision procedure either. Besides, Kurt Gödel proved that there were undecidable problems, so the quest for a decision procedure was doomed to fail. He showed that even if you came up with a more sophisticated procedure than the brute-force attack, you would still never get a decision procedure for all of mathematics.
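
The standard way to see why no general halting oracle can exist is the diagonal argument. Here it is as a Python sketch (illustrative only; the halts() function is hypothetical, which is the whole point):

```python
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts.
    The argument below shows that no such function can actually be written."""
    raise NotImplementedError

def contrary(program):
    # Do the opposite of whatever the oracle predicts about program run on itself.
    if halts(program, program):
        while True:   # the oracle says it halts, so loop forever
            pass
    return            # the oracle says it loops forever, so halt immediately

# Now consider contrary(contrary):
#   * if halts(contrary, contrary) returns True, contrary(contrary) loops forever;
#   * if it returns False, contrary(contrary) halts.
# Either way the oracle is wrong, so there is no general halting decision procedure --
# and with it goes any hope of a mechanical decision procedure for all of mathematics.
```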

The Architectural Decision Problem

What has this got to do with deciding on the relative merits of two software designs? Is the issue of deciding between two designs also equivalent to the decision problem? Is it a constraint optimisation problem? You could enumerate the critical factors, assign a weight to each, and then sum the scores for each design. That is exactly what I did in one of my recent posts, entitled “The Great Domain Model Debate – Solved!” Of course the ‘Solved!’ part was partly tongue-in-cheek – I just provided a decision procedure for readers to distinguish between the competing designs of domain models.
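
That heuristic is easy to state as code. Here is a minimal sketch of the kind of weighted sum I had in mind (the criteria, weights and scores below are placeholders, not the figures from that post):

```python
# Placeholder criterion weights -- a higher weight means the criterion matters more to us.
WEIGHTS = {"simplicity": 3, "performance": 2, "maintainability": 3, "storage_density": 1}

# Scores (1-5) that each candidate design earns per criterion; illustrative numbers only.
designs = {
    "rich_domain_model":   {"simplicity": 2, "performance": 3, "maintainability": 4, "storage_density": 3},
    "anemic_domain_model": {"simplicity": 4, "performance": 4, "maintainability": 3, "storage_density": 3},
}

def score(criterion_scores: dict) -> int:
    """Weighted sum of criterion scores -- the 'decision procedure' in miniature."""
    return sum(WEIGHTS[c] * s for c, s in criterion_scores.items())

for name, criteria in designs.items():
    print(name, score(criteria))
```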

One of the criticisms levelled at my offering for this problem was that my weights and scores were too subjective. I maintained that although my heuristic was flawed, it held the key to solving these design issues, because there was hope that there are objective measures of the importance of the design criteria, and that it is possible to quantify the efficacy of each approach. But I’m beginning to wonder whether that’s really true. Let’s consider the domain model approach for a moment to see how we could quantify those figures.

Imagine that we could enumerate all of the criteria that pertain to the problem. Each represents an aspect of the value that architects place in a good design. In my previous post I considered such things as complexity, density of data storage, performance, maintainability, etc. Obviously each of these figures varies in just how subjective it is. Complexity is a measure of how hard a design is to understand: one programmer may be totally at home with a design whereas another may be confused. But there are objective measures of complexity that we could use, and such a measure could serve as an indicator of maintainability – the more complex a design, the harder it is to maintain.

This complexity measure would be more fundamental than any mere subjective measure, and would be tightly correlated with the subjective measure. Algorithmic complexity would be directly related to the degree of confusion a given developer would experience when first exposed to the design. Complexity affects our ability to remember the details of the design (as it is employed in a given context) and also our ability to mentally visualise the design and its uses. When we give a subjective measure of something like complexity, it may be due to the fact that we are looking at it from the wrong level. Yes, there is a subjective difference, but that is because of an objective difference that we are responding to.
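
As an example of the kind of objective proxy I mean (my own choice of metric for illustration, not a claim that it is the right one), something like a cyclomatic complexity count can be computed mechanically from the code itself:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Crude cyclomatic complexity: one plus the number of decision points."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.IfExp,
                      ast.ExceptHandler, ast.BoolOp, ast.comprehension)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))

print(cyclomatic_complexity("""
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2 == 0 and x > 10:
            return "big even"
    return "other"
"""))  # the score grows as the control flow gets harder to follow
```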

It’s even possible to prove that such variables exist, so long as we are willing to agree that a subjective dislike that is completely whimsical is not worth incorporating into an assessment of a design’s worth. I’m thinking of knee-jerk reactions like ‘we never use that design here’ or ‘I don’t want to use it because I heard it was no good’. Such opinions, whilst strongly felt, are of no value, because they don’t pertain to the design per se but rather to a free-standing psychological state in the person who holds them. The design could still be optimal, and that wouldn’t stop them from having the opinion. Confusion, on the other hand, has its origin in some aspect of the design, and thus should be factored in.

For each subjective criterion that we currently use to judge a design, there must be a set of objective criteria that cause it. If there are none, then we can discount it – it contributes nothing to an objective decision procedure – it is just a prejudice. If there are objective criteria, then we can substitute all occurrences of the subjective criterion in the decision procedure with the set of objective criteria. If we continue this process, we will eventually be left with nothing but objective criteria. At that point are we in a position to choose between two designs?

Judging a good design

It still remains to be seen whether we can enumerate all of the objective criteria that account for our experiences with a design, and its performance in production. It also remains for us to work out ways to measure them, and weigh their relative importance over other criteria. We are still in danger of slipping into a world of subjective opinions over what is most important. We should be able to apply some rigour because we’re aiming at a stationary target. Every design is produced to fulfil a set of requirements. Provided those requirements are fulfilled we can assess the design solely in terms of the objective criteria. We can filter out all of the designs that are incapable of meeting the requirements – all the designs that are left are guaranteed to do the job, but some will be better than others. If that requires that we formally specify our designs and requirements then (for the purposes of this argument) so be it. All that matters is that we are able to guarantee that all remaining designs are fit to be used. All that distinguishes them are performance and other quality criteria that can be objectively measured.

Standard practice in software engineering is to reduce a problem to its component parts, and attempt to then compose the system from those components in a way that fulfils the requirements for the system. Clearly there are internal structures to a system, and those structures cannot necessarily be weighed in isolation. There is a context in which parts of a design make sense, and they can only be judged within that context. Often we judge our design patterns as though they were isolated systems on their own. That’s why people sometimes decide to use design patterns before they have even worked out if they are appropriate. The traditional debate is one where we judge the efficacy of a certain type of data access approach in isolation of the system it’s to be used in. I’ve seen salesmen for major software companies do the same thing – their marks have decided they are going to use the product before they’ve worked out why they will do so. I wonder whether the components of our architectural decision procedure can be composed in the same way that our components are.

In the context in which they’re used, will all sub-designs have a monotonic effect on the quality of the system? Could we represent the quality of our system as a sum of scores of the various sub-designs, like this: (Q1 + Q2 + … + Qn)? That would assume the quality of the system is the sum of the quality of its parts, which seems a bit naive to me – some sub-designs will work well in combination, others will limit the damage done by their neighbours, and some will exacerbate problems that would have lain dormant in their absence. How are we to represent the calculus of software quality? Perhaps the answer lies in the architecture itself. If you were to measure the quality of each unique path through the system, you would have a way to score that path as though it were a sequence of operations with no choices or loops involved. You could then sum the quality of each of these paths, weighted by frequency of use. That would eliminate subjective bias, and the impact of each sub-design would be proportional to the centrality of its role within the system as a whole. In most systems data access plays a part in pretty much every path through the system, hence the disproportionate emphasis we place on it in the traditional debates.
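
Here is that path-weighted calculus written down as a sketch (my own formulation of the idea, with made-up numbers): each execution path scores according to the sub-designs it touches, and the system score weights each path by how often it is exercised.

```python
# Quality score (0-1) for each sub-design; illustrative values only.
subdesign_quality = {"ui": 0.8, "business_rules": 0.7, "data_access": 0.5, "logging": 0.9}

# Each unique path through the system: the sub-designs it exercises and its relative usage frequency.
paths = [
    {"subdesigns": ["ui", "business_rules", "data_access"], "frequency": 0.70},
    {"subdesigns": ["ui", "data_access", "logging"],        "frequency": 0.25},
    {"subdesigns": ["ui", "logging"],                        "frequency": 0.05},
]

def path_quality(subdesigns: list) -> float:
    # A path is a straight-line sequence, so score it as the mean of its parts.
    return sum(subdesign_quality[s] for s in subdesigns) / len(subdesigns)

system_quality = sum(p["frequency"] * path_quality(p["subdesigns"]) for p in paths)
print(f"{system_quality:.3f}")  # data access drags the score down because it sits on the busiest paths
```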

Scientific Software Design?

Can we work out what these criteria are? If we could measure every aspect of the system (the data that gets created, stored and communicated, the complexity of that data, etc.) then we have the physical side of the picture – what we still lack are all of those thorny subjective measures that matter. Remember, though, that these are the subjective measures that can be converted into objective measures, and each of them can be added to the mix. What’s left? All of the criteria that we don’t know to ask about, and all of the physical measurements that we don’t know how to make, or don’t even know we should make. That’s the elephant in the room: you don’t know what you don’t know. And if you did, then it would pretty quickly fall to some kind of scientific enquiry. But will we be in the same situation as science and mathematics were at the dawn of the 20th century? Will we, like Lord Kelvin, declare that everything of substance about software architecture is known, and that all the future holds for us is the task of filling in the gaps?

Are these unknown criteria like the unknown paths through a mathematical derivation? Are they the fatal flaw that unhinges any attempt to assess the quality of a design, or are they the features that turn software engineering into a weird amalgam of mathematics, physics and psychology? There will never be any way for us to unequivocally say that we have found all of the criteria that truly determine the quality of a design. Any criterion we can think of we can objectify – it’s the ones we can’t or don’t think of that will undermine our confidence in a design and doom us to the traditional debates. Here’s a software engineer’s restatement of Hilbert’s decision problem:

Is there a way to fully enumerate all of the criteria that determine the quality of a software design?

Or to put it another way

Will we know when we know enough to distinguish good designs from bad?

The spirit of the enlightenment is fading. That much is certain. The resurgence of religiosity in all parts of the world is a backward step. It pulls us away from the pioneering spirit that Kant called a maturing of the human spirit – maturity being no longer needing authority figures to tell us what to think. He was talking about the grand program to roll back the stifling power of the church. In software design we still cling to the idea that there are authority figures who are infallible: when they proclaim a design as sound, we use it without further analysis. Design patterns are our scriptures, and traditional wisdom the ultimate authority by which we judge our designs. I want to see the day when the scientific method is routinely brought to bear on software designs. Only then will we have reached the state of maturity where we can judge each design on its objective merits. I wonder what the Readify Tech List will be like then?

From KurzweilAI – First General Purpose Quantum Computer

D-Wave Systems, Inc. plans to demonstrate a technological first on Feb. 13: an end-to-end quantum computing system powered by a 16-qubit quantum processor, running two commercial applications, live. This is the core of a new quantum computer to be unveiled by D-Wave Systems, says Steve Jurvetson, Managing Director of Draper Fisher Jurvetson,…
http://www.kurzweilai.net/email/newsRedirect.html?newsID=6385&m=28179

A 16-qubit processor. Will it be demonstrating the ‘commercial application’ MS-DOS?

A Cure for Cancer

From KurzweilAI. If this isn’t worth blogging about, then nothing is. Could this be the most historic announcement ever to arrive in my inbox?

Cheap, safe drug kills most cancers

NewScientist.com news service Jan. 20, 2007

University of Alberta scientists have tested dichloroacetate (DCA) on human cells cultured outside the body and found that it killed lung, breast and brain cancer cells, but not healthy cells….
http://www.kurzweilai.net/email/newsRedirect.html?newsID=6368&m=28179

Google Space Imagery Coup – From KurzweilAI

NASA Ames Research Center has announced a collaboration with Google to make available the gigaquads of data that they have gathered over the years. Hopefully this will include data from lunar expeditions and some very interesting satellite imagery. I don’t know whether it will extend to data from the likes of the Hubble space telescope, but I hope so. One thing I am sure about: it will be a priceless resource for professional and amateur astronomers alike.

I wonder who else Google is talking to? The Human Genome Project? The Visible Human Project? Interpol? Imagine if they made it possible for researchers to publish their data sets when they publish a paper… Maybe in future, readers of scientific papers can draw their own conclusions about the validity of results by running their own analysis of the results. It might improve the quality of research if scientists knew that their results would come under greater scrutiny?

Read the article at KurzweilAI: http://www.kurzweilai.net/email/newsRedirect.html?newsID=6195&m=28179