Month: June 2005

Are we entering a dark age, or are we already in it?

This from Eurekalert:
You may think that with faster internet connectivity, internet phone calls and iPods, that we're living in a technological nirvana. But according to a new analysis we are fast approaching a new dark age. The results show that the number of technological breakthroughs and patents peaked a century ago and have been falling steadily ever since. But this is a controversial view not held by most futurologists.

The reason an observation like this is controversial is that futurologists take a determinedly optimistic view of the future. I suspect this is a survey of American patent applications. There is a subjective increase in the wealth and comfort of Americans that leads them to think they must be living in a golden age. But wasn't Rome similar prior to its collapse? The Romans had become decadent; they suffered from internal rot and cultural decline. They embarked on fruitless foreign adventures as a means of distracting an overlarge and underused military. The recent resource wars in the Middle East seem aimed at distracting the world with a sleight of hand, allowing America to parasitize Iraq without feeling morally compromised.

If we truly are in a worldwide decline in creativity, what does that mean? Victorian scientists used to predict that all of the major discoveries had been made, and that the future would be a period of filling in the gaps and creating a prosperous age of automation. Were they right? Our explanations of the fine detail of what goes on are a little more precise, but really we have just been adding decimal places to the accuracy of our picture. The whole of twentieth-century physics has involved creating pictures of the world that even their inventors didn't understand or trust! Einstein made significant advances in quantum physics in an attempt to falsify it on aesthetic grounds!

Maybe the lull is exemplified by the history of Artificial Intelligence? Artificial Intelligence grew out of the availability of computing hardware (called "giant brains" at the time) in postwar America, and the theoretical advances that had been made prewar by the likes of Turing and von Neumann. Both took a mechanistic view of the brain: that it was, to all intents and purposes, a "computer made of meat". There was great optimism that the algorithms of the brain would quickly be understood, and that a means to emulate the brain would be found in "10 or 20 years". Minsky and others were finding ways to simulate neurons in hardware. All in all, it seemed that AI would flourish in the 70s, providing added impetus to the space race and mankind's transcendence.

Researchers found that emulating human perceptual capabilities was actually much harder than performing the sorts of tasks that humans find hard (like maths and logic). Philosophical problems arose over our very definition of intelligence and consciousness (a workable definition always seemed imminent). The whole effort became mired in attempts to work out what it was that they were really after. The over-optimistic forecasts came back to haunt the researchers, and the whole enterprise was scaled back to a peripheral research activity in most universities. The high country was abandoned in favour of vocational education – computing for profit rather than fun.

Maybe we don't seem to be advancing as fast because we are now trying to solve the truly intractable problems of science – the ones that require a fundamental change in our understanding of the world, or the brain, or whatever. Maybe we can't solve these problems because the skills they require are not yet in our conceptual repertoire?

A moment of nostalgia

I was perusing my files, and came across this picture that I took from the harbour when Kerry and I were in Oslo. I wish I was back there right now, rather than sitting here pounding out stored procedures and poorly conceived business logic components.

Grammatical ponderings

In my notation, a simple grammatical structure is represented as a triple of the form [subject | action | object], with elements of the triple annotated with glyphs to represent tense, belief, etc.

One of the elements that has never satisfied me is the handling of constructions that are, in effect, three-way sentences, such as “john threw the ball to tom”,
which in the notation is represented as [john | throw_p | <to, tom, ball>]. It could of course be written as two statements:

[john | throw_p | ball]
[ball.target = tom]

But this is a little long-winded, so what I need is a way to concisely represent such a common sentence structure using N.

Any ideas? How about [john | throw_p | [ball.target = tom]]?
It still seems to me that we don’t break up the meaning of the sentence in that way when we use the three-way construction, so what is going on in our heads? Is there a third party along with the subject and object?
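
To make the shape of the problem concrete, here's a minimal sketch of the triple as a data structure – in C#, since that's the language I spend my days in. The type names are all my own invention for the example; N itself doesn't prescribe any of this, and the tense/belief glyphs are omitted.

    // A minimal sketch of the triple notation as a data structure.
    // All type and member names are illustrative, not part of N itself.
    using System;

    // A term is either an atom ("john", "ball"), an assignment, or a
    // nested triple – the nesting is what the proposal above needs.
    abstract class Term { }

    class Atom : Term
    {
        public readonly string Name;
        public Atom(string name) { Name = name; }
        public override string ToString() { return Name; }
    }

    // Models the qualifier form [ball.target = tom].
    class Assignment : Term
    {
        public readonly string Path;   // e.g. "ball.target"
        public readonly Term Value;
        public Assignment(string path, Term value) { Path = path; Value = value; }
        public override string ToString() { return "[" + Path + " = " + Value + "]"; }
    }

    class Triple : Term
    {
        public readonly Term Subject;
        public readonly string Action; // glyph annotations omitted
        public readonly Term Obj;

        public Triple(Term subject, string action, Term obj)
        {
            Subject = subject; Action = action; Obj = obj;
        }

        public override string ToString()
        {
            return "[" + Subject + " | " + Action + " | " + Obj + "]";
        }
    }

    class Demo
    {
        static void Main()
        {
            Term qualifier = new Assignment("ball.target", new Atom("tom"));
            Term sentence = new Triple(new Atom("john"), "throw_p", qualifier);
            Console.WriteLine(sentence); // prints: [john | throw_p | [ball.target = tom]]
        }
    }

Once the object slot can hold another term rather than just an atom, the nested form falls out for free – though that's a statement about the notation, not about whether our heads actually decompose the sentence that way.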

Blissful Relief

Well, I’m back in work – a fact that would normally be an occasion for mourning. But, at least on this occasion, I am settling in OK. I am doing a short contract for a local classified ads company called the Trading Post (if you’ve ever seen the movie The Castle, you’ll recognise the name). I will be working on redeveloping their admin back-end, and shall get more stored procedure practice than at any other time in my programming career. I guess that’s a good thing – filling the skills gaps and all that. Anyway, I have an unmediated connection to the internet that allows peak download rates of around 1 MB/s, which is (bizarrely) faster than I got at Telstra.

So expect me to be eulogising about the wonders of some huge piece of freeware or another, or to be talking about something large that I found on a P2P network.

Almost ready to go open source

I have been tweaking and fiddling with the framework in readiness for making it open source. There are still quite a few things I ought to do before making any attempt to publicise it. I thought that since I've been very remiss in not posting to this blog lately, I could do some posting – and what better way than to list what I've got to do before the system is unveiled to the public? I know I'm not exactly going about this in the true open-source way, but I guess I'm a little wary about releasing my code to the public without having dotted as many 'i's and crossed as many 't's as I can find. I'm sure there are plenty of improvements that people will find when I release it, but I don't want any of them to be corrections of dumb or outmoded errors!

Anyway, enough of such insecurities! Here's what I have to do before D-day:

  • See if I can create a WIX installer that works. Currently I seem to be having problems with adding assemblies to the GAC. My best guess is that WIX is not able to add .NET 2.0 assemblies to the GAC because it's trying to use some sort of reflection on an incompatible assembly.
  • Create a user tutorial. I'll probably do that in this blog. It should be a pretty easy task, since I've gone to great lengths to make the framework as transparent as possible.
  • Document the API using NDoc. As ever, I've been slack in commenting my code. I guess I'll start with the main APIs and then keep at it in the months after release.
  • Run it through FxCop and make sure there are no obvious errors.
  • Run regression tests on the static code generation system! This is potentially of less value than the dynamic code generation system, so I have ignored it for a while. (There's a sketch of the kind of test I have in mind after this list.)
  • Make sure it still compiles on .NET 1.1! And how about Mono? I haven't seen any major commercial projects requiring Mono, but that may change…
  • Make sure that the compilation with NAnt works as well as the VS.NET 2005 beta 2 builds, especially if WIX is not an option. I need to make targets that will create binary and source distributions for release. These should then be used to upload nightly snapshots to the web.
  • Convert the system to work with CruiseControl.NET. I have (so far) had around 5 interviews with ThoughtWorks, so I really ought to be as obsequious as possible to boost my chances with them! ;-)
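
As promised above, here's roughly the shape a regression test for the static code generator might take – a minimal sketch in the NUnit style. StaticCodeGenerator is a hypothetical stand-in of my own (stubbed out so the sketch compiles); the framework's real entry points may look quite different. NUnit and System.IO are the only real APIs used here.

    // A sketch of a golden-file regression test for static code generation.
    using System.IO;
    using NUnit.Framework;

    // Hypothetical stand-in so the sketch compiles; the real framework
    // would perform the actual static code generation here.
    static class StaticCodeGenerator
    {
        public static string Generate(string inputPath)
        {
            return File.ReadAllText(inputPath); // placeholder behaviour
        }
    }

    [TestFixture]
    public class StaticCodeGenRegressionTests
    {
        [Test]
        public void GeneratedSourceMatchesGoldenCopy()
        {
            // Generate code for a known input and compare it against a
            // previously approved ("golden") copy kept under version control.
            string generated = StaticCodeGenerator.Generate("TestContracts.cs");
            string expected = File.ReadAllText("golden/TestContracts.generated.cs");
            Assert.AreEqual(expected, generated);
        }
    }

The nice thing about golden-file tests is that a NAnt target can run them unattended, which fits neatly with the overnight snapshot idea above.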

Once these tasks have been performed, I ought to post it online and start trying to publicise it a bit more.
I can then think a bit more about using the framework in earnest. I want this system because I have an object-relational mapping (ORM) system that is just short of being releasable, and I would like to augment it with design by contract (DBC) to see whether I can productise it that way. For more information on ORMs, google for Scott Ambler's original postings on a design for a robust persistence layer. I also want to make the second release of the framework self-hosting. That is – I want to use DBC within the DBC framework itself. How much more could I practice what I preach, eh?
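
For anyone who hasn't met design by contract before, here's the flavour of the thing – a purely illustrative sketch. The attribute names below are invented for the example, not the framework's actual API, and the checks are written out by hand where a real DBC framework would generate them.

    // An illustrative sketch of attribute-based design by contract.
    // RequiresAttribute and EnsuresAttribute are invented for this example;
    // a DBC framework would weave the corresponding runtime checks into
    // the class via code generation rather than relying on hand-written ifs.
    using System;

    [AttributeUsage(AttributeTargets.Method)]
    class RequiresAttribute : Attribute
    {
        public readonly string Condition;
        public RequiresAttribute(string condition) { Condition = condition; }
    }

    [AttributeUsage(AttributeTargets.Method)]
    class EnsuresAttribute : Attribute
    {
        public readonly string Condition;
        public EnsuresAttribute(string condition) { Condition = condition; }
    }

    class Account
    {
        private decimal balance;

        // The contract is declared on the method; generated code would
        // check the precondition on entry and the postcondition on exit.
        [Requires("amount > 0")]
        [Ensures("balance == old(balance) + amount")]
        public void Deposit(decimal amount)
        {
            if (amount <= 0) // what the generated precondition check would do
                throw new ArgumentException("Precondition violated: amount > 0", "amount");
            balance += amount;
        }
    }

Self-hosting would then just mean decorating the framework's own classes – the code generators included – with the same contract attributes, and letting the framework check itself.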