Month: February 2007

C#, Domain Models & the Semantic Web

I’ve recently been learning more about the OWL web ontology language in an attempt to find a way to represent SPARQL queries in LINQ. SPARQL and LINQ are very different, and the systems they target are also dissimilar. Inevitably, it’s difficult to imagine what techniques to employ in translating a query from one language to the other without actually trying to implement the LINQ extensions. I’ve got quite a long learning curve to climb. One thing is clear, though: OWL, or some system very similar to it, is going to have a radical impact both on developers and on society at large. The reason I haven’t posted in the last few weeks is that I’ve been writing an article/paper for publication in a related journal. I’ve decided to put up the abstract and the closing remarks of the paper here, just so you can see what I’ll be covering in more depth on this blog in the future: LINQ, OWL & RDF.
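To make that concrete, here is a minimal sketch of the kind of correspondence I have in mind. Everything in it is illustrative: the Person class stands in for an OWL class, and the in-memory list stands in for the triple store that a hypothetical LINQ provider would query by generating SPARQL from the expression tree.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for an OWL class such as ex:Person.
class Person
{
    public string Name { get; set; }
}

class SparqlLinqSketch
{
    static void Main()
    {
        // In-memory stand-in data; a real provider would instead translate
        // the query below into SPARQL and run it against a triple store.
        var people = new List<Person>
        {
            new Person { Name = "Alice" },
            new Person { Name = "Bob" }
        };

        // The SPARQL query being mimicked:
        //   SELECT ?name
        //   WHERE { ?person rdf:type ex:Person .
        //           ?person ex:name  ?name . }
        var names = from person in people   // ?person rdf:type ex:Person
                    select person.Name;     // ?person ex:name ?name

        foreach (var name in names)
            Console.WriteLine(name);
    }
}
```

The interesting (and as yet unproven) part is whether every SPARQL graph pattern has a workable LINQ counterpart; that is the question the paper explores.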

PS: If you are the editor of a journal that would like to publish the article, please contact me by email and I’ll be in touch when it’s finished. As you will see, the article is very optimistic about the prospects of the semantic web (despite its seeming lack of uptake outside of academia), and puts forward the idea that, with the right fusion of technologies and environments, the semantic web will have an even faster and more pervasive impact on society than the world wide web did. This is all based on the assumption that LINQ, as a query language, is capable of representing the same queries as SPARQL.


From KurzweilAI – First General Purpose Quantum Computer

D-Wave Systems, Inc. plans to demonstrate a technological first on Feb. 13: an end-to-end quantum computing system powered by a 16-qubit quantum processor, running two commercial applications, live. This is the core of a new quantum computer to be unveiled by D-Wave Systems, says Steve Jurvetson, Managing Director of Draper Fisher Jurvetson,…
http://www.kurzweilai.net/email/newsRedirect.html?newsID=6385&m=28179


A 16-qubit processor. Will it be demonstrating the ‘commercial application’ MS-DOS?

A Cure for Cancer

From KurzweilAI. If this isn’t worth blogging about, then nothing is. Could this be the most historic announcement ever to arrive in my inbox?

Cheap, safe drug kills most cancers

NewScientist.com news service Jan. 20, 2007

University of Alberta scientists have tested dichloroacetate (DCA) on human cells cultured outside the body and found that it killed lung, breast and brain cancer cells, but not healthy cells….
http://www.kurzweilai.net/email/newsRedirect.html?newsID=6368&m=28179

PowerShell NOT inferior to Cygwin

In a previous post, I kinda outed myself as a lover of “find | xargs” and grep. It seems that I inadvertently gave offence to Jeffrey Snover (the architect of PowerShell), for whom I have great respect, so I thought I ought to set the record straight.

I like PowerShell a lot, and I suspect that it was familiarity with Unix that drove me back to Cygwin. I was truly inspired by the Channel 9 interview that Jeffrey Snover gave. I had a few problems with the way Cygwin handled NT file security settings, which caused me no end of grief, so I was more than happy to take PowerShell for a spin. I think the idea of an object pipeline is a stroke of genius, and I particularly like the fact that I am dealing with the .NET framework; I’ve been working with it for years, so I don’t have a huge learning curve to ascend in that regard.
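To see why the object pipeline appeals so much to a .NET developer, here is a rough analogy of my own (not anything from the interview): a PowerShell pipeline stage receives live framework objects rather than text, which is essentially what a LINQ query over the same objects does.

```csharp
using System;
using System.Diagnostics;
using System.Linq;

class ObjectPipelineDemo
{
    static void Main()
    {
        // PowerShell passes .NET objects between pipeline stages, e.g.:
        //   Get-Process | Where-Object { $_.WorkingSet -gt 50MB }
        // Roughly the same pipeline over the same framework objects in C#
        // (the 50 MB threshold is an arbitrary choice for illustration):
        var hogs = Process.GetProcesses()
                          .Where(p => p.WorkingSet64 > 50 * 1024 * 1024)
                          .OrderByDescending(p => p.WorkingSet64);

        foreach (var p in hogs)
            Console.WriteLine("{0}: {1} MB",
                p.ProcessName, p.WorkingSet64 / (1024 * 1024));
    }
}
```

No parsing of columns of text with grep and awk: each stage gets typed objects with real properties.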

Funnily enough, the reason I struggled is apropos of what I was discussing in the original post: a relative lack of documentation. I like to be able to read around a topic, to get a lot of people’s ideas. I initially found it hard to get started; at times I didn’t have a sense of the capabilities of PowerShell, or of where to turn for things that I knew how to do in bash. That said, I was trying out the beta version of Monad, which was a while back, so things may have changed. I’ll have to try again.

Another reason I opted to go back to Cygwin: I get pretty much all of the mainstream GNU utilities for free. It didn’t occur to me that with a little fiddling I could have the best of both worlds by accessing Cygwin from within PowerShell. I haven’t tried it yet, but once I have, I shall report back with instructions.

IDEs are not a panacea, but neither are they all bad

It seems I touched a bit of a raw nerve with Alec when I pointed out that some IDEs are less crap than others; specifically, I was referring to the ones that have major corporate clout invested in their production. He pointed out that they require standardisation of tools within a development team, as well as standardisation of APIs, SCM tools, documentation, and build tools. He also pointed out that they obscure information from the developer that is necessary for the proper understanding of a complex system.

As I pointed out in my first post on this topic, a good IDE doesn’t obscure information; it makes it more accessible. That’s why one of the first features integrated into an IDE is documentation; even open source tools such as #develop provide that as a feature. IntelliSense (the provision of context-sensitive information as a tooltip), which one won’t find in tools such as Emacs or Vim, is another early inclusion in the feature set. These features are not obfuscatory; they do the opposite of what Alec alleges. And even if IDEs did hide information, which I doubt, there is a need in non-trivial systems for layers of abstraction. These days that is most often achieved through the layered design of systems: a deliberate ploy by programmers to hide information from themselves to avoid information overload.

Standardisation of tools is not generally a bad thing either. If you are working as part of a distributed open source development team, there is less face-to-face collaboration and problem solving, so you might not be aware of the value of having the same toolset as a colleague until you ask them to help you solve a problem you’re working on. Their being able to use your IDE as though it were their own is a positive thing. And if all the members of a co-located team are using different tools, the options for collaboration are lessened. Not a good thing, in my experience.

Directory naming dependencies are a product of hard-coded configuration files rather than of IDEs; you can hard-code your constants without the help of an IDE quite easily, I find. Enforced change control is absolutely vital whatever you’re working on: large or small, pro or amateur, corporate or not. I wouldn’t work in a team that didn’t enforce such a thing; to me it’s unthinkable. Am I understanding Alec right? Is he saying a common source code control policy is a bad thing?

Documenting code for the purposes of reuse is helped by most IDEs; VS.NET is able to flag such omissions as errors if you want. Writing code that is actually reusable is much harder, and neither an IDE nor a bitty development environment (BDE) will help you there! Most IDEs can support build scripts; VS.NET can provide IntelliSense for NAnt or MSBuild scripts. If you want a centralised build server you can use either of these, and in fact the most successful approach I’ve seen for build servers is one where NAnt delegates the building of projects to the IDE, while it takes care of other environmental issues such as packaging, testing and deployment.

What I wonder is whether I am so firmly embedded in the non-open-source development process that I have become spoilt, unable to function without tools that augment my memory and perception. Would I be a better programmer if I were happy to switch to a different command-line tool for each task that I routinely perform?

Why open-source software development environments are crap

It seems that our little anti-Agile spat generated a fair amount of traffic for both Alec and me. Alec’s excellent blog attracted a record number of visitors. Quite right too. Evidently Mitch’s comment about opinion being more of a draw than painstakingly researched and presented essays was right: controversy drives up the ratings. So I say “hey ho, let’s go!” with the most direct way to get things going: Alec-baiting. Alec, as you all must know by now (since you’ve been to his blog), is not only an Agile band-wagoneer but a bit of an aficionado of obscure and marginal operating systems; he has to write device drivers on the train in the morning to get his laptop working, no joke. Another thing you may know about Alec is that he creates Linux development distros for open source development teams, which brings me to the point of this post.

Open Source IDEs are Crap!

Now, I am not levelling this criticism at Alec, since he doesn’t write the IDEs. I am levelling it at all of the IDEs I have ever turned to in the vain hope that the time had come for me to migrate to Linux. I am a C#/.NET developer, but before that I was into Java, before that Visual C++, before that GCC, and before that Eiffel and compiler suites for VMS and various Unix variants.

As a Java developer, I was (through budgetary constraints) obliged to work with open source development environments like NetBeans and Eclipse. I ended up better off with Vi. They were slow, flaky and frustrating. I thought this was just a case of their being based on Java, and thought little more of it. I’ve been able to work with Visual Studio ever since I left university in 1995, and at all stages through that period VS was regarded as the benchmark that other tool vendors never quite seemed to reach. I very soon abandoned the Java world: I couldn’t stand the tools, the debuggers were crap or non-existent, and the documentation was sparse, one-dimensional and uninspiring.

Visual Studio Team Suite has raised the bar again. During a recent stint with a client here in Melbourne I was required to knock together a development environment partially based on VS.NET 2003, with a whole bunch of open source tools filling the gaps for revision control, bug tracking, team portals, and task and requirements management. It worked, and the team got stuff done, but when I moved on to the next project I used Team Foundation Server (TFS) and was forced to admit that the unified team development experience is fantastic. There is nothing out there in the open source world that can begin to compete with the fluidity of TFS.

I WANT to run Linux on my laptop. I gave up Windows PowerShell and went back to Cygwin. I always use find, grep and xargs in preference to Windows search or Google Desktop. I love Unix and would love to go back to it. But I couldn’t bear to give up VS.NET 2005. Nothing comes close. As a .NET developer my options are:

  • Vi + NAnt
    A mighty powerful combination that can move mountains. Not exactly much in the way of IntelliSense or debugger support, though, eh?
  • #Develop
    Not bad, but lagging behind the pack. Partial support for web technologies. Doesn’t support ReSharper. Won’t support LINQ for years, especially not on Linux. No team development or SCC integration.
  • MonoDevelop
    A crude port of KDevelop?

OK, I’m the first to admit that these reviews are biased and unspecific, but I am hoping that someone out there will prove me wrong. I want to go over to Mono. I’m waiting, actually. How long am I going to have to wait?