I spent years bemoaning the fact that the laws of physics didn’t allow the wireless transmission of power. Now it seems that I bemoaned prematurely. Researchers at MIT have found a way to use inductance to power a 60W lightbulb (think of a similar power rating to a laptop) from a distance of up to 2 metres.
I know it doesn’t sound like much, but as I look at my desk I can’t help thinking that a short range power transmitter would help. I’m sure it beats Quartz crystals as a source of magic moonbeams as well.
This picture would have been worth re-posting even if the photographer had not been lucky enough to catch an amazing bolt of fork lightning at the instant they took the picture.
I spent most of my evening converting Jena to .NET. Needless to say it was only at the end of the evening that I discovered that Andy Seabourne (from my old home town of Bristol) had already worked out how to use IKVM to convert the jar files into assemblies. I’m not bothered though; I produced make files (rather than shell scripts) that work better on Cygwin. The best thing I got from Andy was his “don’t worry, be happy” advice: IKVM spuriously complains about unfound classes, and you don’t need to worry about it. Once I read that, I realised that I had successfully converted Jena about 4 hours earlier, and all my fiddling about trying to get the right pattern of dependencies was completely unnecessary – IKVM just works! (and rocks)
Had I realised just how easy it was to convert bytecode to IL, I might have gone trawling the Apache Jakarta project more often over the last few years. (sigh) Never mind – I now have the tools for working on semantic web applications in .NET. Yayyyy!!!! I don’t have to learn Python either. I’m not sure whether I’m sad about that.
I don’t have a place handy to put the assemblies, and WordPress won’t allow me to upload them, so I’ll do the next best thing and give you the make file. It assumes that you are using Cygwin or something similar; if you aren’t, just use the conventional Windows path structure for ikvmdir. It is also based on Jena version 2.5.2.
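In the meantime, here is a minimal sketch of the sort of make file I mean. The directory paths, the jar list and the output name are all assumptions – adjust them to your own layout:

```make
# Sketch only: convert the Jena jars into a single .NET assembly with ikvmc.
# ikvmdir and jenadir are assumed locations -- change them to suit your setup.
ikvmdir := /cygdrive/c/ikvm
jenadir := /cygdrive/c/Jena-2.5.2/lib

jars := $(wildcard $(jenadir)/*.jar)

jena.dll: $(jars)
	$(ikvmdir)/bin/ikvmc.exe -target:library -out:jena.dll $(jars)

clean:
	rm -f jena.dll
```

Note that ikvmc will grumble about classes it cannot resolve; as Andy points out, those warnings are harmless.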
The semantic web is poised to influence us in ways that will be as radical as the early days of the Internet and World Wide Web. For software developers it will involve a paradigm shift, bringing new ways of thinking about the problems that we solve and, more importantly, bringing us new bags of tricks to play with.
One of the current favourite ways to add value to an existing system is through the application of data mining. Amazon is a great example of its power; it can offer you pretty accurate recommendations based on a statistical model of purchasing behaviour. It looks at what other purchasers of a book bought, and uses that as a guide to make further recommendations.
What if it were able to make suggestions like this: we recommend that you also buy book XYZ because it discusses the same topics but in more depth. That kind of recommendation would be incredible. You could have faith in it, because it wouldn’t be tainted by the thermal noise of purchaser behaviour. Every time I go shopping for books on computer science, Amazon recommends that I buy Star Trek books – it just so happens that programmers are suckers for schlock sci-fi, so there is always at least one offering amongst the CompSci selections.
The kind of domain understanding I described above is made possible through the application of Ontology Engineering. Ontology Engineering is nothing new – it has been around for years in one form or another. What makes it new and exciting for me is the work being done by the W3C on semantic web technologies. Tim Berners-Lee has not been resting on his laurels since he invented the World Wide Web. He and his team have been producing a connected set of specifications for the representation, exchange and use of domain models and rules (plus a lot else besides). This excites me, not least because I first got into Computer Science through an interest in philosophy. About 22 years ago, a correspondent in a Sunday supplement newspaper wrote about the wonderful new science of Artificial Intelligence. He described it as a playground of philosophers, where for the first time hypotheses about the nature of mind and reality could be made manifest and subjected to the rigours of scientific investigation. That blew my mind – and I have never looked back.
Which brings us to the present day. Ontology engineering involves the production of ontologies: abstract models of some domain. This is exactly what software developers do for a living, but with a difference. The Resource Description Framework (RDF) and the Web Ontology Language (OWL) are designed to be published and consumed across the web. They are not procedural languages – they describe a domain and its rules in such a way that inference engines can reason about the domain and draw conclusions. In essence, the semantic web brings a clean, standardised, web-enabled and rich language in which we can share expert systems. The magnitude of what this means is not yet clear, but I suspect that it will change everything.
The same imperatives that drove the formulation of standards like OWL and RDF are at work in the object domain. A class definition is only meaningful in the sense that it carries data and its name has some meaning to a programmer. There is no inherent meaning in an object graph that can allow an independent software system to draw conclusions from it. Even the natural language labels we apply to classes can be vague or ambiguous. Large systems in complex industries need a way to add meaning to an existing system without breaking backwards compatibility. Semantic web applications will be of great value to the developer community because they will allow us to inject intelligence into our systems.
The current Web 2.0 drive to add value to the user experience will eventually call for more intelligence than can practically be got from our massive OO systems. A market-driven search for competitiveness will push the software development community to embrace the semantic web more fully, as the only easy way to add intelligence to unwieldy systems.
In many systems the sheer complexity of the problem domain has led software designers to throw up their hands in disgust, and opt for data structures that are catch-all buckets of data. Previously, I have referred to them as untyped associative containers because more often than not the data ends up in a hash table or equivalent data structure. For the developer, the untyped associative container is pure evil on many levels – not least from performance, readability, and type-safety angles. Early attempts to create industry mark-up languages foundered on the same rocks. What was lacking was a common conceptual framework in which to describe an industry. That problem is addressed by ontologies.
In future, we will produce our relational and object oriented models as a side effect of the production of an ontology – the ontology may well be the repository of the intellectual property of an enterprise, and will be stored and processed by dedicated reasoners able to gather insights about users and their needs. Semantically aware systems will inevitably out-compete the inflexible systems that we are currently working with, because they will be able to react to the user in a way that seems natural.
I’m currently working on an extended article about using semantic web technologies with .NET. As part of that effort I produced a little ontology in the N3 notation to model what makes people tick. The ontology will be used by a reasoner in the travel and itinerary planning domain.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix : <#> .
:Person a owl:Class .
:Need a owl:Class .
:PeriodicNeed rdfs:subClassOf :Need .
:Satisfier a owl:Class .
:need rdfs:domain :Person;
rdfs:range :Need .
:Rest rdfs:subClassOf :Need .
:Hunger rdfs:subClassOf :Need .
:StimulousHunger rdfs:subClassOf :Need .
:satisfies rdfs:domain :Satisfier;
rdfs:range :Need .
:Sleep a owl:Class;
rdfs:subClassOf :Satisfier ;
:satisfies :Rest .
:Eating a owl:Class;
:satisfies :Hunger .
:Tourism a owl:Class;
:satisfies :StimulousHunger .
In the travel industry, all travel agents – even online ones – are routed through centralised bureaus that give flight times, take bookings and so on. The only way an online travel agency can distinguish itself is by being smarter and easier to use. Agencies are tackling the latter problem these days with AJAX, but they have yet to find effective ways to be smarter. An ontology that understands people a bit better is going to help them target their offerings more ‘delicately’. I don’t know about you, but I have seen portal sites that cram countless sales pitches onto a single page: endless checkboxes for extra services, and links to product partners that you might need something from. As the web becomes more interconnected, this is going to become more and more irritating. The systems must be able to understand that the last thing a user wants after a 28-hour flight is a guided tour of London, or tickets to the planetarium.
The example ontology above is a simple kind of upper ontology. It describes the world in the abstract to provide a kind of foundation on which to build more specific lower ontologies. This one just happens to model a kind of Freudian drive mechanism to describe how people’s wants and desires change over time (although the changing-over-time bit isn’t included in this example). Services can be tied to this upper ontology easily – restaurants provide Eating, which is a satisfier for hunger. Garfunkle’s restaurant (a type of Restaurant) is less than 200 metres from the Cecil Hotel (a type of Hotel that provides sleeping facilities, a satisfier of the need to rest) where you have a booking. Because all of these facts are subject to rules of inference, the inference engines can deduce that you may want to make a booking to eat at the hotel when you arrive, since it will have been 5 hours since you last satisfied your hunger.
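To make that concrete, here is a hedged sketch of how a lower ontology might tie those services back to the upper ontology, in the same N3 notation. Every term here – :Restaurant, :Hotel, :provides, :near and the individuals – is invented purely for illustration:

```n3
# Hypothetical lower ontology; all terms below are illustrative assumptions.
:Restaurant a owl:Class ;
    :provides :Eating .      # Eating satisfies :Hunger in the upper ontology
:Hotel a owl:Class ;
    :provides :Sleep .       # Sleep satisfies :Rest
:Garfunkles a :Restaurant ;
    :near :CecilHotel .      # say, within 200 metres
:CecilHotel a :Hotel .
```

Given a rule that chains :provides with :satisfies, a reasoner could conclude that a guest with a booking at the Cecil Hotel can satisfy :Rest on site and :Hunger a short walk away.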
The design of upper ontologies is frowned upon mightily in the relational and object oriented worlds – it smacks of over-engineering. But for the first time we are seeing a paradigm that will reward deeper analysis. I look forward to that day.
I’ve been taking the new Visual Studio preview for a spin, and it’s got some pretty nice new additions. These aren’t just the headline features that have been publicized elsewhere, but thoughtful additions to usability that will make our lives a little bit easier. They should also help us to make our code a lot cleaner and less complex. We see the introduction of easy performance reporting, differential comparison of profiling sessions, and code metrics analysis. The March 2007 CTP also brings the long-awaited intellisense support for LINQ in C#, as well as some other nice features for exporting code and projects as templates.
The performance report comes with a single click. It helps you to target your optimizations.
After you’ve done your optimizations, you can rerun your profiler and do a comparison to see how well your efforts worked.
The code analysis tools now offer code complexity metrics that will indicate whether your code needs simplification or refactoring.
The refactoring tools have not received much attention, and still lag behind the likes of R#. There are quite a few refactoring code snippets, though, that may allow us to craft our own refactorings. If so, then we have a power that has not been seen outside of UML tools like Rational XDE.
The tools for architects have come a long way. It’s clear that they are targeting the aims of the Dynamic Systems Initiative (DSI). This is a very exciting piece of work, and will revolutionize development when it reaches fruition. Here you can see an application diagram with a set of property pages allowing the architect to define the system properties required for deployment of a Windows application called Test1.
The class diagram now allows you to directly enter your method definitions in the class details pane. Code is generated on the fly in the code window. I’m not sure if this is new, but I can’t recall seeing anything like this in 2005.
While you’re editing the properties of your methods, you can also directly edit the code commentary as well. Now there’s no excuse…
There is a new feature allowing you to search for web services defined in other projects, to insert them as web references in the current project. Great productivity feature for distributed applications.
The class view pane gets a new search box that allows you to search for classes, fields and properties. This is really useful, and can save programmers hours of hunting on large projects. (provided they haven’t got R#, of course)
Yes! LINQ intellisense. Nuff said.
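For anyone who hasn’t played with LINQ yet, this is the kind of query expression the new intellisense now completes as you type. The Person class and the people list below are invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical example: Person and people exist only to show the query syntax.
class Person { public string Name; public int Age; }

class Demo
{
    static void Main()
    {
        var people = new List<Person> {
            new Person { Name = "Ann", Age = 34 },
            new Person { Name = "Bob", Age = 15 }
        };

        // Filter, order and project into an anonymous type.
        var adults = from p in people
                     where p.Age >= 18
                     orderby p.Name
                     select new { p.Name, p.Age };

        foreach (var a in adults)
            Console.WriteLine("{0} ({1})", a.Name, a.Age);
    }
}
```

The intellisense kicks in on the range variable p and on the members of the anonymous type, which is where the old CTPs used to leave you guessing.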
Orcas also introduces a nice Export template feature that allows you to develop a project or group of classes for exporting as a new item template. This feature is a great way to allow architects to ease the development cycle. They can produce pieces of reference code and distribute them to the rest of their development team.
Just select the classes that you want to export.
Give an explanation of what they are intended to do.
And Visual Studio goes through them, creating placeholders for class and namespace identifiers. These can then be filled in when the developer creates a new item.
There are still problems that need to be sorted out, though. Like project properties…
These are a few of the things that I noticed immediately. I’m sure there are more to come. The documentation has not been overhauled much, and the LINQ section is still mostly empty pages. But my initial impression is that Orcas is going to be a delight to work with, for the whole team.
D-Wave Systems, Inc. plans to demonstrate a technological first on Feb. 13: an end-to-end quantum computing system powered by a 16-qubit quantum processor, running two commercial applications, live. This is the core of a new quantum computer to be unveiled by D-Wave Systems, says Steve Jurvetson, Managing Director of Draper Fisher Jurvetson.
A 16-qubit processor. Will it be demonstrating the ‘commercial application’ MS-DOS?
From KurzweilAI. If this isn’t worth blogging about, then nothing is. Could this be the most historic announcement ever to arrive in my inbox?
NewScientist.com news service Jan. 20, 2007
University of Alberta scientists have tested dichloroacetate (DCA) on human cells cultured outside the body and found that it killed lung, breast and brain cancer cells, but not healthy cells….
Today I had record stats on my blog. I passed the 200 hits per day threshold.
Since Mitch Denny told me the kind of visitor numbers he got on his blog, I’ve been operating with a bit of an inferiority complex. At the time I was peaking at about 20 hits per day and at the time I was pleased with those levels. Since then (about 4 months ago) I have been trying to attract more visitors to the blog. I get a lot of fun out of writing the posts (and my mum reads some of them back in the UK) but in the end there’s not much point keeping a public weblog unless the public reads it.
I started watching which posts brought the most traffic, and unsurprisingly it was the ones that told readers in advance what they were going to read about. Obscure or humorous titles got nowhere. People want to know the topic before they expend the time and mental effort visiting the page. Choosing my titles and topics more wisely, and creating cross-links to well-known sites (such as Mitch’s), has helped a lot, as have search engine registrations (especially reddit, which I had never heard of before). I also found that controversial-but-lite content (such as my Anti-Agile Gripe) got way more traffic than the more painstaking articles on configuration and the LINQ series. I don’t know whether the LINQ series will get more traffic the closer Orcas gets to release.
I haven’t gotten much in the way of comments, which has been disappointing – I can’t tell whether readers are just skimming through or actually reading the posts! I’m not sure what to do about that. Any ideas?
NASA Ames Research Center has announced a collaboration with Google to make available the gigaquads of data that it has gathered over the years. Hopefully this will make available data from lunar expeditions and some very interesting satellite imagery. I don’t know whether it will include data from the likes of the Hubble Space Telescope, but I hope so. One thing I am sure about: it will be a priceless resource for professional and amateur astronomers alike.
I wonder who else Google is talking to? The Human Genome Project? The Visible Human Project? Interpol? Imagine if they made it possible for researchers to publish their data sets when they publish a paper… Maybe in future, readers of scientific papers could draw their own conclusions about the validity of results by running their own analysis of the data. It might improve the quality of research if scientists knew that their results would come under greater scrutiny.
Read the article at KurzweilAI: http://www.kurzweilai.net/email/newsRedirect.html?newsID=6195&m=28179