Oh Yea! Be it known that Nitrogen is to be taken to Tyburn tree and there hung by the hard-drive until all computation has ceased, then taken down and have its giblets taken out and rubbed vigorously with a binary level formatting tool before being paraded around the service pack update avenues for the edification and education of the operating system.
Already T**stra is beginning to fade into an unhappy recollection. Considering that I congratulated myself on my good judgment in getting a place at an Australian household name onto my CV, I’m feeling a little (a lot) foolish now. I have worked in environments as pathological as Telstra’s before, but very seldom, and my job-hunting process has now sensitized me to the cultures of the corporations I’m approaching. I guess it is a truism that in a large corporation, anywhere in the world, the effectiveness of individuals varies inversely with the size of the company. And let’s face it, Telstra is the largest employer in the whole of Australia. Anyway, if you want to do architect-level work in Australia, you pretty much have to give up any ideas about going near a compiler. I can’t see how I could be an effective architect without first-hand understanding of the technologies I’m using, and how else to get that than to use them?
So after four months of what should have been a smooth transition back into work, I have come out the other end with my sanity barely intact. Not a single thing I did there was used, and some of it was probably worth my time. I shouldn’t be surprised, since it seems that understanding of, or experience in, the software lifecycle is not a necessary prerequisite for participation in the management of a software project (here). I have even been told that it is an actual impediment to getting a job on a project team.
This miasmic feeling of despondency is deepened by my reading matter at the moment. I’m reading “Red Rabbit” by Tom Clancy, which tries to bring to life the feelings and thoughts of people trapped in the midst of the KGB bureaucracy in the early eighties. Oddly, I feel that the environments described were apropos in the extreme to my own situation. It heartens me a little to recall what happened to the Soviet regime, and at least you can opt to leave Telstra without fear of banishment to Siberia. I guess the Telstra equivalent to a gulag work-camp in the Siberian tundra would be a revenue protection assignment in the billing systems maintenance department, a fate I adroitly sidestepped only a month ago!
With my trusty Brillo pad in hand I am preparing to revolutionize my software development experience™. Yes, the install media arrived through the post the other day for Visual Studio 2005…
… [Please Wait]
…diddle diddle dum. Diddle diddle dum. And we arrive at the present moment. Yes, I sat there like a zombie for a whole evening. But the end result is what counts, and that’s the problem: now none of my code works any more. Boo hoo.
So, I have to either debug the EDRA config app block, or go hunting for a .NET 2.0 version of a config tool. I probably also have to do the same with all the templating tools, logging tools, and every other trace of code reuse that I have incorporated into my Dbc system.
Undoubtedly, VS.NET 2005 beta 2 is by a long margin the slickest development environment that I have ever seen. But I really can’t be bothered to go through all this pain at the moment. Perhaps I should revisit the idea of using VPC 2004 for my dev environment. The problem is that there is a whole other bunch of problems associated with that route. So what do I do? I wanna be able to play with the latest toys, but I wanna do it without incurring weeks of pointless twiddling after I realize my recklessness has rendered weeks of work either obsolete or too bleeding-edge to work. [sob]
Some swine has started using comments on my blog as a way to spam me. I may soon have to prevent anonymous users from posting comments. I got 104 comments on one of my posts, and it wasn’t even that interesting!
The graph below, which I found on a recruitment site, lists the relative importance of programming language skills and how they have changed over the last few years.
Aside from the fact that C# isn’t even in there, which I find a little alarming, it shows that all of the languages listed are in decline. Especially my beloved C++, king of all languages. Does this graph indicate that there is a proliferation of languages, and that the impact of previously major languages is being diluted?
Are we building a Tower of Babel of scripting languages that will come tumbling down around our ears? Perhaps this was foreseen by Microsoft, and motivated them to target language independence for the .NET platform rather than platform independence, as was the case with Java?
I love to be educated. I also love the fact that I can absorb new ideas continuously without ever sating my thirst for data. And where better than at Wikipedia, which has obviously been lavished with the attentions of a database expert in recent days. I have to admit that I have never heard of, let alone worked on a dimensional database, so I am intrigued to know more.
This from LiveScience via Kurtzweil AI: Monkeys Brains Alter to Work Robotic Arm
A new study finds a monkey’s brain structure adapts to treat a robotic arm as if it was a natural appendage. The finding bolsters the notion that the primate brain is highly adaptable, and it adds more knowledge to the effort to create useful prosthetic devices for…
This is a very thought-provoking piece of news. To me it seems to point out the way that awareness inhabits the sensorium, no matter what the origin of that sensorium. If the monkey brain can adapt to the use of non-organic appendages, then maybe it is possible to go further and progressively substitute pieces of the body with cyborg replacements. It brings to my mind the thought experiments where functionally identical transistorised components are used to replace neurons in the brain. The thought experiment was intended to highlight the difference between those who hold that hard-AI is true, and those who believe that something ineffable is lost in the process, which I guess they would call consciousness or soul or some similarly ill-defined term.
Others have argued that the challenge is in replacing the neurons with functionally equivalent components because of the potential for brain cells to be small quantum computers, but in recent years even that has come to be seen as a hurdle that can be overcome. I’m encouraged by these results because they silence the objections of those who hold that the ineffable thing lost in a cyborg is an organic nature. As though carbon-based molecules were somehow privileged, and able to yield consciousness in a way that other assemblages of atoms are not.
Surely it is the pattern of signals going to and from the brain that counts, not their origin?
Apart from being my nemesis and the person I am most often mistaken for, he is a pretty good portrait photographer and is in residence at the Brighton Festival this year.
My favourite picture is over on the far right:- Jeff Noon, the only author to have rendered me unconscious just with the sound of his voice.
So, have any of this blog’s Brightonian readers (the REAL Brighton that is) been to any of the shows? Reply to this with your thoughts. I thought last year’s effort was a little pale – No fireworks for god’s sake!
This time I'm going to explore some of the issues I've found in converting the prototype over to work in a JIT environment. If you've seen any of my previous posts, you'll know that I have a working prototype that uses static code generation to produce proxies to wrap objects that identify themselves with the DbcAttribute. These proxies are produced solely for those objects, so the developer has to know which items have predicates on them in order to instantiate the proxies rather than the target objects. If the programmer forgets, or doesn't know, that an object has predicates, he may create the target object and not benefit from the DbcProxy capabilities.
I've therefore been looking at ways to guarantee that the proxy gets generated if the library designer has used DBC. There are a lot of ways to do this, and having not explored all of them, I want to find a way to construct the proxies that will allow any of these approaches to be employed at run time. That naturally makes me think of abstract class factories and abstraction frameworks. It is nice to allow the library designer to define predicates on interfaces, but that may not fit the design they are working on, so I don't want to mandate it.
The solution I've been looking at uses the code injection system that comes with the .NET remoting infrastructure's ContextBoundObject classes. This framework guarantees that the object gets a proxy if it has an attribute that derives from ProxyAttribute. I would derive my DbcAttribute from ProxyAttribute and get an automatic request to create a proxy. Sounds perfect? Yes! But as you will see, there is a gotcha that will prevent this approach. Read on to find out more…
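To make the interception idea concrete: the post is about .NET's ProxyAttribute machinery, but the same "attribute opts the class in, a factory dispenses a checking proxy" shape can be sketched in Java with java.lang.reflect.Proxy. Everything below (@Dbc, Account, DbcFactory, the withdraw precondition) is a hypothetical stand-in, not the actual DBC framework:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;

// Hypothetical marker, playing the role of DbcAttribute
@Retention(RetentionPolicy.RUNTIME)
@interface Dbc {}

interface Account {
    void withdraw(int amount);
}

@Dbc
class AccountImpl implements Account {
    int balance = 100;
    public void withdraw(int amount) { balance -= amount; }
}

class DbcFactory {
    // Dispense a checking proxy only when the class opts in via @Dbc;
    // otherwise hand back the raw target untouched.
    @SuppressWarnings("unchecked")
    static <T> T create(T target, Class<T> iface) {
        if (!target.getClass().isAnnotationPresent(Dbc.class)) return target;
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[] { iface },
            (proxy, method, args) -> {
                // Pre-condition check runs before the call is forwarded
                if (method.getName().equals("withdraw") && (int) args[0] < 0)
                    throw new IllegalArgumentException("precondition: amount >= 0");
                return method.invoke(target, args);
            });
    }
}
```

Note that, unlike the ContextBoundObject route, the Java caller must still ask the factory for the object, which is exactly the "developer has to know" problem the dynamic approach is meant to remove.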
Status of the prototype
The prototype uses a Windows console application that is invoked as part of the build process to create the proxies for an assembly. The program is invoked in association with an application config file that provides it with the details of the target assembly, where the resulting proxy assembly is to be stored, and what it's to be called. It has a few other config settings to define how it will name the proxy for a target object, and what namespace it shall live in. So far, I have not added any guidance on how to handle assemblies in a non-static way, and I haven't implemented any dispenser to intercede in the object creation process. In this post I'm going to look at what config settings I need to add to control the choice between dynamic and static proxy generation.
I'm using the Microsoft Enterprise Application Blocks to handle configuration, and I'm using log4net to take care of logging issues. NVelocity is being used to construct the source code for the proxy. None of that will change in the refactoring that is to follow. In fact, very little has to change in the event model to turn the system into a dual static/dynamic proxy generator. What changes is how to handle the predicates when you are using a dispatch interface. More on that to follow.
Static code gen via the C# compiler
The static code generation process has been described in quite a bit of detail in previous posts, but in essence it works by scanning through all of the types in an assembly, looking for ones that are adorned with the DbcAttribute. For each of those, it scans through each of the InvariantAttributes, and then through each of the members in the class to get their details and any predicates that are attached to them. It then creates a proxy class that imitates the target type, except that prior to each invocation it checks a set of pre-conditions, and afterwards checks post-conditions. This proxy type is kept around until all of the assembly's types have been scanned, and then a new proxy assembly is compiled and saved to disk. This proxy assembly is then used instead of the target type.
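The scan phase described above can be sketched briefly. The real system uses .NET reflection over DbcAttribute, InvariantAttribute and friends; as a rough Java illustration (with @Require and BoundedStack as invented names), the "walk the members, collect the attached predicates" step looks like this:

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

// Hypothetical predicate annotation, standing in for a pre-condition attribute
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Require { String value(); }

class BoundedStack {
    private final Deque<Object> items = new ArrayDeque<>();
    @Require("count > 0")                 // predicate the scanner should pick up
    Object pop() { return items.pop(); }
    void push(Object o) { items.push(o); }
}

class PredicateScanner {
    // Walk the type's members and collect any attached predicates,
    // as the code generator would before emitting a proxy class.
    static Map<String, String> scan(Class<?> type) {
        Map<String, String> predicates = new LinkedHashMap<>();
        for (Method m : type.getDeclaredMethods()) {
            Require r = m.getAnnotation(Require.class);
            if (r != null) predicates.put(m.getName(), r.value());
        }
        return predicates;
    }
}
```

The generator's remaining work, turning each collected predicate string into a guard in the emitted proxy source, is where the template engine comes in.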
The CSharpCompiler provides a set of options to allow you to keep the source code used to create the assembly, and to create debug information to tie the assembly to the source code at run-time. This is useful for debugging the DBC system, but should not be necessary when using the framework in the field. If errors occur they will be on either side of the proxy (assuming I debug the proxies properly) and the user shouldn't want to see the source for the proxy itself. We therefore need to be able to turn this off, along with debug information.
Another feature of the compiler is the ability to dispense an in-memory only assembly that is available for the lifetime of the process. This is just what we need for dynamic proxy generation.
New library for dynamic proxies
One refactoring that is inevitable is that we now need two front-ends to the DBC system: a command-line tool for static proxy generation that can be invoked from VS.NET or NAnt, and something that can be used in-proc to dispense dynamic proxies on demand. The latter can't be another process unless you want to use remoting to access your objects. You may want to do that, but you need to have some control over that situation, and it would be a little intrusive to have that decision taken away from you.
Dynamic proxy code generation pattern
The process for the dynamic proxy is much the same as for the static proxy; it just tweaks the compiler settings a little, and provides some speed-ups for handling the storage and retrieval of assemblies for types that have already been code-generated. The procedure is as follows:
- client requests an object of type X
- creation process is intercepted by the CLR, because the DbcAttribute is declared on the object or one of its ancestors.
- The DbcAttribute provides an object that can be used to intercept calls to the object, and it passes the creation of that object off to the dynamic proxy code generator.
- The dynamic proxy CG checks in a HashTable whether the target type has been requested previously; if so, it instantiates the generated proxy and returns.
- if no proxy has been generated for this type, it creates a scanner and a code generator as per the static proxy CG, and requests the scanner to scan the type.
- After the type has been scanned, there will be exactly one element in the ProcessedTypes context variable in the CG.
- The dynamic proxy CG then requests that the CG generate an assembly for the type, specifying that the assembly must be in-memory only.
- The assembly is generated, and it is stored along with the target type in the HashTable for future reference.
- The new proxy type is dispensed by the DbcAttribute, and the target object creation commences as normal.
- Next time a call on the target object is performed the proxy object is allowed to intercept the call, and run its checks.
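The cache-then-dispense part of the steps above might look something like the following. This is a Java sketch with hypothetical names (DynamicDispenser, generateProxyFactory); the real system keys a HashTable on the target type and the "generate" step would scan the type and compile an in-memory assembly:

```java
import java.util.*;
import java.util.function.*;

// Sketch of the caching steps above: run the (expensive) generator at most
// once per target type, then reuse the cached factory on later requests.
class DynamicDispenser {
    private final Map<Class<?>, UnaryOperator<Object>> cache = new HashMap<>();
    int codeGenRuns = 0;  // counts how often the generator really ran

    Object dispense(Object target) {
        UnaryOperator<Object> factory =
            cache.computeIfAbsent(target.getClass(), this::generateProxyFactory);
        return factory.apply(target);
    }

    private UnaryOperator<Object> generateProxyFactory(Class<?> type) {
        codeGenRuns++;        // in the real system: scan the type, emit source,
        return t -> t;        // compile in-memory, and return a checking proxy
    }
}
```

Requesting two objects of the same type should trigger code generation only once; that is the whole point of keeping the generated assembly around in the table.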
This seems almost exactly how I want the dynamic proxy to work. The client has no option but to use it if it is available. But this convenience comes at a price. That price is detachment from the state of the target object. If you want to do any interesting kind of test on the current state of the target object, you need reference access to the various public members of the type.
Without access to the target instance you can't make pre or post or invariant checks, because you can't access the state to see how it has changed or take snapshots of the initial state. It is in effect unusable. So does that mean that the dynamic proxy is incompatible with DBC?
The problem here is not in the idea of a dynamically generated proxy, it is in the kind of proxy we are using, and in its relationship to the target object. It is not connected to the target object in any way – it could even be on a different machine from the target. Even if it was directly connected (within the enabling framework) it wouldn't be of much use since it is designed to deal in IMessage objects, not target object instances.
The IMessage instance is a property bag that contains name-value pairs for each of the parameters passed into a single method called "Invoke", which then processes the parameters. This pattern will be familiar to anyone who ever wrote COM components that needed to be compatible with VB – it is a dispatch interface. This sort of thing is great if you want to log the values of parameters, or to perform rudimentary checks on the incoming parameters.
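To see why a dispatch interface caps you at these shallow checks, here's a rough sketch of such a call reduced to its property bag (Java; DispatchMessage and its members are invented, standing in for the shape of .NET's IMessage, not its actual API):

```java
import java.util.*;
import java.util.function.Predicate;

// A dispatch-style call: a method name plus name-value pairs for the
// parameters. Crucially, there is no reference to the receiver here, so
// only the parameter values themselves can ever be inspected.
class DispatchMessage {
    final String method;
    final Map<String, Object> args;

    DispatchMessage(String method, Map<String, Object> args) {
        this.method = method;
        this.args = args;
    }

    // The most a detached interceptor can do: test a named parameter value
    boolean satisfies(String arg, Predicate<Object> check) {
        return args.containsKey(arg) && check.test(args.get(arg));
    }
}
```

Any predicate that mentions the object's fields, its invariants, or a before/after snapshot of its state simply has nothing to bind to in this structure, which is the gotcha the rest of this post works through.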
We want to go a lot further than that. You can't make many assertions about how an object behaves just by describing its input and output values. And that's the problem that I have with standard interfaces – apart from a name which may or may not mean anything they only have a signature. A signature is just a list of the parameters and return types. Nice to have obviously, but not a lot of value when it comes time to work out what happens inside the implementation(s) of the signature.
So, we need direct access to the target object to be able to go any further. Does that mean that the context-bound object approach is dead in the water? I'm not sure, and this is a fairly sparsely documented area of the framework, so I haven't been able to track down an answer to that question yet. Answers on a postcard if you know a definitive one. I am going to assume so for the time being. This hasn't been wasted time – we've learned a useful fact about the limitations of this kind of dynamic proxy: it is not compatible with aspect-oriented programming paradigms. If anything, it can be the end-point of an AOP proxy chain, but it cannot sit anywhere in the middle of a chain of responsibility. It's a depressing fact of life – but useful to know.
What else can we deduce from this? We know that any kind of DBC proxy is going to need access to target state that is deterministic. The key concept here is state. Anything that transforms the state of the target object also changes the semantics. Imagine, for example, that the DBC proxy sat at the far end of an AOP chain – it would have a reference to a target object, but that object would delegate all the way down to the real target object at the other end of the chain. All of the state inquiry methods are present, but they are converted into IMessage objects and passed through the chain of proxies before arriving at the real target object. The AOP chain can modify the data going into the real target, and modify the results coming out of it. It can also change the timing characteristics, making a call synchronous where previously the interface was not, or vice versa. It could transform XML documents, or add and subtract values from parameters. The point is, when you are out of touch with the target object, you have no guarantee even that the parameters you are making tests against are the same as those received by the real target object. You consequently face a situation where the semantics are similar but potentially subtly modified, and you have to track that in your assertions.
This may sound like a rather academic discussion of the potential to modify an interface to have different semantics, but that is the essence of the adapter design pattern, and it is an occupational hazard of SOA architectures. Don't forget that when you invoke an API on a third-party software system, you think you know what is happening, but you often don't. You generally have some documentation that acts as a contract, saying what goes on inside the service when it gets invoked. We all know that such documentation often gets out of date, and we also know that there is little you can do to stop a well-meaning fool from breaking your system. That's Murphy's law at work again. If something can go wrong, it generally will. And normally it is introduced by someone who thought they were solving some other problem at the time.
So published interfaces aren't worth the paper they are written on, especially if you're playing a game of Chinese whispers through layers of other people's code. We need a system that is direct and compulsory, with no middlemen to dilute the strength of the assertions that you make on a component.
Next time I will describe how I solve this problem – probably by paying homage to the Gang Of Four…