Moderate drinking can improve memory

A new study, just released, indicates that social-drinking lab rats have better memories than those that do not drink. Researchers hung out in the same bars as the rats, asking them probing questions after nights on the town.

As an Englishman, I have been known to indulge, or even overindulge, in the past. But for some reason that I can’t fathom, over the last year or two I have stopped drinking. Perhaps I just forgot to… do whatever it was we were talking about.

Policy and Configuration II

The last time I gave some thought to the limitations of configuration systems, I was concerned that people use configuration without thinking about the consequences. Configuration seems to be a solution of first resort. I was trying to argue that this is probably an unwarranted strategy: it needs to be thought out. I argued in favour of limiting configuration. The points I made were:

  1. The implementation of configuration systems intrudes into their client systems unnecessarily.
  2. Configuration systems should produce typed objects, not name-value containers as is all too often the case.
  3. Name-value containers impose an unnecessary overhead when their contents are used regularly and are converted into non-string objects.
  4. Configuration data is just state data so it should be created and managed in the same way as any other state data, unless there is an overriding reason to not do so. Developers need to think really hard about whether there really IS an overriding reason to use configuration.
  5. Configuration data can be split into policy data and system state. Policy data controls program flow, and system state determines outputs.
  6. Most configuration data should not even be configuration data.
  7. Hierarchical configurations are best represented using native type hierarchies.

These generalizations don’t apply across the board, and some systems are better than others. The general trend is towards typed state data, but point 4 inclines me to the idea that we should hide from ourselves the fact that we are even dealing with configuration data. Polymorphism allows us to incorporate points 1, 2, 3, 4, 6 and 7 seamlessly. In this post I want to explore these ideas further and see whether we can conclude anything about how we ought to be handling configuration data, and state data in general.

Are there different types of configuration data?

Previously, I mentioned that configuration data falls into two broad categories: Policy data and State data. By ‘policy’ I mean data that determines how a system behaves at run-time. As I mentioned before, policy data is a subset of the state space devoted to making decisions about how to do things. The rest of the configuration provides input to what gets calculated during program execution. Policy acts as input to flow of control statements. The following excerpt from a configuration file shows a fairly typical example:

<add key="DefaultEmailFormat" value="html" />
<add key="SmtpServer" value="" />
<add key="ForgotPasswordMessageHtml" value="&lt;html&gt;&lt;body&gt;Dear {0},&lt;br/&gt;..." />
<add key="ForgotPasswordMessageText" value="Dear {0},\n\n ..." />

Both types of configuration data are present in this example. DefaultEmailFormat is quite clearly a policy variable: it is used to decide which ForgotPasswordMessageXXXX template is used to generate new emails. SmtpServer, meanwhile, determines where a message gets routed when it is being sent. The ForgotPasswordMessageXXXX settings are different. They are never consulted in deciding what to do; they are used instead to generate data that is intended to produce output. In this case the output is an email body.

My first thought when I noticed this division of configuration data was: surely there are items of configuration data that are both policy and state. What I’m speculating about in this post is whether config data should be divided in this way – whether such a division will allow better designs.

With a holistic view of the system, we might be able to cut through the propaganda about configuration and characterize what, if anything, distinguishes those variables that directly affect the state evolution, and why they ought not to be shoved into XML files or databases. In my previous posts on configuration I used an email system as an example to illustrate what I’m on about. So, consider two variables: SmtpServer and DefaultEmailFormat. Each is a candidate for putting in a configuration file to initialize the email system. Is there a difference between these two variables in the way they are used, or in how they affect the outcome of operations?

The SmtpServer variable tells the email system where to connect to send an email. The DefaultEmailFormat variable affects what formatting the system should use for its emails. The SmtpServer setting is just telling the system what address to connect to. The system doesn’t behave differently if a new address is put in – the same path of execution gets followed, even though the external outcome may be different. The DefaultEmailFormat variable, on the other hand, will cause different data to be sent. It will also cause different formatting tools and validators to be used, and may cause different email addresses to be retrieved. In other words, it more directly affects the flow of control in the system, without directly contributing to the state data or outputs. It doesn’t form part of the output of the system, and it is in existence from the very first moment that the program gets run. Of course, you can imagine a system where this is not the case, but for the sake of this illustration I am making the point that variables often fall into these two categories. If you were to modify the logic of the system so that it responded differently to the SmtpServer value, or paid no attention to the DefaultEmailFormat variable, you would just be changing the category into which these variables fell, without changing the nature of the categories. And it’s the fact of these categories that is the point I’m trying to make – some variables have disproportionately more influence over the evolution of the state of the program than others.

A holistic view of stateful systems

I’ve distinguished between two types of configuration data, but I wonder whether there is any real difference between configuration data and the normal state data that we consider the grist for the mill of our programs?

In the example above, policy data controls how the state of the system evolves, because the state is controlled by the branches in execution on the way to producing a result. Imagine an executing program as a sequence of states – an automaton. In an automaton, we represent the status of a system as a finite set of ‘states’. As the program progresses it moves between states, until it either loops back to its original state and starts again, or finishes in a completion or error state. All non-trivial computer programs have to maintain a record of their state, often in numerous variables and objects, and the combined set of these variables represents the current state of the system.

More often than not, we’ll find that the state variables are distinct from the normal data that the system is manipulating. Even more often they are implicitly stored in the position of the program’s thread of execution. The point is that the data that keeps track of where we are and what we need to be doing is kept apart from the data that we are manipulating. Imagine that our email program has to go through a set of states to do a mail merge. The system may start out in a ‘ready‘ state, move to a ‘collecting recipients‘ state, then on to ‘producing emails‘, ‘sending emails‘ before returning to its ‘ready‘ state again. Much of this will be implicit in the stack frames that define the sequence of method calls we make.

The system undergoes a sequence of state changes under the control of a set of policy data variables. For a program that is closed (i.e. no inputs), the evolution of states is perfectly determined by the initial state of the system at start-up. For a system that is not closed, i.e. one that receives input from outside, such as user input or data from some other source such as a database, we might be tempted to regard it as in some way random.  I prefer to think of the external input as still part of the state, but stored elsewhere. Hence what I meant by Holism.

Configuration data, although externally stored, is still state data. It’s just more obvious when it has been brought into the system and turned into objects. Likewise, we can treat user input and all other input as just another form of state. The user is a kind of external database for state data. The system uses the GUI as a form of retrieval mechanism (a query API?) to get that data on demand.

This is a bit of a simplification of the real state of affairs, because policy data and non-policy data can swap roles at will. So perhaps it’s better to regard the whole body of data within reach of the system as state data. What would the consequences be if we could categorically claim that each item of data fell into one role or the other? Would that constitute a good or a bad design? Would it be harder or easier to think in these terms when designing object models? Would we treat members of those object models differently? Would we be able to make stronger assertions about how state will change during the course of a method invocation?

Whenever I am designing a system, my mind works as hard to break the conceptual design as it does to construct it in the first place. What would happen if we had a design that used a variable both as input to and output from a calculation and as a policy variable? Let’s construct an example. Imagine a piece of code that derives its SMTP server name from a policy variable:

string server = DefaultEmailFormat.ToString() + "";
SendHtmlMail(server, "Message from " + server);
SendTextMail(server, "Message from " + server);

It’s a bit contrived, I know, but I’ve seen worse in production systems, believe me! Now, the server variable is constructed out of a policy variable and is then used both as a policy variable and as normal state data (as part of the message being sent). What conclusions can we draw about this piece of code? The first thing to note is that the code becomes more brittle: changing a single variable now affects not only the path of execution of the system but also its outputs, and the effects will potentially compound over time. By keeping the policy variable separate from the state variable, we are able to independently manipulate the behaviour of the system or the content of its outputs. Thus we stand a better chance of being able to maintain the system over time. There is also a hit on the comprehensibility of the system, which makes it harder to maintain and extend. This is a form of coupling that, once embedded into the logic of your system, can be very hard to weed out without extensive refactoring. It therefore seems to me that careful division of variables between policy and state can contribute to the long-term viability of a software system.

There is another issue that I want to discuss next, which is another defining feature of policy variables: the frequency with which they are accessed. The frequency of access of variables can be used to justify special treatment for policy variables. That point is a specific case of a general problem – the misplacement of data in the memory pyramid.

Frequency of Reference

Modular software systems tend to follow the ‘transaction script‘ pattern in making state changes. That means that at any stage they are working on modifications to a specific subset of the data in the system. In Outlook, you wouldn’t expect that adding a new task would also modify archived messages or set up new appointments. That principle is called the principle of locality of temporal reference (PLTR). If like or related data are kept together, then changes and references to data will tend over time to cluster together. That is – if you access a variable now, the chances are you’re going to need it again soon. That’s the principle on which working sets in virtual memory systems are based. If you are working on a piece of data, bring it into memory for a while along with other data nearby. That way, when you need to update a related piece of data, it will probably be to hand.

A memory pyramid defines areas of access relative to chances of access. Therefore if you expect to refer to or change a piece of data frequently, don’t put it out to disk. Keep it in memory, so that you don’t incur unnecessary delays when you want to get to it. The concept of the memory pyramid stems from the insight of the PLTR in deciding where to put state data in a system with memory constraints. In a memory pyramid, data is inherently mobile: it moves up and down the pyramid as the chances of accessing it rise or fall. In many cases the operating system takes care of moving data between levels of the pyramid. In some key cases, such as configuration, you have to do it yourself. If you do not make a choice, you have still made a choice – probably not one you were counting on.

Figure 1. A Memory Pyramid, showing types of data according to how much we can store there, and how fast we can get at it.

The majority of configuration data is incorrectly placed in the local-disk tier of the memory pyramid. It is also generally stuck where the developer puts it, and cannot move. By drawing the distinction between policy variables and normal state variables, I hope to highlight that some configuration should be elevated to RAM while other configuration should be relegated to remote servers.

Locality of Spatial Reference – or Use Case for short 😉

Most of us write modular code these days. With the help that modern refactoring tools provide, we can easily factor out duplication in our code and move it about until we have a tidier, more normalised class design. Which brings me to another kind of locality-of-reference principle – the principle of locality of spatial reference (PLSR). This is related to the PLTR and works in the same way: when you access a piece of data, you are likely to access pieces of data nearby or related to it. That means you should cluster semantically related data together, and move it around in the memory pyramid as a chunk. So, another criticism of standard configuration systems, such as that natively provided by .NET, is that all configuration is treated as an undifferentiated mass without semantic linkage. Perhaps something like RDF should be used to store configuration data. That way, we could at least know what goes together and how to treat it.

There are policy variables, used in specific use cases, that will for a while be accessed continually and then not be used again for seconds or hours. The PLTR tells us that if we keep them around in memory after that time, we are probably just wasting space. Likewise, if we leave them out of memory when they are most in use, we are causing disk access that could be avoided.

What we should do is assess the usage patterns of our configuration data and use that as a guide on how to put it in its rightful place in the memory pyramid. That either means we need a much more sophisticated configuration system or, as I think, a vastly simpler one.

In Summary

This time round I’ve tried to highlight some other concerns about how we use configuration. In addition to criticisms of needless overheads and lack of type-safety, the way we treat configuration is symptomatic of a general issue about mobility of data. Next time I tackle configuration, I’ll try to focus on some real-world concerns that may influence how we should treat state data in modern applications: issues such as thread safety, spatial distribution, consistency, mutability and system complexity. What do we know so far?

  • Some state data is used for decision making, while the rest is used as input to and output from calculations
  • Designs that ignore this distinction can lead to brittleness, unreadability, inflexibility and maintenance problems
  • All data in a system (or within reach of it) can be thought of as system state.
  • Data tends to be accessed intensively for a period, and when an item is accessed, related data is likely to be accessed too.
  • State data exists in a memory hierarchy (a memory pyramid) – whether we like it or not.
  • Most configuration data is improperly placed in the memory pyramid. Too high in the case of normal state data, and too low in the case of policy data.
  • State data has a semantic structure that will affect its usage patterns, and these structures should determine how we handle them WRT memory management
  • All state data, but especially critical configuration, should be mobile inside the memory hierarchy.

I don’t think I have all the answers here – perhaps it’s more accurate to say that I have more interesting questions. So why don’t I finish off with some? Try answering them and let me know if you come up with different answers.

  • Should policy variables be stored and treated differently from non-policy state data?
  • Where should policy data be stored?
  • Is policy data consulted more or less often than other state data?
  • Where should state data be stored generally?
  • Is caching an issue for consistency or performance? If so, how should we cache state data generally, and how should we expose our caching technology to the rest of our systems?
  • Should configuration data be retrieved on demand, or brought into memory?
  • Should state data follow the memory hierarchy?
  • Does the treatment of state data need to follow the memory hierarchy according to usage patterns, or can we manage it all in a block? How do we represent a block of related data? How will that change between different types of data? (i.e. sparsely or highly interconnected data)
  • Do policy and state data fall into different parts of the memory hierarchy?
  • What about other types of data in the system? Are there other types of data classifications?
  • What are the forces at work in making decisions about how state data is managed? i.e. memory size, speed, latency, volatility etc

Cutting through the hype of SOA

Grady Booch seems to be back into his stride after his recent illness. In his most recent blog post (Thursday, October 12, 2006) he criticises the unscrupulous advocacy of SOA as though it were a silver bullet. He also points out that any successes of SOA were more than likely due to the application of architecture, rather than service orientation; that SOA is a platform-specific kind of message-passing architecture that capitalises on the benefits of web technologies to simplify message passing on the internet; and that a considerable amount of design effort must go into producing a good SOA.

So what did I take away from this? That there is still no substitute for good design, and any project team that thinks it can automatically get a good system by using an acronym rather than devote time to design is heading for embarrassment…

AudioFiler – My Search is {over/just begun}*

* delete as appropriate

I was cruising around the site looking for a .NET ID3 tag API when I came upon AudioFiler. AudioFiler is a great application, and one I’ve been searching for for years. Perhaps I should explain. I started creating and collecting MP3s back in about 1997, and since then my collection has grown a little. Back then there was a cool tool called TrackManager, by Nick de Jong. It was the perfect collection browser: it allowed filtering and multidimensional views, and it let you keep collections in multiple places (i.e. CDROMs) and then merge their contents to tell you where to find something you’re after. All in all it was a great application. It was ugly as hell, but that didn’t bother me much, because it started up Winamp when you wanted to play something.

The problem was that it started to get a bit flaky when your collection got above 500 songs (which happened pretty quickly), so I had to abandon it. Since then I have tried the media managers from a thousand different players, including iTunes, and found that they didn’t help when I tried to visualise the whole collection from different angles. At last AudioFiler provides a search engine comparable to TrackManager. It’s ugly, and it invokes Winamp, but I think it may be just what I need. Perhaps now I’ll rediscover all that stuff I know I’ve got but can never find. Anyone for some Riot Grrl?

Generic Algorithms in C#

This post describes generic programming and the capabilities of generics in C#. I conclude that C# generics, and C# generally, are not able to solve problems that are easy in the Standard Template Library. Nothing would make me happier than for you to demonstrate that my conclusions are wrong. If that’s true, please don’t keep it to yourself!

Ever since moving to C# from C++, I’ve missed the expressive power and elegance of the Standard Template Library (STL). When generics were introduced in C# 2.0, I assumed that to gain runtime type safety, some of that expressive power had to be sacrificed. My initial impression was that generics were good only for generic collections, but not for the idioms that revolutionized C++ when the STL was adopted, and after Alexandrescu published his eye-opening Modern C++ Design. I’d never found the time to confirm my misgivings, so I recently made time to explore the issues – I’ve been working on a .NET project built on legacy code which was – how can I put it delicately – written to the best practice of yesteryear! I have a pressing incentive to find clean, expressive and sexy ways to enhance the code-base and architecture. Obviously, I’m breathless with anticipation of Linq in all its forms, but I have a job to do now with the tools at hand.

I don’t think I’ve ever seen syntactic and idiomatic support in any framework to better STL as it stood a decade ago. Don’t get me wrong – I love C# – but I regret that I can’t combine the best of both worlds. Are generics in C# 2.0 too impoverished to be able to provide the power of STL? Are we able to provide the full power that has until now been the exclusive reserve of STL developers?

Algorithms + Data Structures = Programs

The STL stems from work done by Alex Stepanov and Meng Lee, and earlier work on generic programming by Stepanov and Musser. Their object was to produce a set of general purpose container classes and associated algorithms that could exist independently. In languages that don’t support generics, you have four choices when you want to provide standard algorithms on containers:

  • you implement every method that you need on each container class (The MFC approach).
  • you code all containers and algorithms in terms of some common data type (the .NET System.Collections approach)
  • you implement a code generation system that creates strongly typed containers and algorithms (the solution in GCC prior to the introduction of templates in C++)
  • you don’t bother implementing algorithms at all (The C5 approach).

None of these options is particularly desirable, so Stepanov & Co developed an idiom for coding in C++ that allowed algorithms to be developed independently of data structures. This is an important point – you can develop a sort method that will sort a linked list as easily as an array. That is not easy to achieve. Their approach was to employ iterators, a set of coding conventions, and type exports.


Iterators are vital – they provide the topology-independent mechanism for getting at the elements in a collection. The iterators in STL are based on pointer semantics, and in the case of arrays may actually be implemented using a pointer. The key feature of a pointer is that it provides a way to dereference a collection element (using operator*) and to move forward and backward through the collection (operator++ and operator--). In .NET, IEnumerator<T> provides roughly the same functionality, and in general it is implemented as a nested class within a collection. There’s nothing sacred about pointer syntax. We can implement an iterator without using operator overloading; the key thing is pointer semantics. Pointers treat a block of memory as ordered, contiguous and typed. By being typed, you know how far to move forward to get to the beginning of the next object. In other words, you can easily move forward through the ordered collection of objects – in C/C++ by using operator++, but it could just as easily be by using a MoveNext method.

Option 2 is what we are currently faced with in C#’s System.Collections – all container classes store references to System.Object. If you want to ensure type safety in your system, you’ll probably find all accesses to the container wrapped in casts and try-catch blocks to ensure no foreign bodies are introduced by accident. It’s nasty, messy and, these days, unnecessary when you have generics. The burning question is – if you can have generic data structures, does that mean you can also have generic algorithms?

The answer to that is a resounding NO! Which is the reason why a lot of ex-C++ programmers were sceptical about the usefulness of generics in C#. Sure, they can help you store stuff more easily, but on their own they can’t provide a rich language in which to define generally useful, reusable functionality. In STL, iterators are hand-coded nested classes in a collection. Their behaviour is dictated by the iterator category to which they belong. Iterators are classified as input, output, forward, bidirectional, or random access. They can also be const or non-const.

It’s worth quickly noting at this point that we can’t tack iterators onto the collection classes using external methods. We can attach methods such as Begin and End to the collections, but the iterators can’t access the storage managed by those collections – they’re strictly limited to accessing public methods, properties and fields. They are limited by the accessors already provided by the collections. The iterators could potentially be limited to one form of iteration – the forward iterator of IEnumerator<T>. That isn’t a hard and fast rule, but it is a concern!

In C#, the standard collection classes define iterators by implementing the IEnumerable<T> interface. IEnumerable<T> provides a means for algorithms to get at the IEnumerator<T> interface which is the actual iterator. However, STL would classify the in-built iterators as ‘Trivial Iterators‘. With the advent of the yield keyword, we can now define more than one iterator on a collection. This overcomes one hurdle to generic algorithms – different algorithms will require different types of iterator – if the framework only supports one kind, then some algorithms may be ruled out.

The interface to IEnumerator<T> is very simple.

public interface IEnumerator<T>
{
  bool MoveNext();
  T Current { get; }
  void Reset();
}

It’s clear that we can’t do random access with this type of iterator, and we can’t move backwards through the collection either. Since yield allows only trivial iterators we will probably have to use an alternative mechanism to iterate collections. Perhaps we need to expand the capabilities of IEnumerator<T>? The iterators will need to be constrained by coding conventions (of which more later on). There is nothing in principle to stop us doing whatever was possible with iterators in C++, in the same way – after all there is no in-built runtime support for iterators in C++, yet it copes well.

Generic Algorithms

In generic methods you often define your method signatures in terms of iterators. If you defined (in C++) a vector of integers like so:

vector<int> vint;

and you defined an algorithm called Double like so:
template <typename Iterator>
void Double(Iterator start, Iterator end)
{
  for (Iterator i = start; i != end; ++i) {
    *i *= 2;
  }
}

Then you could double the elements in the collection like so:

Double(vint.begin(), vint.end());

This performs an in-place doubling of each element from start up to, but not including, end. Sadly, you just can’t do this in C#. It’s not possible. The C# compiler doesn’t know at compile time whether the elements returned by an Iterator are numerical and can support the ‘*=’ operator. We cannot define operators on an interface, and therefore we cannot constrain the iterators to those defined on collections that store objects supporting the ‘*=’ operator. Numerical types don’t share a common base class that we could use to distinguish them from other structs. Besides, we can’t define a constraint against a type argument of another generic type. For example, say you defined a List of integers and you passed a pair of iterators to Double: Double would not know that they were iterators over a collection containing integers, because all it knows is that it has a type argument called Iterator that could be anything at all.

Let’s summarise the problems with this simple example for later reference:

  1. Can’t define operators on an interface, and thus can’t constrain type arguments to types that support particular operators
  2. Can’t issue constraints against type arguments of type arguments
  3. No consistent last item
  4. No static type checking

It would be nice to write some code like this:

if (Iterator.T is Numeric) {
  for (Iterator i = start; i != end; i++) {
    i *= 2;
  }
}

Or, better still

void Double<Iterator>(Iterator start, Iterator end)
  where Iterator.T : Numeric
{
  // ...
}

But it is not to be – at least in this version. Even if we could define constraints that specified the operators, we would not be able to retroactively attach the interfaces to types in the framework. Oh, and another thing – IEnumerator<T> doesn’t support equality comparison, which rules out iterating between ranges. Iterators in STL support the ‘one beyond the end’ concept, which gives a collection a definite value for its end – one that (for node-based containers such as std::list) doesn’t go stale when new elements are appended. Contrast that with the following implementation of System.CharEnumerator.Current.

public char Current {
  get {
    if (this.index == -1) {
      throw new InvalidOperationException(
        Environment.GetResourceString("InvalidOperation_EnumNotStarted"));
    }
    if (this.index >= this.str.Length) {
      throw new InvalidOperationException(
        Environment.GetResourceString("InvalidOperation_EnumEnded"));
    }
    return this.currentElement;
  }
}

As soon as the iterator gets to the end of the collection it will not allow a dereference of collection elements. Instead it begins to throw exceptions. If this is typical of the framework, it doesn’t bode well for algorithms like Double above, since you can’t dereference one beyond the end of the collection without throwing exceptions.

Without a common, immutable end-of-collection marker that can be passed to an algorithm, we are going to have problems. Firstly, we will have to provide algorithms both for the whole collection and for a range within that collection. Perhaps that’s why the canonical generic algorithm on System.Collections.Generic.List<T> is ForEach? Without a guaranteed end-of-collection iterator value, we can’t be sure of iterating the whole of a collection that is being inserted into while it is being iterated.

Shortcomings of C#

Using a convention about how collections expose their type parameters, STL allows a thorough introspection into data types used in a collection and its contained elements, without any prior knowledge of the types.  In a Genetic Algorithm, I never need to know how my Genomes are encoded to be able to evolve them. In C# I can get an iterator such as IEnumerable<T>, but if T has further detail inside it, I have no easy way to get at it, because there are no typedefs or their equivalent inside IEnumerable. The code in my example above is completely illegal. You can’t do Iterator.T because T is not accessible from Double. All it knows about is Iterator. It could define a constraint to say that its iterators must be IEnumerator<T> but even then, it still can’t work without explicitly specifying the types iterated over.

Some collection libraries use a pattern of defining T (the element type) on the generic method, and then defining their interfaces to accept IList&lt;T&gt; or IEnumerator&lt;T&gt;. This is partially generic – it allows the iterated elements to be parameterised, but not the collection types or their iterators.

What’s so bad about that, you might ask. Well, the type parameters would contain redundant type definitions which would be messy and would make the code brittle. In addition, that approach would only be useful if you knew how many levels you would need to dig into the type definitions. If you wanted to look inside T of IEnumerator<T>, you would have to provide extra parameters in the cases where T was itself generic. Obviously, messing with the parameter lists of a generic type or method won’t help. So, we need some sort of type export mechanism.
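C++ shows what such a type export mechanism looks like in practice: nested typedefs let generic code dig arbitrarily deep into a parameter's type without those inner types ever appearing in the parameter list. A sketch, using std::map, whose value_type is a pair that in turn exports first_type (FirstKey is a hypothetical illustration, not a standard function):

```cpp
#include <map>
#include <string>
#include <cassert>

// Recovers "the key type of whatever container I'm given" two levels deep:
// C exports value_type, and value_type exports first_type. The caller never
// has to spell out the key type as an extra parameter.
template<typename C>
typename C::value_type::first_type FirstKey(const C& c) {
    return c.begin()->first;
}
```

If the element type changed from pair&lt;string, int&gt; to pair&lt;long, int&gt;, FirstKey and every algorithm like it would pick up the new key type automatically.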

So, how do I find out more about the type parameter of IEnumerable<T> without using reflection, or what elegant workarounds and/or conventions could I adopt to make such type access easy? These problems are not specific to the realm of evolutionary programming – I just chose that because it’s fun – there is currently work being done by Anders Hejlsberg and the rest of the C# design team to find (or bake in) ways to achieve this in future versions of C#.

Exporting Types

One feature of C++ that is used liberally in the STL is the typedef. typedef in C++ is useful for allowing type parameters to be inspected by algorithms; I’ll show a few examples of typedefs at work below. This is a critical difference between the template system of C++ and the generics system of C#. In C# there is very little information flow between the generic type argument and the type constraints. C++ checks for type errors at compile time, but it does not do this by checking type arguments against a lookup table of permissible types, as is the case with C#. Instead it attempts to find bindings for methods, fields and typedefs for each specific constructed type. Consider the following example. I’ve defined a class A with a public method Foo, and then a template method Bar that takes a vector of T.

class A { public: int Foo() { /* ... */ return 0; } };
// ...
template<typename T, typename R>
R Bar(vector<T> arg1) { return arg1[0].Foo(); }
// ...
int result = Bar<A, int>(someVector);

There is nothing about the definition of Bar that tells it that it is dealing with objects of type A, and there is no constraint restricting Bar to classes of type A or its descendants. A C# compiler would produce a compile-time error for code like this. C++ gets around the problem by type-checking each instantiation: it can see that when we invoke Bar with a vector&lt;A&gt; we are OK, and it will only issue compilation errors where you try to invoke Bar with a vector of some other type. This static checking is powerful. It allows generic methods to access typedefs in their type arguments. Imagine you were writing a genetic algorithm, and you wanted to do a random mutation on randomly chosen population members.

template<typename PopType>
typename PopType::GenomeType MutatePopulation(PopType population){
 typename PopType::GenomeType gen;
 for(typename PopType::iterator i = population.begin(); i != population.end(); i++){
  gen = StochasticInvoke(Mutate(*i), PopType::MutationProbability);
  *i = gen;
 }
 return gen;
}
In the above example I have accessed type definitions to get iterators, and genome types used to declare local variables. Without this kind of thing we are forced to add internal types on generic parameter lists:

GenomeType MutatePopulation<PopulationType, PopulationIteratorType, GenomeType>( PopulationType pop){
 for(PopulationIteratorType i = pop.begin(); i != pop.end(); i.MoveNext()){
  GenomeType gen = StochasticInvoke(Mutate(i.Current), pop.MutationProbability);
  *i = gen;

As explained earlier this is nasty. It means that if I were to find a different way to represent GenomeType, I would have to change every place that uses a genome, rather than just the one place that stores the genomes. With the aid of typedefs I can just modify the declaration of my population collection, and the type will flow out into every generic method that works with populations. I want to be able to define the data type of a variable once and thereafter not have to mention it again, since clients can get it from the parameters I pass around.
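The "define it once and let it flow" idea can be sketched like this: the population container exports its genome type as a typedef, and generic code recovers it from the parameter instead of naming it again. (Population and FirstGenome are hypothetical names for illustration, not from any library.)

```cpp
#include <vector>
#include <cassert>

// The container is the single place that knows what a genome is.
template<typename T>
struct Population {
    typedef T GenomeType;   // exported for generic algorithms to use
    std::vector<T> members;
};

// GenomeType flows out of PopType; callers never spell it out.
template<typename PopType>
typename PopType::GenomeType FirstGenome(const PopType& p) {
    return p.members.front();
}
```

Changing the genome representation means editing only the Population declaration; every generic method that touches populations picks up the new type at its next compilation.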

Type exporting using extension methods

Because C# generics lack this kind of instantiation-time type checking, we can’t declare variables of types exported inside of type arguments. We can pass them around, though. Using extension methods (currently in pre-alpha release in the May CTP of C# 3.0) we can augment System.Type to provide access, with the aid of reflection.

namespace System {
  public class TypedefAttribute : Attribute { }

  public static class TypeExtensions {
    // Cache exports per type, so one type's typedefs don't mask another's
    private static IDictionary<Type, IDictionary<string, Type>> typedefs =
      new Dictionary<Type, IDictionary<string, Type>>();

    public static Type GetTypedef(this Type source, string typedef) {
      if (!typedefs.ContainsKey(source)) {
        typedefs[source] = ReflectTypeExports(source);
      }
      if (typedefs[source].ContainsKey(typedef)) {
        return typedefs[source][typedef];
      }
      throw new ApplicationException("No type export named '" + typedef + "'");
    }

    public static IDictionary<string, Type> ReflectTypeExports(Type source) {
      Dictionary<string, Type> result = new Dictionary<string, Type>();
      // Get all properties adorned with TypedefAttribute from source
      var q = from p in source.GetProperties()
              where p.GetCustomAttributes(typeof(TypedefAttribute), true)
                 .Length > 0
              select p;
      foreach (System.Reflection.PropertyInfo pi in q) {
        result[pi.Name] = (Type)pi.GetValue(null, null);
      }
      return result;
    }
  }
}

namespace extensions {
  public class A<T> {
    [Typedef] public static Type TType { get { return typeof(T); } }
  }

  public class B<T> {
    [Typedef] public static Type TType { get { return typeof(T); } }
  }

  class Program {
    static void Main(string[] args) {
      Type t = typeof(A<B<int>>).GetTypedef("TType"); // B<int>
    }
  }
}

This is fine for passing types around, but you can’t do much with it other than treat it as a variable. You can’t declare a variable like so

typeof(PopulationType).GetTypedef("GenomeType").GetTypedef("AlleleType") allele = GetRandomAllele(iterator.Current);

You could use the old trick of declaring the variable as object, but then you’ve got a System.Object, and what do you cast it to? There’s no type safety.

Let’s take a look at a good collection of generic algorithms: the PowerCollections library from Wintellect. Here’s a typical example from its generic methods class, Algorithms:

public static IEnumerable<T> Rotate<T>(IList<T> source, int amountToRotate) {
  if (source == null)
    throw new ArgumentNullException("source");
  int count = source.Count;
  if (count != 0) {
    amountToRotate = amountToRotate % count;
    if (amountToRotate < 0)
      amountToRotate += count;
    // Do it in two parts.
    for (int i = amountToRotate; i < count; ++i)
      yield return source[i];
    for (int i = 0; i < amountToRotate; ++i)
      yield return source[i];
  }
}

The method declares a type parameter for the collected type, and then fixes the type of the collection. That is pretty much all you can do in C# without typedefs. What happens if you decide that your algorithm could be applied to any ordered type that can be iterated, rather than just an IList&lt;T&gt;? Well, you could use iterators, but we know from the earlier discussion that IEnumerator&lt;T&gt; doesn’t provide a fixed marker for the end of a collection. We may run into consistency and thread-safety issues, because the end of a collection could change continually, so it can’t be passed around.
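For comparison, the STL already ships this operation in the iterator-range form the paragraph asks for: std::rotate takes a range rather than fixing the container type, so one algorithm serves vectors, lists, deques and raw arrays alike. A sketch (RotateExample is a hypothetical wrapper for illustration):

```cpp
#include <algorithm>
#include <vector>
#include <cassert>

// std::rotate(first, middle, last) makes 'middle' the new first element;
// rotating {1,2,3,4,5} around begin()+2 is a left rotation by two.
std::vector<int> RotateExample() {
    std::vector<int> v{1, 2, 3, 4, 5};
    std::rotate(v.begin(), v.begin() + 2, v.end());
    return v;
}
```

Because the arguments are iterators, the same call works on any sub-range of a container, not only on the whole thing.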


We’ve seen that the Standard Template Library is a product of convention and integrated language support. You can get a long way with just iterators. Do those iterators need to be stable in a multithreaded environment? Probably. I attended a talk at TechEd Sydney given by Joel Pobar a few weeks back, where he pointed out that for developers to exploit the coming multi-core machines, they would have to embrace multi-threading. That implies that our generic algorithms will have to be designed to work in a thread-safe way. You can sacrifice flexibility in your choice of containers to allow iteration, as has been done in the PowerCollections library. You could try to augment the iterator mechanisms to allow the integrated language support of C# to dovetail with a better collections library design. This was done in the C5 library, where const iterators and bi-directional iterators have been grafted onto the existing collection iteration mechanism.

For truly generic algorithms (rather than shallow transformative operations such as the PowerCollections’ Rotate method given above) we need types to be flexible and informative. They are informative, but only about the type itself, not about the relationships that the type has with other types; and what relationships the type does have are not usable at compile time. That’s where the typedef mechanism in C++ comes in. In C#, we are hindered by the richness of the reflection mechanism: types are runtime things, passed around just like other objects. We need them to be as flexible and informative at compile time, and currently they aren’t. We may be able to do extraordinary things in future releases of C#, but for now we will never have the same degree of genericity as we have in C++. However, with a bit of good design and some compromise, we can produce some very flexible algorithms of lasting value.

If I could ask for one thing to be included in future versions of C# – it would be static type checking, allowing generic type arguments to carry more information than the minimum required to create a concrete type at runtime. I want typedefs.