An Ominous Blog Post

Normally I have a very optimistic outlook, especially when it comes to technological breakthroughs. But this morning I was given pause for thought. MAKE magazine carried a news article today about a highly accurate $10 DNA replicator. I am fully convinced that such breakthroughs can be used to tackle the issues of world poverty, but I’ve just finished reading Tomorrow’s War by David Shukman. It was written over ten years ago, and before 9/11, but even then it was gloomy about our chances of controlling the proliferation of expertise in the production of WMD.

At this rate, the techniques and the resources for biological weapons development will be freely available, but the skills needed to combat them will not. I just think of the irresponsibility of computer hackers and virus writers – who often wreak havoc without any thought of the costs or consequences. If such power can be unleashed in the real world, then we are in far more danger than we ever were during the Cold War.

That is a doomsday scenario, if you ask me.

A Cure for Cancer

From KurzweilAI. If this isn’t worth blogging about, then nothing is. Could this be the most historic announcement ever to arrive in my inbox?

Cheap, safe drug kills most cancers
news service, Jan. 20, 2007

University of Alberta scientists have tested dichloroacetate (DCA) on human cells cultured outside the body and found that it killed lung, breast and brain cancer cells, but not healthy cells…

Dodgy theory of the week

Aymeric put me onto a cool application called Neuro Programmer 2 (from the sinister-sounding Transparent Corporation – makes me think of Umbrella Corporation), which generates brainwave entrainment sessions. We tried out one called Caffeine Replacement, and were quite amazed at the effect it had on us. We both ended up euphoric afterwards.

That got me thinking: if you can deliberately entrain your brainwaves to give you more vitality and concentration, isn’t it possible to accidentally find yourself in an environment that has the opposite effect on you? I wonder whether the distinctive sounds of computer fans, air conditioning, strip lights and conversational hubbub in an open plan office can combine to have a suppressive brainwave entrainment effect. Maybe that contributes to sick building syndrome?

Imagine how cool it might be to be able to use your laptop’s speakers to modulate the background noise to put you in the frame of mind you require. You could have a menu in the system tray with options like [Receptive & Creative], [Relaxed], [Energetic]. I would like that.
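I have no idea how Neuro Programmer actually builds its sessions, but the classic entrainment trick is the binaural beat: play slightly different frequencies into each ear and the brain perceives the difference as a rhythmic beat. Here is a minimal sketch of that idea in Python – the function name and parameters are my own invention, not anything from Transparent Corporation:

```python
import math
import struct
import wave

def write_binaural_wav(path, left_hz=200.0, right_hz=210.0,
                       seconds=2.0, rate=44100, amplitude=0.3):
    """Write a stereo WAV whose channels differ by (right_hz - left_hz) Hz.

    The 10 Hz difference here is in the 'alpha' band, which is the
    sort of frequency entrainment software claims promotes relaxed focus.
    Returns the number of frames written.
    """
    n_frames = int(seconds * rate)
    frames = bytearray()
    for i in range(n_frames):
        t = i / rate
        # One pure sine tone per ear, 16-bit signed samples.
        left = int(amplitude * 32767 * math.sin(2 * math.pi * left_hz * t))
        right = int(amplitude * 32767 * math.sin(2 * math.pi * right_hz * t))
        frames += struct.pack('<hh', left, right)
    with wave.open(path, 'wb') as w:
        w.setnchannels(2)   # stereo: the effect needs separate ears
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return n_frames
```

A real tray application would presumably crossfade between presets like these rather than writing files, but the signal generation would look much the same.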

[drool] Nanotube Muscle Fibres Create Superhuman Strength

Researchers at the University of Texas have developed a technique to create artificial muscle fibres using carbon nanotubes woven into a yarn a mere 2 microns thick. Each fibre has a strength equal to 100 times that of equivalent human muscle fibres. It is expected that this will have revolutionary consequences for the prosthetic limb and robotics fields. I wonder whether the long-term outlook will be for elective muscular enhancement. Imagine having your strength augmented (invisibly) by 100 times. Even 10 times would probably create such a sense of ease and well-being that it would be worth the money. Imagine all those who currently suffer from wasting or spastic musculature.

I wonder if they would be able to attach these fibres to bones? What kind of control infrastructure would it take to drive them? I imagine that at human scales these muscles might have to be paired with ceramic bones – ordinary bones might seem a bit fragile once we can exert these kinds of forces.

It appears that these yarns are not very elastic, and so can become slack over time. There are also problems of scaling these yarns up to human dimensions. Hopefully, these issues are not show stoppers – I would REALLY like to see this technology come to fruition.

Fig. 1. Nanotube Muscle Fibres

Click on the image to go to the original article.

Moderate drinking can improve memory

A new study, just released, indicates that social-drinking lab rats have improved memories over those that abstain. Researchers hung out in the same bars as the rats, asking them probing questions after nights on the town.

As an Englishman, I have been known to indulge, or even overindulge, in the past. But for some reason that I can’t fathom, over the last year or two I have stopped drinking. Perhaps I just forgot to… do whatever it was we were talking about.

Cord blood yields ‘ethical’ embryonic stem cells

New Scientist is carrying a story about the use of cord blood in the preparation of embryonic stem cells. When K & I saw references to the preservation of cord blood around the hospital, we thought it was to be kept for later use by the baby, in case of operations or anemia. Now I am wondering whether they actually want it for other people – in which case my eagerness is slightly less!

And there’s another thing – what is different about the blood flowing through the cord anyway? I thought that a baby’s immune system was suppressed during gestation and for a while after (while breast-feeding?) and that that was necessary since the baby was receiving umbilical infusions of its mother’s blood (and antibodies etc). Is the blood different in the placenta? Is the placenta maintaining its own blood supply that is cut off from that of the mother? How does it get oxygenated?

So many questions.

Brain modeling – first steps

The following appeared in KurzweilAI:

IBM and Switzerland's Ecole Polytechnique Federale de Lausanne (EPFL) have teamed up to create the most ambitious project in the field of neuroscience: to simulate a mammalian brain on the world's most powerful supercomputer, IBM's Blue Gene. They plan to simulate the brain at every level of detail, even going down to molecular and gene expression levels of processing.

Several things come to mind after the initial "coooooooooooooool!!!!!". The first is that they are considering a truly vast undertaking here. Imagine the kind of data storage and transmission capacity that would be required to run that sort of model. Normally when considering this sort of thing, AI researchers produce an idealised model in which the physical structure of a neuron is abstracted into a cell in a matrix, able to represent the flow of information in the brain in a simplified way. What these researchers are suggesting is that they will model the brain in a physiologically authentic way. That would mean that rather than idealising their models at the cellular level, they would have to model the behaviour of individual synapses. They would have to model the timing of signals within the brain asynchronously, which would increase both the processing required and the memory footprint of the model.

Remember the model of auditory perception that produced super-human recognition a few years ago? That was based on a more realistic neural network model, and had huge success. From what I can tell it never made it into mainstream voice-recognition software because it was too processor-intensive. This primate model would be orders of magnitude more expensive to run, and although Blue Gene can perform a couple of calculations in the time light takes to travel a micrometre, it will have an awful lot of them to do. I wonder how slowly this would run compared to the brain being modelled.

I also wonder how they will quality check their model. How do you check that your model is working in the same way as a primate brain? Would this have to be matched with a similarly ambitious brain scanning project?

Another thing this makes me wonder (after saying cool a few more times) is what sort of data storage capacity they would have to expend to produce such a model. Let’s do a little thumbnail sketch to work out what the minimum might be, based on a small primate like a squirrel monkey, with similar cellular brain density to humans but a brain weighing only 22 grams (about 2% of the mass of the human brain).

  • Average weight of adult human brain = 1,300 – 1,400gm
  • Number of synapses for a "typical" neuron = 1,000 to 10,000
  • Number of molecules of neurotransmitter in one synaptic vesicle = 10,000-100,000
  • Average number of glial cells in brain = 10-50 times the number of neurons
  • Average number of neurons in the human brain = 10 billion to 100 billion

If we extrapolate these figures for a squirrel monkey, the number of neurons would be something like 1 billion cells, each with (say) 5,000 synapses, each with 50,000 neurotransmitters. Now if we stored the 3D location of each of those neurotransmitters, we would need a reasonably high-precision location – say, a double-precision float per coordinate. That would be 8 bytes × 3 dimensions × 50,000 × 5,000 × 1,000,000,000, which comes out at 6,000,000,000,000,000,000 bytes, or about 6 million terabytes. Obviously the neurotransmitters are just a part of the model. The patterns of connections in the synapses would have to be modelled as well. If there are a billion neurons with 5,000 synapses each, there would have to be at least 5 terabytes of data for the synaptic connections (even at one byte per synapse). Each of those synapses would also have its own state and timing information – maybe another 100 bytes or more, or another 500 terabytes.
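The thumbnail arithmetic above is easy to check in a few lines of Python. All the figures are the rough assumptions from the sketch, not anything published by the IBM/EPFL project:

```python
# Back-of-envelope storage estimate for the squirrel-monkey model.
NEURONS = 1_000_000_000             # ~1 billion neurons
SYNAPSES_PER_NEURON = 5_000
TRANSMITTERS_PER_SYNAPSE = 50_000
BYTES_PER_COORD = 8                 # double-precision float
DIMENSIONS = 3                      # x, y, z position

synapses = NEURONS * SYNAPSES_PER_NEURON   # 5e12 synapses

# A 3D position for every neurotransmitter molecule.
transmitter_bytes = (synapses * TRANSMITTERS_PER_SYNAPSE
                     * DIMENSIONS * BYTES_PER_COORD)

# State and timing for every synapse, at ~100 bytes each.
synapse_state_bytes = synapses * 100

TB = 10**12
print(f"neurotransmitter positions: {transmitter_bytes / TB:,.0f} TB")  # 6,000,000 TB
print(f"synapse state:              {synapse_state_bytes / TB:,.0f} TB")  # 500 TB
```

Dividing that 6 million terabytes across a world population of several billion gives roughly a gigabyte per person.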

I suspect that the value of modelling at this level is marginal; if instead they represented the densities of neurotransmitters over time, they could cut the data storage costs hugely. I wonder if there is 6 million terabytes of storage in the world! If each human on earth contributed a gigabyte of storage, then we might be able to store that sort of data.

Let’s assume that they were able to compress the storage requirements through abstraction to a millionth of the total I just described, or around 6 terabytes. I assume that every synapse would have to be visited to update its status. If the synapses were updated once every millisecond (which rings a bell, but may be too fast), then the system would have to perform 5×10^15 updates per second (5×10^12 synapses × 1,000 updates each). Assuming also that the software has numerous housekeeping and structural tasks to perform, it might be no more than 25% efficient, in which case we are talking 2×10^16 operations per second. The Blue Gene/L system they will be using can perform 2.28×10^12 flops, so each one-millisecond update cycle would take around 9 seconds – the model would run roughly 9,000 times slower than real time.
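Redoing that arithmetic explicitly, under the same home-made assumptions (the Blue Gene figure is just the one quoted above, and real sustained throughput would differ):

```python
# Update-rate estimate under the same assumptions as the storage sketch.
SYNAPSES = 1_000_000_000 * 5_000     # 5e12 synapses
UPDATES_PER_SECOND = 1_000           # one update per millisecond
EFFICIENCY = 0.25                    # housekeeping/structural overhead

raw_updates = SYNAPSES * UPDATES_PER_SECOND      # 5e15 updates/s
required_ops = raw_updates / EFFICIENCY          # 2e16 ops/s
BLUE_GENE_FLOPS = 2.28e12                        # figure quoted in the text

slowdown = required_ops / BLUE_GENE_FLOPS        # machine seconds per brain second
print(f"required throughput: {required_ops:.2e} ops/s")
print(f"one second of brain time takes {slowdown:,.0f} s of machine time")
```

So even with a millionfold compression of the model, simulating one second of brain activity would take a couple of hours on that hardware.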

They will be restricting their models to cortical columns, which would limit them to about 100 million synapses – much more manageable in the short term. I wonder how long it will take before they can build a machine that can process a full model of the human brain?