[FoRK] intelligence and turbo coding

Dave Long <dave.long at bluewin.ch>  Sat Nov 18 07:53:40 PST 2006

So it turns out that, if one wants to get close to the Shannon limit, 
instead of calculating symbol statistics and optimizing codes, it's a 
lot better to just randomly pick some sparse constraints, send them 
along with the original message, and then try to satisfy them in 
parallel.  (turbo codes / low-density parity-check codes)
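A toy sketch of the "randomly pick some sparse constraints" part: each
constraint is just the parity (XOR) of a few randomly chosen message
bits, and those parity bits ride along with the message.  (Real LDPC
construction is far more careful about the structure of the check
graph; this only shows the flavor.)

```python
import random

random.seed(0)
message = [1, 0, 1, 1, 0, 0, 1, 0]

# 4 sparse constraints, each covering only 3 of the 8 message bits
checks = [sorted(random.sample(range(len(message)), 3))
          for _ in range(4)]
parities = [sum(message[i] for i in c) % 2 for c in checks]

transmitted = message + parities   # original message plus redundancy
print(checks)    # which bits each constraint covers
print(parities)  # the parity bits actually sent along
```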

The downside to this approach is that the received message is a 
problem, not an algorithm, and hence requires some iteration, 
propagating beliefs about bits between the parallel solvers, until 
everything converges and they agree on the recovered message.
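To make the iteration concrete, here is a Gallager-style bit-flipping
decoder on the (7,4) Hamming code: each parity check is a local
constraint, bits implicated by the most failing checks get flipped,
and the loop repeats until every check is satisfied.  It is a crude,
hard-decision cousin of full belief propagation (votes instead of
probabilities), but the converge-until-the-solvers-agree shape is the
same.

```python
H = [  # parity-check matrix of the (7,4) Hamming code
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(word):
    """One parity per row of H; all zeros means every check agrees."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

def bit_flip_decode(word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(word)
        if not any(s):                     # all constraints satisfied
            return word
        # each bit counts the failing checks it participates in
        votes = [sum(s[i] for i, row in enumerate(H) if row[j])
                 for j in range(len(word))]
        worst = max(votes)
        # flip the most-blamed bits and iterate again
        word = [b ^ (v == worst) for b, v in zip(word, votes)]
    return word

codeword = [1, 0, 1, 1, 0, 1, 0]           # satisfies all three checks
noisy = codeword[:]
noisy[2] ^= 1                              # flip one bit in transit
print(bit_flip_decode(noisy) == codeword)  # True: single error corrected
```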

That propagation, and ultimate convergence, reminds me of a very 
impressive visual segmentation system that was in Nature earlier this 
year.  It starts out bottom up, declaring that these bits of image are 
like their neighbors, but these are not -- and then using the different 
local statistics that it gathers to refine the boundaries -- finally 
repeating with the new coarser grains.  Eventually it is able to 
cluster entire hierarchies even when their local statistics are 
globally different, and pick out squirrels with bushy tails against 
tree bark, or spotted leopards against the dapple of a lake.
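A loose sketch of that coarse-to-fine idea (emphatically not the
Nature paper's actual algorithm): start bottom up with single pixels,
merge neighbors whose local statistic (here just mean intensity) is
close, then rerun the merging on the coarser segments with a looser
tolerance.  In 1-D:

```python
def coarsen(segments, tol):
    """Merge adjacent segments whose mean intensities differ by < tol."""
    merged = [list(segments[0])]
    for seg in segments[1:]:
        prev = merged[-1]
        if abs(sum(prev) / len(prev) - sum(seg) / len(seg)) < tol:
            prev.extend(seg)          # like its neighbor: same cluster
        else:
            merged.append(list(seg))  # not like it: boundary stays
    return merged

row = [9, 9, 8, 9, 2, 1, 2, 2, 9, 8]   # bright / dark / bright
segments = [[v] for v in row]           # bottom up: each pixel alone
for tol in (1.5, 3.0):                  # repeat with coarser grains
    segments = coarsen(segments, tol)
print(segments)                         # three clusters survive
```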

If one views the job of intelligence as providing the best decoding of 
the signal from a very noisy universe, then perhaps it has evolved more 
along the lines of a turbo decoder than along logical AI-like lines.  
It certainly squares with misdirection and visual illusions -- the 
various parallel decoders only exchange information about the 
borderline bits, and will ignore bad calls that they agree upon, unless 
another constraint shows it to be inconsistent.  It would also give a 
smooth path for development, all the way from the rather basic problem 
of "which of these areas of my visual field cohere with each other" 
right up to the very formal problem of "which of these graphs of 
symbols cohere with T, and which with F, and are there more possible 
clusters in this analysis?"
