[FoRK] Software reuse is information theoretically hard?

Jef Allbright <jef at jefallbright.net> on Sun Nov 4 10:38:48 PST 2007

On 11/4/07, Russell Turpin <deafbox at hotmail.com> wrote:
>
> jef at jefallbright.net:
> > This goes to the heart of an argument, er, discussion I was having
> > with a friend last week about the effective extensibility of genetic
> > algorithms. My point was that a particular GA can work well within a
> > particular problem domain, with recombination of code corresponding
> to synergies already discovered, but must "reinvent itself"
> > at some point in order to continue growing (as for a hypothetical
> > "general" machine intelligence.)
>
> Yep. My intuition is that biological evolution follows exactly this
> pattern: long periods of "evolution within a domain," followed by a
> phase change. That's why we share Hox genes with sea squirts.

So does anyone here know of any accessible theories on the nature or
expected distribution of the classes of "problems" represented by the
phase transitions?  I'm fairly familiar with work at the Santa Fe
Institute and similar groups, but I'm not aware of anything we can say about
the nature of these except that they're information-theoretically
"hard."  I keep expecting to see something along the lines of fractal
self-similarity.
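
For concreteness, here's a toy sketch of the point above: a GA whose
crossover recombines building blocks discovered under one fitness
function climbs quickly there, then largely stalls when the "domain"
(the fitness function) changes, because its operators only reshuffle
synergies found under the old regime.  Everything in it (the bit-string
encoding, the two fitness functions, the parameters) is illustrative,
not any particular system we were discussing.

import random

GENOME_LEN = 40
POP_SIZE = 60
GENERATIONS = 80

def fitness_domain_a(genome):
    # Domain A rewards 1-bits; crossover can usefully recombine
    # blocks of 1s discovered in different individuals.
    return sum(genome)

def fitness_domain_b(genome):
    # Domain B rewards alternation; the building blocks learned
    # under A are now worthless.
    return sum(1 for i in range(1, GENOME_LEN) if genome[i] != genome[i - 1])

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(population, fitness, generations):
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: POP_SIZE // 2]   # simple truncation selection
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE)
        ]
    return population

if __name__ == "__main__":
    random.seed(0)
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]

    pop = evolve(pop, fitness_domain_a, GENERATIONS)
    print("best in domain A:", max(map(fitness_domain_a, pop)))

    # Same population, same operators, new domain: progress largely
    # resets -- the "must reinvent itself" step at the phase transition.
    print("carried over to domain B:", max(map(fitness_domain_b, pop)))

The interesting (and, as far as I can tell, open) question is whether
anything general can be said about the distribution of such transitions,
beyond the fact that crossing them is information-theoretically hard.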

- Jef
