[FoRK] [fonc] Final STEP progress report abandoned?

Eugen Leitl eugen at leitl.org
Fri Sep 6 03:22:02 PDT 2013


----- Forwarded message from Chris Warburton <chriswarbo at googlemail.com> -----

Date: Fri, 06 Sep 2013 11:16:40 +0100
From: Chris Warburton <chriswarbo at googlemail.com>
To: Fundamentals of New Computing <fonc at vpri.org>
Subject: Re: [fonc] Final STEP progress report abandoned?
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/24.2 (gnu/linux)
Reply-To: Fundamentals of New Computing <fonc at vpri.org>

John Nilsson <john at milsson.nu> writes:

> Even if the different domains are different it should still be possible to
> generalize the basic framework and strategy used.
> I imagine layers of models each constrained by the upper metamodel and a
> fitness function feeding a generator to create the next layer down until
> you reach the bottom executable layer.
> In a sense this is what humans do, no? Begin with the impact map model,
> derive from that an activity model, derive from that a high-level activity
> support model, derive from that acceptance criteria, derive from that
> acceptance test examples, derive from that a low-level interaction state
> machine and so on...
>
> In the human case I believe the approach modelled by the kanban katas seems
> appropriate: nested stacks of hypotheses to try in a disciplined PDCA
> cycle.

The problem with (naively) adding meta-levels is that our costs go up
dramatically. Using your example, we might define a test suite and
evolve a state-machine which passes all the tests. We might then decide
to replace our hard-coded tests with a higher-level optimisation
process: we define our acceptance criteria and evolve a test suite for
those criteria and a state-machine for those tests.
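
To make the nesting concrete, here's a deliberately tiny sketch (Python, with invented representations: a "state machine" is just a bit-string and the acceptance criteria a target pattern). The point is structural: scoring ONE candidate test suite at the outer level requires a FULL evolution run at the inner level.

```python
import random

random.seed(0)

# Toy acceptance criteria: the behaviour we ultimately want (a bit pattern).
CRITERIA = [1, 0, 1, 1, 0, 1, 0, 0]

def evolve_machine(tests, steps=200):
    """Inner level: hill-climb a bit-string 'state machine' to pass the tests."""
    score = lambda m: sum(1 for got, want in zip(m, tests) if got == want)
    machine = [random.randint(0, 1) for _ in tests]
    for _ in range(steps):
        candidate = machine[:]
        candidate[random.randrange(len(candidate))] ^= 1
        if score(candidate) >= score(machine):
            machine = candidate
    return machine

def evolve_tests(generations=80):
    """Outer level: hill-climb a test suite against the acceptance criteria.
    Scoring one candidate suite means running a whole inner evolution --
    this is where the cost of the extra meta-level comes from."""
    def suite_score(suite):
        machine = evolve_machine(suite)  # expensive: one full inner run
        return sum(1 for got, want in zip(machine, CRITERIA) if got == want)
    suite = [random.randint(0, 1) for _ in CRITERIA]
    for _ in range(generations):
        candidate = suite[:]
        candidate[random.randrange(len(candidate))] ^= 1
        if suite_score(candidate) >= suite_score(suite):
            suite = candidate
    return suite
```

Even in this toy, the outer loop's 80 generations each trigger inner runs of 200 steps; every level we add multiplies in another factor like that.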

This is much more expensive, since in order to evaluate a test suite we
need to evolve a state-machine for it. Likewise, if we add another level
and define, say, a model of our business and market, we could evolve
product features based on their predicted return on investment.
Evaluating each potential feature would require we evolve the acceptance
criteria for its implementation; each candidate set of acceptance
criteria requires its own evolved test suite; each candidate test suite
requires an evolved state-machine.

When we consider that realistic optimisation algorithms can require
upwards of a million fitness evaluations, it's clear that we can't
naively bolt extra meta-levels on when we get stuck.
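
A back-of-the-envelope cost model (an assumption for illustration, not a measurement) makes the blow-up explicit: if each level performs E evaluations, and each evaluation at level k is a full optimisation run at level k-1, bottom-level runs grow exponentially with depth:

```python
# Rough cost model: E fitness evaluations per level, where each
# evaluation at one level is a complete run of the level below.
E = 1_000_000  # evaluations per level (the "upwards of a million" above)

for depth in range(1, 4):
    print(f"{depth} level(s): {E ** depth:.0e} bottom-level evaluations")
```

Three levels already lands at 10^18 evaluations, which is why bolting on meta-levels naively is a non-starter.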

There are ways around this, though, by collapsing the levels into one
self-improving level or two co-evolving levels. An example of a
one-level system is evolving an economy of virtual agents, where rewards
come in the form of virtual currency which can be used to bid for CPU
time. Bankrupt agents can be discarded and currency/CPU time can be
traded between agents. This allows meta-agents to make a living by
spotting opportunities for the other agents. Any agent which is too meta
to be worth its cost will either go bankrupt or be outbid by more
efficient, less-meta agents. One example is the Hayek machine.
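
A minimal sketch of such an economy (Python, with invented numbers; real Hayek-machine agents also spawn children, pay their creators, and trade with each other): agents bid virtual currency for CPU slices, earn reward stochastically according to how useful they are, and are discarded when bankrupt.

```python
import random

random.seed(1)

class Agent:
    def __init__(self, skill):
        self.skill = skill   # chance of earning a reward when run (a stand-in
                             # for how useful the agent's computation is)
        self.wealth = 10.0

    def bid(self):
        # agents that expect to profit can afford to bid more
        return min(self.wealth, 1.0 + self.skill)

def run_economy(agents, rounds=100, reward=3.0):
    for _ in range(rounds):
        # the highest bidder buys the CPU slice and pays its bid
        winner = max(agents, key=lambda a: a.bid())
        winner.wealth -= winner.bid()
        if random.random() < winner.skill:
            winner.wealth += reward
        # bankrupt agents are discarded
        agents = [a for a in agents if a.wealth > 0]
    return agents

pool = [Agent(skill=s) for s in (0.1, 0.5, 0.9)]
survivors = run_economy(pool)
```

Here the low-skill agents simply never win an auction, so CPU time flows to the agent that earns its keep; the currency plays the role that an explicit meta-level controller would otherwise have to.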

This can be augmented by techniques like autoconstructive evolution,
where the process for generating new agents is part of the agents
themselves, and thus can evolve.
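
A tiny illustration of that idea (Python; the representations are invented for the example): each agent carries not just a candidate solution but its own mutation rate, and reproduction perturbs both, so the variation operator is itself subject to selection.

```python
import random

random.seed(2)

def reproduce(agent):
    """The agent's genome AND its variation operator (here just a
    mutation rate) are both inherited and both mutated -- so the
    reproduction process itself can evolve."""
    genome, rate = agent
    child_rate = max(0.001, rate * random.uniform(0.5, 2.0))
    child_genome = [g + random.gauss(0, child_rate) for g in genome]
    return (child_genome, child_rate)

def fitness(agent):
    return -sum(g * g for g in agent[0])  # toy objective: drive genome to zero

# population of (genome, mutation_rate) pairs
pop = [([random.uniform(-5, 5) for _ in range(3)], 0.5) for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]
    pop = survivors + [reproduce(random.choice(survivors)) for _ in range(10)]
best = max(pop, key=fitness)
```

Lineages whose mutation rates suit the problem out-reproduce the rest, so the "how to generate new agents" question is answered by the same selection pressure as everything else.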

Co-evolving systems use each other as their meta-level. For example,
we might let each system play a zero-sum game involving the problem
domain (this rewards learning more about the domain); we might have one
system pose questions/problems for the other to solve (again, rewarding
domain knowledge); we might have each system predict the other's
behaviour and give reward based on unpredictability (rewarding novelty
and exploration).
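
A deliberately tiny sketch of that last scheme (Python; everything here is invented for illustration): one part emits bits and is rewarded for being unpredictable, the other predicts them and is rewarded for being right, so each acts as the other's fitness function.

```python
class Predictor:
    """Guesses the explorer's next bit from smoothed bit frequencies."""
    def __init__(self):
        self.counts = [1, 1]  # Laplace-smoothed counts of 0s and 1s seen
    def guess(self):
        return 0 if self.counts[0] >= self.counts[1] else 1
    def learn(self, bit):
        self.counts[bit] += 1

def explorer_move(history):
    """Emits whichever bit it has emitted less often -- i.e. stays surprising."""
    return 0 if history.count(0) < history.count(1) else 1

pred, history = Predictor(), []
explorer_reward = predictor_reward = 0
for _ in range(100):
    bit = explorer_move(history)
    if pred.guess() == bit:
        predictor_reward += 1  # predictor paid for a correct prediction
    else:
        explorer_reward += 1   # explorer paid for surprising the predictor
    pred.learn(bit)
    history.append(bit)
```

Against this frequency-counting predictor the alternating explorer is never caught, so all the reward flows to novelty; a stronger predictor would force the explorer to find genuinely new behaviour, and that arms race substitutes for an explicit meta-level.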

I highly recommend Juergen Schmidhuber's publications page, which covers many
different optimisation algorithms:
http://www.idsia.ch/~juergen/onlinepub.html

On a related note, I've been making some toy implementations of many
optimisation algorithms in JavaScript (some are still empty):
http://chriswarbo.net/index.php?page=cedi&type=misc&id=1%2F4%2F28%2F29

Cheers,
Chris
_______________________________________________
fonc mailing list
fonc at vpri.org
http://vpri.org/mailman/listinfo/fonc

----- End forwarded message -----
-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5
