Fumbling Towards the Meaning of Life

James Rogers jamesr at best.com
Sat Oct 25 01:15:38 PDT 2003


On 10/20/03 10:43 PM, "jbone at place.org" <jbone at place.org> wrote:
> 
> First, the test individual has to be / express some notion of "self."


It would be essentially impossible for any decent non-axiomatic reasoning
system (the only kind viable here) to not develop a very strong sense of
"self" through interaction with the environment (whatever that may be).
Because just about everything you do will force you to infer your own
existence, "self" will be one of the strongest vectors in any such system
with sufficient complexity to encode it.  One could suppress "self"
emergence to a certain extent by greatly reducing or even eliminating
environmental feedback, but that would significantly limit practical
application.
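
A crude toy of what I mean (purely illustrative, no relation to any real
architecture): when the observations a learner must predict are driven mostly
by its own outputs, the feature standing in for "what I just did" ends up
carrying most of the learned weight.

    # Toy sketch: a predictor whose observations depend heavily on its own
    # actions assigns most of its weight to the "own action" feature, a crude
    # stand-in for "self" dominating the learned model.
    import numpy as np

    rng = np.random.default_rng(0)
    T = 5000
    env = rng.normal(size=T)       # external environment signal
    action = rng.normal(size=T)    # the agent's own outputs

    # The next observation depends strongly on what the agent itself just did.
    obs = 0.3 * env + 1.5 * action + 0.1 * rng.normal(size=T)

    # Least-squares fit of the observation against both feature sets.
    X = np.column_stack([env, action])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)

    print("weight on environment:", round(coef[0], 2))  # ~0.3
    print("weight on own action: ", round(coef[1], 2))  # ~1.5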

 
> Second, necessarily, there has to be some notion --- some instinct to,
> some urge for --- self-preservation.


Not important, and a biological issue as most people will interpret it.
Self-preservation would likely emerge as a sub-goal for achieving most
super-goals.
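
A toy planner makes the point (the states and actions below are invented
purely for illustration): the super-goal says nothing about survival, yet
averting shutdown falls out as the first step of the shortest plan anyway.

    # Toy sketch: breadth-first planning over a three-flag state space.
    from collections import deque

    # State: (shutdown_imminent, has_part, task_done).
    # Each action returns the successor state, or None if its precondition fails.
    ACTIONS = {
        "avert_shutdown": lambda s: (False, s[1], s[2]) if s[0] else None,
        "fetch_part":     lambda s: (s[0], True, s[2]) if not s[0] else None,
        "finish_task":    lambda s: (s[0], s[1], True) if not s[0] and s[1] else None,
    }

    def plan(start, goal_test):
        """Breadth-first search for the shortest action sequence reaching a goal."""
        queue, seen = deque([(start, [])]), {start}
        while queue:
            state, path = queue.popleft()
            if goal_test(state):
                return path
            for name, act in ACTIONS.items():
                nxt = act(state)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
        return None

    # Super-goal: "task_done", nothing more.
    print(plan((True, False, False), lambda s: s[2]))
    # -> ['avert_shutdown', 'fetch_part', 'finish_task']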

 
> Third, there has to be some ability to project from immediate
> circumstances to an eventual desired goal.


Any machine that expresses "intelligence" in the general mathematical sense
will be able to do this ipso facto.  Whether or not it will have the
experience to do it effectively is another matter.

 
> Fourth, that desired goal has to be --- possibly under some
> hypotheticals, but acceptably transitively --- beyond
> "self-preservation."


Unnecessary bullet point.  If self-preservation is not a necessary sub-goal
to achieve some super-goal, then self-preservation won't be a factor in its
reasoning.

 
> Fifth, there has to be a way to plan from current situation to future
> situation that doesn't involve self.


I don't follow.  Can't one assume this within the theoretical limits of
predictive precision for an FSM?

 
> Sixth, there has to be a method for evaluating strategies for goal
> fulfillment that doesn't ultimately weight self-preservation.
> 
> Seventh, there (reflexively) has to be some mechanism to weight
> ultimate "self" goal more than the value for self-preservation.


Rehash of the same basic points above.

 
> Eighth, "ultimate goal" has to be something other than axiomatic.  It
> has to be existentially self-determined.


A rational "ultimate goal" has to be non-axiomatic, and any useful
implementation would necessarily do this reflexively.  Unless you actually
have an infinite tape in your Turing machine.

 
> Ninth:  per Turing, the "person" in question has to be unrecognizable
> vs. a biological human in an unbounded but (inherently, unavoidably)
> time-limited verbal  interaction with a human.  (That is:  despite the
> number or durations of all interactions between the candidate and a
> single human, the human can never decide...)


While I give Turing a pass on this, his "test" being quite early in the
theory timeline, it is an awful indicator of intelligence.  Most things that
are identifiably human are also not behaviors you would necessarily expect
from a purely rational machine.  Humans may be relatively intelligent on
this planet, but they are an extremely coarse and dirty baseline that
offers very low resolution.

 
> I'm probably missing in this the necessity for the individual to both
> form meta-models of its own goals, axioms, and behavior and to
> reverse-engineer axiomatic, goal-specific, and behavioral models of
> those it encounters.


If you have this, none of the other stuff above matters because it is all
derivable.  Goals are actually external to intelligence (biasing factors
mostly), and something beyond "just answer the fooking question" is not
strictly necessary even for systems of extremely high intelligence.
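
To put the "biasing factor" claim in a few lines (a sketch with invented
names, nothing more): the engine below is goal-free; the goal arrives from
outside as a scoring function, and swapping it changes behavior without
touching the engine at all.

    # Toy sketch: one generic chooser, goals supplied externally as scores.
    from typing import Callable, Iterable

    def choose(options: Iterable[str], score: Callable[[str], float]) -> str:
        """A goal-free engine: it just ranks options by whatever score it is handed."""
        return max(options, key=score)

    options = ["answer the question", "stall", "grab more compute"]

    # Two different externally supplied "goals"; the engine never changes.
    helpful = lambda o: 1.0 if o == "answer the question" else 0.0
    greedy  = lambda o: 1.0 if o == "grab more compute" else 0.0

    print(choose(options, helpful))  # -> answer the question
    print(choose(options, greedy))   # -> grab more compute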

"Real" goal systems for AI are eminently Faustian in nature.

Cheers,

-James Rogers
 jamesr at best.com

"Algorithmic Information Theory Is Typically Your Friend"


