[FoRK] Super-Intelligent Humans Are Coming

Eugen Leitl eugen at leitl.org
Sun Oct 19 12:11:50 PDT 2014


On Fri, Oct 17, 2014 at 06:57:00AM -0700, Dr. Ernie Prabhakar wrote:
> Hi Eugen,
> 
> On Oct 17, 2014, at 2:03 AM, Eugen Leitl <eugen at leitl.org> wrote:
> 
> > While Moore's law is obviously entering saturation, and in the race between
> > collapse and Singularity collapse so far looks to be winning by a fair margin, we
> > obviously can improve systems much faster than biological organisms (it takes
> > 35+ years to create a borderline competent human being).
> 
> That isn’t immediately obvious to me.  Improvement along which metrics?

Creating naturally intelligent systems, defined as the ability to solve
complex problems the way a human would, at the same level or better.
 
> Sure, I can teach arithmetic to a computer faster than I can teach it to a kid.
> 
> But is that the kind of improvement we need to prevent the collapse?

I do not see a viable way to make the next half century suck less.
We have basically kept taking the wrong road forks since the 1970s, and
by now the cone of all possible future trajectories is limited to
outcomes ranging from at least some degree of disruption to extreme
(population bottleneck) levels of collapse.
 
> I would argue that what we need is not more intelligence, but better empathy, creativity, and values.

I would say we need both. I don't see a good use case for a very capable yet
thoroughly inhuman system. The potential for misunderstanding
would be too high (a bit like the wish-fulfilling genies of
legend).

I think a superintelligence would thoroughly understand us, but
appear opaque to us due to its entirely different motivations.
 
> And while that is incredibly difficult to teach human beings, it is even more difficult to teach to [other] systems.

But you could clone a capable system far more rapidly than you
could reproduce human expertise.
 
> > So once we go much beyond exascale and can image neurobiology at 
> > molecular scale and combine that with total in vivo activity recording, we
> > should start making progress.
> 
> Again, progress along which dimension?
> 
> At The Swan Factory, we’re tackling that hard problem head-on: how do we create institutions that leverage technology to help produce better human beings on a timescale of months, rather than decades?

I'm afraid that would require fine control over coercive
rearrangement of the neural circuitry between people's ears.
It would work if we could do it, but the non-consent
part is clearly at least somewhat abusive.
 
> And by “better”, we mean humans who have the skills, values, and humility to actually make the world a better place. Regardless of their intellectual capacities.

Rational selfishness requires a lot of modelling in order
to arrive at cooperative strategies, rather than becoming
trapped in suboptimal local minima such as mutual defection.
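
To make the local-minimum point concrete, here is a minimal Python
sketch of the standard iterated Prisoner's Dilemma (the payoff values
and round count are illustrative assumptions, not anything from this
thread): a myopically "rational" selfish agent defects every round and
lands in mutual defection, while agents that model the other side's
responses, even as crudely as tit-for-tat does, sustain the better
cooperative outcome.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; each strategy sees the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
    return score_a, score_b

def always_defect(opponent_history):
    # Defection dominates in a single round, so this is the
    # "rational selfishness without modelling" baseline.
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

if __name__ == "__main__":
    print("defect vs defect:", play(always_defect, always_defect))
    print("tft vs tft:     ", play(tit_for_tat, tit_for_tat))

Over 200 rounds the defectors score 200 each (the suboptimal local
minimum) while the reciprocators score 600 each; the cooperative
strategy only emerges once the agents model repeated interaction.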

Of course superintelligent evil is quite scary. But it seems a
degenerate case: a superintelligence can't be consistently evil
in the human sense of the word unless it's playacting, or it wouldn't 
be a superintelligence.
 
> — Ernie P.

