[FoRK] Super-Intelligent Humans Are Coming

Stephen D. Williams sdw at lig.net
Thu Oct 23 14:13:40 PDT 2014


On 10/19/14, 2:28 PM, Dr. Ernie Prabhakar wrote:
> Hi Eugen,
>
>> On Oct 19, 2014, at 12:11, Eugen Leitl <eugen at leitl.org> wrote:
>>
>> Of course superintelligent evil is quite scary. But it seems a
>> degenerate case; a superintelligence can't be consistently evil
>> in the human sense of the word unless it's playacting, or it wouldn't
>> be a superintelligence.
> You seem to have a very different definition of super-intelligent than I do. I am not aware of any particular correlation between intelligence and morality. To me, morality is fundamentally about the core assumptions we choose to start reasoning from.
>
> Or do you believe it is possible to rationally derive Morality starting from nothing, given sufficient computational horsepower?

What do you mean by "starting from nothing"?

Existing with others in the physical universe is a fairly rich starting point.  Existing as a human, especially with a modern 
understanding of what that means, is a very rich starting point.  Add to that any reasonable subset of story-based culture, and 
you can go right or wrong depending on which subset.

A fairly solid core of generalized principles, just like those we already have for logic, math, science, computer science, etc., 
can go far.  Why do communist and socialist systems tend to fail while republics tend to succeed?  Etc.

The interesting question is: in what cases does morality != efficiency?  What are valid and invalid goals?  Is there a convergence or 
divergence between human and non-human goals?  Would an intelligent system that we produce necessarily be human-like?  Of course 
various things could go bad, just as in an instant human.  Is there a fundamental difference?

>
> E

sdw


