[FoRK] Super-Intelligent Humans Are Coming
J. Andrew Rogers
andrew at jarbox.org
Fri Oct 24 07:18:24 PDT 2014
> On Oct 24, 2014, at 12:48 AM, Stephen Williams <sdw at lig.net> wrote:
> What example did you have in mind for "predicated on near-parity of intelligence"?
Agency and assumptions of free will are tied to it. Even the law recognizes that significant disparities in intelligence can be exploited to compel action by the lesser party, even when those disparities are small in absolute terms. More generally, the lesser party can be compelled without any ability to become aware of it, effectively becoming a mechanical extension of the super-intelligence.
Near parity of intelligence is central to the notion of practical free will. It is what limits the possibility of the above.
> I would divide things people call "morals" into categories. Some categories are like common law and natural law. (Murder, theft, etc.)
These are just game theoretic heuristics. They evolved under a set of constraints peculiar to historical human reality. If you blow up the constraints then you blow up the (local) optimality of the heuristics. We like these heuristics because in many ways they are self-serving: we are not the entities holding the short end of that stick.
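The "locally optimal heuristics" point can be sketched with a toy iterated prisoner's dilemma (a standard game-theory illustration, not from the original post; the payoff values are the conventional textbook ones). Under repeated interaction against a reciprocating opponent, a cooperation heuristic outscores defection; remove the repetition constraint and defection wins:

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my payoff.
# "C" = cooperate, "D" = defect. Values are the textbook T=5, R=3, P=1, S=0.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I am exploited (sucker's payoff)
    ("D", "C"): 5,  # I exploit (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(my_strategy, rounds):
    """Score my_strategy over `rounds` rounds against a tit-for-tat opponent."""
    my_history, their_history, score = [], [], 0
    for _ in range(rounds):
        my_move = my_strategy(their_history)      # I see their past moves
        their_move = tit_for_tat(my_history)      # they see mine
        score += PAYOFF[(my_move, their_move)]
        my_history.append(my_move)
        their_history.append(their_move)
    return score

always_defect = lambda history: "D"
always_cooperate = lambda history: "C"

# One-shot game: defection strictly dominates (5 vs 3).
print(play(always_defect, 1), play(always_cooperate, 1))    # 5 3

# Iterated game: the cooperative heuristic wins (30 vs 14),
# because the repetition constraint makes retaliation possible.
print(play(always_defect, 10), play(always_cooperate, 10))  # 14 30
```

Change the constraints (make the game one-shot, or the opponent unable to retaliate) and the same heuristic is no longer optimal, which is the sense in which the post calls these morals local optima.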
> The ability of humans or any intelligence to falsely rationalize rather than correctly applying known facts is either laziness, lack of ability, or the presence of mendacity. Your "constructing the necessary context" really sounds like falsely rationalizing by a method like straw man, false premises, false equivalence, correlation used as causation, or selective memory.
Rationalization is just the way humans describe their decision processes. A “true” rationalization, carefully and morally constructed, can still describe a decision that was compelled by a super-intelligence beyond the human’s ability to discern. From the perspective of the super-intelligence, that context construction is mechanically pulling levers and turning wheels. Driving a car is creating the context that causes a complex machine to go where you want it to go. We don’t assign evil motives to the driver for driving.
Why would I assign agency to my toaster just because my toaster thinks it is making toast of its own free will? A super-intelligence would no more recognize a normal human’s claim to free will than we recognize the toaster’s, because from its perspective normal humans have none. Free will is a relative measure.
But enough about ad targeting… ;-)