[FoRK] Human behavior is 93% predictable

Jebadiah Moore jebdm at jebdm.net
Tue Mar 2 01:01:54 PST 2010

On Tue, Mar 2, 2010 at 8:14 AM, J. Andrew Rogers <andrew at ceruleansystems.com> wrote:

> That's what most people assume, but they are wrong. Humans are essentially
> incapable of producing randomness that cannot be defeated by a machine. We
> might not like the idea, but it is reality. Humans are very deterministic
> even when they try not to be. Actually, especially when they try not to be.

There are a few methods I can think of by which you could avoid being
predictable (IANAExpert, and I'm quite interested in reasons why the
following wouldn't work):

1) RNG

Presumably any particular model isn't going to model the universe at the
particle level.  Instead, it's going to approximate human actions at some
particular level of atomicity along some set of dimensions--probably location,
the relation of the location to the person (at home, at work, at a friend's,
etc.), and basic state (eating, working, socializing, sex, etc.).

There will be a degree of unavoidable predictability due to physical
limitations: if I haven't slept in a week, you know it won't be long before
I do; I'm likely to be near the same place I was an hour ago; etc.  But
suppose you enumerated the possible "next states" you could be in, per the
machine's model, and used some source of "true" randomness to choose which
of those states to transition into next.  You wouldn't become completely
unpredictable, of course (especially in location), but you might become
unpredictable enough to get away with something.
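As a minimal sketch (the state names are my own illustration, and Python's
secrets module stands in for a hardware randomness source):

```python
import secrets

# Hypothetical coarse state space, of the kind a tracking model might use.
NEXT_STATES = ["eating", "working", "socializing", "sleeping"]

def pick_next_state(possible_states):
    # Choose uniformly with a cryptographically strong RNG, so the next
    # transition can't be inferred from a history of past choices.
    return secrets.choice(possible_states)
```

The point is only that the choice among model-visible states is made by
something the model can't simulate, not that the state space itself is right.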

2) Breaking the model

Since the model is an approximation, there are likely to be particular
actions it does not include as possibilities, or ugly edge cases in which
it predicts poorly.  Find those cases, and exploit them.
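One way to locate such cases, sketched with hypothetical names (`history` as
a log of (context, actual action) pairs you've collected about yourself, and
`predict` as the model under attack):

```python
from collections import defaultdict

def weak_spots(history, predict, threshold=0.5):
    # history: iterable of (context, actual_action) pairs
    # predict: function mapping a context to the model's predicted action
    # Returns the contexts where the model's hit rate falls below threshold.
    tallies = defaultdict(lambda: [0, 0])  # context -> [correct, total]
    for context, actual in history:
        tallies[context][0] += (predict(context) == actual)
        tallies[context][1] += 1
    return [c for c, (correct, total) in tallies.items()
            if correct / total < threshold]
```

Contexts that come back from `weak_spots` are the edge cases worth living in.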

3) Information asymmetry

Any prediction mechanism in the large is likely to rely on sensors that
detect limited information, without complete reliability.  You are most
likely able to collect information on yourself more reliably, even if you
are using the same sorts of sensors, and thus you are able to include more
information in a model you maintain yourself, about yourself.  This is
probably exploitable.

4) RNG redux

If you have access to the model being used to predict your actions: find out
what it predicts you will do, then roll a die.  If it comes up 1, do what the
model says; otherwise, do something else.  Even if the model is able to take
this strategy into account, it greatly reduces the model's confidence.
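A minimal sketch of the die roll (function and action names are my own
illustration, not any real system's API):

```python
import secrets

def contrarian_act(predicted, actions):
    # Roll a fair die: on a 1, comply with the model's prediction;
    # otherwise pick uniformly among the other available actions.
    roll = secrets.randbelow(6) + 1
    if roll == 1:
        return predicted
    alternatives = [a for a in actions if a != predicted]
    return secrets.choice(alternatives)
```

With n available actions, the model's named prediction comes true only 1/6 of
the time, and each alternative occurs with probability (5/6)/(n-1), so no
single prediction can be very confident.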

Jebadiah Moore
