[FoRK] Human behavior is 93% predictable
J. Andrew Rogers
andrew at ceruleansystems.com
Tue Mar 2 09:36:49 PST 2010
On Mar 2, 2010, at 1:01 AM, Jebadiah Moore wrote:
> There are a couple of methods I can think of by which you could avoid being
> predictable (IANAExpert, and I am quite interested in reasons why the
> following wouldn't work):
To be clear, I am not saying you can't be unpredictable, just that it is much, much harder than most people intuit.
> 1) RNG
> Presumably any particular model isn't going to fully model the universe at
> particle level. Instead, it's going to approximate human actions at some
> particular level of atomicity in some set of dimensions--probably location,
> relation of location to person (at home, at work, at friend's, etc.), basic
> state (eating, working, socializing, sex, etc.).
While the information is discrete, states like the ones you outline above are not. Looking at it as a database of time-place-activity logs is incorrect. The models work at a different level of abstraction; they are almost purely information-theoretic. You can pull surprising patterns out of very diffuse bits.
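As a toy illustration of how little uncertainty a routine actually contains (the data and location labels here are entirely made up, and this is far simpler than any real model): the Shannon entropy of a coarse location log tells you the effective number of states a routine visits, which is usually much smaller than the number of distinct states.

```python
# Toy sketch with hypothetical data: even a coarse, "diffuse" location
# log carries structure. The entropy of a week of hourly location
# labels shows how few effective choices a routine really contains.
from collections import Counter
from math import log2

# one person's hourly locations for a day, repeated for a week
log = (["home"] * 8 + ["commute"] + ["work"] * 9 + ["commute"] +
       ["gym"] + ["home"] * 4) * 7

counts = Counter(log)
n = len(log)
entropy = -sum((c / n) * log2(c / n) for c in counts.values())

# 2**entropy = effective number of distinct states the routine visits
print(f"entropy: {entropy:.2f} bits; "
      f"effective states: {2**entropy:.1f} of {len(counts)}")
```

For this made-up log the entropy comes out around 1.5 bits, i.e. roughly three effective states out of four observed, which is why low-resolution data still supports strong predictions.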
> 2) Breaking the model
> Since it's a model and an approximation, there's likely to be some
> particular action which the model does not include as a possibility, or some
> ugly edge case in which the model predicts poorly. Find those cases, and
> exploit them.
The model is high-order and inductive; you can't reason about it this way. Breaking it would require a copy of the (constantly updated) data that created it. Attempting to break the model becomes part of the model.
> 3) Information asymmetry
> Any prediction mechanism in the large is likely to rely on sensors that
> detect limited information, without complete reliability. You are most
> likely able to collect information on yourself more reliably, even if you
> are using the same sorts of sensors, and thus you are able to include more
> information in a model you maintain yourself, about yourself. This is
> probably exploitable.
Yes, if you can build and maintain a model of yourself that is as detailed as what someone else may have, you can use it to monkey-wrench their model. This would require serious technical competence and is not foolproof, but it would greatly increase the cost of someone else maintaining a usable model. Of course, it would lead to a very strange and probably less than pleasant lifestyle.
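One hypothetical way to picture the monkey-wrenching (a minimal sketch, not anyone's actual method, with made-up choices): keep your own running frequency model and deliberately take whichever action it currently rates least likely, which flattens the distribution an outside observer can learn.

```python
# Hypothetical sketch: maintain a frequency model of your own behavior
# and always pick the option your own model rates least likely,
# flattening the distribution an outside observer could learn from.
from collections import Counter

history = Counter()
options = ["cafe A", "cafe B", "cafe C"]  # made-up choice set

def next_choice():
    # the least-observed option is the least predictable one
    # under a simple frequency model of your past behavior
    least = min(options, key=lambda o: history[o])
    history[least] += 1
    return least

for _ in range(30):
    next_choice()
print(history)  # counts end up uniform: 10 each
```

The catch, as above, is that this only works for decisions you are actually free to vary; it does nothing about the routine bulk of your behavior.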
> 4) RNG redux
> If you have access to the model being used to predict your actions: find out
> what it predicts you to do, then roll a die. If it's a 1, do what it says;
> otherwise, do something else. Even if the model is able to take this into
> account, it will greatly reduce its confidence.
The caveat is that so many decisions cannot be randomized this way that the technique injects far less uncertainty than you might assume.
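A rough back-of-the-envelope version of that caveat (all numbers are hypothetical except the 93% figure from the subject line): if only a small fraction of decisions can realistically be randomized, the overall hit rate of the model barely moves.

```python
# Hypothetical arithmetic: die-roll randomization barely dents overall
# predictability when most decisions stay routine.

baseline_accuracy = 0.93  # predictability of routine decisions (subject line)
randomizable = 0.10       # assumed fraction of decisions you can randomize
die_sides = 6             # follow the prediction only on a roll of 1

# on a randomized decision you match the prediction 1/6 of the time
randomized_accuracy = 1 / die_sides

overall = ((1 - randomizable) * baseline_accuracy
           + randomizable * randomized_accuracy)
print(f"overall predictability: {overall:.1%}")  # ~85%, still very predictable
```

Under those assumptions the die roll buys you less than ten points of unpredictability, because sleep, work, and commuting are not decisions you can re-roll.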