This message will self-destruct

Rohit Khare (rohit@uci.edu)
Wed, 8 Sep 1999 20:28:51 -0700


At 10:50 PM -0400 9/8/99, Robert S. Thau wrote:
>thom stuart writes:
> > 1on1mail [1] is a new service offering secure online email with the
> > capability of predefining a chronological time-to-live, after which the
> > email will "delete itself from the recipient's computer". Hrm. Proprietary
> > client required, natch.
>
>And one which somehow keeps people from doing cut-and-paste from the
>contents of the email (possible, but unfriendly), or doing any screen
>captures (more awkward to reuse, but essentially impossible). Also,
>this email can't have any attachments, or you'd be able to save the
>bits from the viewers, completely vitiating the effect of the "self
>destruct".
>
>rst

In fact, Stuart Haber (of Surety/timestamping fame) introduced me to
the "disappearing data problem" -- how can you, really, seal digital
data like traffic tickets that are supposed to vanish after five years?

The debate only led to a few generic thoughts. First, any such
scheme would have to entangle your access with the world's. That's
the trick to Haber's solution to the forward problem (proving data
existed, rather than making it disappear): the trustless timestamp.
So long as a whole bunch of different people show up in the same
timeslot, no single stamp can be forged without the collusion of
about half of them.

[aside: you send in a hash of your doc, and a hash-tree is created,
entangling your hash with your neighbors', and with the next
timeslot's, and so on, so that all you get back to verify your own
notarization is a chain of hashes leading up to the root, and the
root is published weekly in the NY Times]
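
Here's a minimal sketch of the verification half of that, assuming
SHA-256 and a plain binary hash tree (the real Surety linking scheme
differs in detail); the names are mine, not Haber's:

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_notarization(document: bytes, auth_path, published_root: bytes) -> bool:
    """auth_path: the (sibling_hash, sibling_is_left) pairs you got back;
    published_root: the widely witnessed value, e.g. the one printed
    in the NY Times."""
    h = sha256(document)               # you only ever submitted this hash
    for sibling, sibling_is_left in auth_path:
        if sibling_is_left:
            h = sha256(sibling + h)    # entangled with a neighbor on the left...
        else:
            h = sha256(h + sibling)    # ...or on the right, up toward the root
    return h == published_root         # forging a stamp means rewriting the tree

Your receipt is worthless on its own; it only checks out because it
was folded in with everybody else's hashes, which is the entanglement
above.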

Now, one solution is to choose keys just weak enough to be
exhaustively breakable around the time the data expires, so that in
five years' time you'll be able to repudiate the message -- anyone
could have forged it by then.
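
Back-of-the-envelope, with made-up attacker numbers (a billion keys
a second today, speed doubling every eighteen months), the key
length becomes the expiry knob:

import math

def years_to_exhaust(key_bits, keys_per_second=1e9, doubling_years=1.5):
    """Expected years to sweep half the keyspace, assuming the
    attacker's speed doubles every `doubling_years` (a crude
    Moore's-law guess)."""
    work = 2 ** (key_bits - 1)                   # expected trials: half the keyspace
    rate = keys_per_second * 365.25 * 24 * 3600  # keys per year, today
    k = math.log(2) / doubling_years             # exponential growth constant
    # solve  integral_0^T rate * e^(k*t) dt = work  for T
    return math.log(1 + work * k / rate) / k

for bits in (56, 60, 64):
    print(bits, "bits:", round(years_to_exhaust(bits), 1), "years")

With those invented numbers a 60-bit key lands near the five-year
mark, but the whole scheme is a gamble on the attacker's budget, not
a guarantee.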

Still, there's a second aspect, which Robert alluded to: since any
Turing machine can be pickled and re-run in the future, the scheme
can't be a pure Turing machine. That is to say, you can't ever put
the 'cleartext' up. If I saw in 1999 that you had a DWI, I could
have that screenshot notarized and pillory you with it in the next
election nonetheless.

So that aspect has to involve some physical entropy. That is to say,
you can *never* hand out the actual answer, only some probability of
a correct one. Consider a drop of food coloring in water: thermal
noise eventually, and permanently, erases the original concentration
gradient, with essentially no probability of its ever reforming.

Yes, you could photograph the fishbowl right at the start, and even
without tweaking the Uncertainty Principle too far, you could stash
away an accurate picture of the original 'cleartext'.

So you can't be allowed to be too sure of the data to begin with,
either. Perhaps the "cleartext" is a loaded coin which, flipped
often enough, will say "drunk" more often than not, but which over
time converges to 50/50 (no bits of information left).
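
Here's a toy version of such a coin, where every read also erodes
the stored bias (the decay rate is pure invention):

import random

class FadingBit:
    """A stored verdict that can only be read noisily, and fades a
    little with every read."""
    def __init__(self, guilty: bool, initial_bias=0.99, decay=0.97):
        self.p = initial_bias if guilty else 1.0 - initial_bias  # P(read says "drunk")
        self.decay = decay

    def read(self) -> bool:
        answer = random.random() < self.p
        self.p = 0.5 + (self.p - 0.5) * self.decay   # nudge toward a fair coin
        return answer

record = FadingBit(guilty=True)                  # the 1999 verdict
early = sum(record.read() for _ in range(20))    # fresh: mostly "drunk"
for _ in range(500):                             # years of reads later...
    record.read()
late = sum(record.read() for _ in range(20))     # ...about 10 of 20, pure noise
print(early, "of 20 early reads vs.", late, "of 20 late reads")

Of course, nothing in software stops you from copying 'record'
before the first read, which is the Turing-machine problem all over
again.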

Of course, there's still an enforcement problem: how can I require my
neighbors' (enemies') assent every time I flip the coin? Because
after each flip, something has to be done to the otherwise stable
digital storage medium to make sure the decay is irreversible.

And while this might work for audio or video data, which degrade in
ways familiar to archivists today, what does it mean for text, or
for binary machine code? Those don't degrade gracefully at all.

So consider all this a reminder that there is *no* theoretical basis
for 'lockboxes' and other digital copyright use limitations. In
practice, we can enforce them with trusted hardware (region-coded DVD
players) or social mechanisms (DAT taxes), but not by cryptography.

Rohit