Re: Signatures and Authentication information must go at end of message

Rohit Khare (
Wed, 7 Feb 96 21:29:06 -0500

Man, this guy is such a font of wisdom... this is completely on-the-fly!


PS. For those joining in progress, it's Phill complaining about probabilities...

Begin forwarded message:

To: Ned Freed <>
Subject: Re: Signatures and Authentication information must go at end of message.
In-Reply-To: Your message of "Wed, 07 Feb 96 17:13:42 PST."
Date: Wed, 07 Feb 96 18:15:56 PST
From: Jeffrey Mogul <>

Some random but related references on the topic of trusting stochastic methods:

I guess you don't ever use Ethernet then because apparently you
object to it on principle. Or any transistors, for that matter.

Dave Boggs, one of the co-inventors of Ethernet, and (for about
10 years) a colleague of mine at Digital, is quite proud of a
letter he once received (back in the early days of Ethernet)
from a physicist-type who had "proved" that it couldn't possibly work.

And I could swear that Dave Clark (or possibly someone else
from his group at MIT) once wrote a paper showing that the
probability of certain kinds of Ethernet failures was roughly
the same as a seemingly "impossible" stochastic failure in
Token Ring networks.

And you forgot to mention a phenomenon called "arbiter failure"
or "synchronization failure" that crops up in almost any digital
system design. This involves a metastable state that, in principle,
could persist for an infinitely long time, but in practice digital
designers know how to make this highly unlikely. I'm told that
David Wheeler, of Cambridge University, uses this as the basis
for an amusing talk titled "Why Computers Don't Work."

[Hmm. All the people I listed above are named "David." What
are the odds of *that* happening by chance?]

Apparently your college tutor didn't do a very good job -- it seems
he forgot to introduce you to one of the most basic principles of
error analysis, which is that when the demonstrable probability of one
sort of error is orders of magnitude less than the demonstrable
probability of other sorts of errors, the first source can be
ignored.
This is exactly the case that arises with properly implemented
stochastic boundary marker generation -- guaranteeing access to
enough bits of entropy to make the probability of collision far
less than the probability of an undetectable network error is very
easy to do in practice.
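
As a concrete illustration of how cheap this is, here is a minimal
sketch (illustrative only -- the 96-bit size and the function names
are mine, not anything specified in this thread) of a stochastic
boundary generator and its birthday-bound collision probability:

    # Minimal sketch of stochastic boundary generation: draw the
    # boundary from a cryptographic-quality entropy source and size it
    # so the birthday-bound collision probability is negligible.
    import secrets

    def make_boundary(bits=96):
        # 96 random bits, hex-encoded; any MIME-legal encoding works.
        return "=_" + secrets.token_hex(bits // 8)

    def collision_bound(n_messages, bits=96):
        # Birthday bound: P(any collision) <= n*(n-1) / 2^(bits+1)
        return n_messages * (n_messages - 1) / 2 ** (bits + 1)

    print(make_boundary())
    print(collision_bound(10**9))   # ~6e-12 even for a billion messages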

Here's an interesting data point: on,
out of 859585299 TCP packets received, 224952 were discarded for
bad TCP checksums (this is *after* all of the bad link-level
and IP checksums are weeded out). Since the TCP checksum
is a 16-bit value, which allows 2^16 = 65536 different values
(actually, 0xFFFF and 0x0000 are equivalent for this checksum),
there's obviously a fairly high likelihood that the system
might have computed a "good" checksum on at least a few "bad" packets.
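
Back-of-the-envelope (assuming corruption scatters packets roughly
uniformly over checksum values):

    # Rough estimate of corrupt packets that slipped past the 16-bit
    # TCP checksum: about 1 in 2^16 corrupted packets will look "good"
    # by accident.
    detected_bad = 224952
    undetected_estimate = detected_bad / 2**16
    print(undetected_estimate)   # roughly 3.4 packets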

I don't know what the "collision rate" for a good 64-bit random
number generator would be, but I would probably trust this more
than I would trust the 16-bit TCP checksum.
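
A birthday-bound estimate, assuming a uniform 64-bit generator and
treating every one of those ~860 million packets as independently
marked, puts a rough number on it:

    # Probability of at least one duplicate among n independently
    # drawn 64-bit values (birthday bound).
    n = 859585299
    p_collision = n * (n - 1) / 2**65
    print(p_collision)   # ~0.02, versus ~3 expected undetected
                         # corruptions for the 16-bit checksum above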

Finally, Yogen Dalal wrote a nice paper many years ago:
author="{Y.K. Dalal and R.S. Printis}",
title="{48-bit absolute internet and ethernet host numbers}",
note="Proceedings published as {\em Computer Communication Review} 11(4)"
that argued that it would be *safer* to generate Ethernet address ROMs
using a good random number generator, instead of the actual practice
of handing out blocks of addresses to vendors and then letting them
generate in sequence, because of the relative chances of failure. (There
are other reasons for block-based allocation of Ethernet addresses;
allocation of blame, for one.)
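
For flavor, the same birthday arithmetic applied to randomly generated
48-bit addresses (the million-station population here is a made-up
example, not a figure from the paper):

    # Chance of any duplicate among randomly generated 48-bit Ethernet
    # addresses, by the birthday bound.
    stations = 10**6
    p_duplicate = stations * (stations - 1) / 2**49
    print(p_duplicate)   # ~0.002 for a million stations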