Interplanetary Internet Notes

Rohit Khare (rohit@uci.edu)
Sat, 13 Nov 1999 18:15:08 -0800


IPN @ U Maryland, 12 Nov 99
[Warning: almost none of this is quoted directly, so all errors are mine. --RK]

www.ipnsig.org // info@ipnsig.org

===presentation 1=====
Claims: there are four technical challenge areas
Deployed planetary internets
Inter-IN "dialog"
Stable IPN Backbone
Interplanetary Nodes

====
"Earthnet" is becoming untethered. Do "edge markets" on Earth
represent the leading edge of the IPN problem?
Significant delay & errors
power/bw constraints
disjoint connectivity
corruption as source of loss (vs. just congestion, over fiber)
Asymmetric channels.

=======================================
Similarities between today's IP and IPN:

Satellite IP:
high bw*delay product
bit-error data loss
asymmetric data rates

Cellular IP: solar power is hard without much sun
self-organizing
episodic/disjoint because of terrain features

Backbone: selective reliability (variable qos, even within corrupted packets)
Store-and-forward == paging == IPN messaging

WDM networks: extreme B*D products. So in both cases, "ping-pong" of
bursts replaces any kind of streaming.
(Rather than viewing it as a bundle of soda straws, view it as
needing an admission control scheme for bursty access.)

========
NEAT slide on the convergence of Space and Internet standards/alphabet soup

There's a massive diagram I need to find online, for the "IPN
Representative topology" -- "A Day in the life of an IPN Packet"

=======
There's an open IPN SIG under ISOC ($35/yr for ISOC, of course)
IPN Project: JPL, MITRE, GST, Sparta
USC/UCLA/UDel/UMd/Caltech listed as academic implicatees; they're
looking for places to present seminars on the topic and jumpstart
research interests.
Combination with IRTF?
CCSDS: the ISO-based Consultative Committee for Space Data Systems.

"We need to engage the public, but there's a fairly high "nut content""
Outreach to the The Planetary Society, California Science Center.

========== presentation 2 ===========
Important aspects:
Light-times are long
buffers are finite on gateways
Contact times are short (in terms of RTTs) (window of visibility) --
days/weeks between things coming into view
Transaction sizes will be small compared to the B*D product (see the
sketch below)
Power, weight, and volume are the figures of merit to optimize for.
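
[Back-of-envelope on that B*D point, with numbers I'm inventing for
illustration (1 Mbit/s link, average Earth-Mars distance), just to
show how tiny a transaction is relative to the pipe:]

  # Rough bandwidth*delay arithmetic for an Earth-Mars link.
  # All figures are assumed for illustration, not from the talk.
  C_KM_S = 299_792.458                 # speed of light, km/s
  mars_km = 225e6                      # "average" Earth-Mars distance (assumed)
  one_way_s = mars_km / C_KM_S         # ~750 s, about 12.5 minutes
  rate_bps = 1e6                       # assume a 1 Mbit/s deep-space link
  bdp_bits = rate_bps * 2 * one_way_s  # bits the pipe holds over one RTT
  txn_bits = 8 * 10_000                # a 10 KB command transaction (assumed)
  print(f"one-way light time: {one_way_s/60:.1f} min")
  print(f"B*D product: {bdp_bits/8/1e6:.0f} MB; transaction fills {txn_bits/bdp_bits:.4%} of it")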

Note: laser space comm reaches exabit capacity, which will be
"unbufferable", requiring exabit to yottabit buffer sizes. Ummm...
for now we don't think we could fill such a channel... well, why not
broadcast all the web, all the time. [RK: the ULTIMATE delay-line
memory; how many yottabytes would be in flight at any time!...]
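
[To put that delay-line quip in numbers -- my arithmetic, assuming the
exabit rate above and a Mars-distance pipe:]

  # Data "in flight" on a hypothetical 1 exabit/s laser link to Mars.
  rate_bps = 1e18            # 1 exabit per second (the capacity claimed above)
  one_way_s = 750            # ~12.5 light-minutes, Earth-Mars (assumed distance)
  in_flight_bytes = rate_bps * one_way_s / 8
  print(f"{in_flight_bytes:.1e} bytes in flight one-way")   # ~9.4e19, i.e. tens of exabytes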

"needs to be flow control/backpressure because of buffer scarcity"
BUT reactive control is hard, since RTTs are so large.

Things need to be as NONinteractive as possible.

====
First Round Conclusions

* USE IP in low-latency work, i.e. lunar exploration, satellite to Europa, etc.

* BRIDGE high latency environments with an IPN backbone. Today's Deep
Space Network is very "earth-centric"

* GATEWAY between the high- and low-latency environments

========
BUNDLE SPACE: "the core of IPN operations"
This is the postal model. Or, the "Amazon" model of data delivery.
"We're looking for people to suspend disbelief and get mission people
to reimagine their application area"

In the current Deep Space Network, you can often have your
"reservation" pre-empted; it sounds much like the system for
reserving telescope time (because it basically is :-)

["Sorry, out of buffer. Recompressing with higher JPEG loss...
sending thumbnails only... turn image-loading off before proceeding"]

One fellow commented: all existing applications fall into several
classes: telepresence and information access (web browsing). So what
we need is latency-aware UIs. Cache hits, say.

[RK: is there a 'LEGAL' requirement for wiretapping all
communications? I assume the current management process expects that
every single bit from space is recorded to some media, before ANY
processing begins. Is that appropriate for a packet flow? Doesn't
every researcher deserve access to any downloaded resource? I.E. Will
we ever want 'end-to-end' access to researcher-desks, or only via a
Library of Congress-style common permanent versioned WebDAV cache?]

[Question: when the hell would you ever FTP a file from Mars TWICE?
I.e. you download it, but it's now in the public cache, so ALL
further hits are earth-local. Bottom line, what's the point?]

[RK: will there be planetary names? pTLDs? .pluto? .pluto.sol?]

=====
Design Choices for a Network of Internets:
* NAMES are the means of reference
* LATE-BINDING of names to addresses. (separate 'domains' for each Internet)
* INDIRECTION depends on intermediate relays with some common
end-to-end 'bundle' mechanism.
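
[A strawman of what those three choices might look like together --
entirely my own sketch, not the project's design: the bundle carries
only the destination NAME end-to-end, and each internet binds it to an
address only locally, relaying by name otherwise.]

  # Strawman late-binding forwarder: names travel end-to-end, addresses stay local.
  LOCAL_NAMES = {"lander3.mars.sol": "10.9.8.7"}   # hypothetical local bindings
  NEXT_GATEWAY = "earth-mars-gw.sol"               # hypothetical next IPN gateway

  def deliver(addr, bundle):
      print("delivering", bundle["dest_name"], "to", addr)

  def relay(gateway, bundle):
      print("relaying", bundle["dest_name"], "via", gateway)

  def forward(bundle):
      addr = LOCAL_NAMES.get(bundle["dest_name"])
      if addr is not None:
          deliver(addr, bundle)        # the only place the name becomes an address
      else:
          relay(NEXT_GATEWAY, bundle)  # still addressed by name while in transit

  forward({"dest_name": "lander3.mars.sol", "payload": b"..."})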

[Note: no one is proposing active networks for this... odd]

Need MX-type records to local termination points ("the mars local
proxy to earth")

[ICANN ain't seen nothing yet with the policy constraints of
light-cone delays :-) what if the aliens land and mail in a
registration for grays.mars.sol, and find themselves preempted by
light delay since the simultaneous arrival of the first
sighting-photos allowed terrestrial domain squatters to complete the
transaction before them.]

[DHCP?? are you kidding me? We can't even account for each machine we
send out? No 99.9999%-reliability trained engineer is going to toss
in zeroconf just "for cleanliness"]

PlaNATs? No, there are a lot more than 9 planets: L3 points, solar
orbits. Vint apparently has an opinion about whether there should be
a sol-wide unique addr, or replicated (overlapping) planetary
namespaces.

"there's always some fate-sharing that goes on when you use a
(stateful) gateway" -- so a reliable rover shouldn't garbage collect
until earth-acked, in extremis.
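
[That rule is simple enough to write down -- my paraphrase of the
fate-sharing point, nothing official:]

  # Strawman custody retention: the rover garbage-collects a bundle only
  # after the far side (earth) has acknowledged custody of it.
  pending = {}                          # bundle_id -> data we still own

  def send_with_custody(bundle_id, data, transmit):
      pending[bundle_id] = data         # keep our copy; storage is the price of reliability
      transmit(bundle_id, data)

  def on_custody_ack(bundle_id):
      pending.pop(bundle_id, None)      # only now is it safe to reclaim the buffer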

Lots of open debate about whether the moon is part of the Earth
domain: arguments over visibility, power/awake resources, maximizing
bandwidth.

[How does this whole effort depend on standard metadata for space
telemetry? I mean, I see a control hypothesis as follows: solve a
smaller version of the problem, which is just to define a standard
transmission format (read: 822). Then you can get pretty great
interoperability gains well short of deploying space gateways (just
treat them as near-earth extensions of the Deep Space Network,
maintaining a NASA hub-and-spoke architecture, perhaps with
deep-space, but dumb, 'repeaters' to combat R^2 bandwidth losses).]

[Security: will we allow competing nations to share a secured gateway?]

===
One Name-Space with Late Name-to-Address Bindings

[Looks like they're re-inventing Host Identification Payload.]

Must minimize DNS traffic -- no interplanetary zone transfers
Wildcarded A-records (as in: give this gateway's IP addr for any host
under mars.sol when you're on earth, for a legacy IP application)
A new indirection I-record.
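
[A strawman of the wildcard trick, with invented names and TEST-NET
addresses: on earth, anything under mars.sol resolves to the local
IPN gateway so a legacy IP application at least has somewhere to
connect.]

  # Strawman earth-side resolver with a wildcard for the whole mars.sol domain.
  ZONE = {"archive.earth.sol": "192.0.2.10"}   # ordinary records (addresses invented)
  WILDCARDS = {".mars.sol": "192.0.2.1"}       # hypothetical earth-side IPN gateway

  def resolve(name):
      if name in ZONE:
          return ZONE[name]
      for suffix, gateway in WILDCARDS.items():
          if name.endswith(suffix):
              return gateway            # the legacy app talks to the gateway, not to Mars
      raise KeyError(name)

  print(resolve("rover1.mars.sol"))     # -> 192.0.2.1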

[C'mon, there are NO legacy applications that work over this system. ]

===================
QUOTE: "from Telephony to the Pony Express"
[yes! yes! they hit my preferred metaphor! 19th century internetworking...]
[metaphor attributed to "Dave" at the "Saddleback meeting"]

==============
BUNDLE SERVICES
end-to-end xfer
reliability/qos/security coming up
Carries names from end-to-end
"weather reports" on queue lengths, projected data flows [EEK!
Signaling bad! sorry, my knee has stopped jerking now...]
Carries some "transfer identifier" back to the application. [Duh! an
eTag URI...]

PARCEL TRANSFER PROTOCOL
only end-to-end metadata is the name
potentially applicable to new applications
not a transaction, no commit phases
A SINGLE REQUEST, SINGLE-RESPONSE system [please, more imagination :-]
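
[My own sketch of that single-request/single-response shape, with the
"transfer identifier" from the bundle-services list handed straight
back to the application while the parcel grinds across the backbone
(names and API invented):]

  # Strawman parcel-transfer API: one request, one (much later) response.
  import uuid

  outstanding = {}                       # transfer_id -> (dest name, parcel name)

  def request(dest_name, parcel_name):
      """Queue a named parcel request; the only end-to-end metadata is the name."""
      transfer_id = str(uuid.uuid4())    # the handle the application keeps
      outstanding[transfer_id] = (dest_name, parcel_name)
      return transfer_id                 # returned immediately, long before any RTT elapses

  def on_response(transfer_id, payload):
      dest_name, parcel_name = outstanding.pop(transfer_id)
      print(f"{parcel_name} from {dest_name}: {len(payload)} bytes")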

They call bundlespace Layer 5A or Layer 7-- ('minus minus')
Nodes may not be able to construct a whole bundle (BS!) and need proxies.
3 Flavors of IPN Nodes:
Bundle Agent -- builder and consumer of bundles. Can be an OS service.
Gateway -- custody transfer and routing between IPN domains
Relay -- mitigation for R^2 effects, no custody transfer
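
[The three flavors, rendered as a data structure just to pin down the
distinction I heard -- custody is what separates agents and gateways
from mere relays (my framing, not theirs):]

  # Strawman taxonomy of IPN node roles.
  from dataclasses import dataclass

  @dataclass
  class NodeRole:
      name: str
      takes_custody: bool       # accepts responsibility for bundles?
      builds_bundles: bool      # can originate/consume whole bundles?

  BUNDLE_AGENT = NodeRole("bundle agent", takes_custody=True, builds_bundles=True)   # can be an OS service
  GATEWAY = NodeRole("gateway", takes_custody=True, builds_bundles=False)            # routes between IPN domains
  RELAY = NodeRole("relay", takes_custody=False, builds_bundles=False)               # only mitigates R^2 losses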

[Question: does the Earth indeed have cache capacity >> the entire
IPN? I believe so]

[Not surprisingly, I ended up in a debate with a "don't dare break
legacy apps!" fellow, whence the podium took Klensin's name in vain.
Namely, I agreed with my colleague (and nevertheless, friend :-) and
amplified that the last fifteen years of SW Engg research has proven
how futile it is to mask, or even tolerate, latency without rewriting
the application.]

[I'm having a distinctly grandiose feeling I could smash this volley
back over the net: I see an embedding of most of my thesis ideas in
this problem, much as I suspected -- and that I'm intellectually
armed to body-slam the problem to the mat. The economics angle may be
the key added value (ba-dum-bump). For about a month now I've been
kicking around Interplanetary Software Engineering as an umbrella for
many of my ideas, including paranoid security and xenophobic
ontologies and profligate computing (waste disk, waste bw). Three
words: SolWideWeb]

[I'm tuning out, because they're discussing this whole hacked-up
how-many-frat-boys-can-we-stuff-into-a-paradigm attempt to get a
*legacy FTP* client to get a file from Mars... sigh
Not that I'm above thinking of end-user legacy applications. A
Day in the Life of a Martian Day Trader is a damn, damn good scenario
to think about: when do trades execute, what ticker delays are
acceptable, where might the "gotcha" rats be hiding that sabotage the
entire order by another RTT or two?]

[Blame this one on Scott: it's the Energizer network: it just keeps
hanging, and hanging, and hanging... :-]

[This *must* become the first Transfer-Layer Network, abandoning the
Transport-Layer Network.]

[Someone mentioned SIP. I don't know why]

"intra-internet communications" :-)

[I launched into a mini-rant about the fundamental difference between
a Transport-Layer Net and a Transfer-Layer Net. Yes, TCP has value,
but ONLY in the intra-internet case. Nomadic edge devices -- whether
on earth or mars -- will not interwork on lower layers; they will be
the first generation of solely-Transfer-Layer-interoperable tools.]

===========================================
Howard Weiss: Security Considerations for the IPN
SPARTA Inc

Security of "user data" -- RK believes it's isomorphic to IP today.
Security of Backbone state -- RK wants to know exactly what the
DELTAs are: what's new vs. just a carrier-grade fiber backbone.

Lots of the text on this talk is boilerplate to crypt-heads, but
still very useful for this community. He was very honest about that...

Basic 'Duh' Moment: the conclusion that IPsec won't work -- that is,
transport-layer security or below -- and we have to move to secure
email (transfer-layer). Can't take the RTT hit for many of these
handshakes.

How long do keys last? still session-by-session? this little
conversation was unclear. Something about "this is really tunnel
mode".

Some BS around the front of the room about why there are "bandwidth"
and "processing power" issues that could prevent the use of PK or
D-H. Criminy, just set up a few million keys at once, then, and pay
the RTT every so often. But more to the point, lofting sats that
can't do RSA efficiently already sucks -- how are they going to do
image compression? BTW, this is a wonderful argument for RAW-style
FPGAs throughout the sat. Just ship it with a few million spare gates
for future expansion. Kind of a counterpoint to the Active Networks
thinking... the motif recurs again, at an upper layer.

Howard concluded that S/MIME or PGP would be best.

Main delta on the backbone security: we don't have "premises
security", as I would put it: no ability to repair or hit the
emergency reboot switch, so to speak. See RK's old Teledesic Security
Model memo....

There is very little discussion of who the real user community is.
One IETF disease is the "hypothetical user". For now, and the entire
time horizon of the first-generation IPN deployments, the ONLY
authorized users will be EXTREMELY motivated to get brand new
software and tools. It's not for Joe Doaks to telnet to Pathfinder
for the heck of it... This may be some of the "nut content" referred
to by an earlier speaker: imagining human colonies and so on. But for
now, this really is an ENGINEERING upgrade to the systems for
communicating back to, essentially, JPL (and maybe its Russian
equivalent)... and then back out to the Earthnet over IP.

==========
Backbone notes

Voyagers are now 8 light-hours away. At this point, it moves one
light-second in an RTT :-)
We'd like to operate this out to the Oort Cloud, which is O(10^5)
seconds. (3 hours).
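
[Quick light-time arithmetic, with rough distances of my own choosing,
just to keep the orders of magnitude straight:]

  # One-way light time for a few solar-system distances (AU values are rough).
  AU_LIGHT_SECONDS = 499.0              # 1 AU is about 499 light-seconds

  def one_way_seconds(distance_au):
      return distance_au * AU_LIGHT_SECONDS

  for body, au in [("Mars (close)", 0.5), ("Mars (far)", 2.5),
                   ("Jupiter", 5.2), ("Pluto", 39.5), ("Voyager, ca. 1999", 60)]:
      s = one_way_seconds(au)
      print(f"{body}: {s:,.0f} s one-way, {2 * s / 3600:.1f} h RTT")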

Comparison chart (IPN vs. today's Internet):
Delay: 10-10,000 sec vs. <1 sec
Topology: known, basically fixed -- BUT! there's constant relative motion!
Deployment cost: >>
Operations cost: >>, b/c electricity is much more expensive

Implied Differences:
*Bandwidth is expensive. [Really? what are the limits of laser-coding
and ultra-low-power semiconductors?]
*Interactive protocols don't work. Even reliability and/or order
preservation may not be cheap. [Gotta move to MUCH more error-visible
APIs -- sketched below -- so you skip ahead to valid records, etc. Even
delivering "corrupt" packets for video codecs, etc., even over
"reliable" *TP]

[RK: "rate control" in this context at most means queue admission
control. Appears to be more confusion in the terminology flying
around.]

[Question: will each sat know enough about celestial mechanics and
trusted telemetry (i.e. no Byzantine failures) to know the "exact"
network map now and into the future? This may make source-routed UUCP
much, much more practical. And if not today, why not by 2040? So in
the interim, just load the routing tables in "by hand" from Earth,
which damn well knows how to calculate all those maps ("off to Mars
now, little packet, and when you get there, wait an hour and take the
next flight to Ceres..."). I'm also assuming very reliable spaceborne
clocks! == positions! Timeout determination may depend on Phy-layer
scheduling (laser tracking error/occultation). Also, there's a
knapsack problem in here (assuming my economic 'postage' model is in
here): how to pack the optimal queue into the finite transmission
window (including the option of REtransmitting very valuable
messages for FEC).]
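
[The knapsack bit is easy to make concrete. A greedy value-density
sketch under an invented 'postage' scheme -- nothing the speakers
endorsed; a real scheduler would also weigh deadlines, FEC copies, and
custody constraints:]

  # Greedy sketch: pack the most valuable messages into a finite contact window.
  messages = [                          # (name, size in bits, "postage" value) -- all invented
      ("science_image_7", 8_000_000, 90),
      ("housekeeping", 100_000, 20),
      ("nav_update", 50_000, 80),
      ("raw_spectra", 20_000_000, 60),
  ]
  window_bits = 10_000_000              # capacity of this pass's visibility window

  def pack(msgs, capacity):
      chosen, used = [], 0
      for name, size, value in sorted(msgs, key=lambda m: m[2] / m[1], reverse=True):
          if used + size <= capacity:   # take items in order of value density
              chosen.append(name)
              used += size
      return chosen, used

  print(pack(messages, window_bits))    # -> (['nav_update', 'housekeeping', 'science_image_7'], 8150000)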

Terminology to learn and live as one's own: "occultation". This could
even be useful in explaining nomadicity on the ground (e.g. in NFSv4
vs DAV comparisons).

What ARE the CCSDS link-layer protocols?

Wow. I just realized that in one sense my very first 'commercial' app
back in college was already a Transfer-layer app, trying valiantly to
fetch email as a logical object over shell connections as a
transport. I guess I could fake a backstory that I've cared about
Transfer-layer and standards processes all my life :-)

Multicasting? What's the use case, again? the probes out there may
need intra-planetary control channels, but Sol-wide?? [@@READ@@ Look
for MDP, multicast dissemination protocols. Also lottery scheduling
(Kleinrock's old student, Yechiam Yemini)]

[Hey -- notice the complete blind-spot for market-mechanisms: I may
have a thesis corner carved out yet: the economics of
sat-munchkins... Actually, I did give a minirant on microcurrencies
at some point]

"the big challenge at the transport layer is how can we do flow
control and congestion control over this environment" -- I suspect
this question may be somewhat obsolete at the "packet" granularity.

=============
Durst's closing talk
"Uses the domain name system as a common name space for locating
nodes" -- ummm.... maaayybeee. It's a long time into the future...

Contexts:
* Single lander with an IPN gateway to a real or virtual internal net
* Small number of cooperating nodes (e.g. single rover, single lander)
* Orbiter-to-surface coordination (e.g. sample return)
* Multiple beyond-line-of-sight missions connected by low-orbit comsats
* planet-stationary sats for relay and gateway
* Spacecraft onboard LANs

"We talked to Karen Sollins yesterday, and think URNs may solve the problems.."

Gave my little speech about conflation of routing & hostid within IP;
and organization-name and directory-service in DNS -- which are being
rent apart by growth, so please don't be backwards-compatible with
the Ancien Regime

Functions:
* Science data & telemetry return (kind of thought of as streams &
files, respectively)
* Command and Control of in situ elements -- long link to earth,
short intra-mission links
* Telescience/Virtual Presence
-- initially, just back-haul to earth
-- second, in support of robotic control or robotic exploration
-- finally, in support of human-in-situ control of robotic explorations

Wired / MANET / Cellular comparison
Power: not critical / important / overriding
Signal-to-noise: fiber-clean / low / very low
Infrastructure: fixed / mobile / satellite

"We don't have an FCC on Mars... yet!"

EFFECTS:

Phy-Layer:
Solar conversion is our primary power source for the foreseeable future.
- Mars orbit gets less than 1/2 Earth's solar flux: 590 W/m^2 vs 1370 W/m^2
-- dust limits sun, erodes panels, seasonal variations

Spectrum coordination is still necessary.
-- e.g. 400MHz is overly popular due to diffraction effects (eases
beyond-line-of-sight comm) and moderate free-space losses. Q: What
about Mars' ionosphere?

Tracking antennas can help
vibration-tolerant phased-array antennas for wideband communications
to sats from slow-moving mobile units [e.g. Teledesic terminals]

Link Layer:
managing poor SNR
Different coding schemes make different tradeoffs: delay budget
(convolutional is bad), concatenated is clean-or-out (Reed-Solomon
doesn't degrade), or Turbo codes (best for long code blocks). Perhaps
even agile coding?? (toy chooser sketched below)
Reservation at the MAC layer? Closed-loop robotic control may require
controlling the ENTIRE stack to manage delay budgets.
New APIs, of course, with even more data passed back up than the
current Congestion Manager proposal.
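
[A toy rendering of the "agile coding" thought, driven only by the
tradeoffs quoted above; the thresholds are invented placeholders, not
engineering numbers.]

  # Toy agile-coding chooser based on the tradeoffs noted in the talk.
  def choose_code(delay_budget_ms, block_bits):
      if block_bits > 10_000:
          return "turbo"                        # said to be best for long code blocks
      if delay_budget_ms > 100:
          return "convolutional"                # tolerable only when the delay budget is loose
      return "concatenated (Reed-Solomon)"      # clean-or-out behavior otherwise

  print(choose_code(delay_budget_ms=50, block_bits=2_000))     # -> concatenated (Reed-Solomon)
  print(choose_code(delay_budget_ms=500, block_bits=65_536))   # -> turbo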

Network Layer:
fixed+mobile nodes, so it's at least minimally interesting. Rovers,
balloons are slow, orbiters and UAVs are much faster. [reminds me
of the F-16 flying by a tank-column screw case from PAGERNET
proposals at UCI]
Self-configuration [!!!] zeroconf stuff, so far;
auto-re-hierarchical-organization is a little more novel.

Transport Layer:
mixed-loss environment. Error-trend indications. Adaptive power
control (emitted RF).
Greater than 100:1 asymmetries must be accommodated -- without
degrading the high-rate side and STILL getting use out of the :1 side!

Application Layer:
Service location (@@suresh singh, colorado, SLIRP?)
network management

[RK: note that the space-specific concerns diminish as you ascend the
stack, but the cross-cutting issues raised by latency are overriding.
That is, we *definitely* have a lot of work ahead of us in software
engineering: event-based systems, speculative execution, economics,
bw-awareness.]