[FoRK] PubSub NG Re: MQTT : Exploring the Protocols of IoT - News - SparkFun Electronics

J. Andrew Rogers andrew at jarbox.org
Fri Feb 27 17:08:20 PST 2015

> On Feb 27, 2015, at 3:21 PM, Stephen D. Williams <sdw at lig.net> wrote:
> I'm working on other things and haven't surveyed the state of this area recently, but given a lot of other background, off the top of my head:
> Security: authentication, authorization, reliability (also see comments below)
> Configuration: Messages, settings
> Data: Bulk, periodic, streaming, subscription
> Power: Timed intermittent, beaconing, low-power monitoring
> Timing: synchronization, clock, time
> Error: best effort vs. reliable, watchdog, recovery / restart, reliable firmware update w/ security and rollback

These are not embedded platforms in the traditional sense but (usually) full-blown Linux servers at the edge in non-server form factors running sustained load. How the sensor data becomes available to Linux is not a significant concern; there is no reason for the transport to go all the way to the sensor. The correct mental model is a server as an endpoint, rarely smaller than 64-bit ARM.

Also, these sensor platforms are increasingly being built using the modern data center model: reliability through cheap, redundant units. If a unit fails, no one cares.

The missing piece is a protocol for a global-scale dynamic fabric that connects these “servers” into local ad hoc computing environments, given the constraints and typical applications of sensor data models and mobile platforms.

> Security ought to be somehow done as an independent layer so that it can be replaced, fixed, done as a separate chip, etc.
> Everything should be encrypted, signed, nonced, timestamped, and strongly protected under one or more security modes.
> It ought to be possible to configure a device in simple shared key mode and alternately strong PKI/GPG modes.  TLS (HTTPS, SSH, etc.) is probably going to be preferred for at least some modes.

No need to over-engineer this; it isn’t the web. Treating it like the web is what is so broken about current protocols. No one needs HTTPS, SSH, GPG, etc. for this. You can always proxy out to another protocol at an endpoint, or tunnel one, if you wish.

Assume you have SHA-256 and AES-128/256 in silicon, because you often do. Basic public-key exchange is in software (RSA or maybe EC). That is the toolset.

> Ideally, should support direct web-browser use or debugging, probably as an option, maybe via a standard mapping proxy.

Wut? Do you realize that your PubSub stream in these models is 1-10 Gbps per source, and that a single *logical* stream may be the aggregate of many sources? It is truly decentralized at a fundamental level because it has to be. You feed your client with a distributed constraint running remotely, constantly moving between resources in the fabric, so that it never really runs in a single, concrete place. Traditional PubSub applications have data rates sufficiently tiny that you can get away with a lot of subtle and not-so-subtle centralization in the protocol. Not so here.

Indeed, what makes this different in the abstract is that centralization shortcuts no longer work. They can’t and don’t exist in fabrics this big. Data is not located at a fixed point in the network, and relativistic effects thwart a God’s-eye view. Nonetheless, you can compose a locally consistent view of a logical stream using decentralized constraints, within the limits of local bandwidth.

Trust me, you get used to it. 
