[FoRK] Non-speech, non-keyboard direct communications will create a new class of humans
J. Andrew Rogers
andrew at jarbox.org
Tue Jun 25 16:31:22 PDT 2013
On Jun 24, 2013, at 11:40 PM, Stephen Williams <sdw at lig.net> wrote:
> On 6/24/13 10:07 PM, J. Andrew Rogers wrote:
>> - They are necessarily "n+k" dimensional constructs for theoretical reasons. There is no 1D, 2D, etc concept bootstrap into non-visualizable number of dimensions because the simplest interesting examples are non-visualizable.
>>
>
> I presume the interesting property is effecting clustering distance in the 1D distance / magnitude, a la Hilbert. You should be able to visualize distribution and density in 1, 2, or 3D. Or perhaps zigzag (space filling!) 1D projection onto 2D or 3D.
It is more complicated than that.
When using a curve as a computational structure, there are some abstract types that are not tractable on curves that share their dimensionality. They can be mapped via a transform function onto a higher-dimensionality curve in which they are more tractable. You could visualize this as a graph of subspaces but it is not explanatory. For every visual depicting a good transform function, there are an unbounded number of obvious but defective transform functions that will produce an identical visual. Don't underestimate the subtlety of the theory; it stumped academics for a couple of decades.
A second reason higher dimensionality is necessary is to guarantee that a set of algorithmic cuts exist that will give a uniform data distribution across e.g. a distributed data structure while preserving spatial locality. This is more obvious.
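As a minimal sketch of that second point (mine, not from the post; Morton/Z-order stands in for whatever curve a real system would generate): cut the curve-ordered keys at data quantiles, so every node gets an equal share of the load while curve order keeps spatially nearby points on the same node.

```python
# Sketch (assumption: Morton/Z-order as a stand-in curve): partition
# 2-D points across "nodes" by cutting their space-filling-curve order
# into equal-sized ranges. The cuts give a uniform distribution per
# node while the curve order preserves spatial locality within each.

def morton2(x, y, bits=16):
    """Interleave the bits of x and y into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def partition(points, nodes):
    """Cut the curve-ordered points into `nodes` equal-sized ranges."""
    ordered = sorted(points, key=lambda p: morton2(*p))
    size = -(-len(ordered) // nodes)  # ceiling division
    return [ordered[i * size:(i + 1) * size] for i in range(nodes)]

pts = [(x, y) for x in range(8) for y in range(8)]
buckets = partition(pts, 4)
# Every node holds a contiguous, spatially coherent run of the curve,
# and the load is exactly uniform.
assert [len(b) for b in buckets] == [16, 16, 16, 16]
```

The cut points are chosen from the data rather than fixed in advance, which is what makes the uniformity guarantee possible under skew.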
Brilliant for building distributed data structures, and more scalable than hash tables with about as much code. Much harder to understand though.
>> - The curves are neither regular nor self-similar. Every case generates a unique curve from an infinite set. We are not talking about a tidy Hilbert curve that is easy to draw or recurse, or a single curve used repeatedly. The only reliable and repeatable property is that the curve fills the space i.e. it maps to the set of natural numbers and can be used to carve up the same.
>>
>
> Sounds like customized Gray-like codes to cause a particular distribution mapping. Create a library of Hilbert-like, Gray-like, hash-mapping, linear, non-linear of various types, domain specific (Earth surface location, solar system gravity wells / orbits / trajectories), and machine learning categorizers, then use them in combination to get explicit (similarity results) and implicit (nearby block storage or compression) clustering.
Libraries of curves are quite brittle for most applications -- terrible skew sensitivity. The approach was largely abandoned as viable by the early 1990s (though heavily patented at the time). It tends to break in real systems with real data sources.
Adaptive and/or algorithmic curve generation addresses many of these issues but you'll want to design a curve description algebra that is very simple or you'll go insane trying to reason about the correctness of the implementation. I can't think of any publicly described curve generators of this type but I've built a few over the years for parallel algorithm work.
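To illustrate the skew sensitivity (my example, not the author's generator): a curve with cut points fixed ahead of time piles a skewed workload onto one partition, while choosing the cuts from the observed keys restores balance. Keys here are just 1-D curve positions; the same failure appears in any dimensionality.

```python
# Sketch (assumption, not the author's method): fixed equal-width cuts
# versus cuts chosen adaptively at observed quantiles, under skew.

import random

random.seed(0)
# Skewed workload: 90% of keys land in the bottom 10% of the keyspace.
keys = [random.randrange(0, 100) for _ in range(900)] + \
       [random.randrange(100, 1000) for _ in range(100)]

def fixed_cuts(keys, nodes, keyspace=1000):
    """Equal-width ranges, chosen without looking at the data."""
    width = keyspace // nodes
    counts = [0] * nodes
    for k in keys:
        counts[min(k // width, nodes - 1)] += 1
    return counts

def adaptive_cuts(keys, nodes):
    """Cut at observed quantiles so each range holds similar load."""
    ordered = sorted(keys)
    size = len(ordered) // nodes
    cuts = [ordered[i * size] for i in range(1, nodes)]
    counts = [0] * nodes
    for k in keys:
        counts[sum(k >= c for c in cuts)] += 1
    return counts

print(fixed_cuts(keys, 4))     # almost all load hoarded by one range
print(adaptive_cuts(keys, 4))  # roughly equal load per range
```

The adaptive counts are only approximately equal because duplicate keys at a cut point all fall on the same side; an algorithmic generator has to reason about exactly these boundary cases, which is where the correctness headaches come from.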
>> - Curves are polymorphic. Operators over pairs of curves dynamically bend the operand curves into other fugly curves suitable for the operation instance. You can't just pick an "easy" curve and stick with it; doing anything useful means being able to dynamically bend it into an unbounded number of other fugly, hyper-dimensional shapes.
>>
>
> Correlation, discorrelation, random hash, circular hash, and other relational algorithms to cause desired clustering or distribution?
What value would a hash have in such a system?
The point of the polymorphism is that you can do it logically without moving data around. Useful for parallelizing some types of complex operations.
> A la Hilbert description vs. implementation? Expected in many cases.
People understand the logical spatial model pretty easily but working with that directly in software is slow.
You can implement it using a fairly straightforward data-model-oblivious algebra where all the operands are 1-dimensional infinite intervals (less weird than it sounds). This can be implemented with epic but non-obvious bit-twiddling algorithms that are blindingly fast. Conceptually, it makes some sense since the dimensionality of space-filling curves is largely an illusion.
Practically, it takes people a while to grok how that pass through the bit-twiddling always produces the correct result for a data model with seemingly so little context.
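A small concrete instance of that 1-D interval view (mine; the post doesn't name its algebra, so Z-order is a stand-in): a 2-D box query reduces to a set of 1-D key intervals on the curve. Enumerating keys and merging runs is the naive version; the fast versions derive the intervals with pure bit manipulation, which is the non-obvious part.

```python
# Sketch (assumption: Z-order as a stand-in curve): decompose a 2-D
# box query into 1-D intervals of curve keys. Naive enumeration shown;
# real implementations compute the intervals by bit-twiddling alone.

def morton2(x, y, bits=8):
    """Interleave the bits of x and y into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def box_to_intervals(x0, x1, y0, y1):
    """Decompose the box [x0,x1] x [y0,y1] into 1-D curve intervals."""
    keys = sorted(morton2(x, y)
                  for x in range(x0, x1 + 1)
                  for y in range(y0, y1 + 1))
    intervals, start, prev = [], keys[0], keys[0]
    for k in keys[1:]:
        if k != prev + 1:           # gap in the keys: close this run
            intervals.append((start, prev))
            start = k
        prev = k
    intervals.append((start, prev))
    return intervals

# A 2x2 box aligned to the curve's recursion collapses to ONE interval;
# shift it by one cell and it shatters into four.
print(box_to_intervals(0, 1, 0, 1))  # [(0, 3)]
print(box_to_intervals(1, 2, 1, 2))  # [(3, 3), (6, 6), (9, 9), (12, 12)]
```

Every geometric operand becomes a set of key intervals, which is why the operations can run over 1-D data with seemingly so little spatial context.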
> Perhaps there is a better way to represent the concepts. Many ideas take time to absorb, like first contact with key parts of calculus (at least the way it used to be taught), but most are just not taught as well as they could. For instance, I am very annoyed that I was taught trig in a rote way that left me able to reason in equations, but without an intuitive, always-accessible mental model. Current best practices produce this at the start.
I am sure it could be taught better, but in practice few people have the foundations. There are a number of relatively obscure theoretical equivalencies, used fluidly here, that take people a while to absorb into their thinking. It would be like jumping into calculus before people are comfortable with algebra. No amount of good presentation will allow them to be immediately facile at manipulating those relationships cognitively.
Remember, the original argument was that people are limited by communication bandwidth. :-) I think we have a hard time using the bandwidth we already have. Most of what passes for "fast communication" is an example of low information content with a lot of shared context padding.