[FoRK] Non-speech, non-keyboard direct communications will create a new class of humans

Stephen Williams sdw at lig.net
Mon Jun 24 23:40:40 PDT 2013


On 6/24/13 10:07 PM, J. Andrew Rogers wrote:
> On Jun 24, 2013, at 1:06 AM, Stephen D. Williams <sdw at lig.net> wrote:
>> Space filling curves are reasonably understandable in progression from 2D to 3D to ND.  I don't find it that difficult to imagine using space-filling curves for clustering or N-dimensional to 1-dimensional key mapping or similar operations.  I'm sure your case is far more sophisticated, but space filling curves are a perfect example of something awkward to explain verbally but easy to understand visually.  Isn't the usual progression to understand these kinds of things in 1D, 2D, 3D visually and intuitively, then map to symbol space and operations, then extrapolate to higher dimensions there?  This seemed very straightforward in machine learning class also: While you may be fitting discriminators in arbitrary n-dimensional space, you can illustrate what is going on in 2D or 3D.
>
> Examples like the above are selected for simplicity -- I use them myself sometimes -- but not for conveying any deep understanding. Almost every concept of value is factored out of the simple cases like the classic Hilbert curve. Generalized curves useful for computation are not easily grokkable and here are some reasons why:
>
> - They are necessarily "n+k" dimensional constructs for theoretical reasons. There is no 1D, 2D, etc concept bootstrap into non-visualizable number of dimensions because the simplest interesting examples are non-visualizable.

I presume the interesting property is that clustering in N dimensions carries over into 1D distance/magnitude, a la Hilbert. 
You should be able to visualize distribution and density in 1, 2, or 3D, or perhaps as a zigzag (space filling!) projection 
of the 1D ordering onto 2D or 3D.
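
As a toy illustration (a sketch of my own, not JAR's construction), here is the standard rotate-and-flip Hilbert mapping 
for a 2D grid in Python; grid neighbors usually land on nearby 1D keys:

def rot(n, x, y, rx, ry):
    # Rotate/flip a quadrant so the sub-curve lines up with its parent.
    if ry == 0:
        if rx == 1:
            x, y = n - 1 - x, n - 1 - y
        x, y = y, x
    return x, y

def xy2d(n, x, y):
    # Map (x, y) in an n x n grid (n a power of two) to its distance d
    # along the Hilbert curve.
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        x, y = rot(n, x, y, rx, ry)
        s //= 2
    return d

# 2D neighbors mostly stay close in 1D:
for p in [(3, 4), (3, 5), (4, 4), (4, 5)]:
    print(p, "->", xy2d(8, *p))

On an 8x8 grid those four adjacent cells map to keys 31, 28, 32, and 35: clustered in 1D, which is the property that makes 
such curves useful for storage layout.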

For visualization of multidimensional data we need more innovation, but there have been some successes.  And we do it to 
some degree naturally: it's not difficult to imagine 3D data changing over time, or carrying different temperatures, 
materials, pressures, or motion.
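
A minimal sketch of that kind of display, assuming matplotlib and made-up data: 3D positions with a fourth dimension 
(temperature) mapped to color:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 4D data: x, y, z positions plus a temperature value.
pts = np.random.rand(200, 3)
temp = 20 + 80 * pts[:, 2]      # made-up temperature field

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], c=temp, cmap="inferno")
fig.colorbar(sc, label="temperature")   # the 4th dimension as color
plt.show()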

>
> - The curves are neither regular nor self-similar. Every case generates a unique curve from an infinite set. We are not talking about a tidy Hilbert curve that is easy to draw or recurse, or a single curve used repeatedly. The only reliable and repeatable property is that the curve fills the space i.e. it maps to the set of natural numbers and can be used to carve up the same.

Sounds like customized Gray-like codes designed to produce a particular distribution mapping.  Create a library of key 
mappings: Hilbert-like, Gray-like, hash-based, linear and non-linear variants of various types, domain-specific ones 
(Earth surface location, solar system gravity wells / orbits / trajectories), and machine-learning categorizers.  Then 
use them in combination to get explicit clustering (similarity results) and implicit clustering (nearby block storage 
or compression).
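
A minimal sketch of that library idea, with hypothetical names of my own: interchangeable key mappings, each inducing a 
different ordering (and hence different clustering) of the same 2D points:

import hashlib

def gray(i):                    # binary-reflected Gray code
    return i ^ (i >> 1)

def morton2d(x, y):             # Z-order: interleave the bits of x and y
    d = 0
    for b in range(16):
        d |= ((x >> b) & 1) << (2 * b)
        d |= ((y >> b) & 1) << (2 * b + 1)
    return d

def scatter_hash(x, y):         # deliberately destroys locality
    return int.from_bytes(hashlib.sha256(b"%d,%d" % (x, y)).digest()[:4], "big")

library = {
    "linear": lambda x, y: y * 16 + x,      # row-major: clusters by row
    "morton": morton2d,                     # clusters by quadrant
    "gray":   lambda x, y: gray(y * 16 + x),
    "hash":   scatter_hash,                 # spreads load, no clustering
}

points = [(0, 0), (1, 0), (0, 1), (8, 8), (9, 8)]
for name, key in library.items():
    print(name, sorted(points, key=lambda p: key(*p)))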

>
> - Curves are polymorphic. Operators over pairs of curves dynamically bend the operand curves into other fugly curves suitable for the operation instance. You can't just pick an "easy" curve and stick with it; doing anything useful means being able to dynamically bend it into an unbounded number of other fugly, hyper-dimensional shapes.

Correlation, decorrelation, random hashing, circular hashing, and other relational algorithms chosen to produce the 
desired clustering or distribution?
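
If "circular hash" means consistent hashing (my reading, not necessarily JAR's), a minimal sketch: keys and nodes hash 
onto a ring, and each key lands on the next node clockwise, which spreads load while keeping remappings local when 
nodes come and go:

import bisect, hashlib

def ring_pos(s):
    # Position on a 2**32 ring.
    return int.from_bytes(hashlib.md5(s.encode()).digest()[:4], "big")

class Ring:
    def __init__(self, nodes, vnodes=64):
        # vnodes: virtual nodes per physical node, to even out the spread.
        self.ring = sorted((ring_pos("%s#%d" % (n, i)), n)
                           for n in nodes for i in range(vnodes))
        self.positions = [p for p, _ in self.ring]

    def node_for(self, key):
        i = bisect.bisect(self.positions, ring_pos(key)) % len(self.ring)
        return self.ring[i][1]

r = Ring(["a", "b", "c"])
print({k: r.node_for(k) for k in ["user:1", "user:2", "user:3"]})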

>
> And if you can get through that, you'll notice that the infamously inscrutable implementations look nothing like what was just described because they leverage obscure theoretical equivalencies for the sake of computational efficiency. Which is also grok-resistant on its own. The set of concepts is actually elegant and compact but this does not make communicating them any easier because for the vast majority of people, even those with CS degrees, several of the concepts are novel.

A la the Hilbert curve's description vs. its implementation?  That gap is expected in many cases.

>
>
> Absorbing a new concept is slow. Absorbing several new concepts at once is even slower. Actually being able to mentally synthesize constructs from several new concepts is slower yet. The limitation is not bandwidth, it is the rate at which the brain can integrate new concepts.

In addition to how the communication is represented and organized, absorption speed usually depends on concept 
vocabulary size (in the internal mental-representation sense), pattern-matching ability, and the ability to break 
down and recombine ideas, i.e. creativity.

>
>
>
>>> The ultimate limitation is that you, literally, cannot visualize and manipulate an appropriate level of complexity. Nor can I. You overestimate the abilities of a monkey brain.
>>>
>> Maybe.  Or maybe we haven't come up with a suitable visualization.  In any case, you just brought up the straw man of communicating n-dimensional problems.
>
> It wasn't a straw man, I've had to teach it for years. Nor is the complexity the fact that it is n-dimensional. The complexity is that to understand it, you need to grok several concepts that most people are not familiar with and that greatly slows understanding.

Perhaps there is a better way to represent the concepts.  Many ideas take time to absorb, like first contact with key 
parts of calculus (at least the way it used to be taught), but most are just not taught as well as they could be.  For 
instance, I am very annoyed that I was taught trig in a rote way that left me able to reason in equations, but without 
an intuitive, always-accessible mental model.  Current best practices build that model from the start.

>
> If you narrow the case to things that are already bloody obvious to almost everyone or nearly so, your argument isn't all that compelling. Information complexity measures account for the relative similarity of context models on both sides of the communication channel. You seem to be ignoring that part of the equation.

Not entirely.  And in a different sense, just about everything is multidimensional.
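
A crude way to see that last point, using zlib's preset-dictionary feature as a stand-in for a shared context model: 
the more context the receiver already shares with the sender, the fewer bytes the same message costs on the wire.

import zlib

def wire_bytes(msg, shared=None):
    # zdict is a preset dictionary: context both ends hold before sending.
    c = zlib.compressobj(level=9, zdict=shared) if shared else zlib.compressobj(level=9)
    return len(c.compress(msg) + c.flush())

context = b"space-filling curves, Hilbert curves, Gray codes, n-dimensional key mapping, clustering"
message = b"Gray-code key mapping gives n-dimensional clustering like a Hilbert curve"

print("no shared context: ", wire_bytes(message), "bytes")
print("with shared context:", wire_bytes(message, context), "bytes")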
  

sdw


