[FoRK] Non-speech, non-keyboard direct communications will create a new class of humans
J. Andrew Rogers
andrew at jarbox.org
Mon Jun 24 00:12:45 PDT 2013
On Jun 23, 2013, at 11:34 PM, Stephen D. Williams <sdw at lig.net> wrote:
> I agree, but I also feel the narrowness of linear speech and text to be a severe bottleneck. Why not support mental drawings too? 3D structures? We're hampered by tools and access / display convenience now.
In this particular case, visualizations won't help. Can you visualize 7-dimensional computational structures? Most humans can't. But competent computer scientists can eventually work such things out on paper. It just doesn't happen quickly, but some problems can't be solved any other way. :-)
I think you seriously underestimate the practical complexities. It is not a presentation problem in many cases; the domain is intrinsically and exquisitely complex. In the area I work in, everything is pervasively of dimensionality higher than anyone can visualize or manipulate mentally. A few freaks like me fake it well enough to get something done.
> It wouldn't be difficult to leverage past knowledge and experience, along with in and out editing capability, in an efficient way. Just marking terms that you understand some, fair, well, or expertly would be enough to assist in determining what to auto-request more detail on. Right now, we have slow, laborious, half-duplex (or fractional duplex in groups), thought to English, English to thought translation. We're going to move past that in multiple ways as soon as we have digital output and digital / graphic input.
The ultimate limitation is that you, literally, cannot visualize and manipulate an appropriate level of complexity. Nor can I. You overestimate the abilities of a monkey brain.
Low-value stuff we already communicate pretty well.