[FoRK] Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
J. Andrew Rogers
andrew at jarbox.org
Fri Nov 14 15:38:56 PST 2014
> On Nov 14, 2014, at 12:19 PM, Stephen D. Williams <sdw at lig.net> wrote:
> On 11/14/14, 10:22 AM, J. Andrew Rogers wrote:
>>
>> I know the Watson team is aware of this theoretical limitation because they asked me if I could fix it. (I preferred to do something else.)
>
> I would have been very attracted to that kind of problem, although it would be a challenge to compete with the talent pool they should be attracting.
It was rote work rather than a research project, not that interesting. They needed someone who could design obscure algorithms.
>> The difference is that almost all algorithms and data structures in computer science are built from graph-like representational primitives. A set of data structures and algorithms built from non-integer, non-graph (abstract) primitives is not something you can just look up or download, but it is necessary.
>
> From a certain point of view, everything looks like a graph, matrix, triples or quads, etc. Currently, we build everything in the logical, structured world and encapsulate the NN and other fuzzy bits.
You are not quite getting what I mean. There is a very interesting problem buried here that requires changing the architecture of how you solve these kinds of problems. It is related to why platforms like Watson and other quasi-AI platforms are poor at certain types of reasoning.
The default conceptual primitive in data structures and algorithms is the finite bit string, i.e. the classic 32-bit integer and similar. While there are no limits to the theoretical expressiveness of algorithms and data structures composed from these primitives, there *are* limits to the tractability of some algorithms as a consequence of using that primitive. It is analogous to being forced to use vanilla silicon when the algorithms you are running really need a quantum computer. For a few algorithm families, notoriously some spatial algorithms but also some graph problems, the classic integer primitive limits us to toy cases.
Is there a general alternative to the integer primitive? While virtually never considered, the answer turns out to be yes: intervals of countably infinite length. It changes the way your data structures and algorithms are built — things like simple order relationships go away, integer operations mean something different — but it is also efficiently representable on computers designed for integers. (Obviously they have a finite description sufficient to precisely compute operations over them even if the literal value is not enumerable.)
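Rogers does not spell out what such a primitive looks like in practice, but the key properties he names — unbounded extent, a finite description sufficient to compute over, and the loss of a simple (total) order — can be sketched. A minimal illustration in Python, with all names hypothetical and not drawn from his actual work:

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class Interval:
    """A half-open interval [lo, hi). hi may be math.inf, so the
    interval can cover unboundedly many values yet still have a
    finite description: just two numbers."""
    lo: float
    hi: float

    def contains(self, other: "Interval") -> bool:
        # Containment is only a partial order: two intervals can
        # overlap with neither containing the other, so the simple
        # total ordering of integers is gone.
        return self.lo <= other.lo and other.hi <= self.hi

    def intersects(self, other: "Interval") -> bool:
        return self.lo < other.hi and other.lo < self.hi


a = Interval(0, math.inf)  # unbounded extent, finitely described
b = Interval(5, 10)
c = Interval(8, math.inf)

print(a.contains(b))                  # True
print(b.contains(c), c.contains(b))   # False False — neither contains the other
```

The last line is the point: `b` and `c` intersect but are incomparable under containment, which is what "simple order relationships go away" means for any data structure keyed on such values.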
Functional analogues to all the classical algorithms and data structures for integer values can be built this way but usually look quite different. Some, mostly in the problem areas, are much better than the integer equivalent. For many applications it does not matter much but for generalized machine learning it matters quite a bit. You can’t get where you need to go at scale with a conventional neural network expression, deep learning or not.
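To make "analogues look quite different" concrete: an integer-keyed search tree relies on a total order, which interval keys lack. One classical workaround (this is standard interval-query machinery, offered here only as an illustration, not as Rogers' design) is to sort by left endpoint and filter by right endpoint:

```python
import bisect
import math


class IntervalStore:
    """Toy analogue of a keyed lookup structure whose keys are
    intervals (lo, hi) rather than integers. With no total order on
    the keys themselves, we sort by left endpoint and scan the
    candidates; real interval trees answer this in O(log n + k)."""

    def __init__(self, intervals):
        self.items = sorted(intervals)  # lexicographic sort by (lo, hi)

    def stab(self, x):
        """Return all stored intervals (lo, hi) with lo <= x < hi."""
        # Every interval starting after x can be skipped outright.
        i = bisect.bisect_right(self.items, (x, math.inf))
        return [(lo, hi) for lo, hi in self.items[:i] if x < hi]


store = IntervalStore([(0, 4), (2, math.inf), (5, 9)])
print(store.stab(3))   # [(0, 4), (2, inf)]
print(store.stab(7))   # [(2, inf), (5, 9)]
```

Note that a point query now returns a *set* of keys rather than at most one, another way the interval-keyed analogue differs structurally from its integer-keyed counterpart.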
Back to the main point: I am skeptical of the current "deep learning" fad because its proponents tend to make claims about what is possible that I know their computer science cannot support, even ignoring the limitations of their NN models. Not that this is unusual for any software paradigm trend du jour.
Tangentially, I bailed on general-purpose AI about a decade ago when I realized that elegant expressions, as a class, had no tractable implementation using the data structures and algorithms known at the time. Someone could design a nearly optimal AI and it would not do much of value without solving a raft of unrelated problems in computer science. That was then, though.