[FoRK] Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

J. Andrew Rogers andrew at jarbox.org
Fri Nov 14 10:22:06 PST 2014


> On Nov 14, 2014, at 3:20 AM, Stephen D. Williams <sdw at lig.net> wrote:
> 
> Since we haven't seen the architecture of Watson et al, and we haven't been able to test it yet, we can't be sure whether it can handle it or not.


I know the Watson team is aware of this theoretical limitation because they asked me if I could fix it. (I preferred to do something else.)

This, by the way, is a great litmus test for machine learning systems and their designers. You ask them whether they can express an apparently mundane type of reasoning for which they lack the requisite computer science. IBM Research passes that test, but most computer scientists do not (a pervasive issue in the field of AI).

If you recall, not three weeks ago you were using the existence of specialized neurons for spatial-like processing to argue for pervasive specialization. At the time, I pointed out that this is unsurprising because some kinds of reasoning we take for granted are not expressible without certain kinds of operators that don't work with a graph-like data representation. Same story here.

The difference is that almost all algorithms and data structures in computer science are built from graph-like representational primitives. Data structures and algorithms built from non-integer, non-graph (abstract) primitives are not something you can just look up or download, but they are necessary.
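To make the distinction concrete with a deliberately crude toy (my own 1-D illustration, with intervals standing in for anything interesting, not anything like a real implementation): in a graph-like representation the only relationships that exist are the edges someone explicitly stored, whereas with a spatial primitive such as an interval the relationship is an operator evaluated over the values themselves.

    # Toy sketch only: contrast relations stored as graph edges with a
    # relation computed by an operator over an interval representation.
    from dataclasses import dataclass

    # Graph-like: the only relations available are the edges we enumerated.
    graph_edges = {
        "a": {"b"},    # we recorded that a relates to b...
        "b": {"a"},
        "c": set(),    # ...but c's relations were never stored, so they don't exist
    }

    def related_graph(x, y):
        return y in graph_edges[x]

    # Interval-like: "overlaps" holds (or not) for every pair of values,
    # without anyone enumerating it in advance.
    @dataclass
    class Interval:
        lo: float
        hi: float

    def overlaps(p, q):
        return p.lo <= q.hi and q.lo <= p.hi

    a, b, c = Interval(0, 5), Interval(3, 9), Interval(8, 12)
    print(related_graph("a", "c"))   # False -- no edge was stored
    print(overlaps(a, c))            # False -- computed from the geometry
    print(overlaps(b, c))            # True  -- never stored anywhere

The only point of the toy is that the overlap relation is never enumerated anywhere; it falls out of the operator and the representation, which is exactly what pointer-and-edge primitives do not give you.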





