[FoRK] Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts
Stephen D. Williams
sdw at lig.net
Fri Nov 14 03:20:33 PST 2014
On 11/13/14, 5:46 PM, J. Andrew Rogers wrote:
>> On Nov 13, 2014, at 2:39 PM, Stephen D. Williams <sdw at lig.net> wrote:
>> Deep learning is not _just_ a rebranding of neural networks. That's like saying that AWS/Google+Linux+Docker+... is just a rebranding of the operating systems that we had in the 60s. Deep learning etc. is _working_ neural networks, for a wider range of applications and to a degree of efficacy that we didn't have before. We should all be chagrined that the AI Winter prevented us from discovering the tweaks that transformed almost-working neural networks into the very powerful and efficient tools that we have now. Those tweaks, like dropout, are the key to the whole thing working resiliently.
> Deep learning is obscure neural network computer science from ~20 years ago with better marketing. Being unreasonably
Changes have been made, and those changes have made the crucial difference. It's the difference between 60-70% success and 98%+ success.
Don't you agree? The current methods did not exist in complete form 20 years ago. Most of the form did exist, and the old systems were
close in many ways, but the remaining differences were hard won.
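One of those hard-won tweaks, dropout, is simple to state even though its effect was transformative. A minimal sketch in NumPy (illustrative only, not any particular library's API):

```python
import numpy as np

def dropout(activations, p=0.5, training=True, rng=np.random.default_rng(0)):
    """Inverted dropout: randomly zero units during training and rescale
    the survivors so the expected activation is unchanged at test time."""
    if not training or p == 0.0:
        return activations
    mask = rng.random(activations.shape) >= p  # keep each unit with prob 1-p
    return activations * mask / (1.0 - p)

h = np.ones(10)          # pretend hidden-layer activations
out = dropout(h, p=0.5)  # roughly half zeroed, the rest scaled to 2.0
```

The point of the rescaling is that no change is needed at inference time: with `training=False` the activations pass through untouched.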
> computationally expensive on the hardware of the day was not the only reason it was abandoned. Most people jumping onto the
Neural nets weren't abandoned because they were too expensive. It was because they were perceived to have major limitations and
limited applicability, regardless of the expense.
> fad seem to be unaware that it is the second time around. That said I am not sure what a “neural network” actually is anymore. The best deep learning systems are not biologically inspired.
The whole connectionist approach was biologically inspired originally. It was a completely different way of doing computation and
of posing problems. The fact that it pays no attention to further details of biological systems doesn't mean anything.
>> So it is pretty clear that the brain doesn't in any way propagate signals backwards to train? Is it magic?
> I can’t speak for Jordan but back prop is a pretty limited learning mechanism regardless of whether or not neurons can do it.
What kinds of limitations do you currently perceive? Are you talking about current backprop (CNN, DL, RNN) or old backprop?
> There are more productive things you can do with neurons that can transmit signals bidirectionally if learning is your goal.
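For reference, the mechanism being debated is tiny. A toy one-hidden-layer net trained by backpropagation, sketched in NumPy (illustrative only; this is the "old" plain backprop, not a modern CNN/RNN training loop):

```python
import numpy as np

# Toy network: 3 inputs -> 5 sigmoid hidden units -> 1 linear output.
# The "backwards propagation" is the error signal carried back through
# the weights by the chain rule.
rng = np.random.default_rng(1)
x = rng.random((4, 3))   # 4 examples, 3 features
y = rng.random((4, 1))   # targets
W1, W2 = rng.random((3, 5)), rng.random((5, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(x @ W1)            # forward pass
    pred = h @ W2
    err = (pred - y) / len(x)      # gradient of mean squared error
    # backward pass: err flows back through W2, then through the
    # sigmoid derivative h*(1-h), then into W1's gradient.
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * h * (1 - h))
    W2 -= 0.1 * grad_W2
    W1 -= 0.1 * grad_W1
```

Whatever neurons do, it is presumably not this literal transpose-of-the-weights pass, which is the usual biological-plausibility objection.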
>>> Michael Jordan: I think data analysis can deliver inferences at certain levels of quality. But we have to be clear about what levels of quality. We have to have error bars around all our predictions. That is something that's missing in much of the current machine learning literature.
>> Really? Half of machine learning is all about detecting fit vs. overfit vs. effectiveness. Probability reasoning, understanding and incorporating likelihood and success rates, has been the core of the best AI for the last decade.
> I think mathematics has a better handle on limits of predictability than the machine learning literature at this point. It is partly because many machine learning models are not amenable to that kind of analysis, which in my estimation is indicative of a flaw in a machine learning model.
The difference between theory and practice in practice is greater than
the difference between theory and practice in theory.
But here, the fact you can't prove that something is useful isn't proof that it isn't useful. A working system is its own proof.
The fact that someone is uncomfortable with the lack of completeness in the theory of why it works, and why it works so well, doesn't
necessarily detract from the value of it working. I'm the first to point out the problems of researchers indirectly programming
solutions (as in genetic programming) or ignoring major limitations and pitfalls.
I think your two sentences above are contradictory. You state that mathematics has a better handle, but then that its methods can't be
applied to many machine learning models, and further that this is a flaw in those models, which is a sort of ad hominem against the model.
Mathematical analysis, especially in the sense of formal proofs of correctness, completeness, or superiority, can't really be applied to
everything anyway. You can't formally compare AWS with its alternatives, for instance.
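And on Jordan's error-bars point: attaching uncertainty to a prediction doesn't require deep theory about the model. A minimal bootstrap sketch (toy data, illustrative only):

```python
import numpy as np

# Bootstrap error bars for a simple estimate (here, the mean):
# resample the data with replacement many times, recompute the estimate
# on each resample, and report the spread as a confidence interval.
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=200)  # toy observations

estimates = [rng.choice(data, size=len(data), replace=True).mean()
             for _ in range(2000)]
lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```

The same trick works for any black-box predictor you can refit, which is exactly why practitioners can report error bars without a complete theory of the model.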
>>> Michael Jordan: I am sure that Google is doing everything I would do. But I don't think Google Translate, which involves machine translation, is the only language problem. Another example of a good language problem is question answering, like "What's the second-biggest city in California that is not near a river?" If I typed that sentence into Google currently, I'm not likely to get a useful response.
>> IBM Watson? Siri? This is natural language + linked data (DBPedia etc.) with some forms of machine learning. Various systems have elements of this already.
> Nope, Jordan is correct here, though he might not know why. That question is not (efficiently) answerable with graph-like data structures e.g. Watson or linked data. Assuming otherwise is a common error; a surprising amount of money has been wasted on startups and systems where apparently no one noticed.
Since we haven't seen the architecture of Watson et al., and we haven't been able to test it yet, we can't be sure whether it can
handle such questions or not.
The linked data and similar projects weren't specifically for AI or natural language, although everyone knows that a machine-readable
and queryable form of data will be helpful. We can hand write queries that are fairly direct versions of English
sentences. We mainly need to solve natural language understanding, which we seem to be close to doing.
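To make that concrete, a hand-written query form of Jordan's example question, over a toy table (hypothetical data for illustration; only the natural-language-to-query step is missing):

```python
# "What's the second-biggest city in California that is not near a river?"
# as a structured query over made-up data -- do not trust the facts here.
cities = [
    {"name": "Los Angeles", "state": "CA", "pop": 3_900_000, "near_river": True},
    {"name": "San Diego",   "state": "CA", "pop": 1_400_000, "near_river": False},
    {"name": "San Jose",    "state": "CA", "pop": 1_000_000, "near_river": True},
    {"name": "Fresno",      "state": "CA", "pop":   540_000, "near_river": False},
    {"name": "Phoenix",     "state": "AZ", "pop": 1_600_000, "near_river": True},
]

matches = sorted((c for c in cities if c["state"] == "CA" and not c["near_river"]),
                 key=lambda c: c["pop"], reverse=True)
answer = matches[1]["name"] if len(matches) > 1 else None
print(answer)  # -> Fresno (on this toy data)
```

Each clause of the English sentence maps almost one-to-one onto a filter, a sort, and an index, which is the sense in which the query side of the problem is already easy.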