[FoRK] why we should stop using brain metaphors when we talk about computing

Stephen D. Williams sdw at lig.net
Thu Nov 13 13:34:43 PST 2014


Never mind the existence of fairly usable any-language-to-any-language translation.  Or surprisingly effective image understanding 
running in JavaScript in a browser.  Or even more resilient OCR, now including color-based recognition of signs and street 
addresses.  Or a fairly simple system that learns, undirected, to play easier video games at human levels by directly "seeing" the 
pixels.  We regularly talk to computers.  And then there are SLAM, particle filters, and self-driving and self-flying 
vehicles.  Watson.  People should understand the limitations of our current level of technology, but they should also understand 
what has been accomplished and where it is trending: lately, a lot, and in promising directions.

Misunderstanding a tool may lead to minor disasters, but we're safely past the possibility of another AI winter brought on by 
overeager cheerleading.  We're on the cusp of an avalanche of uses for already-solid machine learning and other techniques.

Additionally, we do know a lot about how human and animal brains work.  We're rapidly cycling among biological, psychological, 
and artificial neural network insights, with each informing the others.

http://deeplearning.net/tutorial/lenet.html
>
>     Motivation
>
> Convolutional Neural Networks (CNN) are biologically-inspired variants of MLPs. From Hubel and Wiesel's early work on the cat's 
> visual cortex [Hubel68] <http://deeplearning.net/tutorial/references.html#hubel68>, we know the visual cortex contains a complex 
> arrangement of cells. These cells are sensitive to small sub-regions of the visual field, called a /receptive field/. The 
> sub-regions are tiled to cover the entire visual field. These cells act as local filters over the input space and are well-suited 
> to exploit the strong spatially local correlation present in natural images.
>
> Additionally, two basic cell types have been identified: Simple cells respond maximally to specific edge-like patterns within 
> their receptive field. Complex cells have larger receptive fields and are locally invariant to the exact position of the pattern.
>
> The animal visual cortex being the most powerful visual processing system in existence, it seems natural to emulate its behavior. 
> Hence, many neurally-inspired models can be found in the literature. To name a few: the NeoCognitron [Fukushima] 
> <http://deeplearning.net/tutorial/references.html#fukushima>, HMAX [Serre07] 
> <http://deeplearning.net/tutorial/references.html#serre07> and LeNet-5 [LeCun98] 
> <http://deeplearning.net/tutorial/references.html#lecun98>, which will be the focus of this tutorial.
>
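
To make the receptive-field idea concrete, here is a minimal sketch in plain NumPy (my own illustration, not code from the tutorial): one hand-built edge filter slid across a toy image, so each output unit responds only to a small local patch of the input, much like a simple cell reacting to an edge-like pattern inside its receptive field, with a crude max-pooling step standing in for the position tolerance of a complex cell.

    import numpy as np

    def conv2d_valid(image, kernel):
        """Slide one filter over the image; each output unit sees only a small
        local patch of the input -- its 'receptive field'."""
        ih, iw = image.shape
        kh, kw = kernel.shape
        out = np.zeros((ih - kh + 1, iw - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                patch = image[y:y + kh, x:x + kw]   # local receptive field
                out[y, x] = np.sum(patch * kernel)  # local filter response
        return out

    # A toy 8x8 "image" with a vertical edge down the middle.
    image = np.zeros((8, 8))
    image[:, 4:] = 1.0

    # A hand-built vertical-edge detector, playing the role of a "simple cell".
    edge_filter = np.array([[-1.0, 0.0, 1.0],
                            [-1.0, 0.0, 1.0],
                            [-1.0, 0.0, 1.0]])

    response = conv2d_valid(image, edge_filter)
    print(response)   # strongest where the edge falls inside the receptive field

    # 2x2 max-pooling gives coarse position invariance, loosely analogous
    # to a "complex cell" with a larger receptive field.
    pooled = response.reshape(3, 2, 3, 2).max(axis=(1, 3))
    print(pooled)

In a LeNet-style network the filters are learned from the data rather than hand-built, many filters run in parallel over the image, and the convolution/pooling pair is stacked several layers deep, but the local-connectivity idea above is the core of it.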

https://en.wikipedia.org/wiki/Deep_learning

I have a lot of detailed opinions about different areas, techniques, trends, etc.  At an abstract level, I think:
   - Creating machine learning mechanisms that train and work well for certain types of input is just the start, as those techniques 
will be repeatedly applied in new and clever ways (see the sketch after this list).
   - Trying and failing to apply techniques that worked well in another case is what leads to better understanding and more refined 
techniques.
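
As a toy illustration of that first point (my own sketch, not anyone's published recipe): learn a feature extractor on one dataset, then reuse it unchanged on a different labeled task.  PCA stands in here for whatever representation-learning technique was actually trained.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Source" task: lots of unlabeled data; learn a feature extractor (PCA here).
    source = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))
    mean = source.mean(axis=0)
    _, _, vt = np.linalg.svd(source - mean, full_matrices=False)

    def extract_features(x):
        # The reusable piece: project any 20-D input into the learned 5-D space.
        return (x - mean) @ vt[:5].T

    # "Target" task: a new, smaller labeled problem; reuse the same features.
    labels = rng.integers(0, 2, size=60)
    target = rng.normal(size=(60, 20)) + labels[:, None]   # class 1 is shifted
    feats = extract_features(target)

    # Nearest-centroid classifier in the transferred feature space.
    centroids = np.array([feats[labels == c].mean(axis=0) for c in (0, 1)])
    distances = ((feats[:, None, :] - centroids) ** 2).sum(axis=2)
    pred = distances.argmin(axis=1)
    print("training accuracy on the new task:", (pred == labels).mean())

The same move, with learned convolutional features in place of the PCA step, is roughly why networks trained for one vision task keep being reused as feature extractors for others.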

The latter is what I think of as being a "scienteer": an engineer + scientist, i.e., someone who applies scientific methods to the combination 
and use of both known engineering principles and new conjectures.  I'm @scienteer.  There's art in there somewhere too, but 
"scienteertist" doesn't quite roll off the tongue.

https://www.brainyquote.com/quotes/quotes/j/jamesabal119800.html
Those who say it can't be done are usually interrupted by others doing it. - James A. Baldwin

http://impossiblehq.com/25-impossible-quotes

Stephen

On 11/13/14, 6:56 AM, Gregory Alan Bolcer wrote:
> http://spectrum.ieee.org/robotics/artificial-intelligence/machinelearning-maestro-michael-jordan-on-the-delusions-of-big-data-and-other-huge-engineering-efforts/ 
>
> The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing 
> bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a 
> fool's errand. Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when 
> Isaac Newton sat under his apple tree.



