[FoRK] Newton Re: why we should stop using brain metaphors when we talk about computing

Stephen D. Williams sdw at lig.net
Fri Nov 14 12:37:12 PST 2014


On 11/14/14, 5:48 AM, Dr. Ernie Prabhakar wrote:
>> On Nov 13, 2014, at 11:17 AM, Gregory Alan Bolcer <greg at bolcer.org> wrote:
>>
>>> what our Theory of Computation would be like if it had been invented by Isaac Newton instead of
>>> mathematicians…
>>>
>>   Buckets of water and state transition phases?
> Close: channels of water and stateful gates.  That was how a family friend at Bell Labs explained transistors to me when I was in fifth grade. :-)
>
> The deeper point is that Newton started from one concrete experience (the apple) and figured out a small set of abstract concepts that allowed him to link it to other concrete experiences (e.g., the orbit of the moon).
>
> I have become convinced that the original sin of Computer Science is starting from abstract Boolean logic (operators/math) rather than concrete physical objects (transistors/state).

Or a number of other starting points.  I'll have to think about this later, if and when I or we start seriously trying to improve 
computer science education.  Many are pursuing that right now.

>
>> On Nov 14, 2014, at 1:47 AM, Dave Long <dave.long at bluewin.ch> wrote:
>>
>>> we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
>> Speaking of Newton, I keep expecting someone like Conor McBride to come up with some kind of effective calculus-analogue for informatics.
> We’re working on that at The Swan Factory. Check back with me in a month. :-)

Sounds cool!!

>
>>   But with seven years of hindsight (in my case, at least double that for McBride), it doesn't seem that anyone has yet stumbled across a simple model which —like calculus— could replace heavy creative analysis with a bit of plug-and-chug on scratch paper.
>>
>> http://stackoverflow.com/questions/25554062/zipper-comonads-generically/25572148#25572148
>> http://www.cis.upenn.edu/~byorgey/pub/species-pearl.pdf
>>
>> (don't be put off by the large amount of FP machinery used; the underlying ideas are simple enough that one can apply them (and people have been, for ages, eg. buffer-gap editors) even in machine-sympathetic environments. The basic "Midas Touch" problem in programming is that while code can always take data to any isomorphic form (and small amounts of shimmering are harmless, if not actually useful), after composing enough of these transformations together one is no longer dealing with relatively simple atomic behaviors, but instead automata whose intermediate states lead to relative complication)
> Exactly.  The problem with the abstractions we currently use in computation is that they aren’t really composable.  This is why our theories are non-intuitive and our programs crash.
>
> Newton’s genius was that he figured out the right primitive metrics for physical systems -- space, time, and mass -- along with the right rules for combining them.  The result was a scale-independent system that could not only be applied to everything from electrons to galaxies, but could also tell you *when* and *how* to ignore the internal details so we could focus on a higher level of abstraction.

Physics is easy in that way.  How would you characterize the progression of biology?  Chemistry?
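
On Dave's zipper / buffer-gap point: the underlying idea really is small.  Here is a minimal sketch, in Haskell for concreteness -- my own illustrative code, not taken from either of the links above -- of a plain list zipper: keep the element in focus plus everything to its left and right, and every move or edit hands you back the same shape to compose with.

    -- A plain list zipper: the focus plus what lies to its left and right.
    -- The left part is stored reversed so each move is O(1), essentially
    -- the same trick a buffer-gap editor plays with text.
    data Zipper a = Zipper [a] a [a]
      deriving Show

    fromList :: [a] -> Maybe (Zipper a)
    fromList []     = Nothing
    fromList (x:xs) = Just (Zipper [] x xs)

    left, right :: Zipper a -> Maybe (Zipper a)
    left  (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    left  _                    = Nothing
    right (Zipper ls x (r:rs)) = Just (Zipper (x:ls) r rs)
    right _                    = Nothing

    -- Edit at the focus; since every step returns another Zipper,
    -- moves and edits compose without extra bookkeeping.
    modify :: (a -> a) -> Zipper a -> Zipper a
    modify f (Zipper ls x rs) = Zipper ls (f x) rs

Nothing deep, but it is the kind of plug-and-chug piece being gestured at: once the shape is right, stepping and editing are mechanical rather than creative.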

>
>>> On Nov 13, 2014, at 2:39 PM, Stephen D. Williams <sdw at lig.net> wrote:
>>>
>> We are searching for the important essence of things, in this case the fundamental useful properties of neural systems.  We may guess wrong or focus on the wrong aspects or create a model that doesn't work the same way.  We don't build bridges by imitating trees in all details.  It is not patently false that these methods were inspired by what we believed at different times about neuroscience.  It is entirely possible, and probable really, that we will use algorithms that work better for our purposes than what actually happens with neural systems.  So what?  There's no guaranteed advantage in being fundamentalist about imitating neural systems exactly.  We will also do that, but few expect the overhead of a more exact simulation to be competitive.
> I believe you — but the very fact that people here don’t see that indicates (to me) a fundamental flaw in how we’re going about it.

Hence our discussion to enlighten. ;-)

>
> The beauty of Newton’s work — which is literally the paradigm for all of physics — is that he *first* started with an explicit conceptual model, and then derived the mathematics from that.  Yes, he got it wrong (hence Einstein), but his model was precise enough we could critique it intelligently.

Starting with models didn't work for biology, chemistry, psychology, or politics for that matter.  One subject's model is another subject's fantasy.  (Well, sort of.)

>
> What worries me about computation is that we don’t have a culture of explicitly formulating and critiquing the conceptual models, so we end up critiquing specific implementations or comparing them to the good old days.  We have isolated individuals who try, but without a common meta-paradigm for what we are supposed to be doing we never get a virtuous cycle of clarification.

The details and dependencies are way too complicated.  Tiny details can change everything.  It is chaos science in some key ways.  Let it go and surf the results.  Having lived a fair spectrum of roles -- customer, designer, architect, developer, debugger, security, etc. -- I guess the fluidity doesn't bother me too much: much of the time you have to reason with partial information and new paradigms.  It is like being dropped into a sci-fi story with a new set of world rules you have to work out as you go.  While I frequently deep-dive to understand the whole stack (a character-flaw compulsion perhaps, although it has probably paid off well overall), sometimes I don't steal enough time for it.

That being said, we should try to get there.  But we shouldn't hold back people who are surfing the edge.  That would be like requiring the hottest songwriters to prove their theory and earn tenure before they could release pop songs: it would slow things down massively and likely destroy the cycle.  Little or no theory or method escapes from those who do it well, who may not even understand their own method, but we all benefit well enough, and new talent comes along frequently enough, that we get by.

>
>> On Nov 13, 2014, at 1:34 PM, Stephen D. Williams <sdw at lig.net> wrote:
>>
>>
>> https://en.wikipedia.org/wiki/Deep_learning
>>
>> I have a lot of detailed opinions about different areas, techniques, trends, etc.  At an abstract level, I think:
>>   - Creating machine learning mechanisms that train and work well for certain types of input is just the start as those techniques will be repeatedly applied in new and clever ways.
>>   - Trying and failing to apply techniques that worked well in another case is what leads to better understanding and more refined techniques.
>>
>> The latter is what I think of as being a "scienteer" - an engineer + scientist, i.e. one who applies scientific methods to the combination and use of both known engineering principles and new conjectures. I'm @scienteer.  There's art in there somewhere too, but scienteertist

Did I miss finishing that thought?  Scienteertist fails aesthetics. Scienteer it is.  The art is silent and understood.

> I applaud you for being a scienteertist.  But I’m still shocked that the field of programming has virtually nothing that I (as a physicist) would recognize as a scientific community (versus lone ‘natural philosophers’).

The natural philosophers / scienteers are usually the creators, makers, architects, those who are surfing the edge.  They usually 
know of each other and sometimes talk above the babble.

"Natural philosophers" doesn't quite say it since that implies too much tinkering and not enough building, but maybe that's just my 
impression of the phrase.

>
>> On Nov 14, 2014, at 1:47 AM, Dave Long <dave.long at bluewin.ch> wrote:
>>
>> (It's probably worth mentioning that most biomass doesn't find analytic thought, let alone intelligence, very useful.)
>>
>> -Dave
> 99% of the time it is not useful at all.  But one percent of the time it enables positive virtuous cycles of meme-creation that literally transform the world.
>
> Unfortunately, finding the right balance between analytic thought and practical experience is not solvable by one or the other. Which is perhaps why different individuals and communities idolize their particular local maxima…
>
> — Ernie P.
>
>
sdw


