[FoRK] mappings

Jeff Bone jbone at place.org
Thu Oct 22 19:25:53 PDT 2009


On Oct 22, 2009, at 8:47 PM, Jeff Bone wrote:

> ...your insistence that any of these things have any particular  
> *equivalence* is about as useful as saying that hash tables and  
> lists are "equivalent" because they are both examples of data  
> structures. ;-)

(Apologies for the abysmal formatting in the previous message;  I
plead Mail.app. ;-)

While the above is pithy and to the point, an even better statement  
might be that Stephen's conjecture is about as useful as saying that  
hash tables and lists are "equivalent" because they are both data  
structures that can be represented by s-expressions (or strings, or
whatever).

If I write the following in some Lisp-ish:

	(('a 1) ('b 2) ('c 3))

...then, while it has a single representation (given above), it has
several meanings.  First, to the reader / tokenizer, it's a string.
To the interpreter, it's an s-expression, specifically a nested-list
s-expr.  To some piece of code that implicitly understands the
programmer's intent, it's a representation of a hash table or
dictionary.  THAT'S where the semantics happens.
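
For concreteness, here's a quick Python sketch of those three levels.
The toy parser and names are mine, not any real Lisp's reader:

	# Level 1: a string.  Level 2: a parsed s-expression (nested
	# lists).  Level 3: the hash table the programmer intended.
	def parse_sexpr(text):
	    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
	    def read(pos):
	        if tokens[pos] == "(":
	            node, pos = [], pos + 1
	            while tokens[pos] != ")":
	                child, pos = read(pos)
	                node.append(child)
	            return node, pos + 1
	        tok = tokens[pos]
	        return (int(tok) if tok.lstrip("-").isdigit() else tok), pos + 1
	    return read(0)[0]

	source = "(('a 1) ('b 2) ('c 3))"   # to the reader: just a string
	sexpr = parse_sexpr(source)         # to the interpreter: nested lists
	table = dict(sexpr)                 # to knowing code: a dictionary
	print(table["'a"])                  # => 1

Nothing in the string itself says "dictionary"; that last line of
meaning lives entirely in the code that consumes it.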

Don't be fooled by the surface similarity of these various models,
e.g. ANNs vs. Bayesian networks vs. Rete networks vs. semantic
networks.  You can print them all out as graphs, and at some level
working with all of them is just graph traversal.  But what the
topology and weights of, e.g., an ANN *mean* and *do* are quite
different from what, e.g., a semantic network *means* and *does*.
And a strict ontological semantic network is *very* different in its
capabilities from a fuzzy, defeasible semantic network.
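
To make that concrete, here's a little Python sketch; the graph, the
weights, and the fuzzy "is-a" reading are all made up for
illustration.  The point is just that one structure carries two
unrelated semantics:

	import math

	# One and the same weighted digraph:
	edges = {("x", "h"): 0.8, ("y", "h"): 0.4, ("h", "z"): 0.9}

	# Reading 1: an ANN.  Weights scale activations; "traversal" is
	# a weighted sum squashed through a nonlinearity.
	def activate(inputs):
	    h = math.tanh(inputs["x"] * edges[("x", "h")] +
	                  inputs["y"] * edges[("y", "h")])
	    return math.tanh(h * edges[("h", "z")])

	# Reading 2: a fuzzy semantic network.  The same numbers are now
	# degrees of belief in "is-a" links; "traversal" is inference,
	# multiplying belief along a path.
	def isa(a, b):
	    if (a, b) in edges:
	        return edges[(a, b)]
	    return max((w * isa(m, b)
	                for (s, m), w in edges.items() if s == a),
	               default=0.0)

	print(activate({"x": 1.0, "y": 1.0}))  # a signal: the net's output
	print(isa("x", "z"))                   # a belief: how much x is-a z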

And indeed, even among ANNs there are some very significant
differences between what happens with, and what can be done with,
different topologies.  The simplest example --- and the genesis of
our first conversation about this --- is that a single-layer neural
network can have no "statefulness."  It can learn, for example, the
relationships between all of the inputs it receives simultaneously,
and can classify each such set of inputs into some set of output
classes over any number of input examples;  what it *is structurally
incapable of doing* is learning the relationships, if any, between
*sequences of sets of inputs*.

To prove this to yourself, wire up a single-layer / non-recurrent ANN
and feed it some input data in some particular order.  Then, take the
same set-of-sets-of-inputs and feed them to another single-layer
non-recurrent ANN *in random order.*  Given the same training data
(and batch-style updates, where each weight update sums over all
examples), you will end up with identical networks (identical sets of
weights) regardless of the order in which the sets of inputs are
presented --- the order of *sets of inputs* doesn't matter because
the single-layer non-recurrent network has no "memory" or
"perception" or "state."  It doesn't "remember" the order in which
the training examples occurred;  it just learned how to fit a
separating hyperplane between the various target classes.
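
Here's that experiment in runnable form --- a sketch, with the one
assumption above made explicit: full-batch gradient updates, so each
update is a sum over all examples and hence blind to their order:

	import numpy as np

	def train_single_layer(X, y, epochs=500, lr=0.1):
	    # One weight vector plus a bias, logistic output:
	    # single-layer, non-recurrent, no state between examples.
	    w, b = np.zeros(X.shape[1]), 0.0
	    for _ in range(epochs):
	        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
	        w -= lr * X.T @ (p - y) / len(y)  # gradient sums over rows,
	        b -= lr * np.mean(p - y)          # so row order can't matter
	    return w, b

	rng = np.random.default_rng(0)
	X = rng.normal(size=(100, 3))
	y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(float)

	perm = rng.permutation(len(y))            # same examples, shuffled
	w1, b1 = train_single_layer(X, y)
	w2, b2 = train_single_layer(X[perm], y[perm])
	print(np.allclose(w1, w2) and np.isclose(b1, b2))  # True

The shuffled network and the unshuffled one converge to the same
weights; the sequence information was never visible to it in the
first place.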

Point being, this is just one example --- but there's a gestalt  
lurking here.  Don't be too quick to see patterns where there aren't  
any.  Your own internal classifier may not be looking at a large  
enough volume of the feature space in the potential training set. ;-)

$0.02,


jb


