sdw at lig.net
Sat Oct 24 17:45:13 PDT 2009
Jeff Bone wrote:
> Further clarification for Stephen...
>> With a rich version of neural nets, I think you start getting close
>> to the kind of structure that is equivalent to the results of
>> automatic training of Markov / Bayesian networks
> So to some extent we're talking past each other, largely because the
> terminological rigor in the field in general has become so sloppy that
> anything vaguely connectionist-looking is called a "neural network."
> So fair enough on that point.
Agreed, a friend who has spent 9+ years working on AI/AGI doesn't
consider her connectionist work to be a NN at all...
I think that all kinds of structures arise in real neural networks that
we can't detect well yet, so the fact that our first imitations were
primitive and overly simple doesn't invalidate ongoing use of the term
for me. I suppose I would use "connectionist" when doing something
radically different from what a NN could do. Anyway, "connectionist" is
a reasonable general term for the whole area.
> Back to a couple of your specific examples: there are even
> substantial differences between ANNs and e.g. Bayesian networks. The
> former, as mentioned, can *only* learn a classification on-line and,
> unless recurrent, can learn "moment in time" snapshot-patterns but not
> patterns about things that develop over time. (Don't get hung up
> about the recurrency issue; even recurrent ANNs with the typical
> model and learning algorithms aren't that powerful. I'm not talking
> about the XOR problem, and no, Minsky and Papert weren't "discredited"
> in this; their analysis was sound, it was just over-interpreted by
> them and everyone else.)
Which means that their interpretation was discredited. Same gist.
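To make the Minsky/Papert point concrete, here is a minimal sketch in
pure Python (the weights are hand-picked for illustration, not learned):
XOR is not linearly separable, so no single threshold unit can compute
it, but one hidden layer suffices.

```python
# A two-layer threshold network that computes XOR with hand-picked
# weights -- something a single linear unit provably cannot do.
# Weights here are illustrative assumptions, not trained values.

def step(x):
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: h1 fires on "a OR b", h2 fires on "a AND b".
    h1 = step(a + b - 0.5)
    h2 = step(a + b - 1.5)
    # Output: OR but not AND, i.e. XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

Running it prints the full XOR truth table (0 0 -> 0, 0 1 -> 1,
1 0 -> 1, 1 1 -> 0).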
> Your further example in your equivalenc^H^H^H^H "similarity" class is
> Markov models. Markov models can be understood as a weaker variant
> encoding of the kinds of conditional dependencies that you see in a
> full Bayesian network. There's *actually* a pretty good
> correspondence between the two, though the Bayesian networks and
> learning algorithms over them are more abstract. (Consider whether a
> Bayesian network can "learn" a Markov model. Then consider the
> converse. Consider in the context of generalization over unseen data.)
A Bayesian network can be converted to a Markov network, and vice
versa. The graph changes in certain cases, but the resulting
representational capability is the same, or nearly so, in most cases.
Just the first paper found to support this:
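The direction from Bayesian network to Markov network is the standard
"moralization" construction; a minimal sketch, with a hypothetical
example DAG:

```python
# Bayesian-net -> Markov-net ("moralization"): marry each node's
# parents, then drop edge directions. The example DAG is hypothetical.

def moralize(parents):
    """parents: dict mapping node -> list of parent nodes (a DAG).
    Returns the moral graph as a set of undirected edges."""
    edges = set()
    for child, ps in parents.items():
        # Keep each directed edge, now undirected.
        for p in ps:
            edges.add(frozenset((p, child)))
        # "Marry" co-parents: connect every pair of parents of child.
        for i, p in enumerate(ps):
            for q in ps[i + 1:]:
                edges.add(frozenset((p, q)))
    return edges

# Classic v-structure a -> c <- b: moralization adds the a--b edge.
# This is exactly where the graph changes (and where some marginal
# independence information is lost in the undirected form).
print(sorted(tuple(sorted(e)) for e in moralize({"c": ["a", "b"]})))
```

The added a--b edge illustrates the "graph changes in certain cases"
caveat above: the undirected graph can no longer express that a and b
are marginally independent.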
> That said, it's all about what you're actually attempting to do.
> Surprisingly many real-world phenomena can be well-understood (at
> least to the level of decent prediction) without even directly
> modeling any sort of conditional dependency, logical entailment, etc.
> HTMs are just a biologically-inspired, turbocharged BN at some level.
> They employ an online-learning algorithm, some more complex and
> layered topology, and --- critically --- an a priori semantics imposed
> implicitly by the learning method, one which considers spatial and
> temporal aspects of its inputs per se. That's a reasonable thing to
> do from a biological metaphor perspective: it's like differentiating
> inputs and regions of the network based on which sense is providing
> the data, which of course the human neocortex and support systems
> *do.* But that's a far cry from what you see in the usual ANN or BN,
> so I would say that an HTM is a *highly advanced* form of BN, almost
> to the point of no longer really being a BN. (You certainly could use
> an HTM where a BN would work, but why would you want to? But there
> are many things for which an HTM may be suited that would be entirely
> unsuitable applications for a BN.)
> Small nit:
>> PGMs can embody semantics, including formal, mathematical systems,
>> but with full probability partial knowledge implication.
> If you've got semantics, then your system isn't "formal."
> Definitionally. ;-)
> (But yes, I understand what you're attempting to say here.)
> Semantics is the study of meaning, usually in language.
I know that "semantics" is used in several ways; however, this seems
consistent with "semantic technology" a la RDF, i.e. representation of
knowledge rather than disconnected data. A PGM explicitly encodes:
"given a, b, and c, we know that d's value 1 is x% likely, value 2 is
y% likely, and so on."
> ... a formal system ... consists of a formal language together with
> a deductive system ... which consists of a set of inference rules
> and/or axioms.
A Bayesian/Markov PGM seems to fit that as well, since any logical
relation, including XOR, can be represented by the conditional
probability models. The deductive system is the reasoning algorithm
that computes the remaining conditional probabilities given certain
knowledge.
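As a minimal sketch of that claim: XOR encoded as a (degenerate,
0/1-probability) conditional table in a tiny Bayesian model, with the
"reasoning algorithm" being brute-force enumeration. The uniform
priors are a hypothetical choice for the example.

```python
# XOR as a conditional probability table; inference by enumeration.
# Priors on a and b are assumed uniform for illustration.

from itertools import product

p_a = {0: 0.5, 1: 0.5}  # prior on a
p_b = {0: 0.5, 1: 0.5}  # prior on b

def p_d(d, a, b):
    """P(d | a, b): a degenerate (0/1) table encoding d = a XOR b."""
    return 1.0 if d == (a ^ b) else 0.0

def posterior_a(d_obs):
    """P(a | d = d_obs), computed by summing out b."""
    weights = {0: 0.0, 1: 0.0}
    for a, b in product((0, 1), repeat=2):
        weights[a] += p_a[a] * p_b[b] * p_d(d_obs, a, b)
    z = sum(weights.values())
    return {a: w / z for a, w in weights.items()}

print(posterior_a(1))  # -> {0: 0.5, 1: 0.5}: d alone doesn't pin down a
```

Observing d = 1 leaves a uniform posterior on a, as it should: under
XOR, knowing the output alone tells you nothing about one input without
the other.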
A PGM is usually not used as a formal logic system in the full sense,
although it should be capable of that.
But did you mean something more fundamental? Formal systems can't have
or map to meaning?
> FoRK mailing list