> To me it seems a shame to see the noise level on this list creeping up
> with this stuff.
Sorry to have been the cause of some of it. Here's the definitive reply :-),
centered around followups to Ron's comments, interlacing others.
> Doug Lea wrote:
> >I suspect that `object' is the
> >term Turing or Von Neumann would have used instead of TM or automaton
> >if they had had the opportunity to build interesting software. The
> >defining characteristics of objects are just the same as the defining
> >characteristics of machines. Except a little broader in that
> >degenerate cases also count as objects yet not machines.
> This is a rather intriguing, and I think you'd agree - uncommon in
> traditional OO circles- way of looking at things.
But not, I think, among people working at the systems level dealing
with distributed, concurrent, persistent, etc objects.
My offhand diagnosis is that traditional OOA/OOD/OOP accounts don't
often deal with reflection. The resulting `naive OOP realism', where
the only definitions of objects look like those at the OOP level, is
probably conceptually easier for most people than `reflective
object multilevelism', but doesn't help them build useful systems these
days. (`Naive OOP realism'? `Reflective object multilevelism'? Yuck!
There's gotta be better terms.)
This is not so much about computational reflection (as in meta-object
protocols), but consciously reflective design, in which you realize
that you are programming one object to simulate and/or manipulate
others. Reflection is intrinsic to the notions of firing up
interpreters (as in threads), creating, sending, and reconstructing
copies of objects (as in serialization, mobility, etc),
interpositioning protocol handlers (as in reliable distributed object
systems), building three-tier systems (where object states are kept in
databases, explicitly manipulated by middle-tier code) and so on. In
other words, you just cannot escape it at the OO plumbing level. And
the resulting issues increasingly pervade application-level OOP, which
really puts a strain on naive OOP realism.
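For instance, interposing a handler can be sketched with Java's
dynamic proxies. This is a minimal sketch; the `Greeter` interface and
the logging handler are invented here for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }

// An interposed handler: one object (the proxy) reflectively
// manipulates another (the target) -- the everyday sense of
// reflective design described above.
class LoggingHandler implements InvocationHandler {
    private final Object target;
    LoggingHandler(Object target) { this.target = target; }
    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        System.out.println("calling " + m.getName()); // could log, retry, forward remotely, ...
        return m.invoke(target, args);                // then delegate to the real object
    }
}

class InterpositionDemo {
    static Greeter wrap(Greeter g) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new LoggingHandler(g));
    }
}
```

The caller never knows whether it holds the real object or a proxy,
which is exactly what reliable distributed object systems exploit.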
I'm not sure why so few people write about the underlying concepts and
models here. For me, it's just the (usually implicit) consensus view
in systems-level OO work. And I don't think this view conflicts with
naive OOP realism, it just extends it by incorporating uncontroversial
observations about properties of Universal Turing Machines and the
like. The main novel aspects compared to more classical work stem
from exploitation of the fact that objects at `lower' levels normally
have more interesting behavior than state reads and writes, which
makes embedding and coordination more challenging.
> Now, of course, I've seen the light... ha... actually, I accept
> objects as being pretty nice, especially in a distributed setting.
> Still, it surprises me like not-at-all that so much (current) research
> focusses on the need to break encapsulation (or otherwise get around
> the problems of encapsulation) to gain performance.
Yes. Nice example. Reflective approaches provide a foothold for
dealing with such things in at least slightly more principled ways. In
fact, the very notion of `breaking encapsulation' is another one of
those intrinsically reflective concepts. See especially Gregor
Kiczales' work on open implementations and aspect-oriented programming.
> Some people view "objects" as being not much different than data
> modeling or knowledge representation. Maybe that is related to this
Yes. Data-model objects are the most degenerate kinds of objects.
They scarcely seem like machines at all. Which leads people into
dumb-data + smart-code approaches, which historically haven't often
been a big win. But with several notable exceptions -- sometimes it
does pay off to treat data-model objects as glorified memory cells.
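To make the contrast concrete, here is a tiny sketch with invented
class names: a record-like object that is only read and written,
versus a machine-like object that enforces its own behavior:

```java
// "Glorified memory cell": data only, acted upon from outside.
class TankRecord {
    double level;
    double capacity;
}

// Machine-like: state and behavior together; the object itself
// enforces the rule that it cannot be filled past capacity.
class WaterTank {
    private double level;
    private final double capacity;
    WaterTank(double capacity) { this.capacity = capacity; }
    double add(double amount) {              // returns overflow, if any
        double space = capacity - level;
        double accepted = Math.min(amount, space);
        level += accepted;
        return amount - accepted;
    }
    double level() { return level; }
}
```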
> In fact, I recall that you made similar
> statements in CPJ (Concurrent Programming in Java), about 2/3
> of the way through - I don't have the book here in Brazil for
> an exact page reference. The discussion there dealt with looking at
> objects as "more like machines" (active, with process&data) or
> as "more like memory" (passive data, acted upon).
> (BTW - I found it rather strange that such statements would come
> buried into later chapters - I think that material could be better
> put up front, with more beefing up.)
Yes. I usually cover this first when I give talks and tutorials based
on CPJ. I'm planning to rework it to appear earlier in the second
edition, including a section based loosely on my `WaterTank' post to
dist-obj a few months back. (I'm still not sure how to pull this off
without making the second edition have an even scarier academic tone
than the first, though. I think this stuff has to be made
approachable, even ordinary-sounding. Otherwise it is hard for people
(well, at least for me) to think about what they are doing when they
write concurrent code.)
> In both the book,
> and the comments you've just posted, I felt myself willing to be
> persuaded by this argumentation,
> but still not entirely so - I'd like more clarification if possible.
> For example, software objects - the kinds we normally discuss -
> have hairy issues of identity. Because it is so easy to replicate
> bits, you can copy objects effortlessly - often, far too effortlessly.
> That leads to all the problems of object identity, equality,
> depth of copying, etc.
Yes. Identity especially is tricky to deal with in multilayered
systems. Even easy cases are not so easy. Consider what it takes to
remove dependence on identities of memory cells via hardware-assisted
virtual memory systems and the like.
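A small Java sketch of the distinctions in play (the `Point` class is
invented for illustration): `==` compares identities, `equals`
compares states, and `clone` manufactures a new identity carrying a
copy of the state:

```java
class Point implements Cloneable {
    int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // State-based equality: two Points are "equal" if their
    // coordinates match, regardless of identity.
    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }
    @Override public int hashCode() { return 31 * x + y; }

    // Copying: a new object with the same state but a new identity.
    @Override public Point clone() {
        try { return (Point) super.clone(); }
        catch (CloneNotSupportedException e) { throw new AssertionError(e); }
    }
}
```

The trouble starts when fields themselves refer to other objects:
should a copy share them (shallow) or copy them too (deep)? The right
answer depends on the role the object plays, which is why no single
rule works.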
> When I use the term "machine", even "abstract machine", as in
> a Turing machine, issues of copying and replication and identity
> don't seem to crop up... or do they?
Check out Von Neumann's writings on cellular automata, which can in a
few places be retrospectively interpreted to be about some of these
things. As I said, I think the main reason these kinds of issues were
not dealt with in the early foundational work on computation is that
no one had a chance to build interesting software systems in which
they turn out to play such prominent roles. And the fact that they
were not initially addressed sent programming off in very different
directions.
> For another thing, just looking at them as English words,
> "object" to me seems a very generalist word. I've often done
> mental substitutions of the word "thing" when I see the word "object",
> just to reinforce to myself the notion that an object, physical or
> virtual, can be just about anything. In fact, I believe that it's
> because the word "object" is coopted so strongly by the OO
> camp, who in their hijacking created the rule (according to
> Booch, at least), that a language must support inheritance to be
> an OOPL, that then caused everyone to go out and start
> questing for "components", just to have another general term
> where inheritance was optional.
Yes. More evidence of the rift between naive OOP realism vs
systems-oriented approaches. It has existed for a while. For example
in 1991(?), there was an OOPSLA panel on this topic: why people
studying concurrent objects thought that inheritance was inessential,
and even a nuisance; whereas people doing traditional OOA/D/P all
thought it was definitional to OO. (The panel was not very
informative; I wouldn't bother looking for proceedings.) Much of this
was a reaction to papers on `the inheritance anomaly', a catch-all
term describing common subclassing problems that arise more often in
concurrent than sequential programming. (See CPJ for a brief summary
of the issues. I remain a middle-of-the-roader about it. I think that
OOP-style inheritance is great when it happens to apply.)
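A minimal sketch of the kind of problem those papers describe, with
invented classes: a subclass that adds a new synchronization condition
usually cannot just call the inherited method, because the guard logic
is buried inside it:

```java
import java.util.LinkedList;
import java.util.Queue;

class Buffer {
    protected final Queue<Object> items = new LinkedList<>();
    public synchronized void put(Object x) {
        items.add(x);
        notifyAll();                          // wake any waiting takers
    }
    public synchronized Object take() throws InterruptedException {
        while (items.isEmpty()) wait();       // guard: at least one item
        return items.remove();
    }
}

// Wants take() to block until at least TWO items are present. It
// cannot extend super.take(): the old guard is hard-wired inside the
// parent's method body, so the logic must be re-coded from scratch.
class PairBuffer extends Buffer {
    @Override
    public synchronized Object take() throws InterruptedException {
        while (items.size() < 2) wait();      // new guard, rewritten wholesale
        return items.remove();
    }
}
```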
> A definitive strength of OO is the ability to deal with procedural
> structures (whether the chain of dependency is, in fact, procedural
> like in Java, C, etc. or whether the chain is functional or even
> logical) as if they were more like the "things" we deal with via
> our sensory/motor skills. But, this strength backfires when the
> "things" being dealt with need to be divisible. And there are
> two facets to this objection: 1) objects are not themselves
> arbitrarily divisible (i.e. the movement from a coarse system to
> a fine system requires a complete redesign of the system). And
Mostly off the subject, but this is one of the main attractions of
group/channel-based approaches. There are a bunch of postings on this
in dist-obj archives. (Sorry for being too lazy to look them up and
cite urls :-)
> - the Theory view -- there is no denotational semantics for them (not
> much by way of operational semantics either).
> FWIW a lot of compiler writers use the lambda calculus, esp. in the
> functional programming community.
Right. Yet more legacies from our common heritage. Even if they are
equivalent at some level, the machine-view vs the function-view of
computation led in vastly different directions. Some of these
differences stem from viewing computation as the control of physical
processes versus viewing computation as the realization of
mathematical abstractions. For a different story about it, check out
slides from Luca Cardelli's `Everything is an Object' talk at
> ... decomposition and encapsulation having existed before
> objects were en vogue ...
And before there was even such a thing as software. There are only a
handful of Big Ideas in engineering of any kind, mainly surrounding
decomposition and abstraction. (Not to keep harping on it, but one
additional Big Idea in software is reflection. See my essay on
Christopher Alexander (http://gee.cs.oswego.edu/dl/ca/ca/ca.html) for
a few flakey remarks about this (near the end of the paper).)
> I say that objects do not necessarily guarantee a good design because of
> an experience I had managing a student term project here at Caltech in
> which the students didn't understand good design -- instead of writing
> their program in 21 modular units, they wrote it as a single monolithic
> 12,000+ line file of pseudo-Java which they then ran through a
> preprocessor that they custom-wrote to take their pseudo-Java, convert
> it to real Java, and in the process break it up into 21 files each
> containing the appropriate objects which could then be compiled by the
> Java compiler.
Imagine trying to build a bridge, a house, an airplane, a CPU chip,
etc using such an approach. Sometimes I think the worst thing about
programming is that poorly engineered code does not irrevocably fall
into a big pile of rubble. There is always some way to patch it up
just enough to make it look like it works.
> Hype... Mindshare... a few billion bucks ... churn ...
Sigh. In particular:
> Well I just finished chapter one in "The Essential Distributed Object
> Survival Guide", this is what I was opposed to...
> I perceive an attitude from some members of the OO and dist-obj camp
> that their methodologies are going to replace EVERYTHING and that book
> isn't helping any. Well, they are just wrong.
Right. This is a sorry excuse for `methodology'. Compare the use of
the term in other engineering disciplines. I wish people would start
growing up about it.
> >2. How it can be that XML and other non-computationally-complete but
> > useful languages somehow serve as alternatives to objects.
> I don't think they do.
Thanks! I think I'm just barely beginning to get it.
Your answer also made me appreciate some of the replies to the first
part of my post: Being a systems-OO insider, I can only cringe at
what people must think after reading breathless superficial accounts
of it. But being an XML-outsider, I have mostly read breathless
superficial accounts of it, not ever appreciating the engineering
compromises that led to it.
As I've said a bunch of times about Java: It's not particularly good
in any absolute sense, but it is good enough for people to build
better software than they used to. XML might similarly be good enough
to attach necessary semantics, protocols, and pragmatics to
objects/components. If so, we ought to be happy.