From: S. Mike Dierken (firstname.lastname@example.org)
Date: Mon Sep 18 2000 - 15:36:38 PDT
> The conventional wisdom is that you do all your work internally,
> then always
> convert to XML and then convert that to appropriate HTML, XHTML, WML, etc.
> for your client.
You don't have to convert to XML then convert to HTML, etc.
You only need a transformable information system. There are lots of them -
SQL is popular, J2EE sounds nice.
The transformation can be declarative, procedural, whatever.
The temptation is that once you have a mass of data, wouldn't it be nice to
re-use and integrate that data in new ways? Integrating different systems -
like trying to talk to J2EE systems from code that is used to making SQL
calls - can be difficult. It's tempting to put an abstraction layer above
these systems so access and re-use can be simplified. (but are you
integrating data or behavior... hmmm... or just the presentation?)
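The abstraction-layer idea can be sketched in a few lines. This is a toy illustration, not anything from the systems discussed; all the class and field names are made up. Each back end, whatever its native API, is wrapped to expose one uniform read interface, so callers integrate data without caring whether it came from SQL rows or live objects.

```python
class RecordSource:
    """Hypothetical uniform read interface over heterogeneous back ends."""
    def records(self):
        raise NotImplementedError

class SqlRows(RecordSource):
    def __init__(self, rows):
        self._rows = rows              # tuples, as a SQL driver might return them
    def records(self):
        return [dict(zip(("id", "name"), r)) for r in self._rows]

class ObjectGraph(RecordSource):
    def __init__(self, objs):
        self._objs = objs              # live objects from, say, a J2EE tier
    def records(self):
        return [{"id": o.id, "name": o.name} for o in self._objs]

def merge(*sources):
    # The integration point: callers see only dicts, never the native APIs.
    out = []
    for s in sources:
        out.extend(s.records())
    return out
```

Note this only integrates data; any behavior attached to the back-end objects is lost in the wrapping, which is exactly the data-versus-behavior question above.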
> At the core of this is the idea that users will visit the same site using
> their Web TV, their Palm, their Linux-powered wrist watch or the tactile
> response browser embedded in their bridge work. And that for any of these,
> there is a specialized markup language (or a specialization of an existing
> markup language) that the XML can be transformed to.
Multi-device isn't necessarily at the core of the XML->otherML approach.
Getting to the data by non-browser applications also has a lot of value.
> 2) XSL transformation. O(n)? O(n ^ 2)? O(n!)? This stuff
> might work fine on the bench, but a bad rule that leads to O(n ^ 2) or
> worse is going to
> bring your application server to its knees. I haven't done much with XSL,
> but it looks like a declarative scripting language; a rules engine for
> transforming the DOM tree. Still, any language needs a debugger
> ... have one handy for XSL? Didn't think so.
Yep. I agree.
> There's also some issues with transformation and memory usage ...
> is keeping a gigantic tree of temporary objects (with a life span of a
> request cycle) practical?
This is a matter of the implementation of the data source provider (DOM). At
DataChannel, we implemented a virtual DOM which only loaded data that was
touched during the transformation. It turned out to look a lot like
ODBC/JDBC, but with queries that can talk about hierarchies.
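The load-only-what-is-touched idea can be sketched in a few lines. This is a toy illustration, not DataChannel's implementation; the names are invented. Child nodes are fetched from the underlying store only when the traversal actually touches them, so an untouched subtree costs nothing.

```python
class LazyNode:
    """A tree node that defers loading its children until first access."""
    def __init__(self, name, fetch_children):
        self.name = name
        self._fetch = fetch_children   # callable: node name -> list of child names
        self._children = None          # None means "not loaded yet"

    @property
    def children(self):
        if self._children is None:     # load on first touch only
            self._children = [LazyNode(n, self._fetch)
                              for n in self._fetch(self.name)]
        return self._children

# A fake backing store that records which nodes were actually loaded.
loads = []
def fetch(name):
    loads.append(name)
    return {"root": ["a", "b"], "a": ["a1"]}.get(name, [])

root = LazyNode("root", fetch)
```

Nothing is fetched at construction; only walking into a subtree triggers a load, which is why a transformation that touches a fraction of a huge document stays cheap.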
> 5) Re-use. Is this the best approach for re-use? Isn't the
> whole argument
> based on the idea of doing *way* too much in the presentation layer? J2EE
> and/or Jini provides all the mechanisms for good back ends, where the data
> comes from real objects and the presentation layer is thin, but critical.
Yes, you probably are right that good back ends are more appropriate. Not
all good back-ends have the characteristics that standards-based
data-integration fiends want: read/write, address/reference, discovery,
query, transform. For example, there is no standard way to get a list of
databases in an RDBMS, or reference a particular table in a particular
database in a particular RDBMS on a particular machine (well, maybe there is
a JDBC style URL...)
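To make the addressing point concrete: there is no standard scheme for naming a table across machines, but if one existed it might look like the sketch below. The `dbtable:` scheme and its path layout are invented purely for illustration.

```python
from urllib.parse import urlsplit

def parse_table_ref(url):
    """Split a hypothetical dbtable://host/database/table address
    into its hierarchical components."""
    parts = urlsplit(url)
    database, table = parts.path.strip("/").split("/")
    return {"host": parts.netloc, "database": database, "table": table}
```

The point is that one opaque string would let any client reference a specific table on a specific machine, the way a URL references a specific page.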
> 6) Finesse. Users demand a certain level of finesse for their
> ... on the desktop, they all but demand involved interfaces, super-charged
> rules ... think out of the box ... do something weird that isn't going to
> fit into a normal XSL rule. How complicated are those XSL rules going to
> become ... and how far will they stretch before breaking?
Yep. I agree.
> Re-use the back end objects, use subclassing and
> aggregation to create variations of the controllers and components in the
> presentation layer...
How would a different machine, using a different language, get access to read
information from the back-end objects? Without strongly typed network-based
distributed computing, it might be that you need a passive data exchange
format.
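The passive-exchange alternative can be sketched in a few lines: instead of exposing typed remote objects, publish their state as plain XML that any language on any machine can parse. The element names below are made up for illustration.

```python
import xml.etree.ElementTree as ET

def to_xml(order):
    """Render a dict describing an order as a language-neutral XML document."""
    root = ET.Element("order", id=str(order["id"]))
    for item in order["items"]:
        ET.SubElement(root, "item", sku=item)
    return ET.tostring(root, encoding="unicode")
```

The consumer needs nothing but an XML parser; no shared type system, no distributed-object runtime, just data on the wire.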
This archive was generated by hypermail 2b29 : Mon Sep 18 2000 - 15:41:16 PDT