I think "antiFUD" is too innocuous a term. How about "whipping up
enthusiasm while building memetically unsound conceptual models"?
> ..... While HTML knows a great deal about words, it knows
> nothing at all about information.
> .... XML, on
> the other hand, describes data, not pages.
> The power of XML, then, is that it makes applications aware of what
> they are about. An XML search engine, for example, wouldn't have to
> drag back all the text and analyze it for content. It would just
> send out a message saying "All pages that are about fly fishing,
> please identify yourselves!" And they would.
> XML makes web content intelligent. ...
> Once your spreadsheet talks XML, it can link across the
> Net into other spreadsheets and into server-based applications that
> offer even greater power.
> That's at the heart of Microsoft's .NET (dot-NET) initiative, which
> puts little XML stub applications on your PC that don't actually do
> much until they are linked to the big XML servers Microsoft will be
> running over the Internet. All your office applications become
> XML-aware, which means you can do powerful things on tiny computers
> as long as you continue to pay rent to Microsoft. The effect of
> dot-NET is cooperative computing, but the real intent is to smooth
> Microsoft's cash flow and make it more deterministic.
At least he got that part right.
> XML changes all that by introducing the concept of metadata -- data
> about data. In XML, each piece of data not only includes the data
> itself, but also a description of the data, what it means. Now your
> XML database can have a list of names (that's the data) and a tag on
> the data saying that these are customer names (that's the metadata).
OK, let's conveniently ignore the metadata tags HTML already has -- the
keyword <meta> tags, etc. -- and pretend it's all new & shiny.
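The "data about data" idea in the quote really is this small -- the tag name says what the value is, and any program can read the label back out. A minimal sketch (the element names and customer data here are invented for illustration, not from Cringely's column):

```python
# Toy XML "database": the metadata is just the element name.
# All names/values below are hypothetical examples.
import xml.etree.ElementTree as ET

doc = """<customers>
  <customer-name>Jane Angler</customer-name>
  <customer-name>Rex Trout</customer-name>
</customers>"""

root = ET.fromstring(doc)
print(root.tag)                                         # what the list is about
print([e.text for e in root.findall("customer-name")])  # the data itself
```

An HTML page could carry much the same hint in a `<meta name="keywords" ...>` tag, which is exactly the point about this not being new.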
> Should some Y2K-like catastrophe afflict our XML
> database, it would be easy for any programmer to look at the
> metadata to reconstruct the database program. In fact the metadata
> is the program, which is how those fly fishing pages were able to
> announce themselves in an earlier example.
Earth to Cringely-- and just HOW did those fly fishing pages "announce
themselves"? Could it be that there was some kind of SERVER listening
for an XML request, and that it was acting as an intermediary? Was it
doing a brute-force search of all pages locally to find those with
XML data tags that matched the query from your magic spreadsheet? Or
was there perhaps a friendly middleware layer? I prefer the quaint vision
of little web pages jostling around port 80, hopeful and haunted, denied
access to the great DataPortal, watching the incoming packets
with those big velvet-painting eyes, waiting for an XML request that
calls their name. Ah, the joyous scramble! "Who's got a SYN packet?
Now, now, no shoving, the query wanted you ALL to visit!"
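Strip away the velvet-painting eyes and "pages announcing themselves" has to mean something like the following: a server (or middleware layer) holds the pages, reads their tags, and does the matching itself. A sketch under that assumption -- the page paths and topics are made up:

```python
# What "announcing themselves" actually requires: a server-side scan
# (or precomputed index) over stored metadata. Everything here is a
# hypothetical illustration.
import xml.etree.ElementTree as ET

pages = {
    "/trout-tips.xml":  '<page topic="fly fishing">...</page>',
    "/casting-101.xml": '<page topic="fly fishing">...</page>',
    "/knitting.xml":    '<page topic="knitting">...</page>',
}

def answer_query(topic):
    """Brute-force scan of every page's metadata -- no jostling at
    port 80. A real system would precompute an index instead."""
    return sorted(url for url, doc in pages.items()
                  if ET.fromstring(doc).get("topic") == topic)

print(answer_query("fly fishing"))
```

The pages never identified themselves; some intermediary read their tags and answered on their behalf, which is the whole objection.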
> Once we embrace XML, and nearly the entire computer industry already
> has, then wondrous things begin to happen. Airline ticket databases
> suddenly are aware that's what they are. So within the constraints
> of a vocabulary limited to words like "passenger" and "seat number,"
> finding the cheapest way from here to there becomes a matter of just
> asking. The query -- the question you are trying to answer by
> analyzing data -- becomes the database, itself.
When I think about all the work that's gone into Z39.50, and is largely
ignored by all these industries, I find it harder to get jazzed up about
XML. Anyone else remember how Z39.50 was going to revolutionize search
technology, since you'd be able to specify if the Gauguin you were looking
for was a painting, a biography, a news article, original writing, etc?
The problem with all these magic solutions is that to make the web
itself a pseudo-expert system, somebody has to sit down and do a lot
of labelling, typing, and coding. Who's going to sign up to label all
those fly-fishing pages that are already out there? Maybe they'll just
use a script that converts all the current keywords into XML tags.
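That conversion script would be trivial to write, which is part of the problem -- it just relabels the same untrusted keywords. A toy sketch (the output tag names are invented for illustration):

```python
# Toy keyword-conversion script: pull <meta name="keywords"> out of a
# legacy HTML page and emit XML <topic> tags from it. Tag names are
# hypothetical; no standard vocabulary is assumed.
from html.parser import HTMLParser
import xml.etree.ElementTree as ET

class MetaKeywords(HTMLParser):
    def __init__(self):
        super().__init__()
        self.keywords = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "keywords":
            self.keywords += [k.strip() for k in a.get("content", "").split(",")
                              if k.strip()]

def keywords_to_xml(html_page):
    parser = MetaKeywords()
    parser.feed(html_page)
    root = ET.Element("page")
    for kw in parser.keywords:
        ET.SubElement(root, "topic").text = kw
    return ET.tostring(root, encoding="unicode")

legacy = '<html><head><meta name="keywords" content="fly fishing,boat trips"></head></html>'
print(keywords_to_xml(legacy))
```

Note that nothing in the conversion makes the labels any more trustworthy than the keywords were, which matters for the spam point below.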
Which reminds me-- exactly why won't people who are selling fishing rods,
or boat trips, or outdoor gear, or anything else at all, be labelling
their pages as "about fly fishing" so that Cringely's magic spreadsheet
will load them? He's got to either trust the tags or parse the body,
so he's not home free. XML spamming will be HUGE, and thanks to the
accelerating arms race between spammers/index-poisoners and everyone
else, we won't get a long period of uninterrupted enjoyment of XML-enabled
search/P3P apps before we start getting crap back with every request.
XML will enable some interesting things to happen, but it's not going to
do so overnight or without hard work on anyone's part. Fairytale castles
of the magic web are nice to dream about, but they encourage everyone to
sit back and work on applications to leverage all the hard work that content
and data labellers will be doing to retrofit the "legacy web". Oh, wait--
they're sitting back and waiting for the cool apps to emerge so that they
can see if the expense and trouble is justified? Well, as long as the new
content producers are convinced and generate XML-tagged data, wait a few
years and there'll be more of it...
--
========================================================================
Strata Rose Chalup [KF6NBZ]                     strata "@" virtual.net
VirtualNet Consulting                           http://www.virtual.net/
** Project Management & Architecture for ISP/ASP Systems Integration **
=========================================================================
This archive was generated by hypermail 2b29 : Sun May 06 2001 - 08:04:37 PDT