From: Gavin Thomas Nicol (firstname.lastname@example.org)
Date: Tue Sep 05 2000 - 08:45:05 PDT
> Web server? All you need is a mechanism which adds a URI plus a
> fulltext index of the text fields (don't get me started on semantic
> web) of the inserted object at each insert (inverse operation at
> delete). If it's possible, you could just iterate over all objects in
> a database which are allowed to be indexed. Then place the index in a
> standard location, and (maybe) notify the web spider.
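The quoted proposal, maintaining a fulltext index updated at each insert (with the inverse operation at delete) and dumped to a standard location, could be sketched roughly like this. All names here (the `FulltextIndex` class, the sample URIs) are illustrative, not from any real system under discussion:

```python
import json
import re
from collections import defaultdict

class FulltextIndex:
    """Maps each word to the set of URIs whose text fields contain it."""

    def __init__(self):
        self.index = defaultdict(set)

    def insert(self, uri, text):
        # At each insert: add the object's URI under every word
        # found in its text fields.
        for word in re.findall(r"\w+", text.lower()):
            self.index[word].add(uri)

    def delete(self, uri):
        # Inverse operation at delete: drop the URI from every
        # posting list it appears in.
        for uris in self.index.values():
            uris.discard(uri)

    def dump(self, path):
        # Place the index in a "standard location" where a web
        # spider could be told to fetch it.
        with open(path, "w") as f:
            json.dump({w: sorted(u) for w, u in self.index.items()}, f)

idx = FulltextIndex()
idx.insert("/docs/747/engines", "engine thrust reverser maintenance")
idx.insert("/docs/747/hydraulics", "hydraulic pump maintenance")
idx.delete("/docs/747/engines")
```

Iterating over all indexable objects in the database and calling `insert` for each would rebuild the same index in one pass.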
I'm not quite sure I understand what you're proposing. If you
have a single document... say the Boeing 747 manuals... and
provide a means to convert it (at runtime) into pages, TOCs,
etc., with all of that done using any of a number of possible
stylesheets, it's basically impossible to generate all the links
for a robot to consume. It gets worse when you take the browser
into account, or user preferences.
About the best you can do is provide a link to your *own*
fulltext search engine, *or* dump the vocabulary list to the
crawler along with a URL pointing to the root (or perhaps to
the root with the query applied).
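That second option, handing the crawler just the vocabulary plus a root URL whose query interface can resolve any word in it, might look something like this minimal sketch. The dump format and the `?q=` query interface are assumptions for illustration, not an agreed protocol:

```python
def vocabulary_dump(index, root_url):
    """Return the distinct-word list with a query template
    rooted at root_url, for a crawler to consume."""
    return {
        "root": root_url,
        # Assumed query interface: the root with the query applied.
        "query_template": root_url + "?q={word}",
        "vocabulary": sorted(index),
    }

dump = vocabulary_dump(
    {"thrust": ["/docs/747/engines"], "pump": ["/docs/747/hydraulics"]},
    "https://example.com/manuals/747",
)
```

The crawler never has to enumerate the runtime-generated pages; it only needs the word list and a way to turn any word into a query against the root.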
My guess is you were proposing the latter?
This archive was generated by hypermail 2b29 : Tue Sep 05 2000 - 08:42:18 PDT