Re: URI: Name or Network Location?

RDF was designed to help us annotate, in a machine-readable format,
resources on the web - in particular, resources available over
Hypertext Transfer Protocol (HTTP) services.  The http URI scheme is
the most common scheme for identifying resources in the realm of the
web.  I presume the info URI scheme begins to accommodate those
resources that have not fallen naturally into some namespace on the
web.  I don't think a URI has any bearing whatsoever on the actual
___location of the resource it identifies.  In RDF we are free to use, or
make up, any valid URI scheme.  A URL, on the other hand, gives us a
locator, so that a machine knows what protocol to use and at what
address under that protocol the resource can be retrieved.  You should
be able to prod a URL with the appropriate technology and get a
response.  I think RSS 1.0 does a good job of both clarifying and
confusing the separation of URI and URL: a channel has a link property
whose value must be a URL, whereas an image is required to use its
rdf:about URI as the URL of the image file itself.
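
To make that split concrete, here is a rough sketch - not taken from
the RSS 1.0 spec; the example.org names and the use of Python's rdflib
library are purely illustrative - of the two cases: the channel's link
is a locator given as a property value, while the image's rdf:about is
expected to double as the URL of the image file.

from rdflib import Graph, Literal, Namespace, URIRef

RSS = Namespace("http://purl.org/rss/1.0/")
g = Graph()

# The channel is identified by one URI; where to retrieve the site is
# stated separately as the value of the rss:link property.
channel = URIRef("http://example.org/news.rdf")
g.add((channel, RSS.link, Literal("http://example.org/")))

# The image's rdf:about is also meant to be the URL of the image
# itself, and RSS 1.0 expects the rss:url value to match it.
image = URIRef("http://example.org/logo.png")
g.add((image, RSS.url, Literal("http://example.org/logo.png")))

print(g.serialize(format="xml"))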

Section 1.2 of the URI RFC [1] is also helpful when thinking about how
URI and URL should be interpreted.

My opinion is that if you want to reference the ___location of something,
then add some structure to your RDF, e.g. through a property, that
allows an unambiguous interpretation.
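
For example - just a sketch, where the locatedAt property, the DOI-ish
identifier and the publisher URL are all made up, and rdflib is used
purely for illustration - you might keep a non-dereferenceable URI as
the name of the thing and state its ___location as an ordinary property:

from rdflib import Graph, Namespace, URIRef

EX = Namespace("http://example.org/terms/")
g = Graph()

# The subject URI is only a name; nothing about it implies a ___location.
article = URIRef("info:doi/10.1000/example")

# The ___location is asserted explicitly, so a consumer that wants a copy
# knows exactly which URL to dereference.
g.add((article, EX.locatedAt,
       URIRef("http://publisher.example.org/articles/example.pdf")))

print(g.serialize(format="turtle"))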

regards
Matt

[1] http://www.isi.edu/in-notes/rfc2396.txt

----- Original Message ----- 
From: "Patrick Stickler" <patrick.stickler@nokia.com>
To: "ext Phil Dawes" <pdawes@users.sourceforge.net>
Cc: "Hammond, Tony (ELSLON)" <T.Hammond@elsevier.com>;
<www-rdf-interest@w3.org>
Sent: Wednesday, January 28, 2004 9:09 PM
Subject: Re: URI: Name or Network Location?


>
>
> On Jan 27, 2004, at 05:46, ext Phil Dawes wrote:
>
> > Hi Tony,
> >
> > Nobody else has answered this, so I'll have a stab.
> >
> > Hammond, Tony (ELSLON) writes:
> >>
> >>> I simply can't fathom any real benefit to having a URI
> >>> which, by definition, cannot be used to access such knowledge.
> >>
> >> The reason is to keep the barrier to entry as low as possible. By
> >> explicitly
> >> excluding dereference we have devised a very simple, focussed
> >> registration
> >> mechanism which requires almost zero maintenance and is consistent
> >> across
> >> the whole INFO namespace with a predictable behaviour (i.e.
> >> disclosure of
> >> identity). This is a baseline service - think of it as something like
> >> the
> >> Model T.
> >>
> >> I agree that it would be useful to have resource representations
> >> sitting out
> >> there on some network endpoint - but that is just way too expensive
> >> for the
> >> namespaces we are interested in fostering. There are no (human)
> >> resources
> >> available to maintain such an undertaking. The conclusion is that we
> >> either
> >> go this zero-resolution route or we accept that many of these
> >> namespaces
> >> will continue not to be represented on the Web. Which means that we
> >> will
> >> continue to be frustrated by not being able to 'talk' about well-known
> >> public information assets in Web description technologies.
> >>
> >
> > At work we've been using tag uris for the last 6 months in our
> > internal RDF knowledge base (which is still reasonably small: ~50000
> > triples), for much the same reason as the info URI scheme was created
> > - that we wanted to represent abstract concepts and physical things
> > without the dereference baggage and confusion.
> >
> > However, I've recently been convinced by Patrick's and Sandro's
> > arguments for using http uris to denote abstract concepts. It gives us
> > more flexibility in the future, for practically no cost.
> > (The fact that Sandro was one of the inventors of the tag uri scheme
> > gives his arguments additional weight.)
> >
> > We are in the process of transitioning thus:
> >
> > I've registered a subdomain (call it sw.foo.com for illustrative
> > purposes), and put a static html page up there which explains that
> > this URI space is for abstract URIs used on the semantic web.
> > Job Done.
> >
> > Now anybody who attempts to resolve a URI
> > (e.g. http://sw.foo.com/marketmaker/2004/01/trades#mytrade35) gets a
> > web page explaining that this uri represents an abstract concept or
> > physical object on the semantic web.
> >
> > This sorts out the initial confusion that relates to using http URIs
> > for abstract things, since anybody who is confused is most likely to
> > try to resolve the http URI in an HTML web browser.
> >
> > So the cost is one webpage (plus a bit of webserver config), and a DNS
> > subdomain entry.
> >
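
(A rough sketch - not from Phil's mail - of what that "one webpage plus
a bit of webserver config" could look like, here as a tiny standalone
Python server for the illustrative sw.foo.com host; an ordinary
catch-all rule in Apache or similar would do the same job:)

from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = (b"<html><body><p>URIs under http://sw.foo.com/ name abstract "
        b"concepts or physical objects for use on the semantic web. "
        b"They are identifiers, not retrievable documents."
        b"</p></body></html>")

class BoilerplateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path gets the same explanatory page; fragments such as
        # #mytrade35 never reach the server anyway.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Port 8080 here; in practice the host's front end would serve it
    # on port 80.
    HTTPServer(("", 8080), BoilerplateHandler).serve_forever()
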
>
> Precisely.
>
> > Does this make sense
>
> Absolutely.
>
> > or am I missing something?
>
> Nope.
>
> This is exactly how I personally think it should be done.
>
> Though I'd go one step further (eventually, even if not at first) and
> make that server URIQA enlightened, so that, for those resources which
> have RDF descriptions, if folks dereference the URI, they get a metadata
> description of the resource, rather than just a boilerplate response.
>
> E.g., when you do an HTTP GET on http://sw.nokia.com/VOC-1/Vocabulary
> there is no "typical" representation available for that resource, so
> the URIQA enlightened server falls back to trying to provide a
> description of that resource. If there were no description either,
> it would return a 404 response (or could return a friendly boilerplate
> response such as you describe above); but in this case there is a
> description, so it returns the description as the representation
> (which it is).
>
> If there *were* some other representation provided for typical GET
> requests, you could still obtain that description either using MGET
> or via a direct request to the http://sw.nokia.com/uriqa? portal.
>
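
(Again only a sketch, not from Patrick's mail: MGET is the non-standard
HTTP method used by URIQA, so a plain client can issue it directly;
whether anything useful comes back depends on the server being URIQA
enlightened as described above.)

import http.client

# Ask the server for a description of the resource rather than a
# representation of it, using URIQA's MGET method.
conn = http.client.HTTPConnection("sw.nokia.com")
conn.request("MGET", "/VOC-1/Vocabulary")
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type"))
print(resp.read().decode("utf-8", errors="replace"))
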
> But as a first step (or even only step) the approach you've adopted
> is IMO the way to go, and leaves the door open to adding functionality
> in the future.
>
> Cheers,
>
> Patrick
>
> --
>
> Patrick Stickler
> Nokia, Finland
> patrick.stickler@nokia.com
>
>

Received on Thursday, 29 January 2004 00:17:18 UTC