Nicola Guarino: Remarks on the contribution of Horst Nowacki

Surely words are symbols, whose meaning has to be established by some person. In logic, natural language words are often used as predicate symbols, while an ``interpretation function'' is used to assign meaning to these symbols (their ``semantics''). For instance, a common noun like ``product'' may be the name of a unary predicate; its ``meaning'' is given by a function assigning to that name a certain subset of a reference domain, usually constituted by real-world entities. In logic (at least in Tarskian semantics), names of predicates HAVE meaning, of course within a certain *model*, i.e. a particular choice of interpretation function. Only by assigning meaning to predicates can we assign meaning to formulas: without entering into the meaning of their constituent terms, formulas can be either true or false without ``meaning'' anything useful.
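
To make this concrete, here is a minimal sketch of a Tarskian interpretation; the domain and the extension chosen for ``product'' are purely illustrative:

    Vocabulary:      Product (a unary predicate symbol)
    Domain:          D = {b1, b2, p1}
    Interpretation:  I(Product) = {b1, b2}, a subset of D
    Satisfaction:    Product(x) is true under I iff x is in I(Product)

In this model the formula Product(b1) is true and Product(p1) is false, but only because I happens to assign this particular extension to the symbol; a different model could interpret the same word quite differently.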

Having defended the importance of giving meaning to words, there remains of course the problem of making such meaning clear. In order to capture the *intended* meaning of a predicate, we cannot *just* use other predicates (i.e. other words; I agree with Horst here): we must also *constrain* their joint interpretation, stating for instance that a product cannot be a Cartesian point. Technically, this amounts to moving from a set of predicates (a *vocabulary*) to a set of *axioms* (called ``meaning axioms'' or ``meaning postulates'') constraining the meaning of this vocabulary. Of course, such axioms will never define *exactly* the meaning of a single predicate, but with a clever design of *both* the vocabulary and the axioms it will be possible to avoid a number of misunderstandings when two agents (human beings or computers) use the same vocabulary for practical applications. Moreover, it will always be possible to make the meaning more precise in an incremental way, by adding further axioms when a potential ambiguity becomes evident. The vocabulary PLUS the axioms constitute an ``ontology''. This is why logic can be of immense benefit in capturing intended meaning and reducing ambiguity.
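
As an illustration, a pair of such meaning postulates might read as follows (the predicates and axioms are my own invention, not taken from any actual standard):

    A1. (forall x) (Product(x) -> not CartesianPoint(x))
    A2. (forall x) (Product(x) -> (exists y) (HasMaterial(x,y) and Material(y)))

Neither axiom says exactly what a product *is*, but together they exclude every interpretation in which, say, a Cartesian point counts as a product; further axioms of this kind tighten the intended meaning incrementally.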

Coming to STEP, we could simply establish the vocabulary (without meaning axioms) at the level of the Integrated Resources, leaving to the ARMs the task of making its meaning explicit. However, this would lead to the paradox of having, say, the term ``product'' mean something in ARM1 and something COMPLETELY DIFFERENT in ARM2, thereby nullifying the utility of a common vocabulary. In my opinion, the difference between the various levels of the STEP architecture (Integrated Resources, Application Interpreted Constructs, Application Reference Models, and Application Protocols) should lie in their different degrees of specialization and task-dependence, not in different ways of interpreting the same word. In general, technical words should always have the same meaning within the whole standard, and polysemies should be rare and strictly controlled.
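
To see the paradox concretely, suppose (purely hypothetically) that two ARMs constrained the shared term in incompatible ways:

    ARM1:    (forall x) (Product(x) -> PhysicalObject(x))
    ARM2:    (forall x) (Product(x) -> InformationObject(x))
    Shared:  (forall x) not (PhysicalObject(x) and InformationObject(x))

Each ARM is consistent in isolation, but under the joint axioms nothing can be a product at all: the common vocabulary has bought us nothing.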

My point is that when a term is introduced at some level of genericity, its intended meaning should be made clear at that level of genericity. In my opinion this criterion has not been followed in the Integrated Resources, since it is possible (at least in principle, but it seems also in practice) to use a term in different, incompatible ways in different APs. I hope to find the time soon to navigate the STEP documentation in order to collect some documented examples of this claim. Surely it would be very useful to have a browser where you could ask ``give me the various occurrences of the term XXX in the whole standard''.
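
Pending such a browser, even a crude script would help. Here is a minimal sketch in Python; the directory layout and the .txt extension are assumptions of mine and do not reflect how the STEP documentation is actually organized:

    # find_term.py -- list every occurrence of a term in a tree of text files.
    # Usage: python find_term.py product docs/
    import pathlib
    import re
    import sys

    def find_term(term, root):
        # Match the term as a whole word, case-insensitively.
        pattern = re.compile(r'\b' + re.escape(term) + r'\b', re.IGNORECASE)
        for path in sorted(pathlib.Path(root).rglob('*.txt')):  # assumed file extension
            text = path.read_text(errors='ignore')
            for lineno, line in enumerate(text.splitlines(), start=1):
                if pattern.search(line):
                    print(f'{path}:{lineno}: {line.strip()}')

    if __name__ == '__main__':
        find_term(sys.argv[1], sys.argv[2])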

The theses stated above are discussed in more detail in my papers, accessible via the WWW site (http://www.ladseb.pd.cnr.it/infor/Ontology/ontology.html). In particular, in ``Ontologies and knowledge bases: towards a terminological clarification'' I discuss (together with Daniele Giaretta) the problem of clarifying the intended meaning of a very simple vocabulary used to describe the arrangement of some blocks on a table; in ``Understanding, building and using ontologies'' I discuss the so-called ``interaction problem'', i.e. the degree of dependence of an ontology on the particular task at hand. In the same paper, I propose a distinction between domain ontologies, task ontologies and application ontologies, which somewhat resembles the distinction between Integrated Resources (domain ontologies and [maybe?] task ontologies) and Application Protocols.
