Janos J. Sarbo
University of Nijmegen, The Netherlands
janos@cs.kun.nl
Abstract. Peirce's signs provide a common language for knowledge. Because a Web is essentially a kind of knowledge representation, we argue that our model of Peirce's signs ([7]) can be useful for a successful implementation of a Pragmatic Web.
Recently we introduced a computational model for knowledge representation (KR) ([3],[7]) based on Peirce's pragmatic theory of signs ([5]) and a theory of cognition ([4]). Because, from the computational point of view, a Web can be considered a kind of KR, we argue that our model can be useful for the realization of a Pragmatic Web. We believe that a suitable KR is a prerequisite for efficient communication of knowledge on the Web.
A fundamental problem in the realization of a Pragmatic Web is aptly summarized in the following fragment of the workshop proposal.
"In a Pragmatic Web, everything [documents, tools, requirements, or definitions] would be viewed as an argument, or set of related inferences that could be recognized by someone as an argument." (07)
This characterization of the Web clearly identifies the need for a common language for the representation of knowledge on the Web. Such a language must be low-level enough that everything can be viewed as an argument, but it must also be high-level enough that everything can be recognized as an argument. Indeed, a comprehensible representation of knowledge requires that any concept, simple or complex, can be easily referenced, typically by a single sign.
The need for such a language, albeit in the field of compiler construction, dates back to the early years of computer science, but the concept has remained an ideal (cf. the UNCOL problem [1]). In this paper we argue that the common language problem of KR (and also of the Pragmatic Web) may be less complex. Experience shows that human communication, which involves some type of KR, is typically fast and easy. We argue that a reason for this could be that human KR is based on signs. Our research ([7]) has revealed that such a representation is both universal and efficient.
We argue that the conditions for a KR can be derived from Peirce's principle of pragmaticism ([5]). This paper is an attempt to show that our model of signs meets those conditions.
"Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object." [5.402] (012)
Following Peirce's maxim, concepts emerge from the effects of an object. Because any effect is a `real' world phenomenon, it follows that we may know about the effects of an object via perception. The conceived effects form the basis for our conception of an object, which is knowledge. Because, as Peirce maintains, we may know about phenomena by means of signs (for which reason they must be universal), we may finally conclude that the pragmatic maxim is about knowledge, which emerges from perception and is represented by means of signs. The existence of such knowledge is a necessary condition for conceiving the practical bearings of effects (which include habits), and for conceiving those effects as the whole of the conception of an object (which involves reasoning).
From the cognitive nature of knowledge it follows that we need a model of perception. According to such a model ([2]), the perception of any phenomenon begins with the input stimulus, which is continuously transformed by the senses into an internal representation. The senses consist of a finite number of elementary `units' (for example, the rods and cones of the eye). The output of such a unit is called a quality, which is a sign of the stimulus (inasmuch as the essence of any sign is that it stands for something other than itself and represents it).
An interesting feature of perception is that we may only know about the input qualities if there is a change. In such a case the brain samples the output of the senses in a percept (a percept may also contain qualities from memory, but this aspect is beyond the scope of this paper). We may assume that there exists a previous percept which, by virtue of the change, must be different from the current one. By comparing the two percepts the brain can distinguish between two sorts of qualities: one which was there and remained there (continuant), and another which was not there but is there now, or the other way round (occurrent). By means of selective attention, these qualities are further classified by the brain into two types: those we are focussing on (observed) and those we are not (complementary).
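This classification can be made concrete in a few lines of code. The following Python fragment is only a minimal sketch under assumptions of our own: percepts are given as plain sets of quality labels, and selective attention is modelled by a set of attended qualities; the names used (classify_qualities, attended, and so on) are illustrative and not part of the model of [7].

def classify_qualities(previous_percept, current_percept, attended):
    """Compare two percepts and split their qualities into the four types."""
    previous, current = set(previous_percept), set(current_percept)
    continuant = previous & current   # was there and remained there
    occurrent = previous ^ current    # appeared, or disappeared
    def split(qualities):
        # observed = focussed on; complementary = not focussed on
        return qualities & attended, qualities - attended
    return {'continuant': split(continuant), 'occurrent': split(occurrent)}

# A black stove appearing against an unchanged white wall:
print(classify_qualities({'wall', 'white'},
                         {'wall', 'white', 'stove', 'black'},
                         attended={'stove', 'black'}))

In this toy run the wall and its white color come out as (complementary) continuant qualities, while the stove and its black color come out as (observed) occurrent ones.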
A subset of qualities is a sign of the stimulus and, via the stimulus, of the observed phenomenon. For example, a single quality can refer to an appearing black color; a subset of qualities can signify something that we know as a stove. We will assume that qualities possess a relational meaning (which, however, is only potentially present at this level of cognition). For example, the black color may belong to some `thing'; a stove can have some `property' which is a color. The relational properties of qualities are habits, which are subject to learning.
The final result of the cognition of a phenomenon is a proposition (which can have a lingual or logical representation). Such a proposition is a complex sign which describes the relation between the subject and predicate of a phenomenon. If there exist complex signs, then there must also exist primary signs. A basic hypothesis of our KR approach is that the input qualities are the primary signs, and that the proposition of a phenomenon is generated only from those signs. The collection of such primary signs will be called the input.
With respect to a theory of signs, we follow Peirce's semiotics. Instead of giving a definition of signs, we illustrate his theory with an example.
Assume, for instance, that we observe smoke. If we find out that the observed smoke can refer to fire, then from our observation of smoke we may conclude that there is danger (for example). In sum, we interpret smoke as a sign and generate its meaning as `fire-signifying-smoke-as-danger'. In this process, the sign (smoke) is said to mediate between its object (fire) and meaning (interpretant).
In practice, signs almost never occur in such a clear-cut form. Besides smoke, we may observe something `not-smoke', e.g. a burning roof. Signs are typically embedded in a context (which may also involve memory signs); therefore, in order to interpret smoke as a sign, we have to (i) separate it from the context, and (ii) clarify its relation with that context. The latter can be necessary because smoke can signify danger if, for example, the roof is burning, but it can be interpreted as a sign of rescue if we are lost in the jungle.
It could be argued that smoke and context together function as a sign. However, from the KR point of view, such a perspective may not be satisfactory. Following cognition theory ([6]), our knowledge about smoke, for example, is represented in declarative semantic memory by the prototype of smoke. We assume that we also have knowledge about the potential relational properties of (the qualities of) that prototype. We argue that, by means of those properties, we are able to recognize a specific meaning of smoke as an instance of its general meaning complemented by the context.
Let us return to the four types of qualities of our model of perception. Such qualities become signs because we interpret them as a representation of the input stimulus. A continuant quality, which is something `stable', will be considered the sign of some `thing'; an occurrent one, which is something `changing', will be interpreted as the sign of some `property'. Such signs, which are themselves qualities, are called qualisigns by Peirce.
We argue that a proposition can be generated from the qualisigns by means of re-presentations. Inasmuch as we represent qualisigns, such a process will involve the generation of signs; because we re-present them, such signs will be a relation of the input qualities (remember that qualisigns are qualities).
Besides his definition of signs, Peirce also introduced an ingenious classification of signs consisting of nine types or classes. A sign class is defined in terms of a set of aspects. On the basis of this classification, in [3] we introduced a computational model for the generation of signs, which also serves as the basis for our approach to knowledge representation. The essence of this model is that, starting from the qualisigns, instances of the different types of signs are generated, typically an instance of each sign class, in increasing order of complexity. Because each sign type is involved in this process, we argue that Peirce's nine types are necessary for the recognition of the (context-embedded) sign of a phenomenon, and that his hierarchy of signs is a suitable structure for the representation of knowledge.
For the sake of completeness we briefly recapitulate our computational model, which we will alternatively call the process of recognition, or sign generation (the type of the sign generated is given in parentheses).
Qualisigns are the primary signs of a phenomenon. The set of such signs defines the input. Which qualisigns refer to the observed qualities of the phenomenon is determined in an initialization step called sorting. As a result, the signs of the parts of the phenomenon (icon) and of their simultaneous occurrence (sinsign) are generated. For an independent recognition of the continuant and occurrent qualities, we have to separate them from each other. This is carried out in an abstraction step, yielding the signs of their abstract meaning (rheme) and rule-like compatibility relation (legisign). Additionally, the sign of the embedding context is generated (index). Subsequently, the relation between those abstract meanings and the context is established in a complementation step, generating the signs of the actual subject (dicent) and predicate (symbol). Finally, those meanings are merged in a predication step, yielding the proposition sign (argument) of the observed phenomenon.
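To fix the order of these steps, the following Python skeleton threads the nine sign types through the four operations. It is a schematic sketch only, under assumptions of our own: a sign is reduced to a label plus the set of input qualities it re-presents, the embedding context is taken to be the complementary qualities, and the function body consists of placeholders rather than the actual operations defined in [3] and [7].

from dataclasses import dataclass

@dataclass(frozen=True)
class Sign:
    sign_type: str        # one of Peirce's nine sign classes
    qualities: frozenset  # the input qualities the sign re-presents

def recognize(observed, complementary):
    """Generate the proposition sign (argument) of a phenomenon."""
    observed, complementary = frozenset(observed), frozenset(complementary)
    qualisigns = [Sign('qualisign', frozenset([q])) for q in observed | complementary]

    # Sorting: which qualisigns refer to the observed qualities; the sign of
    # the parts (icon) and of their simultaneous occurrence (sinsign).
    observed_qualisigns = [s for s in qualisigns if s.qualities <= observed]
    icon = Sign('icon', frozenset().union(*(s.qualities for s in observed_qualisigns)))
    sinsign = Sign('sinsign', icon.qualities)

    # Abstraction: abstract meaning (rheme), rule-like compatibility relation
    # (legisign), and the embedding context (index).
    rheme = Sign('rheme', icon.qualities)
    legisign = Sign('legisign', sinsign.qualities)
    index = Sign('index', complementary)

    # Complementation: the actual subject (dicent) and predicate (symbol).
    dicent = Sign('dicent', rheme.qualities | index.qualities)
    symbol = Sign('symbol', legisign.qualities | index.qualities)

    # Predication: subject and predicate merged into a proposition.
    return Sign('argument', dicent.qualities | symbol.qualities)

print(recognize(observed={'smoke'}, complementary={'burning roof'}))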
Examples illustrating the above process can be found in [7] and [8]. Because phenomena can be embedded in each other, the process of sign generation can be recursive. This aspect is discussed in [7].
The process of sign recognition is a phenomenon itself, in which signs are generated recursively, revealing gradually more accurate approximations of the full richness of a sign of the observed phenomenon. These approximations arise by means of the operations sorting, abstraction, complementation and predication. In [3] it is argued that these operations can be equivalently characterized as interactions between two signs. The interpretant of an interaction is a (new) sign which, from the computational point of view, can be defined as the result of the corresponding operation. An interaction is valid if the qualities represented by the interacting signs are compatible. Because, in our model, any sign is a relation of the continuant and occurrent qualities, valid sign interactions are always meaningful.
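One possible reading of such an interaction, under the same simplified view of a sign as a set of qualities, is sketched below; the compatibility relation stands in for the learned habits mentioned earlier, and the names (interact, habits) are ours, not those of [3].

def interact(sign_a, sign_b, compatible):
    """Return the interpretant of a valid interaction, or None otherwise."""
    if all(compatible(qa, qb) for qa in sign_a for qb in sign_b):
        return sign_a | sign_b    # the new sign re-presents both inputs
    return None

# Habit: smoke and a burning roof are compatible qualities.
habits = {frozenset({'smoke', 'burning roof'})}
compatible = lambda a, b: a == b or frozenset({a, b}) in habits
print(interact({'smoke'}, {'burning roof'}, compatible))  # {'smoke', 'burning roof'}
print(interact({'smoke'}, {'jungle'}, compatible))        # None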
Signs form a universal language which, following our model of sign generation, is structurally simple. A special property of this language is that only the aspects of signs (and thereby also the number of their types) are defined as constants. This may also explain the ease of translation from one system of signs to another.
The advantage of the Peircean approach is that (i) by virtue of their universal character, any type of knowledge can be specified in terms of signs; (ii) because such signs are present in any phenomenon or problem, the task of KR is to find out what they are called in a given problem (notice that the number of the types of signs is finite); (iii) such signs or conceptions can be generated by means of the operations sorting, abstraction, complementation and predication.
The Peircean approach has the potential for combining general, prototypical knowledge with individual, actual knowledge (cf. complementation) and thereby representing knowledge in context. From (ii) and (iii) above, it follows that such an approach enables us to systematically find and represent the meaning of a phenomenon. Because the traditional methods of KR are typically not supportive in this respect, we may finally conclude that the Peircean approach can provide a more suitable basis for the realization of a Pragmatic Web.
Knowledge emerges from signs, and signs are necessary for ascertaining the real meaning of any concept, e.g. via communication. A method supporting this process is the ultimate goal of a Pragmatic Web. We argue that our computational model of sign generation provides a suitable basis for such a method. The properties of our model can be summarized as follows.
Signs form a universal language which is structurally very simple, because only the aspects (or types) of signs are defined as constants. Notice that this may also explain the ease of translation between different systems of signs. Peirce's notion of a sign is inherently dynamic (because an interpretant can become a sign, too). For the computational model this means that sign generation can be recursive. From the above features, the differences between our approach and those of CG ([9]) and FCA ([10]) can be derived. We believe that those differences can be resolved, but this needs further research.
We would like to thank Aldo de Moor for his invaluable comments on an earlier version of this paper.
1. J.J. Strong et al. The problem of programming communication with changing machines: a proposed solution. Communications of the ACM, 1(8):12-18, August 1958.
2. J.I. Farkas and J.J. Sarbo. A Peircean framework of syntactic structure. In W. Tepfenhart and W. Cyre, editors, ICCS'99, volume 1640 of LNAI, pages 112-126, Blacksburg (VA), 1999. Springer-Verlag.
3. J.I. Farkas and J.J. Sarbo. A logical ontology. In G. Stumme, editor, Working with Conceptual Structures: Contributions to ICCS 2000, pages 138-151, Darmstadt (Germany), 2000. Shaker Verlag.
4. S. Harnad. Categorical perception: the groundwork of cognition. Cambridge University Press, Cambridge, 1987.
5. C.S. Peirce. Collected Papers of Charles Sanders Peirce. Harvard University Press, Cambridge, 1931.
6. E. Rosch. Principles of categorisation. In E. Rosch and B.B. Lloyd, editors, Cognition and Categorization, Hillsdale, NJ, 1978. Lawrence Erlbaum.
7. J.J. Sarbo and J.I. Farkas. A Peircean ontology of language. In H. Delugach and G. Stumme, editors, ICCS'2001, volume 2120 of LNAI, pages 1-14, Stanford (CA), 2001. Springer-Verlag.
8. J.J. Sarbo and J.I. Farkas. A linearly complex model for knowledge representation. In U. Priss and D. Corbett, editors, ICCS'2002, LNAI, Springer-Verlag, 2002.
9. J.F. Sowa. Conceptual Structures: Information Processing in Mind and Machine. Addison-Wesley, Reading, MA, 1984.
10. R. Wille. Restructuring lattice theory: an approach based on hierarchies of concepts. In I. Rival, editor, Ordered Sets, pages 445-470. D. Reidel Publishing Company, Dordrecht-Boston, 1982.