Re: [port-peer-review] reviews
Thank you, Philippe! I will be traveling for a few days, and I know that
Gary is away too, but we will get on with this practice by the weekend.
--MK
On Wed, 19 Jun 2002, Philippe Martin wrote:
>
> Here are my reviews of the articles other than mine.
> A bit earlier than I expected and more provocative too. Well,
> at least this is not anonymous and the authors can answer
> (and they still have my own article to review :-)).
>
> Cheers,
>
> Philippe
>
> P.S. Eugene, if that helps you, the reviews below are also at
> http://meganesia.int.gu.edu.au/~phmartin/WebKB/articles/iccs02/reviewsForPORT.html
>
>
>
> Review 1
> --------
>
> Paper's title: Creating Conceptual Access:
> Faceted Knowledge Organization in the Unrev-II email archives
>
> Paper's author: Kathryn La Barre and Chris Dent
>
> Summary. The authors describe an experiment to use, combine and
> compare various document indexing tools (especially latent semantic analysis)
> for the creation of "conceptual" clusters of terms/concepts/facets/documents
> (however, the only relations between the terms/concepts/facets/documents/clusters
> seem to be measures of similarity/divergence calculated by the semantic analysis
> tools). The evaluation (and refinement) of the created clusters, and hence of what
> the authors call the "access structure" of the document set, is expected to be
> done by people via manual ranking and tagging (keywords or short phrases) of
> documents. Each phase of refinement is also expected to be usable as
> input for a further phase of automated clustering.
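>
> As a rough sketch of what such latent-semantic-analysis tools compute (the toy
> documents and the choice of raw term counts are invented here; the article does
> not specify which tools were used or how):

```python
# A minimal latent-semantic-analysis sketch: documents are compared via
# cosine similarity in a low-rank "latent" space obtained by truncated SVD
# of a term-document matrix. All data below is invented for illustration.
import numpy as np

docs = {
    "d1": "knowledge representation concept relations",
    "d2": "concept relations knowledge representation",
    "d3": "email archive message clustering",
}
terms = sorted({t for text in docs.values() for t in text.split()})
# term-document matrix of raw counts (real tools typically use tf-idf)
A = np.array([[text.split().count(t) for text in docs.values()] for t in terms],
             dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # keep 2 latent dimensions
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one row per document, in latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_12 = cosine(doc_vecs[0], doc_vecs[1])  # near 1: same vocabulary
sim_13 = cosine(doc_vecs[0], doc_vecs[2])  # near 0: disjoint vocabulary
print(f"sim(d1,d2)={sim_12:.2f}  sim(d1,d3)={sim_13:.2f}")
```

> Note that the output is only a similarity number between documents: nothing in
> it names a relation with a commonsense meaning.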
>
> Clarity and precision. I found the description too general and hence difficult
> to understand. There are many repetitions, but all at the same level of generality,
> i.e. without precision, definitions or examples, even for the most frequently
> used words/expressions: "cluster", "facet", "access structure", "coding messages".
> The figures are not very helpful since they do not show any term/concept/facet, and
> the nature of the relations between the nodes is not made explicit.
> My understanding of the article relies on the information that the output of
> classic document indexing tools is used, and on my conviction that therefore
> there cannot be anything really "conceptual" or "structured" to exploit.
> Hence, "facet" must refer to a simple keyword, and "access structure" to some
> calculated similarity relations which do not have any commonsense meaning.
> Which document indexing tools have been used, and how, is also unspecified.
>
> Originality. The proceedings of the WWW conferences are full of descriptions
> of tools that create clusters of documents (based on classic document indexing
> techniques) and permit navigation within and between them. I do not know
> (or like the output of) these approaches enough to appreciate their originality.
> I was more surprised by the absence of references to the use of
> Formal Concept Analysis (or similar methods) for structuring and navigating
> a base of documents and the terms used to index it. Indeed, this approach
> has the advantage of producing a genuinely understandable and structured index
> of the documents. It is also quite common now. In the CG community,
> (i) Guy Mineau classified documents in that way, and permitted navigation in the
> lattice, about 10 years ago now;
> (ii) this is of course a classic application for the Darmstadt group;
> (iii) in my own team (KVO, http://www.kvocentral.com/), Richard Cole
> finished a PhD thesis, partially on this subject too, about a year ago.
> One of his tools is specialized for e-mail classification/access and is called
> the "Email Concept Analysis" tool. It permits navigation along specialization links
> between concepts (term/document sets), filtering according to attributes
> (term, email author, destinations, date, subject, ...), and
> generation of FCA contexts/scales. I mention it as a comparison to
> the navigation envisaged by the authors of the article.
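>
> As a point of comparison, the core of such FCA-based indexing can be sketched
> in a few lines (the email/term context below is invented for illustration; this
> is not Richard Cole's tool):

```python
# Minimal Formal Concept Analysis sketch: compute all formal concepts of a
# context mapping objects (e-mails) to attribute sets (index terms). Concept
# intents are exactly the intersections of object intents, plus the top intent.
def concepts(context):
    """context: dict mapping object -> frozenset of attributes."""
    all_attrs = frozenset().union(*context.values()) if context else frozenset()
    intents = {all_attrs}                        # bottom concept's intent
    for attrs in context.values():
        intents |= {i & attrs for i in intents}  # close under intersection
    result = []
    for intent in intents:
        extent = frozenset(o for o, a in context.items() if intent <= a)
        result.append((extent, intent))
    return result

emails = {
    "mail1": frozenset({"FCA", "navigation"}),
    "mail2": frozenset({"FCA", "lattice"}),
    "mail3": frozenset({"navigation"}),
}
for extent, intent in sorted(concepts(emails), key=lambda c: len(c[1])):
    print(sorted(extent), "<->", sorted(intent))
```

> The resulting concepts are ordered by set inclusion, which is what gives the
> lattice its navigable specialization links.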
>
> Interest of the approach. This question is not discussed in the article.
> The abstract mentions the hypothesis that classic document indexing
> techniques "may be worthwhile tools to generate meaningful clusters in the
> dataset". However, there is no indication in the article that this is so.
> I personally do not think it is. I do not even have much interest in using
> the "Email Concept Analysis" tool of my friend because it is not
> "conceptual"/"knowledge-based" enough for me: I am not interested in retrieving
> sets of e-mails/documents according to terms/authors/...; I am interested in getting
> precise answers to precise questions, in seeing all the
> (conceptual/rhetorical/argumentation) relations from one object (category or
> sentence), and in navigating along the relations (e.g. along an argumentation
> path). Classic or FCA-based "document-indexing" techniques cannot provide that.
> Part of the problem is that each document includes many sentences (facts, ideas,
> argumentation for ideas, etc.) and the sentences related to an idea/object
> are scattered across many documents. Only knowledge-based techniques make it
> possible to collect (and organize, if necessary) the relevant sentences to answer
> a precise query precisely. Of course, until natural-language parsing
> techniques can actually extract the meaning of sentences in documents,
> the downside of the approach is that knowledge bases have to be built
> more or less manually. Certain uses of informal methods like Topic Maps might
> offer an acceptable compromise between precision and ease of representation.
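>
> What I mean by knowledge-based access can be sketched as follows (the
> statements and relation names are invented for illustration; this is not
> WebKB-2):

```python
# A mini knowledge base: statements linked by typed conceptual/argumentation
# relations can be queried and navigated precisely, independently of the
# document boundaries in which the statements originally appeared.
triples = [
    ("s1", "argument-for", "s2"),   # statement s1 supports s2
    ("s2", "argument-for", "s3"),
    ("s4", "objection-to", "s2"),
]

def argumentation_path(start, goal):
    """Follow 'argument-for' links from start to goal (depth-first search)."""
    stack, seen = [(start, [start])], set()
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        seen.add(node)
        for s, r, o in triples:
            if r == "argument-for" and s == node and o not in seen:
                stack.append((o, path + [o]))
    return None

# A precise question: "what chain of arguments leads from s1 to s3?"
print(argumentation_path("s1", "s3"))
```

> A keyword index over whole documents cannot return such a chain, because the
> relations between the individual sentences are never represented.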
>
>
>
>
> Review 2
> --------
>
> Paper's title: Making Doug's Dream Come True: Collaboratories in Context
>
> Paper's author: Aldo De Moor
>
> Summary. The author claims that a high-level design approach (or "framework",
> or set of "design principles") such as Douglas Engelbart's is needed to design
> and integrate collaborative tools. Some distinctions (design principles?)
> are presented, and then used to formulate a few requirements for PORT.
>
> Clarity and precision. I do not know of any precise, clear and easily applicable
> framework/methodology, e.g. in software engineering or knowledge engineering, and
> that includes the one I developed for "explanation generation" (as an extension
> and application of KADS) during my Honours thesis. I do not think there is any
> escape from that.
>
> Originality. Other frameworks are not cited, although I guess this one
> competes with, or is complementary to, quite a few.
>
> Interest of the approach. I am not convinced the few very general
> distinctions presented in the article really provide guidance. Was the author
> guided by them to come up with the content (not the presentation) of his
> few notes on PORT? Wasn't this content already common knowledge, and only
> a glimpse of the tip of the iceberg of the actually needed requirements?
> OK, these notes are examples only, but important requirements such as
> lexical/structural/semantic/ontological conventions (e.g. such as those
> I presented at ICCS'00) are not mentioned, nor is the need for
> knowledge-based cooperation protocols (asynchronous, such as those in
> WebKB-2 or CO4, or synchronous, such as those in Tadzebao and
> WebOnto).
> Can any high-level methodology or set of "design principles" for
> "collaboratories" lead developers to come up with new ideas, or ease the
> technical integration of already developed tools?
>
> Miscellaneous. Assuming that "links" do not impose constraints but only
> bring more information that may be exploited if necessary, I do not understand
> the last part of the second sentence in the following quote:
> "The more system processes are linked, the higher the level of integration.
> In some cases, a high level of integration may be desirable, whereas in other
> cases, having loosely coupled processes may be more useful."
> http://lab.bootstrap.org/port/papers/2002/demoor.html#nid021
>
>
>
> Review 3
> --------
>
> Paper's title: On the practical bearings of Peirce's maxim
>
> Paper's author: Janos J. Sarbo
>
> Summary. The author presents certain distinctions among signs and,
> if I understood correctly, claims that thanks to these distinctions the signs
> can be recursively and automatically combined to represent knowledge
> and, for people, to understand/interpret the real world.
>
> Clarity and precision. At a sufficiently high level of abstraction, guessing
> and describing what might happen in people's brains is not a particularly risky
> or difficult activity. There is an infinity of possible abstractions, and
> any reasonable high-level model is unlikely to be proven wrong. But how does
> that help us implement the human functions? This article claims that the
> presented model is a computational model and that the signs can be "generated"
> via their automatic combination. How? And how are the presented distinctions
> a guide? Furthermore, what is the expected input: formal representations
> already taking into account the proposed distinctions, free natural language,
> or the real world (audio/visual sensors)? In the last two cases, the additional
> problem is of course to reach the first case.
>
> Originality. I do not have enough information to comment on that.
>
> Interest of the approach. I do not have enough information to comment on that.
>
> Miscellaneous.
> In http://lab.bootstrap.org/port/papers/2002/sarbo.html#nid08, the author
> says an ideal language would allow everything to be viewed as an argument
> and permit everything to be recognized as an argument. Well, LISP permits
> just about anything to be used as an argument, even code itself, and what
> developers may be able to program to "recognize" arguments seems
> more limited by their time and abilities than by the language itself. What more
> abstract representational function is not and cannot be covered using LISP-like
> syntaxes? I am likely to be on the wrong track.
> In http://lab.bootstrap.org/port/papers/2002/sarbo.html#nid09, the author
> argues that human KR is based on signs. What's the alternative? If symbols
> are signs, every language (every symbolic means of representation) is based
> on signs.
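>
> To illustrate the "everything as an argument" point (in Python rather than
> LISP; a toy sketch, and obviously not necessarily what the author had in mind):

```python
# Both functions and (quoted) source code can be passed around and operated on
# as ordinary data, the way LISP treats code as lists.
import ast

def apply_twice(f, x):
    """Apply a function, itself received as an argument, twice."""
    return f(f(x))

expr = "1 + 2"                       # source code held as plain data...
tree = ast.parse(expr, mode="eval")  # ...or as a manipulable syntax tree
print(apply_twice(lambda n: n + 1, 3))      # -> 5
print(eval(compile(tree, "<expr>", "eval")))  # -> 3
```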
>
>
>