DOGMA

DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab (Semantics Technology and Applications Research Laboratory). It is an internally funded project concerned with the more general aspects of extracting, storing, representing and browsing information.[1]

Methodological roots

DOGMA, as a dialect of the fact-based modeling (FBM) approach, has its roots in database semantics and model theory.[2] It adheres to the fact-based information management methodology and to the conceptualization and 100% principles of ISO TR 9007.

The DOGMA methodological principles include:

  1. Data independence: the meaning of data shall be decoupled from the data itself.
  2. Interpretation independence: unary or binary fact types (i.e. lexons) shall adhere to a formal interpretation in order to store semantics; lexons themselves do not carry semantics.[3]
  3. Multiple views on and uses of stored conceptualization. An ontology shall be scalable and extensible.
  4. Language neutrality: an ontology shall meet multilingual needs.[4]
  5. Presentation independence: an ontology in DOGMA shall meet users' varied presentation needs. As an FBM dialect, DOGMA supports both graphical notations and textual presentation in a controlled language.[5] Semantic decision tables, for example, are a means to visualize processes in a DOGMA commitment, and SDRule-L[6] is used to visualize and publish ontology-based decision support models.
  6. Concepts shall be validated by the stakeholders.
  7. Informal textual definitions shall be provided in case the source of the ontology is missing or incomplete.

Technical introduction

DOGMA[7] is an ontology approach and framework that is not restricted to a particular representation language. This approach has some distinguishing characteristics that set it apart from traditional ontology approaches, such as (i) its grounding in the linguistic representation of knowledge[8] and (ii) the methodological separation of the domain conceptualization from the application conceptualization, known as the ontology double articulation principle.[9] The idea is to enhance the potential for re-use and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple declaring, in some context G, either:

  1. a taxonomical relationship (genus): e.g., < G, manager, is a, subsumes, person >;
  2. a non-taxonomical relationship (differentia): e.g., < G, manager, directs, directed by, company >.

Lexons can be roughly regarded as a combination of an RDF/OWL triple and its inverse, or as a conceptual-graph-style relation (Sowa, 1984). The notion of context is elaborated in the following sections.
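
Purely as an illustration, a lexon might be encoded as a simple 5-tuple data structure. The Python sketch below is an assumed representation (the class name Lexon and its field names are made up here), not STARLab's actual implementation:

    from typing import NamedTuple

    class Lexon(NamedTuple):
        # One possible encoding of a DOGMA lexon:
        # <context, head term, role, co-role, tail term>
        context: str    # abstract context identifier (the "G" above)
        head: str       # head term
        role: str       # role, read from head term to tail term
        co_role: str    # inverse role, read from tail term to head term
        tail: str       # tail term

    # Taxonomical lexon (genus): < G, manager, is a, subsumes, person >
    lexon_genus = Lexon("G", "manager", "is a", "subsumes", "person")

    # Non-taxonomical lexon (differentia): < G, manager, directs, directed by, company >
    lexon_differentia = Lexon("G", "manager", "directs", "directed by", "company")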

Language versus conceptual level

Another distinguishing characteristic of DOGMA is the explicit duality (orthogonal to double articulation) in interpretation between the language level and the conceptual level. The goal of this separation is primarily to disambiguate the lexical representation of terms in a lexon (on the language level) into concept definitions (on the conceptual level), which are word senses taken from lexical resources such as WordNet.[10] The meaning of the terms in a lexon depends on the context of elicitation.[11]

For example, consider the term “capital”. If this term is elicited from a typewriter manual, it has a different meaning (i.e. concept definition) than when it is elicited from a book on marketing. The intuition that context provides here is the following: a context is an abstract identifier that refers to the implicit and tacit assumptions of a domain, and that maps a term to its intended meaning (i.e. concept identifier) within these assumptions.[12]
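
A minimal sketch of this idea, under the assumption that a context can be modelled as a plain mapping from lexical terms to concept identifiers; the context names and WordNet-style sense labels below are illustrative only:

    # A context as a mapping from lexical terms to concept identifiers.
    # Context names and sense labels are illustrative, not taken from any
    # actual DOGMA ontology or from WordNet verbatim.
    contexts = {
        "typewriter_manual": {"capital": "capital_letter.n.01"},   # upper-case letter
        "marketing_book":    {"capital": "capital.n.01"},          # financial assets
    }

    def intended_meaning(context: str, term: str) -> str:
        # Disambiguate a lexical term into a concept identifier via its context.
        return contexts[context][term]

    # The same term maps to different concepts in different contexts.
    assert (intended_meaning("typewriter_manual", "capital")
            != intended_meaning("marketing_book", "capital"))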

Ontology evolution

Ontologies naturally co-evolve with their communities of use. De Leenheer (2007)[13] therefore identified a set of primitive operators for changing ontologies. These change primitives are conditional, meaning that their applicability depends on pre- and post-conditions.[14] This guarantees that only valid structures can be built.
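
As a sketch of what such a conditional change primitive could look like, the hypothetical operator below (adding a lexon) is guarded by a pre-condition and checked against a post-condition; it is not taken from De Leenheer's actual operator set:

    # Hypothetical conditional change primitive: adding a lexon to an ontology.
    # The operation applies only when its pre-condition holds, and its
    # post-condition is verified afterwards, so only valid structures are built.
    class Ontology:
        def __init__(self):
            self.lexons = set()   # set of (context, head, role, co_role, tail) tuples

    def add_lexon(ontology, lexon):
        context, head, role, co_role, tail = lexon
        # Pre-condition: both terms are non-empty and the lexon is not yet present.
        if not head or not tail or lexon in ontology.lexons:
            raise ValueError("pre-condition violated for %r" % (lexon,))
        ontology.lexons.add(lexon)
        # Post-condition: the lexon is now part of the ontology.
        assert lexon in ontology.lexons

    o = Ontology()
    add_lexon(o, ("G", "manager", "is a", "subsumes", "person"))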

Context dependency types

De Leenheer and de Moor (2005) distinguished four key characteristics of context:

  1. a context packages related knowledge: it defines part of the knowledge of a particular domain,
  2. it disambiguates the lexical representation of concepts and relationships by distinguishing between language level and conceptual level,
  3. it defines context dependencies between different ontological contexts and
  4. contexts can be embedded or linked, in the sense that statements about contexts are themselves in context.

Based on this, they identified three different types of context dependencies within one ontology (intra-ontological) and between different ontologies (inter-ontological): articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory,[15] would be a specialisation dependency whose dependency constraint is equivalent to the conditions for CG-specialisation.[16]
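
Purely for illustration, these dependency types could be encoded as a small enumeration together with a typed link between two contexts; this encoding and the context names are assumptions, not the authors' formalisation:

    from enum import Enum
    from typing import NamedTuple

    class DependencyType(Enum):
        ARTICULATION = "articulation"
        APPLICATION = "application"
        SPECIALISATION = "specialisation"

    class ContextDependency(NamedTuple):
        source: str               # identifier of the dependent context
        target: str               # identifier of the context depended upon
        kind: DependencyType
        intra_ontological: bool   # True if both contexts belong to one ontology

    # Illustrative inter-ontological specialisation dependency.
    dep = ContextDependency("hospital_admissions", "medical_domain",
                            DependencyType.SPECIALISATION,
                            intra_ontological=False)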

Context dependencies provide a better understanding of the whereabouts of knowledge elements and their inter-dependencies, and consequently make negotiation and application less vulnerable to ambiguity, hence more practical.

References

  1. "Welcome to VUB STARLab". Retrieved 2008-07-26.
  2. Peter Spyns, Yan Tang and Robert Meersman, An Ontology Engineering Methodology for DOGMA, Journal of Applied Ontology, special issue on "Ontological Foundations for Conceptual Modeling", Giancarlo Guizzardi and Terry Halpin (eds.), Volume 3, Issue 1-2, pp. 13-39 (2008)
  3. Meersman, R: Ontologies and databases: more than a fleeting resemblance. In A. d'Atri & M. Missikoff (eds.), OES/SEO 2001 Rome Workshop, Luiss Publications
  4. Yan Tang Demey and Clifford Heath, Towards Verbalizing Multilingual N-ary Relations, in book “Towards the Multilingual Semantic Web”, Paul Buitelaar and Philipp Cimiano (eds.), ISBN 978-3-662-43584-7, Chapter 6, 2014
  5. FBM Working Draft, European Space Agency.
  6. Yan Tang and Robert Meersman, SDRule Markup Language: Towards Modeling and Interchanging Ontological Commitments for Semantic Decision Making, Chapter V. (Section I) in "Handbook of Research on Emerging Rule-Based Languages and Technologies: Open Solutions and Approaches", IGI Publishing, ISBN 1-60566-402-2, USA, 2009
  7. (Jarrar, 2005; Jarrar et al., 2007; De Leenheer et al., 2007)
  8. (Jarrar, 2006)
  9. (see Jarrar, 2005; Jarrar et al., 2007)
  10. (Fellbaum, 1998)
  11. (De Leenheer and de Moor, 2005)
  12. (Jarrar et al., 2003)
  13. (De Leenheer et al., 2007)
  14. (Banerjee et al., 1987)
  15. (Sowa, 1984)
  16. (Sowa, 1984: p. 97)
