Psychology Wiki

In information science, an upper ontology (top-level ontology, or foundation ontology) is an ontology which describes very general concepts that are the same across all knowledge domains. The most important function of an upper ontology is to support very broad semantic interoperability between a large number of ontologies accessible "under" this upper ontology. As the metaphor suggests, it is usually a hierarchy of entities and associated rules (both theorems and regulations) that attempts to describe those general entities that do not belong to a specific problem domain.

The seemingly conflicting metaphors, implying either a solid, rigorous bottom-up "foundation" or a top-down imposition of somewhat arbitrary and possibly political decisions, are no accident: the field is characterized by controversy, politics, competing approaches and academic rivalry.

Debates notwithstanding, an important part of any upper ontology can be regarded as the computational implementation of natural philosophy, itself a more empirical method of investigating the topics covered by the philosophical discipline of physical ontology.

Library classification systems predate these upper ontology systems. Though library classifications organize and categorize knowledge using general concepts that are the same across all knowledge domains, neither system is a replacement for the other.

Development

Upper ontologies are also commercially valuable, creating competition to define them. Peter Murray-Rust has claimed that this leads to "semantic and ontological warfare due to competing standards" [1], and accordingly any standard foundation ontology is likely to be contested among commercial or political parties, each with their own idea of 'what exists'. An important factor exacerbating the failure to arrive at a common approach is the absence of open-source applications that would permit different ontologies to be tested in the same computational environment. As a result, the differences are debated largely on theoretical grounds, or are merely matters of personal preference, with no method for objectively comparing practical performance.

No one upper ontology has yet gained widespread acceptance as a de facto standard. Different organizations are attempting to define standards for specific domains. The Process Specification Language (PSL) created by the National Institute of Standards and Technology (NIST) is one example.

An important factor in the lack of wide adoption of any existing upper ontology is complexity. An upper ontology typically has 2,000 to 10,000 elements (classes and relations), with complex interactions among them. The resulting complexity is similar to that of a human language, and the learning process can be even longer than for a human language because of the unfamiliar format and logical rules. The motivation to overcome this learning barrier is largely absent because of the paucity of publicly accessible examples of use. As a result, those building domain ontologies for a local application tend to create the simplest possible domain-specific ontology, not related to any upper ontology. Such domain ontologies may function adequately for the local purpose, but they are very time-consuming to relate accurately to other domain ontologies.

There is debate over whether the concept of using a single, shared upper ontology is feasible or practical at all, and further debate over whether those debates are themselves valid, with advocacy for particular approaches sometimes spilling into supposedly neutral sources. Some of these arguments are outlined below, with no attempt to be comprehensive.

Why an upper ontology is not feasible

Historically, many attempts in many societies have been made to impose or define a single set of concepts as more primal, basic, foundational, authoritative, true or rational than others.

In the kind of modern societies that have computers at all, academic and political freedom implies that many ontologies will simultaneously exist and compete for adherents. While the differences between them may be narrow and appear petty to those not deeply involved in the process, so too did many of the theological debates of medieval Europe, yet those debates still led to schisms or wars, or were used as excuses for them. The tyranny of small differences that standard ontologies seek to end may continue simply because other forms of tyranny are even less desirable. Private efforts to create competing ontologies that win adherents through better communication may therefore proceed, but they tend not to result in long-standing monopolies.

A deeper objection derives from ontological constraints that philosophers have found historically inescapable. Some argue that a transcendental perspective, or omniscience, is implied by even searching for a general-purpose ontology (see God's eye view): since any such ontology is a social and cultural artifact, there is no purely objective perspective from which to observe the whole terrain of concepts and derive any one standard.

A narrower and much more widely held objection is implicature: the more general a concept is and the more useful it is for semantic interoperability, the less likely it is to be reducible to symbolic concepts or logic, and the more likely it is to be simply taken for granted by the complex beings and cultures relying on it. Just as a fish does not perceive water, we do not see how complex and involved the process of understanding basic concepts really is.

  • There is no self-evident way of dividing the world up into concepts, and certainly no non-controversial one
  • There is no neutral ground that can serve as a means of translating between specialized (or "lower" or "application-specific") ontologies
  • Human language itself is already an arbitrary approximation of just one among many possible conceptual maps. To draw any necessary correlation between English words and any number of intellectual concepts we might like to represent in our ontologies is just asking for trouble. (WordNet, for instance, is successful and useful precisely because it does not pretend to be a general-purpose upper ontology; rather, it is a tool for semantic / syntactic / linguistic disambiguation, which is richly embedded in the particulars and peculiarities of the English language.)
  • Any hierarchical or topological representation of concepts must begin from some ontological, epistemological, linguistic, cultural, and ultimately pragmatic perspective. Such pragmatism does not allow for the exclusion of politics between persons or groups, indeed it requires they be considered as perhaps more basic primitives than any that are represented.

Those who doubt the feasibility of general purpose ontologies are more inclined to ask “what specific purpose do we have in mind for this conceptual map of entities and what practical difference will this ontology make?” This pragmatic philosophical position surrenders all hope of devising the encoded ontology version of “everything that is the case,” (Wittgenstein, Tractatus Logico-Philosophicus).

According to Barry Smith in The Blackwell Guide to the Philosophy of Computing and Information (2004), "the initial project of building one single ontology, even one single top-level ontology, which would be at the same time non-trivial and also readily adopted by a broad population of different information systems communities, has largely been abandoned." (p. 159)

Finally, there are objections similar to those against artificial intelligence. Technically, the complexity of concept acquisition and of the social and linguistic interactions of human beings suggests that any axiomatic foundation of "most basic" concepts must be cognitive, biological or otherwise difficult to characterize, since we do not have axioms for such systems. Ethically, any general-purpose ontology could quickly become an actual tyranny by recruiting adherents into a political program designed to propagate it and its funding means, and possibly to defend it by violence. Historically, inconsistent and irrational belief systems have proven capable of commanding obedience to the detriment or harm of persons both inside and outside a society that accepts them. How much more harmful would a consistent, rational one be, were it to contain even one or two basic assumptions incompatible with human life?

Why an upper ontology is feasible

Many of those who doubt the possibility of developing wide agreement on a common upper ontology fall into one of two traps: (1) they assert that there is no possibility of universal agreement on any conceptual scheme, ignoring the fact that a practical common ontology does not need universal agreement; it only needs a user community large enough to make it profitable for developers to use it as a means to general interoperability, and for third-party developers to build utilities that make it easier to use; or (2) they point out that developers of data schemes find different representations congenial for their local purposes, without demonstrating that these different representations are in fact logically inconsistent.

In fact, different representations of assertions about the real world (though not philosophical models), if they accurately reflect the world, must be logically consistent, even if they focus on different aspects of the same physical object or phenomenon. If any two assertions about the real world are logically inconsistent, one or both must be wrong, and that is a topic for experimental investigation, not for ontological representation. In practice, representations of the real world are created as, and known to be, approximations to the underlying reality, and their use is circumscribed by the limits of measurement error in any given practical application. Ontologies are entirely capable of representing approximations, and of representing situations in which different approximations have different utility. Objections based on the different ways people perceive things attack a simplistic, impoverished view of ontology.
The objection that there are logically incompatible models of the world is true, but in an upper ontology those different models can be represented as different theories, and the adherents of those theories can use them in preference to others while preserving the logical consistency of the necessary assumptions of the upper ontology. Those necessary assumptions provide the logical vocabulary with which to specify the meanings of all of the incompatible models. It has never been demonstrated that incompatible models cannot be specified with a common, more basic set of concepts, while there are examples of incompatible theories that can be logically specified with only a few basic concepts.

Many of the objections to upper ontology refer to the problems of life-critical decisions or non-axiomatized problem areas such as law or medicine or politics that are difficult even for humans to understand. Some of these objections do not apply to physical objects or standard abstractions that are defined into existence by human beings and closely controlled by them for mutual good, such as standards for electrical power system connections or the signals used in traffic lights. No single general metaphysics is required to agree that some such standards are desirable. For instance, while time and space can be represented many ways, some of these are already used in interoperable artifacts like maps or schedules.

Objections to the feasibility of a common upper ontology also do not take into account the possibility of forging agreement on an ontology that contains all of the primitive ontology elements that can be combined to create any number of more specialized concept representations. Adopting this tactic permits effort to be focused on agreement only on a limited number of ontology elements (under 10,000). By agreeing on the meanings of that inventory of basic concepts, it becomes possible to create and then accurately and automatically interpret an infinite number of concept representations as combinations of the basic ontology elements. Any domain ontology or database that uses the elements of such an upper ontology to specify the meanings of its terms will be automatically and accurately interoperable with other ontologies that use the upper ontology, even though they may each separately define a large number of domain elements not defined in other ontologies. In such a case, proper interpretation will require that the logical descriptions of domain-specific elements be transmitted along with any data that is communicated; the data will then be automatically interpretable because the domain element descriptions, based on the upper ontology, will be properly interpretable by any system that can properly use the upper ontology. An upper ontology based on such a set of primitive elements can include alternative views, provided that they are logically compatible. Logically incompatible models can be represented as alternative theories, or represented in a specialized extension to the upper ontology. The proper use of alternative theories is a piece of knowledge that can itself be represented in an ontology.
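
The composition strategy described above can be illustrated with a minimal sketch. All names and data here are invented for illustration; the point is only that two independently developed domain ontologies, if both ground their terms in the same small inventory of primitives, can be compared mechanically at the primitive level:

```python
# A tiny shared "upper ontology": a fixed inventory of primitive elements.
# (Hypothetical primitives; a real upper ontology would have thousands.)
PRIMITIVES = {"PhysicalObject", "Process", "TimeInterval", "Agent"}

def define(*primitives):
    """Define a domain concept as a combination of shared primitives."""
    unknown = set(primitives) - PRIMITIVES
    if unknown:
        raise ValueError(f"undefined primitives: {unknown}")
    return frozenset(primitives)

# Two domain ontologies, developed separately, with different labels.
medical = {"surgical_procedure": define("Process", "Agent")}
legal   = {"court_hearing":      define("Process", "Agent")}

def same_meaning(a, b):
    """Interoperability check: do the definitions coincide at the primitive level?"""
    return a == b

print(same_meaning(medical["surgical_procedure"], legal["court_hearing"]))  # True
```

The example trivializes what is in practice a hard logical problem, but it shows the mechanism the argument relies on: once terms are specified in a common basic vocabulary, comparison and interpretation need no human negotiation.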

Most proponents of an upper ontology argue that several good ones may be created, perhaps with different emphases. Very few argue that just one is to be discovered within natural language or even within an academic field; most are simply standardizing some existing communication. Another view advanced is that there is almost total overlap among the different ways upper ontologies have been formalized, in the sense that different ontologies focus on different aspects of the same entities, and the different views are complementary rather than contradictory; as a result, an internally consistent ontology that contains all the views, with means of translating each view into the others, is feasible. No such ontology has yet been constructed, however, because it would require a large project to include all of the alternative views found in the separately developed upper ontologies, along with their translations. The main barrier to its construction is not technical, but the reluctance of funding agencies to support a large enough consortium of developers and users.

Several common arguments against upper ontology can be examined more clearly by separating issues of concept definition (ontology), language (lexicons), and facts (knowledge). For instance, people use different terms and phrases for the same concept; that does not necessarily mean they are referring to different concepts. They may simply be using different language or idiom. Formal ontologies typically use linguistic labels to refer to concepts, but the terms that label ontology elements mean no more and no less than what their axioms say they mean. Labels are similar to variable names in software: evocative rather than definitive. The proponents of a common upper ontology point out that the meanings of the elements (classes, relations, rules) in an ontology depend only on their logical form, not on the labels, which are usually chosen merely to make the ontologies more easily usable by their human developers. In fact, the labels for elements in an ontology need not be words at all; they could be, for example, images of instances of a particular type, or videos of an action that a particular type represents.

It cannot be emphasized too strongly that what an ontology represents is not words, but entities in the real world, or abstract entities (concepts) in the minds of people. Words are not equivalent to ontology elements; words *label* ontology elements. There can be many words that label a single concept, even in a single language (synonymy), and there can be many concepts labeled by a single word (ambiguity). Creating the mappings between human language and the elements of an ontology is the province of natural language understanding, but the ontology itself stands independently as a logical and computational structure. For this reason, finding agreement on the structure of an ontology is actually easier than developing a controlled vocabulary, because all the different interpretations of a word can be included, each mapped to the appropriate term in the different terminologies.
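
The label/concept distinction can be made concrete with a small sketch (the concept identifiers and word lists below are invented): concepts are opaque identifiers, and words merely annotate them, so one concept can carry several labels (synonymy) and one word can label several concepts (ambiguity):

```python
# Concepts identified by opaque IDs; labels are annotations, not the concepts.
concept_labels = {
    "C_FINANCIAL_INSTITUTION": ["bank"],
    "C_RIVER_EDGE": ["bank", "riverbank"],
}

def labels_of(concept):
    """Synonymy: all words that label one concept."""
    return concept_labels[concept]

def concepts_labeled(word):
    """Ambiguity: all concepts a single word can label."""
    return [c for c, labels in concept_labels.items() if word in labels]

print(concepts_labeled("bank"))   # ['C_FINANCIAL_INSTITUTION', 'C_RIVER_EDGE']
print(labels_of("C_RIVER_EDGE"))  # ['bank', 'riverbank']
```

Mapping words to concept identifiers in this way is exactly the disambiguation task the paragraph assigns to natural language understanding; the ontology itself operates only on the identifiers and their axioms.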

A second argument is that people believe different things, and therefore cannot have the same ontology. However, people can assign different truth values to a particular assertion while accepting the validity of certain underlying claims, facts, or ways of expressing an argument with which they disagree (using, for instance, the issue/position/argument form). This objection to upper ontologies ignores the fact that a single ontology can represent different belief systems, representing them as different belief systems, without taking a position on the validity of any of them.

Even arguments about the existence of a thing require a certain sharing of a concept, even though its existence in the real world may be disputed. Separating belief from naming and definition also helps to clarify this issue, and shows how concepts can be held in common even in the face of differing belief. For instance, wiki as a medium may permit such confusion, but disciplined users can apply dispute-resolution methods to sort out their conflicts. It is also argued that most people share a common set of "semantic primitives", fundamental concepts to which they refer when trying to explain unfamiliar terms to other people. An ontology that includes representations of those semantic primitives could then be used to create logical descriptions of any term that a person may wish to define logically. That ontology would be one form of upper ontology, serving as a logical "interlingua" that can translate ideas in one terminology into their logical equivalent in another terminology.

Advocates argue that most disagreement about the viability of an upper ontology can be traced to the conflation of ontology, language and knowledge, or to overly specialized areas of knowledge: many people, agents or groups will have areas of their respective internal ontologies that do not overlap. If they can cooperate and share a conceptual map at all, this may be so useful that it outweighs any disadvantages that accrue from sharing. To the degree that it becomes harder to share concepts the deeper one probes, the more valuable such sharing tends to become. If the problem is as basic as opponents of upper ontologies claim, then it also applies to a group of humans trying to cooperate, who might need machine assistance to communicate easily.

If nothing else, such ontologies are implied by machine translation, which is used when people cannot practically communicate. Whether "upper" or not, they seem likely to proliferate.

Available ontologies

Cyc

Main article: Cyc

A well-known and quite comprehensive ontology available today is Cyc, a proprietary system under development since 1986, consisting of a foundation ontology and several domain-specific ontologies (called microtheories). A subset of that ontology has been released for free under the name OpenCyc, and a more or less unabridged version is made available for non-commercial use under the name ResearchCyc.

Basic Formal Ontology (BFO)

Main article: Basic Formal Ontology

The BFO or Basic Formal Ontology framework developed by Barry Smith and his associates consists of a series of sub-ontologies at different levels of granularity. The ontologies are divided into two varieties: SNAP (or snapshot) ontologies, comprehending continuant entities such as three-dimensional enduring objects, and SPAN ontologies, comprehending processes conceived as extended through (or as spanning) time. BFO thus incorporates both three-dimensionalist and four-dimensionalist perspectives on reality within a single framework. Interrelations are defined between the two types of ontologies in a way which gives BFO the facility to deal with both static/spatial and dynamic/temporal features of reality. Each SNAP ontology is an inventory of all entities existing at a time. Each SPAN ontology is an inventory (processory) of all the processes unfolding through a given interval of time. Both types of ontology serve as a basis for a series of sub-ontologies, each of which can be conceived as a window on a certain portion of reality at a given level of granularity. An example of an application of BFO can be seen in the Ontology for Biomedical Investigations (OBI); a large number of other ontologies are also based on BFO.
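
The SNAP/SPAN division can be sketched in a few lines of code. The class names and example data below are invented for illustration and are not part of BFO itself; they show only the structural idea of continuants, occurrents, and the interrelations linking them:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Continuant:
    """SNAP-style entity: wholly present at each moment it exists."""
    name: str

@dataclass(frozen=True)
class Occurrent:
    """SPAN-style entity: a process extended through a time interval,
    linked to the continuants that participate in it."""
    name: str
    start: float
    end: float
    participants: tuple

heart = Continuant("heart")
beat = Occurrent("heartbeat", start=0.0, end=0.8, participants=(heart,))

# A SNAP inventory lists what exists at a time; a SPAN inventory lists the
# processes unfolding over an interval, tied back to their participants.
print(heart in beat.participants)  # True
```

The interrelation (here, the `participants` link) is what lets a framework of this shape handle both static/spatial and dynamic/temporal views of the same portion of reality.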

DOLCE and DnS

Developed by Nicola Guarino and his associates at the Laboratory for Applied Ontology (LOA), the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) is the first module of the WonderWeb foundational ontologies library. As its name implies, DOLCE has a clear cognitive bias, in that it aims at capturing the ontological categories underlying natural language and human common sense. DOLCE, however, does not commit to a strictly referentialist metaphysics about the intrinsic nature of the world. Rather, the categories it introduces are thought of as cognitive artifacts, which ultimately depend on human perception, cultural imprints and social conventions. In this sense, they are intended to be merely descriptive (rather than prescriptive) notions that assist in making already formed conceptualizations explicit. DOLCE is an ontology of particulars, in the sense that its domain of discourse is restricted to them. Universals are used to organize and characterize the particulars, but they are not themselves subject to being organized and characterized (e.g., by means of metaproperties).

DnS (Descriptions and Situations), developed by Aldo Gangemi (LOA, Rome), is a constructivist ontology that pushes DOLCE’s descriptive stance even further. DnS does not put restrictions on the type of entities and relations that one may want to postulate, either as a domain specification, or as an upper ontology, and it allows for context-sensitive ‘redescriptions’ of the types and relations postulated by other given ontologies (or ‘ground’ vocabularies). The current OWL encoding of DnS assumes DOLCE as a ground top-level vocabulary. DnS and related modules also exploit ‘CPs’ (Content ontology design Patterns), a newly created tool which provides a framework to annotate ‘focused’ fragments of a reference ontology (i.e., the parts of an ontology containing the types and relations that underlie ‘expert reasoning’ in given fields or communities). The combination of DOLCE and DnS has been used to build a planning ontology known as DDPO[2] (DOLCE+DnS Plan Ontology).

Both DOLCE and DnS are particularly devoted to the treatment of social entities, such as organizations, collectives, plans, norms, and information objects. The DOLCE-2.1-Lite-Plus OWL version, including a number of DnS-based modules, has been, and continues to be, applied to several ontology projects.

A lighter OWL axiomatization of DOLCE and DnS, which also simplifies the names of many classes and properties, adds extensive inline comments, and thoroughly aligns to the repository of Content patterns (available at the ODP wiki), is now available as DOLCE+DnS-Ultralite (abbreviated DUL). Despite this simplification, which greatly speeds up consistency checking and classification of the OWL domain ontologies plugged into it, the expressivity of DUL is not significantly different from that of the earlier DOLCE-Lite-Plus. The DOLCE OWL versions, DOLCE+DnS-Ultralite and the pattern repository are developed and maintained by Aldo Gangemi and his associates at Rome's Semantic Technology Lab.

General Formal Ontology (GFO)

Main article: General Formal Ontology

The General Formal Ontology (GFO), developed by Heinrich Herre and his colleagues of the research group Onto-Med in Leipzig, is a realistic ontology integrating processes and objects. It attempts to include many aspects of recent philosophy, which is reflected both in its taxonomic tree and its axiomatizations. GFO allows for different axiomatizations of its categories (such as the existence of atomic time-intervals vs. dense time). The basic principles of GFO are published in the Onto-Med Report Nr. 8 and in General Formal Ontology (GFO): A Foundational Ontology for Conceptual Modelling.

Two GFO specialties, among others, are its account of persistence and its time model. Regarding persistence, the distinction between endurants (objects) and perdurants (processes) is made explicit within GFO by the introduction of a special category, a persistant. A persistant is a special category with the intention that its instances "remain identical" (over time). With respect to time, time intervals are taken as primitive in GFO, and time-points (called "time boundaries") as derived. Moreover, time-points may coincide, which is convenient for modelling instantaneous changes.
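
GFO's time model can be sketched concretely. In the fragment below (an illustrative encoding, not GFO's actual formalization), intervals are the primitive objects, time boundaries are derived from them, and the boundaries of adjacent intervals coincide, which is how an instantaneous change such as a lamp switching on can be modeled without a gap or overlap:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Primitive time interval; numeric coordinates stand in for boundaries."""
    start: float
    end: float

# Time boundaries ("time points") are derived from intervals, not primitive.
def left_boundary(i: Interval) -> float:
    return i.start

def right_boundary(i: Interval) -> float:
    return i.end

# Two adjacent process intervals: "lamp off", then "lamp on".
off = Interval(0.0, 5.0)
on = Interval(5.0, 9.0)

# The right boundary of one interval coincides with the left boundary of
# the next: the switch-on is instantaneous, with no gap and no overlap.
print(right_boundary(off) == left_boundary(on))  # True
```

Taking intervals as primitive and letting derived boundaries coincide is the design choice that makes instantaneous change representable without paradox about "the moment of switching".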

IDEAS

The upper ontology developed by the IDEAS Group is higher-order, extensional and 4D. It was developed using the BORO Method. The IDEAS ontology is not intended for reasoning and inference purposes; its purpose is to be a precise model of business.

WordNet

Main article: WordNet

WordNet, a freely available database originally designed as a semantic network based on psycholinguistic principles, was expanded by addition of definitions and is now also viewed as a dictionary. It qualifies as an upper ontology by including the most general concepts as well as more specialized concepts, related to each other not only by the subsumption relations, but by other semantic relations as well, such as part-of and cause. However, unlike Cyc, it has not been formally axiomatized so as to make the logical relations between the concepts precise. It has been widely used in Natural language processing research.
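
The mix of relations described above can be illustrated with a toy fragment (the miniature data below is invented and is not drawn from the real WordNet database): concepts linked by subsumption (hypernymy) as well as by other semantic relations such as part-of:

```python
# Subsumption (hypernymy) links: each term's more general concept.
hypernym = {"dog": "canine", "canine": "mammal", "mammal": "animal"}

# A non-subsumption relation: part-of (meronymy).
part_of = {"paw": "dog", "tail": "dog"}

def hypernym_chain(word):
    """Walk subsumption links up to the most general concept."""
    chain = [word]
    while chain[-1] in hypernym:
        chain.append(hypernym[chain[-1]])
    return chain

print(hypernym_chain("dog"))  # ['dog', 'canine', 'mammal', 'animal']
print(part_of["paw"])         # 'dog'
```

What distinguishes WordNet from an axiomatized ontology like Cyc is that relations of this kind are recorded as links between word senses without formal axioms pinning down their logical consequences.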

Suggested Upper Merged Ontology

Main article: Suggested Upper Merged Ontology

The Suggested Upper Merged Ontology (SUMO) is another comprehensive ontology project. It includes an upper ontology, created by the IEEE working group P1600.1 (predominantly by Ian Niles and Adam Pease). It is extended with many domain ontologies and a complete set of links to WordNet. It is freely available.

IDM (Integration, Differentiation, and Meaning)

Through an analysis of the manner in which the brain appears to derive categories of meaning, an upper ontology is established by creating an abstract domain model that can be used to translate meanings across specialist domain models, where such models are derived from recursion; in so doing, the approach highlights the core methodology used in the brain for deriving meaning, real or imagined. The focus of the work is on (a) identifying the general methodology for meaning processing by humans and (b) introducing a basic ground for meaning derivation and communication by AI systems. In identifying this general methodology, a core property of recursion is identified that allows for the emergence of language and of consciousness as an agent of mediation.

Biomedical ontology

Examples of domain ontologies can be found at the Open Biomedical Ontology site. They act as an umbrella organisation for many ontologies specific to biological topics (such as cellular organelles).

COSMO

COSMO (COmmon Semantic MOdel) is an ontology that was initiated as a project of the COSMO working group of the Ontology and Taxonomy Coordinating Working Group. The current version is an OWL ontology, but a Common Logic-compliant version is anticipated in the future. The ontology and explanatory files are available at the COSMO site. The goal of the COSMO working group was to develop a foundation ontology, by a collaborative process, that can represent all of the basic ontology elements that all members feel are needed for their applications. The development of COSMO is fully open, and comments or suggestions from any source are welcome. After discussion and input from members in 2006, development of COSMO has been continued primarily by Patrick Cassidy, the chairman of the COSMO Working Group. Contributions and suggestions from any interested party are still welcome and encouraged.

Many of the types (OWL classes) in the current COSMO have been taken from OpenCyc OWL version 0.78 and from SUMO. Other elements were taken from other ontologies (such as BFO and DOLCE), or developed specifically for COSMO. Recent development of COSMO has focused on including representations of all of the words in the Longman Dictionary of Contemporary English (LDOCE) controlled defining vocabulary (2,148 words). These words are sufficient to define (linguistically) all of the entries in the LDOCE. It is hypothesized that the ontological representations of the concepts represented by those terms will be sufficient to specify the meanings of any specialized ontology element, thereby serving as a basis for general semantic interoperability. The current (May 2009) OWL version of COSMO has over 6,400 types (OWL classes), over 700 relations, and over 1,400 restrictions.

See also

  • Foundations of mathematics
  • Physical ontology
  • Process ontology
  • Formal Ontology
  • Semantic interoperability
  • Commonsense knowledge

References

  1. Reported by Edd Dumbill in [1]
  2. Gangemi, A., Borgo, S., Catenacci, C., and Lehman, J. (2005). Task taxonomies for knowledge content (deliverable D07). Laboratory for Applied Ontology (LOA).
This page uses Creative Commons Licensed content from Wikipedia (view authors).