Ontology: the set of material or immaterial objects about which a theory assumes it can make statements. According to classical logic, an existence assumption must be made. In other fields of knowledge, the question of whether relations really exist or are merely mental constructs is not always regarded as decisive, as long as one can work with them. Immaterial objects are, e.g., linguistic structures in linguistics. See also existence, mathematical entities, theoretical entities, theoretical terms, reality, metaphysics, semantic web.

Annotation: The above characterizations of concepts are neither definitions nor exhaustive presentations of the problems related to them. Instead, they are intended to give a short introduction to the contributions below. – Lexicon of Arguments.
Norvig I 437
Ontology/AI research/Norvig/Russell: instead of trying to represent everything - which is impossible - we will leave placeholders where new knowledge for any domain can fit in. Complex domains such as shopping on the Internet or driving a car in traffic require (…) general and flexible representations. (…) these representations [concentrate] on general concepts - such as events, time, physical objects, and beliefs. >Knowledge representation/Norvig.
Norvig I 438
Upper ontology: The general framework of concepts is called an upper ontology because of the convention of drawing graphs with the general concepts at the top and the more specific concepts below them (…). In the digital-circuit example, a more general ontology would consider signals at particular times, and would include the wire lengths and propagation delays. This would allow us to simulate the timing properties of the circuit, and indeed such simulations are often carried out by circuit designers.
Norvig I 439
General purpose ontology: A general-purpose ontology should be applicable in more or less any special-purpose domain (with the addition of domain-specific axioms). In any sufficiently demanding domain, different areas of knowledge must be unified, because reasoning and problem solving could involve several areas simultaneously.
Norvig I 440
Categories: The organization of objects into categories is a vital part of knowledge representation. Although interaction with the world takes place at the level of individual objects, much reasoning takes place at the level of categories. Categories also serve to make predictions about objects once they are classified. One infers the presence of certain objects from perceptual input, infers category membership from the perceived properties of the objects, and then uses category information to make predictions about the objects.
There are two choices for representing categories in first-order logic: predicates and objects.
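The two options can be illustrated with a small sketch in Python (the category names and the set-based emulation are hypothetical, chosen only for illustration): a category used as a predicate, versus a category reified as an object that can itself stand in relations such as Member and Subset.

```python
# Sketch of the two first-order choices (all names hypothetical):
# 1) Category as a predicate:  Basketball(b)
# 2) Category as an object:    Member(b, Basketballs), Subset(Basketballs, Balls)

basketballs = {"b1", "b2"}            # the category reified as an object (a set)
balls = basketballs | {"beachball"}   # a supercategory

def is_basketball(x):
    """The category used as a predicate."""
    return x in basketballs

print(is_basketball("b1"))            # Member(b1, Basketballs) -> True
print(basketballs <= balls)           # Subset(Basketballs, Balls) -> True
```

The reified form has the advantage that the category itself can be the subject of assertions (e.g., subset relations between categories), which the predicate form cannot express directly in first-order logic.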
Norvig I 445
Objects: >Individuation/Philosophical theories, >Mass terms/Philosophical theories, >Intrinsic/extrinsic/Philosophical theories, >Categories/philosophical theories, >Description logic/AI research.
Norvig I 469
Interest in larger-scale ontologies is increasing, as documented by the Handbook on
Ontologies (Staab, 2004)(1). The OPENCYC project (Lenat and Guha, 1990(2); Matuszek et al.,
2006(3)) has released a 150,000-concept ontology, with an upper ontology (…) as well as specific concepts like “OLED Display” and “iPhone,” which is a type of “cellular phone,” which in turn is a type of “consumer electronics,” “phone,” “wireless communication device,” and other concepts.
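A taxonomy of this kind, in which one concept may fall under several more general concepts at once, is a directed graph rather than a tree, and subsumption is inferred by following is-a links transitively. A minimal sketch (the link table below is a hypothetical fragment, not OPENCYC's actual data):

```python
from collections import deque

# Direct is-a links; a concept may have several parents (hypothetical fragment).
parents = {
    "iPhone": {"cellular phone"},
    "cellular phone": {"consumer electronics", "phone",
                       "wireless communication device"},
    "phone": {"device"},
    "consumer electronics": {"device"},
    "wireless communication device": {"device"},
}

def is_a(concept, category):
    """Transitive subsumption: breadth-first search up the is-a links."""
    queue, seen = deque([concept]), set()
    while queue:
        c = queue.popleft()
        if c == category:
            return True
        if c in seen:
            continue
        seen.add(c)
        queue.extend(parents.get(c, ()))
    return False

print(is_a("iPhone", "phone"))    # True: iPhone -> cellular phone -> phone
print(is_a("phone", "iPhone"))    # False: subsumption is not symmetric
```

Production-scale ontologies need more than this (roles, exceptions, consistency checking), but transitive closure over is-a links is the core inference that category hierarchies support.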
The IEEE working group P1600.1 created the Suggested Upper Merged Ontology (SUMO) (Niles and Pease, 2001(4); Pease and Niles, 2002(5)), which contains about 1000 terms in the upper ontology and links to over 20,000 domain-specific terms. Stoffel et al. (1997)(6) describe algorithms for efficiently managing a very large ontology. A survey of techniques for extracting knowledge from Web pages is given by Etzioni et al. (2008)(7).
On the Web, representation languages are emerging. RDF (Brickley and Guha, 2004)(8) allows for assertions to be made in the form of relational triples, and provides some means for evolving the meaning of names over time. OWL (Smith et al., 2004)(9) is a description logic that supports inferences over these triples.
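The relational triple, RDF's unit of assertion, can be emulated in a few lines of plain Python (the `ex:`, `rdf:`, and `rdfs:` prefixes below mimic RDF naming conventions; this is a sketch of the data model, not an RDF implementation):

```python
# A triple store as a set of (subject, predicate, object) tuples.
triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

# Assertions in the style of the OPENCYC example above (hypothetical names).
add("ex:iPhone", "rdf:type", "ex:CellularPhone")
add("ex:CellularPhone", "rdfs:subClassOf", "ex:ConsumerElectronics")

# Query: everything asserted to be a CellularPhone.
phones = {s for (s, p, o) in triples
          if p == "rdf:type" and o == "ex:CellularPhone"}
print(phones)  # prints {'ex:iPhone'}
```

A description logic such as OWL adds the inference layer on top of such triples, e.g., concluding from the two assertions above that ex:iPhone is also an ex:ConsumerElectronics.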
So far, usage seems to be inversely proportional to representational complexity: the traditional HTML and CSS formats account for over 99% of Web content, followed by the simplest representation schemes, such as microformats (Khare, 2006)(10) and RDFa (Adida and Birbeck, 2008)(11), which use HTML and XHTML markup to add attributes to literal text. Usage of sophisticated RDF and OWL ontologies is not yet widespread, and the full vision of the Semantic Web (Berners-Lee et al., 2001)(12) has not yet been realized. The conferences on Formal Ontology in Information Systems (FOIS) contain many interesting papers on both general and domain-specific ontologies. >Knowledge representation/AI research.
An inspirational discussion of the general project of commonsense knowledge representation appears in Hayes’s (1978(13), 1985b(14)) “Naive Physics Manifesto.”
Norvig I 470
Problems: Doubts about the feasibility of a single ontology for all knowledge are expressed by Doctorow (2001)(15), Gruber (2004)(16), Halevy et al. (2009)(17), and Smith (2004)(18), who states, “the initial project of building one single ontology . . . has . . . largely been abandoned.”
1. Staab, S. (2004). Handbook on Ontologies. Springer.
2. Lenat, D. B. and Guha, R. V. (1990). Building Large Knowledge-Based Systems: Representation and Inference in the CYC Project. Addison-Wesley.
3. Matuszek, C., Cabral, J., Witbrock, M., and DeOliveira, J. (2006). An introduction to the syntax and semantics of Cyc. In Proc. AAAI Spring Symposium on Formalizing and Compiling Background Knowledge and Its Applications to Knowledge Representation and Question Answering.
4. Niles, I. and Pease, A. (2001). Towards a standard upper ontology. In FOIS ’01: Proc. International Conference on Formal Ontology in Information Systems, pp. 2–9.
5. Pease, A. and Niles, I. (2002). IEEE standard upper ontology: A progress report. Knowledge Engineering Review, 17(1), 65–70.
6. Stoffel, K., Taylor, M., and Hendler, J. (1997). Efficient management of very large ontologies. In Proc. AAAI-97, pp. 442–447.
7. Etzioni, O., Banko, M., Soderland, S., and Weld, D. S. (2008). Open information extraction from the web. CACM, 51(12).
8. Brickley, D. and Guha, R. V. (2004). RDF vocabulary description language 1.0: RDF schema. Tech. rep., W3C.
9. Smith, M. K., Welty, C., and McGuinness, D. (2004). OWL web ontology language guide. Tech. rep., W3C.
10. Khare, R. (2006). Microformats: The next (small) thing on the semantic web. IEEE Internet Computing, 10(1), 68–75.
11. Adida, B. and Birbeck, M. (2008). RDFa primer. Tech. rep., W3C.
12. Berners-Lee, T., Hendler, J., and Lassila, O. (2001). The semantic web. Scientific American, 284(5), 34–43.
13. Hayes, P. J. (1978). The naive physics manifesto. In Michie, D. (Ed.), Expert Systems in the Microelectronic Age. Edinburgh University Press.
14. Hayes, P. J. (1985b). The second naive physics manifesto. In Hobbs, J. R. and Moore, R. C. (Eds.), Formal Theories of the Commonsense World. Ablex.
15. Doctorow, C. (2001). Metacrap: Putting the torch to seven straw-men of the meta-utopia.
16. Gruber, T. (2004). Interview of Tom Gruber. AIS SIGSEMIS Bulletin, 1(3).
17. Halevy, A., Norvig, P., and Pereira, F. (2009). The unreasonable effectiveness of data. IEEE Intelligent Systems, March/April, 8–12.
18. Smith, B. (2004). Ontology. In Floridi, L. (Ed.), The Blackwell Guide to the Philosophy of Computing and Information, pp. 155–166. Wiley-Blackwell.

Explanation of symbols: Roman numerals indicate the source; Arabic numerals indicate the page number. The corresponding books are indicated on the right-hand side. ((s) …): comment by the sender of the contribution. The note [Author1]Vs[Author2] or [Author]Vs[term] is an addition from the Dictionary of Arguments. If a German edition is specified, the page numbers refer to that edition.
Stuart J. Russell & Peter Norvig
Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ 2010