Replies: 3 comments
-
Directive information seems particularly open to this issue, as an imperative that didn't describe or designate the task to be done would be difficult to carry out. I think at least part of the issue is solved by drawing an analogy to other types of entities and their components. Animals have hearts, but we're not inclined to argue that they are both animals and hearts. In recent weeks I've heard that BFO is considering introducing a class labelled 'Capability' to capture the fact that entities have, as capabilities, the functions of their parts. So an animal has the capability to pump blood. I think this is the right way to look at some cases of information seemingly being of different types. For example, when a GPS directs that I turn right onto I-95 South, that is a directive that has a designative part. That doesn't make the directive a designative; it just expresses that it has the capability to designate. Your example is somewhat general, and I couldn't tell if you were using it as a counterexample in which the same information (not different parts of the ontology) was of multiple types. Can more specific examples be produced in which the same information, in the same context, is of multiple types?
-
Thank you for the feedback. Your suggestion about some parts being descriptive or designative and others directive is a good idea, and maybe capabilities could further clarify those relationships. My motivation is to understand what kind of information content is contained within formal ontologies. Here is a better example of what I have in mind, using simplified RDF:

```turtle
obo:BFO_0000002 owl:disjointWith obo:BFO_0000003 .  # nothing is both a continuant and an occurrent
obo:BFO_0000015 rdfs:subClassOf obo:BFO_0000003 .   # processes are occurrents
cco:Person rdfs:subClassOf obo:BFO_0000002 .        # persons are continuants

cco:has_process_part rdf:type owl:ObjectProperty ;  # processes can have process parts
    rdfs:domain obo:BFO_0000015 ;
    rdfs:range obo:BFO_0000015 .

:Tim a cco:Person ;
    cco:has_process_part :Tims_Childhood .
```

I'm assuming this is an example of descriptive information. But a reasoner like ELK can, and should, automatically infer that these triples are inconsistent. One reason ELK is able to do that is that it implements the OWL specification, which is directive information. But is that the only reason? It looks to me like these particular axioms are wholly serving as specifications for certain inferences.

I'm using this example because I want to understand what formal ontologies are, for practical reasons, not to get into any kind of broader discussion about the so-called normativity of meaning. Is an ontology a prescriptive specification of inferences (if not of a "conceptualization")? Here are some options I'm seeing:

- Option 1: The axioms are descriptive information that has directive information as a part (or maybe vice versa).
- Option 2: The axioms are only descriptive, but the OWL specification is directive. Taken together, these prescribe inferences.
- Option 3: The symbols used in my RDF example can be interpreted as either concretizing descriptive information (by humans, for example) or directive information (by machines, for example), but not information that is both.
- Option 4: The axioms are both wholly descriptive and directive, because they describe real people and prescribe inferences. Maybe the inferences in this case are processes realizable by a computer, which result in some new information content.

Thanks again!
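As an aside, the inconsistency can be sketched by hand. The following is a minimal Python sketch of my own (not ELK, and not any OWL library) that hard-codes the example's axioms and walks the subclass chain to show that :Tim is entailed to be both a continuant and an occurrent, violating the disjointness axiom:

```python
# Hand-coded sketch of the entailment; names mirror the RDF example above.
# This is an illustration of the reasoning steps, not a real OWL reasoner.

# Subclass axioms: child -> parent
subclass = {
    "cco:Person": "obo:BFO_0000002",       # persons are continuants
    "obo:BFO_0000015": "obo:BFO_0000003",  # processes are occurrents
}

# Disjointness axiom: continuant vs. occurrent
disjoint = {("obo:BFO_0000002", "obo:BFO_0000003")}

# Domain axiom: any subject of has_process_part is a process
domain = {"cco:has_process_part": "obo:BFO_0000015"}

def superclasses(cls):
    """All classes entailed by the subclass chain, including cls itself."""
    seen = {cls}
    while cls in subclass:
        cls = subclass[cls]
        seen.add(cls)
    return seen

# Inferred types of :Tim
types_of_tim = superclasses("cco:Person")                      # from ":Tim a cco:Person"
types_of_tim |= superclasses(domain["cco:has_process_part"])   # from the domain axiom

# Check every disjointness axiom against Tim's inferred types
inconsistent = any(a in types_of_tim and b in types_of_tim for a, b in disjoint)
print(inconsistent)  # True
```

The key step is the domain axiom: because :Tim appears as the subject of cco:has_process_part, he is inferred to be a process, hence an occurrent, while also being a continuant via cco:Person.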
-
@johnbeve @tmprd -- moving to the Discussion Board for evaluation and documentation under the ongoing ICE initiatives.
-
Hello!
Directive Information Content Entity is currently disjoint with Descriptive Information Content Entity and disjoint with Designative Information Content Entity. But can't a blueprint include descriptive information?

The contents of a particular (realist) OWL ontology may describe something outside of a computer, but may also prescribe certain inferences and constraints on data interpreted with the ontology. A particular OWL ontology may also prescriptively specify certain artifactual implementations, like those of the Java OWL API (maybe an instance of OWLOntology), while simultaneously describing something else, such as universals. These considerations might also apply to reference systems (see #184), but they seem especially obvious to me with formal ontologies.

The definition of prescribes suggests a similar, though different, idea about a prescription possibly serving as a "model". I suggest that these are not disjoint, because some information content can both prescribe one thing and describe another. Maybe, instead, it's not possible for some information content to both prescribe and describe the very same thing.