The Fantastic Combinations and Permutations of Coordinate Systems' Characterising Options
The Game of Constructional Ontology
The multi-level modelling community’s raison d'être is its vision of the ubiquity and importance of multi-level-types: the ascending levelled hierarchy of types in conceptual models, starting with types of things, then types of these types, then types of these types of types, and so on. The community both promotes this vision and investigates this hierarchy, looking at how it can be accommodated within existing frameworks. In this paper, we consider a specific domain: coordinate systems’ characterising options. While we recognise that, unsurprisingly, this domain contains an abundance of multi-level-types, our interest is in investigating a new and different approach to understanding them. For this we needed to develop a new framework. We devise one focussed on this case, based upon scaling down to simple compositional algorithms (called constructors) to form a new, radically simpler foundation. From the simple operations of these constructors emerge the scaled-up multi-level structures of the domain. We show how the simple operations of simple constructors give rise to compositional connections that shape – and so explain – different complex hierarchies and levels, including the familiar multi-level-types and the relatively unknown multi-level-tuples. The framework crystallises these connections as metaphysical grounding relations. We look at how simple differences in the shape and operation of constructors give rise to different varieties of these hierarchies and levels – and the impact this has. We also look at how the constructional approach reveals the differences between foundational constructors and derived constructors built from the foundational constructors – and show that conceptual modelling’s generalisation relations are secondary and dependent upon the more foundational instantiation relations. Based upon this, we assemble a constructional foundational ontology using the BORO Foundational Ontology as our starting point.
We then use this to reveal and explain the formal levels and hierarchies that underlie the options for characterising coordinate systems.
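The constructional idea sketched in the abstract can be illustrated with a toy set constructor. This is a minimal sketch of my own, not the paper's formalism: it assumes only that repeatedly applying one simple constructor to its own outputs generates an ascending hierarchy of levels (things, types of things, types of types).

```python
# A minimal sketch (illustrative only, not the paper's formalism) of a
# foundational "constructor" that builds sets from already-constructed
# inputs. Iterating it yields an ascending multi-level hierarchy.

def set_constructor(*members):
    """Foundational constructor: build a set from existing entities."""
    return frozenset(members)

def level(x):
    """Level of an entity in the constructional hierarchy:
    non-sets sit at level 0; a set sits one level above its
    highest-level member."""
    if not isinstance(x, frozenset):
        return 0
    return 1 + max((level(m) for m in x), default=0)

# Ground level: a plain thing (a string standing in for an individual).
thing = "a-coordinate-axis"
# Each application of the constructor climbs one level.
a_type = set_constructor(thing)        # level 1: a type of things
a_metatype = set_constructor(a_type)   # level 2: a type of types

print(level(thing), level(a_type), level(a_metatype))
```

The point of the sketch is that the multi-level hierarchy is not stipulated anywhere; it emerges from the repeated operation of a single simple constructor.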
Developing Thin Slices
An Introduction to the Methodology for Developing the Foundation Data Model and Reference Data Library of the Information Management Framework
This Developing Thin Slices report provides a technical description of the process at the heart of the Thin Slices Methodology, with the aim of providing a common technical resource for training and guidance in this area. As such, it forms part of the wider effort to provide common resources for the development of the Information Management Framework.
It focuses on the process at the core of the Thin Slices Methodology. In particular, it identifies a requirement for a minimal foundation for these kinds of processes. In the companion report, Top-Level Categories (Partridge, forthcoming), the foundation adopted by the Information Management Framework is described. Together, the two reports cover the details of the developing thin slices process.
Top-Level Categories
Categories for the Top-Level Ontology of the Information Management Framework
This report identifies the top categories that characterise the top-level ontology that will underpin the Information Management Framework’s Foundation Data Model, where top categories are those that exclusively and exhaustively divide the world’s entities by their fundamental kinds or natures.
A thin slices approach (described in Developing Thin Slices (Partridge, forthcoming)) has been adopted for the development of the Foundation Data Model. The category structure described in this report provides the firm foundation for that process.
Ontology Architecture: Top Ontology Architecture
In various disciplines, when working on larger projects, there is a tradition of thinking in terms of an architecture (e.g. enterprise, systems or software architecture).
Firstly, a meta-methodological point: this suggests that a good methodology for approaching large ontology projects should have an architectural component.
Here, architecture is used in a loose sense. There is extensive discussion on what exactly an architecture is, but this is not directly relevant to the points made; agreeing to use the term in a loose sense avoids that discussion – however interesting a rabbit hole it may be.
The points are illustrated with examples from the development and application of top ontologies such as BORO, IDEAS and MODEM.
MODEM MODAF Migration: Providing an ontological foundation
This report on the MODEM project is in three sections: 1) An executive summary that explains the motivation for the MODEM work. 2) An introduction to the real world analysis that was done as part of the MODEM work, which gives a deeper understanding of the ideas that underlie it and provides examples of their use. 3) A detailed technical IDEAS analysis explaining the IDEAS MODEM model. The detailed technical analysis focuses on the modelling of behaviour. It aims to re-engineer the UML behaviour model, which has no real world semantics, into an ontological foundation for the modelling of behaviour.
Each of the sections builds upon the previous section and is aimed at a different audience. The first section is aimed at management who need to understand the basis for the MODEM work. The second section is aimed at users who need to understand the issues that the MODEM work raises without delving into the technical details of the IDEAS model. The third and final section provides the detailed IDEAS analysis for the technical experts.
ISO TC211 workshop to consider the impact of non-relational technologies on TC211 standards
The presentation covers:
- Background
- Space-Time Component (plus Names)
- First workstream – Foundations: Quick View
- Second workstream - Overview
- Mapping General : Spatial Objects
- Part 42 – Mapping: Spatial Objects
- OS Open Names: Mapping
- Relations (Foundation Extension)
ISO TC211 workshop: to consider the impact of non-relational technologies on TC211 standards: BORO Solutions experience
The presentation covers:
- Is there a workable UML profile for managing ontologies?
- What should the output of such a model be like?
- (we covered how neither UML nor OWL is ideal for this
- there are certainly problems generating OWL ontologies from the current TC211 UML profile
- the TC211 use of UML could be improved, even within its own profile)
- What Chris brings is experience (in his domain) of using UML to create/manage ontologies
- (quite probably not expressed in OWL)
How the IMF Team is building a four-dimensional top-level ontology
This describes the IMF's approach to building a four-dimensional top-level ontology (TLO). It starts with the background, describing the Information Management Framework (IMF) and its approach to top-level ontologies, with a focus on fundamental ontological choices that typically boil down to a choice of whether to stratify or unify. It outlines the TLO's initial use case – 'Euclidean' standards – and the ontological scope this creates. It then situates the TLO in the Foundation Data Layer of the IMF, built upon the ground layer, the Core Constructional Ontology (CCO). It then describes the CCO and the TLO in terms of their components.
Presentation Structure
- Introducing the IMF Team
- Background
- Information Management Framework
- Choice-based framework
- TLO Initial Use
- Situating the TLO in the IMF
- Data Section: Core Constructional Ontology
- Data Section: Top Level Ontology
Modernising Engineering Datasheets a bCLEARer project
A case study in migrating from legacy engineering standards
This presentation aims to provide a sense of what happens when a bCLEARer project is faced with data in the FORM paradigm (a kind of semi-structured data). It aims to show how the implicit FORM syntax can be made explicit. The presentation walks through a case history based upon a real project, but both simplified and sanitised. This is a sample of actual work, not intended as an example of best practice – or even good practice. It does, however, show some typical approaches and the challenges faced, particularly at the early alpha stage.
Building the foundations for better decisions
This presentation describes the Top-Level Ontology (TLO) that is being developed for the Information Management Framework (IMF). It starts with a brief outline of how the TLO emerged from the work on the IMF. It notes the initial focus on providing a foundation for Euclidean standards. It touches on the foundation – the Core Constructional Ontology – built from a unified constructor theory with three elements: set, sum and tuple constructors. It then looks at the data components of the TLO and how these are used to build four-dimensional space-time, taking in mereotopology, chronology and worldlines.
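The three constructor elements mentioned above can be sketched in code. This is a minimal illustration of my own, not the IMF's actual formalism: it assumes only that the three constructors can be contrasted by how they treat the same inputs (sums fuse, sets collect, tuples order).

```python
# A minimal sketch (illustrative only, not the IMF's formalism) of a
# unified constructor theory with three elements: set, sum and tuple
# constructors, each building a new entity from existing ones.

def set_constructor(*members):
    """Construct a set from its members; membership, not order, matters."""
    return frozenset(members)

def sum_constructor(*parts):
    """Construct a mereological sum (fusion) of parts. Parts are modelled
    here as sets of atoms, so the fusion is their union."""
    return frozenset().union(*parts)

def tuple_constructor(*components):
    """Construct an ordered tuple; unlike sets and sums, order matters."""
    return tuple(components)

# Two simple parts, each a set of atoms.
a = frozenset({"atom1"})
b = frozenset({"atom2"})

# Sums and sets differ: summing fuses parts into one whole,
# set-forming keeps the parts distinct as members.
assert sum_constructor(a, b) == frozenset({"atom1", "atom2"})
assert set_constructor(a, b) == frozenset({a, b})
# Tuples add order: (a, b) is a different construction from (b, a).
assert tuple_constructor(a, b) != tuple_constructor(b, a)
```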
Presentation Structure:
- Introducing the IMF Team
- Background
- Information Management Framework
- Choice-based framework
- TLO Initial Use
- Situating the TLO in the IMF
- Data Section: Core Constructional Ontology
- Data Section: Top Level Ontology
How to – and How Not to – Build Ontologies: The Hard Road to Semantic Interoperability
The digitalisation journey that takes us to seamlessly semantically interoperating enterprise systems is (at its later stages, where ontology is deployed) a hard road to travel. This presentation aims to highlight some of the main hurdles people will face on the digitalisation journey, using a cultural evolution perspective. From this viewpoint, we highlight the radical new practices that need to be adopted along the journey. The presentation looks at the concerns this evolutionary perspective raises – for example, evolutionary contingency: it seems clear that if we do not adapt in the right way, we will not evolve interoperability. While we have some idea of what the practices are and what the trajectory of the journey is, this is not enough; the community also needs to find the means to (horizontally) inherit these practices. The presentation then takes a quick tour around some of the new practices that need to be adopted.
Avoiding premature standardisation: balancing innovation and conformity
Overview
- Currently work in a niche area:
- ontologies for operational system semantic interoperability – integration
- think integrating enterprise SAP and Maximo operationally
- have been working here for a while (since the late 1980s)
- not many (any?) other people working here
- Believe that:
- there are opportunities for architectural (radical and disruptive) innovation in this and other ontology area
- at this stage, the approach in my area needs to be agile
- that premature standardisation could stifle the innovation
- Want to suggest that:
- there is a need to balance the conformity (of standards) with the agility needed to produce innovation
- the balancing involves recognising when to standardise,
- so, recognising when there is premature standardisation
- it is not yet time to standardise in my area
A brief introduction to BORO
This is a brief introduction to the BORO approach and its two main components: the BORO Foundation and the bCLEARer methodology. The introduction will give an overview of both the history and the nature of the approach. It will finish with a brief look at some current enhancement work on modality and graphs, as well as implementations.
Digitalizing Uncertain Information
The paper sketches some initial results from an ongoing project to develop an ontology-based digital form for representing uncertain information. We frame this work as a journey from lower to higher levels of digital maturity across a technology divide. The paper first sets a baseline by describing the basic challenges any project dealing with digital uncertainty faces. It then describes how the project is facing them. It shows firstly how an extensional ontology (such as the BORO Foundational Ontology or the Information Exchange Standard) can be extended with a Lewisian counterpart approach to formalizing uncertainty that is adapted to computing. And then it shows how this is expressive enough to handle the challenges.
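As an illustration of the general idea, a counterpart-style representation of uncertainty might attach weighted counterparts to an individual, one per possibility. This is my own sketch under that assumption, not the paper's actual design or data model.

```python
# A minimal sketch (my assumption, not the paper's design) of a
# computing-adapted counterpart approach: uncertainty about an individual
# is represented as a set of weighted counterparts, one per possible state.
from dataclasses import dataclass

@dataclass(frozen=True)
class Counterpart:
    state: str      # how the individual is in one possibility
    weight: float   # the credence attached to that possibility

def credence(counterparts, predicate):
    """Total credence that the individual satisfies a predicate:
    the summed weight of the counterparts whose state satisfies it."""
    return sum(c.weight for c in counterparts if predicate(c.state))

# An uncertain pipe-diameter reading spread across three possibilities
# (hypothetical example data).
pipe = [Counterpart("diameter=50mm", 0.6),
        Counterpart("diameter=51mm", 0.3),
        Counterpart("diameter=52mm", 0.1)]

# Credence that the diameter is not 52mm: 0.6 + 0.3 = 0.9.
print(credence(pipe, lambda s: s != "diameter=52mm"))
```

The design choice here is that the individual itself stays a single entity; its uncertainty lives entirely in the counterpart set, which fits naturally into an extensional ontology.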
Extending the design space of ontologization practices: Using bCLEARer as an example
Our aim in this paper is to outline how the design space for the ontologization process is richer than current practice would suggest. We point out that engineering processes, as well as products, need to be designed – and identify some components of the design. We investigate the possibility of designing a range of radically new practices, providing examples of the new practices from our work over the last three decades with an outlier methodology, bCLEARer. We also suggest that setting an evolutionary context for ontologization helps one to better understand the nature of these new practices and provides the conceptual scaffolding that shapes fertile processes. This evolutionary perspective positions digitalization (the evolutionary emergence of computing technologies) as the latest step in a long evolutionary trail of information transitions, reframing ontologization as a strategic tool for leveraging the emerging opportunities offered by digitalization.