A 4-Dimensionalist Top Level Ontology (TLO):
Mereotopology and Space-Time
This presentation describes the 4-dimensionalist top level ontology (TLO), based upon mereotopology and space-time, being developed for the Information Management Framework (IMF). It describes the agile, iterative, modular approach adopted. It situates the 4-dimensional approach in terms of its ontological choices. It outlines the scope of the first iteration, based upon requirements that emerge from industrial standards such as Building Smart, STEP and TC211/INSPIRE. It describes the spatio-temporal candidates for ontological analysis that emerge from these standards. It then provides a historical overview of the use of worldlines to characterise these candidates, and builds upon this for one example: coordinate systems. Finally, it provides an overview of how space-time can be modularised.
Presentation Structure
- Preliminaries - overall approach: How, broadly speaking, do we develop the ontology?
- Situating 4D in ontological space: A requirement for space-time is central
- Broad modularisation context
- First iteration: scope: What should the scope of the first ‘MVP’ be?
- Top-down and bottom-up approach
- Space-time – top-down workstream
- Space-time: Foundation Data Model: from worldlines to spatial objects and locations
- Space-time: top-level-ontology: from core to worldlines
How to – and How Not to – Build Ontologies: The Hard Road to Semantic Interoperability
The digitalisation journey that takes us to seamlessly, semantically interoperating enterprise systems is (at the later stages, where ontology is deployed) a hard road to travel. This presentation aims to highlight some of the main hurdles people will face on the digitalisation journey, using a cultural evolution perspective. From this viewpoint, we highlight the radical new practices that need to be adopted along the journey. The presentation looks at the concerns this evolutionary perspective raises, for example, evolutionary contingency. It seems clear that if we don’t adapt in the right way, we will not evolve interoperability. While we have some idea of what the practices are and what the trajectory of the journey is, this is not enough; the community also needs to find the means to (horizontally) inherit these practices. The presentation then takes a quick tour around some of the new practices that need to be adopted.
A survey of top-level ontologies
This presentation introduces the survey of top-level ontologies. It provides an overview of the context in which it was produced and reviews its contents.
Presentation Structure:
- Context
- Candidate Top-level Ontologies
- Assessment Framework
- Summary
Why (and how) to use a metaphysicalist foundational ontology
BORO is a metaphysically grounded foundational ontology developed specifically for use with computer systems (a foundational ontology is a system of general, domain-independent ontological categories that can form a foundation for domain-specific ontologies; some, but not all, of these are grounded in metaphysics) and an associated methodology for re-engineering legacy systems. It emerged from a number of system replacement projects that started in the late 1980s. It was developed to mine ontology-based conceptual models from legacy systems for use in the development of next-generation systems.
Once the re-engineering methodology was established in the initial projects, questions arose as to where it could usefully be deployed. To answer this, it would help to understand why it was effective; after all, it would be hard to find a more abstract and esoteric subject than metaphysics – and one that does not immediately seem related to computing. Furthermore, metaphysics is a broad subject; it would be good to understand better which areas of metaphysics are important, why they are important and how they are useful. It would also be good to have a better idea of where in computing metaphysics could play a useful role.
The purpose of this position paper is to sketch out how BORO has, over the decades, developed a view that provides answers to these questions (with no claim that this is the only way to answer them). This view is framed by two related themes. The first is that a new kind of information quality – which we label ‘computerate’ – is needed for computer systems and the second that metaphysics provides the right apparatus for grounding foundational ontologies that can be used to produce this ‘computerate’ information.
A survey of top-level ontologies: framework and results
Launched in July 2018, the National Digital Twin programme was set up to deliver key recommendations of the National Infrastructure Commission's 2017 “Data for the Public Good” report:
- to steer the successful development and adoption of the Information Management Framework for the built environment
- to create an ecosystem of connected digital twins – a national digital twin – which opens the opportunity to release value for society, the economy, business and the environment
Core Constructional Ontology (CCO): a Constructional Theory of Parts, Sets, and Relations
This presentation introduces the Core Constructional Ontology (CCO). It first provides the background to the development of this ontology. It then provides a summary of the approach to the development, looking at its key features and giving an overview of the formalisation.
The Basics of 4-Dimensionalism and the Role it Can Take in Supporting Large Scale Data Integration
This is the first in a series of presentations that should be seen as an integrated whole rather than a collection of separate presentations. It is an introduction to the whole and covers the Information Quality Management angle which is the motivation for our interest in 4-Dimensionalism. Later presentations will go down through the 7 circles of information management showing how 4D permeates what we are doing in developing and using 4-Dimensionalism on the National Digital Twin programme.
BORO: Business Objects Reference Ontology
This presentation shows a foundational ontology that aims to underpin a range of enterprise systems in a consistent and coherent manner and takes data-driven re-engineering as its natural starting point for domain ontology building. It has two closely intertwined components: a foundational ontology and a re-engineering methodology.
The origin and predominant area of application has been the enterprise. Suitability has been demonstrated in many industrial projects across a range of business domains including finance, oil and gas, and defense.
Unification of Types and Multi-Level Modeling:
Introduction - IS
This presentation aims to give an overview of how the unification of types could fit into IS.
Why Form, and so Unification of Types, is Important
This presentation looks at why the ‘unification of types’ is pragmatically important (and, more generally, why the ‘innocent’ development environment unification is pragmatically important). It does this by taking an evolutionary perspective that recognises unification as a form adaptation for semantic interoperability.
A Framework for Composition:
A step towards a foundation for assembly: An Introduction
This presentation is an introduction to the paper “A Framework for Composition”, which outlines ‘a step towards a foundation for assembly’. It:
- is a contribution to the Foundation Data Model (FDM), which
- is part of the Information Management Framework (IMF), which
- is part of UK’s National Digital Twin programme (NDTp)
The paper aims to ensure composition (and so the FDM) is built upon a solid foundation. At the core of the notion of a component breakdown is the component as an integral (dependent) part of the composite whole. This has a rich underlying formal structure – which is described in the paper and outlined in this presentation. This structure, in turn, provides a framework for assessing how well a data model (or ontology) has captured the main elements of the structure, enabling both the assessment of existing models and the design of new models.
The paper is technical, with a focus on the rich formal structure of the abstract general component breakdown architecture. This presentation provides a short overview of the concerns the paper addresses; as such, it provides a simpler introduction to the paper.
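To give a flavour of the kind of structure involved, here is a minimal, hypothetical sketch (not taken from the paper, and far simpler than its formalisation) of a component breakdown in Python, where components are integral parts of a composite whole and parthood is transitive:

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """A node in a component breakdown: a whole with its integral parts."""
    name: str
    parts: list["Component"] = field(default_factory=list)

    def add_part(self, part: "Component") -> None:
        self.parts.append(part)

    def transitive_parts(self) -> list["Component"]:
        """All parts at any depth - transitivity of the part-of relation."""
        found = []
        for p in self.parts:
            found.append(p)
            found.extend(p.transitive_parts())
        return found

# A hypothetical breakdown: a pump contains a motor, which contains a rotor.
pump = Component("pump")
motor = Component("motor")
rotor = Component("rotor")
pump.add_part(motor)
motor.add_part(rotor)
```

Even this toy version makes the assessment idea concrete: a model that cannot distinguish direct parts (`pump.parts`) from transitive parts (`pump.transitive_parts()`) has missed an element of the structure.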
Presentation Structure:
- What is composition?
- How is composition modelled?
- What kind of formal structure is emerging?
- The proposed formal structure
How the IMF Team is building a four-dimensional top-level ontology
This describes the IMF's approach to building a four-dimensional top-level ontology (TLO). It starts with the background, describing the Information Management Framework (IMF) and its approach to top level ontologies, with a focus on fundamental ontological choices that typically boil down to a choice of whether to stratify or unify. It outlines the TLO use case - 'Euclidean' Standards - and the ontological scope it creates. It then situates the TLO in the Foundation Data Layer of the IMF, built upon the ground layer, the Core Constructional Ontology (CCO). It then describes the CCO and the TLO in terms of their components.
Presentation Structure
- Introducing the IMF Team
- Background
- Information Management Framework
- Choice-based framework
- TLO Initial Use
- Situating the TLO in the IMF
- Data Section: Core Constructional Ontology
- Data Section: Top Level Ontology
The Fantastic Combinations and Permutations of Coordinate Systems' Characterising Options
The Game of Constructional Ontology
The multi-level modelling community’s raison d'être is its vision of the ubiquity and importance of multi-level-types: the ascending levelled hierarchy of types in conceptual models; starting with types of things, then types of these types, then types of these types of types, and so on. The community both promotes this vision and investigates this hierarchy, looking at how it can be accommodated into existing frameworks. In this paper, we consider a specific domain, coordinate systems’ characterising options. While we recognise that, unsurprisingly, this domain contains a ubiquity of multi-level-types, our interest is in investigating a new and different approach to understanding them. For this we needed to develop a new framework. We devise one focussing on this case, based upon scaling down to simple compositional algorithms (called constructors) to form a new, radically simpler foundation. From the simple operations of these constructors emerge the scaled-up multi-level structures of the domain. We show how the simple operations of simple constructors give rise to compositional connections that shape – and so explain – different complex hierarchies and levels, including the familiar multi-level-types and relatively unknown multi-level-tuples. The framework crystallises these connections as metaphysical grounding relations. We look at how simple differences in the shape and operation of constructors give rise to different varieties of these hierarchies and levels – and the impact this has. We also look at how the constructional approach reveals the differences between foundational constructors and derived constructors built from the foundational constructors – and show that conceptual modelling’s generalisation relations are secondary and dependent upon the more foundational instantiation relations. Based upon this, we assemble a constructional foundational ontology using the BORO Foundational Ontology as our starting point.
We then use this to reveal and explain the formal levels and hierarchies that underlie the options for characterising coordinate systems.
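As an illustration only (the paper's formalisation is much richer, and the names here are invented for this sketch), the constructional idea can be caricatured in Python: a single set constructor, iterated, yields ascending levels of types, with instantiation as the foundational relation (membership) and generalisation derived from it:

```python
def construct_set(*elements):
    """Foundational set constructor: builds a set from the given inputs."""
    return frozenset(elements)

# Level 0: individuals (plain strings standing in for things).
a, b = "a", "b"
# Level 1: a type of things, constructed from individuals.
things = construct_set(a, b)
# Level 2: a type of types, constructed from level-1 types.
types_of_things = construct_set(things)

def instantiates(x, t):
    """Instantiation: the foundational relation, here set membership."""
    return x in t

def generalises(supertype, subtype):
    """Generalisation: derived from instantiation - every instance of the
    subtype is an instance of the supertype (subset, for sets)."""
    return subtype <= supertype
```

The point of the sketch is the dependency direction: `generalises` is defined in terms of what `construct_set` and membership already provide, mirroring the claim that generalisation relations are secondary to instantiation relations.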
A brief introduction to BORO
This is a brief introduction to the BORO approach and its two main components: the BORO Foundation and the bCLEARer methodology. The introduction will give an overview of both the history and the nature of the approach. It will finish with a brief look at some current enhancement work on modality and graphs, as well as implementations.
Modernising Engineering Datasheets: a bCLEARer project
A case study in migrating from legacy engineering standards
This presentation aims to provide a sense of what happens when a bCLEARer project is faced with data in the FORM paradigm (a kind of semi-structured data). It aims to show how the implicit FORM syntax can be made explicit. The presentation walks through a case history based upon a real project, but both simplified and sanitised. This is a sample of actual work, but not intended as an example of best practice – or even good practice. It does, however, show some typical approaches and the challenges faced, particularly at the early alpha stage.