Processing Services (WP4)

WP4 aims to develop the functional and domain processing services of IQmulus in order to: maximize the use of data, provide task-specific packaging and delivery of data sets, support data quality evaluation, and provide support for analysing quickly changing environmental conditions. The objectives of WP4 are:

  • to define the methodological guidelines to drive integration and geometry generation tasks, quality assessment and management, and documentation of the processing history;
  • to develop methods and algorithms to support data classification, feature extraction, detection and characterization of dynamic events (morphological and resolution changes), and semantic enrichment;
  • to implement the toolkit for the basic and domain processing services of the IQmulus platform, according to the specifications given by WP1 and WP2.

Task 4.1 Requirements for processing (CNR-IMATI, SINTEF, TUDelft, UCL, IGN)

Based on the end-user requirements collected in WP1, the task will work out the guidelines for developing the processing services by: (i) setting up an inventory of the data types, processing services and concepts that WP4 is expected to handle; (ii) surveying algorithms for data processing, integration and analysis with respect to computational complexity and robustness; (iii) defining guidelines for handling data and metadata about processing history, data quality, quality propagation and result reliability at all stages of the processing pipelines; (iv) proposing a validation framework for the algorithms developed. The task will produce a prioritized list of services (functional and domain services) that will have to be adapted, developed and integrated, and that will serve to harmonize the activities carried out in the various tasks of this WP.
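Item (iii) implies that each processing service attaches provenance and quality metadata to its outputs. The sketch below is purely illustrative: the field names and Python representation are assumptions, not the metadata schema that will be specified by WP1 and WP2.

```python
# Hypothetical provenance/quality record attached to each processing step;
# the fields shown are illustrative, not the actual WP4 metadata schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProcessingStepRecord:
    service: str        # e.g. "point-cloud-classification"
    parameters: dict    # parameters used, for reproducibility of the step
    inputs: list        # identifiers of the input data sets
    quality: dict       # e.g. {"rmse_m": 0.12, "coverage": 0.98}
    reliability: float  # overall confidence in the result, in [0, 1]
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A processing history is then simply an ordered list of such records.
history = [
    ProcessingStepRecord(
        service="multi-source-fusion",
        parameters={"cell_size_m": 10.0, "weighting": "sensor-accuracy"},
        inputs=["lidar_tile_042", "bathymetry_grid_17"],
        quality={"rmse_m": 0.15},
        reliability=0.9,
    )
]
```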


Task 4.2 Spatio-temporal data fusion (TUDelft, UCL, IGN, CNR-IMATI)

The task will focus on processing methods that create a single data set containing information of the same type (univariate data set) out of different data sources (multi-source fusion). The processing will deal with differences in spatial resolution, spectral dimensions and temporal sampling. Quality assessment of the resulting data (localized, at best at point level) will be provided both qualitatively and quantitatively. Various knowledge sources will be considered to perform the fusion: sensor characteristics or known characteristics of the entities measured, and knowledge about features, which might be available for some of the data sets to be integrated. This task will also propose new ways to incorporate methods of interpolation and extrapolation into the workflow, in order to match different attributes sampled at different spatial and/or temporal resolutions.
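As a minimal sketch of how such interpolation-based fusion could be wired into a service, the snippet below resamples a coarse raster onto the grid of a finer one and blends the two. The weighting scheme, grid sizes and random test data are illustrative assumptions only, not the fusion method to be developed.

```python
# Minimal sketch of multi-source fusion by resampling onto a common grid.
# Assumptions (not from the proposal): two single-band rasters of the same
# quantity, one coarse and one fine, already co-registered in the same CRS.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def resample_to_grid(values, src_x, src_y, dst_x, dst_y):
    """Bilinearly resample a coarse grid onto the target (finer) grid."""
    interp = RegularGridInterpolator((src_y, src_x), values,
                                     bounds_error=False, fill_value=np.nan)
    yy, xx = np.meshgrid(dst_y, dst_x, indexing="ij")
    return interp(np.stack([yy, xx], axis=-1))

def fuse(fine, coarse_resampled, w_fine=0.7):
    """Weighted blend; real weights would come from sensor quality metadata."""
    fused = w_fine * fine + (1.0 - w_fine) * coarse_resampled
    # Fall back to the fine source where the coarse one has no coverage.
    return np.where(np.isnan(coarse_resampled), fine, fused)

# Toy example: a 10 m grid fused with a 50 m grid over the same 1 km tile.
fine_x = fine_y = np.arange(0, 1000, 10.0)
coarse_x = coarse_y = np.arange(0, 1000, 50.0)
fine = np.random.rand(len(fine_y), len(fine_x))
coarse = np.random.rand(len(coarse_y), len(coarse_x))
coarse_on_fine = resample_to_grid(coarse, coarse_x, coarse_y, fine_x, fine_y)
fused = fuse(fine, coarse_on_fine)
```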


Task 4.3 Feature extraction, classification and correlation (UCL, TUDelft, IGN, CNR-IMATI, Ifremer)

The processing here is targeted at a semantic enrichment of the data sets. By semantic enrichment we mean the automatic extraction and annotation of high-level information through segmentation or classification processes. In day-to-day practice, classification proves to be a bottleneck in processing, as it involves massive amounts of manual work in checking and re-classification. To improve feature identification and classification, correlations between different data sets will be explored systematically. Geometrical, statistical and combined reasoning will be explored to cope with various kinds of data and features. In particular, random forest classifiers will be studied for highly automated classification. Random forests are a flexible and adaptive classification scheme that has proven to perform well on huge data sets and are thus ideally suited to the mass data processing targeted in this work. Once regions are classified into low-level categories, geometric reasoning can be applied for individual feature recognition, e.g. building blocks for points classified as ‘building’.
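A minimal sketch of the random-forest step is given below, assuming per-point features (e.g. height above ground, local planarity, intensity) and training labels are already available. The feature set, class codes and scikit-learn usage are illustrative assumptions, not the toolkit implementation.

```python
# Minimal sketch of random-forest point classification; the random arrays are
# stand-ins for precomputed per-point features and manually labelled samples.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for a labelled training subset of the point cloud:
# columns = [height_above_ground, planarity, intensity]
X = rng.random((10_000, 3))
y = rng.integers(0, 3, size=10_000)   # 0 = ground, 1 = vegetation, 2 = building

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# Per-class probabilities can serve as a per-point reliability estimate,
# feeding the quality reporting foreseen in Task 4.1.
proba = clf.predict_proba(X_test)
labels = clf.predict(X_test)
print("held-out accuracy:", clf.score(X_test, y_test))
```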


Task 4.4 Multivariate Surface Generation (SINTEF, IGN, UCL, CNR-IMATI)

The task will focus on the integration of multivariate data sets, with the objective of providing the correct geospatial and temporal alignment of the different data sets, and the definition of a new data set where various entities are represented (multivariate case). As an example, we describe the service that could be activated to generate surface models for water flooding simulations in urban areas. The surface model should be complete, detailed and accurate: we propose to integrate point clouds and dual surfaces available in IQmulus, from all sources, to fulfil the needs of these simulations. Instead of filtering the point clouds (e.g. removing all raised objects), the integrated data set will retain all of them: these objects are important because they are obstacles to water expansion. We will thus develop methods that integrate data fusion and feature extraction (see Task 4.2 and Task 4.3) to properly represent all the surfaces that could be relevant for the simulation process (e.g. obstacles that are potentially anchored in the ground, and those that are not) and to identify non-anchored objects (cars, buses, etc.) that could be carried away by the flood and act as corks, locally enhancing the effects and the danger of the flood. The methods and algorithms will be designed to remain robust when scaled up. Surface generation is also a fundamental step in supporting further stages such as high-quality and reliable spatial data visualization, in providing computational efficiency and feasibility, and in guaranteeing a quality description of all results.
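As an illustration of this "keep the obstacles" principle, the sketch below grids a point cloud into a digital surface model by retaining the highest point per cell instead of filtering to bare earth. Cell size, data layout and the max-per-cell rule are assumptions for illustration, not the surface generation method of the task.

```python
# Minimal sketch: rasterize a fused point cloud into a surface model that
# keeps raised objects (buildings, parked cars, street furniture) rather than
# removing them, since they act as obstacles in flood simulations.
import numpy as np

def points_to_dsm(points, cell=1.0):
    """Grid (x, y, z) points into a digital surface model (highest z per cell)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    cols = np.floor((x - x.min()) / cell).astype(int)
    rows = np.floor((y - y.min()) / cell).astype(int)
    nrows, ncols = rows.max() + 1, cols.max() + 1
    dsm = np.full(nrows * ncols, -np.inf)
    np.maximum.at(dsm, rows * ncols + cols, z)   # keep the highest point per cell
    dsm = dsm.reshape(nrows, ncols)
    dsm[np.isinf(dsm)] = np.nan                  # cells with no points
    return dsm

# Toy example: 100 000 random points over a 200 m x 200 m tile, 2 m cells.
pts = np.column_stack([np.random.rand(100_000) * 200,
                       np.random.rand(100_000) * 200,
                       np.random.rand(100_000) * 30])
dsm = points_to_dsm(pts, cell=2.0)
```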


Task 4.5 Change detection and dynamics (UBO, CNR-IMATI, Ifremer)

Detecting changes in geospatial data sets is particularly useful in land management and decision-making processes. The activities under this task will mainly concentrate on changes that are relevant for the demonstration scenarios and that can be generalized to morphological changes, taking place either on the seabed (e.g., the evolution of sand dunes and sandbanks) or on land (e.g., landslide evolution). We will concentrate mainly on two approaches: a raster approach, where cross-correlation can provide information on surface displacement and height differencing can give the vertical motion; and a vectorial approach, where surface models are extracted first and relevant morphological features are indexed so as to evaluate similarities/dissimilarities between their 3D shape and location. The latter approach can be particularly useful when deformable shapes are to be monitored and characterized.
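A minimal sketch of the raster approach is given below: it estimates the horizontal displacement between two gridded surveys by phase correlation (an FFT-based form of cross-correlation, used here as an illustrative stand-in) and the vertical motion by simple height differencing. The synthetic shift and patch size are assumptions; real data would first require co-registration and masking.

```python
# Minimal sketch of raster change detection: phase correlation for horizontal
# surface displacement, height differencing for vertical motion.
import numpy as np

def phase_correlation_shift(later, earlier):
    """Estimate the (row, col) displacement of 'later' relative to 'earlier'."""
    f = np.fft.fft2(later) * np.conj(np.fft.fft2(earlier))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    dims = np.array(later.shape)
    peak[peak > dims / 2] -= dims[peak > dims / 2]   # wrap to signed shifts
    return peak

# Synthetic test: shift a dune-like surface by (3, -5) cells and lower it by 0.4 m.
y, x = np.mgrid[0:128, 0:128]
epoch1 = np.sin(x / 9.0) + np.cos(y / 13.0)
epoch2 = np.roll(np.roll(epoch1, 3, axis=0), -5, axis=1) - 0.4

print("estimated shift (rows, cols):", phase_correlation_shift(epoch2, epoch1))
print("mean vertical change [m]:", round((epoch2 - epoch1).mean(), 3))
```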