Computation Coordination and Distribution Framework

The deployment and execution of complex data analytics workflows across the compute continuum are handled by COMP Superscalar (COMPSs). In addition to its task-based programming model, COMPSs provides a runtime system that exploits the inherent parallelism of applications at execution time.
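To illustrate the programming model, the sketch below shows how an application is annotated so that the runtime can extract its parallelism. It is a minimal, hypothetical example launched with runcompss: the preprocess and aggregate functions are invented for illustration, while the @task decorator and compss_wait_on belong to the PyCOMPSs API.

```python
# Minimal PyCOMPSs sketch (hypothetical workflow): functions decorated with
# @task become asynchronous tasks; the runtime builds the task graph and
# runs independent tasks in parallel.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on


@task(returns=1)
def preprocess(raw_block):
    # Hypothetical analytics step executed as a COMPSs task.
    return [x * 2 for x in raw_block]


@task(returns=1)
def aggregate(a, b):
    # Reduction step; COMPSs infers the dependency on both inputs.
    return a + b


def main():
    blocks = [list(range(i, i + 10)) for i in range(0, 40, 10)]
    partial = [preprocess(b) for b in blocks]   # tasks spawned asynchronously
    result = partial[0]
    for p in partial[1:]:
        result = aggregate(result, p)           # chained reduction tasks
    result = compss_wait_on(result)             # synchronise with the runtime
    print(sum(result))


if __name__ == "__main__":
    main()
```

The main program looks sequential; the parallelism of the four preprocess tasks is discovered and exploited by the runtime, not expressed by the developer.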

COMPSs distributes the application tasks over the compute continuum, from edge to cloud, while honoring the data dependencies between them and performing any required data transfers transparently to the developer. Furthermore, the COMPSs scheduler aims to minimise the end-to-end response time of the workflow (thus increasing velocity) and provides an upper bound on the end-to-end execution time, thereby offering a guarantee of the expected real-time performance.
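The data dependencies that the runtime honors are derived from the direction of task parameters, as in the hedged sketch below. The update and report functions are hypothetical; the INOUT direction, @task decorator, and compss_wait_on are part of the PyCOMPSs API, and the program would be launched with runcompss.

```python
# Hedged sketch of dependency declaration in PyCOMPSs: parameter directions
# (IN by default, INOUT/OUT when declared) let the runtime build the
# dependency graph and ship data to whichever node runs each task.
from pycompss.api.task import task
from pycompss.api.parameter import INOUT
from pycompss.api.api import compss_wait_on


@task(stats=INOUT)
def update(stats, sample):
    # 'stats' is written back after the task, so later readers of it
    # wait for this task and receive the updated object transparently.
    stats.append(sample)


@task(returns=1)
def report(stats):
    return sum(stats) / len(stats)


stats = []
for sample in (3.0, 5.0, 7.0):
    update(stats, sample)              # sequential chain on 'stats' (INOUT)
avg = compss_wait_on(report(stats))    # report waits for all updates
print(avg)
```

No explicit transfer or synchronisation code is written for 'stats'; the runtime moves it between edge and cloud nodes as the schedule requires.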

Data management is handled by dataClay, a distributed data store that enables code to be executed next to the data. The specific role of dataClay in the CLASS architecture is: i) to ensure the availability of data across the compute continuum, wherever and whenever it is required by the data analytics, and ii) to create, maintain, and periodically clean the Data Knowledge Base (DKB), which contains historical data generated by the analytics.
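The sketch below outlines how a DKB entry might be modelled as a dataClay persistent object. It is an assumption-laden example: the DKBEntry class, its fields, the prune_older_than method, and the alias are invented for illustration, while DataClayObject, activemethod, make_persistent, and get_by_alias follow recent dataClay Python releases and may differ in other versions.

```python
# Hedged sketch of a hypothetical DKB record as a dataClay persistent class.
# Active methods execute in the dataClay backend, next to the stored data.
# (dataClay client/session initialisation is omitted for brevity.)
from dataclay import DataClayObject, activemethod


class DKBEntry(DataClayObject):
    """Historical record produced by an analytics task (illustrative)."""

    sensor_id: str
    values: list  # list of (timestamp, value) pairs

    def __init__(self, sensor_id: str, values: list):
        self.sensor_id = sensor_id
        self.values = values

    @activemethod
    def prune_older_than(self, cutoff: float) -> None:
        # Runs next to the data, so old samples are dropped without
        # shipping the whole history back to the client.
        self.values = [v for v in self.values if v[0] >= cutoff]


# Typical client-side usage: persist once, then retrieve by alias from
# anywhere in the continuum and invoke methods on the stored object.
entry = DKBEntry("intersection-1", [(0.0, 1.2), (10.0, 3.4)])
entry.make_persistent(alias="dkb/intersection-1")

same_entry = DKBEntry.get_by_alias("dkb/intersection-1")
same_entry.prune_older_than(5.0)
```

In this pattern, periodic cleaning of the DKB reduces to invoking methods such as prune_older_than on the stored objects, keeping the computation where the historical data resides.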
