The Next Generation of Real-Time System Architectures (part 5/5)

Other considerations

Asynchronous or synchronous? An application interfaces in some way with its connector. The question is: should it pick the latest available value, or should it wait for a value to arrive? The connector should provide both interfaces, since how the data is processed depends highly on the application. The application can ask the connector for the latest value received during its own processing loop, or the connector can interrupt the application and deliver each sample at the moment it arrives.
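Both styles could be offered on a single connector. A minimal sketch (class and method names are illustrative, not from any real middleware API): `latest()` polls without blocking, `wait_for_next()` blocks, and `on_sample()` registers a callback through which the connector interrupts the application.

```python
import threading

class Connector:
    """Sketch of a connector offering both access styles."""

    def __init__(self):
        self._cond = threading.Condition()
        self._latest = None
        self._callbacks = []

    def latest(self):
        # Synchronous, non-blocking: application polls in its own loop.
        with self._cond:
            return self._latest

    def wait_for_next(self, timeout=None):
        # Synchronous, blocking: application waits for the next sample.
        with self._cond:
            arrived = self._cond.wait(timeout)
            return self._latest if arrived else None

    def on_sample(self, callback):
        # Asynchronous: connector "interrupts" the application via callback.
        self._callbacks.append(callback)

    def deliver(self, sample):
        # Called by the network side when a sample arrives.
        with self._cond:
            self._latest = sample
            self._cond.notify_all()
        for cb in self._callbacks:
            cb(sample)
```

An application that only cares about the freshest value simply calls `latest()`; one that must react to every sample registers a callback instead.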

Since we are considering large-scale systems, we should make sure that upgrading and expanding these systems is well covered. It is not always possible to simply replace a system; it should be capable of evolving. This is particularly a problem when data definitions change: newer modules may produce or require more data than the previous version, so incompatibility issues must be resolvable. By introducing a versioning scheme in the metadata, connectors can detect whether the data in the contract matches the version that the corresponding application can handle. It even becomes possible to design specific modules that convert between versions of the data, which allows a system to be upgraded without changing the existing applications. In fact, such a scheme makes it possible to upgrade a system without powering it down.
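One hypothetical way to realize such conversion modules is a table of converters between adjacent versions, chained as needed. Everything here (the version numbers, the field added in "v2", the `adapt` helper) is an illustrative assumption, not part of any specified scheme:

```python
# Converters between adjacent data versions (illustrative example:
# version 2 added a "unit" field to a temperature sample).
CONVERTERS = {
    (1, 2): lambda d: {**d, "unit": "celsius"},
    (2, 1): lambda d: {k: v for k, v in d.items() if k != "unit"},
}

def adapt(sample, produced_version, required_version):
    """Convert a sample between data versions, one step at a time."""
    v = produced_version
    step = 1 if required_version > v else -1
    while v != required_version:
        convert = CONVERTERS.get((v, v + step))
        if convert is None:
            raise ValueError(f"no converter from v{v} to v{v + step}")
        sample = convert(sample)
        v += step
    return sample
```

A connector that detects a version mismatch in the contract could route samples through such a module, so producer and consumer never need to agree on the same version at the same time.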

Consider a situation where a producer creates large quantities of data of varying quality. Until now we have only considered the case where data validity was measured in terms of age. Some applications, however, may prefer an older measurement of good quality over a newer measurement of lower quality. A generic solution is to add a quality indication to the data, allowing the connector to find, within a time window, the data with the highest quality.
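The selection the connector would perform can be sketched in a few lines; the `(timestamp, quality, value)` tuple layout is an assumption made for illustration:

```python
def best_sample(history, window_start, window_end):
    """Pick the highest-quality sample within a time window;
    on equal quality, prefer the newer sample."""
    candidates = [s for s in history
                  if window_start <= s[0] <= window_end]
    if not candidates:
        return None
    # Sort key: quality first, timestamp as tie-breaker.
    return max(candidates, key=lambda s: (s[1], s[0]))
```

With this policy an application may well receive an older sample than the latest one published, which is exactly the intended trade-off.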

We have discussed the need for auto-discovery of connectors in the network and the inherent need to exchange “catalogs” of available data so that connectors can establish “contracts” automatically. The network will provide built-in data items that indicate:

  • Actual subscriptions
  • Actual publications
  • Historical data

These built-in elements can be used by the connector and the application to monitor events: publications, (un)subscriptions, requests for historical data, acceptance or refusal of a subscription (e.g. when metadata parameters indicate the producer cannot meet the subscriber's criteria), and so on. They also give insight into statistics on whether, and in what quantities, data items are produced, consumed and available. Finally, they can be used for system performance monitoring by measuring the number of active data items, the failed delivery rate, or the latency.
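A minimal sketch of how such built-in elements could back these statistics, assuming the connector records every event on a hypothetical event log (the event kinds and method names are invented for illustration):

```python
from collections import Counter

class BuiltinMonitor:
    """Sketch: the connector records events on built-in items so that
    applications can inspect subscriptions, publications and
    delivery statistics."""

    def __init__(self):
        self.events = []          # chronological event log
        self.stats = Counter()    # per-event-kind counters

    def record(self, topic, kind, detail=None):
        # kind: e.g. "publish", "subscribe", "unsubscribe", "refused",
        #       "delivered", "delivery_failed", "history_request"
        self.events.append((topic, kind, detail))
        self.stats[kind] += 1

    def failed_delivery_rate(self):
        delivered = self.stats["delivered"]
        failed = self.stats["delivery_failed"]
        total = delivered + failed
        return failed / total if total else 0.0
```

The same log doubles as an audit trail: an application can replay it to see exactly when a subscription was refused or a delivery failed.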

We discussed the need for unique labeling of data elements and gave a rule of thumb for identifying coherent data elements, but what if these elements have relations? Take, for example, the hostname of a node: this information would surely be useful in a number of relations. One could choose to break the data down into low-level elements and have the application reassemble them. Another approach is to introduce “compound” data elements that refer to “basic” data elements. Assume the following situation:

  • Item A: consists of elements 1, 2, 3
  • Item B: consists of elements 2, 3, 4
  • Item C: consists of elements 4, 5, 6

When a producer publishes item A, elements 1, 2 and 3 are checked in the connector. If, for example, only element 1 has changed, nothing further happens; but if element 2 has changed, the connector will automatically distribute item B as well, since it refers to the same element. This is done in a consolidated way: a publication of item A with both elements 2 and 3 changed leads to a single distribution of item B with both elements changed.
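The item table above and the consolidation rule can be sketched directly (the `ITEMS` mapping mirrors the example; the function name is illustrative):

```python
# Compound items and the basic elements they refer to,
# as in the example above.
ITEMS = {
    "A": {1, 2, 3},
    "B": {2, 3, 4},
    "C": {4, 5, 6},
}

def related_distributions(published, changed_elements):
    """Items other than the published one that share a changed element
    and must therefore be distributed too. Returning a set makes the
    distribution consolidated: each item appears at most once."""
    changed = set(changed_elements) & ITEMS[published]
    return {name for name, elems in ITEMS.items()
            if name != published and elems & changed}
```

So a publication of item A with only element 1 changed triggers nothing extra, while one with elements 2 and 3 changed triggers exactly one distribution of item B.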


It is possible for the next generation of real-time system architectures to greatly increase the separation between functional applications by introducing data-centric middleware. At the same time, generic solutions can be provided for difficult aspects such as redundancy, persistency, scaling and upgrading.
