The architecture of the system can now be broken down into several components. Let’s assume that a system consists of applications, connectors, and a network.
Each application implements a part of the total system functionality. Some may implement multiple functions, others only one. Applications interface with each other through a connector.
This connector enforces a data-centric interface and handles the storage and interfacing logic. Although it handles and stores data, the connector has no knowledge of the data beyond the meta-parameters specified by the application. These parameters tell the connector how and when information must be exchanged with other parties.
The network handles the data transport between the connectors. Recent developments have made it possible for connectors to detect each other on a network automatically. The connectors exchange meta-data about the information that applications can exchange, resulting in an abstract, powerful, and flexible system approach. The network can reside on a single physical computer node but can also extend over many nodes and large distances.
All data that is handled through the connector must be uniquely labeled. This provides applications with a suitable handle to request and process the data. The definition of this label and the other meta-parameters is the foundation of the architecture and is defined by the connectors. The actual data transmitted is flexible: once a match is made at label level, the applications can exchange information in a data format that is known at application level.
This approach has an important implication. The communication between the applications depends on the negotiation between the different connectors and the network. The applications have no knowledge of the communication; they only specify the data they need or the data they produce. As a result, all configuration of the communication can be established at run time rather than at design time. Applications no longer depend on other applications; they depend on the data, produced or consumed somewhere in the network.
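The run-time wiring described above can be illustrated with a small matchmaking sketch: applications declare only the labels they produce or consume, and pairs are discovered by label alone. The application and label names are invented for the example:

```python
def match(producers: dict[str, set[str]],
          consumers: dict[str, set[str]]) -> list[tuple[str, str, str]]:
    """Return (consumer_app, producer_app, label) triples discovered at run time.

    Neither side names the other; the pairing emerges purely from the labels.
    """
    matches = []
    for c_app, wanted in consumers.items():
        for label in wanted:
            for p_app, offered in producers.items():
                if label in offered:
                    matches.append((c_app, p_app, label))
    return matches

# Hypothetical applications: one producer, two independent consumers.
producers = {"gps_driver": {"vehicle/position"}}
consumers = {"map_display": {"vehicle/position"},
             "logger": {"vehicle/position"}}
```

A new consumer of "vehicle/position" can be added later without touching the producer, which is the run-time (rather than design-time) coupling the text describes.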
Since unique labeling is of such great importance, the design of the data structures should be given special attention. It makes no sense to create labels for each individual data element, nor for huge structures that encompass all data. In practice a useful rule of thumb seems to be: if one piece of data is not relevant without the other, they should not be separated (e.g. coordinates like latitude and longitude make little sense without each other).
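The rule of thumb could look like this in practice, with latitude and longitude kept together under one label while unrelated state gets its own label (the type and label names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Position:
    """One labeled sample, e.g. under the label "vehicle/position":
    latitude is meaningless without longitude, so they travel together."""
    latitude_deg: float
    longitude_deg: float

@dataclass(frozen=True)
class FuelLevel:
    """Independent of position, so it gets its own label, e.g. "vehicle/fuel"."""
    litres: float
```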
Real-time systems often produce large quantities of data that represent the same object. For that type of data it is known that its validity quickly expires. The network can exploit this property to distribute the data efficiently: there is no need to synchronize that data over the entire system, since its validity is limited. Even when the occasional data sample is lost, the system will be able to use subsequent samples to continue processing.
One could compare this to a metro system: all applications (commuters) can use the same vehicle (data) to perform their function, and if they miss a vehicle, the next one will bring them back on track. By comparison, an email system would be badly served if data packets lost their validity during transport. Note that it is the nature of the data that defines the transport properties, not the architecture of the available solution.
Based on the requirements that applications place on the data, two or more connectors enter a negotiation phase. In this phase the requirements of the “subscriber” are matched with the specifications of the “producer”. This initial negotiation may take some time, but in return it removes the burden of specifying where a particular application runs. Once a “contract” has been established, the producer will send the data to its subscribers; this contract can be monitored by the network, which is responsible for the requested quality of service.
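The negotiation step can be sketched as a compatibility check: a contract is only established when every requested property is satisfied by what the producer offers. The property names (rate, reliability) are hypothetical examples of quality-of-service parameters:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    """What the producer specifies."""
    rate_hz: float   # publication rate
    reliable: bool   # whether delivery is acknowledged

@dataclass
class Request:
    """What the subscriber requires."""
    min_rate_hz: float
    needs_reliable: bool

def negotiate(offer: Offer, request: Request) -> bool:
    """A contract exists only when every requested property is met."""
    if offer.rate_hz < request.min_rate_hz:
        return False  # producer publishes too slowly for this subscriber
    if request.needs_reliable and not offer.reliable:
        return False  # subscriber demands reliability the producer lacks
    return True
```

Once `negotiate` succeeds, a monitoring component could periodically re-check the same predicate against observed behaviour, which is one way to read the text’s remark that the network watches over the contract.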