So what kind of metadata is required to correctly establish a “contract” between these connectors? Again, let’s consider the usage in a real-time environment, where the timely delivery of data is critical, since the validity of the data depends on it. Consider the following meta properties:
- deadline: at some point in time the data is no longer useful to the receiver
- lifespan: at some point in time the data is no longer valid for the publisher
- reliability: should the network resend lost data, or simply wait for the next value?
- queue size: processing multiple messages at once can be more efficient
The queue size is a questionable property: in a real-time environment it is desirable, but it may leak platform knowledge into the application, since the application then needs to know what “efficient” sizes are.
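The meta properties above can be sketched as a small QoS contract that a connector might check when matching a publisher to a subscriber. This is an illustrative model only; the names (`QoS`, `is_compatible`) and the compatibility rule are assumptions, not a real API.

```python
from dataclasses import dataclass

@dataclass
class QoS:
    deadline_ms: float   # data must arrive within this period to be useful
    lifespan_ms: float   # data older than this is no longer valid
    reliable: bool       # resend lost data (True) or wait for the next value (False)
    queue_size: int      # number of messages buffered for batch processing

def is_compatible(offered: QoS, requested: QoS) -> bool:
    """An offered QoS satisfies a request when it is at least as strict:
    it delivers at least as often, keeps data valid at least as long,
    and is reliable whenever reliability was requested."""
    return (offered.deadline_ms <= requested.deadline_ms
            and offered.lifespan_ms >= requested.lifespan_ms
            and (offered.reliable or not requested.reliable))

pub = QoS(deadline_ms=100, lifespan_ms=500, reliable=True, queue_size=8)
sub = QoS(deadline_ms=200, lifespan_ms=250, reliable=False, queue_size=1)
print(is_compatible(pub, sub))  # True: the offered QoS is at least as strict
```

A connector could run such a check during discovery and refuse to establish the contract when the offered QoS is weaker than the requested one.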
The network itself also has properties related to its behavior and performance. Bandwidth is an important factor. At some stage the system wants to make sure that producers and consumers are still connected; a common way to do that is to observe the “liveliness” of the data. The network adds liveliness messages for data with a low update rate, so that all connectors know the state of the producers and consumers. The reliability setting of the data also impacts bandwidth, since it results in more control information that needs to be exchanged.
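The liveliness mechanism can be sketched as follows: when a producer has not published within its lease period, the connector emits a small liveliness (heartbeat) message instead, so consumers can distinguish “no new data” from “producer has died”. All names here are illustrative assumptions.

```python
import time
from typing import Optional

class Producer:
    def __init__(self, lease_ms: float):
        self.lease_ms = lease_ms
        self.last_activity = time.monotonic()

    def publish(self, sample) -> str:
        # Any real data message also counts as proof of liveliness.
        self.last_activity = time.monotonic()
        return f"DATA {sample}"

    def maybe_assert_liveliness(self) -> Optional[str]:
        # Called periodically by the connector; only low-update-rate
        # producers generate extra liveliness traffic on the network.
        elapsed_ms = (time.monotonic() - self.last_activity) * 1000
        if elapsed_ms >= self.lease_ms:
            self.last_activity = time.monotonic()
            return "LIVELINESS"
        return None
```

Note the bandwidth trade-off the text mentions: a shorter lease period detects failures faster but adds more control traffic.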
Once we have described the data and the network, we can create new functionality using only these two actors. For example, we can specify for each data element whether one or more publishers produce it. The connectors can then negotiate an ownership strength, which indicates which producer is the authoritative source. On failure or death of the current owner, subscribers continue to receive data from the next strongest owner, providing a generic redundancy concept. Note that the liveliness described above influences how quickly the death of an entity is detected.
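A minimal sketch of this ownership arbitration, under the assumption that liveliness loss is the failure signal: the subscriber accepts samples only from the alive publisher with the highest strength, and fails over automatically when that owner dies. Class and method names are hypothetical.

```python
class OwnershipSubscriber:
    def __init__(self):
        self.strengths = {}   # publisher id -> negotiated ownership strength
        self.alive = set()

    def register(self, pub_id, strength):
        self.strengths[pub_id] = strength
        self.alive.add(pub_id)

    def on_liveliness_lost(self, pub_id):
        # Liveliness (see above) is how the death of the owner is detected.
        self.alive.discard(pub_id)

    def current_owner(self):
        return max(self.alive, key=lambda p: self.strengths[p], default=None)

    def accept(self, pub_id, sample):
        # Only samples from the authoritative source reach the application.
        return sample if pub_id == self.current_owner() else None

sub = OwnershipSubscriber()
sub.register("primary", strength=10)
sub.register("backup", strength=5)
print(sub.accept("backup", 42))    # None: "primary" owns the data
sub.on_liveliness_lost("primary")
print(sub.accept("backup", 42))    # 42: "backup" takes over transparently
```

The application never sees the failover; redundancy is handled entirely inside the connector.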
Another useful feature is persistence of data. Data made available in the connector can also be made persistent. The application specifies whether data is transient or persistent, resulting in data that outlives its transmission. The history can be specified in terms of time or resource allocation. Since all connectors share the same network, data could even be stored centrally or in a distributed fashion, which means that when a connector joins the network for the first time, it can receive historical data from other entities in the network.
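The history idea can be sketched with a bounded store that replays its contents to a late-joining connector. This sketch limits history by resource allocation (sample count); a time-based limit works the same way. Names are illustrative, not a real API.

```python
from collections import deque

class TransientStore:
    def __init__(self, depth: int):
        # Resource-based history limit: keep at most `depth` samples.
        self.history = deque(maxlen=depth)

    def publish(self, sample):
        self.history.append(sample)  # oldest sample is evicted when full

    def on_late_join(self):
        # A connector joining the network for the first time receives
        # the stored history before any live updates.
        return list(self.history)

store = TransientStore(depth=3)
for level in [10, 20, 30, 40]:
    store.publish(level)
print(store.on_late_join())  # [20, 30, 40]: oldest sample was evicted
```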
We have already seen that connectors provide a powerful way to abstract communication behavior from the actual application. Combined with the labeling of data, there is one more enhancement we can make to ease the life of the application developer. The connector can filter out the data the application needs based on its content, even though the connector has no knowledge of the content itself. Consider a system monitoring chemical levels in tanks. Some application may want to process information about a fluid level only when it reaches a certain threshold. The application would subscribe to the data element which holds the fluid level, but would also supply the connector with a content filter (or evaluator) which can tell the connector whether the received information is actually of interest to the application.
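The tank example can be sketched as follows: the application supplies an opaque evaluator, and the connector applies it without interpreting the content itself. The field name `fluid_level` and the class name are assumptions made for illustration.

```python
class FilteredSubscriber:
    def __init__(self, evaluator):
        self.evaluator = evaluator  # opaque predicate supplied by the application
        self.delivered = []

    def on_data(self, sample):
        # The connector only asks "is this of interest?" -- it never
        # needs to understand the data's meaning on its own.
        if self.evaluator(sample):
            self.delivered.append(sample)

# The application cares only about levels above a threshold.
sub = FilteredSubscriber(evaluator=lambda s: s["fluid_level"] > 80.0)
for level in [55.0, 81.5, 92.0]:
    sub.on_data({"tank": "T1", "fluid_level": level})
print(sub.delivered)  # only the samples above the threshold
```

In a distributed setting such a filter could even be evaluated on the publishing side, saving bandwidth by never transmitting uninteresting samples.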