Enable Resilient Data Transfer with RabbitMQ - Fast Flow Variation

In addition to the resilient stream (see Enable Resilient Data Transfer with RabbitMQ), it may be necessary to configure a fast (near-real-time) flow. The difference is that the resilient flow is responsible for the guaranteed delivery of history to the destination data archive, whereas the fast flow only ever attempts to relay the current values.

In situations where a connection is unreliable, this means that dashboard displays will show the latest values as soon as a connection is available, rather than being delayed by a catch-up lag.

To implement a fast flow, a parallel stream is configured but with “ephemeral” settings.

RabbitMQ Settings

Fast Flow Queue

Name: data_core.tag_values.fast
Type: classic
Durability: Transient
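
Where the queue is created from code rather than through the management UI, a minimal sketch of an equivalent declaration using the Python pika client is shown below. The broker URL and credentials are assumptions for illustration only.

<code python>
# Minimal sketch: declare the transient (non-durable) fast-flow queue with pika.
# The connection URL and credentials are placeholders; substitute your own broker.
import pika

connection = pika.BlockingConnection(
    pika.URLParameters("amqp://guest:guest@localhost:5672/%2F")
)
channel = connection.channel()

# durable=False gives a transient queue; x-queue-type pins it to the classic type.
channel.queue_declare(
    queue="data_core.tag_values.fast",
    durable=False,
    arguments={"x-queue-type": "classic"},
)

connection.close()
</code>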

Fast Flow Operator Policy

Name: Fast Flow
Pattern: ^data_core\..*\.fast$
Apply To: Queues
Priority: 2
Max Length (bytes): 1000000000
Message TTL (ms): 60000
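
The same operator policy can also be applied through the RabbitMQ management HTTP API rather than the UI. The following is a minimal sketch using Python and requests, assuming the management plugin listens on localhost:15672 with default credentials.

<code python>
# Minimal sketch: create the "Fast Flow" operator policy via the management HTTP API.
# Host, port and credentials are assumptions; adjust for your environment.
import requests

policy = {
    "pattern": r"^data_core\..*\.fast$",
    "apply-to": "queues",
    "priority": 2,
    "definition": {
        "max-length-bytes": 1000000000,  # cap the backlog at ~1 GB
        "message-ttl": 60000,            # discard values older than 60 s
    },
}

resp = requests.put(
    "http://localhost:15672/api/operator-policies/%2F/Fast%20Flow",
    json=policy,
    auth=("guest", "guest"),
)
resp.raise_for_status()
</code>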

Fast Flow Shovel

Name: Fast Data Relay
Source: AMQP 0.9.1
Source URI: amqp://
Source Queue: data_core.tag_values.fast
Prefetch count:
Auto-delete: Never
Destination: AMQP 0.9.1
Destination URI: amqps://data_transfer_user:<password>@<servername or ip>
Destination Queue: data_core.tag_values.fast
Add forwarding headers: No
Reconnect delay (s): 15
Acknowledgment mode: No ack
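
If the shovel is created as a dynamic shovel instead of through the management UI, the equivalent definition can be pushed as a runtime parameter over the management HTTP API. A minimal sketch follows; the management host and credentials are assumptions, and the destination URI placeholders are carried over from the table above.

<code python>
# Minimal sketch: create the "Fast Data Relay" dynamic shovel via the management HTTP API.
# Management host/credentials are assumptions; replace the destination URI placeholders.
import requests

shovel = {
    "value": {
        "src-protocol": "amqp091",
        "src-uri": "amqp://",  # local broker
        "src-queue": "data_core.tag_values.fast",
        "src-delete-after": "never",
        "dest-protocol": "amqp091",
        "dest-uri": "amqps://data_transfer_user:<password>@<servername or ip>",
        "dest-queue": "data_core.tag_values.fast",
        "dest-add-forward-headers": False,
        "ack-mode": "no-ack",      # relay without waiting for acknowledgements
        "reconnect-delay": 15,
    }
}

resp = requests.put(
    "http://localhost:15672/api/parameters/shovel/%2F/Fast%20Data%20Relay",
    json=shovel,
    auth=("guest", "guest"),
)
resp.raise_for_status()
</code>

The no-ack mode, transient queue and 60-second TTL are what make this flow "ephemeral": values that cannot be relayed promptly are simply dropped rather than retried, leaving the resilient stream to deliver the full history.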

Data Core Settings

Besides the configured RabbitMQ Producer and Consumer drivers, and the Data Stream Rules that manage the routing of data within a Data Core instance, a new component is required: the In-Memory Data Source. The resilient stream is routed to an archive (e.g. IP Hist), whilst the fast flow is routed to an In-Memory data source. The In-Memory data source can then be configured to route historical queries to the archive.
