Why low latency is vital for edge success

5G, edge computing, digital transformation, digital twins, machine learning and AI appearing together looks like buzzword bingo. But there is a rational connection between these topics and technologies, writes Dheeraj Remella, the chief product officer at VoltDB.

To frame these topics in simple terms, they can be defined as follows:

5G – Provides high-bandwidth, low-latency connectivity to the mobile network, which can be private or public depending on the use case

Edge computing – Brings intelligence meaningfully closer to the source of events

Digital transformation – An exercise that an enterprise undertakes to break free of archaic processes and redefine its operations to make use of modern technology

Digital twins – The next step in the digital representation of organisational entities, such as people, assets and processes, moving from simple state recording to a full sense-and-control feedback loop

Machine learning – Employing computers and algorithms to understand what the data is telling us

Artificial intelligence – Operationalising the machine learning insights to facilitate ever-evolving ‘do, learn, do better’ cycles

Now, with this backdrop of understanding of the terms, let’s examine the fundamental thread that connects them. The singular intent of each of these technologies is to bring more self-learning automation to business processes. Data generated by enterprise assets and processes should be consumed as close to the source of the events as possible. This ensures decisions are not made – and actions are not taken – on stale information, which would render those decisions and actions obsolete because the universe has since moved on.

The decisions are not made from a static set of rules. Instead, they are dynamic: events are generated and decisions made at such speed that the overarching business process must stay fluid and adapt to current conditions. These dynamic rules are generated by training and retraining learning models.
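As a minimal illustration of that difference – the class names here are hypothetical, not from any particular product – a static rule fixes its threshold at deployment time, while a model-driven rule picks up a fresh threshold after every retrain:

```java
import java.util.concurrent.atomic.AtomicReference;

interface DecisionRule {
    boolean shouldAct(double observedValue);
}

// A static rule: the threshold is fixed when the system is deployed.
class StaticRule implements DecisionRule {
    private final double threshold;
    StaticRule(double threshold) { this.threshold = threshold; }
    public boolean shouldAct(double observedValue) { return observedValue > threshold; }
}

// A dynamic rule: a background training job periodically publishes a new
// threshold derived from the latest model, and decisions pick it up at once.
class ModelDrivenRule implements DecisionRule {
    private final AtomicReference<Double> threshold = new AtomicReference<>(0.0);
    void publishRetrainedThreshold(double t) { threshold.set(t); } // called after each retrain
    public boolean shouldAct(double observedValue) { return observedValue > threshold.get(); }
}
```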

When dealing with real-life processes and industrial applications, ‘good enough’ is not enough. Every decision and action needs to be correct based on the current situation and business rules. This is where bringing intelligence beyond simple aggregation out to the edge becomes important. But what exactly is the edge? Is it the device, a gateway, some kind of on-premises data centre or network, or is it the cloud? It depends on the application, but in most cases one thing becomes concretely evident: the edge on the device is too narrow to make any meaningful contextual decisions.

Having the edge layer in the cloud is too far away to act on events within a reasonable amount of time. Gateways are slowly becoming irrelevant, with narrowband IoT and Cat-M devices able to connect directly to the network through embedded SIMs (eSIMs). That leaves the on-premises data centre and the network edge as the best candidates for intelligent interactions. Applications that are more industrial by nature will benefit most from the on-premises approach, while applications that are more consumer-oriented will benefit from the network edge. In either case, while the network ping round-trip latency is important, the service latency in the middle of that communication round trip is just as essential.
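A simple worked budget shows why. The numbers below are illustrative assumptions, not measurements:

```java
// Illustrative latency budget: the end-to-end deadline must cover the network
// round trip *and* the time the service spends deciding in the middle of it.
public class LatencyBudget {
    public static void main(String[] args) {
        double endToEndSlaMs = 10.0;  // assumed machine-to-machine SLA
        double networkRttMs  = 4.0;   // assumed 5G round trip to the network edge
        double serviceBudgetMs = endToEndSlaMs - networkRttMs;
        System.out.printf("Service must ingest, decide and respond in %.1f ms%n",
                serviceBudgetMs); // prints 6.0 ms
    }
}
```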

To keep the service latency low, a data platform that serves multiple purposes together is necessary to ensure the data’s value is extracted without an artificially complicated technology stack. The event data needs to be ingested, stored and aggregated – either as a single business event or as a set of events (think complex event processing) – and compared to the aggregated data to measure key performance metrics. Any deviation in behaviour then needs to be acted upon, either for monetisation or for prevention of some form of threat.
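As a sketch of what that loop can look like in application code – the class names, the warm-up count and the deviation rule are all illustrative assumptions, not VoltDB’s API – consider:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal single-threaded sketch of the ingest-store-aggregate-act loop
// described above.
public class EdgeDecisionLoop {

    // Running aggregate per device: event count and running mean of a metric.
    static final class Aggregate {
        long count;
        double mean;
        void add(double v) { count++; mean += (v - mean) / count; }
    }

    private final Map<String, Aggregate> aggregates = new ConcurrentHashMap<>();

    // Called once per incoming event: ingest, aggregate, compare, act.
    public void onEvent(String deviceId, double value) {
        Aggregate agg = aggregates.computeIfAbsent(deviceId, id -> new Aggregate());
        agg.add(value);                                // store and aggregate
        double deviation = Math.abs(value - agg.mean); // compare to the baseline
        // Assumed rule: after a short warm-up, flag values more than 20%
        // away from the running mean for monetisation or mitigation.
        if (agg.count > 10 && deviation > 0.2 * Math.abs(agg.mean)) {
            act(deviceId, value, deviation);
        }
    }

    private void act(String deviceId, double value, double deviation) {
        System.out.printf("Device %s: value %.2f deviates by %.2f, trigger action%n",
                deviceId, value, deviation);
    }
}
```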

Monetisation opportunities can range from personalisation of the user experience to determining the most profitable end product – at this moment – from refined crude oil. Threats can range from potential machine downtime to bot-driven network intrusion. Given that the core objective of digital transformation is to automate business processes by shifting to a machine-to-machine communication paradigm, the latency of these decisions to invoke appropriate actions needs to be in single-digit milliseconds and, in more stringent environments, less than a millisecond. If this window is missed, a cascade of inefficiencies is set in motion: the decisions are stale, which translates into wrong learnings, which in turn creates insight that is not congruent with the reality of the enterprise.
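One common way to honour such a window is to bound each decision with an explicit deadline and treat late answers as stale. The sketch below assumes a hypothetical 5 ms budget and a placeholder model call:

```java
import java.util.concurrent.*;

// Sketch of enforcing a per-decision deadline: answers that miss the budget
// are treated as stale and a safe fallback is used instead. The names and
// the 5 ms budget are illustrative assumptions.
public class DeadlineBoundDecision {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    static String decide(double featureValue) {
        Future<String> decision = pool.submit(() -> scoreAgainstModel(featureValue));
        try {
            return decision.get(5, TimeUnit.MILLISECONDS); // single-digit-ms budget
        } catch (TimeoutException e) {
            decision.cancel(true);    // a late decision is a stale decision
            return "FALLBACK_ACTION"; // safe default keeps the process moving
        } catch (InterruptedException | ExecutionException e) {
            return "FALLBACK_ACTION";
        }
    }

    private static String scoreAgainstModel(double v) {
        // Placeholder for the real model evaluation.
        return v > 0.5 ? "ACT" : "IGNORE";
    }
}
```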

Depending on the use case, VoltDB’s customers typically allocate anywhere from 0.25 milliseconds to just under 10 milliseconds for making these decisions, and those decisions are made with 100% accuracy and uncompromising guarantees for resiliency. Completing intelligent decisions and taking action within this timeframe is no longer a nice-to-have. It is a must-have to uncover the latent value of data in its infancy. Our customers are able to gain unprecedented advantages, ranging from preventing 100% of bot intrusions to taking the best next action while adhering to stringent low latency service level agreements (SLAs).

www.voltdb.com
