Multi-access edge computing (MEC) is coming together with the low latency mmWave connectivity enabled by 5G to make intelligence at the edge a reality, one that enables truly bidirectional digital conversations and makes it possible to operate technologies such as digital twins effectively. Here, Dheeraj Remella, the chief product officer of VoltDB, tells George Malim, the managing editor of VanillaPlus, how the challenges associated with MEC are being addressed and why, for many, once they’ve looked over the edge, there will be no going back to centralised, remote architectures.
George Malim: Does the latency offered by mmWave make the goal of linking physical and digital twins in real time a reality? What are the complex impacts of this on enterprises, and can data really be utilised live in the stream of enterprise processes?
Dheeraj Remella: There are a few things coming together in this scenario. First, there is the high bandwidth provided by 5G deployments. 5G has three bands of operation: low band, mid band and high band. High band/mmWave allows massive amounts of data to be transferred over mobile networks, but the caveat is that mmWave doesn’t travel far and is easily blocked by obstacles. So an ideal environment in which to use mmWave optimally would be within a building.
This caveat brings us to the second element of the scenario: digital transformation. Enterprises often embark on a digital transformation journey but end up trapping themselves in what I call digital transliteration. Digital transliteration is when an enterprise simply digitises its existing processes as they are, using modern technology. I see this as a wasted opportunity to optimise business processes, break free of traditional constraints and evolve to operate in a realm of new possibilities.
The third element here is the need for closed-loop digital twins. Historically, digital twins were state-recording systems whose data was heavily used in analytics to understand the behaviour of systems, assets and processes. But given the drive towards automation, digital twins are now the intelligence behind their physical counterparts. Data is not just flowing one way from the physical asset, be it equipment, people or even a business process, to its digital representation. The flow needs to be bidirectional: data arrives along the physical-to-digital path, and response instructions flow in the digital-to-physical direction. This bidirectional digital conversation completes the picture of the real-time control loop that is going to be a fundamental requirement for successful transformation based on machine-to-machine communication.
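To make that bidirectional conversation concrete, here is a minimal sketch of a closed-loop digital twin in Java: a telemetry reading arrives on the physical-to-digital path, the twin updates its state, and a response instruction is returned on the digital-to-physical path. The class, metric names and threshold are illustrative assumptions, not a prescribed design.

```java
// A minimal sketch of a closed-loop digital twin: telemetry flows physical-to-digital,
// and a control instruction flows back digital-to-physical. All names are hypothetical.
import java.util.HashMap;
import java.util.Map;

public class ClosedLoopTwin {
    // Last known state of the physical asset, keyed by metric name.
    private final Map<String, Double> state = new HashMap<>();

    // Physical-to-digital: ingest a telemetry reading and update the twin's state.
    // Digital-to-physical: immediately return a control instruction, closing the loop.
    public String onTelemetry(String metric, double value) {
        state.put(metric, value);
        // Trivial decision logic as a placeholder for real, ML-informed rules.
        if ("temperature_c".equals(metric) && value > 85.0) {
            return "THROTTLE";   // instruction sent back to the physical asset
        }
        return "CONTINUE";
    }

    public static void main(String[] args) {
        ClosedLoopTwin twin = new ClosedLoopTwin();
        System.out.println(twin.onTelemetry("temperature_c", 92.3)); // THROTTLE
        System.out.println(twin.onTelemetry("temperature_c", 61.0)); // CONTINUE
    }
}
```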
The factors in this scenario profoundly impact how enterprises see the value of data. Data’s value for analytics is well established by now, but what is hidden is the value of data captured in its first few milliseconds. Low latency, event-driven servicing becomes especially significant when we are talking about the digital automation of business processes. Just a few examples of this hidden value from our experience at VoltDB are an 83% reduction in fraudulent transactions completing, 100% prevention of distributed denial of service (DDoS) attacks on the network, and 100% detection of bots before they intrude into a bidding process. All of these have stringent latency service level agreements (SLAs) within which to make these decisions. Data usage is going to augment post-event analytics by applying that intelligence to in-event decision making. Low latency decisions impacting operations require low latency availability of event data, and mmWave in private 5G settings will accelerate the shift to tapping into the first ten milliseconds of data’s life.
GM: What are the challenges of extracting all the value from the platform?
DR: Most enterprises have already invested in many technologies to fulfil various forms of value extraction from data. When faced with the challenge of tapping into real-time data streams, these enterprises often resort to figuring out how to make do with these existing investments. But they fall into the trap of spending inordinate amounts of time, money and resources, only to end up making compromises or failing outright at the attempt. The low latency expectations for event-driven, real-time value extraction from data require many different capabilities to work together in a single unified technology.
When patching together various technologies, enterprises face pitfalls like:
- Communication latency between various layers breaking SLAs
- Infrastructure footprints bloating because each layer requires its own resiliency model
- Complex failure handling when aiming to maintain business continuity
- Operations using stale machine-learned insights
- Outdated decisions because of not meeting latency SLAs
On the other hand, a unified platform ingests, stores, analyses and decides on data, continuously enriched by machine learning retraining cycles, addressing all of these needs holistically.
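As a rough illustration of the unified approach, the sketch below keeps ingest, store, analyse and decide inside a single process, with a retraining hook that refreshes the decision parameters in place, so no cross-layer hop adds latency. The event shape, the toy scoring rule and the threshold are assumptions for illustration only.

```java
// A minimal sketch of a unified ingest-store-analyse-decide loop in one process,
// avoiding cross-layer hops. The event shape and scoring rule are assumptions.
import java.util.ArrayDeque;
import java.util.Deque;

public class UnifiedPipeline {
    private final Deque<Double> recentAmounts = new ArrayDeque<>(); // in-memory store
    private double riskThreshold = 0.8;   // "model" parameter, refreshed by retraining

    // A periodic retraining cycle would update the decision parameters in place.
    public void refreshModel(double newThreshold) { riskThreshold = newThreshold; }

    // Ingest, store, analyse and decide on a single event within one call.
    public boolean approve(double amount) {
        recentAmounts.addLast(amount);                              // store
        if (recentAmounts.size() > 1000) recentAmounts.removeFirst();
        double mean = recentAmounts.stream()
                .mapToDouble(Double::doubleValue).average().orElse(amount); // analyse
        double risk = amount > mean * 5 ? 1.0 : 0.0;  // toy stand-in for an ML score
        return risk < riskThreshold;                                // decide
    }

    public static void main(String[] args) {
        UnifiedPipeline p = new UnifiedPipeline();
        for (int i = 0; i < 100; i++) p.approve(10.0);
        System.out.println(p.approve(12.0));   // true: in line with history
        System.out.println(p.approve(900.0));  // false: anomalous spike
    }
}
```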
GM: Even with the low latency, is the cloud still too far away to enable the round trip to be completed in time to create a meaningful interaction? Does this mean MEC is a must?
DR: This question is an exciting one for me. Cloud computing brings a lot of value by commoditising compute, storage and networking. But these new low latency expectations, and the value they unlock, force intelligence closer to the edge. Cloud vendors, for the most part, are centralised in large data centres. This centralisation means that, regardless of where an event took place, the event data needs to travel quite a distance before its value is realised.
The real-time needs of digital transformation are pushing the edge intelligence agenda forward at an accelerated pace. Now the question is, where is the edge? You have the edge in devices, gateways, customer premises equipment (CPE) and the network edge, and then there is the central/cloud data centre. MEC itself is evolving: it is transforming from a simple aggregator or local data store into an active participant in edge intelligence. In my opinion, MECs are going to be allocated a lot more capacity to accommodate these increasing responsibilities. It might even change the definition of MEC.
MEC on CPE will be the perfect place to land a variety of capabilities:
- Data thinning (see the sketch after this list)
- Automated event-driven decision making
- Data preparation for analytics – this becomes even more significant when you consider the infrastructure cost savings at the central data centre once not all raw data needs to be stored there
- Incorporation of machine learning insights into the decision-making process
A plan to succeed in bringing intelligence to the edge must include increasing the role of MEC.
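To illustrate the data-thinning idea, here is a minimal sketch of what a MEC/CPE node might do: only readings that deviate meaningfully from the last forwarded value are sent upstream, cutting the volume of raw data the central data centre has to store. The class name, readings and deadband threshold are hypothetical.

```java
// A minimal sketch of data thinning at a MEC/CPE node: suppress redundant readings
// at the edge and forward only significant changes to the central data centre.
import java.util.ArrayList;
import java.util.List;

public class EdgeThinner {
    private final double deadband;               // minimum change worth forwarding
    private double lastForwarded = Double.NaN;

    public EdgeThinner(double deadband) { this.deadband = deadband; }

    // Returns true if the reading should be forwarded to the central data centre.
    public boolean shouldForward(double reading) {
        if (Double.isNaN(lastForwarded) || Math.abs(reading - lastForwarded) >= deadband) {
            lastForwarded = reading;
            return true;
        }
        return false;   // suppressed: redundant raw data never leaves the edge
    }

    public static void main(String[] args) {
        EdgeThinner thinner = new EdgeThinner(0.5);
        double[] readings = {20.0, 20.1, 20.2, 21.0, 21.1, 25.0};
        List<Double> forwarded = new ArrayList<>();
        for (double r : readings) if (thinner.shouldForward(r)) forwarded.add(r);
        System.out.println(forwarded);   // [20.0, 21.0, 25.0]
    }
}
```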
GM: What are the challenges to MEC that need to be addressed? Are costs, security and the definition of the edge itself still barriers?
DR: Currently, MEC is underutilised, although it is in the right place: in the customer premises equipment. Steps are being taken to bring MEC to the network edge, and we’ve already observed major communications service providers (CSPs) partnering with public cloud vendors to bring the cloud experience to the CSPs’ data centres. In either case, MEC sits on single-unit boxes and does not take advantage of all the innovations in distributed computing architecture. When MEC becomes an integral part of an enterprise’s operations, it needs to provide the ability to scale and to maintain business continuity by clustering for resilience and performance. The initial thought that comes to mind is ‘why invest in near-edge infrastructure when one can have all the necessary hardware procured and managed centrally, especially on top of the central data centre’s massive investment?’. Right after that comes the concern of security and intrusion prevention.
My thoughts on this are: if MEC, and consequently edge intelligence, is implemented well for the right use cases, the returns will easily outweigh the cost. Besides, intelligence at the edge can potentially decrease the infrastructure needs in the central data centre. As for security, it should not be an afterthought: it is an integral part of everyday operations and weaves into business processes, and a zero-trust network implementation intertwines checks and prevention into every step and interaction. Now, the biggest challenge is the willingness, or lack thereof, of enterprises to undertake this journey rather than shackle themselves to old ways simply because that is how things are done today.
GM: Is moving intelligence closer to the edge the new standard? Is there no going back?
DR: The bottom line is that there is no denying enterprises are always looking for an edge to take their business to the next level and differentiate themselves from their competitors. With Industry 4.0 being accelerated by 5G, traditionally non-digital companies are looking to optimise their business processes and take advantage of advancements in communication and computing technology. Innovative leaders that adapt will leave behind the enterprises that do not. The edge will become the most explored area of innovation, bringing better security, optimisation and user experience. Once the path to the edge is taken, there is no going back.
GM: So where does VoltDB play in all of this?
DR: VoltDB has meaningfully integrated in-memory database and stream processing technologies. This combination brings the best of both worlds: data consistency, fast storage and transaction processing from the database world, and the ability to integrate with streams for data ingestion and for communication with other applications and systems. Our engineering team has built our technology as a single cohesive product instead of assembling various open source technologies and calling it a platform. Every step of our design process considers three inextricably linked questions:
- What is the shortest path from an event to a responding action?
- What is necessary to drive those actions intelligently?
- How can we do this while using the least amount of hardware and resources?
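As a rough sketch of the shortest path from an event to a responding action, the following VoltDB Java stored procedure ingests an event, checks it against stored state and answers it within a single transaction. It assumes a deployed VoltDB schema; the table and column names (CARD_ACTIVITY, card_id, amount, event_time) and the burst rule are hypothetical, standing in for the real, ML-informed decision logic.

```java
// A minimal sketch of an event-to-action decision as a VoltDB Java stored procedure:
// record the event and decide on it in one transaction, with no extra hops.
import org.voltdb.SQLStmt;
import org.voltdb.VoltProcedure;
import org.voltdb.VoltTable;

public class DecideOnTransaction extends VoltProcedure {
    // Count recent activity for this card within the last minute.
    public final SQLStmt countRecent = new SQLStmt(
        "SELECT COUNT(*) FROM CARD_ACTIVITY WHERE card_id = ? AND event_time > ?;");
    // Record the incoming event as part of the same transaction.
    public final SQLStmt recordEvent = new SQLStmt(
        "INSERT INTO CARD_ACTIVITY (card_id, amount, event_time) VALUES (?, ?, ?);");

    public long run(String cardId, double amount, long nowMillis) {
        voltQueueSQL(countRecent, cardId, nowMillis - 60_000L);
        voltQueueSQL(recordEvent, cardId, amount, nowMillis);
        VoltTable[] results = voltExecuteSQL(true);
        long recentCount = results[0].asScalarLong();
        // Toy rule standing in for ML-informed logic: block bursts of activity.
        return recentCount >= 5 ? 0L /* deny */ : 1L /* approve */;
    }
}
```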
I would highly encourage our audience to check out our paper on Intelligence at the Edge.