Edge intelligence is being trumpeted as the next big step in enabling IoT, but definitions of the edge are nebulous and the current base of deployed devices is dumb and inflexible. George Malim goes closer to the edge to understand how greater intelligence will open up new opportunities.
The edge has many definitions, most of which depend on the devices, network and applications involved. There’s the network edge, the cloud edge and the edge device to consider, all of which are at least partly able to define themselves as the edge. Regardless of your definition, the edge is certainly gaining traction: the most recent Forrester Analytics Global Business Technographics Mobility Survey reported that 25% of global telecoms decision-makers are already implementing or expanding edge computing in 2019.
“The definition of the edge depends largely on who is talking about it,” confirmed Rob Milner, the head of smart systems at Cambridge Consultants. “When you talk to cloud providers, they view the edge as being something a bit like a smaller data centre that’s a bit closer to end users but, for a device maker, that still looks like a data centre. Ultimately edge intelligence needs to be more on the devices themselves or if not there, somewhere nearby like a 5G base station.”
Ron Neyland, senior director of IoT & E2C Advanced Solutions and Technologies for HPE’s IoT and Edge CoE & Labs, explained that the edge can actually be a large territory. “The simplest answer is that edge is everything beyond the data centre and cloud,” he said. “It’s where the ‘things’ of IoT are. By keeping compute, storage, data management, and control at the edge, companies can minimise insight delay and reduce data backhaul bandwidth requirements.”
Different kinds to consider
Others agree that there are many different forms of edge to consider. “It’s helpful if you visualise the edge as a spectrum between the device and the compute,” said Marc Flanagan, the EMEA director for Edge and IoT Solutions at Dell Technologies. “On the right edge, what we call the far edge, is where data is generated at the device. Then you have the edge of the IoT network, where the compute power creates the less time-sensitive business insights, which are then delivered to the hub.”
Regardless of the precise definition of the edge, it’s important not to lose sight of the benefits of adding intelligence at the edge. “The edge is where the action happens,” said Glen Robinson, the emerging technology director at Leading Edge Forum. “Data can be generated almost anywhere, but our ability to do something meaningful with it is slightly more constrained.
“[This is] due to the need for additional capabilities, such as processing, memory, storage – all of which require power – which is a resource that is frequently in limited or occasional supply at the source of data creation. Therefore, the edge is a location [to which] we can get sufficient power…to run a small compute system that can process data and do something useful with it.”
Andrew Grant, the senior business development director for Vision and AI at Imagination Technologies, detailed the advantages of embedding intelligence at the edge. “Embedding intelligence at the edge reduces latency and data transfer,” he said. “It’s a key problem for cameras. So using neural network accelerators, along with traditional computer vision algorithms, the intelligent camera can send only the most important frames or metadata. For example, being able to identify suspicious packages left behind in densely populated public areas.”
“For a vehicle travelling at even 30mph, it makes sense for the vehicle to be able to make split-second decisions rather than sending data back to the cloud and suffering buffering,” he added. “This could literally be the difference between life and death. Commentators now believe that a self-driving vehicle could generate as much as four terabytes of data a day which would need to be moved around the vehicle.”
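Grant’s intelligent-camera example can be sketched in a few lines. The snippet below is a rough illustration, not any vendor’s actual pipeline: the `score_frame` function stands in for an on-device neural network accelerator, and the frame format is hypothetical. The point is the architecture he describes – run inference locally, then send only the important frames upstream, with lightweight metadata for everything else.

```python
# Illustrative edge-filtering sketch. ALERT_THRESHOLD and the frame
# dictionaries are invented for the example.
ALERT_THRESHOLD = 0.8

def score_frame(frame):
    """Stand-in for an accelerated on-device model; returns a 0..1 score.
    A real camera would run a neural network here."""
    return frame.get("detection_score", 0.0)

def process_frames(frames):
    """Build the uplink payload: the full frame when its score crosses the
    threshold, lightweight metadata otherwise – so most pixel data never
    leaves the device."""
    uplink = []
    for frame in frames:
        score = score_frame(frame)
        if score >= ALERT_THRESHOLD:
            uplink.append({"type": "frame", "id": frame["id"],
                           "data": frame["pixels"]})
        else:
            uplink.append({"type": "metadata", "id": frame["id"],
                           "score": score})
    return uplink

frames = [
    {"id": 1, "pixels": "...", "detection_score": 0.2},   # routine scene
    {"id": 2, "pixels": "...", "detection_score": 0.95},  # e.g. unattended package
]
payloads = process_frames(frames)
```

Only the second frame travels upstream in full; the first is reduced to a few bytes of metadata, which is exactly the latency and bandwidth saving Grant describes.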
Low latency, fast computing
Low latency and accelerated computing throughput are clear benefits, along with reduced cost and provision of a better experience to customers. “The fundamental benefit is that if we can move the compute, and the inherent intelligence that brings, closer to the ‘things’, we can materially improve and affect the way things are done,” said HPE’s Neyland.
He continued, “By moving the compute power and intelligence closer to where the data is being originated – in other words to the edge – you can do the analysis right there, and materially improve operations. Further, whereas previously you may have only been able to act on a small percentage of the data, at the edge you can utilise the data to a far greater extent. This allows improved results, earlier detection of issues, greater efficiency, reduced costs and other benefits.”
Cost reduction also resonates with Dell’s Flanagan. “There are two significant benefits to businesses embedding intelligence at the edge,” he said. “The first is cost. It’s naturally more cost-effective to process at least some data at the edge – data transmission is expensive, so by bringing compute closer to the origin of the data, you can reduce the volume of data you are transmitting back to the hub and therefore reduce the cost.
“The second is speed. Many use cases just cannot accept the latency involved in sending data over a network, processing it and returning a response. Autonomous vehicles and video surveillance in manufacturing are both great examples, where even a few seconds’ delay could mean the difference between an expected outcome and a catastrophic event.”
A substantial shift
Although much of the technology required to enable edge intelligence is mature, a substantial shift is required from the traditional, centralised hub and spoke architecture of data collection, analytics and action. A new era where edge intelligence is the norm will require new approaches to how operations are run.
“A lack of skills, security, the costs of the cloud and a lack of maintenance support will all hinder the growth of edge computing,” said Sam Wiltshire, a talent consultant at recruitment firm Paratus People. “For organisations looking to move towards this model, don’t underestimate the initial legwork involved. It’ll be as transformative – and beneficial – as moving to the cloud, with the associated training, infrastructure investment and process changes.”
“Most edge implementations are going to occur in areas with little to no on-site IT support,” he added. “That makes a lack of internal skill a challenge. A potential solution is to use low-cost monitoring solutions that will forewarn of issues and allow tech support to be deployed to the scene quickly.”
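The kind of lightweight monitoring Wiltshire describes can be as simple as threshold checks on telemetry from unattended sites. The sketch below is a minimal, hypothetical illustration – the node names, metrics and limits are invented – of flagging edge nodes before they fail outright, so support can be dispatched early.

```python
def check_health(readings, cpu_limit=90.0, temp_limit=70.0):
    """Return a warning string for each edge node whose latest telemetry
    breaches a limit. Limits here are illustrative defaults."""
    warnings = []
    for node, metrics in readings.items():
        if metrics["cpu_percent"] > cpu_limit:
            warnings.append(f"{node}: CPU at {metrics['cpu_percent']}%")
        if metrics["temp_c"] > temp_limit:
            warnings.append(f"{node}: temperature {metrics['temp_c']}C")
    return warnings

# Hypothetical telemetry from two remote gateways
telemetry = {
    "gateway-01": {"cpu_percent": 45.0, "temp_c": 55.0},
    "gateway-02": {"cpu_percent": 97.0, "temp_c": 72.0},
}
alerts = check_health(telemetry)  # gateway-02 trips both checks
```

In practice such checks would feed an alerting channel rather than a list, but the principle – forewarn from cheap telemetry rather than staff every site – is the same.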
This changed architecture, coupled with the new operating model it ushers in, could turn out to be a greater challenge for edge intelligence than the need to deploy programmable, intelligent devices. People, as well as the technology, will need to change. After all, as the old saying goes, if you aren’t living on the edge, you’re taking up too much room.