Successful deployment of intelligence at the edge relies on a spread of technologies, including artificial intelligence (AI) and machine learning to equip edge devices with analytics and decision-making capabilities. How long will all this take, asks George Malim?
Edge intelligence promises lower latency, faster decision-making, better resource utilisation and reduced costs – but not all of these benefits go together or even require the same technical foundations. The development of edge intelligence will therefore be highly fragmented, with some industries and applications moving very rapidly thanks to a strong and accessible business case, while others lag, waiting for device replacement cycles or for the cost of chipsets to come down to a level that can sustain a lower-value business case.
There’s a substantial shopping list of necessary hardware and software to be bought and deployed. “Edge computing requires IoT devices to be able to process high volumes of data efficiently and effectively,” said Alistair Elliott, the chief executive of Solutions at Pod Group. “This means that IoT devices need advanced sensor, microcontroller, and SoC [system on a chip] technology, all of which is available today.
“The time it will take for devices, sensors and data processing capabilities to be rolled out to enable the full benefits of IoT at the edge depends entirely on the human factor.”
He added, “The technology needed to deploy edge computing is already here. Whether it is rolled out depends on whether organisations are willing and able to invest the required resources. Successfully deploying an edge computing solution requires a significant upfront investment of time, human resources and money.”
There is money to be made
Yet there is money to be made. “Having the data and the real-time analytics at increasingly granular levels grows the opportunity for monetisation – provided this is all compliant with data protection regulations – and enables organisations to follow in the footsteps of Google and Amazon,” said John English, the director of service provider solutions at Netscout. “Service providers will gain from the practical application of data and the increased personalisation of services, but a lot of intensive steps will need to be taken first, so the full benefits will emerge gradually over time.”
Technological advancement is coming to an edge near you. “New wireless technologies, longer battery life, improved sensor technologies and continued improvements in computing power at low cost are all contributing to enabling IoT computing,” said Alan Grau, vice president of IoT, Embedded Solutions, at Sectigo. “We are already beginning to see the benefits, but it will be years before we see the full benefit of these technologies. Early adopters are utilising AI, building out business use cases and creating IoT devices with strong security, but much work remains to be done. Integration, deployment and optimisation will continue for the next decade.”
For Dave Baskett, a technical strategy manager at industrial IT software provider SolutionsPT, cost barriers are coming down. “Low-cost, non-intrusive sensing and low-power wide area networks are making edge sensing a much more realistic deployment option,” he said. “But computing power and scale in the cloud have made true analytics and AI a realistic and deployable technology today. Edge computing is constantly improving. It’s not about limits, it’s about what is appropriate and practical depending on the challenges that organisations are trying to resolve and the different architectures in place.”
Empowering energy
For some, the opportunity is already here. EnergyHub provides utilities with software for managing distributed energy resources (DERs): physical and virtual assets deployed across the distribution grid, typically close to load. These can be used individually or in aggregate to provide value to the grid, to individual customers, or to both. The company already works with 40 utilities in the US.
“We work with a range of partners across smart homes, and we work with technologies…from the fairly dumb to the very smart, with a lot of computing intelligence at the edge,” explained Ben Hertz-Shargel, the vice president of analytics at EnergyHub. “The challenge within energy is the co-ordination of these devices. Historically, power distribution was very simple: a one-way process of delivering power to the customer. A transformation is now happening globally in which the old model is breaking down and a new, bi-directional model is emerging.”
“Photovoltaic capabilities mean that there can be over-generation at midday but, if utilities can get these devices under management and control them in a smart way, costs can be saved and electricity can be used more effectively,” he added. “It’s not critical to move as much intelligence as possible onto these devices, but the customer experience and the flexibility are critical, so that we can collectively coordinate an enormous aggregation of these devices.”
Hertz-Shargel gave the example of a potential brownout being averted by the utility turning down smart air-conditioners. Incremental changes made at vast scale can make power grids more resilient. Cooling slightly less, or not heating water to quite such a high temperature, would be barely noticeable to an individual consumer, but when each saving is aggregated across the smart households of a town, substantial demand is removed from the grid and brownouts are avoided.
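To put rough numbers on that aggregation effect, here is a minimal sketch in Python. Every figure – the air-conditioner draw, the size of the cutback and the number of enrolled households – is an illustrative assumption, not EnergyHub data.

```python
# Hypothetical illustration of how small per-device reductions aggregate.
# All figures are assumptions for the sake of the example.

AC_POWER_KW = 3.0          # assumed draw of a residential air conditioner
REDUCTION_FRACTION = 0.15  # a modest 15% setpoint-driven cutback
HOUSEHOLDS = 50_000        # assumed smart households enrolled across a town

per_device_kw = AC_POWER_KW * REDUCTION_FRACTION
aggregate_mw = per_device_kw * HOUSEHOLDS / 1_000

print(f"Per device: {per_device_kw:.2f} kW")  # 0.45 kW - barely noticeable
print(f"Aggregate:  {aggregate_mw:.1f} MW")   # 22.5 MW removed from the grid
```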
Each relatively unintelligent DER needs to play its part to make this vision a reality, but the power companies are on board, and so are customers, once they are assured that the capability is non-intrusive and that they will be offered incentives for allowing their devices to be controlled by the power provider.
Distributed topography
This distributed topography of intelligent devices is a major change from the traditional, centralised model, and managing it will need careful consideration. “In the future, networks will evolve into a sophisticated hierarchy of data centres,” said Alan Carlton, the vice president of InterDigital. “It is not so much that the edge will replace the cloud; rather, the cloud will become much more distributed. This will allow functionality to be spun up and implemented wherever it is best to do so.”
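To make that idea concrete, the toy heuristic below sketches how workloads might be placed across such a hierarchy. The tiers and thresholds are illustrative assumptions, not anything InterDigital has specified.

```python
# A toy placement heuristic for a "distributed cloud": decide where a
# function should run from its latency budget and data volume.
# Tiers and thresholds are illustrative assumptions only.

def place_workload(latency_budget_ms: float, data_mb_per_s: float) -> str:
    """Pick the lowest tier that can meet the latency budget."""
    if latency_budget_ms < 10 or data_mb_per_s > 100:
        return "device edge"       # hard real-time, or too costly to backhaul
    if latency_budget_ms < 50:
        return "regional edge DC"  # near-real-time analytics
    return "central cloud"         # batch training, long-term storage

print(place_workload(5, 200))     # device edge
print(place_workload(30, 1))      # regional edge DC
print(place_workload(500, 0.1))   # central cloud
```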
Moving intelligence closer to the point of use seems a no-brainer in theory but poses practical problems. “Edge requires a different approach to application architecture – microservices rather than monolithic systems; serverless rather than dedicated iron,” noted Joseph Denne, the founder and chief executive of EDGE.
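In miniature, such a service might look like the sketch below: a single small function that reduces raw sensor data locally and forwards only a compact summary upstream. The names, fields and threshold are illustrative assumptions, not a real product API.

```python
# A minimal sketch of one edge "microservice": it ingests raw sensor
# readings locally and passes only a compact summary to the cloud.
# Field names and the alert threshold are illustrative assumptions.

from statistics import mean

def summarise_readings(readings: list[float], alert_threshold: float) -> dict:
    """Reduce a raw window of sensor data to the fields the cloud needs."""
    peak = max(readings)
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": peak,
        "alert": peak > alert_threshold,
    }

# Raw data stays at the edge; only the summary crosses the network.
window = [21.3, 21.5, 22.1, 27.8, 21.9]
print(summarise_readings(window, alert_threshold=25.0))
# {'count': 5, 'mean': 22.92, 'max': 27.8, 'alert': True}
```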
He concluded, “As edge computing and edge networking grow, we expect to see services move closer to the point of use – out of the cloud and onto the edge. In the first instance this will be as a complementary service, extending and improving traditional clouds. But as connectivity improves, and localised processing capacity increases, we will see a significant shift towards the edge, ultimately relegating the cloud to a big data and storage solution in support of edge computing.”