Editor's note: As developers realize IoT systems need more intelligence deployed to the edge to overcome latency, performance, data privacy/security, and bandwidth challenges, the Special Projects team at AspenCore Media explores the pursuit of that smarter edge: the what, why and where. It's an effort — and an opportunity — that requires cooperation among component, hardware and software developers.
The internet of things (IoT) has for several years been touted as the answer to many challenges. Connected devices can improve efficiencies and productivity in industrial systems, provide valuable feedback mechanisms for connected healthcare systems and wearables, and provide a vast array of capabilities to improve driver assistance and enable a path towards autonomous driving in vehicles.
The promise has relied to a large extent on sensors in a network collecting data, transmitting it via a gateway to the cloud, processing that data, analyzing it, and then providing a control or feedback back to the local system or sensor.
However, over the last couple of years, developers and systems integrators have come to realize there are issues around latency, data security, and bandwidth cost in doing things this way. The way to mitigate these challenges is to add more intelligence at the edge, so that response times are faster, data can be kept secure and private, and data communication costs are minimized. So putting intelligence at the edge is a no-brainer, right? Well, it is, but there are of course practical limitations to the smarter edge.
As the “smarter edge” has become the buzzword (or buzz-phrase) of the embedded systems industry, its definition has become rather fuzzy, and it spans a wide part of the network. Some define the edge as anything not in the cloud; but even within the edge, are you at the endpoint or at the gateway? These are some of the questions we explore as we try to understand whether there is any consensus among the vendors supplying the chips and systems.
In this special project, we also explore how much intelligence should be added to the edge, and what the practicalities and limitations of doing so are. And who are the top edge AI chip startups? We look at some of those, as well as predictions on the key trends around AI inferencing.
Articles in this Special Project:
By Nitin Dahad
What’s the difference between edge AI and endpoint AI, and how much intelligence should you put into an edge AI device? We talk to various vendors about edge intelligence to see if there is a consensus on the ‘right way’.
By Sally Ward-Foxton
Will Arm continue to dominate? Alternatives are springing up, and as the IoT grows, the market is set to expand as well.
By Duncan Stewart, Jeff Loucks, Deloitte’s Center for Technology, Media and Telecommunications
Adding AI brings only incremental cost to a device, whether it’s a smartphone or an enterprise/industrial application. A look at the market, the economics, and the inherent benefits of edge AI.
By Sally Ward-Foxton
Our top ten picks for most promising and/or interesting edge AI chip startups.
By Dennis Goldenson, SAR Insight & Consulting
Making the case for edge intelligence: operation without a network connection and real-time data.
By Geoff Tate, Flex Logix
AI chips optimized for inference, rather than for graphics, training or DSP, will be the big thing in 2020. Here is one perspective on the top five predictions for AI inference.
By Sally Ward-Foxton
BF16, the new number format optimized for deep learning, promises power and compute savings with a minimal reduction in prediction accuracy.
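For readers unfamiliar with the format: BF16 (bfloat16) keeps float32’s 8-bit exponent, so it covers the same dynamic range, but truncates the mantissa from 23 bits to 7. The sketch below (an illustration, not from any of the articles) shows the conversion as a simple bit truncation, which is why the format is cheap to implement in hardware.

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16: keep the top 16 bits
    (sign, 8-bit exponent, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Re-expand bfloat16 bits to float32 by zero-padding the mantissa."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Same exponent range as float32, but values are coarsely rounded:
# 3.14159 survives the round trip as 3.140625.
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

The small relative error per value is what the articles’ “minimal reduction in prediction accuracy” claim rests on: deep-learning workloads tolerate coarse mantissas far better than a reduced exponent range.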