Adam Gibson on the Rise of AI in Edge Devices

Adam (left) with Skymind Holdings Berhad CEO Shawn Tan (centre) and COO Dr Goh Shu Wei.

Adam Gibson, the co-founder and Chief Technology Officer of Skymind, believes that home and industrial automation featuring artificial intelligence (AI) and machine learning (ML) capabilities, part of the broader trend known as the Internet of Things (IoT), is on the rise.

“We’re slowly seeing a big increase in devices shipped, whether it’s mobile phones, tablets, cameras, smart speakers, cars, drones – there’s all sorts of forms of AI edge devices now. A lot of the time, these capabilities are embedded right in the end device itself, so there’s a lot of different operating system and chip-specific optimisations built right into these devices to make them run specific use cases. For instance, drones typically have cameras built into them, and sometimes they’ll even have built-in processors that are energy-efficient to make them run longer, so you have longer battery life.”

He noted that changes in consumer habits and workplace safety are driving this increase. “You see people becoming increasingly comfortable with the idea that they can just yell the name of an assistant and have it buy groceries or play music. Workplace safety is also a big thing right now, especially with coronavirus, so you have cameras watching for masks. Social distancing compliance is another big one, so there are a lot of camera use cases as well.”

Gibson, who said this during his keynote address ‘TinyML and AI at the Edge: The Next Paradigm to make AI run everywhere’ at the 2020 WAIC Developers Day Forum webinar yesterday (11 July), is also CEO of Konduit AI, a subsidiary of Skymind that maintains Eclipse Deeplearning4j (DL4J), a framework Gibson created.

DL4J is the first commercial-grade, open-source, distributed deep learning library written for Java and Scala, allowing users to compose flexible deep neural networks and deploy them in production environments.
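
For readers unfamiliar with the library, the sketch below shows roughly what composing a small network looks like with DL4J’s builder API; the layer sizes, updater, and seed are illustrative placeholders rather than anything from Gibson’s talk.

    import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
    import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
    import org.deeplearning4j.nn.conf.layers.DenseLayer;
    import org.deeplearning4j.nn.conf.layers.OutputLayer;
    import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
    import org.nd4j.linalg.activations.Activation;
    import org.nd4j.linalg.learning.config.Adam;
    import org.nd4j.linalg.lossfunctions.LossFunctions;

    public class TinyNet {
        public static void main(String[] args) {
            // A two-layer feed-forward classifier; all sizes are illustrative.
            MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                    .seed(42)
                    .updater(new Adam(1e-3))
                    .list()
                    .layer(new DenseLayer.Builder()
                            .nIn(784).nOut(128)
                            .activation(Activation.RELU).build())
                    .layer(new OutputLayer.Builder(
                            LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                            .nIn(128).nOut(10)
                            .activation(Activation.SOFTMAX).build())
                    .build();

            MultiLayerNetwork net = new MultiLayerNetwork(conf);
            net.init();
            System.out.println(net.summary());
        }
    }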

Gibson defined edge devices as small devices, usually less powerful than laptops, which typically run on ARM-based CPUs.

“ARM is a specialised provider of hardware for constrained devices. They have a big focus on efficiency and low power usage. Power efficiency is important for saving battery life for your car. You know, if you have a self-driving car, and you’re driving it down the street, you can’t just run a big GPU in there, you need something that’s power-efficient.”

However, Gibson highlighted the fact that there remains much room for growth and development in the use of AI in edge devices.

“We record data today and it’s starting to be used, but it’s still fairly early days for the mass market. According to an Eclipse IoT survey, most people are still not thinking about machine learning; it’s still a very low concern.

“Most of the time, people are more concerned about the kinds of sensors and the kind of data capturing they’re able to do. Most people and companies have an opinion on this, but there’s not a ton of priority on understanding what ML use cases are doable out there right now. Over the next 3-5 years, I think we’ll see a trend where this is increasing.”

This is where TinyML comes in, according to Gibson. Coined by Pete Warden at Google, the term refers to running machine learning and analytics workloads on edge devices.

“Just because you can run analytics doesn’t mean that it’s going to be efficient or useful. This is why you have an emphasis on low-power microcontrollers. Typically, this is what you think of with embedded computing. So if you have a board that you can hook up to your laptop and program, and then use it to blink LED lights or what have you, that is embedded computing and that is a microcontroller.” Hence the use of deep learning frameworks specialised for running on low-power devices.

Gibson highlighted a number of innovations that make it possible to run deep learning on smaller devices.

“Weight pruning is one – being able to take out parts of the neural network that don’t necessarily matter for making a decision. Another thing you can do is force computations to be integer-based. We can do something called quantisation, which takes floating-point inputs and turns them into integer operations, which are cheaper in computation and memory, and there’s a big focus on making them run on lower-power devices as well.”
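
As a rough illustration of the two techniques Gibson names (a sketch under our own assumptions, not code from his talk), magnitude-based weight pruning and symmetric 8-bit quantisation can be written in a few lines of plain Java:

    import java.util.Arrays;

    public class EdgeCompression {
        // Magnitude-based pruning: zero out weights whose absolute value
        // falls below a threshold, so they can be skipped or stored sparsely.
        static float[] prune(float[] weights, float threshold) {
            float[] out = weights.clone();
            for (int i = 0; i < out.length; i++) {
                if (Math.abs(out[i]) < threshold) out[i] = 0f;
            }
            return out;
        }

        // Symmetric 8-bit quantisation: map floats onto integers in
        // [-127, 127], where scale is chosen so max |w| lands on 127.
        static byte[] quantise(float[] weights, float scale) {
            byte[] out = new byte[weights.length];
            for (int i = 0; i < weights.length; i++) {
                int q = Math.round(weights[i] / scale);
                out[i] = (byte) Math.max(-127, Math.min(127, q));
            }
            return out;
        }

        public static void main(String[] args) {
            float[] w = {0.91f, -0.004f, 0.32f, -0.67f, 0.002f};
            float[] pruned = prune(w, 0.01f);   // drops the two tiny weights
            float scale = 0.91f / 127f;         // max |w| mapped to 127
            System.out.println(Arrays.toString(quantise(pruned, scale)));
        }
    }

The appeal on constrained hardware is that an 8-bit integer multiply-accumulate costs far less in silicon, memory, and bandwidth than its 32-bit floating-point equivalent.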

However, Gibson said, there are still a lot of problems in making AI frameworks and models run on edge devices.

“So [chip and framework vendors] provide specialised APIs that only implement a small subset of the operations required to run a neural network. Most neural network libraries used for inference at the edge today only support convolutional neural networks (CNNs). So they implement the minimum necessary to run very specific use cases.”
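
To make that trade-off concrete (our own hedged illustration, not something Gibson demonstrated), inference through such a narrow API typically amounts to little more than loading a model and invoking it, as in this sketch using TensorFlow Lite’s Java Interpreter; the file name and tensor shapes are hypothetical.

    import org.tensorflow.lite.Interpreter;
    import java.io.File;

    public class EdgeInference {
        public static void main(String[] args) {
            // Hypothetical on-device image classifier; the file name and
            // shapes below are placeholders, not from Gibson's talk.
            Interpreter interpreter = new Interpreter(new File("model.tflite"));

            float[][][][] input = new float[1][224][224][3]; // one RGB image
            float[][] output = new float[1][1000];           // class scores

            // The API exposes little beyond load and run: exactly the
            // "small subset of operations" trade-off described above.
            interpreter.run(input, output);
            interpreter.close();
        }
    }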

Gibson explained that current efforts focus on three areas: computer vision, speech, and time series. “The areas that are more compute-intensive, and where the interesting problems lie, are typically vision and speech. Those are hard problems; there are always new optimisations for CNNs to make image models run more efficiently at the edge.”

Currently, AI deployment on edge devices is hampered by fragmentation: there is no guarantee that software built for one ARM version will remain compatible with the next.

“There’s no guarantee that the frameworks will run for any specific period of time. Even if you look at mobile phones today, Android phones are upgraded once a year minimum, and a lot of times, they’re not backwards compatible. So it makes it hard to support these things,” he said.

To get around that, chip developers provide frameworks mainly for their own use cases.

“A lot of the innovation at the edge right now starts in consumer electronics and mobile devices. You see Apple and Google pushing their own frameworks; you even see Tencent writing its own framework for its own mobile app AI at the edge. These things are open-source and pretty common now, but they mainly support their own use cases. They’re going to implement whatever they need to make their applications run.”

Gibson also shared other observations on the deployment of AI-enabled edge devices in the real world.

“There’s a lot more to it than just setting up a camera on a Raspberry Pi, especially in a B2B setting where there are SLAs and people relying on the system to be up. A lot of real-world deployments right now are mainly vision and time series, i.e. cameras and sensor data – understanding how people are moving and capturing data about where people are located,” he shared.

He noted that large deployments typically use cloud-based software for device management.

“Due to the sensitivity of devices at the edge, data security is a concern, so hybrid cloud is fairly common. Sometimes, people will be okay with a cloud vendor being the provider of the IoT platform for their use cases; sometimes you’ll have multiple clouds built in to make sure that they’re not locked into one vendor.

“There are also a lot of devices and sensors needed just for one use case; the scale is generally hundreds, sometimes thousands of devices. So be careful about what promises are being sold to you whenever you’re evaluating an IoT platform.

“Focus on simplicity; sometimes you don’t need a big complicated platform, depending on your individual use cases.”