Bringing Cost-Effective, On-Device AI to Home Appliances - Sparse Weights and Interactions Negate GPUs and Cloud Computing
Circuit Cellar|September 2024
GPU silicon and cloud computing infrastructure are too costly for mass-market devices like refrigerators and washers. By deploying compute resources only to the necessary parts of AI inference, sparse AI allows product designers to practically incorporate new AI features, such as natural voice interfaces, into their consumer offerings without breaking the bank or inflating the electric bill.
Sam Fok

Artificial Intelligence (AI) is changing every aspect of consumer product design. Manufacturers are discovering new ways to deploy AI in their products to differentiate in the market with new capabilities, improved efficiency, and reduced operating costs. However, the extreme processing requirements of AI have led manufacturers to implement it either on-device using a high-cost, AI-capable chip, via a connection to cloud-based AI infrastructure, or both.

These approaches incur design and operating costs that have priced AI out of all but the highest-end products in markets such as white goods/home appliances and other consumer electronics applications. To make AI accessible to cost-sensitive mass markets, we need to bring costs down. This is where sparse AI technology can help. By optimizing AI inference processing by up to 100 times, sparse AI enables developers to implement complex, deep-learning-based AI models on low-cost AI MCU silicon without adversely impacting speed, efficiency, memory footprint, or accuracy. This article will explore sparse AI and how manufacturers can optimize AI inferencing to reduce cloud-based dependency and infrastructure, or even implement powerful AI-based capabilities completely on-device.

SPARSE AI

AI models can be complex and require extensive processing resources and memory. In high-end AI systems, a specialized (and expensive) processor like a GPU runs AI model inferencing on-device. Alternatively, many systems take a cloud-based approach where data is collected on-device and sent to a server for processing.
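The core idea behind sparse inference can be illustrated with a small sketch. The article's subtitle points to sparse weights: if most weights in a layer are zero, a compressed representation (here, CSR storage as a simple assumed example, not the article's specific implementation) lets the processor skip the multiply-accumulate operations a dense implementation would waste on zeros.

```python
# Illustrative sketch only: dense vs. sparse matrix-vector inference.
# A dense layer spends a multiply-accumulate on every weight, zero or not;
# a sparse layer stores and touches only the nonzeros.

def dense_matvec(weights, x):
    """Dense layer: every weight participates, including zeros."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def to_csr(weights):
    """Compress a weight matrix to (values, column indices, row pointers)."""
    vals, cols, rowptr = [], [], [0]
    for row in weights:
        for j, w in enumerate(row):
            if w != 0.0:
                vals.append(w)
                cols.append(j)
        rowptr.append(len(vals))
    return vals, cols, rowptr

def sparse_matvec(csr, x):
    """Sparse layer: iterate only over the stored nonzero weights."""
    vals, cols, rowptr = csr
    out = []
    for r in range(len(rowptr) - 1):
        acc = 0.0
        for k in range(rowptr[r], rowptr[r + 1]):
            acc += vals[k] * x[cols[k]]
        out.append(acc)
    return out

# A 90%-sparse 4x5 weight matrix: dense does 20 MACs, sparse does 2.
W = [
    [0.0, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, -1.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0],
]
x = [1.0, 2.0, 3.0, 4.0, 5.0]
assert sparse_matvec(to_csr(W), x) == dense_matvec(W, x)
```

On an MCU without a GPU, this kind of work reduction, applied across every layer, is what makes the compute and memory budget of deep models tractable; real sparse runtimes also exploit sparse activations and hardware-friendly storage formats.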

This story is from the September 2024 edition of Circuit Cellar.

