Coral

Build beneficial and privacy-preserving AI.

Overview

Coral is a comprehensive toolkit for building products with local AI. It enables on-device machine learning inferencing, making applications efficient, private, fast, and able to run offline. At its core is the Edge TPU coprocessor, Google's low-power ASIC, which delivers high-performance neural network inferencing directly on embedded devices. Coral powers a new generation of intelligent devices across industries, backed by a strategic partnership with ASUS IoT for global manufacturing, distribution, and support.

How It Works

  • Accelerated ML Inferencing: Coral's Edge TPU coprocessor provides high-performance ML inferencing directly on embedded devices (see the inference sketch after this list).
  • On-Device Processing: Executes deep feed-forward neural networks ideal for vision-based AI applications, achieving 4 TOPS with just 2 watts of power.
  • TensorFlow Lite Compatibility: Allows developers to convert and compile their models for optimal performance on the Edge TPU.
  • Local Data Processing: Ensures privacy by performing inferences on-device, minimizing cloud dependency.
  • Development & Deployment: Leverage various Coral prototyping and production-ready devices to accelerate ML development and field deployment.
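As a concrete illustration, here is a minimal classification sketch using the PyCoral API. It assumes a Coral device is attached and an Edge TPU-compiled model is available locally; the model, label, and image file names below are placeholders:

```python
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

# Load a model compiled for the Edge TPU (file names are placeholders).
interpreter = make_interpreter('mobilenet_v2_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's expected input size and run inference.
image = Image.open('input.jpg').convert('RGB').resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

# Print the top prediction and its score.
labels = read_label_file('labels.txt')
for c in classify.get_classes(interpreter, top_k=1):
    print(labels.get(c.id, c.id), c.score)
```

Because the model was compiled for the Edge TPU, the invoke() call executes supported operations on the coprocessor rather than the CPU.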

Use Cases

Smart Cities AI Solutions
Enable privacy-preserving occupancy detection, pedestrian safety systems, and optimized traffic flow management with on-device AI, improving urban efficiency and safety.
Manufacturing Intelligence
Deploy high-accuracy visual inspection, predictive maintenance, and worker safety systems using Coral's fast, local ML capabilities for enhanced productivity and operational safety.
Agriculture & Healthcare AI
Implement real-time soil analysis, crop disease identification, and healthcare tools like early cancer detection—all powered by energy-efficient on-device ML inferencing.

Features & Benefits

  • Edge TPU Coprocessor: High-performance, low-power AI hardware (4 TOPS at 2W)
  • Efficient Performance: Optimized for embedded applications
  • Enhanced Privacy: Local inference, user data control
  • Lightning-Fast Inference: Real-time AI processing
  • Offline Capability: Operates without internet connectivity
  • Comprehensive Product Range: Prototyping devices, production modules, accessories
  • Broad ML Application Support: Object detection, pose estimation, image segmentation, key phrase detection
  • Flexible Software Support: Compatible with TensorFlow Lite, Python, C++, Mendel Linux
  • Accelerated Transfer Learning: Retrain final model layers with small datasets (see the sketch after this list)
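
The transfer-learning item above refers to a standard workflow: freeze a pretrained base network and retrain only the final layers on a small dataset. Here is a generic Keras sketch of that idea, not Coral-specific code; the class count and dataset are placeholders, and the resulting model would still need quantization and Edge TPU compilation:

```python
import tensorflow as tf

# Pretrained feature extractor, kept frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False

# New classification head, retrained on a small custom dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax'),  # e.g. 5 custom classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(small_dataset, epochs=5)  # train with your own data
```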

Target Audience

  • Developers, engineers, product builders, and enterprises seeking to integrate on-device AI and ML capabilities.
  • Hardware Manufacturers: Building intelligent devices, embedded systems, and IoT solutions.
  • Software Developers: Especially those deploying TensorFlow Lite models to edge devices.
  • System Integrators: Businesses implementing AI solutions at scale.
  • Key Industries:
    • Smart cities
    • Manufacturing
    • Agriculture
    • Healthcare
    • Energy

Pricing

  • Prototyping Products:
    • Dev Board: £129.99
    • USB Accelerator: £59.99
    • Dev Board Mini: £99.99
    • Dev Board Micro: £79.99
  • Production Products:
    • Mini PCIe Accelerator: £24.99
    • M.2 Accelerator A+E key: £24.99
    • M.2 Accelerator B+M key: £24.99
    • M.2 Accelerator with Dual Edge TPU: £39.99
    • System-on-Module (SoM): £99.99
    • Accelerator Module: £19.99
  • Accessories:
    • Camera: £19.99
    • Wireless Add-on board for Dev Board Micro: £19.99
    • PoE Add-on board for Dev Board Micro: £24.99
    • Environmental Sensor Board: £14.99
    • Dev Board Micro Case: £9.99
For bulk purchases or specific requirements, contact the sales team.

FAQs

What is the Edge TPU?

The Edge TPU is a small ASIC designed by Google that provides high-performance ML inferencing for low-power devices. For example, it can execute state-of-the-art mobile vision models such as MobileNet V2 at almost 400 FPS while remaining power-efficient. Coral offers multiple products with the Edge TPU built in.

What is the Edge TPU's processing speed?

An individual Edge TPU can perform 4 trillion (fixed-point) operations per second (4 TOPS), using only 2 watts of power—in other words, you get 2 TOPS per watt.

How is the Edge TPU different from Cloud TPUs?

They are very different. Cloud TPUs run in Google data centres and offer very high computational throughput, ideal for training large, complex ML models. The Edge TPU, by contrast, is designed for small, low-power devices and performs model inferencing rather than training, making it ideal for fast, power-efficient on-device ML.

What machine learning frameworks does the Edge TPU support?

The Edge TPU supports TensorFlow Lite only.
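
In practice, a model compiled for the Edge TPU is loaded as an ordinary TensorFlow Lite model with the Edge TPU delegate attached. A minimal sketch, assuming the tflite_runtime package is installed; the model file name is a placeholder:

```python
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',
    # 'libedgetpu.so.1' is the Linux delegate library; macOS and Windows
    # use 'libedgetpu.1.dylib' and 'edgetpu.dll' respectively.
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()
# From here, the standard TensorFlow Lite interpreter API applies.
```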

What type of neural networks does the Edge TPU support?

The first-generation Edge TPU can execute deep feed-forward neural networks (like convolutional neural networks), making it ideal for vision-based ML applications.

How do I create a TensorFlow Lite model for the Edge TPU?

You need to convert your model to TensorFlow Lite, quantize it (using quantization-aware training or post-training quantization), and then compile it with the Edge TPU Compiler. Coral offers Colab tutorials for retraining models with your own data.
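
A hedged sketch of that workflow using post-training full-integer quantization; the stand-in model and random calibration data below are placeholders for your own trained model and representative inputs:

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice, use your own trained network.
model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

def representative_dataset():
    # Yield sample inputs so the converter can calibrate quantization
    # ranges; random data here is a placeholder for real examples.
    for _ in range(10):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU requires all operations to be quantized to 8-bit integers.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())

# Final step, on the command line:
#   edgetpu_compiler model_quant.tflite
# This produces model_quant_edgetpu.tflite, ready for deployment.
```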
