Mobile
Enabling intelligent connections and personalized applications across devices.
Sample Ready Apps
- Super Resolution: sample application to deploy an optimized super-resolution solution on device.
- Image Classification: sample application to deploy an optimized image-classification solution on device.
- Object Detection: sample application to deploy an optimized object-detection solution on device.
Tags
- A “backbone” model is designed to extract task-agnostic representations from specific data modalities (e.g., images, text, speech). This representation can then be fine-tuned for specialized tasks.
- A “foundation” model is versatile and designed for multi-task capabilities, without the need for fine-tuning.
- Generative models produce text, images, or other data, often in response to prompts.
- Large language models (LLMs) are useful for a variety of tasks, including language generation, optical character recognition, information retrieval, and more.
- A “quantized” model can run in low or mixed precision, which can substantially reduce inference latency.
- A “real-time” model can typically achieve 5–60 predictions per second, which corresponds to a per-prediction latency of roughly 17 ms up to 200 ms.
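The throughput band quoted for “real-time” models maps directly to per-prediction latency. A minimal sketch of that arithmetic (the function name is illustrative, not part of any catalog API):

```python
def latency_ms(predictions_per_second: float) -> float:
    """Per-prediction latency (ms) implied by a sustained throughput."""
    return 1000.0 / predictions_per_second

# The "real-time" band above: 5-60 predictions per second.
print(latency_ms(5))   # 200.0 ms per prediction (upper bound)
print(latency_ms(60))  # about 16.7 ms per prediction
```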
95 models
- AOT-GAN
- Baichuan-7B
- ControlNet
- ConvNext-Tiny
- DDRNet23-Slim
- DeepLabV3-Plus-MobileNet
- DeepLabV3-Plus-MobileNet-Quantized
- DeepLabV3-ResNet50
- DenseNet-121
- DETR-ResNet50
- DETR-ResNet50-DC5
- DETR-ResNet101