eIQ inference with TensorFlow™ Lite for Microcontrollers (TF Micro) is optimized for running machine learning models on resource-constrained devices, including NXP's i.MX RT crossover MCUs.
Faster and smaller than the standard TensorFlow Lite open-source machine learning platform, this TF Micro implementation enables inferencing at the edge with lower latency and a smaller binary size.
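To illustrate what edge inferencing with TF Micro looks like in practice, below is a minimal sketch of the standard TensorFlow Lite for Microcontrollers C++ API. It is not the specific eIQ example code; the names `model_data`, `run_inference`, the arena size, and the chosen operators are illustrative assumptions, and constructor details vary between TF Micro versions.

```cpp
// Minimal TF Micro inference sketch (illustrative, not eIQ-specific).
#include <cstdint>

#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Hypothetical model flatbuffer, typically converted from a .tflite file
// into a C array (e.g. with xxd) and linked into the firmware image.
extern const unsigned char model_data[];

// Working memory for tensors; the required size depends on the model.
constexpr int kArenaSize = 10 * 1024;
static uint8_t tensor_arena[kArenaSize];

// Runs one inference: copies input in, invokes the model, copies output out.
int run_inference(const float* input, int input_len,
                  float* output, int output_len) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the model actually uses to keep the
  // binary small (assumed here: fully connected + softmax).
  tflite::MicroMutableOpResolver<2> resolver;
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return -1;

  TfLiteTensor* in = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) in->data.f[i] = input[i];

  if (interpreter.Invoke() != kTfLiteOk) return -1;

  TfLiteTensor* out = interpreter.output(0);
  for (int i = 0; i < output_len; ++i) output[i] = out->data.f[i];
  return 0;
}
```

Registering only the needed operators, rather than linking the full op set, is one of the main ways TF Micro keeps the binary footprint small on MCU-class devices.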