Gateworks, an NXP Gold Partner, is a leader in embedded computing solutions for edge AI and industrial connectivity. Bringing decades of experience and deep integration within NXP’s ecosystem, Gateworks delivers USA-made, industrial-grade platforms, complemented by AI acceleration solutions such as the GW16168 M.2 AI Acceleration Card.
AI-Powered Interfaces for a Smarter Edge
The demand for AI processing at the edge is growing rapidly as engineers seek powerful yet efficient solutions for applications like computer vision, robotics and industrial IoT. To meet this need, Gateworks partnered with NXP to launch the GW16168 M.2 AI Acceleration Card, a purpose-built solution that brings high-performance, industrial-grade AI processing directly to embedded edge platforms.
Accelerate your designs with a platform for real-world AI deployment. Learn more about the GW16168 M.2 AI Acceleration Card.
Where Traditional Edge AI Falls Short
For many embedded systems, the challenge is supporting modern AI workloads without compromising overall system performance, power efficiency or development timelines. This forces engineers to trade off compute capability against system complexity. Common challenges include:
- Limited AI performance on general-purpose CPUs or integrated neural processing units (NPUs)
- System bottlenecks when AI workloads compete with core processing tasks
- Lengthy development cycles due to custom hardware and integration work
- Supply-chain uncertainty and lack of long-term product availability
These constraints slow innovation and increase the total cost of deploying AI at scale.
The GW16168 M.2 AI Acceleration Card Advantage: Built for Real-World AI
The GW16168 separates AI processing from the host CPU by leveraging the NXP Ara240 Discrete Neural Processing Unit (DNPU). This architecture enables up to 40 equivalent tera operations per second (eTOPS) of dedicated inference performance, allowing complex AI workloads to run independently without impacting system responsiveness.
Advantages of this solution include:
- Designed for real-world deployment: With 16GB of onboard low-power double data rate 4 (LPDDR4) memory, this card supports advanced AI models, including vision, intelligent video analysis and large language models (LLMs), directly at the edge.
- Seamless integration, faster time-to-market: The M.2 M-Key 2280 form factor with a PCIe Gen4 interface enables integration into a wide range of embedded systems, cutting months of development time. The card is optimized for NXP-based platforms, including the i.MX 95, i.MX 8M Plus and i.MX 8M Mini applications processors.
- USA-based supply chain and support: Designed, manufactured and supported in the United States, the GW16168 offers supply chain transparency, consistent quality and long-term availability. Gateworks also offers direct access to engineering and FAE support throughout the product lifecycle.
- Powered by NXP: At its core, the NXP Ara240 DNPU delivers high performance per watt and supports industry-standard frameworks including TensorFlow, PyTorch and ONNX, enabling developers to efficiently deploy and scale AI models.
Be Prepared for Next Generation Industrial AI with the GW16168 Solution
As AI continues to move to the edge, developers need solutions that are not only powerful, but also scalable, reliable and easy to integrate. By combining NXP’s advanced AI processing technology with Gateworks’ industrial design, USA-based manufacturing and long-term support, the GW16168 provides a complete solution for deploying AI at the edge.