The Neural Compute Stick from Movidius™ allows deep neural network development without the need for expensive, power-hungry supercomputer hardware. Simply prototype and tune deep neural networks with the 100 GFLOPS of computing power the Movidius stick provides; a cloud connection is not required. The USB-stick form factor makes for easy connection to a host PC, while the on-board Myriad 2 Vision Processing Unit (VPU) delivers the necessary computational performance. The Myriad 2 achieves high-efficiency parallel processing courtesy of its twelve Very Long Instruction Word (VLIW) vector processors. Parallel scheduling decisions are made at compile time, relieving the processors of this chore at run time.
• Movidius 600 MHz Myriad 2 SoC with 12 x 128-bit VLIW vector processors
• 2 MB of on-chip memory with 400 GB/s of internal bandwidth
• Supports FP16 and FP32 floating-point operations, plus 8-, 16- and 32-bit integer operations
• Data and power delivered over a single USB 3.0 port on the host PC
• Real-time, on-device inference without Cloud connectivity
• Quickly deploy existing CNN models or uniquely trained networks
• Multiple Movidius Sticks can be networked to the host PC via a suitable hub
• Dimensions: 72.5 x 27 x 14 mm
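The FP16 support listed above matters because networks are typically trained in FP32 and narrowed to FP16 for deployment on the stick. A minimal, illustrative numpy sketch of that conversion (the weight values below are made up, not from any real model):

```python
import numpy as np

# Hypothetical FP32 weights from a trained layer (made-up values).
weights_fp32 = np.array([0.12345678, -1.9876543, 3.14159265], dtype=np.float32)

# Cast to FP16, the stick's native floating-point precision.
weights_fp16 = weights_fp32.astype(np.float16)

# Worst-case rounding error introduced by the narrower format.
max_error = np.max(np.abs(weights_fp32 - weights_fp16.astype(np.float32)))
print(f"max quantization error: {max_error:.6f}")
```

FP16 keeps roughly three decimal digits of precision, so for well-scaled CNN weights the rounding error is typically small relative to the weight magnitudes, which is why the accuracy comparison described below is still worth running.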
The accompanying software toolkit automatically converts a trained Caffe-based Convolutional Neural Network (CNN) into an embedded neural network optimized for the on-board Myriad 2 VPU.
Layer-by-layer performance metrics for both industry-standard and custom-designed neural networks enable effective tuning for optimal real-world performance at ultra-low power. Validation scripts allow developers to compare the accuracy of the optimized model on the device to the original PC-based model.
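The validation step described above amounts to comparing the device's output vector against the reference model's output. A sketch of that comparison, with both score vectors made up for illustration (a real script would obtain them from the device runtime and from the original Caffe model):

```python
import numpy as np

# Made-up classification scores: reference FP32 model vs. on-device FP16 model.
reference = np.array([0.02, 0.70, 0.05, 0.20, 0.03])
on_device = np.array([0.03, 0.68, 0.05, 0.21, 0.03])

# Top-1 agreement: do both models pick the same class?
top1_match = int(np.argmax(reference)) == int(np.argmax(on_device))

# Numerical drift between the two output vectors.
max_abs_diff = float(np.max(np.abs(reference - on_device)))

print(f"top-1 match: {top1_match}, max abs diff: {max_abs_diff:.3f}")
```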
The Movidius stick can also act as a discrete neural-network accelerator, adding dedicated deep-learning inference capability to an existing computing platform for improved performance and power efficiency.
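The accelerator pattern described above reduces to a simple dispatch: run inference on a stick when one is attached, otherwise fall back to the host CPU. A hypothetical sketch of that control flow; `enumerate_sticks`, `infer_on_stick` and `infer_on_cpu` are illustrative stand-ins, not the real SDK API:

```python
def enumerate_sticks():
    """Stand-in for real device enumeration; pretend no stick is attached."""
    return []

def infer_on_stick(stick, image):
    """Stand-in for an on-device forward pass (unreachable in this sketch)."""
    raise RuntimeError("no hardware attached in this sketch")

def infer_on_cpu(image):
    """Stand-in for a host-side framework (e.g. Caffe) forward pass."""
    return "cpu-result"

def classify(image):
    # Prefer the dedicated accelerator; fall back to the host CPU otherwise.
    sticks = enumerate_sticks()
    if sticks:
        return infer_on_stick(sticks[0], image)
    return infer_on_cpu(image)

print(classify("cat.jpg"))
```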