Intel® Compute Library for Deep Neural Networks (clDNN)

Latest revision as of 20:32, 28 March 2023

[https://www.google.com/search?q=clDNN+openVINO Google search...]

* [https://github.com/intel/clDNN clDNN | GIT]
* [https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv OpenVINO Toolkit]
* [[DeepLens - deep learning enabled video camera]]

Compute Library for Deep Neural Networks (clDNN) is an open source performance library for Deep Learning (DL) applications, intended to accelerate DL inference on Intel® Processor Graphics, including HD Graphics and Iris® Graphics. clDNN includes highly optimized building blocks for implementing convolutional neural networks (CNNs), with C and C++ interfaces. We created this project to enable the DL community to innovate on Intel® processors.

* Usages supported: image recognition, image detection, and image segmentation.
* Validated topologies: AlexNet*, VGG(16,19)*, GoogleNet(v1,v2,v3)*, ResNet(50,101,152)*, Faster R-CNN*, SqueezeNet*, SSD_googlenet*, SSD_VGG*, PVANET*, PVANET_REID*, age_gender*, FCN*, and YOLO*.

clDNN is also released as part of the Intel® OpenVINO™ Toolkit, which contains:

* Model Optimizer, a Python*-based command-line tool that imports trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, and Apache MXNet*.
* Inference Engine, an execution engine that uses a common API to deliver inference solutions on the platform of your choice (for example, a GPU via the clDNN library).
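The two bullets above describe a two-stage deploy flow: an offline conversion step (Model Optimizer) that turns a framework-specific trained model into a device-neutral intermediate representation (IR), and a runtime step (Inference Engine) that loads the IR and executes it on a chosen device through one common API. A minimal sketch of that flow in plain Python, using purely hypothetical stand-in names (none of these are real OpenVINO APIs):

```python
# Illustrative stand-ins for the offline-convert / runtime-infer split.
# All names here are hypothetical; they model the workflow, not the toolkit's API.

def model_optimizer(framework_model: dict) -> dict:
    """Offline step: convert a trained model into a device-neutral IR."""
    return {
        "ir_version": 1,
        "layers": [layer["type"] for layer in framework_model["layers"]],
    }

class InferenceEngine:
    """Runtime step: one common API, device chosen at load time."""

    def __init__(self, device: str):
        # "GPU" would correspond to the clDNN-backed graphics plugin.
        self.device = device
        self.ir = None

    def load(self, ir: dict) -> "InferenceEngine":
        self.ir = ir
        return self

    def infer(self, inputs: list) -> dict:
        # A real engine would execute the IR's layers on self.device;
        # here we just report what would run where.
        return {
            "device": self.device,
            "num_layers": len(self.ir["layers"]),
            "inputs": inputs,
        }

# Convert once, then the same IR can be deployed to any supported device.
caffe_model = {"layers": [{"type": "conv"}, {"type": "relu"}, {"type": "fc"}]}
ir = model_optimizer(caffe_model)
result = InferenceEngine("GPU").load(ir).infer([1, 2, 3])
```

The point of the split is that the costly, framework-specific conversion happens once offline, while deployment code is written against a single runtime API regardless of the target device.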

[[File:FPGA-Movidius-dev-workflow-700w.png]]