This is the repository of the Deep Learning Inference Benchmark (DLI). DLI is a benchmark for deep learning inference on various hardware. The goal of the project is to develop software for measuring the performance of a wide range of deep learning models inferred with various popular frameworks on various hardware, as well as to publish the obtained data regularly.
The main advantage of DLI over existing benchmarks is the availability of performance results for a large number of deep models inferred on Intel platforms (Intel CPUs, Intel Processor Graphics, Intel Movidius Neural Compute Stick).
DLI supports inference using the following frameworks:
- Intel® Distribution of OpenVINO™ Toolkit.
- Intel® Optimization for Caffe.
- Intel® Optimization for TensorFlow.
- TensorFlow Lite.
- ONNX Runtime.
More information about DLI is available here (in Russian) or here (in English).
This project is licensed under the terms of the Apache 2.0 license.
Please consider citing the following papers.
- Kustikova V., Vasilyev E., Khvatov A., Kumbrasiev P., Rybkin R., Kogteva N. DLI: Deep Learning Inference Benchmark // Communications in Computer and Information Science. V. 1129. 2019. P. 542-553.
- Sidorova A.K., Alibekov M.R., Makarov A.A., Vasiliev E.P., Kustikova V.D. Automation of collecting performance indicators for the inference of deep neural networks in Deep Learning Inference Benchmark // Mathematical Modeling and Supercomputer Technologies. Proceedings of the XXI International Conference (N. Novgorod, November 22–26, 2021). Nizhny Novgorod: Nizhny Novgorod State University Publishing House, 2021. 423 p. https://hpc-education.unn.ru/files/conference_hpc/2021/MMST2021_Proceedings.pdf (In Russian)
The repository is structured as follows:
- `docker` directory contains Dockerfiles.
  - `OpenVINO_DLDT` is a directory of Dockerfiles for Intel® Distribution of OpenVINO™ Toolkit.
  - `Caffe` is a directory of Dockerfiles for Intel® Optimization for Caffe.
  - `TensorFlow` is a directory of Dockerfiles for Intel® Optimization for TensorFlow.
- `docs` directory contains auxiliary documentation. Please find complete documentation at the Wiki page.
- `results` directory contains benchmarking and validation results.
  - `benchmarking` contains benchmarking results in html and xlsx formats.
  - `accuracy` contains accuracy results in html and xlsx formats.
  - `validation` contains tables that confirm correctness of the inference implementation.
    - `validation_results.md` is a table that confirms correctness of the inference implementation based on Intel® Distribution of OpenVINO™ toolkit for public models.
    - `validation_results_intel_models.md` is a table that confirms correctness of the inference implementation based on Intel® Distribution of OpenVINO™ toolkit for models trained by Intel engineers and available in the Open Model Zoo.
    - `validation_results_caffe.md` is a table that confirms correctness of the inference implementation based on Intel® Optimization for Caffe for several public models.
    - `validation_results_tensorflow.md` is a table that confirms correctness of the inference implementation based on Intel® Optimization for TensorFlow for several public models.
  - `models_checklist.md` contains a list of supported deep models (in accordance with the Open Model Zoo).
- `src` directory contains benchmark sources.
  - `accuracy_checker` contains scripts to check deep model accuracy using the Accuracy Checker of Intel® Distribution of OpenVINO™ toolkit.
  - `benchmark` is a set of scripts to estimate inference performance of different models on a single local computer.
  - `config_maker` contains a GUI application to make configuration files for the benchmark components.
  - `configs` contains template configuration files.
  - `csv2html` is a set of scripts to convert performance and accuracy tables from csv to html.
  - `csv2xlsx` is a set of scripts to convert performance and accuracy tables from csv to xlsx.
  - `deployment` is a set of deployment tools.
  - `inference` contains the Python inference implementation.
  - `node_info` contains a set of functions to get information about the computational node.
  - `onnxruntime_benchmark` is a tool for measuring deep learning inference performance with ONNX Runtime. The implementation uses the OpenVINO Benchmark C++ tool as a reference and sticks to its measurement methodology, thus providing consistent performance results (a minimal timing sketch in this spirit follows the list below).
  - `quantization` contains scripts to quantize models to INT8 precision using the Post-Training Optimization Tool (POT) of Intel® Distribution of OpenVINO™ toolkit.
  - `remote_control` contains scripts to execute the benchmark remotely.
  - `utils` is a package of auxiliary utilities.
- `test` contains smoke tests.
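For illustration only, the sketch below shows the kind of measurement loop such a tool performs: a warm-up run followed by timed inference runs over random data. It is not DLI's own code; the model path `model.onnx`, the input shape handling, and the iteration count are assumptions, and only the standard `onnxruntime` Python API is used.

```python
# Minimal ONNX Runtime timing sketch (illustrative only, not part of DLI).
# Assumes a model file "model.onnx" with a single float32 input.
import time

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_meta = session.get_inputs()[0]

# Replace dynamic dimensions (symbolic or None) with 1.
shape = [dim if isinstance(dim, int) else 1 for dim in input_meta.shape]
data = np.random.rand(*shape).astype(np.float32)

# Warm-up run so one-time initialization costs are excluded from the measurements.
session.run(None, {input_meta.name: data})

latencies = []
for _ in range(100):  # the number of iterations is an arbitrary choice here
    start = time.perf_counter()
    session.run(None, {input_meta.name: data})
    latencies.append(time.perf_counter() - start)

print(f"median latency: {np.median(latencies) * 1000:.2f} ms")
print(f"throughput: {len(latencies) / sum(latencies):.2f} FPS")
```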
The latest documentation for the Deep Learning Inference Benchmark (DLI) is available here. This documentation contains detailed information about the DLI components and provides step-by-step guides to build and run the DLI benchmark on your own test infrastructure.
See the DLI Wiki to get more information.
See the DLI Wiki to get more information about benchmarking results on available hardware.
Report questions, issues, and suggestions using: