
Update README.md #168


Open · wants to merge 24 commits into base: main
Commits (24)
- b63c719  Sync master with r2.8 (#772)  (WafaaT, Sep 19, 2022)
- a3f1cd1  Sync master with r2.9 (#858)  (WafaaT, Nov 17, 2022)
- 59d6cf2  Update Pillow to '>=9.3.0' (#884)  (ashahba, Dec 9, 2022)
- d48ffa2  remove supported OS checks (#926)  (WafaaT, Dec 16, 2022)
- de7dda4  Remove Linux/windows OS platform support checks (#927)  (WafaaT, Dec 16, 2022)
- 45372ae  CVE and platform checks fixes (#929)  (WafaaT, Dec 19, 2022)
- e0b3415  Merge branch 'master' of github.com:intel-innersource/frameworks.ai.m…  (WafaaT, Dec 19, 2022)
- b0884e9  Sync master with r2.10 (#984)  (WafaaT, Jan 24, 2023)
- f6bd1ea  Merge branch r2.10 into 'master'  (WafaaT, Jan 26, 2023)
- 362bd03  sync with r2.10 (#1001)  (WafaaT, Feb 3, 2023)
- 6e0d82b  set bf32 flag as env var (#1009)  (WafaaT, Mar 2, 2023)
- 8c4c7c9  upgrade ipython to 8.10.0 to avoid vulnerability (#1024)  (WafaaT, Feb 28, 2023)
- accca70  Sync with upstream (#1152)  (WafaaT, Apr 27, 2023)
- 5d29317  Sync with r2.11 (#1156)  (WafaaT, Apr 28, 2023)
- 45dfdbe  Fix CVEs and minor doc updates (#1164)  (WafaaT, May 5, 2023)
- 7df51b6  Update README.md  (ashahba, Jul 18, 2023)
- 2f44ac5  Merge pull request #145 from IntelAI/ashahba/cloud-data-connector  (WafaaT, Jul 18, 2023)
- 57d3b15  Sync with r2.11.1 (#1393)  (WafaaT, Jul 21, 2023)
- 9304c9f  Sync with r2.12 (#1416)  (WafaaT, Aug 7, 2023)
- 651c0da  Sync with r2.12.1 (#1530)  (WafaaT, Sep 15, 2023)
- 66e8a95  Sync with r3.0 (#1592)  (WafaaT, Oct 17, 2023)
- 0996e0c  Fix CVEs (#1747)  (WafaaT, Dec 5, 2023)
- 6d2175f  Fix CVEs in cloud data connector (#1763)  (WafaaT, Dec 15, 2023)
- ecbe6ae  Update README.md  (yinghu5, Dec 28, 2023)
401 changes: 401 additions & 0 deletions .bandit.yml

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion .gitignore
@@ -3,6 +3,7 @@
*.pyc
.DS_Store
**.log
pretrained/
.pytest*
.venv*
.coverage
@@ -16,4 +17,5 @@ output/
tools/docker/models*
.ipynb_checkpoints
nc_workspace

benchmarks/horovod
cloud_data_connector/credentials.json
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "models/aidd/pytorch/alphafold2/inference/alphafold"]
path = models/aidd/pytorch/alphafold2/inference/alphafold
url = https://github.com/deepmind/alphafold
8 changes: 5 additions & 3 deletions CODEOWNERS
@@ -3,12 +3,14 @@

# These owners will be the default owners for everything in the repo,
# but PR owner should be able to assign other contributors when appropriate
* @ashahba @claynerobison @dmsuehir
* [email protected] @ashahba @claynerobison
datasets @ashahba @claynerobison @dzungductran
docs @claynerobison @mhbuehler
k8s @ashahba @dzungductran @kkasravi
models @agramesh1 @ashraf-bhuiyan @riverliuintel @wei-v-wang
k8s @ashahba @dzungductran
models @ashraf-bhuiyan @riverliuintel
models @riverliuintel
models/**/pytorch/ @leslie-fang-intel @jiayisunx @zhuhaozhe
quickstart [email protected]
quickstart/**/pytorch/ @leslie-fang-intel @jiayisunx @zhuhaozhe

# Order is important. The last matching pattern has the most precedence.
2 changes: 1 addition & 1 deletion Makefile
@@ -30,7 +30,7 @@ all: venv lint unit_test
$(ACTIVATE):
@echo "Updating virtualenv dependencies in: $(VIRTUALENV_DIR)..."
@test -d $(VIRTUALENV_DIR) || $(VIRTUALENV_EXE) $(VIRTUALENV_DIR)
@. $(ACTIVATE) && python -m pip install -r requirements-test.txt
@. $(ACTIVATE) && python -m pip install -r requirements.txt
@touch $(ACTIVATE)

venv: $(ACTIVATE)
212 changes: 135 additions & 77 deletions README.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion SECURITY.md
@@ -7,6 +7,6 @@ Please report security issues or vulnerabilities to the [Intel® Security Center
For more information on how Intel® works to resolve security issues, see
[Vulnerability Handling Guidelines].

[Intel® Security Center]:https://www.intel.com/security
[Intel® Security Center]:https://www.intel.com/content/www/us/en/security-center/default.html

[Vulnerability Handling Guidelines]:https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html
135 changes: 0 additions & 135 deletions benchmarks/README.md

This file was deleted.

37 changes: 32 additions & 5 deletions benchmarks/common/base_benchmark_util.py
@@ -1,7 +1,7 @@
#
# -*- coding: utf-8 -*-
#
# Copyright (c) 2018-2019 Intel Corporation
# Copyright (c) 2018-2023 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -68,8 +68,8 @@ def _define_args(self):

self._common_arg_parser.add_argument(
"-p", "--precision",
help="Specify the model precision to use: fp32, int8, or bfloat16",
required=required_arg, choices=["fp32", "int8", "bfloat16"],
help="Specify the model precision to use: fp32, int8, bfloat16 or fp16",
required=required_arg, choices=["fp32", "int8", "bfloat16", "fp16"],
dest="precision")

self._common_arg_parser.add_argument(
@@ -170,6 +170,10 @@ def _define_args(self):
"--weight-sharing",
help="Enables experimental weight-sharing feature for RN50 int8/bf16 inference only",
dest="weight_sharing", action="store_true")
self._common_arg_parser.add_argument(
"--synthetic-data",
help="Enables synthetic data layer for some models like SSD-ResNet34 where support exists",
dest="synthetic_data", action="store_true")

self._common_arg_parser.add_argument(
"-c", "--checkpoint",
@@ -270,6 +274,20 @@ def _define_args(self):
help="Additional command line arguments (prefix flag start with"
" '--').")

# Check if GPU is enabled.
self._common_arg_parser.add_argument(
"--gpu",
help="Run the benchmark script using GPU",
dest="gpu", action="store_true")

# Check if OneDNN Graph is enabled
self._common_arg_parser.add_argument(
"--onednn-graph",
help="If Intel® Extension for TensorFlow* is installed, oneDNN Graph for INT8 will be enabled"
" by default. Otherwise, default value of this flag will be False.",
dest="onednn_graph", choices=["True", "False"],
default=None)

def _validate_args(self):
"""validate the args and initializes platform_util"""
# check if socket id is in socket number range
@@ -300,8 +318,9 @@ def _validate_args(self):
format(system_num_cores))

if args.output_results and ((args.model_name != "resnet50" and
args.model_name != "resnet50v1_5") or args.precision != "fp32"):
raise ValueError("--output-results is currently only supported for resnet50 FP32 inference.")
args.model_name != "resnet50v1_5") or
(args.precision != "fp32" and args.precision != "fp16")):
raise ValueError("--output-results is currently only supported for resnet50 FP32 or FP16 inference.")
elif args.output_results and (args.mode != "inference" or not args.data_location):
raise ValueError("--output-results can only be used when running inference with a dataset.")

@@ -344,6 +363,14 @@ def _validate_args(self):
"This is less than the number of cores per socket on the system ({})".
format(args.socket_id, cpuset_len_for_socket, self._platform_util.num_cores_per_socket))

if args.gpu:
if args.socket_id != -1:
raise ValueError("--socket-id cannot be used with --gpu parameter.")
if args.num_intra_threads is not None:
raise ValueError("--num-intra-threads cannot be used with --gpu parameter.")
if args.num_inter_threads is not None:
raise ValueError("--num-inter-threads cannot be used with --gpu parameter.")

def initialize_model(self, args, unknown_args):
"""Create model initializer for the specified model"""
model_initializer = None
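The argument-parser changes above add fp16 to the accepted precisions and introduce --synthetic-data, --gpu, and --onednn-graph, while _validate_args() now rejects CPU pinning and threading flags when --gpu is set. The following is a minimal, standalone sketch of that flag surface and validation, for illustration only; it is not the repository's actual launcher code, which builds a much larger parser inside base_benchmark_util.py.

```python
# Illustrative sketch of the new benchmark-launcher flags and the GPU checks.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(description="benchmark launcher flag sketch")
    parser.add_argument("-p", "--precision", dest="precision",
                        choices=["fp32", "int8", "bfloat16", "fp16"],
                        help="Model precision to use")
    parser.add_argument("--synthetic-data", dest="synthetic_data", action="store_true",
                        help="Enable a synthetic data layer where the model supports it")
    parser.add_argument("--gpu", dest="gpu", action="store_true",
                        help="Run the benchmark script using GPU")
    parser.add_argument("--onednn-graph", dest="onednn_graph",
                        choices=["True", "False"], default=None,
                        help="Override the oneDNN Graph default for INT8")
    parser.add_argument("--socket-id", dest="socket_id", type=int, default=-1)
    parser.add_argument("--num-intra-threads", dest="num_intra_threads", type=int, default=None)
    parser.add_argument("--num-inter-threads", dest="num_inter_threads", type=int, default=None)
    return parser


def validate(args):
    # Mirrors the GPU checks added to _validate_args(): socket pinning and
    # intra/inter thread counts do not apply when the workload runs on a GPU.
    if args.gpu:
        if args.socket_id != -1:
            raise ValueError("--socket-id cannot be used with --gpu parameter.")
        if args.num_intra_threads is not None:
            raise ValueError("--num-intra-threads cannot be used with --gpu parameter.")
        if args.num_inter_threads is not None:
            raise ValueError("--num-inter-threads cannot be used with --gpu parameter.")
    return args


if __name__ == "__main__":
    print(validate(build_parser().parse_args(["--gpu", "-p", "fp16"])))
```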
4 changes: 2 additions & 2 deletions benchmarks/common/platform_util.py
@@ -32,7 +32,7 @@
CPU_SOCKETS_STR_ = "Socket(s)"
CORES_PER_SOCKET_STR_ = "Core(s) per socket"
THREADS_PER_CORE_STR_ = "Thread(s) per core"
LOGICAL_CPUS_STR_ = "CPU(s)"
LOGICAL_CPUS_STR_ = "CPU(s):"
NUMA_NODE_CPU_RANGE_STR_ = "NUMA node{} CPU(s):"
ONLINE_CPUS_LIST = "On-line CPU(s) list:"

@@ -229,7 +229,7 @@ def _get_list_from_string_ranges(self, str_ranges):
start, end = section.split("-")
section_list = range(int(start), int(end) + 1)
result_list += section_list
elif(len(section)):
elif len(section):
# This section is either empty or just a single number and not a range
result_list.append(int(section))

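The platform_util.py changes tighten the lscpu key used for logical CPUs ("CPU(s):" instead of the ambiguous "CPU(s)") and simplify the elif in the CPU-range parser. Below is a self-contained approximation of that range-parsing logic; the function name is hypothetical, and the real code is the _get_list_from_string_ranges method in benchmarks/common/platform_util.py.

```python
# Parse an lscpu-style CPU list such as "0-3,8,10-11" into [0, 1, 2, 3, 8, 10, 11].
def parse_cpu_ranges(str_ranges):
    result_list = []
    for section in str_ranges.split(","):
        section = section.strip()
        if "-" in section:
            # A range like "0-3" expands to every CPU id it covers.
            start, end = section.split("-")
            result_list += list(range(int(start), int(end) + 1))
        elif len(section):
            # Empty sections are skipped; a bare number is appended directly.
            result_list.append(int(section))
    return result_list


if __name__ == "__main__":
    assert parse_cpu_ranges("0-3,8,10-11") == [0, 1, 2, 3, 8, 10, 11]
```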