
Commit acaa283

Squashed commit of the following:
commit 538e97c
Author: Patrice Vignola <[email protected]>
Date: Wed Oct 25 19:56:16 2023 -0700

[DML EP] Add dynamic graph compilation (#17876)

Historically, DML was only able to fuse partitions when all sizes were known in advance or when we were overriding them at session creation time. In practice, though, it should be possible to compile partitions at compute time if the caller knows that the dimensions won't change for every inference (e.g. resizing a webcam window, or padding the input to powers of 2). The compiled graph is cached and reused until the sizes change.

This is opt-in, gated under the `enable_dynamic_graph_fusion` option, which means it is only enabled when the caller requests it, since they have more context on how their model will be called between inferences.

This PR also adds the option to disable metacommands from the Python API, which already existed for the C API but was missing for Python.

commit d30d4d3
Author: Jambay Kinley <[email protected]>
Date: Wed Oct 25 15:34:58 2023 -0700

Add MatMul FP4 and NF4 Support (#18066)

Add a contrib op MatMulBnb4 (FP4 and NF4) and the related toolchain to support quantization of weights. This PR adds:
- a schema for the contrib op MatMulBnb4, which supports FP4 (4-bit floating point) and NF4 (4-bit NormalFloat) quantization of weights.
- a naive implementation of MatMulBnb4 on CPU and GPU, i.e., implemented like MatMul(A, Dequantize(B)).
- a special GEMV implementation for MatMulBnb4 and a related benchmark tool.
- a tool to quantize a model to FP4 or NF4.

commit d88d52e
Author: snadampal <[email protected]>
Date: Wed Oct 25 13:34:57 2023 -0500

[aarch64] Remove mmla kernel support from apple (#18082)

The mmla kernels require additional ISA flags and are currently supported only on Linux. More context is in #15270.

cc: @skottmckay, @chenfucn, @snnn

commit 706e13e
Author: liqun Fu <[email protected]>
Date: Wed Oct 25 10:46:04 2023 -0700

implement affinegrid cpu kernel (#17777)

commit 2c6b31c
Author: pengwa <[email protected]>
Date: Wed Oct 25 15:11:02 2023 +0800

FP16 optimizer automatically detect DeepSpeed compatibility (#18084)

Optimum/Transformers use the accelerate lib to prepare models, so our FP16 optimizer wrapper has not worked for a long time: the namespace is `accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper`, which underneath still calls into the DeepSpeed stage 1 and 2 optimizer.

This PR includes the following changes:
1. Add `accelerate.utils.deepspeed.DeepSpeedOptimizerWrapper` to the modifier registry, plus a check that its contained `optimizer` property MUST be a DeepSpeed stage 1 or 2 optimizer (Stage 3 will be covered later).
2. For DeepSpeed versions > 0.9.1, we store the relevant source code in a version list. As long as the related function in DeepSpeed remains unchanged in new releases, we won't need to manually update the version check any more. If one day the source code no longer matches, a warning is raised asking users to add a new version of the source code to the list.

With the above changes, our FP16 optimizer works again in Optimum.

![image](https://github.com/microsoft/onnxruntime/assets/10530022/d35b4aa9-b371-46f1-98ae-73114f91179b)
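As an illustration of the dynamic graph compilation commit (538e97c) above, here is a hedged sketch of opting in from Python. The option name `enable_dynamic_graph_fusion` comes from the commit message; exposing it as a DML provider option in exactly this way is an assumption, not something the commit confirms.

```python
import onnxruntime as ort

# Hypothetical sketch: opt in to dynamic graph fusion for the DML EP.
# The key name is taken from commit 538e97c; the exact provider-option
# plumbing may differ in the real API.
session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[("DmlExecutionProvider", {"enable_dynamic_graph_fusion": "1"})],
)

# As long as the input shapes stay the same between Run() calls, the partitions
# compiled at compute time are cached and reused; they are recompiled only when
# the shapes change.
```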
commit ae85619
Author: Sumit Agarwal <[email protected]>
Date: Tue Oct 24 19:41:10 2023 -0700

Introduce new optimizer MatMul + BatchNormalization (#17915)

Introduce a new ORT L1 optimizer under the RewriteRule category to fuse MatMul + BatchNormalization nodes. This optimizer looks for a specific pattern observed in one of the impacted customer models and fuses the MatMul and BatchNormalization nodes into a Gemm node. For details on the pattern matching and fusion, please refer to the comment section of `matmul_bn_fusion.cc`.

To visualize, this optimizer will replace the following subgraph with a Gemm node.
<pre>
             MatMul                    GEMM
               |                         |
            Reshape ^      --->       Reshape ^
               |                         |
           Transpose ^              Transpose ^
               |
      BatchNormalization

Note: ^ means there can be >=0 occurrence(s) of that node.
A few example fusable patterns:
* MatMul -> Reshape -> Transpose -> BatchNormalization             ---> GEMM -> Reshape -> Transpose
* MatMul -> Reshape -> BatchNormalization                          ---> GEMM -> Reshape
* MatMul -> Transpose -> BatchNormalization                        ---> GEMM -> Transpose
* MatMul -> Reshape -> Reshape -> BatchNormalization               ---> GEMM -> Reshape -> Reshape
* MatMul -> Reshape -> Transpose -> Reshape -> BatchNormalization  ---> GEMM -> Reshape -> Transpose -> Reshape
* MatMul -> BatchNormalization                                     ---> GEMM
</pre>

Note: This optimizer may evolve in the future to be more generic in terms of pattern matching.

Why is this change required? One user of the ORT+DML EP needs this to better target the model to DML, but the transformation applies more broadly, so it was added as an L1 optimizer.

commit 76e275b
Author: Jian Chen <[email protected]>
Date: Tue Oct 24 15:17:36 2023 -0700

Merge Cuda docker files into a single one (#18020)

commit 6ec45f2
Author: Changming Sun <[email protected]>
Date: Tue Oct 24 13:04:08 2023 -0700

Merge aiinfra-linux-ARM64-CPU-2019 and onnxruntime-linux-ARM64-CPU-2019 (#18069)

Merge the aiinfra-linux-ARM64-CPU-2019 and onnxruntime-linux-ARM64-CPU-2019 machines into a single pool to ease management.

commit efa0cc2
Author: liqun Fu <[email protected]>
Date: Tue Oct 24 10:58:54 2023 -0700

implement isinf20 and isnan20 (#17874)

commit abb3291
Author: Changming Sun <[email protected]>
Date: Tue Oct 24 10:50:12 2023 -0700

Update win-wasm-ci.yml: increase the timeout value (#18023)

commit e63ccd3
Author: Jian Chen <[email protected]>
Date: Tue Oct 24 10:47:23 2023 -0700

Install CUDA 12.2 on Windows (#18044)

commit eb47008
Author: Jiajia Qin <[email protected]>
Date: Tue Oct 24 13:56:56 2023 +0800

[js/webgpu] FP16 Cast, Resize (#18035)

Cast/Resize with f16 are missing in vae-decoder-f16. With this change, vae-decoder-f16 goes from over 1 second to 315 ms.
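The MatMul + BatchNormalization fusion in commit ae85619 above works because the BatchNormalization parameters can be folded into the Gemm weight and bias. A minimal numerical check of that algebra (shapes and variable names are illustrative, not taken from `matmul_bn_fusion.cc`):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8)).astype(np.float32)
W = rng.standard_normal((8, 16)).astype(np.float32)
gamma, beta = rng.standard_normal(16), rng.standard_normal(16)
mean, var, eps = rng.standard_normal(16), rng.random(16) + 0.1, 1e-5

# Unfused: MatMul followed by BatchNormalization over the output channels.
y_unfused = gamma * (X @ W - mean) / np.sqrt(var + eps) + beta

# Fused: fold the BN parameters into Gemm's B (weights) and C (bias) inputs.
scale = gamma / np.sqrt(var + eps)   # one scale per output channel
B_fused = W * scale                  # column-wise rescaling of W
C_fused = beta - mean * scale
y_fused = X @ B_fused + C_fused      # Gemm(X, B_fused, C_fused)

assert np.allclose(y_unfused, y_fused, atol=1e-5)
```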
commit 688524a
Author: Tianlei Wu <[email protected]>
Date: Mon Oct 23 22:00:02 2023 -0700

[CUDA EP] Add warning logs when adding memcpy nodes (#18032)

Memcpy nodes can have a negative impact on performance, and they also prevent ORT from running a CUDA graph. Here we add a warning log for the CUDA EP when this happens. It helps troubleshooting.

For example, when a CUDA graph cannot run, we can check the logs to find out where the Memcpy nodes were inserted (this is also possible by saving the optimized model, but that takes more time and disk space). Note that the warning is per graph; when there are subgraphs, we might see multiple warnings if the issue happens in multiple graphs.

Example logs:
```
2023-10-19 20:58:10.678176531 [I:onnxruntime:, transformer_memcpy.cc:329 AddCopyNode] Add MemcpyFromHost after input_ids for CUDAExecutionProvider
2023-10-19 20:58:10.678198702 [I:onnxruntime:, transformer_memcpy.cc:329 AddCopyNode] Add MemcpyFromHost after /text_model/ArgMax_output_0 for CUDAExecutionProvider
2023-10-19 20:58:10.678211727 [I:onnxruntime:, transformer_memcpy.cc:329 AddCopyNode] Add MemcpyFromHost after /text_model/Gather_3_output_0 for CUDAExecutionProvider
2023-10-19 20:58:10.678257903 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 3 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.
```

commit 555b2af
Author: Chi Lo <[email protected]>
Date: Tue Oct 24 02:41:15 2023 +0000

[TensorRT EP] Add unit test for user provided cuda stream (#17974)

Add a unit test for testing a user-provided CUDA stream.

commit 4ffd022
Author: Chi Lo <[email protected]>
Date: Tue Oct 24 00:46:38 2023 +0000

[TensorRT EP] Refactor of TRT plugins support (#17946)

Make sure the "trt.plugins" custom op domain is only registered once. The bottom line is that the "trt.plugins" custom op domain needs to be registered before model load.

`CreateTensorRTCustomOpDomainList()` is the TRT EP's function to create the "trt.plugins" custom op domain. It is called from the places listed below. (This function only fetches all the TRT plugins from the TRT plugin registry but does not yet register them with the ORT custom op registry; the real registration happens in AddCustomOpDomains().)

C/C++ APIs:
- `OrtApis::SessionOptionsAppendExecutionProvider_TensorRT_XX`: This function makes the session option object contain the "trt.plugins" custom op domain for ORT to register, so that the later session creation API can register the custom op domain accordingly and won't complain about invalid ONNX nodes.
- `InferenceSession::RegisterExecutionProvider`: In some cases, users might create the session object first and later call session_object.RegisterExecutionProvider(). This function calls p_exec_provider->GetCustomOpDomainList(), which returns the "trt.plugins" custom op domain. Otherwise, session_object.Load(model) will complain.

Python APIs:
- `RegisterTensorRTPluginsAsCustomOps`: This needs to be called so that the session option object contains the "trt.plugins" custom op domain for ORT to register.

Different language bindings have slightly different workflows for initializing the session. This might cause a duplicate custom op domain in `session_option.custom_op_domains_`, or `CreateTensorRTCustomOpDomainList()` being called more than once, but checks are in place to make sure the EP's custom op domain won't be registered twice.
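For the Memcpy warning added in commit 688524a above, the warning itself points at `session_options.log_severity_level`; a small sketch of turning on the detailed logs from Python (the model path is a placeholder):

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.log_severity_level = 1  # 1 = INFO; the per-node "AddCopyNode" lines are logged at this level

sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    sess_options=so,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# The session creation log now shows where MemcpyFromHost/MemcpyToHost nodes were inserted.
```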
commit 2c50b75
Author: Dmitri Smirnov <[email protected]>
Date: Mon Oct 23 17:42:20 2023 -0700

Functions Ahead Of Time inlining (#17764)

Inline functions in an EP-aware fashion. The result of this PR is that models that have been inlined by the ONNX inliner and optimized, and models that have been AOT inlined, appear visually identical.

For tests I used two models. The only difference is the resulting size, because the ONNX inliner removes local function definitions and AOT does not. The difference in size for the `HF Mobile` model was 2.5 MB, and for `HF Bart` it was ~500K. It seems that the resulting model size affects the load time more than the actual optimizations. In general, the inlined models grow in size very fast and can easily exceed the 2 GB limit.

Q: Should we make AOT optional?

`If` constant folding and the removal of local inlined models will be coming in other PRs.

Some stats:
![image](https://github.com/microsoft/onnxruntime/assets/11303988/fcb4c815-2e06-4574-8d96-5a0a727d1ecf)

commit f3cfe08
Author: satyajandhyala <[email protected]>
Date: Mon Oct 23 16:02:50 2023 -0700

[JS/Web] Enabled 1d spatial input to GlobalAveragePool (#17973)

Enable one-dimensional spatial input to GlobalAveragePool. Currently only 2D input is supported.

commit 780ee18
Author: snadampal <[email protected]>
Date: Mon Oct 23 16:49:04 2023 -0500

[aarch64] Implement QGEMM kernels with UMMLA/SMMLA instructions (#17160)

This PR adds UMMLA- and SMMLA-based QGEMM kernels for aarch64. This covers:
(i) symmetric quantization (zero point is zero)
(ii) asymmetric quantization (zero point is non-zero)
(iii) per-channel as well as per-tensor quantization
(iv) signed weights (U8S8 Gemm)
(v) unsigned weights (U8U8 Gemm)
(vi) signed activations and weights (S8S8 Gemm)

I've enabled the UMMLA/SMMLA kernels based on a cpuinfo check for `I8MM` support; the MMLA QGEMM kernels are enabled for all devices that support I8MM instructions.

This is to improve INT8 quantized MatMul performance on the aarch64 platform. I have run the benchmarking script below (bert, roberta and gpt2 model inference) on an AWS Graviton3-based c7g.4xl instance and observed up to a 1.33x performance improvement compared to the optimized UDOT QGEMM kernel performance.
```
cd onnxruntime/python/tools/transformers
python3 benchmark.py
```
I have also run the unit tests and made sure all are passing:
```
./build.sh --config RelWithDebInfo --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync
```
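The UMMLA/SMMLA QGEMM kernels from commit 780ee18 above are exercised by INT8-quantized MatMuls. As a hedged example of producing such a model with ORT's standard dynamic quantization tooling (file names are placeholders, and kernel selection itself happens inside MLAS based on the cpuinfo `I8MM` check):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Placeholder paths; weights are quantized so MatMul/Gemm run through the
# INT8 QGEMM path (UMMLA/SMMLA on I8MM-capable aarch64 CPUs, UDOT otherwise).
quantize_dynamic(
    model_input="bert_fp32.onnx",
    model_output="bert_int8.onnx",
    weight_type=QuantType.QUInt8,
)
```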
commit 2a17d5c
Author: kunal-vaishnavi <[email protected]>
Date: Mon Oct 23 13:00:56 2023 -0700

LLaMA Model Optimization (#18021)

This PR contains fusion-level and kernel-level optimizations for [Meta's LLaMA-2](https://blogs.microsoft.com/blog/2023/07/18/microsoft-and-meta-expand-their-ai-partnership-with-llama-2-on-azure-and-windows/).

Some of the added optimizations include:
- SimplifiedLayerNorm changes
  - Fusions for multiple variants
- SkipSimplifiedLayerNorm changes
  - Kernel support for CPU
- Rotary embeddings (previously did not exist)
  - Fusions for multiple variants
  - CPU and CUDA kernels
  - Supports interleaving and non-interleaving in the same kernels
  - Optimized cache that requires half of its originally exported sizes
    - Reduced from `(max_sequence_length, head_size)` to `(max_sequence_length, head_size / 2)`
- Multi-head attention
  - Support for 2D and 3D attention masks
- Group query attention (for FP16 CUDA and INT4 CUDA)
  - Integration with flash attention v2 and past-present buffer sharing
  - Removes need for `attention_mask` input as it is supported in the kernel
- 4-bit quantization
  - `block_size` parameter is available for customizing
  - Support the new changes for [Microsoft version](https://github.com/microsoft/Llama-2-Onnx)
  - Support combinations of the below variants (ex: export ORT version and run with Optimum)

Supported variants of LLaMA-2 include:
- [ORT version](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/python/tools/transformers/models/llama)
  - Produces one ONNX file that is already optimized (and quantized if requested)
  - Integrates with Optimum
- [Another Microsoft version](https://github.com/microsoft/Llama-2-Onnx)
  - Already exported and available off-the-shelf
  - Faster versions of those models will be uploaded there soon
- [Hugging Face version](https://huggingface.co/meta-llama)
  - Models that end with `-hf`
  - Some older and current versions of [`transformers`](https://github.com/huggingface/transformers) and [`optimum`](https://github.com/huggingface/optimum) that export the model to ONNX differently
  - Note that while some older versions are supported, it is recommended to use the latest package versions.

To use the optimizations, please see `README.md` for details. Please note the various `requirements.txt` files for the package versions recommended in order to use these changes.

To run the ORT transformer optimizer separately, run the script as follows:
```
$ cd onnxruntime/onnxruntime/python/tools/transformers/
$ python3 optimizer.py --input <filename>.onnx --output <filename>.onnx --model_type gpt2 --num_heads <number of attention heads> --hidden_size <attention hidden size> --use_external_data_format --opt_level 0
```

This PR helps the following issues:
- #14997
- #16254
- #17681
- #17925
- microsoft/onnxruntime-inference-examples#320

This PR uses changes from the following PRs:
- pytorch/pytorch#104468
- pytorch/pytorch#109759
- #17020
- #17674
- #17890
- #17920
- huggingface/transformers#26162
- huggingface/optimum#1257
- huggingface/optimum#1289
- huggingface/optimum#1462

This PR uses changes from the following issues and PRs to begin supporting the [new TorchDynamo exporter](https://pytorch.org/docs/stable/onnx.html#torchdynamo-based-onnx-exporter):
- huggingface/transformers#26307
- pytorch/pytorch#104903
- pytorch/pytorch#105040
- microsoft/onnxscript#847
- microsoft/onnxscript#862
- microsoft/onnxscript#493
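Commit 2a17d5c above adds rotary embedding kernels with both interleaved and non-interleaved layouts and a half-size `(max_sequence_length, head_size / 2)` cos/sin cache. A hedged NumPy reference of that math (this sketch rotates an already head-split 4D tensor; the contrib op itself takes a 3D `(batch, seq, hidden)` input and splits heads internally):

```python
import numpy as np

def rotary_embedding_ref(x, cos_cache, sin_cache, position_ids, interleaved=False):
    """Reference RoPE rotation for x of shape (batch, seq, num_heads, head_size).

    cos_cache / sin_cache have shape (max_sequence_length, head_size // 2),
    matching the half-size cache described in the commit above; position_ids
    has shape (batch, seq).
    """
    cos = cos_cache[position_ids][:, :, None, :]  # -> (batch, seq, 1, head_size/2)
    sin = sin_cache[position_ids][:, :, None, :]
    out = np.empty_like(x)
    if interleaved:   # rotated pairs are (x[2i], x[2i+1])
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
    else:             # rotated pairs are (x[i], x[i + head_size/2])
        half = x.shape[-1] // 2
        x1, x2 = x[..., :half], x[..., half:]
        out[..., :half] = x1 * cos - x2 * sin
        out[..., half:] = x1 * sin + x2 * cos
    return out
```

In the model, this rotation is applied to the query and key projections before their inner product, as described in the RotaryEmbedding contrib op spec further below.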
commit 8a12b2c
Author: Jiajia Qin <[email protected]>
Date: Tue Oct 24 02:02:19 2023 +0800

[js/webgpu] Fix the transpose error when dims > 4D (#18027)

Currently, the uniform support has bugs when the dims rank is larger than 4 (see #17860, item 1), so this PR only enables shape uniforms when the shape rank is <= 4 for transpose. Otherwise, the compilation errors below are thrown:
```
1 error(s) generated while compiling the shader:
:3:50 error: uniform storage requires that array elements are aligned to 16 bytes, but array element of type 'u32' has a stride of 4 bytes. Consider using a vector or struct as the element type instead.
struct Uniforms { output_size:u32, a_shape:array<u32, 5>, a_strides:array<u32, 5>, output_shape:array<u32, 5>, output_strides:array<u32, 5> };
                                                  ^^^^^^^^^^^^^

:3:7 note: see layout of struct:
/*            align(4) size(84) */ struct Uniforms {
/* offset( 0) align(4) size( 4) */   output_size : u32;
/* offset( 4) align(4) size(20) */   a_shape : array<u32, 5>;
/* offset(24) align(4) size(20) */   a_strides : array<u32, 5>;
/* offset(44) align(4) size(20) */   output_shape : array<u32, 5>;
/* offset(64) align(4) size(20) */   output_strides : array<u32, 5>;
/*                              */ };
struct Uniforms { output_size:u32, a_shape:array<u32, 5>, a_strides:array<u32, 5>, output_shape:array<u32, 5>, output_strides:array<u32, 5> };
       ^^^^^^

:4:42 note: 'Uniforms' used in address space 'uniform' here
@group(0) @binding(2) var<uniform> uniforms: Uniforms;
                                             ^^^^^^^^
```

commit f0d5ea5
Author: Hector Li <[email protected]>
Date: Mon Oct 23 09:01:29 2023 -0700

[QNN EP] Disable flaky test QnnCPUBackendTests.MatMulOp_Broadcast (#18033)

Disable the flaky test QnnCPUBackendTests.MatMulOp_Broadcast. The test failed randomly on Linux.

commit b7ae293
Author: JiCheng <[email protected]>
Date: Sun Oct 22 23:33:29 2023 +0800

Support large model export using multi-gpu (#17990)

This PR implements an exporter that works for large language models (LLMs), such as Llama2-70b or gpt-175. The main idea is to utilize multiple GPUs and dispatch different layers to different GPUs; in short, it simply implements automatic pipeline parallelism. For example, to export Llama2-70b you need 8x V100-32GB, 4x A100-80GB, or more GPU memory. It is expected to export decoder-only models; we have not yet tested encoder-decoder architectures.

Co-authored-by: Justin Chu <[email protected]>

commit 444a0ed
Author: pengwa <[email protected]>
Date: Sat Oct 21 19:45:45 2023 +0800

Avoid one time clone to save memory peak (#17934)

commit 009cd4e
Author: RandySheriffH <[email protected]>
Date: Fri Oct 20 16:12:21 2023 -0700

Allow cuda custom ops allocate deferred cpu mem (#17893)

Expose a new allocator from the CUDA stream. The allocator manages deferred CPU memory, which only gets recycled before stream destruction.

Co-authored-by: Randy Shuai <[email protected]>

commit 2f57625
Author: Chi Lo <[email protected]>
Date: Fri Oct 20 22:09:46 2023 +0000

[TensorRT EP] Add stream sync after enqueue (#18026)

If the model is partitioned into TRT subgraphs and a CUDA EP node, we observed a CUDA stream synchronization issue when multithreading. Calling the stream sync API after enqueue solves this issue without adding much performance overhead.
commit 020824e
Author: liqun Fu <[email protected]>
Date: Fri Oct 20 15:08:25 2023 -0700

Update ONNX to 1.15.0rc1 (#17914)

commit a43c57f
Author: Baiju Meswani <[email protected]>
Date: Fri Oct 20 11:39:57 2023 -0700

ResizeGrad CUDA/ROCM kernel implementation (#17772)

commit cc7e8cc
Author: Changming Sun <[email protected]>
Date: Fri Oct 20 09:24:21 2023 -0700

Update dockerfiles/Dockerfile.source to avoid installing onnx (#17975)

Update dockerfiles/Dockerfile.source to avoid installing the onnx Python package. ONNX is not listed in https://github.com/microsoft/onnxruntime/blob/main/requirements.txt.in, so we do not have to install it. Especially when we do not run tests, the package provides no help when building onnxruntime from source. Resolves #17781.

commit 99b8dca
Author: Yi Zhang <[email protected]>
Date: Fri Oct 20 23:41:40 2023 +0800

Disable dml stage in windows GPU pipeline temporarily. (#18034)
1 parent c32c494 commit acaa283

211 files changed (+16851 / -1968 lines)


.lintrunner.toml

Lines changed: 2 additions & 0 deletions
```diff
@@ -45,6 +45,7 @@ exclude_patterns = [
     'cmake/external/**',
     # ignore generated flatbuffers code
     'onnxruntime/core/flatbuffers/ort_flatbuffers_py/**',
+    'orttraining/orttraining/python/training/optim/_ds_code_store.py',
 ]
 command = [
     'python',
@@ -76,6 +77,7 @@ exclude_patterns = [
     'cmake/**',
     'orttraining/*',
     'onnxruntime/core/flatbuffers/**',
+    'orttraining/orttraining/python/training/optim/_ds_code_store.py',
 ]
 command = [
     'python',
```

cgmanifests/generated/cgmanifest.json

Lines changed: 31 additions & 1 deletion
```diff
@@ -2,6 +2,36 @@
   "$schema": "https://json.schemastore.org/component-detection-manifest.json",
   "Version": 1,
   "Registrations": [
+    {
+      "component": {
+        "type": "git",
+        "git": {
+          "commitHash": "a896e3d066448b3530dbcaa48869fafefd738f57",
+          "repositoryUrl": "https://github.com/emscripten-core/emsdk.git"
+        },
+        "comments": "git submodule at cmake/external/emsdk"
+      }
+    },
+    {
+      "component": {
+        "type": "git",
+        "git": {
+          "commitHash": "7a2ed51a6b682a83e345ff49fc4cfd7ca47550db",
+          "repositoryUrl": "https://github.com/google/libprotobuf-mutator.git"
+        },
+        "comments": "git submodule at cmake/external/libprotobuf-mutator"
+      }
+    },
+    {
+      "component": {
+        "type": "git",
+        "git": {
+          "commitHash": "0c296085f9f65f0f8ef7aec7b9eed55faf37dc40",
+          "repositoryUrl": "https://github.com/onnx/onnx.git"
+        },
+        "comments": "git submodule at cmake/external/onnx"
+      }
+    },
     {
       "component": {
         "type": "git",
@@ -166,7 +196,7 @@
       "component": {
         "type": "git",
         "git": {
-          "commitHash": "fdefbe85ed9c362b95b9b401cd19db068a76141f",
+          "commitHash": "6a20ba82b439ea1fd650da4d389e96b60a1dd828",
           "repositoryUrl": "https://github.com/onnx/onnx.git"
         },
         "comments": "onnx"
```

cmake/deps.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -24,7 +24,7 @@ microsoft_gsl;https://github.com/microsoft/GSL/archive/refs/tags/v4.0.0.zip;cf36
 microsoft_wil;https://github.com/microsoft/wil/archive/refs/tags/v1.0.230629.1.zip;e4a542a323c070376f7c2d1973d0f7ddbc1d2fa5
 mimalloc;https://github.com/microsoft/mimalloc/archive/refs/tags/v2.1.1.zip;d5ee7d34223d0567892db5179849939c8769dc41
 mp11;https://github.com/boostorg/mp11/archive/refs/tags/boost-1.82.0.zip;9bc9e01dffb64d9e0773b2e44d2f22c51aace063
-onnx;https://github.com/onnx/onnx/archive/14303de049144035dfd94ace5f7a3b44773b1aad.zip;250eab9690392b248d75b56e605fb49eca373442
+onnx;https://github.com/onnx/onnx/archive/6a20ba82b439ea1fd650da4d389e96b60a1dd828.zip;179a22ad4cd67109c60031ae4b6cf2f434d8bd7e
 #use the commit of supporting all the plugins and TRT 8.6-GA (https://github.com/onnx/onnx-tensorrt/commit/0462dc31ae78f48744b6141ae376df1f96d3f459)
 onnx_tensorrt;https://github.com/onnx/onnx-tensorrt/archive/0462dc31ae78f48744b6141ae376df1f96d3f459.zip;5ff086361956cceb81ed17453a1fd8db2aa4328d
 protobuf;https://github.com/protocolbuffers/protobuf/archive/refs/tags/v21.12.zip;7cf2733949036c7d52fda017badcab093fe73bfa
```

cmake/external/onnx

Submodule onnx updated 960 files

cmake/onnxruntime_mlas.cmake

Lines changed: 6 additions & 0 deletions
```diff
@@ -339,12 +339,18 @@ else()
         set(mlas_platform_srcs
           ${mlas_platform_srcs}
           ${MLAS_SRC_DIR}/aarch64/HalfGemmKernelNeon.S
+          ${MLAS_SRC_DIR}/aarch64/QgemmS8S8KernelSmmla.S
+          ${MLAS_SRC_DIR}/aarch64/QgemmU8X8KernelUmmla.S
           ${MLAS_SRC_DIR}/activate_fp16.cpp
           ${MLAS_SRC_DIR}/dwconv.cpp
           ${MLAS_SRC_DIR}/halfgemm_kernel_neon.cpp
           ${MLAS_SRC_DIR}/pooling_fp16.cpp
+          ${MLAS_SRC_DIR}/qgemm_kernel_smmla.cpp
+          ${MLAS_SRC_DIR}/qgemm_kernel_ummla.cpp
         )
         set_source_files_properties(${MLAS_SRC_DIR}/aarch64/HalfGemmKernelNeon.S PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+fp16 ")
+        set_source_files_properties(${MLAS_SRC_DIR}/aarch64/QgemmS8S8KernelSmmla.S PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+i8mm ")
+        set_source_files_properties(${MLAS_SRC_DIR}/aarch64/QgemmU8X8KernelUmmla.S PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+i8mm ")
         set_source_files_properties(${MLAS_SRC_DIR}/activate_fp16.cpp PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+fp16 ")
         set_source_files_properties(${MLAS_SRC_DIR}/dwconv.cpp PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+fp16 ")
         set_source_files_properties(${MLAS_SRC_DIR}/pooling_fp16.cpp PROPERTIES COMPILE_FLAGS " -march=armv8.2-a+fp16 ")
```

cmake/onnxruntime_rocm_hipify.cmake

Lines changed: 5 additions & 0 deletions
```diff
@@ -54,6 +54,11 @@ set(contrib_ops_excluded_files
   "quantization/attention_quantization_impl.cuh"
   "quantization/dequantize_blockwise.cuh"
   "quantization/dequantize_blockwise.cu"
+  "quantization/dequantize_blockwise_bnb4.cuh"
+  "quantization/dequantize_blockwise_bnb4.cu"
+  "quantization/matmul_bnb4.cc"
+  "quantization/matmul_bnb4.cuh"
+  "quantization/matmul_bnb4.cu"
   "quantization/matmul_nbits.cc"
   "quantization/matmul_nbits.cuh"
   "quantization/matmul_nbits.cu"
```

dockerfiles/Dockerfile.source

Lines changed: 4 additions & 3 deletions
```diff
@@ -8,13 +8,14 @@ FROM mcr.microsoft.com/cbl-mariner/base/python:3
 MAINTAINER Changming Sun "[email protected]"
 ADD . /code
 
-RUN tdnf install -y tar ca-certificates build-essential python3-numpy cmake python3-setuptools python3-wheel python3-pip curl python3-devel
+RUN tdnf install -y tar ca-certificates build-essential cmake curl python3-devel python3-setuptools python3-wheel python3-pip python3-numpy python3-flatbuffers python3-packaging python3-protobuf
+# The latest cmake version in Mariner2 is 3.21, but we need 3.26+
 RUN /code/dockerfiles/scripts/install_cmake.sh
 
 # Prepare onnxruntime repository & build onnxruntime
-RUN cd /code && python3 -m pip install -r tools/ci_build/github/linux/docker/inference/x64/python/cpu/scripts/requirements.txt && /bin/bash ./build.sh --allow_running_as_root --skip_submodule_sync --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER)
+RUN cd /code && /bin/bash ./build.sh --allow_running_as_root --skip_submodule_sync --config Release --build_wheel --update --build --parallel --cmake_extra_defines ONNXRUNTIME_VERSION=$(cat ./VERSION_NUMBER)
 
 FROM mcr.microsoft.com/cbl-mariner/base/python:3
 COPY --from=0 /code/build/Linux/Release/dist /root
 COPY --from=0 /code/dockerfiles/LICENSE-IMAGE.txt /code/LICENSE-IMAGE.txt
-RUN tdnf install -y ca-certificates python3-setuptools python3-wheel python3-pip && python3 -m pip install /root/*.whl && rm -rf /root/*.whl
+RUN tdnf install -y ca-certificates python3-setuptools python3-wheel python3-pip python3-numpy python3-flatbuffers python3-packaging python3-protobuf python3-mpmath python3-sympy && python3 -m pip install coloredlogs humanfriendly && python3 -m pip install --no-index --find-links /root onnxruntime && rm -rf /root/*.whl
```

docs/ContribOperators.md

Lines changed: 107 additions & 1 deletion
```diff
@@ -47,6 +47,7 @@ Do not modify directly.*
   * <a href="#com.microsoft.Inverse">com.microsoft.Inverse</a>
   * <a href="#com.microsoft.Irfft">com.microsoft.Irfft</a>
   * <a href="#com.microsoft.LongformerAttention">com.microsoft.LongformerAttention</a>
+  * <a href="#com.microsoft.MatMulBnb4">com.microsoft.MatMulBnb4</a>
   * <a href="#com.microsoft.MatMulFpQ4">com.microsoft.MatMulFpQ4</a>
   * <a href="#com.microsoft.MatMulInteger16">com.microsoft.MatMulInteger16</a>
   * <a href="#com.microsoft.MatMulIntegerToFloat">com.microsoft.MatMulIntegerToFloat</a>
@@ -90,6 +91,7 @@ Do not modify directly.*
   * <a href="#com.microsoft.RemovePadding">com.microsoft.RemovePadding</a>
   * <a href="#com.microsoft.RestorePadding">com.microsoft.RestorePadding</a>
   * <a href="#com.microsoft.Rfft">com.microsoft.Rfft</a>
+  * <a href="#com.microsoft.RotaryEmbedding">com.microsoft.RotaryEmbedding</a>
   * <a href="#com.microsoft.SampleOp">com.microsoft.SampleOp</a>
   * <a href="#com.microsoft.Sampling">com.microsoft.Sampling</a>
   * <a href="#com.microsoft.SkipLayerNormalization">com.microsoft.SkipLayerNormalization</a>
@@ -2503,6 +2505,62 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
 </dl>
 
 
+### <a name="com.microsoft.MatMulBnb4"></a><a name="com.microsoft.matmulbnb4">**com.microsoft.MatMulBnb4**</a>
+
+  MatMulBnb4 is a MatMul with weight quantized with 4 bits using either FP4 or NF4 data type (https://arxiv.org/pdf/2305.14314.pdf). It does Matrix Multiplication like MatMul (https://github.com/onnx/onnx/blob/main/docs/Operators.md#matmul) with differences:
+    1. Input B is a 2D constant Matrix. Its input feature count and output feature count are specified by attribute 'K' and 'N'.
+    2. Input B is quantized with 4 bits with quantization data type specified by attribute 'quant_type'. It is transposed, flattened and quantized blockwisely with block size specified by attribute 'block_size'.
+       And block_size is not an arbitrary number and must be a power of 2 and not smaller than 16, like 16, 32, 64, 128,..
+    3. Input B's quantization constants or scales are specified by input 'absmax'.
+
+  Input B is stored as uint8_t with shape: [(N * K + 1) / 2].
+  Input absmax is stored in same type as original type of B(float32, float16) with shape like: [(N * K + block_size - 1) / block_size].
+
+#### Version
+
+This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
+
+#### Attributes
+
+<dl>
+<dt><tt>K</tt> : int (required)</dt>
+<dd>size of each input feature</dd>
+<dt><tt>N</tt> : int (required)</dt>
+<dd>size of each output feature</dd>
+<dt><tt>block_size</tt> : int (required)</dt>
+<dd>number of groupsize used for weight quantization. It needs to be a power of 2 and not smaller than 16.</dd>
+<dt><tt>quant_type</tt> : int (required)</dt>
+<dd>quantization data type. 0 for FP4, 1 for NF4.</dd>
+</dl>
+
+#### Inputs
+
+<dl>
+<dt><tt>A</tt> : T1</dt>
+<dd>The input tensor, not quantized</dd>
+<dt><tt>B</tt> : T2</dt>
+<dd>1-dimensional quantized data for weight</dd>
+<dt><tt>absmax</tt> : T1</dt>
+<dd>quantization constants</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>Y</tt> : T1</dt>
+<dd>tensor. The output tensor has the same rank as the input. </dd>
+</dl>
+
+#### Type Constraints
+
+<dl>
+<dt><tt>T1</tt> : tensor(float), tensor(float16)</dt>
+<dd>Constrain input and output types to float/half_float tensors.</dd>
+<dt><tt>T2</tt> : tensor(uint8)</dt>
+<dd>Constrain quantized weight types to uint8.</dd>
+</dl>
+
+
 ### <a name="com.microsoft.MatMulFpQ4"></a><a name="com.microsoft.matmulfpq4">**com.microsoft.MatMulFpQ4**</a>
 
   Matrix product with right hand matrix being pre-packed and quantized int4 data blob.
@@ -2834,7 +2892,7 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
 <dt><tt>bias</tt> (optional) : T</dt>
 <dd>Bias tensor with shape (hidden_size + hidden_size + v_hidden_size) from input projection</dd>
 <dt><tt>key_padding_mask</tt> (optional) : M</dt>
-<dd>Key padding mask with shape (batch_size) or (3 * batch_size + 2) or (batch_size, kv_sequence_length)</dd>
+<dd>Key padding mask with shape (batch_size), (3 * batch_size + 2), (batch_size, kv_sequence_length), (batch_size, total_sequence_length), or (batch_size, sequence_length, total_sequence_length)</dd>
 <dt><tt>relative_position_bias</tt> (optional) : T</dt>
 <dd>relative position bias: addition to QxK' with shape (batch_size, num_heads, sequence_length, total_sequence_length) or (1, num_heads, sequence_length, total_sequence_length)</dd>
 <dt><tt>past_key</tt> (optional) : T</dt>
@@ -4796,6 +4854,54 @@ This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
 </dl>
 
 
+### <a name="com.microsoft.RotaryEmbedding"></a><a name="com.microsoft.rotaryembedding">**com.microsoft.RotaryEmbedding**</a>
+
+  RotaryEmbedding is the implementation of rotary positional embeddings (RoPE). The positions are represented as rotation matrices
+  that are multiplied to query and key before the inner product of query and key is taken.
+
+#### Version
+
+This version of the operator has been available since version 1 of the 'com.microsoft' operator set.
+
+#### Attributes
+
+<dl>
+<dt><tt>interleaved</tt> : int</dt>
+<dd>Rotate using interleaved pattern. Default value is 0 (False).</dd>
+<dt><tt>scale</tt> : float</dt>
+<dd>Custom scale will be used if specified. Default value is 1.0</dd>
+</dl>
+
+#### Inputs
+
+<dl>
+<dt><tt>input</tt> : T</dt>
+<dd>3D tensor with shape (batch_size, sequence_length, hidden_size)</dd>
+<dt><tt>position_ids</tt> : M</dt>
+<dd>1D tensor with shape (1) or 2D tensor with shape (batch_size, sequence_length)</dd>
+<dt><tt>cos_cache</tt> : T</dt>
+<dd>2D tensor with shape (max_sequence_length, head_size / 2).</dd>
+<dt><tt>sin_cache</tt> : T</dt>
+<dd>2D tensor with shape (max_sequence_length, head_size / 2).</dd>
+</dl>
+
+#### Outputs
+
+<dl>
+<dt><tt>output</tt> : T</dt>
+<dd>3D tensor with shape (batch_size, sequence_length, hidden_size)</dd>
+</dl>
+
+#### Type Constraints
+
+<dl>
+<dt><tt>T</tt> : tensor(float), tensor(float16)</dt>
+<dd>Constrain input and output types to float tensors.</dd>
+<dt><tt>M</tt> : tensor(int64)</dt>
+<dd>Constrain input and output types to integer tensors</dd>
+</dl>
+
+
 ### <a name="com.microsoft.SampleOp"></a><a name="com.microsoft.sampleop">**com.microsoft.SampleOp**</a>
 
   Sample echo operator.
```
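To make the MatMulBnb4 storage layout in the docs/ContribOperators.md diff above concrete, here is a hedged NumPy reference under stated assumptions: the 16-entry codebook and the nibble packing order are placeholders for illustration, not the actual FP4/NF4 tables or layout used by the kernels.

```python
import numpy as np

# PLACEHOLDER codebook: the real FP4/NF4 lookup tables come from the
# bitsandbytes / QLoRA quantization scheme and are not reproduced here.
CODEBOOK = np.linspace(-1.0, 1.0, 16, dtype=np.float32)

def matmul_bnb4_ref(A, B_quant, absmax, K, N, block_size):
    # B_quant packs two 4-bit codes per byte and has shape [(N * K + 1) / 2].
    # Which nibble comes first is an assumption in this sketch.
    hi, lo = B_quant >> 4, B_quant & 0x0F
    codes = np.stack([hi, lo], axis=-1).reshape(-1)[: N * K]

    # Each consecutive block of `block_size` values shares one absmax scale.
    values = CODEBOOK[codes] * absmax[np.arange(N * K) // block_size]

    # B was transposed to (N, K) and flattened before quantization, so undo that.
    B = values.reshape(N, K).T   # back to (K, N)
    return A @ B                 # same semantics as MatMul(A, Dequantize(B))
```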
