Qualcomm AI Engine Direct - Support Qnn IR backend in online preparation #8876

Merged
1 change: 1 addition & 0 deletions .ci/scripts/build-qnn-sdk.sh
@@ -33,6 +33,7 @@ set_up_aot() {
cmake .. \
-DCMAKE_INSTALL_PREFIX=$PWD \
-DEXECUTORCH_BUILD_QNN=ON \
-DANDROID_NATIVE_API_LEVEL=30 \
-DQNN_SDK_ROOT=${QNN_SDK_ROOT} \
-DEXECUTORCH_BUILD_DEVTOOLS=ON \
-DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
7 changes: 6 additions & 1 deletion backends/qualcomm/CMakeLists.txt
@@ -70,6 +70,7 @@ endif()

include_directories(
BEFORE ${_common_include_directories} ${QNN_SDK_ROOT}/include/QNN
${QNN_SDK_ROOT}/share/QNN/converter/jni
Collaborator:

> Also, I'm getting this error:
>
> In file included from fbcode/executorch/backends/qualcomm/runtime/backends/QnnContextCommon.cpp:10:
> buck-out/v2/gen/fbcode/5d832762563ef7a9/executorch/backends/qualcomm/runtime/__runtime__/buck-headers/executorch/backends/qualcomm/runtime/backends/QnnDlcManager.h:15:10: fatal error: 'QnnWrapperUtils.hpp' file not found
>    15 | #include "QnnWrapperUtils.hpp"
>       |          ^~~~~~~~~~~~~~~~~~~~~
> 1 error generated.
>
> Where does this file come from?

The QnnWrapperUtils.hpp file is located under ${QNN_SDK_ROOT}/share/QNN/converter/jni.

Contributor:

Found it; looks like we're missing some files for the internal build. I added a target for the files inside /share/QNN/converter/jni, and now run into this error:

ld.lld: error: undefined symbol: qnn_wrapper_api::strnDup(char const*, unsigned long)
>>> referenced by QnnWrapperUtils.cpp:75 (./third-party/qualcomm/qnn/qnn-2.28/share/QNN/converter/jni/QnnWrapperUtils.cpp:75)
>>>               buck-out/v2/gen/fbsource/7d5d1c564400faae/third-party/qualcomm/qnn/qnn-2.28/__app_sources__/__objects__/share/QNN/converter/jni/QnnWrapperUtils.cpp.pic.o:(qnn_wrapper_api::deepCopyQnnTensors(Qnn_Tensor_t&, Qnn_Tensor_t&))
>>> referenced by QnnModel.cpp:403 (./third-party/qualcomm/qnn/qnn-2.28/share/QNN/converter/jni/QnnModel.cpp:403)
>>>               buck-out/v2/gen/fbsource/7d5d1c564400faae/third-party/qualcomm/qnn/qnn-2.28/__app_sources__/__objects__/share/QNN/converter/jni/QnnModel.cpp.pic.o:(qnn_wrapper_api::getGraphInfoFromModels(qnn_wrapper_api::QnnModel*, unsigned int, qnn_wrapper_api::GraphInfo***))

It looks like

char *strnDup(const char *source, size_t maxlen);

is declared inside QnnModelPal.hpp; where is the source function defined?

Contributor:

Ah, found it, never mind.

Do you know how much of a size increase this will add on Android? Also, is it needed for x86 only, or for both?

Collaborator:

> ld.lld: error: undefined symbol: qnn_wrapper_api::strnDup(char const*, unsigned long) [...]

We don't need to build the QnnWrapperUtils.cpp file; we only use the macro inside QnnWrapperUtils.hpp.

> Do you know how much of a size increase this will add on Android? Also, is it needed for x86 only, or for both?

Regarding the size increase, libqnn_executorch_backend.so grows from 11.79 MB to 12.19 MB in total on Android, comparing mainline against this PR.
It is required for both x86 and Android.
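To illustrate the header-only dependency (a minimal sketch, assuming the SDK converter headers are on the include path; the exact declarations pulled from the header are inferred from the linker output above, not taken from this PR):

#include <cstdint>

// Sketch: consuming QnnWrapperUtils.hpp header-only. No translation unit
// from share/QNN/converter/jni is compiled, so out-of-line definitions such
// as qnn_wrapper_api::strnDup are never referenced and the undefined-symbol
// error above cannot occur.
#include "QnnWrapperUtils.hpp"

// The qnn_wrapper_api::GraphInfo types named in the linker output need only
// the declarations from the header to be usable.
qnn_wrapper_api::GraphInfo** graphs_info = nullptr;
uint32_t num_graphs = 0;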

Contributor:

Oh, hmm, do you mean I just need to add some files? I currently add a dependency for the Buck target like this:

cxx_library(
    name = "app_sources",
    srcs = glob([
        "share/QNN/converter/jni/*.cpp",
    ]) + select({
        "DEFAULT": glob([
            "share/QNN/converter/jni/linux/*.cpp",
        ]),
        "ovr_config//os:linux": glob([
            "share/QNN/converter/jni/linux/*.cpp",
        ]),
        "ovr_config//os:windows": glob([
            "share/QNN/converter/jni/windows/*.cpp",
        ]),
    }),
    headers = glob([
        "share/QNN/converter/jni/*.hpp",
    ]),
    header_namespace = "",
    exported_headers = subdir_glob([
        ("share/QNN/converter/jni", "*.hpp"),
    ]),
    visibility = [
        "PUBLIC",
    ],
    deps = [
        ":api",
    ],
)

Can you help me understand what is required and what is not? And if you have a better name, even better.

Contributor:

Can we consider making it optional in the future? For production, the runtime size budget can sometimes be limited.

Collaborator (@DannyYuyang-quic, Apr 23, 2025):

> Oh, hmm, do you mean I just need to add some files? I currently add a dependency for the Buck target like this:

cxx_library(
    name = "qnn_converter_sources",
    exported_headers = subdir_glob([
        ("share/QNN/converter/jni", "QnnWrapperUtils.hpp"),
    ]),
    visibility = [
        "PUBLIC",
    ],
    deps = [
        ":api",
    ],
)

Yes, we only need QnnWrapperUtils.hpp, so I think our dependency can just be like this?

> Can we consider making it optional in the future? For production, the runtime size budget can sometimes be limited.

I see. Ideally it would be great to make it optional; I will follow up with a corresponding PR for this.
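A rough sketch of what such an opt-out could look like (hypothetical: the EXECUTORCH_ENABLE_QNN_IR flag name and the guard placement inside QnnManager's constructor are assumptions, not part of this PR):

// Hypothetical compile-time guard around the DLC manager created in
// QnnManager's constructor; building with the flag off would drop the
// QNN IR path and reclaim the ~0.4 MB measured above.
#ifdef EXECUTORCH_ENABLE_QNN_IR
qnn_dlc_manager_ =
    std::make_shared<QnnDlcManager>(qnn_context_blob_, options_);
#else
qnn_dlc_manager_ = nullptr;  // online preparation would then fail fast at Init()
#endif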

${EXECUTORCH_SOURCE_DIR}/third-party/flatbuffers/include
${EXECUTORCH_SOURCE_DIR}/runtime/core/portable_type/c10
)
@@ -117,6 +118,7 @@ add_library(qnn_backend STATIC)
add_library(qnn_backend_cache STATIC)
add_library(qnn_context STATIC)
add_library(qnn_custom_protocol STATIC)
add_library(qnn_dlc_manager STATIC)
add_library(qnn_device STATIC)
add_library(qnn_executorch_backend SHARED)
add_library(qnn_executorch_header INTERFACE)
@@ -174,8 +176,11 @@ target_link_libraries(
qnn_factory PRIVATE qnn_schema qnn_backend qnn_device qnn_context qnn_graph
qnn_mem_manager qnn_custom_protocol
)

target_link_libraries(qnn_dlc_manager PRIVATE qnn_factory qnn_backend qnn_device qnn_context qnn_graph qnn_mem_manager)

target_link_libraries(
-qnn_manager PRIVATE qnn_factory wrappers qnn_schema utils shared_buffer
+qnn_manager PRIVATE qnn_factory wrappers qnn_schema utils shared_buffer qnn_dlc_manager
)
target_link_libraries(
qnn_executorch_backend PRIVATE qnn_executorch_header qnn_schema qnn_manager
9 changes: 6 additions & 3 deletions backends/qualcomm/aot/python/PyQnnManagerAdaptor.h
@@ -195,7 +195,7 @@ class PyQnnManager {
std::vector<std::shared_ptr<OpWrapper>>& op_wrappers) {
QnnExecuTorchContextBinary binary_info;

-if (qnn_manager_->IsOnlinePrepare() || qnn_manager_->IsMultipleGraphs()) {
+if (qnn_manager_->IsMultipleGraphs()) {
builder_.Reset();
std::vector<uint8_t> tensor_data;
std::vector<uint64_t> offsets;
@@ -305,8 +305,11 @@ class PyQnnManager {
QNN_EXECUTORCH_LOG_ERROR("Fail to compile QNN graph");
return py::array_t<char>(0);
}
-if (qnn_manager_->GetContextBinary(binary_info) !=
-executorch::runtime::Error::Ok) {
+auto qnn_executorch_options = GetQnnExecuTorchOptions(
+qnn_executorch_option_ptr_.cast<std::string_view>().data());
+if (qnn_executorch_options->saver() ||
+qnn_manager_->GetContextBinary(binary_info) !=
+executorch::runtime::Error::Ok) {
return py::array_t<char>(0);
}
}
2 changes: 1 addition & 1 deletion backends/qualcomm/builders/op_dequantize.py
@@ -45,7 +45,7 @@ def define_node(
dequant_output_tensors = [output_tensor_wrapper]

dequant_op = PyQnnWrapper.PyQnnOpWrapper(
-node.target.__name__,
+node.name,
QNN_OP_PACKAGE_NAME_QTI_AISW,
OpDequantize.op_name,
)
2 changes: 1 addition & 1 deletion backends/qualcomm/builders/op_quantize.py
@@ -52,7 +52,7 @@ def define_node(
quant_output_tensors = [output_tensor_wrapper]

quant_op = PyQnnWrapper.PyQnnOpWrapper(
-node.target.__name__,
+node.name,
QNN_OP_PACKAGE_NAME_QTI_AISW,
OpQuantize.op_name,
)
9 changes: 9 additions & 0 deletions backends/qualcomm/qnn_preprocess.py
@@ -15,6 +15,9 @@
from executorch.backends.qualcomm.builders.node_visitor import get_node_visitors
from executorch.backends.qualcomm.builders.qnn_constants import OpContextLoader
from executorch.backends.qualcomm.partition.utils import generate_qnn_executorch_option
from executorch.backends.qualcomm.serialization.qc_schema_serialize import (
flatbuffer_to_option,
)
from executorch.exir.backend.backend_details import (
BackendDetails,
CompileSpec,
@@ -92,6 +95,12 @@ def preprocess(
qnn_manager.GetGraphNames()[0],
[py_op_wrapper.GetOpWrapper() for py_op_wrapper in py_op_wrapper_list],
)

obj_options = flatbuffer_to_option(option)
if obj_options.saver:
exit(
f"Record all QNN API calls from saver backend at: {obj_options.saver_output_dir}"
)
assert len(qnn_context_binary) != 0, "Failed to generate Qnn context binary."
qnn_manager.Destroy()
# For now, debug_handle_map is not used by QNN ExecuTorch
6 changes: 2 additions & 4 deletions backends/qualcomm/runtime/QnnExecuTorchBackend.cpp
@@ -36,7 +36,6 @@ Result<DelegateHandle*> QnnExecuTorchBackend::init(
// convert SizedBuffer to qnn ExecuTorch option
QnnExecuTorchContextBinary qnn_context_blob;
const qnn_delegate::QnnExecuTorchOptions* qnn_executorch_options = nullptr;

auto [status, signature, ctx_size, ctx_bin] =
QnnContextCustomProtocol().DeserializeContextCustomBuffer(
const_cast<void*>(processed->data()));
@@ -74,7 +73,6 @@ Result<DelegateHandle*> QnnExecuTorchBackend::init(
// NOTE: Since we use placement new and since this type is not trivially
// destructible, we must call the destructor manually in destroy().
new (qnn_manager) QnnManager(qnn_executorch_options, qnn_context_blob);

// TODO: this is a temporal solution for multi-graph support, will be
// removed once framework starts to accept runtime configuration
// ---
@@ -96,9 +94,9 @@

if (qnn_manager->IsOnlinePrepare()) {
ET_CHECK_OR_RETURN_ERROR(
-qnn_manager->CompileQcir() == Error::Ok,
+qnn_manager->CompileDlc() == Error::Ok,
Internal,
"Fail to compile binary in qcir format");
"Fail to compile binary in Dlc format");
} else {
for (const std::string& graph_name : qnn_manager->GetGraphNames()) {
ET_CHECK_OR_RETURN_ERROR(
118 changes: 96 additions & 22 deletions backends/qualcomm/runtime/QnnManager.cpp
@@ -37,9 +37,7 @@ bool CompareExportedInput(
}

QnnManager::~QnnManager() {
-backend_params_ptr_.reset(new BackendConfigParameters());
-logger_.reset();
-qnn_loaded_backend_.TerminateAllBackends();
+Destroy();
}

QnnManager::QnnManager(
@@ -96,10 +94,14 @@ QnnManager::QnnManager(
}
qnn_loaded_backend_ = QnnImplementation(library_path);
backend_params_ptr_ = std::make_unique<BackendConfigParameters>();

qnn_dlc_manager_ =
std::make_shared<QnnDlcManager>(qnn_context_blob_, options_);
}

Error QnnManager::LoadQnnLibrary() {
-Error ret = qnn_loaded_backend_.Load(nullptr);
+auto config = GetImplementationConfig();
+Error ret = qnn_loaded_backend_.Load(config.get());
return ret;
}

@@ -286,7 +288,11 @@ Error QnnManager::Init() {
"parameters for Qnn executorch backend type %d",
options_->backend_options()->backend_type());
backend_params_ptr_ = QnnBackendFactory().Create(
-qnn_loaded_backend_, logger_.get(), qnn_context_blob_, options_);
+qnn_loaded_backend_,
+logger_.get(),
+qnn_context_blob_,
+options_,
+qnn_dlc_manager_.get());
ET_CHECK_OR_RETURN_ERROR(
backend_params_ptr_ != nullptr,
Internal,
@@ -326,6 +332,18 @@ Error QnnManager::Init() {
Internal,
"Fail to pre register custom memory handle");
#endif

if (IsOnlinePrepare()) {
Qnn_ApiVersion_t qnn_version = {QNN_VERSION_INIT};
qnn_loaded_backend_.GetQnnInterface().qnn_backend_get_api_version(
&qnn_version);

ET_CHECK_OR_RETURN_ERROR(
qnn_dlc_manager_->SetUpDlcEnvironment(qnn_version.coreApiVersion) ==
Error::Ok,
Internal,
"Fail to setup Dlc environment");
}
return Error::Ok;
}

@@ -446,9 +464,11 @@ Error QnnManager::ProfileExecuteData(
void QnnManager::Destroy() {
QNN_EXECUTORCH_LOG_INFO("Destroy Qnn backend parameters");
backend_params_ptr_.reset(new BackendConfigParameters());
qnn_dlc_manager_->ResetBackendParams();
logger_.reset();

qnn_dlc_manager_->ResetLogger();
qnn_loaded_backend_.TerminateAllBackends();
qnn_dlc_manager_->TerminateAllBackends();
}

bool QnnManager::IsNodeSupportedByBackend(
@@ -483,11 +503,64 @@ bool QnnManager::IsNodeSupportedByBackend(

Error QnnManager::GetContextBinary(
QnnExecuTorchContextBinary& qnn_executorch_context_binary) {
-ET_CHECK_OR_RETURN_ERROR(
-backend_params_ptr_->qnn_context_ptr_->GetContextBinary(
-qnn_executorch_context_binary) == Error::Ok,
-Internal,
-"Fail to get context binary.");
+if (IsOnlinePrepare() &&
+qnn_dlc_manager_->backend_params_ptr_->qnn_context_ptr_.get() !=
+nullptr) {
+ET_CHECK_OR_RETURN_ERROR(
+qnn_dlc_manager_->backend_params_ptr_->qnn_context_ptr_
+->GetContextBinary(qnn_executorch_context_binary) == Error::Ok,
+Internal,
+"Fail to get context binary.");
+}
+
+else {
+ET_CHECK_OR_RETURN_ERROR(
+backend_params_ptr_->qnn_context_ptr_->GetContextBinary(
+qnn_executorch_context_binary) == Error::Ok,
+Internal,
+"Fail to get context binary.");
+}
return Error::Ok;
}

Error QnnManager::CompileDlc() {
Qnn_ErrorHandle_t error;
auto qnn_dlc_graph_info = qnn_dlc_manager_->GetQnnDlcGraphInfoPtr();
uint32_t qnn_dlc_graph_info_num = qnn_dlc_manager_->GetQnnDlcGraphInfoNum();
for (uint32_t i = 0; i < qnn_dlc_graph_info_num; ++i) {
auto& graphInfo = (*qnn_dlc_graph_info)[i];
backend_params_ptr_->qnn_graph_ptr_->SetGraphHandle(
graphInfo.graphName, graphInfo.graph);
error =
backend_params_ptr_->qnn_graph_ptr_->GraphFinalize(graphInfo.graphName);
if (error != QNN_SUCCESS) {
QNN_EXECUTORCH_LOG_ERROR(
"Failed to finalize Qnn Graph with error: %d",
QNN_GET_ERROR_CODE(error));
return Error::Internal;
}

std::vector<std::shared_ptr<TensorWrapper>> graph_inputs, graph_outputs,
tensors;

for (int i = 0; i < graphInfo.numInputTensors; ++i) {
auto tw = CreateTensorWrapper(graphInfo.inputTensors[i]);
tw->UpdateQnnTensorMeta(graphInfo.inputTensors[i]);
graph_inputs.push_back(tw);
}
for (int i = 0; i < graphInfo.numOutputTensors; ++i) {
auto tw = CreateTensorWrapper(graphInfo.outputTensors[i]);
tw->UpdateQnnTensorMeta(graphInfo.outputTensors[i]);
graph_outputs.push_back(tw);
}

ET_CHECK_OR_RETURN_ERROR(
AllocateTensor(graphInfo.graphName, graph_inputs, graph_outputs) ==
Error::Ok,
Internal,
"Fail to allocate tensor for Dlc with graph_name: %s",
graphInfo.graphName);
}

return Error::Ok;
}
@@ -616,31 +689,34 @@ Error QnnManager::Compile(
const std::string& graph_name,
std::vector<std::shared_ptr<OpWrapper>>& op_wrappers) {
Qnn_ErrorHandle_t error = QNN_SUCCESS;
QnnGraph* qnn_graph_ptr = backend_params_ptr_->qnn_graph_ptr_.get();

if (IsOnlinePrepare() &&
qnn_dlc_manager_->backend_params_ptr_->qnn_graph_ptr_.get() != nullptr) {
qnn_graph_ptr = qnn_dlc_manager_->backend_params_ptr_->qnn_graph_ptr_.get();
}
for (std::shared_ptr<OpWrapper>& op_wrapper : op_wrappers) {
for (const auto& tensor_wrapper : op_wrapper->GetInputTensors()) {
ET_CHECK_OR_RETURN_ERROR(
-backend_params_ptr_->qnn_graph_ptr_->EnsureTensorInQnnGraph(
-graph_name, tensor_wrapper) == Error::Ok,
+qnn_graph_ptr->EnsureTensorInQnnGraph(graph_name, tensor_wrapper) ==
+Error::Ok,
Internal,
"Tensor name %s isn't added to Qnn Graph",
tensor_wrapper->GetName().c_str());
}

for (const auto& tensor_wrapper : op_wrapper->GetOutputTensors()) {
ET_CHECK_OR_RETURN_ERROR(
-backend_params_ptr_->qnn_graph_ptr_->EnsureTensorInQnnGraph(
-graph_name, tensor_wrapper) == Error::Ok,
+qnn_graph_ptr->EnsureTensorInQnnGraph(graph_name, tensor_wrapper) ==
+Error::Ok,
Internal,
"Tensor name %s isn't added to Qnn Graph",
tensor_wrapper->GetName().c_str());
}

for (const auto& param : op_wrapper->GetParams()) {
auto* p_tensor_param = dynamic_cast<TensorParamWrapper*>(param.get());
if (p_tensor_param != nullptr) {
ET_CHECK_OR_RETURN_ERROR(
-backend_params_ptr_->qnn_graph_ptr_->EnsureTensorInQnnGraph(
+qnn_graph_ptr->EnsureTensorInQnnGraph(
graph_name, p_tensor_param->GetTensorWrapper()) == Error::Ok,
Internal,
"Param tensor name %s isn't added to Qnn Graph",
@@ -652,23 +728,21 @@
"Fail to configure Qnn backend");
}

-error = backend_params_ptr_->qnn_graph_ptr_->GraphAddNode(
-graph_name, op_wrapper->GetOpConfig());
+error = qnn_graph_ptr->GraphAddNode(graph_name, op_wrapper->GetOpConfig());
if (error != QNN_SUCCESS) {
QNN_EXECUTORCH_LOG_ERROR(
"Failed to add node to Qnn Graph with error: %d",
QNN_GET_ERROR_CODE(error));
return Error::Internal;
}
}
-error = backend_params_ptr_->qnn_graph_ptr_->GraphFinalize(graph_name);
+error = qnn_graph_ptr->GraphFinalize(graph_name);
if (error != QNN_SUCCESS) {
QNN_EXECUTORCH_LOG_ERROR(
"Failed to finalize Qnn Graph with error: %d",
QNN_GET_ERROR_CODE(error));
return Error::Internal;
}

return Error::Ok;
}

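Taken together, the hunks above give online preparation roughly this shape (a paraphrased sketch; the QnnManager method names come from this PR, but the wrapper function and its error handling are illustrative only):

// Paraphrased control flow for online preparation with the QNN IR backend
// (assumes executorch/backends/qualcomm/runtime/QnnManager.h is included).
using executorch::runtime::Error;

Error OnlinePrepareFlow(executorch::backends::qnn::QnnManager& manager) {
  // Init() creates the backend parameters and, when IsOnlinePrepare() is
  // true, queries the core API version and sets up the DLC environment via
  // qnn_dlc_manager_->SetUpDlcEnvironment(...).
  if (manager.Init() != Error::Ok) {
    return Error::Internal;
  }
  // CompileDlc() then walks every graph recorded in the DLC: it hands each
  // graph handle to the QnnGraph object, finalizes it, and wraps and
  // allocates the input/output tensors per graph name.
  return manager.CompileDlc();
}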
22 changes: 21 additions & 1 deletion backends/qualcomm/runtime/QnnManager.h
@@ -13,6 +13,7 @@
#include <executorch/backends/qualcomm/runtime/Logging.h>
#include <executorch/backends/qualcomm/runtime/QnnExecuTorch.h>
#include <executorch/backends/qualcomm/runtime/backends/QnnBackendFactory.h>
#include <executorch/backends/qualcomm/runtime/backends/QnnDlcManager.h>
#include <executorch/runtime/core/error.h>

#include <memory>
@@ -71,7 +72,7 @@ class QnnManager {
QnnExecuTorchContextBinary& qnn_executorch_context_binary);

executorch::runtime::Error CompileQcir();

executorch::runtime::Error CompileDlc();
executorch::runtime::Error Compile(
const std::string& graph_name,
std::vector<std::shared_ptr<OpWrapper>>& op_wrappers);
@@ -110,6 +111,22 @@ class QnnManager {
std::string GetBinarySignature();

private:
std::unique_ptr<const QnnSaver_Config_t*[]> GetImplementationConfig() {
if (options_->saver()) {
auto outputDirCfg = std::make_unique<QnnSaver_Config_t>();
outputDirCfg->option = QNN_SAVER_CONFIG_OPTION_OUTPUT_DIRECTORY;
outputDirCfg->outputDirectory = options_->saver_output_dir()->c_str();

auto saverCfg = std::make_unique<const QnnSaver_Config_t*[]>(2);
saverCfg[0] = outputDirCfg.release();
saverCfg[1] = nullptr;

return saverCfg;
} else {
return nullptr;
}
}

executorch::runtime::Error LoadQnnLibrary();

static constexpr const char* htp_library_name_ = "libQnnHtp.so";
@@ -147,6 +164,9 @@
{Qnn_DataType_t::QNN_DATATYPE_UFIXED_POINT_16,
executorch::aten::ScalarType::UInt16},
};

// Manager for handling DLC (Deep Learning Container)
std::shared_ptr<QnnDlcManager> qnn_dlc_manager_;
};
} // namespace qnn
} // namespace backends
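For reference, the null-terminated config-array convention that GetImplementationConfig() builds can be written out in isolation like this (a sketch; QnnSaver.h is the QNN SDK header declaring these symbols, and the output path is an arbitrary example):

#include <QnnSaver.h>

// QNN implementation configs are passed as a null-terminated array of
// config pointers; a single entry here redirects the saver backend's
// recording of QNN API calls to the given directory.
const QnnSaver_Config_t** MakeSaverConfigs() {
  static QnnSaver_Config_t output_dir_cfg{};
  output_dir_cfg.option = QNN_SAVER_CONFIG_OPTION_OUTPUT_DIRECTORY;
  output_dir_cfg.outputDirectory = "/tmp/qnn_saver_output";  // example path

  // The trailing nullptr terminates the list, mirroring
  // GetImplementationConfig() above.
  static const QnnSaver_Config_t* saver_configs[] = {&output_dir_cfg, nullptr};
  return saver_configs;
}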