@@ -50,6 +50,9 @@ class ONNX_FRONTEND_API GraphIterator : ::ov::RuntimeAttribute {
/// \brief Retrieves metadata associated with the graph.
virtual std::map<std::string, std::string> get_metadata() const = 0;

+/// \brief Returns the directory path where the model is located.
+virtual std::string get_model_dir() const = 0;

/// \brief Destructor
virtual ~GraphIterator();
};
@@ -293,7 +293,6 @@ ov::frontend::onnx::TensorMetaInfo extract_tensor_meta_info(const TensorProto* t
GraphIteratorProto::GraphIteratorProto(const GraphIteratorProtoMemoryManagementMode mode)
    : m_graph(nullptr),
      m_parent(nullptr),
-     m_model_dir(nullptr),
      m_mode(mode),
      m_mmap_cache{mode == External_MMAP ? std::make_shared<std::map<std::string, std::shared_ptr<ov::MappedMemory>>>()
                                         : nullptr},
@@ -121,8 +121,8 @@ class GraphIteratorProto : public ov::frontend::onnx::GraphIterator {

std::map<std::string, std::string> get_metadata() const override;

-std::string get_model_dir() const {
-    return *m_model_dir;
+std::string get_model_dir() const override {
Contributor:
[nitpick] Since it's virtual, it will never be inlined (so having it in the header only increases the compilation time). Consider having all bodies of virtual functions in the cpp file.
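
A minimal sketch of that suggestion, reusing the names from this diff (the declaration stays in graph_iterator_proto.hpp, the body moves to graph_iterator_proto.cpp):

// graph_iterator_proto.hpp (inside class GraphIteratorProto)
std::string get_model_dir() const override;

// graph_iterator_proto.cpp
std::string GraphIteratorProto::get_model_dir() const {
    return m_model_dir ? *m_model_dir : std::string{};
}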

+    return m_model_dir ? *m_model_dir : std::string{};
Contributor:
Or, more safely:

namespace {
std::string const model_dir_unset; // Empty, always instantiated string
}

GraphIteratorProto::GraphIteratorProto(const GraphIteratorProtoMemoryManagementMode mode)
    : m_graph(nullptr),
      m_model_dir(&model_dir_unset),

std::string GraphIteratorProto::get_model_dir() const {
    return *m_model_dir;
}

Rationale: null pointers are inherently bad and will eventually be dereferenced by accident. Having an invariant saying: "m_model_dir is always valid" is more reliable.

Contributor:
Question: is an empty model directory one of use cases (and has a meaning in our domain) or is returning an empty string just a safeguard for null pointers?
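
If the empty string is only a safeguard, one hypothetical alternative (not what this PR does) is std::optional, which keeps "unset" distinct from "empty" without any null pointer:

#include <optional>
#include <string>

std::optional<std::string> m_model_dir;  // std::nullopt means "not set"

std::string get_model_dir() const override {
    // An empty result can then only mean the directory was never set.
    return m_model_dir.value_or(std::string{});
}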

}

GraphIteratorProtoMemoryManagementMode get_memory_management_mode() const {
11 changes: 8 additions & 3 deletions src/frontends/onnx/frontend/src/core/node.cpp
@@ -792,8 +792,10 @@ Tensor Node::get_attribute_value(const std::string& name) const {
auto tensor_decoder = std::dynamic_pointer_cast<ov::frontend::onnx::DecoderBaseTensor>(
m_decoder->get_attribute(name).as<ov::frontend::onnx::DecoderBase::Ptr>());
const auto& tensor_meta_info = tensor_decoder->get_tensor_info();
+    auto input_model = m_translate_session ? m_translate_session->get_input_model() : nullptr;
Contributor:
Question: can a session have no input model?

[nitpick] If no, then how about moving validation to get_input_model() and having this here:

FRONT_END_GENERAL_CHECK(m_translate_session != nullptr, "InputModel is not available for tensor attributes");

auto input_model = m_translate_session->get_input_model();

?
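
A sketch of that variant; TranslateSession internals are not part of this diff, so the member name below is assumed:

// translate_session.cpp (hypothetical)
std::shared_ptr<ov::frontend::InputModel> TranslateSession::get_input_model() const {
    FRONT_END_GENERAL_CHECK(m_input_model != nullptr, "InputModel is not available");
    return m_input_model;
}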

+    FRONT_END_GENERAL_CHECK(input_model != nullptr, "InputModel is not available for tensor attributes");
auto tensor_place = std::make_shared<ov::frontend::onnx::TensorONNXPlace>(
-        *m_translate_session->get_input_model().get(),
+        *input_model,
tensor_meta_info.m_partial_shape,
tensor_meta_info.m_element_type,
std::vector<std::string>{*tensor_meta_info.m_tensor_name},
@@ -816,11 +818,14 @@ SparseTensor Node::get_attribute_value(const std::string& name) const {
FRONT_END_GENERAL_CHECK(sparse_tensor_info.m_indices && sparse_tensor_info.m_values,
"Incomplete sparse tensors are not supported");

+    auto input_model = m_translate_session ? m_translate_session->get_input_model() : nullptr;
+    FRONT_END_GENERAL_CHECK(input_model != nullptr, "InputModel is not available for sparse tensor attributes");

auto values_decoder =
std::dynamic_pointer_cast<ov::frontend::onnx::DecoderBaseTensor>(sparse_tensor_info.m_values);
const auto& values_meta_info = values_decoder->get_tensor_info();
auto values_place = std::make_shared<ov::frontend::onnx::TensorONNXPlace>(
-        *m_translate_session->get_input_model().get(),
+        *input_model,
values_meta_info.m_partial_shape,
values_meta_info.m_element_type,
std::vector<std::string>{*values_meta_info.m_tensor_name},
@@ -834,7 +839,7 @@ SparseTensor Node::get_attribute_value(const std::string& name) const {
std::dynamic_pointer_cast<ov::frontend::onnx::DecoderBaseTensor>(sparse_tensor_info.m_indices);
const auto& indices_meta_info = indices_decoder->get_tensor_info();
auto indices_place = std::make_shared<ov::frontend::onnx::TensorONNXPlace>(
-        *m_translate_session->get_input_model().get(),
+        *input_model,
indices_meta_info.m_partial_shape,
indices_meta_info.m_element_type,
std::vector<std::string>{*indices_meta_info.m_tensor_name},
11 changes: 10 additions & 1 deletion src/frontends/onnx/frontend/src/core/tensor.cpp
@@ -5,6 +5,7 @@
#include "core/tensor.hpp"

#include "input_model.hpp"
#include "openvino/util/file_util.hpp"

namespace ov {
namespace frontend {
@@ -19,10 +20,18 @@ detail::LocalStreamHandles TensorONNXPlace::get_stream_cache() {
return model_onnx->get_stream_cache();
}

+std::string TensorONNXPlace::get_model_dir() const {
Contributor:
[nitpick] Or, in more idiomatic C++:

try {
    using Model = const unify::InputModel&;
    return dynamic_cast<Model>(m_input_model).get_model_dir();
}
catch (const std::bad_cast &) {
    return {};
}

+    const auto model_onnx = dynamic_cast<const unify::InputModel*>(&m_input_model);
+    if (!model_onnx) {
+        return {};
+    }
+    return model_onnx->get_model_dir();
+}

Tensor::Tensor(const std::shared_ptr<TensorONNXPlace>& tensor_place) {
m_tensor_proto = nullptr;
m_shape = tensor_place->get_partial_shape().get_shape();
m_model_dir = "";
m_model_dir = tensor_place->get_model_dir();
m_mmap_cache = tensor_place->get_mmap_cache();
m_tensor_place = tensor_place;
}
1 change: 1 addition & 0 deletions src/frontends/onnx/frontend/src/core/tensor.hpp
@@ -141,6 +141,7 @@ class TensorONNXPlace : public ov::frontend::onnx::TensorPlace {

detail::MappedMemoryHandles get_mmap_cache();
detail::LocalStreamHandles get_stream_cache();
+    std::string get_model_dir() const;

protected:
int64_t m_input_idx = -1, m_output_idx = -1;
17 changes: 17 additions & 0 deletions src/frontends/onnx/frontend/src/input_model.cpp
@@ -626,6 +626,10 @@ class InputModel::InputModelONNXImpl {
return m_stream_cache;
}

+    std::string get_model_dir() const {
+        return m_model_dir;
+    }

private:
void load_model();
void clean_up();
@@ -647,6 +651,7 @@ class InputModel::InputModelONNXImpl {
detail::MappedMemoryHandles m_mmap_cache;
// This is used for keeping external data that was read without MMAP
detail::LocalStreamHandles m_stream_cache;
+    std::string m_model_dir;

std::shared_ptr<TensorONNXPlace> register_tensor_place(const std::shared_ptr<TensorONNXPlace>& tensor_place);
std::shared_ptr<TensorONNXPlace> find_tensor_place(const TensorMetaInfo& tensor_meta_info) const;
@@ -837,6 +842,9 @@ InputModel::InputModelONNXImpl::InputModelONNXImpl(const GraphIterator::Ptr& gra
m_telemetry(telemetry),
m_enable_mmap(enable_mmap) {
FRONT_END_GENERAL_CHECK(m_graph_iterator, "Null pointer specified for GraphIterator");
+    if (const auto graph_iterator = std::dynamic_pointer_cast<GraphIterator>(m_graph_iterator)) {
+        m_model_dir = graph_iterator->get_model_dir();
+    }
if (m_enable_mmap) {
m_mmap_cache = std::make_shared<std::map<std::string, std::shared_ptr<ov::MappedMemory>>>();
m_stream_cache = nullptr;
@@ -857,6 +865,11 @@ InputModel::InputModelONNXImpl::InputModelONNXImpl(const GraphIterator::Ptr& gra
m_mmap_cache(parent_model->_impl->m_mmap_cache),
m_stream_cache(parent_model->_impl->m_stream_cache) {
FRONT_END_GENERAL_CHECK(m_graph_iterator, "Null pointer specified for GraphIterator");
+    if (const auto graph_iterator = std::dynamic_pointer_cast<GraphIterator>(m_graph_iterator)) {
+        m_model_dir = graph_iterator->get_model_dir();
+    } else {
+        m_model_dir = parent_model->_impl->m_model_dir;
+    }
load_model();
}

@@ -1053,6 +1066,10 @@ detail::LocalStreamHandles InputModel::get_stream_cache() const {
return _impl->get_stream_cache();
}

+std::string InputModel::get_model_dir() const {
+    return _impl->get_model_dir();
+}

} // namespace unify
} // namespace onnx
} // namespace frontend
1 change: 1 addition & 0 deletions src/frontends/onnx/frontend/src/input_model.hpp
@@ -152,6 +152,7 @@ class InputModel : public ov::frontend::InputModel {
bool is_enabled_mmap() const;
detail::MappedMemoryHandles get_mmap_cache() const;
detail::LocalStreamHandles get_stream_cache() const;
+    std::string get_model_dir() const;
};
} // namespace unify

5 changes: 4 additions & 1 deletion src/frontends/onnx/tests/CMakeLists.txt
@@ -129,7 +129,10 @@ set_property(TEST ov_onnx_frontend_tests PROPERTY LABELS OV UNIT ONNX_FE)
add_dependencies(ov_onnx_frontend_tests openvino_template_extension)


-target_include_directories(ov_onnx_frontend_tests PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}")
+target_include_directories(ov_onnx_frontend_tests PRIVATE
Contributor:
I guess the idiomatic way of doing this is:

# in src/frontends/onnx/frontend/CMakeLists.txt
ov_add_frontend(NAME onnx
                ...
)
target_include_directories(mylib INTERFACE
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/src>
)

# in src/frontends/onnx/onnx_common/CMakeLists.txt
set(TARGET_NAME "openvino_onnx_common")
# target_include_directories all good already

# in src/frontends/onnx/tests/CMakeLists.txt
target_link_libraries(ov_onnx_frontend_tests
    PRIVATE
        onnx
        openvino_onnx_common
)

Not tested, however the general rule (in CMake 4) is to:

  1. have all targets export their (export) include directories
  2. link the final targets with targets from point 1 and inherit their exports

Typically a relative path indicates something fishy happening.

"${CMAKE_CURRENT_SOURCE_DIR}"
"${CMAKE_CURRENT_SOURCE_DIR}/../frontend/src"
"${CMAKE_CURRENT_SOURCE_DIR}/../onnx_common/include")

target_compile_definitions(ov_onnx_frontend_tests
PRIVATE
103 changes: 79 additions & 24 deletions src/frontends/onnx/tests/graph_iterator.cpp
@@ -5,46 +5,68 @@

#include <onnx/onnx_pb.h>

#include <algorithm>
#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <openvino/frontend/exception.hpp>
#include <openvino/frontend/graph_iterator.hpp>
#include <openvino/frontend/input_model.hpp>
#include <openvino/openvino.hpp>
#include <system_error>
#include <unordered_map>

#include "../frontend/src/core/graph_iterator_proto.hpp"
#include "common_test_utils/common_utils.hpp"
#include "load_from.hpp"
#include "onnx_utils.hpp"
#include "utils.hpp"

using ::ONNX_NAMESPACE::ModelProto;
using ::ONNX_NAMESPACE::Version;

-TEST_P(FrontEndLoadFromTest, testLoadUsingSimpleGraphIterator) {
-    ov::frontend::FrontEnd::Ptr fe;
+class SimpleIterator : public ov::frontend::onnx::GraphIterator {
+public:
+    mutable size_t get_model_dir_call_count = 0;
+    mutable std::string last_returned_dir;
+    std::string model_dir;
+
-    class SimpleIterator : public ov::frontend::onnx::GraphIterator {
-    public:
-        size_t size() const override {
-            return 0;
-        }
-        void reset() override {};
-        void next() override {};
-        bool is_end() const override {
-            return true;
-        };
-        std::shared_ptr<ov::frontend::onnx::DecoderBase> get_decoder() const override {
-            return nullptr;
-        };
-
-        int64_t get_opset_version(const std::string& domain) const override {
-            return 1;
-        }
+    SimpleIterator() = default;
+    explicit SimpleIterator(const std::string& dir) : model_dir(dir) {}
+
-        std::map<std::string, std::string> get_metadata() const override {
-            return {};
-        }
-
-        ~SimpleIterator() override {};
+    size_t size() const override {
+        return 0;
+    }
+    void reset() override {};
+    void next() override {};
+    bool is_end() const override {
+        return true;
+    };
+    std::shared_ptr<ov::frontend::onnx::DecoderBase> get_decoder() const override {
+        return nullptr;
+    };
+
+    int64_t get_opset_version(const std::string& domain) const override {
+        return 1;
+    }
+
+    std::map<std::string, std::string> get_metadata() const override {
+        return {};
+    }
+
+    std::string get_model_dir() const override {
+        ++get_model_dir_call_count;
+        last_returned_dir = model_dir;
+        return model_dir;
+    }
+
+    ~SimpleIterator() override {};
};

+TEST_P(FrontEndLoadFromTest, testLoadUsingSimpleGraphIterator) {
+    ov::frontend::FrontEnd::Ptr fe;

auto iter = std::make_shared<SimpleIterator>();

{
@@ -128,3 +150,36 @@ TEST_P(FrontEndLoadFromTest, testLoadUsingGraphIteratorExternalMMAP) {
ASSERT_EQ(iter->get_mmap_cache()->size(), 1); // MMAP handle must be in cache after work finished
ASSERT_EQ(model->get_ordered_ops().size(), 6);
}

+TEST_P(FrontEndLoadFromTest, tensor_place_uses_model_dir_for_external_data) {
+    const std::string model_name = "external_data/external_data.onnx";
+    const auto path =
+        ov::util::path_join({ov::test::utils::getExecutableDirectory(), TEST_ONNX_MODELS_DIRNAME, model_name}).string();
+
+    const auto expected_model_dir = std::filesystem::path(path).parent_path().string();
+
+    auto iter = std::make_shared<SimpleIterator>(expected_model_dir);
+
+    auto graph_iter = std::dynamic_pointer_cast<ov::frontend::onnx::GraphIterator>(iter);
+    ASSERT_NO_THROW(m_frontEnd = m_fem.load_by_framework("onnx"));
+    ASSERT_NE(m_frontEnd, nullptr);
+    ASSERT_TRUE(m_frontEnd->supported(graph_iter));
+
+    ASSERT_NO_THROW(m_inputModel = m_frontEnd->load(graph_iter));
+    ASSERT_NE(m_inputModel, nullptr);
+
+    ASSERT_NO_THROW({
+        try {
+            auto model = m_frontEnd->convert(m_inputModel);
+            ASSERT_NE(model, nullptr);
+        } catch (const std::exception& ex) {
+            std::cerr << "convert failed: " << ex.what() << std::endl;
+            throw;
+        }
+    });
Comment on lines +171 to +179
Contributor:
How about:

Suggested change
-    ASSERT_NO_THROW({
-        try {
-            auto model = m_frontEnd->convert(m_inputModel);
-            ASSERT_NE(model, nullptr);
-        } catch (const std::exception& ex) {
-            std::cerr << "convert failed: " << ex.what() << std::endl;
-            throw;
-        }
-    });
+    try {
+        auto model = m_frontEnd->convert(m_inputModel);
+        ASSERT_NE(model, nullptr);
+    } catch (const std::exception& ex) {
+        FAIL() << "convert failed: " << ex.what();
+    } catch (...) {
+        FAIL() << "convert failed: reason unknown";
+    }

Rationale: you probably want to print the failure consistently with the rest of googletest. Assuming that googletest (always) prints to std::cerr might be a bit of a stretch.


+    ASSERT_GT(iter->get_model_dir_call_count, 0) << "get_model_dir() was never called";
+    ASSERT_EQ(iter->last_returned_dir, expected_model_dir)
+        << "get_model_dir() returned unexpected path: " << iter->last_returned_dir
+        << " (expected: " << expected_model_dir << ")";
+}