
Unable to run separate instances in multiple threads #219

Closed
@ttdd11

Description


I've built a library that loads an engine, and this works well.

Now I am testing on two cards. When the library is initialized, I call cudaSetDevice(<GPU index>) to assign the net to the correct device.

At the end of the lib, there are two pointers I'm using:

    std::shared_ptr<nvinfer1::ICudaEngine>       m_Engine;
    std::shared_ptr<nvinfer1::IExecutionContext> m_Context;

After these are initialized, I run a test to see if the inference results are correct, and they are.

However, when I afterwards run inference using just these stored pointers, the first card runs fine, but the second throws this error: Cudnn Error in nvinfer1::rt::CommonContext::configure: 7 (CUDNN_STATUS_MAPPING_ERROR)

Environment

OS: Windows 10
TensorRT Version: TensorRT 6
GPU: 2080 Ti and 2080
NVIDIA Driver: 441
CUDA: 10.1
cuDNN: 7.4

Any ideas?
