
Conversation

@liwenju0 (Contributor) commented Jun 7, 2025

No description provided.

liwenju0 and others added 2 commits June 7, 2025 17:24
… functionality to automatically configure data types for different CUDA compute capabilities.
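
The commit message describes dtype auto-selection keyed to the GPU's CUDA compute capability. As a rough illustration of that idea (a minimal sketch, not necessarily the code added in this PR; the function name `select_dtype` is made up), the selection might look like this:

```python
import torch

def select_dtype() -> torch.dtype:
    # Minimal sketch: pick a dtype from the GPU's CUDA compute capability.
    # bfloat16 requires Ampere (SM 8.0) or newer; older cards such as the
    # RTX 2080 Ti (SM 7.5) fall back to float16.
    if not torch.cuda.is_available():
        return torch.float32  # CPU fallback
    major, _minor = torch.cuda.get_device_capability(0)
    if major >= 8 and torch.cuda.is_bf16_supported():
        return torch.bfloat16
    return torch.float16

if __name__ == "__main__":
    print(f"Selected dtype: {select_dtype()}")
```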
@zenosai zenosai merged commit f5ebe5b into Yuliang-Liu:main Jun 7, 2025
@zenosai (Collaborator) commented Jun 7, 2025

We have merged this PR, thanks for your contribution.

@zenosai (Collaborator) commented Jun 7, 2025

Thanks again! Also, could you share which GPU you were using during deployment, and whether the deployment was successful? That would help us refine support for different hardware setups and better understand potential issues. Appreciate your input! @liwenju0

@liwenju0 (Contributor, Author) commented

> Thanks again! Also, could you share which GPU you were using during deployment, and whether the deployment was successful? That would help us refine support for different hardware setups and better understand potential issues. Appreciate your input! @liwenju0

I deployed it on a single 2080 Ti modded to 22 GB of memory.

@zenosai (Collaborator) commented Jun 12, 2025

Thanks for your reply!
Just wondering — did you encounter any shared memory errors during deployment? (See: https://github.com/Yuliang-Liu/MonkeyOCR#fix-shared-memory-error-on-rtx-3090--4090---gpus-optional)
This issue tends to occur on RTX 3090 and 4090 GPUs.
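
The linked README section describes the workaround itself. Purely as an illustration (not the repository's actual code), a deployment script could flag the cards known to be affected before starting:

```python
import torch

# Illustration only: warn on GPUs known to hit the shared memory error,
# pointing the user at the README workaround linked above.
KNOWN_AFFECTED = ("RTX 3090", "RTX 4090")

if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    if any(tag in name for tag in KNOWN_AFFECTED):
        print(f"{name}: see the shared memory error section of the MonkeyOCR README.")
```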
