
[BUG]: Gemma3 not yet supported? #1129


Closed
nipeone opened this issue Mar 18, 2025 · 2 comments

nipeone (Contributor) commented Mar 18, 2025

Description

llama_model_load: error loading model: error loading model architecture: unknown model architecture: 'gemma3'

Reproduction Steps

I used the model unsloth/gemma-3-4b-it-GGUF.
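
For context, the error surfaces when the GGUF weights are loaded. A minimal sketch of a typical LLamaSharp 0.21.0 load on CPU (the local file path and quantization file name are hypothetical, not taken from the report):

```csharp
using LLama;
using LLama.Common;

// Hypothetical local path to a GGUF file downloaded from unsloth/gemma-3-4b-it-GGUF
var modelPath = @"C:\models\gemma-3-4b-it-Q4_K_M.gguf";

var parameters = new ModelParams(modelPath)
{
    ContextSize = 2048,   // context length for the session
    GpuLayerCount = 0     // CPU only, no layers offloaded to a GPU
};

// LLamaWeights.LoadFromFile fails here with
// "unknown model architecture: 'gemma3'" when the bundled
// llama.cpp build predates Gemma3 support.
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);
```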

Environment & Configuration

  • Operating system: Windows 11
  • .NET runtime version: .NET 8.0
  • LLamaSharp version: 0.21.0
  • CUDA version (if you are using the CUDA backend):
  • CPU & GPU device: CPU

Known Workarounds

No response

martindevans (Member) commented:

There's already a PR open to update the llama.cpp version; once that's merged, it should support Gemma3 :)

martindevans (Member) commented:

The latest version on NuGet should support Gemma3 now.

nipeone closed this as completed Mar 21, 2025