ruby : null-check #3689

Merged
KitaitiMakoto merged 2 commits into ggml-org:master from KitaitiMakoto:ruby-dev on Mar 5, 2026

Conversation

@KitaitiMakoto
Contributor

Hi,

I fixed a tiny null-check bug.
With this fix, my podcast's audio-processing pipeline, which combines whisper.cpp (VAD), Torch, and ONNX in Ruby, finally works.

Thanks.
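For illustration, the kind of nil-guard that prevents this sort of crash can be sketched in plain Ruby. The method name and argument below are hypothetical, and the actual fix lives in the binding code; this only shows the general pattern of validating input before it reaches native code:

```ruby
# Hypothetical sketch: check for nil on the Ruby side so a bad
# argument raises a catchable ArgumentError instead of letting
# native code dereference a null pointer and SEGV the process.
def transcribe(audio)
  raise ArgumentError, "audio must not be nil" if audio.nil?
  # ... hand the buffer off to the native whisper context here ...
  :ok
end
```

The same principle applies inside a C extension: test the pointer and raise a Ruby exception rather than dereferencing it unconditionally.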

@KitaitiMakoto
Contributor Author

Thanks for the approval! I will stop the remaining CI jobs and merge this pull request, since the Ruby CI has passed.

@KitaitiMakoto KitaitiMakoto merged commit 30c5194 into ggml-org:master Mar 5, 2026
29 of 67 checks passed
thewh1teagle pushed a commit to thewh1teagle/whisper.cpp that referenced this pull request Mar 6, 2026
* Introduce null-check to prevent SEGV

* Fix error message

bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Mar 13, 2026
* ggerganov/master: (42 commits)
  ruby : null-check (ggml-org#3689)
  gguf : sync (ggml/0)
  scripts : sync gguf
  talk-llama : sync llama.cpp
  sync : ggml
  replace the magic nunber 768 by max work group size to support iGPU (llama/19920)
  ggml-zendnn: update code for latest ZenDNN API (llama/19923)
  ggml : fix AMX and add batched support (llama/19925)
  vulkan: fix fp16 Flash Attention on Windows AMD RDNA2 and below (llama/19921)
  ggml-virtgpu: improve the reliability of the code (llama/19846)
  support permuted, remove check s0/s10 (llama/19889)
  vulkan: check for memory overlap before doing fusion (llama/19768)
  ggml/gguf : prevent integer overflows (llama/19856)
  Vulkan Scalar Flash Attention Refactor (llama/19625)
  vulkan: fix coopmat1 without bf16 support (llama/19793)
  vulkan: fix data race in mul_mat_id shader (llama/19790)
  hexagon refactor all Ops to use local context struct (llama/19819)
  ggml-cpu: arm64: q5_K repack gemm and gemv (and generic) implementations (dotprod) (llama/19356)
  Improve CUDA graph capture (llama/19754)
  ggml-cpu: add RVV vec dot kernels for quantization types (llama/18784)
  ...