Conversation

@Treboko Treboko commented Aug 24, 2025

This can happen depending on how `$stderr.winsize` is defined: if the expression `$stderr.winsize[1] - line.size` on line 114 becomes negative, the padding calculation raises a "negative argument" exception.
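A minimal sketch of the failure mode and a clamped fix, assuming the padding is built by repeating a space string to fill the terminal width; the variable names `line` and `width` below are illustrative, not the exact code in the script:

```ruby
require "io/console"

# Illustrative reduction of the padding logic around line 114.
line  = "downloading ggml-base.en.bin ..."   # hypothetical progress line
width = $stderr.winsize[1]                   # terminal column count

# Naive padding: String#* raises ArgumentError ("negative argument")
# whenever the terminal is narrower than the line (width < line.size).
# padding = " " * (width - line.size)

# Clamped padding: never pass a negative repeat count.
padding = " " * [width - line.size, 0].max
$stderr.print "\r#{line}#{padding}"
```

Clamping at zero simply drops the padding when the window is too narrow to fit the line; the merged change may differ in detail, but the point is the same: avoid handing a negative count to `String#*`.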
@KitaitiMakoto KitaitiMakoto left a comment


Thanks!

@KitaitiMakoto KitaitiMakoto merged commit 7745fcf into ggml-org:master Aug 24, 2025
54 of 55 checks passed
bygreencn added a commit to bygreencn/whisper.cpp that referenced this pull request Sep 24, 2025
* ggerganov/master: (92 commits)
  ggml : Fix MKL detection by quoting BLAS_INCLUDE_DIRS (ggml-org#3426)
  whisper : prefer curl over wget in download scripts (ggml-org#3409)
  ci : remove brew installation of cmake for macos-latest (ggml-org#3408)
  tests : use CMake definitions for model/sample paths (ggml-org#3406)
  Handle negative value in padding (ggml-org#3389)
  models :  update`./models/download-ggml-model.cmd` to allow for tdrz download (ggml-org#3381)
  talk-llama : sync llama.cpp
  sync : ggml
  ggml: Add initial WebGPU backend (llama/14521)
  ggml : initial zDNN backend (llama/14975)
  common : handle mxfp4 enum
  ggml-quants : fix make_qp_quants NANs and IQ1 assertion errors (llama/15379)
  vulkan: disable spirv-opt for bfloat16 shaders (llama/15352)
  vulkan: Use larger workgroups for mul_mat_vec when M is small (llama/15355)
  vulkan: support sqrt (llama/15370)
  vulkan: Optimize argsort (llama/15354)
  vulkan: fuse adds (llama/15252)
  vulkan: Support mul_mat_id with f32 accumulators (llama/15337)
  vulkan: Add missing bounds checking to scalar/coopmat1 mul_mat_id (llama/15334)
  OpenCL: add initial FA support (llama/14987)
  ...