
Fix/fix convert float to fp16 warning #6525

Merged
nihui merged 7 commits into Tencent:master from bluemiao3:fix/fix-convert-float-to-fp16-warning
Feb 3, 2026

Conversation

@bluemiao3
Contributor

Fix: eliminate all float-to-fp16 compilation warnings

@github-actions github-actions bot added the riscv label Jan 26, 2026
@tencent-adm
Member

tencent-adm commented Jan 26, 2026

CLA assistant check
All committers have signed the CLA.

@codecov-commenter

codecov-commenter commented Jan 26, 2026

Codecov Report

❌ Patch coverage is 88.67925% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.94%. Comparing base (ce2c565) to head (2857988).
⚠️ Report is 9 commits behind head on master.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/layer/riscv/interp_bicubic_packn_fp16s.h | 0.00% | 10 Missing ⚠️ |
| src/layer/riscv/interp_bilinear_packn_fp16s.h | 0.00% | 3 Missing ⚠️ |
| src/layer/riscv/convolution1d_riscv_zfh.cpp | 71.42% | 2 Missing ⚠️ |
| src/layer/riscv/interp_riscv_zfh.cpp | 0.00% | 2 Missing ⚠️ |
| src/layer/riscv/deconvolution_packnto1_fp16s.h | 80.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #6525      +/-   ##
==========================================
- Coverage   93.08%   92.94%   -0.14%     
==========================================
  Files         808      809       +1     
  Lines      256136   257088     +952     
==========================================
+ Hits       238416   238963     +547     
- Misses      17720    18125     +405     

☔ View full report in Codecov by Sentry.

Contributor

Copilot AI left a comment


Pull request overview

This pull request aims to eliminate float-to-fp16 compilation warnings by adding explicit (__fp16) casts throughout RISC-V specific layer implementations. The changes target various operations including interpolation, convolution, deconvolution, instance normalization, and other neural network operations.

Changes:

  • Added (__fp16) casts to float literals and variables when passing them to fp16 intrinsics
  • Cast zero initializers for fp16 variables and vectors
  • Cast alpha coefficients in interpolation operations
  • Cast bias values and other scalar parameters in various operations
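The cast pattern described above can be sketched as follows. Since `__fp16` requires a RISC-V or ARM toolchain, this hedged sketch uses `double` → `float` as a portable stand-in for `float` → `__fp16`; both are conversions from a type of greater conversion rank, which GCC flags with `-Wnarrowing`. The function and variable names here are illustrative, not taken from ncnn.

```cpp
// Stand-in for __fp16 so the sketch compiles on any host toolchain.
typedef float fp16_t;

// Interpolation alpha coefficients were computed in the wider type;
// the PR adds an explicit cast at the point of narrowing.
fp16_t interp_alpha(double a)
{
    return (fp16_t)a; // mirrors the added (__fp16) casts
}

// Zero initializers for fp16 variables were cast the same way,
// e.g. __fp16 sum = (__fp16)0.f; in the real code.
fp16_t zero_init()
{
    return (fp16_t)0.0;
}
```

The explicit cast does not change the generated code; it documents that the narrowing is intentional, which is what silences the warning.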

Reviewed changes

Copilot reviewed 27 out of 27 changed files in this pull request and generated 15 comments.

| File | Description |
| --- | --- |
| src/layer/riscv/interp_riscv_zfh.cpp | Added casts for alphap coefficients in bilinear and bicubic interpolation |
| src/layer/riscv/interp_bilinear_packn_fp16s.h | Partially added casts for alphap coefficients (incomplete) |
| src/layer/riscv/interp_bicubic_packn_fp16s.h | Partially added casts for alphap coefficients (incomplete) |
| src/layer/riscv/instancenorm_riscv_zfh.cpp | Added casts for zero initializers and epsilon values |
| src/layer/riscv/innerproduct_riscv_zfh.cpp | Added casts for zero initializers in vector operations |
| src/layer/riscv/gru_riscv_zfh.cpp | Refactored sigmoid computation casts |
| src/layer/riscv/deconvolution_*_fp16s.h | Added casts for zero initializers |
| src/layer/riscv/convolutiondepthwise_3x3.h | Incorrectly added casts for float bias variables |
| src/layer/riscv/convolution_winograd_*.h | Added casts for various coefficients and initializers |
| src/layer/riscv/convolution_sgemm*.h | Incorrectly added casts for float bias variables and arrays |
| src/layer/riscv/convolution_*_fp16s.h | Added casts for zero initializers and coefficients |
| src/layer/riscv/convolution1d_riscv_zfh.cpp | Added casts for val parameters in widen-multiply operations |
| src/layer/riscv/celu_riscv_zfh.cpp | Added casts for alpha and scalar constants |
| src/layer/riscv/bias_riscv_zfh.cpp | Added cast for bias data access |


@nihui
Member

nihui commented Jan 29, 2026

@nihui
Member

nihui commented Feb 2, 2026


/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/deconvolution_packnto1_fp16s.h: In function 'void ncnn::deconvolution_packnto1_fp16sa_rvv(const Mat&, Mat&, const Mat&, const Mat&, int, int, int, int, int, int, int, const Mat&, const Option&)':
/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/deconvolution_packnto1_fp16s.h:186:36: warning: ISO C++ does not allow converting to '_Float16' from 'float' with greater conversion rank [-Wnarrowing]
  186 |                 sum = activation_ss(sum, activation_type, activation_params);
      |                       ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/gru_riscv_zfh.cpp: In function 'int ncnn::gru_fp16sa(const Mat&, Mat&, int, const Mat&, const Mat&, const Mat&, Mat&, const Option&)':
/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/gru_riscv_zfh.cpp:586:30: warning: ISO C++ does not allow converting to '_Float16' from 'float' with greater conversion rank [-Wnarrowing]
  586 |             output_data[q] = H;
      |                              ^

/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/convolution1d_riscv_zfh.cpp: In member function 'int ncnn::Convolution1D_riscv::forward_fp16sa(const ncnn::Mat&, ncnn::Mat&, const ncnn::Option&) const':
/data/action/runner0/_work/ncnn/ncnn/src/layer/riscv/convolution1d_riscv_zfh.cpp:474:40: warning: ISO C++ does not allow converting to '_Float16' from 'float' with greater conversion rank [-Wnarrowing]
  474 |                     sum = activation_ss(sum, activation_type, activation_params);
      |                           ~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
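The remaining warnings above all share one shape: a helper computes in `float` and its result lands in an `__fp16` lvalue (e.g. `sum = activation_ss(...)` or `output_data[q] = H`). A hedged, portable sketch of the fix, with `double`/`float` standing in for `float`/`__fp16` and ncnn's `activation_ss` replaced by an illustrative clamp:

```cpp
#include <algorithm>

typedef float fp16_t; // stand-in for __fp16

// Illustrative stand-in for a helper like ncnn's activation_ss:
// it computes and returns in the wider type.
double activation_clip(double v, double lo, double hi)
{
    return std::min(std::max(v, lo), hi);
}

fp16_t apply_activation(fp16_t sum)
{
    // The fix: cast the wider result back down explicitly instead of
    // relying on the implicit narrowing that -Wnarrowing complains about.
    return (fp16_t)activation_clip(sum, 0.0, 6.0);
}
```

The argument `sum` still widens implicitly on the way in (which is safe); only the narrowing on the way out needs the explicit cast.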

@nihui nihui merged commit b0be0c9 into Tencent:master Feb 3, 2026
52 of 53 checks passed
@nihui
Member

nihui commented Feb 3, 2026

Thanks for your contribution!
