CHANGELOG.md (12 additions, 0 deletions)
@@ -189,3 +189,15 @@ Improvements:
 - StableEmbedding layer now has device and dtype parameters to make it 1:1 replaceable with regular Embedding layers (@lostmsu)
 - runtime performance of block-wise quantization slightly improved
 - added error message for the case multiple libcudart.so are installed and bitsandbytes picks the wrong one
+
+
+### 0.37.0
+
+#### Int8 Matmul + backward support for all GPUs
+
+Features:
+- Int8 MatmulLt now supports backward through inversion of the ColTuring/ColAmpere format. Slow, but memory efficient. Big thanks to @borzunov
+- Int8 now supported on all GPUs. On devices with compute capability < 7.5, the Int weights are cast to 16/32-bit for the matrix multiplication. Contributed by @borzunov
+
+Improvements:
+- Improved logging for the CUDA detection mechanism.
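The pre-Turing fallback described above (casting Int8 weights back to a float type for the matrix multiplication, so weights stay 8-bit in memory but the matmul runs in 16/32-bit) can be sketched roughly as follows. This is an illustrative NumPy model, not the bitsandbytes kernel; the function names `quantize_absmax` and `matmul_fallback` are hypothetical.

```python
import numpy as np

def quantize_absmax(W):
    # Per-tensor absmax quantization to int8: scale so the largest
    # magnitude maps to 127.
    scale = np.abs(W).max() / 127.0
    W_int8 = np.round(W / scale).astype(np.int8)
    return W_int8, scale

def matmul_fallback(x, W_int8, scale):
    # On GPUs with compute capability < 7.5 there are no int8 tensor
    # cores, so the int8 weights are cast back to float and a regular
    # matmul is performed: slow, but memory for W stays 8-bit.
    W_f32 = W_int8.astype(np.float32) * scale
    return x @ W_f32

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float32)
x = rng.standard_normal((2, 4)).astype(np.float32)
W_int8, scale = quantize_absmax(W)
err = np.abs(x @ W - matmul_fallback(x, W_int8, scale)).max()
print("max abs error vs. float matmul:", float(err))
```

The dequantize-then-matmul result only differs from the full-precision product by the (small) quantization error, which is why the fallback is a drop-in, if slow, replacement.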
@@ -112,6 +113,7 @@ def run_cuda_setup(self):
         self.add_log_entry('CUDA SETUP: CUDA detection failed! Possible reasons:')
         self.add_log_entry('3. You have multiple conflicting CUDA libraries')
         self.add_log_entry('4. Required library not pre-compiled for this bitsandbytes release!')
         self.add_log_entry('CUDA SETUP: If you compiled from source, try again with `make CUDA_VERSION=DETECTED_CUDA_VERSION` for example, `make CUDA_VERSION=113`.')
+        self.add_log_entry('CUDA SETUP: The CUDA version for the compile might depend on your conda install. Inspect CUDA version via `conda list | grep cuda`.')
         cuda_setup.add_log_entry("WARNING: Compute capability < 7.5 detected! Only slow 8-bit matmul is supported for your GPU!", is_warning=True)
     else:
         has_cublaslt = True
     return has_cublaslt
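The `has_cublaslt` check above gates the fast int8 path on the device's compute capability: cuBLASLt int8 tensor-core kernels require >= 7.5 (Turing), and older devices get the slow fallback. A minimal standalone model of that decision (the helper name and the `(major, minor)` tuple argument are assumptions for illustration; bitsandbytes reads the capability from the CUDA device itself):

```python
def check_cublaslt_support(capability, add_log_entry=print):
    # capability is a (major, minor) compute-capability tuple, e.g. (7, 0)
    # for V100 or (8, 0) for A100. Int8 tensor cores (cuBLASLt IMMA
    # kernels) need >= (7, 5), i.e. Turing or newer.
    if capability < (7, 5):
        add_log_entry(
            "WARNING: Compute capability < 7.5 detected! "
            "Only slow 8-bit matmul is supported for your GPU!"
        )
        has_cublaslt = False
    else:
        has_cublaslt = True
    return has_cublaslt

# V100 (sm_70) falls back; A100 (sm_80) uses the fast path.
check_cublaslt_support((7, 0))
check_cublaslt_support((8, 0))
```

Python's tuple comparison handles the major/minor ordering directly, which keeps the boundary case (7, 5) on the fast path.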
@@ -362,7 +364,6 @@ def evaluate_cuda_setup():
     print('')
     print('='*35+'BUG REPORT'+'='*35)
     print('Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues')
-
     print('For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link')