
ERROR: Your GPU does not support Int8 Matmul! #97

Closed
XiepengLi opened this issue Nov 18, 2022 · 4 comments

Comments

@XiepengLi

  • nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01    Driver Version: 515.65.01    CUDA Version: 11.7     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000000:02:00.0 Off |                  Off |
| N/A   29C    P8    10W / 250W |     24MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla P40           Off  | 00000000:03:00.0 Off |                  Off |
| N/A   26C    P8     9W / 250W |     40MiB / 24576MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
  • python -m bitsandbytes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++ DEBUG INFORMATION +++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

++++++++++ POTENTIALLY LIBRARY-PATH-LIKE ENV VARS ++++++++++
'CONDA_EXE': '~/miniconda3/bin/conda'
'CONDA_PREFIX': '~/miniconda3'
'CONDA_PYTHON_EXE': '~/miniconda3/bin/python'
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

WARNING: Please be sure to sanitize sensible info from any such env vars!

++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
COMPILED_WITH_CUDA = True
COMPUTE_CAPABILITIES_PER_GPU = ['6.1', '6.1']
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Running a quick check that:
    + library is importable
    + CUDA function is callable

SUCCESS!
Installation was successful!
=============================================
ERROR: Your GPU does not support Int8 Matmul!
=============================================

python: /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu:379: int igemmlt(cublasLtHandle_t, int, int, int, const int8_t*, const int8_t*, void*, float*, int, int, int) [with int FORMATB = 3; int DTYPE_OUT = 32; int SCALE_ROWS = 0; cublasLtHandle_t = cublasLtContext*; int8_t = signed char]: Assertion `false' failed.
Aborted (core dumped)
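
The debug output above already points at the cause: both GPUs report compute capability 6.1. A minimal sketch, assuming PyTorch is installed, that checks whether a device is new enough for the int8 tensor-core path (the 7.5 cutoff is an assumption about the kernel requirement, not something stated in this thread):

import torch

# Sketch: int8 tensor cores first appeared with Turing (compute capability
# 7.5); Pascal cards such as the Tesla P40 report 6.1, which is why the
# igemmlt assertion above fires. The 7.5 threshold is an assumption here.
for idx in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(idx)
    name = torch.cuda.get_device_name(idx)
    ok = (major, minor) >= (7, 5)
    print(f"GPU {idx}: {name} (sm_{major}{minor}), int8 tensor cores: {'yes' if ok else 'no'}")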
@gururise

gururise commented Dec 22, 2022

Same problem with my Nvidia Tesla P40.

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.60.11    Driver Version: 525.60.11    CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla P40           Off  | 00000000:0E:00.0 Off |                  Off |
| N/A   40C    P0    55W / 250W |  24417MiB / 24576MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      1610      G   /usr/bin/gnome-shell                3MiB |
|    0   N/A  N/A     17625      C   python                          24412MiB |
+-----------------------------------------------------------------------------+

$ nvcc -V

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0

I was under the impression the P40 supported all the int8 operations. It has CUDA Compute Capability 6.1, which is higher than the 3.7 offered by the K80. Is this a software bug, or does the Tesla P40 really not offer int8 matmul?

BlackHC added a commit to BlackHC/bitsandbytes that referenced this issue Dec 29, 2022
@TimDettmers
Collaborator

The Int8 matmul feature relied on Int8 tensor cores, so your GPUs were not supported. This changed with commit de53588, which has already been pushed to pip in the latest 0.37.0 release. If you update, it should work for you now.

Please reopen this issue if you encounter any problems.
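
For reference, a quick way to confirm the fixed release is actually the one installed (a sketch; it assumes the package exposes __version__, which recent releases do):

import bitsandbytes as bnb

# Sketch: print the installed bitsandbytes version so it can be compared
# against 0.37.0, the release that first shipped the commit above.
# If it is older, update with `pip install -U bitsandbytes`.
print(bnb.__version__)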

@fightingman1

I have updated to version 0.38.1, but this problem still exists.
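
A minimal repro sketch (not from the original reports) that exercises the same int8 matmul path and can be used to check whether the assertion still fires on a given setup; the layer sizes are arbitrary:

import torch
import bitsandbytes as bnb

# Sketch of a minimal reproduction: a single 8-bit linear layer forward
# pass, which routes through the same igemmlt kernel that raises the
# assertion quoted earlier in this thread.
layer = bnb.nn.Linear8bitLt(64, 64, has_fp16_weights=False).cuda()
x = torch.randn(1, 64, dtype=torch.float16, device="cuda")
print(layer(x).shape)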

@itjuba

itjuba commented May 26, 2023

Having the same issue here
