Package Details: python-bitsandbytes-git 0.44.1.11.g9264f02-1
| Git Clone URL: | https://aur.archlinux.org/python-bitsandbytes-git.git (read-only) |
|---|---|
| Package Base: | python-bitsandbytes-git |
| Description: | Lightweight wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and quantization functions. |
| Upstream URL: | https://github.com/TimDettmers/bitsandbytes |
| Licenses: | MIT |
| Conflicts: | python-bitsandbytes |
| Provides: | python-bitsandbytes |
| Submitter: | Premik |
| Maintainer: | Premik |
| Last Packager: | Premik |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2023-12-26 15:48 (UTC) |
| Last Updated: | 2024-11-15 22:19 (UTC) |
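
For reference, a typical way to build and install this package from the clone URL above (the standard AUR `makepkg` workflow, assuming `git` and the `base-devel` group are installed):

```
git clone https://aur.archlinux.org/python-bitsandbytes-git.git
cd python-bitsandbytes-git
makepkg -si  # build the package and install it with pacman
```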
Dependencies (12)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR)
- python (python37AUR, python311AUR, python310AUR)
- python-accelerateAUR
- python-einopsAUR
- python-lion-pytorchAUR
- python-pytorch (python-pytorch-mkl-gitAUR, python-pytorch-cuda-gitAUR, python-pytorch-mkl-cuda-gitAUR, python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-rocm-binAUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm)
- python-scipy (python-scipy-gitAUR, python-scipy-mklAUR, python-scipy-mkl-tbbAUR, python-scipy-mkl-binAUR)
- python-transformersAUR
- git (git-gitAUR, git-glAUR) (make)
- python-build (make)
- python-installer (python-installer-gitAUR) (make)
- python-pytest (check)
Required by (2)
- python-transformers (requires python-bitsandbytes) (optional)
- python-trl (requires python-bitsandbytes) (optional)
Latest Comments
arzeth commented on 2024-04-18 19:32 (UTC) (edited on 2024-04-18 21:36 (UTC) by arzeth)
I had to replace the `make` line with a build command that lists the target CUDA Compute Capabilities explicitly (`50;52;...` means compile code for each of the specified Compute Capability versions, which can be found by running `/opt/cuda/extras/demo_suite/deviceQuery`; e.g., a GTX 1660 Super is 7.5). For non-CUDA users there are fewer args.
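
A minimal sketch of what such a replacement could look like, assuming the upstream bitsandbytes CMake build and its `COMPUTE_BACKEND` / `COMPUTE_CAPABILITY` options; the option names and capability values here are illustrative assumptions, not the exact commands from the comment above:

```
# Hypothetical replacement for the PKGBUILD's `make` line (assumed option names).
# CUDA build: list the Compute Capabilities reported by deviceQuery,
# e.g. "75" for a GTX 1660 Super.
cmake -DCOMPUTE_BACKEND=cuda -DCOMPUTE_CAPABILITY="75" -S .
make

# Non-CUDA (CPU-only) build needs fewer arguments:
cmake -DCOMPUTE_BACKEND=cpu -S .
make
```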