Package Details: python-vllm-cuda 0.7.2-3
| Git Clone URL: | https://aur.archlinux.org/python-vllm-cuda.git (read-only) |
|---|---|
| Package Base: | python-vllm-cuda |
| Description: | high-throughput and memory-efficient inference and serving engine for LLMs |
| Upstream URL: | https://github.com/vllm-project/vllm |
| Licenses: | Apache-2.0 |
| Conflicts: | python-vllm |
| Provides: | python-vllm |
| Submitter: | envolution |
| Maintainer: | envolution |
| Last Packager: | envolution |
| Votes: | 0 |
| Popularity: | 0.000000 |
| First Submitted: | 2024-12-01 16:12 (UTC) |
| Last Updated: | 2025-02-12 21:30 (UTC) |
Dependencies (8)
- python (python37AUR, python311AUR, python310AUR)
- python-installer
- python-pytorch (python-pytorch-rocm-binAUR, python-pytorch-cxx11abiAUR, python-pytorch-cxx11abi-optAUR, python-pytorch-cxx11abi-cudaAUR, python-pytorch-cxx11abi-opt-cudaAUR, python-pytorch-cxx11abi-rocmAUR, python-pytorch-cxx11abi-opt-rocmAUR, python-pytorch-cuda, python-pytorch-opt, python-pytorch-opt-cuda, python-pytorch-opt-rocm, python-pytorch-rocm)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR) (make)
- cuda-tools (cuda11.1-toolsAUR, cuda12.0-toolsAUR, cuda11.4-toolsAUR, cuda11.4-versioned-toolsAUR, cuda12.0-versioned-toolsAUR) (make)
- gcc13 (make)
- git (git-gitAUR, git-glAUR) (make)
- python-setuptools-scm (make)
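For reference, the dependency list above could be expressed in a PKGBUILD roughly as follows. This is an illustrative sketch reconstructed from the list, not the maintainer's actual PKGBUILD; array names follow the standard PKGBUILD conventions.

```shell
# Hypothetical PKGBUILD fragment -- mirrors the dependency list above.
# Runtime dependencies:
depends=(
  python
  python-installer
  python-pytorch
)
# Build-only dependencies (the "(make)" entries above):
makedepends=(
  cuda
  cuda-tools
  gcc13
  git
  python-setuptools-scm
)
```

Alternative providers shown in parentheses above (e.g. python-pytorch-cuda for python-pytorch) satisfy the same dependency at install time.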
Required by (0)
Sources (1)
Latest Comments
envolution commented on 2025-02-12 21:30 (UTC)
@Sherlock-Holo thanks for your report; it's been added to makedepends.
Sherlock-Holo commented on 2025-02-11 09:28 (UTC) (edited on 2025-02-11 09:30 (UTC) by Sherlock-Holo)
When building this package, it fails with:
Traceback (most recent call last):
File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 18, in <module>
from setuptools_scm import get_version
ModuleNotFoundError: No module named 'setuptools_scm'
If I add the missing makedepends python-setuptools-scm, it then fails with:
Traceback (most recent call last):
File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 633, in <module>
version=get_vllm_version(),
~~~~~~~~~~~~~~~~^^
File "/home/sherlock/.cache/yay/python-vllm-cuda/src/vllm/setup.py", line 527, in get_vllm_version
raise RuntimeError("Unknown runtime environment")
RuntimeError: Unknown runtime environment
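The "Unknown runtime environment" error is raised by get_vllm_version() in vllm's setup.py when it cannot identify a supported build target (CUDA, ROCm, CPU, etc.). A minimal sketch of that kind of dispatch is below; the function name, detection logic, and the VLLM_TARGET_DEVICE override are illustrative assumptions, not vllm's exact code.

```python
import os

# Hypothetical sketch of the target-device dispatch that produces the
# "Unknown runtime environment" error. vllm's real setup.py also probes
# the installed torch build; here only an explicit env override is shown.
def detect_target_device() -> str:
    """Return the build target, honoring an explicit env override."""
    target = os.environ.get("VLLM_TARGET_DEVICE", "").lower()
    if target in ("cuda", "rocm", "cpu"):
        return target
    # No recognizable target: this is the branch the traceback above hits.
    raise RuntimeError("Unknown runtime environment")
```

In practice this means the build environment must expose a usable CUDA (or ROCm/CPU) toolchain to setup.py, not just have the packages installed.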
envolution commented on 2024-12-28 04:47 (UTC) (edited on 2024-12-28 04:51 (UTC) by envolution)
Not working currently due to lack of python 3.13 support in vllm-flash-attention. Try python-vllm-bin or the cpu version python-vllm