Package Details: python-llama-cpp 0.3.6-3
| Git Clone URL | https://aur.archlinux.org/python-llama-cpp.git (read-only) |
|---|---|
| Package Base | python-llama-cpp |
| Description | Python bindings for llama.cpp |
| Upstream URL | https://github.com/abetlen/llama-cpp-python |
| Licenses | GPL-3.0-or-later |
| Submitter | Freed |
| Maintainer | envolution |
| Last Packager | envolution |
| Votes | 2 |
| Popularity | 0.57 |
| First Submitted | 2023-07-18 08:30 (UTC) |
| Last Updated | 2025-01-09 03:06 (UTC) |
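Since this package just ships upstream's Python bindings, basic use after installation looks roughly like the sketch below (the model path is a placeholder; any locally downloaded GGUF model works):

```python
from llama_cpp import Llama

# Placeholder path: point this at any local GGUF model file.
llm = Llama(model_path="/path/to/model.gguf", n_ctx=2048)

# Simple completion call; the result mirrors the OpenAI completion schema.
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```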
Dependencies (20)
- python-diskcache [AUR]
- python-numpy (python-numpy-git [AUR], python-numpy1 [AUR], python-numpy-mkl-bin [AUR], python-numpy-mkl-tbb [AUR], python-numpy-mkl [AUR])
- python-typing_extensions
- python-build (make)
- python-installer (make)
- python-scikit-build (make)
- python-scikit-build-core (make)
- python-wheel (make)
- python-fastapi (check)
- python-httpx (python-httpx-git [AUR]) (check)
- python-huggingface-hub (python-huggingface-hub-git [AUR]) (check)
- python-pydantic-settings (check)
- python-pytest (check)
- python-scipy (python-scipy-git [AUR], python-scipy-mkl-bin [AUR], python-scipy-mkl-tbb [AUR], python-scipy-mkl [AUR]) (check)
- python-sse-starlette [AUR] (check)
- python-fastapi (optional)
- python-pyaml (optional)
- python-pydantic-settings (optional)
- python-sse-starlette [AUR] (optional)
- uvicorn (optional)
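The optional dependencies above (python-fastapi, python-pydantic-settings, python-sse-starlette, uvicorn) are what upstream's OpenAI-compatible HTTP server needs. With them installed, the server can be started roughly like this (model path is a placeholder):

```sh
# Assumes the optional server dependencies are installed and a GGUF model is available.
python -m llama_cpp.server --model /path/to/model.gguf
# The server then listens on http://localhost:8000 with OpenAI-compatible endpoints.
```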
Required by (2)
- python-outlines (optional)
- python-translate-shell (optional)
Latest Comments
envolution commented on 2025-01-15 18:24 (UTC)
@furrykef I just added python-llama-cpp-cuda - please let me know if you have any issues with that package.
furrykef commented on 2025-01-15 17:44 (UTC)
Users of this package should be aware that you need to define, say,
CMAKE_ARGS="-DGGML_CUDA=on"
in your environment for GPU acceleration when building the package. Which flags you want enabled will depend on your hardware; read the README on the llama-cpp-python GitHub to find out which. It'd probably be a good idea to add a note to this effect at the top of the PKGBUILD.
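As a concrete sketch of the advice above (flag choice is hardware-dependent; GGML_CUDA is just one example):

```sh
# Export the CMake flags before building so GPU support is compiled in.
# -DGGML_CUDA=on is one example; pick flags per the llama-cpp-python README.
export CMAKE_ARGS="-DGGML_CUDA=on"
makepkg -si
```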
alfalfa commented on 2024-03-24 11:02 (UTC) (edited on 2024-03-24 11:06 (UTC) by alfalfa)
Building llama-cpp...

```
==> Making package: llama-cpp c3e53b4-1 (Sun 24 Mar 2024 06:57:44 AM)
==> Checking runtime dependencies...
==> Checking buildtime dependencies...
==> Retrieving sources...
  -> Downloading master-c3e53b4.tar.gz...
curl: (22) The requested URL returned error: 404
==> ERROR: Failure while downloading https://github.com/ggerganov/llama.cpp/archive/master-c3e53b4.tar.gz
    Aborting...
Failed to build llama-cpp
```
carsme commented on 2024-01-10 17:27 (UTC)
Hey, would you mind updating this package? Feel free to add me as co-maintainer if you'd like help to keep it up-to-date. Thanks!
colobas commented on 2023-09-02 21:40 (UTC)