Package Details: llama-cpp-rocm-git r1110.423db74-1
Git Clone URL: https://aur.archlinux.org/llama-cpp-rocm-git.git (read-only)
Package Base: llama-cpp-rocm-git
Description: Port of Facebook's LLaMA model in C/C++ (with ROCm) (PR#1087)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama-cpp, llama.cpp
Provides: llama-cpp, llama.cpp
Submitter: ulyssesrr
Maintainer: None
Last Packager: ulyssesrr
Votes: 0
Popularity: 0.000000
First Submitted: 2023-08-22 16:25 (UTC)
Last Updated: 2023-08-22 16:25 (UTC)
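
For context, the usual way to build an AUR package such as this one is to clone the repository above and run makepkg; the commands below are a generic sketch of that workflow, not instructions taken from this page.

    # Clone the AUR repository, then build and install the package.
    git clone https://aur.archlinux.org/llama-cpp-rocm-git.git
    cd llama-cpp-rocm-git
    makepkg -si   # -s pulls in missing dependencies, -i installs the built package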
Dependencies (3)
- hipblas (alternative: opencl-amd-dev from the AUR)
- git (make; alternatives: git-git, git-gl from the AUR)
- rocm-llvm (make; alternative: opencl-amd-dev from the AUR)
Required by (1)
- python-llama-cpp (requires llama-cpp)
Latest Comments
dreieck commented on 2024-03-25 22:31 (UTC)
llama-cpp and whisper.cpp need to conflict with each other.

Regards and thanks for maintaining!
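
For illustration only, the mutual conflict dreieck asks for would amount to listing whisper.cpp alongside the existing entries in the PKGBUILD's conflicts array; this is a hypothetical sketch, not the maintainer's actual change.

    # Hypothetical PKGBUILD excerpt: also declare a conflict with whisper.cpp
    conflicts=('llama-cpp' 'llama.cpp' 'whisper.cpp')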
edtoml commented on 2024-03-20 21:45 (UTC) (edited on 2024-03-20 21:52 (UTC) by edtoml)
To build against ROCm 6.0.2, edit the PKGBUILD.
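
The exact change is not preserved in the comment; as a hypothetical illustration, building against a particular ROCm release usually means pointing the PKGBUILD's build() step at that ROCm installation and its bundled clang, roughly as below (paths, flags, and structure are assumptions, not the real PKGBUILD).

    # Hypothetical build() excerpt -- not the actual PKGBUILD contents.
    build() {
      cd llama.cpp
      export ROCM_PATH=/opt/rocm                 # assumed location of the ROCm 6.0.2 install
      export CC="$ROCM_PATH/llvm/bin/clang"
      export CXX="$ROCM_PATH/llvm/bin/clang++"
      cmake -B build -DLLAMA_HIPBLAS=ON          # HIP/ROCm backend option llama.cpp used at the time
      cmake --build build
    }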
edtoml commented on 2024-03-02 19:09 (UTC)
Using the latest version of the master branch, it now works with ROCm 6.0.x.
edtoml commented on 2024-02-06 01:00 (UTC)
This package builds with the ROCm 6.0.1 packages but cannot run models. It misreports the context sizes and then segfaults (it worked with ROCm 5.7.1):

    [ed@grover Mixtral-8x7B-Instruct-v0.1-GGUF]$ llama.cpp -ngl 7 -i -ins -m ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 16384
    main: warning: base model only supports context sizes no greater than 2048 tokens (16384 specified)
    main: build = 1163 (9035cfcd)
    main: seed  = 1707181157
    ggml_init_cublas: found 1 ROCm devices:
      Device 0: AMD Radeon RX 6600 XT, compute capability 10.3
    Segmentation fault (core dumped)
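
Not mentioned in the comment, but a commonly documented ROCm workaround for RDNA2 consumer cards such as the RX 6600 XT (gfx1032) is to override the GFX version the runtime reports; whether it has any bearing on this particular segfault is unverified.

    # Generic ROCm workaround for gfx103x consumer GPUs; untested against the crash above.
    HSA_OVERRIDE_GFX_VERSION=10.3.0 llama.cpp -ngl 7 -m ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf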
CorvetteCole commented on 2023-10-18 20:22 (UTC)
You don't need to pull from the PR #1087 branch anymore; it has been merged.