ROCm/hipblas support has been merged into llama.cpp for quite some time, consider using this package instead: https://aur.archlinux.org/packages/llama.cpp-hipblas-git
Package Details: llama-cpp-rocm-git r1110.423db74-1
| Git Clone URL | https://aur.archlinux.org/llama-cpp-rocm-git.git (read-only) |
|---|---|
| Package Base | llama-cpp-rocm-git |
| Description | Port of Facebook's LLaMA model in C/C++ (with ROCm) (PR#1087) |
| Upstream URL | https://github.com/ggerganov/llama.cpp |
| Licenses | MIT |
| Conflicts | llama-cpp, llama.cpp |
| Provides | llama-cpp, llama.cpp |
| Submitter | ulyssesrr |
| Maintainer | ulyssesrr |
| Last Packager | ulyssesrr |
| Votes | 0 |
| Popularity | 0.000000 |
| First Submitted | 2023-08-22 16:25 (UTC) |
| Last Updated | 2023-08-22 16:25 (UTC) |
Dependencies (3)
- hipblas (opencl-amd-dev (AUR))
- git (git-git (AUR), git-gl (AUR)) (make)
- rocm-llvm (opencl-amd-dev (AUR)) (make)
Required by (0)
Sources (1)
Latest Comments
ulyssesrr commented on 2025-03-17 21:40 (UTC)
ROCm/hipblas support has been merged into llama.cpp for quite some time, consider using this package instead: https://aur.archlinux.org/packages/llama.cpp-hipblas-git
mags commented on 2025-03-15 17:06 (UTC)
This fails to build in a clean chroot without adding cmake to makedepends.
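A likely fix (a sketch; the exact existing entries in this PKGBUILD's `makedepends` are assumed from the dependency list above) is to add cmake to the `makedepends` array:

```bash
# In the PKGBUILD: add cmake so clean-chroot builds have it available.
makedepends=('git' 'rocm-llvm' 'cmake')
```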
dreieck commented on 2024-03-25 22:31 (UTC)
llama-cpp and whisper.cpp need to conflict with each other:

```
error: failed to commit transaction (conflicting files)
llama-cpp-rocm-git: /usr/include/ggml.h exists in filesystem (owned by whisper.cpp-clblas)
```
Regards and thanks for maintaining!
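One way to express the requested conflict in the PKGBUILD would be to extend the existing `conflicts` array (a sketch; `whisper.cpp-clblas` is taken from the error message above, and other whisper.cpp variants shipping the same headers may need listing too):

```bash
# Extend conflicts so pacman refuses to co-install packages that also
# ship /usr/include/ggml.h (sketch based on the error output above).
conflicts=('llama-cpp' 'llama.cpp' 'whisper.cpp-clblas')
```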
edtoml commented on 2024-03-20 21:45 (UTC) (edited on 2024-03-20 21:52 (UTC) by edtoml)
To build against ROCm 6.0.2, edit the PKGBUILD:

```bash
# Replace prepare() with:
prepare() {
  cd "$srcdir/$_reponame"
  git fetch origin
  git checkout
}
```

```bash
# Ignore this if you have only one GPU. I have an iGPU, so
# /opt/rocm/llvm/bin/amdgpu-arch returns two values; replace with:
-DAMDGPU_TARGETS="$(/opt/rocm/llvm/bin/amdgpu-arch | tail -1)"
# You may have to use "head -1" if tail -1 is your iGPU.
```
edtoml commented on 2024-03-02 19:09 (UTC)
Using the latest version of the master branch, it now works with ROCm 6.0.x.
edtoml commented on 2024-02-06 01:00 (UTC)
This package builds with the ROCm 6.0.1 packages but cannot run models. It misreports the context size and then segfaults; it worked with ROCm 5.7.1.

```
[ed@grover Mixtral-8x7B-Instruct-v0.1-GGUF]$ llama.cpp -ngl 7 -i -ins -m ./mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf -c 16384
main: warning: base model only supports context sizes no greater than 2048 tokens (16384 specified)
main: build = 1163 (9035cfcd)
main: seed = 1707181157
ggml_init_cublas: found 1 ROCm devices:
  Device 0: AMD Radeon RX 6600 XT, compute capability 10.3
Segmentation fault (core dumped)
```
CorvetteCole commented on 2023-10-18 20:22 (UTC)
You don't need to pull from the PR #1087 branch anymore; it has been merged.