Package Details: ollama-cuda-git 0.5.5+r3779+g6982e9cc9-3

Git Clone URL: https://aur.archlinux.org/ollama-cuda-git.git (read-only)
Package Base: ollama-cuda-git
Description: Create, run and share large language models (LLMs)
Upstream URL: https://github.com/ollama/ollama
Licenses: MIT
Conflicts: ollama
Provides: ollama
Submitter: sr.team
Maintainer: None
Last Packager: envolution
Votes: 5
Popularity: 1.07
First Submitted: 2024-02-22 23:22 (UTC)
Last Updated: 2025-01-14 06:03 (UTC)

Dependencies (5)

Required by (29)

Sources (5)

Latest Comments


brauliobo commented on 2024-11-14 18:59 (UTC)

Got the following error:

==> Validating source files with b2sums...
    ollama ... Skipped
    ollama.service ... Passed
    sysusers.conf ... Passed
    tmpfiles.d ... Passed
==> Removing existing $srcdir/ directory...
==> Extracting sources...
  -> Creating working copy of ollama git repo...
Cloning into 'ollama'...
done.
==> Starting prepare()...
sed: can't read llm/generate/gen_linux.sh: No such file or directory
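
The failing sed call comes from the PKGBUILD's prepare(), which still patches a script that upstream has most likely removed. A minimal diagnostic sketch, assuming the source layout from the log above (src/ollama) and the path from the error message; it only locates the stale reference and the file's upstream history:

# find the sed line in the PKGBUILD that targets the missing script
grep -n 'gen_linux.sh' PKGBUILD
# confirm the file is gone upstream and see when it was last touched
git -C src/ollama log --oneline -1 -- llm/generate/gen_linux.sh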

sr.team commented on 2024-08-21 04:42 (UTC)

@JamesMowery you need to preinstall the makedepends before building the package
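
For reference, makepkg can sync the makedepends itself, which is usually the easiest way to fix missing build tools such as nvcc. A short sketch; the explicit package names are illustrative, the authoritative list is the makedepends array in the PKGBUILD:

# build and install, syncing makedepends first
makepkg -si
# or preinstall them explicitly (names assumed, check the PKGBUILD)
sudo pacman -S --needed --asdeps cuda cmake go git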

JamesMowery commented on 2024-08-21 04:28 (UTC)

Getting the following error when installing on Nvidia 555 + Wayland + KDE Plasma.

+ gzip -n --best -f ../build/linux/x86_64/cpu/bin/ollama_llama_server
+ '[' -z '' ']'
+ '[' -d /usr/local/cuda/lib64 ']'
+ '[' -z '' ']'
+ '[' -d /opt/cuda/targets/x86_64-linux/lib ']'
+ CUDA_LIB_DIR=/opt/cuda/targets/x86_64-linux/lib
+ '[' -z '' ']'
+ CUDART_LIB_DIR=/opt/cuda/targets/x86_64-linux/lib
+ '[' -z '' -a -d /opt/cuda/targets/x86_64-linux/lib ']'
+ echo 'CUDA libraries detected - building dynamic CUDA library'
CUDA libraries detected - building dynamic CUDA library
+ init_vars
+ case "${GOARCH}" in
+ ARCH=x86_64
+ LLAMACPP_DIR=../llama.cpp
+ CMAKE_DEFS=-DCMAKE_SKIP_RPATH=on
+ CMAKE_TARGETS='--target ollama_llama_server'
+ echo '-march=native -mtune=generic -O2 -pipe -fno-plt'
+ grep -- -g
+ CMAKE_DEFS='-DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DCMAKE_SKIP_RPATH=on'
+ case $(uname -s) in
++ uname -s
+ LIB_EXT=so
+ WHOLE_ARCHIVE=-Wl,--whole-archive
+ NO_WHOLE_ARCHIVE=-Wl,--no-whole-archive
+ GCC_ARCH=
+ DIST_BASE=../../dist/linux-amd64/
+ '[' -z '50;52;61;70;75;80' ']'
++ which pigz
++ echo gzip
+ GZIP=gzip
++ head -1
++ ls /opt/cuda/targets/x86_64-linux/lib/libcudart.so.12 /opt/cuda/targets/x86_64-linux/lib/libcudart.so.12.5.82
++ cut -f3 -d.
+ CUDA_MAJOR=12
+ '[' -n 12 -a -z '' ']'
+ CUDA_VARIANT=_v12
+ '[' x86_64 == arm64 ']'
+ '[' -n '' ']'
+ CMAKE_CUDA_DEFS='-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80'
+ export CUDAFLAGS=-t8
+ CUDAFLAGS=-t8
+ CMAKE_DEFS='-DCMAKE_SKIP_RPATH=on -DBUILD_SHARED_LIBS=on -DCMAKE_POSITION_INDEPENDENT_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release -DGGML_NATIVE=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DCMAKE_SKIP_RPATH=on  -DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80 -DGGML_STATIC=off'
+ BUILD_DIR=../build/linux/x86_64/cuda_v12
+ export 'LLAMA_SERVER_LDFLAGS=-L/opt/cuda/targets/x86_64-linux/lib -lcudart -lcublas -lcublasLt -lcuda'
+ LLAMA_SERVER_LDFLAGS='-L/opt/cuda/targets/x86_64-linux/lib -lcudart -lcublas -lcublasLt -lcuda'
+ CUDA_DIST_DIR=../../dist/linux-amd64//lib/ollama
+ build
+ cmake -S ../llama.cpp -B ../build/linux/x86_64/cuda_v12 -DCMAKE_SKIP_RPATH=on -DBUILD_SHARED_LIBS=on -DCMAKE_POSITION_INDEPENDENT_CODE=on -D LLAMA_LTO=on -D CMAKE_BUILD_TYPE=Release -DGGML_NATIVE=off -DGGML_AVX=on -DGGML_AVX2=off -DGGML_AVX512=off -DGGML_FMA=off -DGGML_F16C=off -DGGML_OPENMP=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DCMAKE_SKIP_RPATH=on -DGGML_CUDA=on '-DCMAKE_CUDA_ARCHITECTURES=50;52;61;70;75;80' -DGGML_STATIC=off
-- The C compiler identification is GNU 14.2.1
-- The CXX compiler identification is GNU 14.2.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: /usr/bin/git (found version "2.46.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Using llamafile
-- Could not find nvcc, please set CUDAToolkit_ROOT.
CMake Warning at ggml/src/CMakeLists.txt:397 (message):
  CUDA not found


-- CUDA host compiler is GNU
CMake Error at ggml/src/CMakeLists.txt:984 (get_flags):
  get_flags Function invoked with incorrect arguments for function named:
  get_flags


-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring incomplete, errors occurred!
llm/generate/generate_linux.go:3: running "bash": exit status 1
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: ollama-cuda-git-exit status 4
 -> Failed to install the following packages. Manual intervention is required:
ollama-cuda-git - exit status 4
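
The log above shows CMake finding the CUDA libraries under /opt/cuda but not nvcc, which on Arch is installed at /opt/cuda/bin by the cuda package. A hedged workaround sketch for the case where cuda is installed but nvcc is still not found; CUDAToolkit_ROOT is the standard hint variable for CMake's FindCUDAToolkit, and the path is taken from the detection lines above:

# make nvcc visible to the build environment
export PATH=/opt/cuda/bin:$PATH
# and/or point CMake at the toolkit root directly
export CUDAToolkit_ROOT=/opt/cuda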

nmanarch commented on 2024-04-17 07:49 (UTC) (edited on 2024-04-29 10:12 (UTC) by nmanarch)

I have found a little trick shown by others in an ollama git issue. For those who want ollama CUDA to run without AVX, try:

https://github.com/ollama/ollama/issues/2187#issuecomment-2082334649

Thanks to @sr.team and to all.

Hello. I apologize. Since 1.29, GPU support without an AVX CPU has been blocked in ollama. Can someone help to get this working again? See https://github.com/ollama/ollama/issues/2187, and the bypass proposed by dbzoo, which works but was not merged into main: https://github.com/dbzoo/ollama/commit/45eb1048496780a78ed07cf39b3ce6b62b5a72e3 Many thanks. Have a nice day.
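
For the record, dbzoo's bypass is a source patch, so it has to be applied before building. A hedged sketch of wiring it into prepare(); the .patch URL is derived from the commit link above, the function layout is an assumption about this PKGBUILD, and the patch may no longer apply cleanly on newer trees:

prepare() {
  cd ollama
  # skip the runtime AVX check that gates GPU support on non-AVX CPUs
  curl -L https://github.com/dbzoo/ollama/commit/45eb1048496780a78ed07cf39b3ce6b62b5a72e3.patch |
    patch -p1
}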

nmanarch commented on 2024-04-04 14:23 (UTC)

Yes, it is fixed; it builds and runs. Thanks.

sr.team commented on 2024-04-01 12:43 (UTC)

@nmanarch thanks for the report. This problem should be fixed now

nmanarch commented on 2024-03-31 23:00 (UTC) (edited on 2024-03-31 23:59 (UTC) by nmanarch)

Hi! The build failed. Does someone have a trick to solve this?


/var/tmp/pamac-build-nico/ollama-cuda-git/src/ollama/llm/llama.cpp/ggml-cuda.cu:9432:13: note: in instantiation of function template specialization 'mul_mat_vec_q_cuda<256, 8, block_iq3_s, 1, &vec_dot_iq3_s_q8_1>' requested here
            mul_mat_vec_q_cuda<QK_K, QI3_XS, block_iq3_s, 1, vec_dot_iq3_s_q8_1>
            ^
error: option 'cf-protection=return' cannot be specified on this target
error: option 'cf-protection=branch' cannot be specified on this target
194 warnings and 2 errors generated when compiling for gfx1010.
make[3]: *** [CMakeFiles/ggml.dir/build.make:132: CMakeFiles/ggml.dir/ggml-cuda.cu.o] Error 1
make[2]: *** [CMakeFiles/Makefile2:838: CMakeFiles/ggml.dir/all] Error 2
make[1]: *** [CMakeFiles/Makefile2:3575: ext_server/CMakeFiles/ext_server.dir/rule] Error 2
make: *** [Makefile:1440: ext_server] Error 2
llm/generate/generate_linux.go:3: running "bash": exit status 2
==> ERROR: A failure occurred in build().
    Aborting...

So I found a hint at https://aur.archlinux.org/packages/ollama-rocm-git?O=10 and removed -fcf-protection from my /etc/makepkg.conf.
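
Concretely, that means editing the CFLAGS line in /etc/makepkg.conf. A before/after sketch, assuming the stock Arch flags (abbreviated); only -fcf-protection is dropped, since clang rejects it when compiling for GPU targets like gfx1010:

# before (stock, abbreviated)
#CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt -fcf-protection"
# after
CFLAGS="-march=x86-64 -mtune=generic -O2 -pipe -fno-plt"
CXXFLAGS="$CFLAGS"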

But now I have:


 g++ -fPIC -g -shared -o ../llama.cpp/build/linux/x86_64/rocm/lib/libext_server.so -Wl,--whole-archive ../llama.cpp/build/linux/x86_64/rocm/ext_server/libext_server.a -Wl,--no-whole-archive ../llama.cpp/build/linux/x86_64/rocm/common/libcommon.a ../llama.cpp/build/linux/x86_64/rocm/libllama.a '-Wl,-rpath,$ORIGIN' -lpthread -ldl -lm -L/opt/rocm/lib -L/opt/amdgpu/lib/x86_64-linux-gnu/ '-Wl,-rpath,$ORIGIN/../../rocm/' -lhipblas -lrocblas -lamdhip64 -lrocsolver -lamd_comgr -lhsa-runtime64 -lrocsparse -ldrm -ldrm_amdgpu
/usr/sbin/ld: ../llama.cpp/build/linux/x86_64/rocm/ext_server/libext_server.a: member %B in archive is not an object
collect2: error: ld returned 1 exit status
llm/generate/generate_linux.go:3: running "bash": exit status 1
==> ERROR: A failure occurred in build().
    Aborting...
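
A "member in archive is not an object" error from the BFD linker often means the static archive contains LLVM bitcode rather than machine objects, which can happen when the GPU toolchain compiles with -flto and a different compiler does the final link. A diagnostic sketch; the extracted member name is hypothetical:

# list the archive and inspect one member
ar t ../llama.cpp/build/linux/x86_64/rocm/ext_server/libext_server.a
ar x libext_server.a ext_server.o && file ext_server.o  # ext_server.o: hypothetical member name
# "LLVM IR bitcode" output means LTO objects; try building without LTO,
# e.g. options=(!lto) in the PKGBUILD or dropping any -flto/LTO=on flags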