Package Details: ollama-vulkan-git 0.3.9+5.r3417.20240902.ad3eb00b-2

Git Clone URL: https://aur.archlinux.org/ollama-nogpu-git.git (read-only)
Package Base: ollama-nogpu-git
Description: Create, run and share large language models (LLMs), with a Vulkan backend.
Upstream URL: https://github.com/jmorganca/ollama
Licenses: MIT
Conflicts: ollama
Provides: ollama, ollama-git
Submitter: dreieck
Maintainer: None
Last Packager: dreieck
Votes: 5
Popularity: 0.47
First Submitted: 2024-04-17 15:09 (UTC)
Last Updated: 2024-09-03 11:26 (UTC)


Latest Comments


brianwo commented on 2024-06-07 14:50 (UTC)

@dreieck, I have no idea about that.

dreieck commented on 2024-06-07 13:42 (UTC) (edited on 2024-06-07 14:46 (UTC) by dreieck)

@brianwo,

icx […] icpx

Do you have an idea which packages / which upstream project provides those executables? I have no idea what they are.
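
For reference: icx and icpx are Intel's oneAPI DPC++/C++ compiler drivers. One way to look up which repository package ships them on Arch (a sketch, assuming the pacman file databases can be synced):

    pacman -Fy          # sync the file databases first
    pacman -F icx icpx  # list packages shipping files with these names;
                        # they likely come from the intel-oneapi packaging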

--

Hacky workaround:
I disabled the upstream-added oneAPI build by setting ONEAPI_ROOT to a (hopefully) non-existent directory. See https://github.com/ollama/ollama/issues/4511#issuecomment-2154973327.
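
In PKGBUILD terms the workaround looks roughly like this sketch (the variable is read by llm/generate/gen_linux.sh; the placeholder path and the build() body are illustrative, not the exact packaging):

    build() {
      cd "$srcdir/ollama"
      # Point the oneAPI auto-detection at a directory that does not exist,
      # so gen_linux.sh skips its oneAPI/SYCL runner build:
      export ONEAPI_ROOT='/nonexistent-oneapi'
      go generate ./...
      go build .
    }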

Regards!

brianwo commented on 2024-06-07 11:32 (UTC) (edited on 2024-07-29 10:38 (UTC) by brianwo)

Unable to build; it looks like it now tries to build with oneAPI. It was working with 0.1.39+12.r2800.20240528.ad897080-1 previously.

++++ '[' '' = /opt/intel/oneapi/compiler/2024.1/opt/compiler/lib:/opt/intel/oneapi/compiler/2024.1/lib ']'
++++ printf %s /opt/intel/oneapi/tbb/2021.12/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/compiler/2024.1/opt/compiler/lib:/opt/intel/oneapi/compiler/2024.1/lib
+++ LD_LIBRARY_PATH=/opt/intel/oneapi/tbb/2021.12/env/../lib/intel64/gcc4.8:/opt/intel/oneapi/compiler/2024.1/opt/compiler/lib:/opt/intel/oneapi/compiler/2024.1/lib
+++ export LD_LIBRARY_PATH
++++ prepend_path /opt/intel/oneapi/tbb/2021.12/env/../include ''
++++ path_to_add=/opt/intel/oneapi/tbb/2021.12/env/../include
++++ path_is_now=
++++ '[' '' = '' ']'
++++ printf %s /opt/intel/oneapi/tbb/2021.12/env/../include
+++ CPATH=/opt/intel/oneapi/tbb/2021.12/env/../include
+++ export CPATH
++++ prepend_path /opt/intel/oneapi/tbb/2021.12/env/.. /opt/intel/oneapi/compiler/2024.1
++++ path_to_add=/opt/intel/oneapi/tbb/2021.12/env/..
++++ path_is_now=/opt/intel/oneapi/compiler/2024.1
++++ '[' '' = /opt/intel/oneapi/compiler/2024.1 ']'
++++ printf %s /opt/intel/oneapi/tbb/2021.12/env/..:/opt/intel/oneapi/compiler/2024.1
+++ CMAKE_PREFIX_PATH=/opt/intel/oneapi/tbb/2021.12/env/..:/opt/intel/oneapi/compiler/2024.1
+++ export CMAKE_PREFIX_PATH
++++ prepend_path /opt/intel/oneapi/tbb/2021.12/env/../lib/pkgconfig /opt/intel/oneapi/compiler/2024.1/lib/pkgconfig
++++ path_to_add=/opt/intel/oneapi/tbb/2021.12/env/../lib/pkgconfig
++++ path_is_now=/opt/intel/oneapi/compiler/2024.1/lib/pkgconfig
++++ '[' '' = /opt/intel/oneapi/compiler/2024.1/lib/pkgconfig ']'
++++ printf %s /opt/intel/oneapi/tbb/2021.12/env/../lib/pkgconfig:/opt/intel/oneapi/compiler/2024.1/lib/pkgconfig
+++ PKG_CONFIG_PATH=/opt/intel/oneapi/tbb/2021.12/env/../lib/pkgconfig:/opt/intel/oneapi/compiler/2024.1/lib/pkgconfig
+++ export PKG_CONFIG_PATH
++ temp_var=2
++ '[' 2 -eq 0 ']'
++ echo ':: oneAPI environment initialized ::'
:: oneAPI environment initialized ::
++ echo ' '

++ '[' 0 -ne 0 ']'
++ eval set -- ''\''--force'\'' \
 '
+++ set -- --force
++ prep_for_exit 0
++ script_return_code=0
++ unset -v SETVARS_CALL
++ unset -v SETVARS_ARGS
++ unset -v SETVARS_VARS_PATH
++ '[' 0 = '' ']'
++ '[' 0 -eq 0 ']'
++ SETVARS_COMPLETED=1
++ export SETVARS_COMPLETED
++ return 0
++ return
+ CC=icx
+ CMAKE_DEFS='-DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off  -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL=ON -DLLAMA_SYCL_F16=OFF'
+ BUILD_DIR=../build/linux/x86_64/oneapi
+ EXTRA_LIBS='-fsycl -Wl,-rpath,/opt/intel/oneapi/compiler/latest/lib,-rpath,/opt/intel/oneapi/mkl/latest/lib,-rpath,/opt/intel/oneapi/tbb/latest/lib,-rpath,/opt/intel/oneapi/compiler/latest/opt/oclfpga/linux64/lib -lOpenCL -lmkl_core -lmkl_sycl_blas -lmkl_intel_ilp64 -lmkl_tbb_thread -ltbb'
+ DEBUG_FLAGS=
+ build
+ cmake -S ../llama.cpp -B ../build/linux/x86_64/oneapi -DCMAKE_POSITION_INDEPENDENT_CODE=on -DLLAMA_NATIVE=off -DLLAMA_AVX=on -DLLAMA_AVX2=off -DLLAMA_AVX512=off -DLLAMA_FMA=off -DLLAMA_F16C=off -DCMAKE_BUILD_TYPE=Release -DLLAMA_SERVER_VERBOSE=off -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DLLAMA_SYCL=ON -DLLAMA_SYCL_F16=OFF
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_C_COMPILER:

    icx

  is not a full path and was not found in the PATH.

  Tell CMake where to find the compiler by setting either the environment
  variable "CC" or the CMake cache entry CMAKE_C_COMPILER to the full path to
  the compiler, or to the compiler name if it is in the PATH.


CMake Error at CMakeLists.txt:2 (project):
  The CMAKE_CXX_COMPILER:

    icpx

  is not a full path and was not found in the PATH.

  Tell CMake where to find the compiler by setting either the environment
  variable "CXX" or the CMake cache entry CMAKE_CXX_COMPILER to the full path
  to the compiler, or to the compiler name if it is in the PATH.


-- Configuring incomplete, errors occurred!
llm/generate/generate_linux.go:3: running "bash": exit status 1
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: ollama-nogpu-git-exit status 4
 -> Failed to install the following packages. Manual intervention is required:
ollama-vulkan-git - exit status 4
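
The trace above shows setvars.sh finishing (":: oneAPI environment initialized ::") while CMake still cannot find icx/icpx, so the compiler bin directory apparently never made it onto PATH. A minimal check, assuming oneAPI lives under /opt/intel/oneapi:

    # Are the compiler drivers visible at all?
    command -v icx icpx || echo 'icx/icpx not on PATH'
    # Re-source the environment (--force re-runs it even if already
    # initialized, as in the trace above) and check again:
    source /opt/intel/oneapi/setvars.sh --force
    command -v icx icpx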

dreieck commented on 2024-05-22 08:38 (UTC)

Reactivated Vulkan build by deactivating testing options.

dreieck commented on 2024-05-21 21:12 (UTC) (edited on 2024-05-21 21:12 (UTC) by dreieck)

Disabled Vulkan build since it currently fails.

dreieck commented on 2024-05-21 20:44 (UTC)

Upstream has implemented CUDA and ROCm skip variables 🎉. Implementing them and uploading a fixed PKGBUILD.
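
For reference, the switches are plain environment variables read by llm/generate/gen_linux.sh; used roughly like this (names as implemented upstream at the time of writing; verify against your checkout):

    export OLLAMA_SKIP_CUDA_GENERATE=1  # skip building the CUDA runners
    export OLLAMA_SKIP_ROCM_GENERATE=1  # skip building the ROCm runners
    go generate ./...                   # invokes gen_linux.sh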

nmanarch commented on 2024-05-18 09:41 (UTC)

Ok! That is sad. Perhaps they will agree to accept your code changes into their master? So many thanks for all of this.

dreieck commented on 2024-05-18 09:32 (UTC) (edited on 2024-05-18 09:38 (UTC) by dreieck)

Ahoj @nmanarch,

it seems upstream is moving too fast; the patch needs to be updated too often.

If I do not find an easier way to avoid building with ROCm or CUDA even when some of their files are installed, I might just give up.

↗ Upstream feature request to add a "kill switch" to force off ROCm and CUDA.

nmanarch commented on 2024-05-18 09:17 (UTC) (edited on 2024-05-18 09:37 (UTC) by nmanarch)

Hello @dreieck. I want to try your Ollama Vulkan build, but the patch fails to apply. I tried adding --fuzz 3 and --ignore-whitespace, but it did not help. Thanks for any trick.

Submodule path 'llm/llama.cpp': checked out '614d3b914e1c3e02596f869649eb4f1d3b68614d'
Applying patch disable-rocm-cuda.gen_linux.sh.patch ...
patching file llm/generate/gen_linux.sh
Hunk #5 FAILED at 143.
Hunk #6 FAILED at 220.
2 out of 6 hunks FAILED -- saving rejects to file llm/generate/gen_linux.sh.rej
==> ERROR: A failure occurred in prepare().
    Aborting...
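
For anyone hitting the same failure, the manual equivalent of the attempt above, including inspecting the rejected hunks (the checkout and patch paths are assumed; adjust them to your build directory):

    cd src/ollama   # wherever makepkg checked out the sources
    patch -Np1 --fuzz=3 --ignore-whitespace \
        -i ../../disable-rocm-cuda.gen_linux.sh.patch
    # the hunks that still fail are saved here:
    cat llm/generate/gen_linux.sh.rej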