Package Details: llama.cpp-sycl-f16-git b4730-1

Git Clone URL: https://aur.archlinux.org/llama.cpp-sycl-f16-git.git (read-only)
Package Base: llama.cpp-sycl-f16-git
Description: Port of Facebook's LLaMA model in C/C++ (with Intel SYCL GPU optimizations and F16)
Upstream URL: https://github.com/ggerganov/llama.cpp
Licenses: MIT
Conflicts: llama.cpp
Provides: llama.cpp
Submitter: robertfoster
Maintainer: robertfoster
Last Packager: robertfoster
Votes: 0
Popularity: 0.000000
First Submitted: 2024-11-15 20:36 (UTC)
Last Updated: 2025-02-16 14:07 (UTC)

Dependencies (4)

Required by (0)

Sources (4)

Latest Comments

ioctl commented on 2025-03-06 09:02 (UTC) (edited on 2025-03-06 09:18 (UTC) by ioctl)

The build is fine, but instead of using SYCL this app seems to use the CPU only: benchmark performance is the same as the CPU-only version, and the GPU is not used according to the gputop utility.

Here is the official Dockerfile, which can be used as a reference: https://github.com/ggml-org/llama.cpp/blob/master/.devops/intel.Dockerfile
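As a quick way to check the symptom described above, the following sketch verifies whether the SYCL runtime can see a GPU at all before blaming the package. It assumes the Intel oneAPI runtime is installed at its default path (`/opt/intel/oneapi`); adjust if your installation differs. The cmake flags shown in the comment are the ones documented upstream for SYCL builds and are listed here only for comparison with the PKGBUILD.

```shell
# Load the oneAPI environment (default install path; adjust if needed).
source /opt/intel/oneapi/setvars.sh

# List SYCL devices visible to the runtime. If no GPU (e.g. a Level-Zero
# device) appears here, the problem is the driver stack, not the package.
sycl-ls

# For reference, the upstream SYCL docs build with flags along these lines;
# comparing them against the PKGBUILD may show why the GPU path is inactive:
#   cmake -B build -DGGML_SYCL=ON -DGGML_SYCL_F16=ON \
#         -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
#   cmake --build build --config Release
```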

pepijndevos commented on 2024-11-25 10:51 (UTC)

I'm getting all sorts of linking errors trying to build this; they seem to come from functions in the `common` namespace.