Package Details: python-jaxlib-cuda 0.4.38-1

Git Clone URL: https://aur.archlinux.org/python-jaxlib-cuda.git (read-only)
Package Base: python-jaxlib-cuda
Description: XLA library for JAX
Upstream URL: https://github.com/jax-ml/jax/
Keywords: deep-learning google jax machine-learning xla
Licenses: Apache
Groups: jax
Conflicts: python-jaxlib
Provides: python-jaxlib
Submitter: daskol
Maintainer: daskol
Last Packager: daskol
Votes: 9
Popularity: 1.43
First Submitted: 2023-02-12 23:18 (UTC)
Last Updated: 2024-12-24 19:26 (UTC)

Latest Comments


actionless commented on 2025-01-02 17:06 (UTC)

It doesn't build if ccache is installed. Workaround:

diff --git a/PKGBUILD b/PKGBUILD
index e4cec07..3c972bb 100644
--- a/PKGBUILD
+++ b/PKGBUILD
@@ -35,6 +35,8 @@ build() {
     # Override default version.
     export JAXLIB_RELEASE=$pkgver

+       export PATH=$(echo $PATH | tr ":" "\n" | grep -v ccache | tr "\n" ":")
+
     cd $srcdir/jax-jax-v$pkgver
     build/build.py build \
         --bazel_options='--action_env=JAXLIB_RELEASE' \
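
The PATH-filtering pipeline in the patch can be exercised on its own. A minimal sketch with a hypothetical PATH value (not taken from any real system):

```shell
# Hypothetical PATH containing a ccache shim directory.
PATH_SAMPLE="/usr/lib/ccache/bin:/usr/bin:/usr/local/bin"

# Same pipeline as the patch: split on ':', drop ccache entries, rejoin.
FILTERED=$(echo "$PATH_SAMPLE" | tr ":" "\n" | grep -v ccache | tr "\n" ":")

echo "$FILTERED"   # prints /usr/bin:/usr/local/bin:
```

Note the pipeline leaves a trailing ':', which is harmless for PATH lookup.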

daskol commented on 2024-12-26 01:14 (UTC) (edited on 2024-12-26 01:37 (UTC) by daskol)

@medaminezghal You can add the following to your .bashrc or .zshrc or any other resource file of your shell.

export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/cuda

As for "Could you find any solution in the installation process?": building with local CUDA is broken in JAX across multiple versions, so it takes some time. A patch to the upstream repository is ready.

medaminezghal commented on 2024-12-25 14:13 (UTC) (edited on 2024-12-25 15:08 (UTC) by medaminezghal)

W external/xla/xla/service/gpu/llvm_gpu_backend/default/nvptx_libdevice_path.cc:40] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.
Searched for CUDA in the following directories:
  ./cuda_sdk_lib
  python.runfiles/cuda_nvcc
  python/cuda_nvcc

  /usr/local/cuda
  /usr/lib/python3.13/site-packages/jax_plugins/xla_cuda12/../nvidia/cuda_nvcc
  /usr/lib/python3.13/site-packages/jax_plugins/xla_cuda12/../../nvidia/cuda_nvcc
  /usr/lib/python3.13/site-packages/jax_plugins/xla_cuda12/cuda
  .
You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions.  For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.

This happens when I run a simple example. To use JAX properly, I need to execute export XLA_FLAGS=--xla_gpu_cuda_data_dir=/opt/cuda every time I launch a terminal. Could you find any solution in the installation process?
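
Until the packaging handles this, one per-script alternative (a sketch, not part of the package) is to set the flag from Python before JAX is imported, since XLA reads the environment at initialization. The /opt/cuda path matches the Arch cuda package layout:

```python
import os

# Must run before `import jax`; XLA reads XLA_FLAGS when it initializes.
# setdefault keeps any value already exported in the shell.
os.environ.setdefault("XLA_FLAGS", "--xla_gpu_cuda_data_dir=/opt/cuda")

# ... then: import jax
```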

actionless commented on 2024-11-10 13:48 (UTC)

It doesn't require any extra files; just update the PKGBUILD with the command I described in the GitHub issue.

medaminezghal commented on 2024-11-10 11:47 (UTC)

@actionless Could you send me the necessary files to build it?

actionless commented on 2024-11-10 08:06 (UTC)

yup

medaminezghal commented on 2024-11-10 05:53 (UTC)

@actionless did you manage to compile it successfully as I see in the GitHub discussion?

medaminezghal commented on 2024-11-05 17:59 (UTC) (edited on 2024-11-06 06:18 (UTC) by medaminezghal)

@daskol @actionless I get this error when compiling boringssl: clang: error: argument unused during compilation: '--cuda-path=external/cuda_nvcc' [-Werror,-Wunused-command-line-argument]

daskol commented on 2024-11-05 14:13 (UTC)

@medaminezghal Try the --jobs option in Bazel. There should also be an option to limit memory usage per job, but I'm not sure.
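
Passed through this PKGBUILD's build script, that would look roughly like the following (a sketch based on the snippet in the ccache workaround above; the exact job count depends on available RAM):

```shell
# Limit Bazel's parallelism to reduce peak memory use during the build.
cd "$srcdir/jax-jax-v$pkgver"
build/build.py build \
    --bazel_options='--action_env=JAXLIB_RELEASE' \
    --bazel_options='--jobs=4'
```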