Package Details: python-tensorrt 10.7.0.23-1

Git Clone URL: https://aur.archlinux.org/tensorrt.git (read-only)
Package Base: tensorrt
Description: A platform for high-performance deep learning inference on NVIDIA hardware (python bindings and tools)
Upstream URL: https://developer.nvidia.com/tensorrt/
Keywords: ai artificial intelligence nvidia
Licenses: Apache-2.0, LicenseRef-custom
Provides: python-onnx-graphsurgeon, python-polygraphy, python-tensorflow-quantization
Submitter: dbermond
Maintainer: dbermond
Last Packager: dbermond
Votes: 20
Popularity: 0.54
First Submitted: 2018-07-29 16:17 (UTC)
Last Updated: 2024-12-07 14:13 (UTC)

Dependencies (18)

Sources (13)

Latest Comments


DavTheRaveUK commented on 2023-05-03 13:24 (UTC)

I followed your instructions only to be met with the following error:

==> ERROR: 010-tensorrt-use-local-protobuf-sources.patch was not found in the build directory and is not a URL.

What gives?
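A hedged guess at the usual cause of this makepkg error: the patch is listed in the PKGBUILD's source array but is missing from the local checkout, typically because the clone predates the PKGBUILD update that added it. Refreshing the clone and rebuilding from a clean source tree is a reasonable first step (commands assume a plain git clone of the AUR repo):

```shell
cd tensorrt      # the directory cloned from https://aur.archlinux.org/tensorrt.git
git pull         # pick up the current PKGBUILD and its bundled patch files
makepkg -Cs      # -C: remove the old src/ tree first, -s: install missing deps
```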

feiticeir0 commented on 2023-03-19 10:23 (UTC)

@dbermond Thank you! It compiled fine! Now TensorFlow is finally working with the Nvidia GPU.

dbermond commented on 2023-03-18 19:42 (UTC)

@Smoolak @feiticeir0 Package updated. It is building fine for me now. The current version supports CUDA 12.0.

feiticeir0 commented on 2023-03-14 00:42 (UTC) (edited on 2023-03-14 00:46 (UTC) by feiticeir0)

I'm getting the following error:

-- Build files have been written to: /tmp/makepkg/tensorrt/src/build
make: Entering directory '/tmp/makepkg/tensorrt/src/build'
[  2%] Built target third_party.protobuf
[  2%] Built target gen_onnx_proto
[  2%] Built target caffe_proto
[  2%] Built target gen_onnx_operators_proto
[  2%] Built target gen_onnx_data_proto
[  4%] Built target onnx_proto
[  8%] Built target nvcaffeparser_static
[ 12%] Built target nvcaffeparser
[ 17%] Built target nvonnxparser_static
[ 17%] Built target nvonnxparser
make[2]: *** No rule to make target '/opt/cuda/lib/libcudart_static.a', needed by 'libnvinfer_plugin.so.8.5.3'.  Stop.
make[1]: *** [CMakeFiles/Makefile2:1166: plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
[ 58%] Built target nvinfer_plugin_static
make: *** [Makefile:156: all] Error 2
make: Leaving directory '/tmp/makepkg/tensorrt/src/build'
==> ERROR: A failure occurred in build().
    Aborting...

I've downgraded CUDA and cuDNN to match the TensorRT version, but now this is happening. Any hints?

/opt/cuda/lib does not exist; there is a lib64 instead. Even if I create a lib symlink, it does not work... Thank you
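For reference, one way to check where the cuda package actually installs the static runtime the build is asking for. The paths in the comments are assumptions based on how Arch's cuda package is normally laid out, not verified against this exact version:

```shell
# Search the installed cuda package for the static CUDA runtime library:
pacman -Ql cuda | grep libcudart_static.a

# If it lands under /opt/cuda/lib64 (or /opt/cuda/targets/x86_64-linux/lib),
# a hedged workaround is to hand CMake that directory explicitly instead of
# relying on the hard-coded /opt/cuda/lib:
cmake .. -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda -DCMAKE_LIBRARY_PATH=/opt/cuda/lib64
```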

Smoolak commented on 2023-03-10 17:20 (UTC)

Oh, OK. Looks like I've been very unlucky and decided to install this on the wrong day. CUDA was just updated in the repos; yesterday's version works fine. I downgraded my system using the Arch Linux Archive.
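For anyone needing the same rollback: the Arch Linux Archive keeps every built package version, and a sketch of the procedure looks like this (`<version>` is a placeholder for whichever build you want, not a value taken from this thread):

```shell
# Browse https://archive.archlinux.org/packages/c/cuda/ to find the build
# you need, then install it directly:
sudo pacman -U "https://archive.archlinux.org/packages/c/cuda/cuda-<version>-x86_64.pkg.tar.zst"

# Optionally hold it back so a routine -Syu does not re-upgrade it:
# add 'IgnorePkg = cuda' to the [options] section of /etc/pacman.conf
```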

Smoolak commented on 2023-03-10 16:48 (UTC)

I'm getting this error:

[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/bertQKVToContextPlugin/qkvToContextPlugin.cpp.o
[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormPlugin.cpp.o
[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/embLayerNormPlugin/embLayerNormVarSeqlenPlugin.cpp.o
[ 24%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cpp.o
In file included from /home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/common/bertCommon.h:26,
                 from /home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.h:27,
                 from /home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.cpp:24:
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.h: In member function ‘void nvinfer1::plugin::bert::AlgoProps::populate(const cublasLtMatmulAlgo_t&)’:
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.h:379:25: error: ‘CUBLASLT_ALGO_CAP_MATHMODE_IMPL’ was not declared in this scope; did you mean ‘CUBLASLT_ALGO_CAP_TILE_IDS’?
  379 |             matmulAlgo, CUBLASLT_ALGO_CAP_MATHMODE_IMPL, &mathMode, sizeof(mathMode), nullptr));
      |                         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/common/checkMacrosPlugin.h:215:19: note: in definition of macro ‘PLUGIN_CUBLASASSERT’
  215 |         auto s_ = status_;                                                                                             \
      |                   ^~~~~~~
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.cpp: In function ‘void LtGemmSearch(cublasLtHandle_t, cublasOperation_t, cublasOperation_t, const int&, const int&, const int&, const void*, const void*, const int&, const void*, const int&, const void*, void*, const int&, void*, size_t, cublasComputeType_t, cudaDataType_t, cudaDataType_t, cudaDataType_t, cudaDataType_t, std::vector<nvinfer1::plugin::bert::customMatMultPerfType_t>&)’:
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.cpp:171:21: error: ‘CUBLASLT_MATMUL_PREF_MATH_MODE_MASK’ was not declared in this scope; did you mean ‘CUBLASLT_MATMUL_PREF_IMPL_MASK’?
  171 |         preference, CUBLASLT_MATMUL_PREF_MATH_MODE_MASK, &mathMode, sizeof(mathMode)));
      |                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/common/checkMacrosPlugin.h:215:19: note: in definition of macro ‘PLUGIN_CUBLASASSERT’
  215 |         auto s_ = status_;                                                                                             \
      |                   ^~~~~~~
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/fcPlugin/fcPlugin.cpp:218:20: error: ‘CUBLASLT_ALGO_CAP_MATHMODE_IMPL’ was not declared in this scope; did you mean ‘CUBLASLT_ALGO_CAP_TILE_IDS’?
  218 |             &algo, CUBLASLT_ALGO_CAP_MATHMODE_IMPL, &mathMode, sizeof(mathMode), nullptr));
      |                    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/smoolak/.cache/yay/tensorrt/src/TensorRT/plugin/common/checkMacrosPlugin.h:215:19: note: in definition of macro ‘PLUGIN_CUBLASASSERT’
  215 |         auto s_ = status_;                                                                                             \
      |                   ^~~~~~~
make[2]: *** [plugin/CMakeFiles/nvinfer_plugin.dir/build.make:2260: plugin/CMakeFiles/nvinfer_plugin.dir/fcPlugin/fcPlugin.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:1166: plugin/CMakeFiles/nvinfer_plugin.dir/all] Error 2
make: *** [Makefile:156: all] Error 2
make: Leaving directory '/home/smoolak/.cache/yay/tensorrt/src/build'
==> ERROR: A failure occurred in build().
    Aborting...
 -> error making: tensorrt

Baytars commented on 2022-12-16 15:52 (UTC) (edited on 2022-12-16 15:53 (UTC) by Baytars)

@thoth You can refer to my workaround below.

thoth commented on 2022-12-16 15:38 (UTC)

Mine fails with:

HEAD is now at 914c06fb chore: set to version 2.9.2
/tmp/build/tensorrt/src/TensorRT/python/build /tmp/build/tensorrt/src/TensorRT/python
-- The CXX compiler identification is GNU 12.2.0
-- The C compiler identification is GNU 12.2.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Configuring done
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
/tmp/build/tensorrt/src/TensorRT/python/PY_CONFIG_INCLUDE
   used as include directory in directory /tmp/build/tensorrt/src/TensorRT/python
/tmp/build/tensorrt/src/TensorRT/python/PY_INCLUDE
   used as include directory in directory /tmp/build/tensorrt/src/TensorRT/python

-- Generating done
CMake Generate step failed.  Build files cannot be regenerated correctly.
==> ERROR: A failure occurred in build().
    Aborting...
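The NOTFOUND variables in that log, PY_INCLUDE and PY_CONFIG_INCLUDE, are CMake cache entries the TensorRT python build expects to be given. A hedged sketch of a fix is to ask the interpreter where its development headers live and pass both variables explicitly; the exact cmake invocation below is illustrative, adjust paths to your build:

```shell
# Ask the interpreter where its development headers live; this is the
# directory the two NOTFOUND variables need to point at:
PY_INC=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['include'])")
echo "PY_INCLUDE=$PY_INC"

# Then re-run the configure step passing both variables explicitly, e.g.:
#   cmake <path-to>/TensorRT/python -DPY_INCLUDE="$PY_INC" -DPY_CONFIG_INCLUDE="$PY_INC"
```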

dbermond commented on 2022-12-10 16:00 (UTC)

@otakutyrant The license prohibits redistributing the built package. Unless Nvidia grants permission to distribute it within Arch Linux, we cannot do it.