Package Details: tensorrt 10.7.0.23-2
Git Clone URL: https://aur.archlinux.org/tensorrt.git (read-only)
Package Base: tensorrt
Description: A platform for high-performance deep learning inference on NVIDIA hardware
Upstream URL: https://developer.nvidia.com/tensorrt/
Keywords: ai artificial intelligence nvidia
Licenses: Apache-2.0, LicenseRef-NVIDIA-TensorRT-SLA
Submitter: dbermond
Maintainer: dbermond
Last Packager: dbermond
Votes: 20
Popularity: 0.43
First Submitted: 2018-07-29 16:17 (UTC)
Last Updated: 2024-12-25 17:37 (UTC)
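For reference, a typical way to fetch and build this package from the clone URL above (a generic AUR workflow sketch, not instructions from the maintainer; it assumes git and the base-devel group are installed):

```
# Clone the AUR repository (read-only) and enter it
git clone https://aur.archlinux.org/tensorrt.git
cd tensorrt

# Review the PKGBUILD, then build and install, syncing repo dependencies
makepkg --syncdeps --install
```

makepkg only resolves dependencies from the official repositories; any AUR-only alternatives listed below would need to be built separately or handled by an AUR helper.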
Dependencies (12)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR)
- cudnn
- cmake (cmake-gitAUR) (make)
- cuda (cuda11.1AUR, cuda-12.2AUR, cuda12.0AUR, cuda11.4AUR, cuda11.4-versionedAUR, cuda12.0-versionedAUR) (make)
- cudnn (make)
- git (git-gitAUR, git-glAUR) (make)
- python (python37AUR, python311AUR, python310AUR) (make)
- python-build (make)
- python-installer (make)
- python-onnx (make)
- python-setuptools (make)
- python-wheel (make)
Required by (2)
Sources (13)
- 010-tensorrt-use-local-protobuf-sources.patch
- 020-tensorrt-fix-python.patch
- 030-tensorrt-onnx-tensorrt-disable-missing-source-file.patch
- cub-nvlabs
- git+https://github.com/google/benchmark.git
- git+https://github.com/NVIDIA/TensorRT.git#tag=v10.7.0
- git+https://github.com/onnx/onnx-tensorrt.git
- git+https://github.com/onnx/onnx.git
- git+https://github.com/pybind/pybind11.git
- https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.7.0/tars/TensorRT-10.7.0.23.Linux.x86_64-gnu.cuda-12.6.tar.gz
- https://github.com/google/protobuf/releases/download/v3.20.1/protobuf-cpp-3.20.1.tar.gz
- protobuf-protocolbuffers
- TensorRT-SLA.txt
Latest Comments
FuzzyAtish commented on 2024-10-31 11:07 (UTC)
@jholmer In the past, with similar errors, what worked for me was to reinstall the python-onnx package. I'm not saying it's a definite solution, but it could work.
jholmer commented on 2024-10-20 01:17 (UTC)
I am also receiving the "Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE" error at runtime. I have tried doing a completely clean build. I'm wondering if there is some sort of version incompatibility between this package and a dependency?
dbermond commented on 2024-09-19 21:26 (UTC)
@lu0se yes, the '.so' link to 'libnvonnxparser.so.10' is right, otherwise you would get a 'file not found' error. If you are getting a 'Using existing $srcdir/ tree' warning during makepkg, it means that you are not doing a clean build. You should use the makepkg --cleanbuild/-C option to do a clean build when building your packages, or build in a clean chroot. Try that and see if it works.
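As a concrete illustration of the clean-build advice above (a sketch, not maintainer-provided commands):

```
# Rebuild from a clean $srcdir so no stale build tree is reused
makepkg --cleanbuild --syncdeps --force

# Or equivalently, with short options
makepkg -Csf
```

Building in a clean chroot (for example with the devtools tooling) provides the same isolation even more strictly.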
lu0se commented on 2024-09-19 18:03 (UTC)
trtexec --onnx=rife_v4.10.onnx
&&&& RUNNING TensorRT.trtexec [TensorRT v100400] [b26] # trtexec --onnx=/usr/lib/vapoursynth/models/rife/rife_v4.10.onnx --device=0
[09/20/2024-02:01:38] [I] Start parsing network model.
[09/20/2024-02:01:38] [E] Could not load library libnvonnxparser.so.10: Unable to open library: libnvonnxparser.so.10 due to /usr/lib/libnvonnxparser.so.10: undefined symbol: _ZTIN8onnx2trt16OnnxTrtExceptionE
[09/20/2024-02:01:38] [E] Assertion failure: parser.onnxParser != nullptr
Is the libnvonnxparser.so.10 link right? Is it related to the "WARNING: Using existing $srcdir/ tree" message?
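A hedged way to check what is being asked here, using standard binutils and pacman commands (not something shipped by this package):

```
# Where does the library link actually point?
readlink -f /usr/lib/libnvonnxparser.so.10

# Which package owns the file?
pacman -Qo /usr/lib/libnvonnxparser.so.10

# Is the symbol the loader complains about exported by the library?
nm -D /usr/lib/libnvonnxparser.so.10 | grep OnnxTrtException
```

If the symbol is missing from the installed library, that points at a stale or mixed build rather than a broken symlink.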
milianw commented on 2024-08-06 16:28 (UTC)
@dbermond: the forum post is not mine. I got the same/similar error when I manually edited the PKGBUILD to try to build the newer tensorrt against cuda 12.5.
monarc99 commented on 2024-08-06 14:55 (UTC) (edited on 2024-08-06 15:05 (UTC) by monarc99)
You have a commented-out part in the PKGBUILD in which you compile the python bindings:
# python bindings (fails to build with python 3.11) #local _pyver ...
Since my GPU (a 1060) is no longer supported by TensorRT 10, I had to get version 9 working and compile the python bindings for Python 3.12 myself.
All I had to do was set another environment variable and adjust the install command.
In build() { ... local -x TENSORRT_MODULE="tensorrt" ... }
The generated wheel ends up somewhere else, so adapt the install command
in package_python-tensorrt() { ...
python -m installer --destdir="$pkgdir" "TensorRT/python/build/bindings_wheel/dist/"*.whl
... }
I cannot say whether everything is correct, but everything compiles and the models run (rife + upscaling via trt). Posting it in case someone needs it.
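Putting monarc99's two hints together, a rough PKGBUILD-style sketch of what the change could look like (an untested reconstruction of the commenter's description; the elided parts stand for the existing PKGBUILD content and the wheel path is the one reported above):

```
build() {
    # ... existing build steps ...

    # extra environment variable reported to be needed for the python bindings
    local -x TENSORRT_MODULE="tensorrt"

    # ... rest of build() ...
}

package_python-tensorrt() {
    # ... existing packaging steps ...

    # the wheel lands under bindings_wheel/dist, per the comment above
    python -m installer --destdir="$pkgdir" \
        "TensorRT/python/build/bindings_wheel/dist/"*.whl
}
```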
dbermond commented on 2024-07-27 01:17 (UTC)
@milianw I could compile your 'binsim.cu' source file using cuda 12.5.1 by running the exact same nvcc command that you posted in the mentioned nvidia thread. No errors, no warnings, and the 'binsimCUDA' executable builds fine. I cannot answer why you are getting these errors, and further discussion of this is out of scope for this AUR page.
milianw commented on 2024-07-25 19:33 (UTC)
@dbermond: if gcc is not an issue, then why did I see the compile errors from the linked forum thread? I have gcc13 installed, but gcc13 still ends up using the libstdc++ headers from gcc14, which are incompatible. How is this supposed to work?
dbermond commented on 2024-07-25 13:24 (UTC)
@milianw Sure, I will be happy to update the package if you provide a fix for this issue, which I reported upstream the same day that 10.2 was released. There is also another one that I reported, but I was able to fix it myself. Please note that if you fix the first issue, other ones may arise later in the compilation, or even in the python modules, so make sure to check everything. Regarding gcc usage in cuda: each cuda version uses a specific gcc version. cuda 12.5 uses gcc13 (and not gcc14), so the gcc version is not a problem for us, since the cuda package is already using the correct one.
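For readers experimenting with mismatched toolchains anyway, nvcc's host compiler can be pinned explicitly; a minimal sketch, assuming a gcc13 toolchain is installed alongside the default gcc (the /usr/bin/g++-13 path is an assumption about the Arch gcc13 package):

```
# Tell nvcc to use the gcc 13 host compiler instead of the system default
nvcc -ccbin /usr/bin/g++-13 -o binsim binsim.cu
```

Adjust the -ccbin path to whatever host compiler your cuda version officially supports.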
milianw commented on 2024-07-25 10:12 (UTC)
meh, just updating the versions won't be sufficient since cuda (even 12.5 apparently) is not compatible with gcc 14 system includes: https://forums.developer.nvidia.com/t/cuda-12-4-nvcc-and-gcc-14-1-incompatibility/293295