| Git Clone URL: | https://aur.archlinux.org/ceph.git (read-only) |
|---|---|
| Package Base: | ceph |
| Description: | Ceph Storage client library for RADOS block devices |
| Upstream URL: | https://ceph.com/ |
| Licenses: | GPL-2.0-or-later OR LGPL-2.1-or-later OR LGPL-3.0-or-later |
| Provides: | libceph_librbd_parent_cache.so, librbd.so |
| Submitter: | foxxx0 |
| Maintainer: | pbazaah |
| Last Packager: | pbazaah |
| Votes: | 7 |
| Popularity: | 0.25 |
| First Submitted: | 2022-08-08 09:09 (UTC) |
| Last Updated: | 2025-01-05 14:54 (UTC) |
@Yatha
The upstream recommends 40G minimum, 60G to be safe.
https://docs.ceph.com/en/latest/install/build-ceph/#build-prerequisites
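A quick way to check before starting is sketched below. The `BUILDDIR` variable and fallback to the current directory are assumptions — point it at wherever makepkg actually builds on your system:

```shell
# Check free space in the build directory before starting; 60G is the
# safe figure from the upstream docs. BUILDDIR is an assumption --
# adjust it to your makepkg build location.
builddir="${BUILDDIR:-$PWD}"
avail_kib=$(df --output=avail -k "$builddir" | tail -n 1)
avail_gib=$(( avail_kib / 1024 / 1024 ))
if [ "$avail_gib" -lt 60 ]; then
    echo "warning: only ${avail_gib}G free in ${builddir}; 60G recommended" >&2
fi
```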
I had 26 GB of free space and ran out of space at 92% of the build. How much space is needed to build this?
@goshaRusty
I am aware. If you feel like it, you can strip out the stack trace and add it as a comment, either here or on this issue: https://github.com/bazaah/aur-ceph/issues/2
Beyond that, you can fix your problem right now by using the prebuilt binary packages: https://aur.archlinux.org/pkgbase/ceph-bin
Otherwise, I am planning on investigating this a bit this weekend.
I tried to update Manjaro with the package manager and I also got an error related to osd_legacy_options.h:
/var/tmp/pamac-build-myUserName/ceph/src/ceph-17.2.5/src/common/options/legacy_config_opts.h:7:10: fatal error: osd_legacy_options.h: No such file or directory
@petronny
See https://github.com/bazaah/aur-ceph/issues/2
If you have the chance, could you pull out the complete compile-error stack trace for that error and post it on that issue (or here)?
In the meantime, you could try another compile, as the issue is typically intermittent.
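If it keeps recurring, one guess (not confirmed) is a race between parallel make jobs and the generated option headers; building with a single job is much slower, but it rules that out. As a sketch:

```shell
# ~/.makepkg.conf (or /etc/makepkg.conf) fragment -- a diagnostic sketch,
# not a fix: limit make to one job to rule out a generated-header race.
MAKEFLAGS="-j1"
```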
Getting
/build/ceph/src/ceph-17.2.5/src/common/options/legacy_config_opts.h:3:10: fatal error: mds_legacy_options.h: No such file or directory
Full build log: https://github.com/arch4edu/cactus/actions/runs/3361784541/jobs/5572635705
v17.2.5 has been released, and Archlinux is finally on the current Ceph release again!
It's been an interesting couple of months, and I've definitely learned a lot about CMake, but I'll probably be taking a break from pushing releases now until after the new year, serious bug fixes excluded.
Yes. Quincy doesn't support leveldb anymore. See https://docs.ceph.com/en/quincy/releases/quincy/#upgrading-from-octopus-or-pacific
You can show this in post_upgrade().
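A minimal sketch of what such a notice could look like in the package's .install file — the wording and the version pattern here are illustrative, not the actual hook:

```shell
# Sketch of a post_upgrade() hook for ceph.install (wording illustrative).
# pacman calls post_upgrade with the new version, then the old version.
post_upgrade() {
    local old_ver=$2
    case $old_ver in
        1[0-6].*)  # upgrading from pre-Quincy (v16 or older)
            cat <<'EOF'
==> Quincy (v17) drops support for leveldb as a mon kv_backend.
==> Before upgrading monitors, check:
==>   cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend
==> and migrate any mon that reports leveldb to rocksdb first.
EOF
            ;;
    esac
}
```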
So, I'm quite likely to release 17.2.5-1 this weekend.
I did hit one strange issue during tests (see below) but both initializing a new cluster, and upgrading from v16 work.
On my v16 upgrade, I encountered a segfault in a mon, due to using leveldb
as a kv_backend, which has been deprecated since at least Jewel and is not supported in Quincy.
I have no idea why; the whole test is scripted, so I didn't run anything unusual on any of them.
Regardless, I strongly encourage anyone who runs a cluster to follow the instructions below before starting to upgrade monitors.
Mons use kv_backend rocksdb
During testing I encountered a mon that was using leveldb instead of rocksdb. This is super weird, as leveldb was deprecated back in v10... and this test cluster was installed with v15... so WTF.
cat /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend
This should report: rocksdb.
If it instead reports leveldb, you need to run the following:
(
# These assume you name your mons after your hostnames. If not, adjust accordingly
mID=$(hostname -s)
SERVICE="ceph-mon@${mID}.service"
MONMAP=$(mktemp monmap.${mID}.XXXXX)
systemctl stop ${SERVICE} && sleep 2
ceph mon getmap -o ${MONMAP}
mv /var/lib/ceph/mon/ceph-${mID} /var/lib/ceph/mon/ceph-${mID}.bak
ceph-mon -i ${mID} --mkfs --monmap ${MONMAP} --keyring /var/lib/ceph/mon/ceph-${mID}.bak/keyring
chown -R ceph:ceph /var/lib/ceph/mon/ceph-${mID}
systemctl start ${SERVICE}
)
Once you confirm that kv_backend now reports rocksdb, you can remove the backup mon directory:
rm -rf /var/lib/ceph/mon/ceph-$(hostname -s).bak
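If you want to script that check across several mons, a small helper along these lines works; the function name is mine, and the path layout is the standard one used above:

```shell
# Helper (sketch): fail loudly if a mon's kv_backend file is not rocksdb.
check_kv_backend() {
    local path=$1   # e.g. /var/lib/ceph/mon/ceph-$(hostname -s)/kv_backend
    local backend
    backend=$(cat "$path")
    if [ "$backend" != rocksdb ]; then
        echo "mon at ${path%/kv_backend} still uses ${backend}" >&2
        return 1
    fi
}
```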
I've got what is likely to be the 17.2.5 release locked down, including enabling the AMQP and Kafka integrations.
This is also the first release (maybe ever, for Archlinux?) that will include working Cython bindings for the rados, rbd, and cephfs libs, and the first release with fully passing make check tests -- minus a couple that were disabled (wants sudo, docker, etc).
My only concern remaining is that with python 3.11 released, I'll need to rebuild binaries soonish.
Pinned Comments
pbazaah commented on 2022-10-05 13:03 (UTC) (edited on 2022-10-05 13:03 (UTC) by pbazaah)
For future commenters:
TLDR:
https://aur.archlinux.org/pkgbase/ceph | From-source build (slow)
https://aur.archlinux.org/pkgbase/ceph-bin | Pre-built binaries (fast)
Unlike the original community version, this repo builds ceph from source. Ceph is a large, complicated project so this takes several hours on a good build server.
To get a similar experience to how community/ceph worked (pre-built binaries) use ceph-bin instead.