ceph 14.2.5-3ubuntu1 source package in Ubuntu
Changelog
ceph (14.2.5-3ubuntu1) focal; urgency=medium

  * Merge from Debian unstable, remaining changes:
    - d/control: Add missing Depends on python3-{distutils,routes} to
      ceph-mgr-dashboard package (LP: #1858304).
  * All other changes merged into Debian packaging (Thanks Bernd).
  * d/control: Fix misnamed package Recommends brbd1 -> librbd1.
  * d/control: Add missing debhelper misc:Depends for python3-ceph.

ceph (14.2.5-3) unstable; urgency=medium

  * Uploading to unstable
  * [010db9a] Fix ceph-mgr - indefinite queue growth hangs.
    Applying the backport for the fix
    https://github.com/ceph/ceph/pull/32466
    Thanks to Milan Kupcevic (Closes: #947969)
  * [b01de37] Merge branch 'debian/unstable' into debian/experimental
  * [c8f35e5] Add breaks/replaces for ceph-common - ceph mds.
  * [ee905cb] Revert "Configure gbp for experimental"
    This reverts commit 3bcd5ac5f416b902a868036c243d7f19752c82f8.
  * [6303513] Revert "CI: build in experimental"
    This reverts commit d481122833e611c69c28e2b381e1cc1c8f689385.
  * [f1a9482] Snapshot changelog
  * [6e955c8] Removing automatic Ubuntu header
  * [b90d95a] Mark patch as forwarded

ceph (14.2.5-2) experimental; urgency=medium

  * [8c74414] lower --max-parallel for >=16GB
    g++ loves to eat ram
  * [b15dcdd] Build-dep. on python3-dev instead python3-all-dev.
    Thanks to Graham Inggs (Closes: #948021)
  * [d481122] CI: build in experimental
  * [4303a75] 32bit: fix more size_t vs uint64_t issues.
  * [c98ea07] Install bash-completion in /usr again.
    This change went missing somewhere during the import of the changes
    done in Ubuntu between 12.2.11 and 14.2.4.
    Thanks to Andreas Beckmann (Closes: #948165)
  * [c7d90b9] Move manpages to ceph-common again.
    This also went missing during the import.
  * [3e5a680] Use a better way to check if we are on 32bit.
  * [c03cd06] rm d/p/boost-py37-compat.patch.
    Upstream renamed assert.h to ceph_assert.h, so this patch should not
    be necessary anymore.
ceph (14.2.5-1) experimental; urgency=medium

  [ Bernd Zeimetz ]
  * [3bcd5ac] Configure gbp for experimental
  * [bd0b051] New upstream version 14.2.5
  * [46cbe61] Merge upstream changes for 14.2.5
  * [4dfd819] Refreshing patches
  * [da26f25] Fix copy&paste errors in build-deps.
  * [7ff43a2] Mark build-deps needed for make check.
    And remove the need to install them.
  * [5ef8ac3] Remove left over patch file
  * [91ab5b9] */lib_tp.so files are not built, don't install them.
  * [44591e4] Don't try to install files we don't build
  * [db0994e] librbd1.symbols: add new symbols.
  * [d53724e] Add install/postinstall files for ceph-mgr-k8sevents
  * [acada37] Add lintian override for .chm file.
    Source and build info is shipped.
  * [bbb0bd6] copy the radosgw init file in override_dh_installinit.
  * [a5958d5] Avoid duplicate files.
    etc/bash_completion.d/ceph was accidentally shipped in ceph-base
    again.
  * [fbc33a3] Add missing > in Dependency.

ceph (14.2.4-9) unstable; urgency=medium

  * [8c74414] lower --max-parallel for >=16GB
    g++ loves to eat ram
  * [b15dcdd] Build-dep. on python3-dev instead python3-all-dev.
    Thanks to Graham Inggs (Closes: #948021)
  * [c98ea07] Install bash-completion in /usr again.
    This change went missing somewhere during the import of the changes
    done in Ubuntu between 12.2.11 and 14.2.4.
    Thanks to Andreas Beckmann (Closes: #948165)
  * [c7d90b9] Move manpages to ceph-common again.
    This also went missing during the import.

ceph (14.2.4-8) unstable; urgency=medium

  * [e187e6a] Use WITH_CCACHE from cmake to build with ccache.
  * [8cbe25e] Hack CMakeCache.txt to disable HAVE_ARM_NEON on armel.
    After trying to patch the various places where HAVE_ARM_NEON is
    used, the easiest way to get rid of it on armel seems to be to
    patch the cmake cache file.
  * [424ea9b] Don't build ceph on mipsel.
    No compiler is able to build the code without running into
    oom-issues.

ceph (14.2.4-7) unstable; urgency=medium

  * [9b97753] Make sure we use ccache if needed.
  * [3dbd1ac] d/rules: Remove the armel fpu options.
    Hopefully properly patched now.
  * [da253e4] m68k, sh4: build with clang to avoid gcc OOM.

ceph (14.2.4-6) unstable; urgency=medium

  * [b1c9b5d] Try to reduce memory usage even further if needed.
    gcc loves to eat too much memory on armhf, mipsel and armel.
  * [d695778] Remove softfp patch in favour of build flags.
    This hopefully avoids the need to update and debug the patch.
  * [6eddb32] Build with clang(++) on armhf, mipsel and armel.
    Reports seem to suggest that clang does not need as much memory
    as gcc.
  * [b9420ba] Fix unsigned/size_t issue for sh4 & m68k.
  * [0027181] Updating changelog
  * [6502f60] Fix another 32bit size_t/uint64_t issue.
    In src/common/options.cc line 192.
  * [4a0b044] Fix another 32bit size_t/uint64_t issue.
    This time: powerpc.

ceph (14.2.4-5) unstable; urgency=medium

  [ Bernd Zeimetz ]
  * [453eaa4] Avoid using make -j 32 on powerpc.
    A lot of CPUs do not always help.

  [ Milan Kupcevic ]
  * [c6ec924] cherry pick critical bluestore data corruption fix
    (Closes: #947457)

  [ Bernd Zeimetz ]
  * [e88fc21] Set -DWITH_BOOST_CONTEXT=OFF where necessary.
    [!s390x !mips64el !ia64 !m68k !ppc64 !riscv64 !sh4 !sparc64 !x32]

ceph (14.2.4-4) unstable; urgency=medium

  * [b70efb1] Create missing directories for arch:all build.
  * [3e4530f] try to save even more memory on armhf.
    Don't build debug flags at all.
  * [b478ee5] Avoid overloading on mipsel.
    Add mipsel to debian/patches/32bit-avoid-overloading.patch
  * [85eb6e9] Also build jerasure with softfp on armel.

ceph (14.2.4-3) unstable; urgency=medium

  [ Bernd Zeimetz ]
  * [f3f47f5] CI: disable extra-long running tests.

  [ Steve Langasek ]
  * [9794fc4] Drop uninstallable and unneeded server binaries on i386.
    (Closes: #947156)

  [ Bernd Zeimetz ]
  * [6c2993f] Merge tag 'debian/14.2.4-0ubuntu3' into debian/unstable
  * [0c5b41f] Use a tracker.d.o list instead of a closed one.
    (Closes: #760538)
  * [d95db97] Try to build with --max-parallel=1 on slow arches.
    We run into out-of-memory errors again.
  * [e8d9e63] Use -mfloat-abi=softfp on armel for NEON instructions.
    And again, that patch went missing somewhere. Taken from
    https://salsa.debian.org/ceph-team/ceph/commit/fa7d0d84f736d0b8450572f3192a43ff7b3252c4

ceph (14.2.4-2) unstable; urgency=medium

  [ Thomas Goirand ]
  * [4b2327d] Add a python3-ceph metapackage.

  [ Bernd Zeimetz ]
  * [dbc7d2f] Add lintian override for empty python3-ceph package.
  * [5381390] Remove -en from description.
    We actually want to have a description in the changes file...
  * [4a57f31] Make python3-ceph dependencies binNMU safe

ceph (14.2.4-1) unstable; urgency=medium

  * Uploading 14.2.4 to Debian. (Closes: #936282, #943961, #940854,
    #942733)
  * Adding missing sources (two.js, bootstrap 3.3.4)
  * ceph-mon.postinst missed the interpreter
  * Add missing dependency on python3
  * Ignore lintian errors about minified js with shipped sources
  * Radosgw uses a systemd template, override lintian.
  * libcephfs-jni: rpath to java libraries needed. Add lintian override.
  * Remove .pc folder from debian folder.
  * Adding myself to Uploaders
  * Merging the work done in Ubuntu.

  [ Dariusz Gadomski ]
  * d/p/issue37490.patch: Cherry pick fix to optimize LVM queries in
    ceph-volume, resolving performance issues in systems under heavy
    load or with large numbers of disks (LP: #1850754).

  [ James Page ]
  * d/p/issue40114.patch: Cherry pick endian fixes to resolve issues
    using Ceph on big-endian architectures such as s390x (LP: #1851290).
  * New upstream release (LP: #1850901):
    - d/p/more-py3-compat.patch,ceph-volume-wait-for-lvs.patch,
      ceph-volume-wait-for-lvs.patch: Drop, included upstream.
    - d/p/bluefs-use-uint64_t-for-len.patch: Cherry pick fix to resolve
      FTBFS on 32 bit architectures.
  * d/rules: Disable SPDK support as this generates a build which has a
    minimum CPU baseline of 'corei7' on x86_64 which is not compatible
    with older CPUs (LP: #1842020).
  * d/p/issue40781.patch: Cherry pick fix for py3 compatibility in
    ceph-crash.
  [ Eric Desrochers ]
  * Ensure that daemons are not automatically restarted during package
    upgrades (LP: #1840347):
    - d/rules: Use "--no-restart-after-upgrade" and
      "--no-stop-on-upgrade" instead of "--no-restart-on-upgrade".
    - d/rules: Drop exclusion for ceph-[osd,mon,mds] for restarts.

  [ Jesse Williamson ]
  * d/p/civetweb-755-1.8-somaxconn-configurable*.patch: Backport
    changes to civetweb to allow tuning of SOMAXCONN in Ceph RADOS
    Gateway deployments (LP: #1838109).

  [ James Page ]
  * d/p/ceph-volume-wait-for-lvs.patch: Cherry pick inflight fix to
    ensure that required wal and db devices are present before
    activating OSDs (LP: #1828617).

  [ Steve Beattie ]
  * SECURITY UPDATE: RADOS gateway remote denial of service
    - d/p/CVE-2019-10222.patch: rgw: asio: check the remote endpoint
      before processing requests.
    - CVE-2019-10222
    - Closes: #936015

  [ James Page ]
  * New upstream release.
  * d/p/fix-py3-encoding-fsid.patch: Drop, no longer required.
  * d/p/pybind-auto-encode-decode-cstr.patch: Drop, reverted upstream.
  * d/p/fix-py3-encoding-fsid.patch: Cherry pick correct fix to resolve
    FSID encoding issues under Python 3 (LP: #1833079).
  * d/p/pybind-auto-encode-decode-cstr.patch: Cherry pick fix to ensure
    that encoding/decoding of strings is correctly performed under
    Python 3 (LP: #1833079).
  * New upstream release.
  * d/p/misc-32-bit-fixes.patch: Drop, included upstream.
  * d/p/py37-compat.patch: Drop, included upstream.
  * d/p/collections.abc-compat.patch: Drop, included in release.
  * d/p/*: Refresh.
  * d/*: Re-sync packaging with upstream for Nautilus release.
  * d/control,ceph-test.*,rules: Disable build of test binaries, drop
    ceph-test binary package (reduce build size).
  * d/control,rules: Use system boost libraries (reduce build time).
  * d/control: Add dependency on smartmontools, suggest use of nvme-cli
    for ceph-osd package.
  * d/p/32bit-*.patch: Fix misc 32 bit related issues which cause
    compilation failures on armhf and i386 architectures.
  * d/control: Add Breaks/Replaces on ceph-common for ceph-argparse to
    deal with move of Python module.
  * New upstream release (LP: #1810766).
  * d/p/*: Refresh.
  * d/p/more-py3-compat.patch: Add more py3 fixes.
  * d/p/more-py3-compat.patch: Misc Python 3 fixes in ceph-create-keys.
  * d/tests/python-ceph: Fix python3 test support resolving autopkgtest
    failure.
  * New upstream point release.
  * d/p/*: Refresh.
  * d/control,python-*.install,rules: Drop Python 2 support.
  * d/tests: Update for Python 2 removal.
  * d/p/misc-32-bit-fixes.patch: Update type of rgw_max_attr_name_len,
    resolving SIGABRT in radosgw (LP: #1805145).
  * d/p/boost-py37-compat.patch: Fix compilation issue with boost
    imports conflicting with ceph's assert.h header.
  * d/p/collections.abc-compat.patch: Selective cherry-pick of upstream
    fix for future compatibility with Python 3.8, avoiding deprecation
    warnings under Python 3.7.
  * d/ceph-mds.install: Install missing systemd configuration
    (LP: #1789927).
  * Re-instate 32bit architectures:
    - d/control: Switch back to linux-any
    - d/p/misc-32-bit-fixes.patch: Misc fixes for compilation failures
      under 32 bit architectures.
    - d/rules: Disable SPDK integration under i386.
  * Repack upstream tarball, excluding non-DFSG sources (LP: #1750848):
    - d/copyright: Purge upstream tarball of minified js files, which
      are neither shipped in binaries nor required for package build.
    - d/watch: Add dversionmangle for +dfsg\d version suffix.
  * d/control,rules: Drop requirement for gcc-7 for arm64.
  * d/ceph-osd.udev: Add udev rules for sample LVM layout for OSDs,
    ensuring that LVs have ceph:ceph ownership (LP: #1767087).
  * d/copyright,source.lintian-overrides: Exclude jsonchecker component
    of rapidjson avoiding license-problem-json-evil non-free issue.
  * New upstream point release.
  * d/control: Remove obsolete X{S}-* fields.
  * New upstream release.
  * Sync with changes in upstream packaging:
    - d/*.install,rules: Use generated systemd unit files for install
    - d/ceph-test.install: Drop binaries removed upstream.
  * d/p/*: Refresh and drop as needed.
  * d/*.symbols: Refresh for new release.
  * d/rules,calc-max-parallel.sh: Automatically calculate the maximum
    number of parallel compilation units based on total memory.
  * d/control: Drop support for 32 bit architectures.
  * d/control: Update Vcs-* fields for Ubuntu.
  * d/control: Drop min python version field.

 -- James Page <email address hidden>  Fri, 10 Jan 2020 09:22:49 +0000
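The changelog entries above repeatedly fight compiler memory usage (lowering --max-parallel, building slow arches with -j1, and finally calc-max-parallel.sh, which derives the parallelism cap from total memory). The idea can be sketched as follows; this is a rough illustration, not the actual calc-max-parallel.sh, and the ~2 GiB-per-job figure is an assumption:

```shell
# Sketch: cap parallel build jobs by available memory, since each g++
# process can consume a large amount of RAM on this code base.
# Assumption: roughly 2 GiB per compilation unit (illustrative only).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
per_job_kb=$((2 * 1024 * 1024))
jobs=$((mem_kb / per_job_kb))
if [ "$jobs" -lt 1 ]; then jobs=1; fi          # always allow one job
cpus=$(nproc)
if [ "$jobs" -gt "$cpus" ]; then jobs=$cpus; fi # never exceed CPU count
echo "$jobs"
```

The result would then be passed to the build as, e.g., `--max-parallel=$jobs`.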
Upload details
- Uploaded by: James Page
- Uploaded to: Focal
- Original maintainer: Ubuntu Developers
- Architectures: linux-any all
- Section: admin
- Urgency: Medium
Downloads
| File | Size | SHA-256 Checksum |
|---|---|---|
| ceph_14.2.5.orig.tar.xz | 77.0 MiB | 0a6e858078a00f95bc654d81a8cd85f347ce6f6237aa352c8865f2729b2b09fd |
| ceph_14.2.5-3ubuntu1.debian.tar.xz | 110.3 KiB | f71405a97d60e56ce7cd0c5d1b0761f5f637fe2c60fa3406ecbbec7f333e98c2 |
| ceph_14.2.5-3ubuntu1.dsc | 8.6 KiB | b3d120400addbeaf72b50f0685fea0f94cdcb64ef9ef476b58d1f03b67ab8857 |
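Downloads can be checked against the published SHA-256 sums with `sha256sum -c`, which reads "HASH  FILENAME" lines and prints "FILENAME: OK" on a match. A minimal sketch (run in the directory containing the downloaded files; the helper name `verify` is ours, not part of any tool):

```shell
# Verify a file against an expected SHA-256 checksum.
verify() {
    echo "$1  $2" | sha256sum -c -
}
# For example, for the orig tarball from the table above:
# verify 0a6e858078a00f95bc654d81a8cd85f347ce6f6237aa352c8865f2729b2b09fd \
#        ceph_14.2.5.orig.tar.xz
```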
Binary packages built by this source
- ceph: distributed storage and file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
- ceph-base: common ceph daemon libraries and management tools
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the libraries and management tools that are common among
the Ceph server daemons (ceph-mon, ceph-mgr, ceph-osd, ceph-mds). These tools
are necessary for creating, running, and administering a Ceph storage cluster.
- ceph-base-dbgsym: debug symbols for ceph-base
- ceph-common: common utilities to mount and interact with a ceph storage cluster
Ceph is a distributed storage and file system designed to provide
excellent performance, reliability, and scalability. This is a collection
of common tools that allow one to interact with and administer a Ceph cluster.
- ceph-common-dbgsym: debug symbols for ceph-common
- ceph-fuse: FUSE-based client for the Ceph distributed file system
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
FUSE-based client that allows one to mount a Ceph file system without
root privileges.
.
Because the FUSE-based client has certain inherent performance
limitations, it is recommended that the native Linux kernel client
be used if possible. If it is not practical to load a kernel module
(insufficient privileges, older kernel, etc.), then the FUSE client will
do.
- ceph-fuse-dbgsym: debug symbols for ceph-fuse
- ceph-mds: metadata server for the ceph distributed file system
Ceph is a distributed storage and network file system designed to
provide excellent performance, reliability, and scalability.
.
This package contains the metadata server daemon, which is used to
create a distributed file system on top of the ceph storage cluster.
- ceph-mds-dbgsym: debug symbols for ceph-mds
- ceph-mgr: manager for the ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the manager daemon, which is used to expose high
level management and monitoring functionality.
- ceph-mgr-dashboard: dashboard module for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package provides a ceph-mgr module, providing a web-based
application to monitor and manage many aspects of a Ceph cluster and
related components.
.
See the Dashboard documentation at http://docs.ceph.com/ for details
and a feature overview.
- ceph-mgr-dbgsym: debug symbols for ceph-mgr
- ceph-mgr-diskprediction-cloud: diskprediction-cloud module for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the diskprediction_cloud module for the ceph-mgr
daemon, which helps predict disk failures.
- ceph-mgr-diskprediction-local: diskprediction-local module for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the diskprediction_local module for the ceph-mgr
daemon, which helps predict disk failures.
- ceph-mgr-k8sevents: kubernetes events module for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the k8sevents module, to allow ceph-mgr to send
ceph related events to the kubernetes events API, and track all events
that occur within the rook-ceph namespace.
- ceph-mgr-rook: rook module for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the rook module for ceph-mgr's orchestration
functionality, to allow ceph-mgr to install and configure ceph using
Rook.
- ceph-mgr-ssh: No summary available for ceph-mgr-ssh in ubuntu focal.
No description available for ceph-mgr-ssh in ubuntu focal.
- ceph-mon: monitor server for the ceph storage system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the cluster monitor daemon for the Ceph storage
system. One or more instances of ceph-mon form a Paxos part-time parliament
cluster that provides extremely reliable and durable storage of cluster
membership, configuration, and state.
- ceph-mon-dbgsym: debug symbols for ceph-mon
- ceph-osd: OSD server for the ceph storage system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the Object Storage Daemon for the Ceph storage system.
It is responsible for storing objects on a local file system
and providing access to them over the network.
- ceph-osd-dbgsym: debug symbols for ceph-osd
- ceph-resource-agents: OCF-compliant resource agents for Ceph
Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
.
This package contains the resource agents (RAs) which integrate
Ceph with OCF-compliant cluster resource managers,
such as Pacemaker.
- cephfs-shell: interactive shell for the Ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage. This is an interactive tool that
allows one to access a Ceph file system without mounting it, via a
pseudo-shell that works like an FTP client.
.
This package contains a CLI for interacting with CephFS.
- libcephfs-dev: Ceph distributed file system client library (development files)
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
shared library allowing applications to access a Ceph distributed
file system via a POSIX-like interface.
.
This package contains development files needed for building applications that
link against libcephfs2.
- libcephfs-java: Java library for the Ceph File System
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the Java library for interacting with the Ceph
File System.
- libcephfs-jni: Java Native Interface library for CephFS Java bindings
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the Java Native Interface library for interacting
with the Ceph File System.
- libcephfs-jni-dbgsym: debug symbols for libcephfs-jni
- libcephfs2: Ceph distributed file system client library
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
shared library allowing applications to access a Ceph distributed
file system via a POSIX-like interface.
- libcephfs2-dbgsym: debug symbols for libcephfs2
- librados-dev: RADOS distributed object store client library (development files)
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
.
This package contains development files needed for building applications that
link against librados2.
- librados-dev-dbgsym: debug symbols for librados-dev
- librados2: RADOS distributed object store client library
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
- librados2-dbgsym: debug symbols for librados2
- libradospp-dev: RADOS distributed object store client C++ library (development files)
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
.
This package contains development files needed for building C++ applications that
link against librados.
- libradosstriper-dev: RADOS striping interface (development files)
libradosstriper is a striping interface built on top of the rados
library, making it possible to stripe larger objects across several
standard rados objects using an interface very similar to that of
rados.
.
This package contains development files needed for building applications that
link against libradosstriper.
- libradosstriper1: RADOS striping interface
A striping interface built on top of the rados library, making it
possible to stripe larger objects across several standard rados
objects using an interface very similar to that of rados.
- libradosstriper1-dbgsym: debug symbols for libradosstriper1
- librbd-dev: RADOS block device client library (development files)
RBD is a block device striped across multiple distributed objects
in RADOS, a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to manage these block devices.
.
This package contains development files needed for building applications that
link against librbd1.
- librbd1: RADOS block device client library
RBD is a block device striped across multiple distributed objects
in RADOS, a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to manage these block devices.
- librbd1-dbgsym: debug symbols for librbd1
- librgw-dev: RADOS Gateway client library (development files)
RADOS is a distributed object store used by the Ceph distributed
storage system. The RADOS Gateway (radosgw) provides a REST gateway
to the object store that aims to implement a superset of Amazon's S3
service.
.
This package contains development files needed for building applications
that link against librgw2.
- librgw2: RADOS Gateway client library
RADOS is a distributed object store used by the Ceph distributed
storage system. The RADOS Gateway (radosgw) provides a REST gateway
to the object store that aims to implement a superset of Amazon's S3
service.
.
This package contains the library interface and headers only.
- librgw2-dbgsym: debug symbols for librgw2
- python3-ceph: Meta-package for all Python 3.x modules for the Ceph libraries
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package is a metapackage for all Ceph Python 3.x bindings.
- python3-ceph-argparse: Python 3 utility libraries for Ceph CLI
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains types and routines for Python 3 used by the
Ceph CLI as well as the RESTful interface.
- python3-cephfs: Python 3 libraries for the Ceph libcephfs library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
CephFS file system client library.
- python3-cephfs-dbgsym: debug symbols for python3-cephfs
- python3-rados: Python 3 libraries for the Ceph librados library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RADOS object storage.
- python3-rados-dbgsym: debug symbols for python3-rados
- python3-rbd: Python 3 libraries for the Ceph librbd library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RBD block device library.
- python3-rbd-dbgsym: debug symbols for python3-rbd
- python3-rgw: Python 3 libraries for the Ceph librgw library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RGW library.
- python3-rgw-dbgsym: debug symbols for python3-rgw
- rados-objclass-dev: RADOS object class development kit
This package contains development files needed for building RADOS
object class plugins.
- radosgw: REST gateway for RADOS distributed object store
RADOS is a distributed object store used by the Ceph distributed
storage system. This package provides a REST gateway to the
object store that aims to implement a superset of Amazon's S3
service as well as the OpenStack Object Storage ("Swift") API.
.
This package contains the proxy daemon and related tools only.
- radosgw-dbgsym: debug symbols for radosgw
- rbd-fuse: FUSE-based rbd client for the Ceph distributed file system
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
FUSE-based client that allows one to map Ceph rbd images as files.
- rbd-fuse-dbgsym: debug symbols for rbd-fuse
- rbd-mirror: Ceph daemon for mirroring RBD images
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package provides a daemon for mirroring RBD images between
Ceph clusters, streaming changes asynchronously.
- rbd-mirror-dbgsym: debug symbols for rbd-mirror
- rbd-nbd: NBD-based rbd client for the Ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains an NBD-based client that allows one to map
Ceph rbd images as local block devices.
- rbd-nbd-dbgsym: debug symbols for rbd-nbd