Upgrading Ceph from Reef to Squid (Proxmox 8.4 to 9)

This guide explains how to safely upgrade your Ceph cluster from Reef to Squid in a Proxmox VE 8.4+ environment. It is based on the official documentation.

Before you begin: Ensure your cluster is healthy (ceph status) and you’re running Ceph 18.2.4-pve3 or higher.
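
For example, a quick pre-flight check could look like this (exact point releases will differ):

ceph -s
ceph versions
pveversion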


Prerequisites

  • Proxmox VE ≥ 8.4
  • Ceph Reef ≥ 18.2.4
  • Cluster health: HEALTH_OK
  • Backup configurations and monitor the upgrade closely

Step 1: Update Ceph Repository (on all nodes)

Update the Ceph APT source from reef to squid:

sed -i 's/reef/squid/' /etc/apt/sources.list.d/ceph.list

For no-subscription users, the file should look like:

deb http://download.proxmox.com/debian/ceph-squid bookworm no-subscription

For enterprise users with a valid subscription, the file should look like this instead:

deb https://enterprise.proxmox.com/debian/ceph-squid bookworm enterprise

Then run:

apt update
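
To confirm the Squid packages are now available before upgrading, you can check the candidate version of a Ceph package, for example:

apt policy ceph-common

The candidate version should now be a 19.2.x build.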

Step 2: Set the noout Flag

Set the noout flag to avoid unnecessary rebalancing while daemons are restarted during the upgrade:

ceph osd set noout

Or set it via the GUI in the OSD tab.
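
You can confirm the flag is set, for example with:

ceph osd dump | grep flags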


Step 3: Upgrade Ceph Packages (on all nodes)

Use the following commands on each node:

apt update
apt full-upgrade

Note: Even after the package upgrade, the running daemons still use the Reef binaries until they are restarted in the following steps.
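
You can see this with ceph versions, which will keep reporting 18.2.x (reef) for each daemon type until that daemon has been restarted:

ceph versions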

If you see a 401 error using enterprise repos, try:

pvesubscription update --force

Step 4: Restart Monitor Daemons (on monitor nodes only)

Restart one monitor at a time and wait for the cluster to stabilize:

systemctl restart ceph-mon.target
ceph -s
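
Before restarting the next monitor, you can also confirm that the daemon has rejoined the quorum, for example with:

ceph mon stat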

Then verify the upgrade with:

ceph mon dump | grep min_mon_release

Expected output:

min_mon_release 19 (squid)

Step 5: Restart Manager Daemons

If not already restarted with the monitors, restart managers manually:

systemctl restart ceph-mgr.target
ceph -s
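
To confirm a manager is active again before continuing, you can check, for example:

ceph mgr stat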

Step 6: Restart OSDs (one node at a time)

Working through the cluster one node at a time, restart all OSDs on the node:

systemctl restart ceph-osd.target
ceph status
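
Wait until all placement groups are active+clean before moving on to the next node. To see which OSDs are already running the new version, you can use, for example:

ceph osd versions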

After all nodes are upgraded, you may see this warning:

all OSDs are running squid or later but require_osd_release < squid

Set the new minimum OSD version:

ceph osd require-osd-release squid
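
You can verify the new minimum afterwards, for example with:

ceph osd dump | grep require_osd_release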

Step 7: Upgrade CephFS MDS Daemons (if used)

For each filesystem (check with ceph fs ls):

# 1. Disable standby replay
ceph fs set <fs_name> allow_standby_replay false

# 2. Note the current max_mds value, then reduce the filesystem to a single rank
ceph fs get <fs_name> | grep max_mds
ceph fs set <fs_name> max_mds 1

# 3. Stop the standby MDS daemons (on the standby nodes)
systemctl stop ceph-mds.target

# 4. Restart the remaining active MDS daemon (on its node)
systemctl restart ceph-mds.target

# 5. Start the standby MDS daemons again (on the standby nodes)
systemctl start ceph-mds.target

# 6. Restore the original max_mds value
ceph fs set <fs_name> max_mds <original_max_mds>
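
Once the daemons are back up, you can check that the filesystem has its active and standby MDS daemons again, for example with:

ceph fs status <fs_name>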

Step 8: Unset noout Flag

After all daemons have been upgraded and restarted, remove the noout flag:

ceph osd unset noout
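
As a final check, every daemon should now report a 19.2.x (Squid) version and the cluster should return to HEALTH_OK, for example:

ceph versions
ceph -s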

Summary Checklist

  • Ceph source changed from reef to squid
  • apt full-upgrade run on all nodes
  • All MON, MGR, OSD, and MDS daemons restarted
  • Ceph status is HEALTH_OK
  • require-osd-release squid set
  • noout flag unset

Need Help?

If you have questions or want to share your experience, join the conversation on YouTube or Bluesky.

Happy Cephing!

This post is licensed under CC BY 4.0 by the author.