Expanding Synology Volume Beyond 16TB

16Nov20

Background

Earlier in the year I wrote about upgrading from my old DS411j to a new DS420j, and how simple the upgrade process was. But I knew there would be trouble ahead, as Synology doesn’t provide a way to grow volumes created on its 32-bit systems past 16TB.

Last weekend one of my HGST 4TB drives failed, and I was already seeing warnings about running out of space. It was time for some newer and bigger spindles, so I ordered a four-pack of 8TB Seagate IronWolf Pro drives.

Each new drive took about 9 hours to resilver, so it was a couple of days before I had my RAID5 array fully rebuilt onto the new drives. With the last one in place the storage pool and volume were automatically expanded to the maximum size of 16TB, but that left over 5TB unusable. Not an immediate problem, but one that would eventually bite.
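DSM shows rebuild progress in Storage Manager, but since Synology arrays are standard Linux software RAID underneath, it can also be watched from an SSH session:

cat /proc/mdstat   # lists each md array, with a progress bar for any rebuild in flight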

Workaround

!WARNING!

The stuff that follows isn’t supported by Synology, and if it goes wrong could destroy your data. Don’t consider doing this unless you are 100% confident in your backups.

!WARNING!

I’ll detail below the process that I gave up on. Thankfully I found a workaround detailed in a blog post – Expand Synology volume beyond 16TB; but since those details apply to different Synology models from my DS420j, here’s what I did…

I already had Entware opkg installed, using these instructions. What follows I ran as root in an SSH session, but you can always prefix commands with sudo. It might also be advisable to run from a screen session in case your SSH link gets cut.
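Something like this gets the session set up, assuming screen is available from your Entware feed (it isn’t part of stock DSM):

sudo -i            # become root for the rest of the session
opkg install screen
screen -S resize   # a detached session survives a dropped SSH connection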

The crucial tool is resize2fs, but the version DSM ships is too old to convert a filesystem to 64-bit, so opkg is needed to get a newer one:

opkg install resize2fs

I didn’t use a pinned version, and the one I got was 1.45.6.
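It’s worth confirming which binary the shell will pick up, since Entware installs under /opt and the stock DSM copy is still present:

which resize2fs            # should point somewhere under /opt
resize2fs 2>&1 | head -1   # the first line of the usage output is the version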

The RAID volume needs to be unmounted to be resized, but the Entware tools are on that volume, so first they need to be copied elsewhere:

umount /volume1/@entware-ng/opt    # detach the Entware mount, so /opt is no longer backed by the volume
cp -R /volume1/@entware-ng/opt/* /opt/    # copy the tools into the now-local /opt on the root filesystem
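A quick sanity check that the copied binary still runs before taking the volume away from under it (assuming it landed in /opt/sbin – adjust the path if which said otherwise):

/opt/sbin/resize2fs 2>&1 | head -1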

Then shut down services and unmount the RAID volume:

syno_poweroff_task -d

/volume1 will now be unmounted, and it’s possible to check the filesystem, which is mandatory before resizing.
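On my DS420j the data array was /dev/md2; the same cat /proc/mdstat from earlier will confirm the right device on other models, and mount is a useful check that the volume really is detached:

cat /proc/mdstat
mount | grep volume1   # should print nothing once the volume is unmounted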

e2fsck -f /dev/md2

That took about an hour on my filesystem, and I was glad for the suggestion to answer ‘a’ to the first y/n question as there were hundreds of them.
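With hindsight, e2fsck can be told to assume yes to every question, which avoids the prompt marathon entirely (at the cost of not seeing what it’s fixing):

e2fsck -fy /dev/md2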

Next up is the command to convert the filesystem from 32-bit to 64-bit:

resize2fs -b /dev/md2

This took almost 4 hours for my system, with the CPU pegged at 100% pretty much throughout. It’s possible that adding a ‘p’ flag would have helped a little by showing progress.
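Presumably that would have looked like:

resize2fs -bp /dev/md2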

At this point enough is done for DSM to pick up the rest. So:

reboot

and log back in. Then go to Storage Manager > Storage Pool > Action > Resize. That then kicks off a process that validates the extra array space, which for my system ran for a few hours.
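Once the expansion completes, the extra space also shows up from the shell:

df -h /volume1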

And then I had a 21.82TB volume :)

The longer way

If I’d been less impatient I’d have copied all my data over to the old spindles (plus a spare to replace the failed drive) in my old NAS, then created a new volume on the new NAS, then copied all the data back. That would have taken days, possibly weeks, and would have carried a bunch of its own risks.

Doh!

I stupidly thought I could save a bunch of time by putting the old spindles into my old NAS and just restoring the RAID5 set. But it doesn’t work like that. Drives 1, 2 and 3 had been pulled from the array at different times, so when they were brought back together they were inconsistent with each other and the array couldn’t be reassembled.


