Less known Solaris 11 features: Shadow Migration

In the ZFS Storage Appliance there is a nice little feature that enables you to migrate data in the background. It's called Shadow Migration, and it's really useful. Imagine you have a RAIDZ. After a while you recognize that RAIDZ wasn't a good decision for your workload and that RAID10 would be a much better choice. But how do you transform it into a RAID10, and how do you do it with minimal interruption? You can do this with the Shadow Migration feature. It lets you migrate data from one local or remote filesystem to another, while you are already accessing the data through the new filesystem. This feature is available in Solaris 11 as well. For this demonstration we will use two ZFS pools consisting of files, so we have to create the files first:

root@test:/test/brainslug# mkfile 128m source1
root@test:/test/brainslug# mkfile 128m source2
root@test:/test/brainslug# mkfile 128m source3
root@test:/test/brainslug# mkfile 128m source4
root@test:/test/brainslug# mkfile 128m target1 
root@test:/test/brainslug# mkfile 128m target2
root@test:/test/brainslug# mkfile 128m target3
root@test:/test/brainslug# mkfile 128m target4
root@test:/test/brainslug# mkfile 128m target5
root@test:/test/brainslug# mkfile 128m target6

Now we create the pools. First our RAIDZ pool, consisting of four files. It's named source:

root@test:/# zpool create source raidz \
 /test/brainslug/source1 \
 /test/brainslug/source2 \
 /test/brainslug/source3 \
 /test/brainslug/source4

The second one is the future target of the shadow migration. It consists of six "disks":

root@test:/# zpool create target \
  mirror /test/brainslug/target1 /test/brainslug/target2 \
  mirror /test/brainslug/target3  /test/brainslug/target4 \
  mirror /test/brainslug/target5 /test/brainslug/target6

If you did a basic install, the tools and daemons needed for shadow migration are not included. You have to install them and enable the shadowd afterwards:

root@test:/test/brainslug# pkg install shadow-migration
root@test:/test/brainslug# svcadm enable shadowd

Now you should see the shadowd daemon running.

root@test:/test/brainslug# ps -ef | grep "shadow"
    root  3292     1   0 14:32:33 ?           0:03 /usr/lib/fs/shadowd

Okay … to test the shadow migration we create a filesystem in the source pool:

root@test:/test/brainslug# zfs create source/somestuff

Now we have to fill this filesystem with some data. Let's create some play data.

root@test:/test/brainslug# dd if=/dev/urandom of=myfile bs=1024 count=300000                        
300000+0 records in
300000+0 records out
root@test:/test/brainslug# mkdir demodata
root@test:/test/brainslug# cd demodata
root@test:/test/brainslug/demodata# split -b 128k -a 5  ../myfile
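
As a quick sanity check, the numbers above can be worked out in advance. This is plain shell arithmetic, nothing Solaris-specific:

```shell
# dd wrote 300000 blocks of 1024 bytes each
total_kib=300000
echo "payload: $((total_kib / 1024)) MiB"    # prints "payload: 292 MiB"

# split -b 128k cuts this into 128-KiB chunks, rounding up for the remainder
chunk_kib=128
echo "chunks: $(( (total_kib + chunk_kib - 1) / chunk_kib ))"   # prints "chunks: 2344"
```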

This should yield a significant number of 128k files. Now we copy them into the newly created filesystem source/somestuff, which poses as our old filesystem:

root@test:/test/brainslug/demodata# cp * /source/somestuff/
root@test:/test/brainslug/demodata# cd /
root@test:/# zfs list source
NAME    USED  AVAIL  REFER  MOUNTPOINT
source  294M  42,1M  46,4K  /source

Just to have something to compare against later, you can simply count the files and calculate the md5 checksum of one of them.

root@test:/# ls -l /source/somestuff | wc -l
root@test:/# md5sum /source/somestuff/xaadmd
3fb4a6be2f93c3d93998db52061244aa  /source/somestuff/xaadmd
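
Once the migration target exists, the same comparison can be repeated there. A minimal sketch of such a check, exercised here against throwaway temporary directories rather than the real pools (the paths and the file content are made up):

```shell
# record per-file checksums of the "old" tree, then verify them against the copy
src=$(mktemp -d); dst=$(mktemp -d)
echo "play data" > "$src/xaadmd"             # stand-in for the real demo files
cp "$src/xaadmd" "$dst/"
( cd "$src" && md5sum * ) > /tmp/source.md5
( cd "$dst" && md5sum -c /tmp/source.md5 )   # prints "xaadmd: OK"
```

The same md5sum -c run against the migrated tree would flag any file whose content differs from its checksum on the source.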

Shadow migration only works when the source filesystem is read-only. So we have to put the source filesystem into that state:

root@test:/# zfs set readonly=on source/somestuff

Okay, now let’s configure the shadow migration:

root@test:/# zfs create -o shadow=file:///source/somestuff target/newlocationforsomestuff

That's all. The command may take a few moments to return. The migration of the data starts the moment you create the new filesystem: it runs in the background and copies all the data over. Important to know: you can do shadow migration via NFS as well, and the source can also be a UFS filesystem. You just have to declare the source of the shadow migration as, for example, nfs://fileserver/directory. Okay. With shadowstat we can check the progress of the migration.

root@test:/# shadowstat    
                                BYTES   BYTES           ELAPSED
DATASET                         XFRD    LEFT    ERRORS  TIME
target/newlocationforsomestuff  25,5M   -       -       00:01:10

The cool thing about shadow migration is that you can already use the new filesystem. Even though the migration is still running, you already see all the files, and when you access a file that hasn't been migrated yet, it is migrated the moment you access it on the new filesystem. You don't have to wait until the normal background migration would have reached that data.

root@test:/# md5sum /target/newlocationforsomestuff/xaadmd               
3fb4a6be2f93c3d93998db52061244aa  /target/newlocationforsomestuff/xaadmd
root@test:/# ls -l /target/newlocationforsomestuff | wc -l                       

Afterwards the background migration of the remaining data proceeds. You can observe that with the shadowstat command.

root@test:/# shadowstat 
                                BYTES   BYTES           ELAPSED
DATASET                         XFRD    LEFT    ERRORS  TIME
target/newlocationforsomestuff  97,8M   -       -       00:01:50
target/newlocationforsomestuff  128M    -       -       00:02:00
target/newlocationforsomestuff  147M    -       -       00:02:10
target/newlocationforsomestuff  165M    -       -       00:02:20
target/newlocationforsomestuff  186M    -       -       00:02:30
target/newlocationforsomestuff  202M    -       -       00:02:40
target/newlocationforsomestuff  211M    -       -       00:02:50
target/newlocationforsomestuff  224M    -       -       00:03:00
target/newlocationforsomestuff  236M    -       -       00:03:10
target/newlocationforsomestuff  243M    -       -       00:03:20
target/newlocationforsomestuff  249M    -       -       00:03:30
target/newlocationforsomestuff  256M    -       -       00:03:40
target/newlocationforsomestuff  260M    -       -       00:03:50
target/newlocationforsomestuff  266M    -       -       00:04:00
target/newlocationforsomestuff  272M    -       -       00:04:10
target/newlocationforsomestuff  278M    -       -       00:04:20
target/newlocationforsomestuff  286M    -       -       00:04:30
No migrations in progress


Successfully migrated.
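
For completeness: the NFS variant mentioned above would look along these lines. The fileserver name, export path and target dataset are made-up examples:

```
root@test:/# zfs create -o shadow=nfs://fileserver/export/somestuff target/fromnfs
```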

Do you want to learn more?

docs.oracle.com: Migrating ZFS File Systems
docs.oracle.com: Migrating File System Data to ZFS File Systems

blogs.oracle.com: What is Shadow Migration
blogs.oracle.com: Shadow Migration Internals