Written by J. Moellenkamp on
The CeBIT magic
I promised to publish a walk-through of my ZFS demonstration at the CeBIT 2009 booth … it's the stuff Ingo Frobenius called magic. Well, it isn't really magic, but perhaps impressive when you demo it at high speed. For people used to the virtues of ZFS this speed seems normal, but you have to consider that most people know it otherwise: vastly slower, vastly less integrated and vastly more uncomfortable.
So … what was my demo case for ZFS at the CeBIT? As I had just one disk in my CeBIT system, I used the trick of using files as devices. So I had to create the file devices first.
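From memory it looked roughly like this; the directory and file names are placeholders for this write-up, and I don't recall the exact sizes, only that they were somewhere around 100 megabytes each:

# mkdir /cebitdemo
# mkfile 100m /cebitdemo/disk1
# mkfile 100m /cebitdemo/disk2
# mkfile 100m /cebitdemo/disk3
# mkfile 100m /cebitdemo/disk4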
Okay, now I create my testpool.
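Two of the file devices go into a mirrored pool; the paths are the placeholders from above:

# zpool create testpool mirror /cebitdemo/disk1 /cebitdemo/disk2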
To show how mighty zpool create is, I showed the mount table afterwards to present the already mounted filesystem.
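The pool's root filesystem is there right away; no newfs, no vfstab entry, no extra mount step:

# df -h | grep testpool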
Afterwards I created some filesystems. As I'm a child of the early 80s, I use Muppet Show names most of the time:
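I don't remember which Muppets actually made it into the demo, so take the four names below purely as examples:

# zfs create testpool/kermit
# zfs create testpool/piggy
# zfs create testpool/gonzo
# zfs create testpool/fozzie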
ZFS filesystem creation is so fast that I showed the mount table again to prove to the audience that I had really created the filesystems.
The concept of the storage pool was new to many people at the CeBIT booth, so I told them to observe the third column, the available space.
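zfs list is the handy view here: it shows the mountpoints again, and its third column, AVAIL, is the free space of the pool, identical for every filesystem in it (the exact figures depended on the file sizes I had picked):

# zfs list -r testpool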
90.8 MB free. Now let's create a file in one of the directories.
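For example (size and target filesystem again just picked for illustration):

# mkfile 20m /testpool/kermit/somedata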
Okay, yet another look at the filesystems.
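The same zfs list as before:

# zfs list -r testpool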
As all four filesystems share the same pool, all show the same reduced amount of available storage. Okay, now let's extend the pool. A short look at the current configuration first.
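zpool status does that:

# zpool status testpool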
We have a mirror of two devices. Okay, let's add the other two file devices.
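The remaining two file devices go in as a second mirror:

# zpool add testpool mirror /cebitdemo/disk3 /cebitdemo/disk4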
Let's have another look at our pool structure.
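Again with zpool status:

# zpool status testpool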
We now have a stripe of two mirrors. And when you look at the filesystems, all filesystems in the pool show the same increased amount of available storage.
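One more zfs list shows the grown AVAIL column for all four filesystems:

# zfs list -r testpool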
Let's play around with filesystem snapshots. I used the example of working in your home directory.
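To set the scene I put a few files into one of the filesystems, pretending it is a home directory; the file names are invented for this walk-through:

# touch /testpool/kermit/important_results_1
# touch /testpool/kermit/important_results_2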
Okay, it would be nice to protect your work against mishaps. Let's take a snapshot.
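The snapshot name is arbitrary, I just pick one for this write-up:

# zfs snapshot testpool/kermit@savepoint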
The storyline of my demo repeats this for a while:
It's Saturday and the boss needs the results by Monday.
Fsck … you've deleted them. But you can use the snapshots:
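Roughly like this, with the example names from above; first the mishap, then a look into the snapshot, then the rescue:

# rm /testpool/kermit/important_results_*
# ls /testpool/kermit/.zfs/snapshot/savepoint/
# cp /testpool/kermit/.zfs/snapshot/savepoint/important_results_* /testpool/kermit/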
You can just go into the .zfs directory in the root of your filesystem and access a snapshot by its name as a directory name. Okay, most people were really impressed by now, but we can do more than that. We can do the same for raw devices.

At first I showed them the creation of a sparsely provisioned volume with the storyline: "Imagine a colleague telling you that he needs a raw volume of 5 gigabytes, but you know he only needs 128 megabytes. How do you give him 5 gigabytes without giving him 5 gigabytes worth of hard disks?" So I create such a volume.
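The volume name is again made up; -V makes it a volume of the given size, -s makes it sparse, so no space is reserved in the pool:

# zfs create -s -V 5g testpool/testvol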
It's really a device. Just look at the device path:
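It lives under /dev/zvol; dsk is the block device, rdsk the raw one:

# ls -l /dev/zvol/dsk/testpool/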
Let's format it with UFS, just as an example. You could export it with iSCSI and format it with NTFS as well.
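newfs wants the raw device node and asks for a confirmation:

# newfs /dev/zvol/rdsk/testpool/testvol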
We need a mountpoint.
I initially mount the filesystem and create a timestamp file in it.
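The mountpoint name is made up for this write-up:

# mkdir /mnt/testvol
# mount -F ufs /dev/zvol/dsk/testpool/testvol /mnt/testvol
# touch /mnt/testvol/timestamp1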
I unmount it, take a snapshot of it, remount it and create another timestamp file, just to show that it's still writable.
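Same names as before, the snapshot name is again just an example:

# umount /mnt/testvol
# zfs snapshot testpool/testvol@snap1
# mount -F ufs /dev/zvol/dsk/testpool/testvol /mnt/testvol
# touch /mnt/testvol/timestamp2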
Now let's have a short look at the contents of our UFS filesystem.
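A plain ls:

# ls -l /mnt/testvol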
There are two timestamp files in it, as expected. Now we mount our snapshot. As snapshots are read-only by definition, we just mount it read-only.
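The snapshot of a volume gets its own device node under /dev/zvol, named volume@snapshot, so we can mount it like any other UFS device (read-only, and again with a made-up mountpoint):

# mkdir /mnt/testsnap
# mount -F ufs -o ro /dev/zvol/dsk/testpool/testvol@snap1 /mnt/testsnap
# ls -l /mnt/testsnap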
But we can look inside and see the version at the time of the snapshot … but now we want to have a writable version of the filesystem. We have to clone the snapshot. No problem.
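The clone name and its mountpoint are placeholders again:

# zfs clone testpool/testvol@snap1 testpool/testclone
# mkdir /mnt/testclone
# mount -F ufs /dev/zvol/dsk/testpool/testclone /mnt/testclone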
Initially it has the same content as our snapshot. But when we create an additional file, the filesystems start to differ. The nice thing: the cloned filesystem only takes the storage needed for the modifications, not for a complete copy.
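Something like this; the interesting part is the USED column for the clone in the zfs list output, it only accounts for the changed blocks:

# ls -l /mnt/testclone
# touch /mnt/testclone/timestamp3
# zfs list -r testpool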
This was my ZFS CeBIT showcase. For many people it was a really impressive show.