ChunkFS - solution or just a kludge?

It's a well-known problem in the computer industry that the time needed for filesystem checking will sooner or later reach unacceptable dimensions. This was one reason why we developed ZFS: a number of mechanisms in the filesystem ensure an always-consistent state. The Linux community sees this problem as well, but its solution looks more like a kludge. ChunkFS divides a filesystem into up to 256 chunks that get transparently merged into one user/application-visible filesystem. Every chunk is a filesystem of its own. The idea behind this concept is that you only need to check a few chunks, not the whole filesystem.

This idea has some major drawbacks. First, I assume that in practice the fault isolation of ChunkFS won't reach the level you need to save substantial fsck time. It's only a short thought experiment, but: the more write load you put on the filesystem, the more chunks will be in a "dirty" state. And the more write load you have on a filesystem, the more probable an inconsistent state becomes, as the probability of interrupting a write operation in progress rises with the number of write operations. So, in my personal opinion, you end up with several "dirty" chunks and thus won't get such a big advantage. The more you need a mechanism to shorten fsck time, the less ChunkFS would help you. (A back-of-the-envelope version of this argument is sketched at the end of this post.)

The biggest advantage may be parallel fsck-ing of the chunks, but this would put a huge load on the storage systems. It would also be interesting to see how the dependencies between chunks are resolved. (Imagine: chunk A needs a consistent chunk B, and chunk B needs a consistent chunk A. How do you resolve this conflict without risking the consistency of the whole filesystem?) Besides this, ChunkFS introduces new classes of problems, like the creation of unique inodes across several filesystems, or the mentioned problem of inter-chunk dependencies.

In the end, there is only one solution to the problem of growing fsck run times: obsoleting filesystem checking altogether. The most reasonable way to do this is by copy on write and transactional writes. NetApp saw the problem and invented WAFL; Sun saw the problem and invented ZFS. I think it's time for the Linux community to find a real solution and to step back from developing a questionable kludge. (A sketch of the copy-on-write idea follows below as well.)

PS: Section 8 of this shows a problem that seems to be common within the Linux development community: vast misunderstandings about the inner workings of ZFS. There is no filesystem checking in ZFS, as the filesystem doesn't need one. You can scrub the filesystem instead (in the widest sense similar to fsck, but it can be done online, and it checks the validity of data and metadata via their checksums). As the people in the Linux community tend to be intelligent to downright brilliant, I don't understand these misunderstandings …
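To make the "dirty chunks" argument a little more concrete, here is the promised back-of-the-envelope sketch. This is my own toy model, not anything from the ChunkFS code: assume W in-flight write operations land on uniformly random chunks out of C = 256. A chunk stays clean only if no write hit it, so the expected number of dirty chunks is C * (1 - (1 - 1/C)^W).

```python
# Toy model (my assumption, not ChunkFS code): W in-flight writes land on
# uniformly random chunks out of C. A chunk is "dirty" (needs fsck after a
# crash) if at least one write hit it.
def expected_dirty_chunks(writes: int, chunks: int = 256) -> float:
    # P(a given chunk stays clean) = (1 - 1/chunks) ** writes
    return chunks * (1.0 - (1.0 - 1.0 / chunks) ** writes)

for w in (16, 64, 256, 1024):
    print(f"{w:5d} concurrent writes -> ~{expected_dirty_chunks(w):6.1f} of 256 chunks dirty")
```

Even this crude model says that a busy box with a thousand in-flight writes leaves nearly every chunk dirty after a crash, which is exactly why I doubt the fsck savings.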
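And here, as promised, is a minimal sketch of the copy-on-write idea. To be clear, this is my own simplified illustration, not WAFL or ZFS source code: blocks are never overwritten in place; an update writes new copies along one path of the block tree and then flips a single root pointer, so a crash at any moment leaves either the complete old state or the complete new state, and there is nothing left for an fsck to repair.

```python
# Minimal copy-on-write sketch (my own illustration, not ZFS/WAFL code).
# Blocks are immutable once written; an update allocates new blocks and
# finally swaps one root pointer. That swap is the only "atomic" step, so
# a crash leaves either the old tree or the new tree -- never a mix.

class Block:
    def __init__(self, data, children=()):
        self.data = data
        self.children = tuple(children)  # never mutated after creation

def cow_update(root: Block, path: tuple, new_data) -> Block:
    """Return a new root that shares all unchanged blocks with the old one."""
    if not path:
        return Block(new_data, root.children)
    i = path[0]
    new_child = cow_update(root.children[i], path[1:], new_data)
    children = list(root.children)
    children[i] = new_child
    return Block(root.data, children)  # new copies along the path only

# The root pointer plays the role of the uberblock: updating it is the commit.
root = Block("root", [Block("a"), Block("b")])
new_root = cow_update(root, (0,), "a, version 2")
root = new_root  # atomic commit; a crash before this line keeps the old state
```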
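Finally, since scrubbing seems to be the misunderstood part: a scrub can run online because in ZFS every block pointer carries a checksum of the block it points to. The following is again my own illustration of the principle, not ZFS code:

```python
# Sketch of a checksum-verifying scrub (my illustration, not ZFS code).
# The parent stores the checksum of each child it points to, so
# verification needs no offline fsck: just walk the tree and compare.
import hashlib

def checksum(payload: bytes) -> bytes:
    return hashlib.sha256(payload).digest()

class Ptr:
    def __init__(self, block):
        self.block = block
        self.expected = checksum(block.payload)  # stored in the parent

class Blk:
    def __init__(self, payload: bytes, ptrs=()):
        self.payload = payload
        self.ptrs = tuple(ptrs)

def scrub(ptr: Ptr, path="root") -> list:
    """Walk the tree and return the paths of all blocks failing their checksum."""
    errors = []
    if checksum(ptr.block.payload) != ptr.expected:
        errors.append(path)
    for i, child in enumerate(ptr.block.ptrs):
        errors.extend(scrub(child, f"{path}/{i}"))
    return errors

leaf = Blk(b"data")
root_ptr = Ptr(Blk(b"meta", [Ptr(leaf)]))
leaf.payload = b"bitrot!"   # simulate silent corruption on disk
print(scrub(root_ptr))      # -> ['root/0'] : the scrub finds it
```

The point is that validity is verified against checksums stored one level up in the tree, so no separate offline consistency check is ever required.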