after adding some more 8 MB-chunked (yeah, I should make the chunks smaller, but who cares) MDs to geom_virstor, so newfs can proceed:
mnu@RELENG_8_0:~> s mount /dev/virstor/virtest /mnt/tmp
mnu@RELENG_8_0:~> df !$
df /mnt/tmp
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/virstor/virtest 4159842858 4 3827055426 0% /mnt/tmp
mnu@RELENG_8_0:~> df -h /mnt/tmp
Filesystem Size Used Avail Capacity Mounted on
/dev/virstor/virtest 3.9T 4.0K 3.6T 0% /mnt/tmp
mnu@RELENG_8_0:~> gvirstor status
Name Status Components
virstor/virtest 6% physical free md12
md13
md14
md15
md16
md17
md18
md19
md20
md21
md22
md23
md24
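The components above were presumably created and attached with something along these lines. This is a dry-run sketch (it only echoes the commands): the backing-file size and md unit numbers are hypothetical, and you would drop the leading `echo`s to actually run it on FreeBSD as root:

```shell
#!/bin/sh
# Dry run: echo each command instead of executing it.
for u in 25 26; do                        # hypothetical md(4) unit numbers
    # create a sparse backing file (consumes almost no real disk space)
    echo truncate -s 320G /mnt/big/gvirstor.$u
    # attach it as a vnode-backed md(4) device
    echo mdconfig -a -t vnode -f /mnt/big/gvirstor.$u -u $u
    # add the new component to the existing virstor device
    echo gvirstor add virtest md$u
done
```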
mnu@RELENG_8_0:~> ll -s /tmp/gvirstor.dd /mnt/big/gvirstor* | awksum # space actually taken on disk by the sparse MDs
1529120
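That is roughly 1.5 GB of real disk usage backing ~4 TB of virtual storage, which works because the vnode backing files are sparse. A quick generic illustration of the apparent-size vs allocated-blocks gap (`/tmp/sparse_demo` is just a scratch file for the demo):

```shell
# A sparse file's apparent size and its allocated blocks differ wildly.
truncate -s 1G /tmp/sparse_demo               # 1 GB apparent size
ls -l /tmp/sparse_demo | awk '{print $5}'     # apparent size in bytes
du -k /tmp/sparse_demo | awk '{print $1}'     # KB actually allocated (~0)
rm /tmp/sparse_demo
```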
mnu@RELENG_8_0:~> gvirstor list | grep Mediasize | awk '{s += $2} END {print s/1024}'
4395319296
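Sanity check on that number: the awk pipeline above divides Mediasize (bytes) by 1024, so the sum is in KB; dividing again by 1024^3 gives the total virtual size in TB (plain awk arithmetic, no gvirstor involved):

```shell
# 4395319296 KB of summed Mediasize -> TB (1 TB = 1024^3 KB)
echo 4395319296 | awk '{printf "%.2f TB\n", $1 / 1024 / 1024 / 1024}'
# prints "4.09 TB"
```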
mnu@RELENG_8_0:~> s umount /mnt/tmp
mnu@RELENG_8_0:~> s newfs -O1 /dev/virstor/virtest
/dev/virstor/virtest: 4194304.0MB (8589934592 sectors) block size 16384, fragment size 2048
using 22834 cylinder groups of 183.69MB, 11756 blks, 23552 inodes.
super-block backups (for fsck -b #) at:
32, 376224, 752416, 1128608, 1504800, 1880992, 2257184, 2633376, 3009568, 3385760,
...
etc etc (UFS1 has no lazy inode initialization, so newfs must write out every inode up front and runs MUCH slower this time)
So 4 TB of virtual storage is OK for UFS1 too.
p.s. ZFS is used w/o prefetch here, as I do not have 4 GB+ of RAM, and benchmarks showed it is faster w/o prefetch on this machine.
p.p.s. Another ZFS slowdown (a full-blown hard kernel hang of up to 10 seconds) awaits "zfs compression=on" users: the kernel hangs while zlib compresses the data being sent to ZFS, and only unhangs and continues once compression finishes.