<AnInstanceOfMe> Quick question: anyone know whether setting the sharesmb property on a file system has been fully implemented on FreeBSD, or whether it will be in future releases if not? I'm getting "Unsupported share protocol: 1" when setting the property on FreeBSD 10.3-RELEASE-p4
<Baughn> So I was looking at the zfs_vdev_async_write_max_active tunable...
<Baughn> Am I correct in thinking that this will generally also match the number of concurrent writes to the storage device?
<DHE> it's a maximum. ZFS will not issue more than that
<Baughn> Right.
<Baughn> Funny. I guess I'll need a Skylake CPU to test it properly, but the NVMe device I'm using is theoretically capable of around 1,024 concurrent writes to arbitrary locations.
<DHE> sounds like you have some numbers to crank up. :)
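[For reference: a minimal sketch of inspecting and raising this tunable on ZFS on Linux via the module parameter interface. The values shown are illustrative, not recommendations; persist settings in /etc/modprobe.d/zfs.conf if they help.]

```shell
# Read the current async-write queue depth cap (ZFS on Linux)
cat /sys/module/zfs/parameters/zfs_vdev_async_write_max_active

# Raise the maximum (and optionally the minimum) at runtime, as root.
# Takes effect immediately; example values only.
echo 32 > /sys/module/zfs/parameters/zfs_vdev_async_write_max_active
echo 8  > /sys/module/zfs/parameters/zfs_vdev_async_write_min_active

# To persist across reboots, in /etc/modprobe.d/zfs.conf:
# options zfs zfs_vdev_async_write_max_active=32 zfs_vdev_async_write_min_active=8
```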
<Baughn> Right now I'm getting 750 MB/s simultaneous read and write. Which is great and all, but the number comes from the maximum DMA channel bandwidth.
<Baughn> ZFS is capable of capping it at default settings. I wasn't looking for a CPU upgrade yet, but... this looks like a good excuse.
<Baughn> (That's 750 MB/s read, 750 MB/s write.)
<DHE> in the source code for vdev_queue.c there's a lovely little diagram (near the top) showing how the parameters interact with how full the dirty data buffer is
<Baughn> Yeah, looked at it.
<Baughn> One possible confounder would be if Linux somehow buffers requests, despite -- presumably -- ZFS waiting for completion before issuing more. There's no replacement for real-world testing, though... hmm.
<Baughn> Eight simultaneous file writes probably isn't a heavy enough workload. I need to do random writes.
<DHE> keep in mind ZFS writes in linear batches. a bunch of random writes won't stress the disk out all that much
<Baughn> I'm well aware. It's one of the reasons I like to use it for SSDs.
<Baughn> It does make it hard to get any results other than "instant" out of this SSD, though. :P
<Baughn> Hum. Perhaps zfs_vdev_async_read_max_active would be a more salient tunable.
<DHE> honestly, don't be afraid to crank the minimums
<Baughn> Oh, that's not it.
<Baughn> I'm trying to get solid enough data that I can recommend changing the defaults.
<Baughn> (For the NVMe case)
<Baughn> I imagine they're only going to get more common over time.
<Baughn> On a completely separate note, I had a horribly fragmented filesystem until yesterday, when I reset it by way of rsync.
<Baughn> It's probably a worst-case workload: database-like random writes to the middle of mid-sized files, combined with automatic snapshots. It eventually made ZFS seize up, so I wish I'd kept it for testing, but I needed the machine for something else.
<Baughn> Question is, is there anything I can do to reduce the impact?
<Baughn> ..
<Baughn> Actually, let me put that another way. Is there any way to check the fragmentation level of a single file?
<DHE> not really. best hope is zdb, but the output isn't user-friendly
<DHE> it will literally print a list of disk offsets for each individual block
<Baughn> That'd work, if I could map blocks to files.
<DHE> no, you dump a file and get a block/offset list
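[For reference: one way to do this with zdb, relying on the fact that a file's inode number on ZFS is its object number. The pool/dataset `tank/data` and the filename are placeholders; interpreting the offsets is still manual work.]

```shell
# Get the file's object number (on ZFS the inode number is the object id)
ls -i /tank/data/myfile.db

# Dump the block pointer tree for that object (use the number from ls -i).
# Each L0 entry lists a DVA (vdev:offset:size) for one data block;
# widely scattered offsets suggest heavy fragmentation.
zdb -ddddd tank/data 12345
```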
<duso> Hello, I am a muggle with a headless nas4free server. I have read the documentation at http://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6r4n/index.html trying to figure out if it is safe to delete a clone snapshot on my NAS. Can someone help me confirm whether I need to replace the zfs filesystem with the clone, or just delete the clone?
<DHE> a clone is a filesystem like any other except its existence depends on the snapshot from which it came.
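[For reference: a minimal sketch of the snapshot/clone dependency; `tank/data` and `tank/test` are placeholder names.]

```shell
zfs snapshot tank/data@before          # point-in-time snapshot
zfs clone tank/data@before tank/test   # clone depends on that snapshot

# The snapshot cannot be destroyed while a clone depends on it:
zfs destroy tank/data@before           # fails: snapshot has dependent clones

# Either destroy the clone first...
zfs destroy tank/test
# ...or reverse the dependency so the clone becomes the "real" filesystem:
# zfs promote tank/test
```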
<DHE> so what exactly is the problem?
<duso> when I do a scrub it says I have corrupted files and to restore them from backup. they have been deleted from the filesystem but they still appear in the scrub output
<duso> not sure how to explain it better; I would just like to return the system back to default, so to speak
<duso> no clones, no snapshots
<DHE> it takes 2 scrubs to make the list of permanent errors go away
<DHE> you can simply cancel a scrub once you start it, though
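[For reference, the scrub commands involved, with `tank` as a placeholder pool name.]

```shell
zpool scrub tank        # start a scrub
zpool scrub -s tank     # stop (cancel) a running scrub
zpool status -v tank    # show progress and the list of files with permanent errors
```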
<duso> I had a WD Green die and managed to replace it OK, I thought, but when it resilvered I got this @worstcase snapshot
<duso> that was about 4 years ago; I just discovered the clone today
<Baughn> DHE:
<Baughn> Hm... how?
<duso> anyway, I think I'll do some more research on this and tackle it with a hot coffee and a clear head; it's after 2300h here
<duso> night y'all