<zfs> tcaputi commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-231931731>
<zfs> greg-hydrogen commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-231932395>
<zfs> tcaputi commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-231934448>
<ptx0> I see the new github fonts now
<ghormoon> hi, any idea why, when doing zfs send | zfs recv, after sending all the data it hangs for some time and then spits out "dataset is busy"? it never did that before, and the datasets (well, they're zvols) are not used at all on the remote site
<zfs> erikjanhofstede commented on issue zfsonlinux/zfs#4512 - zfs 0.6.5 and write performance problems <https://github.com/zfsonlinux/zfs/issues/4512#issuecomment-231959524>
<ghormoon> nvm, I'm dumb, I really started the VM from the wrong pool :)
<bunder> it's the same font, isn't it? just random sizing changes?
<bunder> Seagate Fires 6,500, Or 14% of Workforce, Stock Soars
<bunder> smh
<zfs> pjd commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232002991>
<zfs> raphaelcohn commented on issue zfsonlinux/zfs#4813 - Fails to compile with musl: zed_log.c <https://github.com/zfsonlinux/zfs/issues/4813#issuecomment-232009189>
<zfs> mailinglists35 opened issue zfsonlinux/zfs#4844 - l2arc_feed kernel thread keeps waking up the cpu on idle system <https://github.com/zfsonlinux/zfs/issues/4844>
<bb0x> ptx0, I had a look over Zmotion ... that's nice, so you only need to keep the same IP address?
<bb0x> ptx0, at the moment I'm using a CNAME, so I suppose that one will not work
<bb0x> ptx0, also if I understand this well, for ZoL you don't need to worry about their patch, right?
<zfs> kernelOfTruth commented on issue zfsonlinux/zfs#4512 - zfs 0.6.5 and write performance problems <https://github.com/zfsonlinux/zfs/issues/4512#issuecomment-232031126>
<bb0x> is resumable ZFS send/receive available on ZoL?
<DHE> apparently, as of commits on June 28th
<DHE> https://github.com/zfsonlinux/zfs/commit/47dfff3b86c67c6ae184c2b7166eaa529590c2d2 (warning: large)
<bb0x> I'm running zfs-0.6.5.7 so probably it's not there yet
<bb0x> I'm interested since I want to send some data, which is quite big, to another location, and in case of a network issue I have to start from the beginning again
<bb0x> if I upgrade somehow or compile the zfs command separately, should this work?
<bb0x> without upgrading everything?
<bb0x> or are there any details/properties that need to be set up on the existing pool?
<sveinse> The "golden rule" of having 1GB memory per TB of drive: is that raw size (the sum of the drives used) or the final resulting size after the parity disks have been subtracted?
<DHE> the latter
<DHE> but it's also a "golden" rule insofar as it isn't a rule so much as a suggestion on how to behave. "large pools require more RAM" isn't a rule, it's a reminder that huge pools with lots of IO benefit from larger caches and you shouldn't cheap out
<sveinse> DHE, yes, hence the double quotes :D
<sveinse> My practical case is for my home NAS, where I have 6x3TB raidz2, and I'm about to max out the installable memory with 16GB
<jaakkos> hi! the disk usage information seen by libzfs (and zfs get) seems to update only when data is written to disk. is it possible to query more up-to-date free space information?
<bunder> got any snapshots?
<bunder> if you snapshot and delete a file, the blocks don't get freed until you delete the snapshot
<DHE> jaakkos: no, because ZFS doesn't know how much space will be used for things like dedup and compression until it actually does it
<jaakkos> no snapshots for now
<jaakkos> right..
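(An illustration of bunder's point above, as a minimal sketch — tank/test is a placeholder dataset, and only documented zfs commands and properties are used:)
    dd if=/dev/urandom of=/tank/test/blob bs=1M count=1024   # write 1 GiB of data
    zfs snapshot tank/test@before                            # the snapshot now pins those blocks
    rm /tank/test/blob                                       # file is gone, but space is not freed
    zfs list -o name,used,usedbysnapshots tank/test          # USEDSNAP still accounts for ~1 GiB
    zfs destroy tank/test@before                             # only now are the blocks freed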
<jaakkos> so i guess this means writes can return while it turns out they can't be completed evre?
<jaakkos> s/evre/ever/
<Melian> jaakkos meant: so i guess this means writes can return while it turns out they can't be completed ever?
<DHE> jaakkos: a grace period is given for user quotas - you can exceed it slightly if you overfill it in a single transaction
<DHE> filesystem quotas are enforced by ZFS timing transactions out a bit sooner
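(A sketch of the user-quota behaviour DHE describes — tank/home and the user alice are placeholders; userquota@ and zfs userspace are documented:)
    zfs set userquota@alice=10G tank/home   # per-user quota, enforced with a short grace period
    zfs get userquota@alice tank/home
    zfs userspace tank/home                 # show per-user space consumption against the quota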
<DHE> oh...
<DHE> I just realized something
<zfs> chrisrd commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#discussion_r70449866>
<zfs> chrisrd commented on commit zfsonlinux/zfs@5479e64eca - Kill zp->z_xattr_parent to prevent pinning <https://github.com/zfsonlinux/zfs/commit/5479e64ecaca97f8f7f6e06da37b5f1103768169#commitcomment-18213769>
<ptx0> 6TB disk on amazon for $175
<sveinse> How resilient is zfs against power loss? Mostly with respect to catastrophic errors, such as an unmountable filesystem; losing data, as in losing the last write transaction, is less critical
<sveinse> Has anyone made any instrumented tests for this?
<bunder> does rebooting a frozen laptop count? seems fine
<zfs> chrisrd commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#discussion_r70455745>
<ptx0> my only problem seems to be zsh
<ryao> sveinse: It is highly resilient. Its internal transactions mean that any power failure can be survived as long as the hardware follows proper flush semantics. There are even mechanisms in place to deal with buggy flush semantics, but there are no strong guarantees there.
<ptx0> if I hang the system real good, it'll corrupt my ~/.zsh_history
<ptx0> it's a very simple fix though and I'm not sure this is a ZFS problem, just a zsh thing
<ryao> sveinse: There is a program called ztest that does stochastic testing to catch issues caused by power loss, simulating it via SIGKILL on a program running the kernel driver in userspace.
<ryao> sveinse: And there are studies of ZFS that can be found via Google showing that it is very resilient.
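(For the curious — ztest ships with ZoL; a hedged sketch of an invocation, since exact flags can differ between releases (see ztest(1)):)
    ztest -V -T 120   # exercise a throwaway pool in userspace for ~120 seconds, with random kills standing in for power loss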
<bunder> ptx0: fwiw, i think bash history only gets written if the session closes
<ptx0> bunder: you can change that
* ptx0 does :)
<bunder> hmm
<ptx0> it's frustrating to be in two shells and have two different ^R histories to search
<sveinse> ryao: I think spinning drives are good when it comes to flushing, as they use the stored rotational energy as a generator to save out the buffer and park the head. Quite clever actually
<ryao> sveinse: The firmware is designed to obey flushes.
<ptx0> sveinse: I don't know about saving the buffer but they certainly park the head before it crashes into the platter
<bunder> tell that to the ibm deathstars
<ryao> They don't save the buffer, but ZFS is designed to ensure that the buffer can be loss on power failure. The flush is intended to commit the buffer. Two flushes are needed before ZFS will recognize that non-synchronous changes have been made when resuming from power failure, due to the 2-stage transaction commit.
<ryao> s/loss/lost/
<Melian> ryao meant: They don't save the buffer, but ZFS is designed to ensure that the buffer can be lost on power failure. The flush is intended to commit the buffer. Two flushes are needed before ZFS will recognize that non-synchronous changes have been made when resuming ...
<sveinse> heh, I remember the big hard drive platters on the mainframe computers some decades ago. The drive served as a gyroscope as well. A crash there, and the room was ruined!
<ptx0> not really, their cases absorb most of the damage of 80lb platters coming off their axis
<ptx0> if you want to see destruction, check out one of those 20 ton flyweights coming unhinged
<ptx0> that's a fuckton of stored rotational energy, it's gotta go *somewhere* :D
<sveinse> I didn't see it myself when it happened, but I saw the destruction in a mainframe harddisk room when I started studying at university, some too-many years ago. The platter had allegedly climbed the walls
<ptx0> flywheel, derp
<ptx0> "Advanced FES systems have rotors made of high strength carbon-fiber composites, suspended by magnetic bearings, and spinning at speeds from 20,000 to over 50,000 rpm in a vacuum enclosure"
<sveinse> Melian: Am I misinformed that hard drives flush out their write buffers on power loss? Would be good to know if I'm lying :o
<ptx0> that's a script
<djs> a somewhat annoying script really
<ptx0> the novelty wore off quickly
<cirdan> :-)
<zfs> tcaputi commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232084666>
<zfs> kernelOfTruth commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232085945>
<Baughn> sveinse: Depends on how much you paid for them. In general, yes, you're mistaken.
<Baughn> sveinse: This is what UPSen are for, and also why modern filesystems take such great care to keep on-disk state always consistent.
<Baughn> So in principle you should only lose a second or five... but you haven't lived until you've dealt with an HDD which reorders writes while ignoring flushes or write barriers.
<zfs> tcaputi commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232088122>
<sveinse> Baughn: Well, thanks. It's always good to clear up any mistakes!
<Baughn> sveinse: At any rate, this is *why* fsync takes a long time. And why SLOG is a good idea.
<Baughn> It's not why fsync doesn't work; that's just POSIX being braindead.
<sveinse> Baughn: Tell me about it. Haven't you heard me moan about dpkg taking a very long time to install linux-headers, due to a few hundred thousand fsync() calls in its implementation?
<zfs> dd1dd1 opened issue zfsonlinux/zfs#4845 - centos7 - selinux must be disabled <https://github.com/zfsonlinux/zfs/issues/4845>
<Baughn> sveinse: So one thing you can do is set sync=disabled. This causes fsync to have no effect, but since ZFS doesn't reorder writes around transaction borders your FS will still always be locally consistent.
<Baughn> sveinse: It screws horribly with the assumptions of any distributed systems, though. E.g. it can cause postgresql to think something is flushed when it actually isn't; don't use it on a server or for anything important unless you know exactly what it does.
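(For reference, the property Baughn is describing — tank/scratch is a placeholder dataset; standard/always/disabled are the documented values of sync:)
    zfs set sync=disabled tank/scratch   # fsync()/O_SYNC return immediately; data persists only at txg commit
    zfs get sync tank/scratch
    zfs set sync=standard tank/scratch   # restore the default, POSIX-compliant behaviour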
<sveinse> Baughn: Yes, I know. The fix I ended up doing is using eatmydata, which overrides fsync() into a no-op. This way I can limit and control what ends up unsynced, rather than doing it on the whole fs
<zfs> dweeezil commented on issue zfsonlinux/zfs#4845 - centos7 - selinux must be disabled <https://github.com/zfsonlinux/zfs/issues/4845#issuecomment-232091592>
<Nukien> Baughn, So what *is* the correct way to host postgresql on zfs?
<Baughn> Nukien: Straightforwardly on a default-configured filesystem, perhaps?
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4822 - Allow zfs_purgedir() to skip inodes undergoing eviction by behlendorf <https://github.com/zfsonlinux/zfs/pull/4822#issuecomment-232096151>
<ryao> Nukien: http://open-zfs.org/wiki/Performance_tuning#PostgreSQL
<zfs> ironMann commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-232097413>
<Nukien> Yah, I've seen that. Keep feeling that there should be more to it ...
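(The gist of the tuning page ryao links, sketched with a placeholder dataset tank/pgdata — check the wiki for the current recommendations:)
    zfs set recordsize=8K tank/pgdata        # match PostgreSQL's 8 KiB page size
    zfs set logbias=throughput tank/pgdata   # suggested there for database workloads
    zfs set compression=lz4 tank/pgdata      # commonly recommended; cheap CPU-wise and often a win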
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4833 - Want tunable to ignore hole_birth by rincebrain <https://github.com/zfsonlinux/zfs/pull/4833#issuecomment-232099806>
<dorianj> are there any long-term future plans to allow compression in ARC? it seems like for some workloads, this would be a win
<FinalX> anyone know if there's an easy way to enable auto snapshotting on ubuntu 16.04 with the ootb zfs?
<FinalX> using some script i wrote myself, but that doesn't work very well with the lxd snapshot stuff :)
<PMT> FinalX: i mean, zfs-auto-snapshot or something similar
<FinalX> ah, I see there's no package for it, but the github page says it all
<FinalX> ty :)
<PMT> dorianj: compressed L2ARC is something already implemented in some OpenZFS platforms, and I think I saw chatter about compressed ARC somewhere but I can't recall where ATM
<PMT> ah, https://github.com/openzfs/openzfs/pull/103
<DHE> ARC isn't compressed so much as it is keeping the raw on-disk data in RAM
<DHE> so there's an onus on you to configure your pool properly
<PMT> sure, so it won't be compressed if it's not already stored compressed
<dorianj> oh, wow, neat! PMT are you implying that ZoL L2ARC _isn't_ compressed even if the pool is?
<PMT> dorianj: i honestly don't recall if it is or isn't on ZoL.
<FinalX> it caches blocks
<FinalX> afaik it caches the data of the blocks, so compressed if compression is not, and not if it's not, etc.
<FinalX> eh, so compressed if compression is on
<pcd> compressed ARC will be upstreamed Soon (TM)
<DHE> L2ARC is compressed on ZoL, always using LZ4
<FinalX> ah
<DHE> interesting question of how that will interact with compressed ARC. Might change to use the on-disk compression and match the current ARC settings.
<PMT> so currently, compressed L2ARC is a thing, but not compressed ARC (or, currently, persistent L2ARC, IIRC)
<DHE> correct
<DHE> compressed ARC has been really good for my metadata-heavy workloads. :)
<PMT> DHE: running a custom build with it pulled in, or a non-ZoL platform?
<zfs> l1k commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232100012>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4791 - Unable to destroy filesystem with receive_resume_token by Ramzec <https://github.com/zfsonlinux/zfs/pull/4791#issuecomment-232103320>
<DHE> PMT: correction: compressed L2ARC
<DHE> :/
<zfs> behlendorf commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232106427>
<DHE> still I need to finish my custom build anyway. :)
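(A quick way to see the compressed L2ARC DHE mentions — these l2 kstats are exposed in /proc/spl/kstat/zfs/arcstats on ZoL, though field names can vary by release:)
    grep -E '^l2_(size|asize)' /proc/spl/kstat/zfs/arcstats   # l2_size = logical bytes, l2_asize = allocated (compressed) bytes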
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232117243>
<FinalX> PMT: works perfectly, ty :)
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70482698>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232118842>
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70483427>
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70483791>
<zfs> tcaputi commented on pull request zfsonlinux/zfs#4329 - ZFS Encryption by tcaputi <https://github.com/zfsonlinux/zfs/pull/4329#issuecomment-232128774>
<zfs> tuxoko commented on pull request zfsonlinux/zfs#4828 - Fix get_zfs_sb race and misc fixes by tuxoko <https://github.com/zfsonlinux/zfs/pull/4828#issuecomment-232129089>
<zfs> ironMann commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232129534>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232130095>
<zfs> ironMann commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232130846>
<zfs> ironMann commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70490515>
<zfs> rdolbeau commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232131559>
<zfs> ironMann commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70491960>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4821 - Fix zdb crash with 4K-only devices by behlendorf <https://github.com/zfsonlinux/zfs/pull/4821#discussion_r70492246>
<zfs> tuxoko commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#discussion_r70493038>
<zfs> tuxoko commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232137486>
<zfs> behlendorf commented on issue zfsonlinux/zfs#4841 - Please add link to "Getting Started" wiki page for zol on voidlinux <https://github.com/zfsonlinux/zfs/issues/4841#issuecomment-232138982>
<zfs> gotwf closed issue zfsonlinux/zfs#4841 - Please add link to "Getting Started" wiki page for zol on voidlinux <https://github.com/zfsonlinux/zfs/issues/4841>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232142042>
<zfs> tuxoko commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232146083>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232146627>
<zfs> ironMann commented on issue zfsonlinux/zfs#4844 - l2arc_feed kernel thread keeps waking up the cpu on idle system <https://github.com/zfsonlinux/zfs/issues/4844#issuecomment-232148849>
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#discussion_r70504560>
<zfs> tuxoko opened pull request zfsonlinux/zfs#4846 - Fix dbuf_stats_hash_table_data race by tuxoko <https://github.com/zfsonlinux/zfs/pull/4846>
<zfs> 5YN3R6Y commented on pull request zfsonlinux/spl#560 - isa_defs.h should be more arch agnostic by 5YN3R6Y <https://github.com/zfsonlinux/spl/pull/560#issuecomment-232154566>
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232157337>
<zfs> thegreatgazoo commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232159498>
<zfs> dweeezil commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232160004>
<zfs> mailinglists35 commented on issue zfsonlinux/zfs#4844 - l2arc_feed kernel thread keeps waking up the cpu on idle system <https://github.com/zfsonlinux/zfs/issues/4844#issuecomment-232160565>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-232167199>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4794 - Multi-thread 'zpool import' for blkid by behlendorf <https://github.com/zfsonlinux/zfs/pull/4794#issuecomment-232173127>
<zfs> ironMann commented on pull request zfsonlinux/zfs#4328 - [RFC] SIMD implementation of vdev_raidz generate and reconstruct routines by ironMann <https://github.com/zfsonlinux/zfs/pull/4328#issuecomment-232173977>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-232178046>
<zfs> tuxoko commented on issue zfsonlinux/zfs#4845 - centos7 - selinux must be disabled <https://github.com/zfsonlinux/zfs/issues/4845#issuecomment-232181742>
<DeHackEd> wow, busy in here
<zfs> dweeezil commented on issue zfsonlinux/zfs#4834 - ZFS Slow Write Performance / z_wr_iss stuck in native_queued_spin_lock_slowpath <https://github.com/zfsonlinux/zfs/issues/4834#issuecomment-232183107>
<PMT> mostly just busy on github.
<zfs> dweeezil commented on issue zfsonlinux/zfs#4512 - zfs 0.6.5 and write performance problems <https://github.com/zfsonlinux/zfs/issues/4512#issuecomment-232185396>
<zfs> bdaroz commented on issue zfsonlinux/zfs#4834 - ZFS Slow Write Performance / z_wr_iss stuck in native_queued_spin_lock_slowpath <https://github.com/zfsonlinux/zfs/issues/4834#issuecomment-232186157>
<zfs> dweeezil commented on issue zfsonlinux/zfs#4834 - ZFS Slow Write Performance / z_wr_iss stuck in native_queued_spin_lock_slowpath <https://github.com/zfsonlinux/zfs/issues/4834#issuecomment-232188429>
<zfs> snajpa commented on issue zfsonlinux/zfs#3508 - Unmounting ZFS filesystems frequently hangs for me in git tip and recent git versions <https://github.com/zfsonlinux/zfs/issues/3508#issuecomment-232189035>
<zfs> tuxoko commented on pull request zfsonlinux/zfs#4822 - Allow zfs_purgedir() to skip inodes undergoing eviction by behlendorf <https://github.com/zfsonlinux/zfs/pull/4822#issuecomment-232191012>
<zfs> jaw3000 closed issue zfsonlinux/zfs#4842 - Error on copy using Lubuntu <https://github.com/zfsonlinux/zfs/issues/4842>
<zfs> bprotopopov commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232201072>
<zfs> behlendorf closed issue zfsonlinux/zfs#4752 - Large kmem_alloc(1430784, 0x1000), unable to handle kernel NULL pointer dereference, metaslab_init+0x219/0x2d0 [zfs] <https://github.com/zfsonlinux/zfs/issues/4752>
<zfs> behlendorf pushed to master at zfsonlinux/zfs - Comparing 590c9a0994...62b2d54b2b <https://github.com/zfsonlinux/zfs/compare/590c9a0994...62b2d54b2b>
<zfs> behlendorf commented on issue zfsonlinux/zfs#3508 - Unmounting ZFS filesystems frequently hangs for me in git tip and recent git versions <https://github.com/zfsonlinux/zfs/issues/3508#issuecomment-232206805>
<zfs> behlendorf closed pull request zfsonlinux/zfs#4828 - Fix get_zfs_sb race and misc fixes by tuxoko <https://github.com/zfsonlinux/zfs/pull/4828>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-232207013>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4822 - Allow zfs_purgedir() to skip inodes undergoing eviction by behlendorf <https://github.com/zfsonlinux/zfs/pull/4822#issuecomment-232208224>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#issuecomment-232208521>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4815 - Add RAID-Z routines for SSE2 instruction set, in x86_64 mode. by ironMann <https://github.com/zfsonlinux/zfs/pull/4815#issuecomment-232209864>
<zfs> bdaroz commented on issue zfsonlinux/zfs#4834 - ZFS Slow Write Performance / z_wr_iss stuck in native_queued_spin_lock_slowpath <https://github.com/zfsonlinux/zfs/issues/4834#issuecomment-232211665>
<zfs> behlendorf pushed to master at zfsonlinux/zfs - Comparing 62b2d54b2b...81edd3e834 <https://github.com/zfsonlinux/zfs/compare/62b2d54b2b...81edd3e834>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4743 - fix the PANIC: metaslab_free_dva(): bad DVA with zfs #3937 by hsepeng <https://github.com/zfsonlinux/zfs/pull/4743#issuecomment-232216326>
<bunder> go go gadget brian
<zfs> behlendorf commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-232217325>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4760 - OpenZFS 4185 - add new cryptographic checksums to ZFS: SHA-512, Skein, Edon-R by tonyhutter <https://github.com/zfsonlinux/zfs/pull/4760#issuecomment-232217807>
<zfs> pwolny commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232218861>
<zfs> behlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#issuecomment-232219101>
<Baughn> If I set sync=disabled, will /sbin/sync still work?
<Baughn> I.e. for a normal reboot.
<zfs> jsalinasintel closed issue zfsonlinux/zfs#4597 - On zfs master zpool import of destroyed pool failing <https://github.com/zfsonlinux/zfs/issues/4597>
<bunder> a quick google seems to say that Solaris has sync but ZFS holds it for the transaction flush
<bunder> i think if you want it done now, you would want to sync; echo 3 > /proc/sys/vm/drop_caches
<PMT> bunder: i _think_ that drop_caches only affects read caches, but i haven't had occasion to test it
<bunder> i guess failing that, sync;sync;sync and wait 5 seconds
<PMT> sync isn't beetlejuice, using it 3 times won't make it work
<bunder> dunno, was kind of a thing i used to do on ext3 if i had to force a reboot
<PMT> i believe it will flush out on unmount regardless tho, which should be happening
<bunder> not if you can't unmount ;)
<Baughn> Root FS. :)
<FireSnake> wouldn't 'a normal reboot' also export the pool and unload the modules, which should eliminate the need for sync?
<Baughn> umount is not happening
<Baughn> It's possibly a bug in the NixOS shutdown scripts. I actually managed to get it into an inconsistent state, which takes some doing... but now I'm wondering how to do it right.
<Baughn> drop_caches might do the trick, I guess.
<DeHackEd> any `zfs` command which requires writing will force a flush. take a snapshot on shutdown?
<Baughn> An empirical trial shows that /sbin/sync does nothing.
<Baughn> Also, I just saw something amusing. du gives the on-disk usage, apparently not including buffers not yet written out...
<DeHackEd> correct
<Baughn> 'echo 3 > drop_caches' does nothing.
<Baughn> ..well, it possibly drops caches.
<Baughn> 'zfs snapshot' works.
* Baughn would prefer not to do something which might leave a snapshot behind.
<DeHackEd> well it doesn't have to be a snapshot. what else can you do? zfs set?
<Baughn> zfs set of a custom property...
<Baughn> Works. :D
<DeHackEd> now you're getting it
<Baughn> "nix:shutdown-time". That'll do just fine.
<DeHackEd> :)
<PMT> you could also set sync=disabled
<DeHackEd> can you set the property to its current value and get a writeback?
<PMT> (i don't know if it'll trigger a write if it's not a state change; you could, amusingly, turn it on to make it flush)
<PMT> I don't know, that was what I was about to say
<zfs> bprotopopov commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232227132>
<zfs> rincebrain commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232228087>
<Baughn> That would work, but it would also leave a hole during which the system could crash and *not* undo the change.
<Baughn> The custom attribute set looks far safer.
<Baughn> https://github.com/NixOS/nixpkgs/pull/16903 <- Also, I already implemented it. Love NixOS, really.
<Baughn> Testing it on my own system was trivial. :)
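(A sketch of the trick Baughn settles on — writing a user property (the name must contain a colon) goes through a transaction group, so it forces outstanding state to disk; rpool is a placeholder:)
    zfs set nix:shutdown-time="$(date +%s)" rpool   # any property write is committed via a txg sync
    zfs get nix:shutdown-time rpool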
<zfs> bprotopopov commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-232230277>
<Halk> I've deleted about 7TB of stuff on an array but the space hasn't become available and seems to be showing up in USEDSNAP
<Halk> Yet when I look at the snapshots none are anywhere close to 1TB
<Halk> How do I free up the space?
<DeHackEd> destroy the snapshot(s) still referencing the files you deleted
<Halk> I don't know that any are
<Halk> The largest says 63GB
<Halk> And USEDSNAP is 6.96TB
<DeHackEd> you need to understand that there are 3 metrics for "how big a dataset is" under various interpretations -- Used, Written and Refer.
<DeHackEd> Used is how much space is reclaimed by destroying a snapshot, but only the one snapshot
<DeHackEd> when you do so, the USED sizes of the adjacent snapshots change
<fling> Halk: `zpool get freeing`
<Halk> Oh, I semi understand
<Halk> NAME PROPERTY VALUE SOURCE
<Halk> array freeing 0 default
<Halk> By the way, all my data survived, and I replaced the two dodgy drives
<Halk> Thanks for your help with that
<DeHackEd> so all those 7 TB of space are referenced by multiple snapshots, so no single one says it USED all of them
<fling> Halk: dodgy drives?
<DeHackEd> so, here's the nasty breakdown: zfs list -t snapshot -d 1 -r -o name,used,refer,written array/fsname
<Halk> Yeah, I had broken drives and you were helping me
<DeHackEd> where you fill in "array/fsname" as the dataset containing all this data
<fling> Halk: I don't recall. Could you show me the zpool status before and after? :P
<Halk> Before, with the dodgy drives? I don't have it to hand, but there were about 1.2K errors on one, 2 drives rejected by ZFS and another failed
<Halk> http://pastebin.com/raw/R0LxEz40
<fling> Halk: what was the issue?
<Halk> Two drives were dead :)
<Halk> The third one wasn't faulty, I've done a resilver on both new drives and had 0 errors since
<Baughn> Halk: So if you delete the oldest one, that's 1T recovered right there.
<fling> Halk: which steps did you perform exactly? dd of the rejected drives?
<DeHackEd> Baughn: no. columns are USED, REFER, WRITTEN
<Halk> I didn't need to dd the rejected drives thankfully
<fling> Halk: then tell me what you did
<Baughn> DeHackEd: Right? It says 927G used.
<Halk> I just whipped the worst offender out, resilvered with 6 good, 1 dodgy and 1 new drive. And then I whipped out the other dodgy one and resilvered
<DeHackEd> Baughn: he wants his 7 TB back
<DeHackEd> that's barely 1
<Baughn> That's what I said...
<fling> Halk: you need to replace all the bad drives at once next time, no need to resilver twice.
<Halk> I did it that way because one of the two of them was only rarely giving errors
<Halk> I thought if I did 2 at once and another drive failed I was humped, but if I did one at a time I'd be able to muddle through better if that happened
<fling> Halk: so you tell me the regular `zpool replace` fixed your pool?
<Halk> Yeah
<Halk> Actually rebooting fixed the pool
<DeHackEd> Halk: you said you deleted 7 TB. Across the whole snapshot sequence there's about 2.5 TB written, which means most of the deleted files are present in the first snapshot.
<fling> Halk: what about data loss?
<Halk> None
<fling> Great!
<DeHackEd> so you're probably looking at destroying all the snapshots in the end to get it all back.
<Halk> zfs list -H -o name -t snapshot | xargs -n1 zfs destroy
<Halk> Will that do the trick?
<DeHackEd> that'll do it
<DeHackEd> are there other datasets with snapshots?
<Halk> Nah
<DeHackEd> you can add "-nv" to the destroy command for "pretend" mode
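(Combining the two: -n makes zfs destroy a dry run and -v prints what would be destroyed, so the one-liner can be previewed safely:)
    zfs list -H -o name -t snapshot | xargs -n1 zfs destroy -nv   # preview; drop the -n to actually destroy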
<Halk> After - array/data 1.08T 12.3T 0 12.3T 0 0
<Halk> Before - array/data 680G 19.3T 6.96T 12.3T 0 0
<DeHackEd> zpool get freeing array # now do it
<Halk> 4.98T - so it's in the process of freeing?
<DeHackEd> yeah. it'll clean it up in the background
<DeHackEd> go ahead and refresh the 'zpool get' command
<Halk> It's so much easier when I'm not fearing that all my data is lost
<Halk> 3.66T
<DeHackEd> good speed
<Halk> Yeah, there's 8 WD 4TB Red drives on an 8 port RAID controller
<Halk> I keep meaning to upgrade the network to 10G so I can actually see proper file speeds but the price of NICs means no
<DeHackEd> intel x540 are overpriced, if that's what you're looking at
<Halk> It's done
<Halk> I'm not really looking, I look every year or so and then say "no"
<DeHackEd> haha
<Halk> array/data 7.63T 12.3T 0 12.3T 0 0
<Halk> Were you about to suggest an alternative to the intel NIC? I could say no to that one as well, but less emphatically
<DeHackEd> well, RJ-45 jacks do raise the price a bit, but the alternative is universally SFP+
<DeHackEd> the thing about the x540 cards is they double as hardware iSCSI HBAs, which raises the price. other cards tend to be cheaper and lose that functionality
<Halk> Nah, when I moved in here I got cat 7 wrangled into the walls
<DeHackEd> pfft, should have used fiber. :)
<Halk> I just want to find out the cheapest 10G ethernet NIC so I can decide that no, now is not the time
<Halk> It's usually frustrating searching for one whenever the notion takes me; there are so few of them available in comparison to bog standard cheap 1gig NICs that I get loads of false positives
<bunder> i feel cheap for not wanting to spend $400 on a cisco managed switch for gigabit
<bunder> i'm still on 100mbit
<Halk> I have cheap gigabit switches that must have cost me no more than £10
<Halk> What's a 400 dollar switch going to do on top of that?
<bunder> do they have vlans?
<bunder> something other than a cheap web interface?
<Halk> Oh, they're unmanaged
<bunder> heh
<Halk> What does managed do for you that unmanaged doesn't?
<Halk> Does it replace the job of a router, or does it just help if traffic is congested?
<bunder> vlans for network segregation, so i don't need two switches
<DeHackEd> tons of things. LACP, vlans, SNMP graphing, force a port to fixed speeds, security, MAC learning options
<DeHackEd> I have a juniper EX switch (small office model) here at home
<Halk> Yeah, at some point I might have understood some of that if I stretched... but not now... :)
<DeHackEd> ports are assigned 1 or more vlans (in the case of multiple vlans, each packet is tagged). two ports may only communicate with each other if they are on the same vlan
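(On the Linux side you can experiment with the tagged vlans DeHackEd describes using iproute2 — eth0 and vlan id 10 are placeholders:)
    ip link add link eth0 name eth0.10 type vlan id 10   # create an 802.1Q-tagged sub-interface
    ip addr add 192.168.10.2/24 dev eth0.10
    ip link set eth0.10 up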
<Halk> I just have a server, a PC, a pi running Kodi and various other things like TVs connected to the network, so I use dirt cheap unmanaged switches and it all works great
<bunder> i never liked juniper, their stuff runs too hot and loud
<Halk> I don't think managed switches would benefit me much
<DeHackEd> bunder: well, their datacenter stuff, yeah. but they have low-fan and no-fan options as well
<bunder> interesting
<DeHackEd> I have an EX2200-C. it's small and only has 14 gig ports total (2 SFP options) but it's fanless
<bunder> what did that run you?
<Halk> And my googling says 10G NICs are still £300+ so no 10G for me this year
<DeHackEd> around $600 CAD
<bunder> eugh
<bunder> well, i should be getting ready for work, i'll be back :P
<PMT> NICs can be had cheaper on ebay if you really desired, but switches are painful, particularly for RJ-45 options
<DeHackEd> oh geez don't get me started on the 10gig switches...
<Halk> Oh well, fuck it all then, I hadn't even thought about switches
<Halk> I'm not spending a grand just to seek faster on movies
<DeHackEd> meh, I have a few SFP+ servers now with fiber modules. I'm a happy man.
<Halk> Probably by the time I could justify buying 10G upgrades, wireless will be faster than 1G
<Halk> 802.11ac I think it is, once that's ubiquitous there's no need for 10G I guess for me at home
<DeHackEd> keep in mind wireless is half-duplex, so whatever speed is advertised you will get half that on a good day.
<Halk> Not for streaming media around my home though?
<PMT> I can pull/push 100 Mbit over my 11ac, so I'm content enough
<DeHackEd> I have a goddamned Juniper switch in my home. screw wireless where possible. :)
<DeHackEd> g
<DeHackEd> g'night
<PMT> I have a nice 1Gb switch too, but
<Halk> Although I've never liked wireless and always gone wired... what I don't get is why 10G hasn't replaced 1G the way that 1G replaced 100M
<PMT> nn
<PMT> Halk: because it's prohibitively expensive and they decided to sidegrade to a different PHY tech
<PMT> see also: 20GbE
<PMT> *25GbE
<Halk> It's the expensive bit I don't get
<Halk> There's an expense gap between 100M and 1G, but it's not huge; there's a Valles Marineris between 1G and 10G
<Halk> Anyway, thanks for all the help, my array is all as it should be. Next task is to automate backups to a different array but I CBA
* bunder shrugs
<bunder> i can seek fine on 100mbit, i just want more speed than 12MB/s
<bunder> why have a disk array that can do 450-500MB/s when the network is so gimped