zfsironMann commented on commit zfsonlinux/zfs@ab9f4b0b82 - SIMD implementation of vdev_raidz generate and reconstruct routines <https://github.com/zfsonlinux/zfs/commit/ab9f4b0b824ab4cc64a4fa382c037f4154de12d6#commitcomment-18189307>
zfs0xFelix commented on issue zfsonlinux/zfs#3785 - system freezes when zfs is waiting for disks to spin up <https://github.com/zfsonlinux/zfs/issues/3785#issuecomment-231656273>
zfsbdaroz commented on issue zfsonlinux/zfs#4834 - ZFS Slow Write Performance / z_wr_iss stuck in native_queued_spin_lock_slowpath <https://github.com/zfsonlinux/zfs/issues/4834#issuecomment-231656676>
zfschrisrd opened pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838>
zfschrisrd commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-231674855>
zfsAndCycle commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-231679459>
zfskobuki commented on issue zfsonlinux/zfs#3785 - system freezes when zfs is waiting for disks to spin up <https://github.com/zfsonlinux/zfs/issues/3785#issuecomment-231683547>
zfs0xFelix commented on issue zfsonlinux/zfs#3785 - system freezes when zfs is waiting for disks to spin up <https://github.com/zfsonlinux/zfs/issues/3785#issuecomment-231685228>
zfsrincebrain commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-231686689>
zfsAndCycle commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-231688283>
zfschrisrd commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#issuecomment-231695928>
WormFoodrlaager, I just wanted to let you know, you were a huge help in getting my zfs array sorted out. You definitely got me on the right track, and now everything seems to be back up to snuff. Once the array was resilvered, and cleaned, I was able to remove the offending devices, and do my replace operation like normal.
stratactGrayShade: hey :)
GrayShadestratact: :D
bb0xhi guys
bb0xanyone using ganesha nfs server on top of ZFS ?
bb0xI'm looking for a way to scale NFS
DHEbeing a userspace filesystem it should be fine, right?
bb0xI understand it uses a library to communicate with ZFS, I've never used it
bb0xI'm looking for a way to scale out nfs mounts, or at least to have some HA
bb0xin case one of the nodes goes down, to be able to serve mounts from the other nodes, ideally without restarting autofs
bb0xusing a VIP or something similar
WormFoodI share out my zfs array over nfs, without problems. Works like any other file system.
WormFoodIn fact, I share it out over samba, and nfs.
DHEyou want 'zfs share' to interact with this NFS server?
bb0xDHE, yes
bb0xWormFood, I already have this setup with zfs and ext4 (actually I'm migrating from ext4 to zfs), since with ext4 I have to use rsync to scan files and sync them to a backup machine
bb0xwith zfs this is much easier since I have snapshots + compression :)
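A minimal sketch of the snapshot-plus-send workflow bb0x is describing, replacing the rsync sweep (dataset names, hosts and snapshot names are placeholders):
  zfs set compression=lz4 tank/data
  zfs snapshot tank/data@2016-07-11
  # first run is a full send; after that, send only the delta since the previous snapshot
  zfs send -i tank/data@2016-07-10 tank/data@2016-07-11 | ssh backup zfs receive -F backup/data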
bb0xmy problem is, with traditional NFS server, if the server goes down, the clients will be affected
bb0xsince I'm using intr,hard they will wait for the NFS server to be available
bb0xI'm looking for a way to handle this failover flawlessly, or at least without having to reboot clients sometimes
bb0xeven if I update the autofs configuration and restart it, the clients will still wait for the old server for connections already made by apps
bb0xso I want to avoid restart apps or reboot :)
bb0xon client side
bb0xmaybe using nfs-ganesha as a proxy is more appropriate
bb0xdefinitely I have to test it
ptx0bb0x: nfs id should be the same on both NFS servers
ptx0then failover with VIP should work fine
bb0xptx0, nfs id? where can I define that one?
bb0xor you are referring to exports ?
ptx0look up zMotion video by OVH guys
ptx0they describe this
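A sketch of the setup ptx0 is pointing at, assuming "nfs id" means the fsid export option (addresses, dataset and mount names are placeholders):
  # /etc/exports kept identical on both servers; a fixed fsid keeps client file handles
  # valid when the VIP moves (the same options can go in the dataset's sharenfs property
  # if 'zfs share' manages the export)
  /tank/export  192.168.10.0/24(rw,fsid=1001,no_subtree_check)
  # the service IP floats between nodes, e.g. managed by keepalived or pacemaker
  ip addr add 192.168.10.50/24 dev eth0
  # clients always mount the VIP, never a node's own address
  mount -t nfs 192.168.10.50:/tank/export /mnt/export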
zfsgutleib opened issue zfsonlinux/zfs#4839 - Kernel panic during zfs scrub <https://github.com/zfsonlinux/zfs/issues/4839>
Scott0_I'm having a problem where I have a remote and a local zfs filesystem, between which I use zfs send and receive to sync snapshots. I've been backing up VMs to a dataset and the backups change by 2GB a day, but the snapshot being created is 15GB a day
Scott0_is there any way to modify how snapshots are taken to ignore dates or something?
Scott0_the vm backup software uses reverse deltas and recreates a new full backup and pushes old deltas to incremental files
Scott0_which balloons a 2GB delta to 15GB of data in a snapshot
DHEa zvol is sent as a list of changed blocks. if a huge file is created and then deleted, the incremental might still pass all the modified blocks. (TRIM support can help with that)
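One way to see where the 15GB comes from before sending, as a sketch (dataset and snapshot names are placeholders): a dry-run incremental send prints the estimated stream size, and the written@ property shows how much was rewritten between snapshots.
  zfs send -nv -i tank/backups@monday tank/backups@tuesday
  zfs get written@monday tank/backups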
ncopahi, trying to build zfs on alpine linux with gcc-6
ncopaconfigure script is broken
ncopawith gcc 6
ncopahttps://dpaste.de/nncb
ncopa"bio has bi_iter" check fails due to -Werror=unused-but-set-variable
ncopaeither we need to set -Wno-error=unused-but-set-variable
ncopaexplicitly
ncopaor
ncopawe can add the __always_unused attribute
ncopato the test case
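The first of ncopa's two workarounds as a shell-level sketch (it is an assumption that the failing conftest honors user CFLAGS here; the cleaner fix is annotating the test variable with __always_unused in the configure check itself):
  ./configure CFLAGS='-Wno-error=unused-but-set-variable'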
zfstcaputi opened issue zfsonlinux/zfs#4840 - Raw Send Feature (Encrypted + compressed) <https://github.com/zfsonlinux/zfs/issues/4840>
DHEsomehow I don't like the idea of sending the MOS encryption information out of the pool, even though this sort of thing is exactly what encryption is designed to protect against.
ryaogotwf: I suggest filing an issue for behlendorf.
zfsdpquigl commented on pull request zfsonlinux/zfs#4768 - OpenZFS 6950 - ARC should cache compressed data by dpquigl <https://github.com/zfsonlinux/zfs/pull/4768#issuecomment-231801429>
gg10hello everyone, anyone using Encryption (pull request 4329)?... I get "invalid argument for this pool operation" when using tcaputi's master branch as it does not apply cleanly to the master branch any more
zfsbehlendorf commented on issue zfsonlinux/zfs#4773 - 'zfs list' takes unreasonable long time <https://github.com/zfsonlinux/zfs/issues/4773#issuecomment-231806821>
zfstuxoko commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-231815614>
zfsbehlendorf commented on issue zfsonlinux/zfs#4582 - PANIC: blkptr at ffff88080812d048 DVA 1 has invalid VDEV 1 <https://github.com/zfsonlinux/zfs/issues/4582#issuecomment-231818278>
zfsbehlendorf commented on issue zfsonlinux/zfs#4826 - Change a fix amount of disks which scrubs/replaces to a percentage IO-value <https://github.com/zfsonlinux/zfs/issues/4826#issuecomment-231823973>
zfsbehlendorf commented on issue zfsonlinux/zfs#4813 - Fails to compile with musl: zed_log.c <https://github.com/zfsonlinux/zfs/issues/4813#issuecomment-231825975>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70314224>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70315326>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70315587>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70315692>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70315780>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70316237>
zfstuxoko commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#discussion_r70316662>
zfstuxoko commented on pull request zfsonlinux/zfs#4837 - Prevent null dereferences when accessing dbuf kstat by dweeezil <https://github.com/zfsonlinux/zfs/pull/4837#issuecomment-231832663>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70324362>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70325232>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70326099>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70326694>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70326702>
FireSnakewhoa, new fonts on github
FireSnakei like this better
zfslorddoskias commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#discussion_r70327058>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#issuecomment-231851413>
zfsbehlendorf commented on issue zfsonlinux/zfs#4358 - the manual of zpool is incorrected regarding hot spare <https://github.com/zfsonlinux/zfs/issues/4358#issuecomment-231852232>
zfsbehlendorf commented on commit zfsonlinux/zfs@222ef5b625 - Vectorized fletcher_4 must be 64-bit aligned <https://github.com/zfsonlinux/zfs/commit/222ef5b6253e504c585b19408d9d24797b4064ad#commitcomment-18201825>
zfsbehlendorf commented on issue zfsonlinux/zfs#4701 - zfs hangs on suspend/resume <https://github.com/zfsonlinux/zfs/issues/4701#issuecomment-231853701>
zfsbehlendorf commented on issue zfsonlinux/zfs#3461 - zpool commands block when a disk goes missing / pool suspends <https://github.com/zfsonlinux/zfs/issues/3461#issuecomment-231856056>
zfsbehlendorf commented on commit zfsonlinux/zfs@f6fb7651a0 - Use 'git describe' for working builds <https://github.com/zfsonlinux/zfs/commit/f6fb7651a0d05b357dc179cc4853263ce15da6ed#commitcomment-18202010>
zfstuxoko commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#issuecomment-231858057>
zfsbehlendorf commented on issue zfsonlinux/zfs#4789 - Enhancement: Port AVX2 Fletcher-4 algorithm to SSSE3 <https://github.com/zfsonlinux/zfs/issues/4789#issuecomment-231858315>
zfsbehlendorf commented on issue zfsonlinux/zfs#2172 - Note the requirements for a "in-tree kernel builtin module building" for ZFS + SPL <https://github.com/zfsonlinux/zfs/issues/2172#issuecomment-231859105>
twnqxbehlendorf is a spammer.
twnqx;)
zfsironMann commented on issue zfsonlinux/zfs#4789 - Enhancement: Port AVX2 Fletcher-4 algorithm to SSSE3 <https://github.com/zfsonlinux/zfs/issues/4789#issuecomment-231861640>
bundernew fonts? can't say i've noticed a change
zfslorddoskias commented on pull request zfsonlinux/zfs#4685 - [RFC] Remove znode's z_uid/z_gid member by lorddoskias <https://github.com/zfsonlinux/zfs/pull/4685#issuecomment-231866577>
DeHackEdtwnqx: he runs the site, it's forgiven
zfsbehlendorf commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-231874882>
zfsironMann commented on pull request zfsonlinux/zfs#4793 - Implementation of SSE optimized Fletcher-4 by tj90241 <https://github.com/zfsonlinux/zfs/pull/4793#discussion_r70344539>
zfskernelOfTruth commented on pull request zfsonlinux/zfs#3441 - ABD: linear/scatter dual typed buffer for ARC (ver 2) by tuxoko <https://github.com/zfsonlinux/zfs/pull/3441#issuecomment-231878236>
ptx0if overcommit is disabled and something like FUSE is using fork(), which fails eventually despite having free space, is this because the kernel calculates as if all parent process pages were fully copied / modified?
ptx0s/space/memory/
Melianptx0 meant: if overcommit is disabled and something like FUSE is using fork(), which fails eventually despite having free memory, is this because the kernel calculates as if all parent process pages were fully copied / modified?
zfsironMann commented on pull request zfsonlinux/zfs#4793 - Implementation of SSE optimized Fletcher-4 by tj90241 <https://github.com/zfsonlinux/zfs/pull/4793#discussion_r70345349>
bunderwhats with that guy, i never see him talk, its all bot spam
ptx0trying to run 30 FUSE mountpoints and it says out of memory, with overcommit disabled, even though memory is at 3G out of 8G used
ptx0bunder: I've seen him speak but it's rare, I also asked him to turn off the scripts but no reply
zfsbehlendorf commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#issuecomment-231879369>
ptx0with overcommit enabled I can set up 50+ FUSE mountpoints without memory allocation failure but it says 7.5G out of 8G used
ptx0not a whole lot of room left for ARC
PMTptx0: is the FUSE port being maintained?
PMTor are you using some FUSE thing atop kernel ZoL?
zfsironMann commented on pull request zfsonlinux/zfs#4793 - Implementation of SSE optimized Fletcher-4 by tj90241 <https://github.com/zfsonlinux/zfs/pull/4793#discussion_r70346438>
ptx0it's non-ZFS FUSE
ptx0though I've got ZFS running on top of the FUSE "block device"
PMTI can't imagine that's remotely fast
ptx0it's not supposed to be, it's for backup or for using s3 as an archive
DeHackEdfuse makes block devices?
PMThave you traced down to see what is actually failing an allocation?
DeHackEd(I know it makes char devices...)
PMTI thought that was pronounced CUSE
ptx0DeHackEd: s3backer creates a file that we link into /dev/ basically
PMTah, no, CUSE is implemented on FUSE, m/b
DeHackEdah, so it's not really a block device but we pretend it is
ptx0yes
ptx0i'm pretty sure zfs thinks of it as a file vdev though
PMTptx0: can you try running down exactly where an allocation failure is happening?
PMToh, it definitely does
ptx0I'll have to do that tomorrow I think, I was just wondering what the solution is for fixing FUSE's memory allocation
ptx0no reason that I can see for 50 mountpoints to consume 7G ram
PMTthat sounds like a large-scope problem, if FUSE is indeed the problem
DeHackEdptx0: that's the catch. It'd be better if it were a real block device. but I guess that's what the loop driver is for.
PMTptx0: a blind guess might be that each mountpoint ends up as a distinct FUSE process with a constant-size memory allocation
PMT(or a constant starting size, or s/t)
ptx0yeah that's what I was afraid of, which means we'll have to just boost the min requirements for this kind of workload
PMTit's possible that it does something like touching each page to avoid similar issues to what happens if you try to allocate out of a sparse file and are out of space
ptx0that's bad though, so very bad
PMTbut i'm mostly blindly shooting my mouth off without facts at this point
PMTptx0: bad for your use case, good for not having unexpected failures for them probably
ptx0I have trouble finding anyone else experiencing this because typically people stay far away from FUSE, they don't try to run 25+ at once
pcdPMT: I have a proto fix for 7172 (I think was the number)
DeHackEdone process can handle multiple filesystems, if there are limits it's from the FUSE library itself
PMTpcd: 7176 i think was the number, does this one work? :)
ptx0it's saying fork: cannot allocate memory
pcdPMT: yeah, that's the one. And yeah, machine boots and my simple test case no longer does the wrong thing
pcdPMT: still have to run the full test suite, but good start at least
ptx0what kinda tracer do I use to find more info?
PMTthat is a promising start
PMTptx0: what is? the mount command?
ptx0the s3backer process executes FUSE, I think that's where it fails
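A sketch of how one might confirm the strict-accounting theory before blaming FUSE (paths and the s3backer invocation are placeholders):
  cat /proc/sys/vm/overcommit_memory                   # 2 = strict accounting, no overcommit
  grep -E 'CommitLimit|Committed_AS' /proc/meminfo     # fork is charged against this, not against RSS
  strace -f -e trace=clone,fork,vfork s3backer mybucket /mnt/s3-0   # see which call returns ENOMEM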
PMTpcd: i might actually need to finish cleaning up my dumb script and post it then
pcdPMT: indeed, I will post it for people to look at once it passes tests, though I probably won't submit the PR until I get it past internal review
PMTthough, the downside of my dumb script is that it requires patching zdb to use, since you can't convince zdb to print birth zero holes under any flags currently
ptx0I realise I have almost no idea how FUSE actually works..
PMTptx0: witchcraft(tm)
pcdPMT: indeed. it's also a bit unfortunate that you need an old snapshot to compare to
pcdbetter than nothing, to be sure
PMTpcd: i mean, given the nature of the flaw, if you don't have an older snapshot to compare to, the bug can't manifest, since birth zero hole with no prior snapshots is equivalent to a birth (first snapshot) hole anyway
PMT(I _think_)
pcdPMT: well, the birth times will still be wrong, it just won't matter because there's no snapshot to do a send from
pcdbut if you're trying to detect if this happened to you in the past at any point, you can't tell
PMTthat's true
pcdI guess in that case you would just compare the backups to the original and see if you got got
PMTi
PMTi mean, if you have originals, you could presumably do the same dance
pcdwell, the originals might only have some of the snapshots
pcdusually people will only keep the last few around, or delete the daily/weeklies after a few months
PMTyeah
zfsironMann commented on pull request zfsonlinux/zfs#4793 - Implementation of SSE optimized Fletcher-4 by tj90241 <https://github.com/zfsonlinux/zfs/pull/4793#issuecomment-231883781>
ptx0I hope I never have to re-enable overcommit though
ptx0too many support tickets
PMTthis is why i'm in favor of a combination of the tunable and a feature flag to say "i promise we fixed the bugs we know about in this after this txg"
DeHackEdall things considered this is probably the sensible choice
DeHackEdbesides a public flogging
ptx0feature@holes_plugged
DeHackEdhahaha
PMTpcd: actually, heh, if we could rewrite the enabled_txg on hole_birth to the future, it would be equivalent too
ptx0ZFS: DP edition
PMTfuture == "when we'd want hole_birth_fix to be enabled"
pcdPMT: yeah, but you need another bit so you know you've done that
PMTpcd: yeah, otherwise you could just import the pool on a flawed implementation and break it again
PMTwomp womp
pcdyeah
pcdmahrens had some ideas for how best to handle it that he'll post at some point, I was chatting with him about it earlier
PMTi really like hole_plugged as a name for the flag, but it doesn't extend itself well if it ends up being the first of many holes in a dam to be plugged
pcdyeah, it would be kind of embarrassing to have hole_plugged{,2,3,4,ohgodreally}
PMTi was considering a concept of sort of hidden versioned flags, for annotating shit like this without necessarily flooding people with hole_birth_1,2,...
pcdyeah
ptx0by that point we'll just rename hole_birth to holy_fucked
PMTsince this will presumably not be the last feature that has some bug that needs an annotation to know to work around
PMTptx0: holey_fuck?
ptx0makes you wonder how this feature made it out of staging
rlaagerhole_birth2 might be good
pcdwe don't do a lot of hole punching, unfortunately, and it didn't get thought of in the design stage
PMTptx0: as pcd said yesterday or so when i expressed surprise these didn't crop up in testing, the edges here aren't in their common use cases
ptx0that's fair but I mean, the feature supports it :P
rlaagerSo is there some easy way to tell if your data is affected?
ptx0rlaager: checksum verification
ptx0rsync --checksum
PMTrlaager: outside of ZFS checksum verification
DeHackEdrlaager: any holes in the dataset put it at risk. disabling compression helps mitigate some more dangerous aspects
ptx0tbh I've been doing rsync checksum fixes on send/recv since the last corruption cropped up
PMTat this rate, we'll have a new feature flag eventually
ptx0I haven't seen this issue using ZVOL
PMThole_unbirth: "NEVER AGAIN"
rlaagerI'll take that as "no". So my approach will be: 1) Wait for you awesome folks to fix the bug. 2) Prod the Ubuntu devs to ship the fix. 3) Re-send ALL of my data everywhere.
PMTrlaager: there is if you have two snapshots and give me a few minutes to make my script usable for other people
pcdyeah, it's hard to tell if you've been had without manually looking at zdb
ptx0pcd: you just have to compare all the files in all the snapshots on every server you send to
PMTwhat ptx0 said.
ptx0it takes a long fucking time
pcdptx0: yeah, that will work too.
rlaagerYeah, hence "no".
pcdbut yeah, like you said
PMTptx0: it's not that bad to do find over all of it, as long as you don't have heavy file churn, tbh
ptx0but with ARC it's pretty fast :D
ptx0I'm lucky that my system does roughly the same thing every 10 seconds so there's not much churn, no
PMTrlaager: if it were me, and i were in charge of large datasets with lots of snapshots and backups, i'd probably destroy them on the backup site, apply the patch to add an ignore_hole_birth tunable, and resend
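A sketch of the out-of-band check ptx0 and PMT describe, assuming the same snapshot exists on both sides and the .zfs control directory is reachable (dataset paths and snapshot names are placeholders):
  # dry run, checksum every file; any itemized output means source and received copy differ
  rsync -rcn --itemize-changes /tank/data/.zfs/snapshot/daily-2016-07-11/ /backup/data/.zfs/snapshot/daily-2016-07-11/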
ptx0"but if you were me, then I'd be you, and I'd use YOUR body to get to the top! you can't stop me no matter who you are!" -- ace ventura
PMTi saw an interview a while ago about how people didn't think that film was going to even get finished or break even at the box office, much less, uh, do as well as it did
ptx0luckily there were a bunch of braindead morons such as myself who gave them money
ptx0I'd probably go see a third if they made one.
ptx0(it's pronounced 'turd' in the UK)
PMTptx0: they apparently made a direct-to-video sequel not starring carrey in 2009
ptx0I would need it to have carrey in it
PMTyeah
ptx0ACTION starts a gofundme
ptx0ACTION raises $12m in five minutes
PMTi don't think that's how this works
ptx0tell that to my 12 mil
PMTis "a new jim carrey film" the new "make me a sandwich" on gofundme
ptx0mayhaps. I don't know about that reference.
PMTevidently i was thinking of potato salad
PMThttp://www.columbusmonthly.com/content/stories/2014/09/potato-salad-guy-and-the-prank-that-raised-55000.html
zfskingneutron commented on issue zfsonlinux/zfs#4831 - Very high load average, zpool commands hang, zfs list runs, IO appears normal <https://github.com/zfsonlinux/zfs/issues/4831#issuecomment-231888351>
ptx0funny
ptx0I'm sure mine will go just as well even though I don't have communication lines open with James directly (I call him James)
PMTif i have a single-disk pool that had the device in question fall off the bus abruptly, is there any way to remove the pool from the system? I can't destroy it, as that reports pool IO suspended, and I can't export it, as that just hangs forever.
ptx0you have to do zpool clear or reconnect the disk
PMTptx0: after a clear it went from ONLINE to UNAVAIL, but I still can't export or destroy the pool.
zfskingneutron commented on issue zfsonlinux/zfs#4839 - Kernel panic during zfs scrub <https://github.com/zfsonlinux/zfs/issues/4839#issuecomment-231889709>
bunderreboot time?
PMTwhat is this, windows 95
ptx0yes. same codebase.
PMTwelp, time to move to btrfs
ptx0on netbsd?
PMTno, on windows, clearly
PMTsadly it's only single-disk on windows atm
ptx0windows + coLinux + btrfs
PMTyou don't even need coLinux
PMTthough that's a name that takes me back
ptx0wow wtf
ptx0winbtrfs is a reimplementation from scratch
PMTyup
ptx0because the original wasn't dangerous enough
ptx0windows has a history of being worse than linux, can't let that go now
zfstuxoko commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-231890760>
PMTptx0: psh, windows doesn't have an xfs driver yet, it can't be that bad
bunderi still use fat32 for maxilol performawhoosh
zfsgotwf opened issue zfsonlinux/zfs#4841 - Please add link to "Getting Started" wiki page for zol on voidlinux <https://github.com/zfsonlinux/zfs/issues/4841>
zfsrlaager commented on issue zfsonlinux/zfs#4841 - Please add link to "Getting Started" wiki page for zol on voidlinux <https://github.com/zfsonlinux/zfs/issues/4841#issuecomment-231899509>
zfsbehlendorf pushed to master at zfsonlinux/zfs - Comparing 5c27b29605...590c9a0994 <https://github.com/zfsonlinux/zfs/compare/5c27b29605...590c9a0994>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4820 - Fix compiling with -O0 opt level by ironMann <https://github.com/zfsonlinux/zfs/pull/4820#issuecomment-231900739>
zfstuxoko commented on pull request zfsonlinux/zfs#4827 - xattr dir doesn't get purged during iput by tuxoko <https://github.com/zfsonlinux/zfs/pull/4827#issuecomment-231904646>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4828 - Fix get_zfs_sb race and misc fixes by tuxoko <https://github.com/zfsonlinux/zfs/pull/4828#discussion_r70361531>
zfsbehlendorf commented on pull request zfsonlinux/zfs#4828 - Fix get_zfs_sb race and misc fixes by tuxoko <https://github.com/zfsonlinux/zfs/pull/4828#issuecomment-231906518>
zfstuxoko commented on pull request zfsonlinux/zfs#4838 - Kill znode->z_links field by chrisrd <https://github.com/zfsonlinux/zfs/pull/4838#discussion_r70361699>
ChibaPetCan someone recommend the sanest way to do local storage pools for KVM using ZFS, where I don't have a whole pool to devote? Is letting libvirt drop files in a dedicated dataset reasonable?
ChibaPetI'm doing zvol-per-VM now, but this is suboptimal as the software seems to have no concept of one-device-per-VM.
ChibaPetI was hoping to be able to control snapshots per-VM, but ... well. I guess I could divvy up different datasets into groups, potentially with one dataset per vm.
zfstuxoko commented on pull request zfsonlinux/zfs#4828 - Fix get_zfs_sb race and misc fixes by tuxoko <https://github.com/zfsonlinux/zfs/pull/4828#issuecomment-231909797>
ptx0hm
ptx0I do one ZVOL per VM but I use my own management layers
ChibaPetI tried that, but libvirt didn't present me a clean way to do that.
ChibaPetMaybe I can accomplish it if I use a different toolset.
ptx0yeah that's why I forewent libvirt management of the storage
ptx0they want things done very.. stupidly.. at least the way virt-manager exposes things.
ChibaPetYes.
ChibaPetI'm seeing that.
ChibaPetI'll look at different management options. I've only just spun this up.
ChibaPetFirst VM is running NetBSD 7.0.1 right now. :P Blast from the past.
ChibaPetptx0: Can I bother you for a paste of one of your hard disk stanzas from /etc/libvirt/qemu?
ChibaPetI want to see what it *should* look like.
ptx0I don't actually use one
ptx0my VMs boot via PXE to iSCSI
ChibaPetFair enough.
ptx0it'll re-clone their OS ZVOL depending on DB configuration received from the central control panel
ptx0so you can set a VM to locked or unlocked, can't be overridden by the person sitting "at the terminal"
ChibaPetCool.
ptx0the OS is a single template ZVOL and it's reprovisioned at every boot, it has an initrd module to set up overlayfs root using a 2nd (or 3rd or 4th) zvol that's assigned for persistence
ChibaPetSounds slick.
ptx0so when I update my gentoo VMs I just log into one, compile, set to snapshot using the control panel and it actually creates the snap on reboot, for consistency
ptx0reboot whatever other VMs need updates, voila
ChibaPetVery, very cool.
ptx0couldn't do it without overlayfs
ptx0well, I'd have to use aufs I guess, but that's slower
ChibaPetI loved that in NetBSD, but I haven't used one under Linux.
ptx0not upstream etc either
ptx0I was looking into ansible and puppet but realized quickly they don't really handle *every* layer of automagic-ness - automation?
ptx0I can't reprovision the OS at every boot from a single template with just puppet/ansible because they need a pre-configured agent, hostname, ssh keys, whatever
ptx0need that overlayfs for /etc at a minimum
ptx0but once i started looking into overlayfs for /etc I realized, why not just do it for all of / and be done with it :D
ptx0plan9 was ahead of its time
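Roughly what the initrd step ptx0 describes might look like, as a sketch (zvol and mount point names are invented):
  mount -o ro /dev/zvol/tank/template-os /mnt/lower      # shared read-only OS template
  mount /dev/zvol/tank/vm1-persist /mnt/persist          # per-VM writable zvol
  mkdir -p /mnt/persist/upper /mnt/persist/work
  mount -t overlay overlay -o lowerdir=/mnt/lower,upperdir=/mnt/persist/upper,workdir=/mnt/persist/work /newroot   # handed to switch_root as /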
ChibaPetheh
ChibaPethttps://libvirt.org/formatdomain.html#elementsDisks suggests a format that might work - just need to figure out how to spin up VMs with virsh or similar.
rkeeneptx0, I use overlayfs for / -- unfortunately it's broken in recent kernels :-(
ptx0I'm on 4.6
ptx0what's your version?
ptx0ChibaPet: write an xml file and then 'virsh define <path>'
ptx0ChibaPet: 'EDITOR=nano virsh edit <vm>'
ChibaPetnano!
ptx0virsh destroy <vm>; virsh start <vm>
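Since ChibaPet asked earlier what a disk stanza should look like: a sketch of a zvol-backed one in libvirt domain XML (zvol path and target are made up); it goes inside the <devices> section via 'virsh edit <vm>' or in the XML file passed to 'virsh define':
  <disk type='block' device='disk'>
    <driver name='qemu' type='raw'/>
    <source dev='/dev/zvol/tank/vm/vm1'/>
    <target dev='vda' bus='virtio'/>
  </disk>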
ChibaPetDoes it start me off with a template? Literally haven't used KVM/libvirt before today.
ptx0[Kernel: 4.6.0-gentoo] [Uptime: 16 days, 7:25:02] [CPU: QEMU Virtual CPU version 2.1.2 3.4 GHz] [Load average: 0.67 0.48 0.43] [RAM: 359 MB of 946 MB used] [Swap: 250 MB of 2 GB used] [Disks: 190 GB of 200 GB free] [Network: 241 GB received, 1620 GB transmitted] [Audio: ] [Video: Cirrus Logic GD 5446]
ChibaPetI'll spend some time studying anyway.
ptx0that's the one, 4.6.0-gentoo but I have 4.6.3 in my update
ptx0hm no
ptx0that'd be a different command
ptx0virt-create ?
ChibaPethm
ChibaPetI'll read through a tutorial anyway. It looks like I can specify the block device if I do it this way.
ptx0virt-install -r 1024 --accelerate -n Fedora14 -f /path/to/guest.img --cdrom Fedora-14-x86_64-Live.iso
ptx01024 = mem (mb)
ptx0you can do without -f and then virsh edit to add XML manually
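Alternatively, virt-install can be pointed straight at a zvol instead of a file image, as a sketch (VM name, zvol path and ISO are placeholders):
  virt-install -r 1024 --accelerate -n vm1 --disk path=/dev/zvol/tank/vm/vm1,format=raw --cdrom /iso/install.iso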
ChibaPetkk, ty
WormFoodI've replaced all of the hard drives in my array with bigger drives. What is the proper way to expand the size of the array, to use all of the space available? ZFS is using whole disks.
DeHackEdautoexpand property? or throw around `zpool online -e`
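The two options DeHackEd mentions, as a sketch (pool and device names are placeholders):
  zpool set autoexpand=on tank        # grow automatically when replacement devices are larger
  zpool online -e tank sda sdb sdc    # or expand each already-replaced device in place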
zfsjaw3000 opened issue zfsonlinux/zfs#4842 - Error on copy using Lubuntu <https://github.com/zfsonlinux/zfs/issues/4842>
WormFoodDeHackEd, thanks. I wasn't aware of the online -e option...that's what I used, and it worked perfectly.
zfsahrens opened issue zfsonlinux/zfs#4843 - zfs promote .../%recv should be an error <https://github.com/zfsonlinux/zfs/issues/4843>