bunderKernel 2.6+ and 3.2+ - Key Differences and When to Use Each
bunderlolwut
ptx0did you find that filed under "Best of 2012"?
bunderno, it's one of the chapters on linux academy's lpic level 2 exam prep course
bunderthe more i think about going back to school, the more i think these courses are designed to turn people into buzzword spouters
ptx0that's why I left
ptx0better off learning by doing
PMTbunder: that sure is a fascinating loaded question
bunderwhich
bunderthe kernel version thing?
flinghaha
bunderwith 4.x available, i don't see why people can't use that... and 2.6 isn't even supported anymore by anyone
bunderlet alone those kernel bugs in 2.6.18 and/or 2.6.32
GrayShadebunder: i think centos 6 is still pretty popular
PMTit's true, it's pretty popular, particularly among centos users who violently hate systemd
bunderapparently 2.6.32 went EOL in march, but you're right centos 6 does still use it, and their EOL is 2020?
PMTcorrect.
bundersucks to be them
PMTI'm not advocating for running old kernels without a good use case, but it still has a very large install base.
PMTI mean, it (arguably) sucks to be anyone who wants to develop things that are usable on Centos 6.
GrayShadeand, at least a while ago, people didn't want to move from really old kernels because of performance regressions in the new versions. i think postgres was especially affected
apusbunder: did you by chance figure out what needs to be done regarding this hole_birth mess? can the pool remain or does it need to be redone?
bunderno, since i don't have it enabled, i can't test fully
bunderi still theorize that you would have to delete your old snaps though
apuson the main or backup pool?
bunderthe backup pool
GrayShadeapus: the source pool is corrupted. PMT, i think, is working on patches that 1. work around the corruption when sending (but that's not fully working) and 2. make the receive side ignore holes
GrayShadeand i think you'll still have to recreate your destination pool
apusGrayShade: can't the source pool be fixed if there is enough space? that way only files which are sparse would be recreated ok
GrayShadeapus: once every bug is fixed (and it seems there are still a couple of them), you should be able to copy the affected files
apusbut the file integrity of those sparse files on the source pool is okay, right?
bunderPMT: https://www.exploit-db.com/exploits/25444/ i would hope they patched this one at least
GrayShadeyes, you can read them. the problem is with the metadata used by send
apuscan't a script be written that fixes the metadata? just recreate it?
GrayShadeyou'll have to make a copy of the file
GrayShadeand that's after https://www.illumos.org/issues/7176 gets fixed
apusso writing a script that finds affected files, copies each one and renames the copy to the original name would work
GrayShadeit should work, but not right now
apusany estimate how long this will take till the remaining bugs are fixed and a new version is out? 2-4 months?
GrayShadenot from me :)
GrayShadebut ZoL releases seem to come out every two or three months or so
apusthis is a huge mess if you ask me
PMTapus: sort of, but it'd break your old snapshots
PMT(e.g. you'd either have to delete them or the file would still be munged there even after a rename)
bunderi don't see any docs that say hole birth should even be enabled, the arch docs say grub can boot off a pool with it turned on, but that's about it
apusthat was exactly why i asked if a script could be written, that fixes the metadata in-place
PMTalso I don't know of a good way to find those affected files short of generating a send stream and comparing the results
PMTapus: if we could programmatically detect the problem in-place, we could also silently workaround it
apusi guess this isn't possible?
PMTnobody has currently come up with a manner to do it, no.
PMTi am not going to claim impossible, because that's a stronger claim, but...nobody's got a good option.
apuswell for me nothing has changed. i've known this issue existed for a couple of months now and have been using rsync only since. getting on my nerves, but what can one do?!
PMTapus: I mean, a patch to ignore hole_birth data for send is trivial, if you wanted to do that.
bunderstart your pool over with it turned off
apusbunder: a discussion in this channel going back a couple of weeks led to someone saying that it doesn't matter if it's on or off, the problem is still there. dunno if that was true, don't know who said it anymore either
PMThttps://github.com/zfsonlinux/zfs/pull/4833 is even a pull request for a setting to do that for ZoL
PMTapus: if hole_birth is off, I don't think that's true, because the metadata that is incorrect isn't even generated (AIUI)
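If that tunable lands as a regular zfs module parameter, toggling it would look like the sketch below; the parameter name here is an assumption based on that pull request, so confirm it with modinfo on your build.
    modinfo zfs | grep -i hole_birth                                         # check whether the build exposes the tunable (name assumed from PR 4833)
    echo 1 > /sys/module/zfs/parameters/ignore_hole_birth                    # turn it on for the running module
    echo "options zfs ignore_hole_birth=1" >> /etc/modprobe.d/zfs.conf       # or make it persistent across module reloads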
bunderi ran that one perl script this morning on a cloned snap of mine, came back fine, but again i have hole birth turned off
PMT"perl script"?
bunderurm
bunderfind . -type f ! -size 0 -exec perl -le 'for(@ARGV){open(A,"<",$_)or next;seek A,0,4;$p=tell A;seek A,0,2;print if$p!=tell A;close A}' {} +
bunderthe one from stack exchange
PMTI think that just finds files with holes in them, which doesn't necessarily mean they have issues or not.
PMTBut I'm not certain, as I haven't tried it, and haven't spent more than a brief interval trying to interpret the perl.
GrayShadePMT: yes, that's what it does
GrayShadelet's call it conservative :)
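For readability, the same one-liner expanded with comments; behaviour is unchanged, and as noted above it only flags files containing holes (SEEK_HOLE support depends on the kernel and filesystem):
    find . -type f ! -size 0 -exec perl -le '
      for (@ARGV) {
          open(A, "<", $_) or next;
          seek A, 0, 4;            # whence 4 = SEEK_HOLE: offset of the first hole
          $p = tell A;
          seek A, 0, 2;            # whence 2 = SEEK_END: end of file
          print if $p != tell A;   # a hole before EOF means the file is sparse
          close A;
      }' {} +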
BoobuigiIs the
Boobuigi... whoops. What I wanted to say: I have 16GB RAM with a single raidz2 spanning 8 6TB disks... Before building this, I had presumed that I would need more RAM. Everything on the internet told me so, and everything on the internet seems to have been wrong.
PMTBoobuigi: I mean, if you're not using dedup, more RAM is better, but that's not the end of the world.
PMT(footnote: don't use dedup.)
BoobuigiHehe. So nothing obvious is amiss?
BoobuigiI almost feel like I did something wrong, ZFS uses so little RAM. It's totally different than what I'd read.
PMTI mean, performance might be less than it could be if you had more RAM, depending on your workload, phase of the moon, etc, but it's not going to kill you.
BoobuigiRock on.
PMTAlso, RAM usage will go up as the pool gets more used, and what's available will get used for caching, so I assure you, it will not generally go to waste.
BoobuigiThanks. I will keep an eye on it.
Boobuigi... Though to be honest, I'm not sure how. The only process I see is zed, and it's always at 0% RAM.
PMTBoobuigi: arc_summary.py is one of your friends. :)
bunderdid you set your arc max too low?
PMTif they didn't set arc_max at all, probably not, I'd guess the pool is just not much used yet.
bunderif its unset, the default is half memory i think
Boobuigibunder: That is correct on both counts.
bunderi have mine set to 16 of 24g, with an average 2gb workload and some extra room for package upgrades
bundertakes me about a week to get a useful arc, don't ask about l2arc because mine is useless heh
bunderthen again i'm not a heavy user
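For reference, the ARC cap bunder mentions is the zfs_arc_max module parameter, in bytes; a sketch of checking and setting it (16 GiB here, matching the example above):
    cat /sys/module/zfs/parameters/zfs_arc_max                                       # 0 means the default, roughly half of RAM
    echo $((16 * 1024**3)) > /sys/module/zfs/parameters/zfs_arc_max                  # cap ARC at 16 GiB for the running module
    echo "options zfs zfs_arc_max=$((16 * 1024**3))" >> /etc/modprobe.d/zfs.conf     # persist across reboots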
PMTsomething about persistent l2arc goes here. :)
bunderi do feel like writing some sort of dropbox/pastebin/imgur combo though, if i actually finish and use it, i might actually get my l2arc working
Boobuigiarc_summary.py is great. It's kind of weird they left the py extension, though.
PMTBoobuigi: I suspect it was initially referred to by that name to distinguish from arc_summary.pl, and then it stuck
PMTpcd: I feel like it should be conceptually possible to write something to notice the discrepancy between the hole_birth information and the actual object metadata for a given file across two snapshots, but I'm not immediately convinced. Does this sound reasonable?
ray13so when I get the msg: errors: Permanent errors have been detected in the following files:
ray13and have the file (from backup) do i just copy it over?
PMTray13: I'd probably rm -f the file then cp over it, but yes.
PMT(I'm just being paranoid, cp should be equivalent.)
ray13cool thx
ray13so doing an md5sum on the bad file.. gives me i/o error
ray13but rm -rf did work
PMTray13: zfs is going to return EIO on any regions of the file that are uncorrectably errored
PMTso getting IO error out from reading is kind of expected
ray13ok.. so with it just deleted.. it still doesn't like that
PMTray13: "doesn't like that" meaning what, it says permanent errors hvae been detected in the following files: 0x21398471234 or something like that
ray13yeah
ray13it says tank:<0x6ed6b>
ray13but i just rsync'ed the file and it's still saying tank:<0x6ed6b>
ray13PMT: does it take a rescrub to now clear it?
ray13scrub started..
PMTray13: I'd probably try zpool clear before scrub. it might also be unhappy if you have any snapshots with the affected file.
ray13no snapshots
ray13but I do have some checksum errors in my zpool status
ray13but I can clear those during and after the scrub right?
PMTclearing won't restart the scrub, no
ray13right, so these: NAME STATE READ WRITE CKSUM
ray13 tank ONLINE 0 0 6
ray13 raidz1-0 ONLINE 0 0 12
PMTray13: please use pastebin or similar, not pasting directly into IRC
PMTbut yes, those
ray13and I just cleared it.
ray13so I'll report back in 10 hrs. :-D
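The whole exchange above boils down to a short sequence; the file paths here are purely illustrative:
    zpool status -v tank                    # lists files with permanent errors
    rm -f /tank/data/damaged.file           # drop the uncorrectable copy
    cp /backup/damaged.file /tank/data/     # restore it from backup
    zpool clear tank                        # reset the READ/WRITE/CKSUM counters
    zpool scrub tank                        # re-verify; the error list should clear afterwards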
bunderscrub only works on portions of the disk with data on them, it should come back clean if you deleted the files/snaps
DeHackEdmaybe, but I'm worried that might be metadata corruption and suggests potential leaked space
bunderyeah metadata:0 is scary
PMTwas metadata:0 somewhere in the pasted output i didn't see
DeHackEdno, I'm taking an educated guess
DeHackEd<ray13> it says tank:<0x6ed6b> tells me ZFS isn't sure of the filename, and the fact that the pool has 6 checksum errors while the raidz has 12 suggests copies=2 failed to repair things
PMTthe reason it's not sure of the filename is that he deleted the file.
MarisaKirisamein my case I have an error in <0xe5>:<0x51760>
MarisaKirisamethat won't go away
MarisaKirisameit's from a damaged snapshot I deleted
DeHackEdMarisaKirisame: it takes 2 scrubs to eliminate the permanent data errors messages. you can simply cancel it after starting to make it count
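The commands for that, if the two-scrub trick applies to your case (pool name illustrative):
    zpool scrub tank       # start a scrub so the error log rolls over
    zpool scrub -s tank    # -s cancels it once it has started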
MarisaKirisameok
MarisaKirisameoh, it's gone now
MarisaKirisamewould have been annoying if I had to recreate the pool from backups
MarisaKirisame395G over SATA 2... *shudders*
bundercan someone double check my math? 41 blurays is like 2tb?
MarisaKirisame1TB according to wolframalpha
MarisaKirisame2.1 if they're dual-layer
MarisaKirisameso yes, you're correct
bunderah
PMTbunder: depends what kind, there's a bunch of blu-ray types
bundersomehow i don't think netflix is gonna let me borrow 41 at once
MarisaKirisamehaha
stratactIs it possible to import a FreeBSD zpool into Linux?
bunderbah, even worse, my laptop doesnt have a bluray drive, gah i thought it did
bunderstratact: sure
bundermight be read only though, if it has a feature that linux doesn't support yet
stratactI suppose the only notable feature that I'm aware of not being available in Linux is TRIM support
bunderthat's not a zfs thing though is it?
bunderi thought trim was just kernel disk stuff
stratactIt's not tied to it. It's just something additional. I suppose comparing zpool versions is more important. What's the latest zpool version in ZoL?
PMTstratact: "version" doesn't make sense in quite some time, since feature flags became the way of delineating things
PMT*hasn't made sense
bunderi see a pull request to add it, but no code specifying it as a feature eg: feature@large_blocks
PMTTRIM support doesn't require a feature flag since it's not a modification of the on-disk format of ZFS, it's just the FS telling the underlying storage "I don't care about this range of blocks any more" versus just leaving them there or writing zeroes or w/e
FireSnakehi guys, can someone try to reproduce #4832? steps to reproduce https://github.com/zfsonlinux/zfs/issues/4832#issuecomment-231478290
zfs[zfs] #4832 - reproductible bug: steps to trigger hang of a pool <https://github.com/zfsonlinux/zfs/issues/4832>
stratactI just want to be able to reuse my existing zpool on FreeBSD which I use for /home when I migrate back to using Linux. I can live without TRIM. I just don't want to have to migrate the data to another drive because it's more work that way.
stratact(Plus I'm attached to my raid 1+0 configuration)
bunderdo you plan on going back to bsd with that pool at some point? i'm not sure what the effect of ignoring trim until linux gets it would be
bunderi'm just thinking out loud, don't mind me
snajpa-none
snajpa-:)
bunderwell there we go then heh
snajpa-create the pool with minimal common set of feature flags
snajpa-and don't use ACLs which one of the systems wouldn't understand
snajpa-second part might be more tricky, I have no idea, the best approach would be to test it
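One way to do what snajpa- describes is to create the pool with every feature disabled (-d) and then enable only flags both operating systems support; the feature list and device names below are illustrative, not a recommendation:
    zpool create -d \
        -o feature@async_destroy=enabled \
        -o feature@empty_bpobj=enabled \
        -o feature@lz4_compress=enabled \
        home mirror sda sdb mirror sdc sdd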
PMTbunder: it doesn't matter, it's "just" more work for the underlying storage when you rewrite the blocks, is all
snajpa-I mean, xattr=sa is ZoL only thing
snajpa-IIRC
snajpa-and the likes
bunderi can't think of any other hitches
PMTif FBSD created the pool with multi_vdev_crash_dump, Linux might be annoyed
PMTI've not had occasion to test this since that happened, though
stratacthow do I find out?
PMTstratact: zpool get all [pool] | grep feature (under FBSD)
stratact"home feature@multi_vdev_crash_dump enabled local"
bunderwouldn't that only be a problem on root pools?
PMTstratact: you might be able to set it to disabled, and i'd suggest trying
PMToh, right, no, you can't
PMTbut as long as it's only enabled but not active, it "should" be able to import r/w
PMT(only in quotes because I haven't personally tested this that I can recall)
stratactwould it be possible to disable it if the zpool were exported?
stratactbtw, everyone, thanks for the responses.
PMTno, being exported wouldn't help.
stratactThis does not look good for me: https://github.com/zfsonlinux/zfs/issues/2438#issuecomment-47590720
PMTstratact: that's active, not enabled.
perfinionthatpool txg_sync hang looks a lot like mine
stratactPMT: Alright, thanks for reassuring.
zfsperfinion commented on issue zfsonlinux/zfs#4832 - reproductible bug: steps to trigger hang of a pool <https://github.com/zfsonlinux/zfs/issues/4832#issuecomment-231533261>
bunderhmm
bunderthe fbsd man page doesn't say multi vdev can't be turned off
bunderas long as its enabled but not active
bundermaybe a quick import with a live usb could do the trick
DeHackEdthere's a man page zpool-features which should describe it. if memory serves correctly, it can be deactivated by destroying the affected zvols
bunderhttps://www.freebsd.org/cgi/man.cgi?query=zpool-features&apropos=0&sektion=7&manpath=FreeBSD+11-current&arch=default&format=html
DeHackEdhmm.. doesn't say...
bundersome of them are very explicit in saying there's no going back
bunderso i'm willing to think its doable for this one
DeHackEdhttps://github.com/illumos/illumos-gate/commit/810e43b2eb0e320833671a403fdda51917e8b036 suggests not. the feature has an increment but not decrement statement
PMTthe man page claims you can't disable a feature once it's enabled
PMT(technically zhack has a decrement button for feature flags but don't do that)
bunderoh dang so it does, where is my brain today
stratactbunder: I actually tried it myself and FreeBSD wouldn't let me, so PMT was on the mark. However it just says "enabled" which I assume means it's not active and something I didn't really take advantage of.
DeHackEdPMT: 'disabled' prevents ZFS from trying to use the feature. 'enabled' means it's available, but the on-disk format is the same as if it were disabled. 'active' means the on-disk format changes are in place.
DeHackEdyou can't move back down to `disabled` once it's at least 'enabled'. the question is, can you bring down from 'active' back to 'disabled' again?
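Checking where a given pool sits in that disabled/enabled/active progression is just a property lookup:
    zpool get all home | grep feature@                  # all feature flags and their current states
    zpool get feature@multi_vdev_crash_dump home        # the one being discussed here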
pcdPMT: it's not
pcdPMT: (possible to write such a thing)
pcdthe things look like holes with birth time zero, and nothing in the object is going to indicate otherwise
PMTI thought bt zero holes were already special-cased to be sent, though.
PMTAlso, if they look like bt zero holes, wouldn't it be possible to at least script detecting the problem by seeing if there are bt zero holes present in snapshot X that are not in snapshot X-1? Since this presumably only affects send because the hole_birth data is not relevant for the actual address mapping of the file, but only for computing whether to send in a diff?
PMTYou wouldn't easily be able to rework the send code to do that automatically from how I recall it being structured, but you could do the traversal.
PMT(My apologies if I'm misunderstanding anything fundamental; as I've mentioned a few times, I'm definitely new to this codebase and on-disk representation.)
PMTAh, I see, if the bt zero hole postdates hole_birth's initial txg, then we presume it's been there forever, which is why this crops up in the diff send. But can't you still notice the difference between that metadata and the mapping of a given file across two snapshots?
faenilhello people :)
faenilI am setting a root ZFS filesystem to run Ubuntu on, following https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-ZFS
faenilI'm not sure I get why the guide suggests using -a1 when creating the partition for BIOS booting
faenildoes anybody know the reason?
PMTI suspect because they need to write to that precise location and it's definitely not aligned to 512/4k/etc.
PMTBut as I am not familiar with using sgdisk in general, I am only inferring.
faenilmm ok
MarisaKirisamedoesn't pretty much every sane partitioning system have 1MiB-aligned partitions?
faenilI could not find what unit is used when none is specified
faenilthe man page says that -a1 means it will align the start of the partition to sectors that are multiple of 1, and it defaults to 2048
faenilso I was wondering why the change from 2048 to 1
MarisaKirisame2048 sectors are 1MiB
faenilso I guess the default unit is MiB
faenil"that are multiple of this value, which defaults to 2048 on freshly formatted disks"
faenilthat leads me to believe -a2048 is the default...what am I missing MarisaKirisame ?
MarisaKirisamedunno
faenilxD
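For context, the kind of sgdisk call being discussed probably looks like the sketch below (representative of the guide, not quoted from it); -a1 relaxes alignment to a single sector so the tiny BIOS boot partition (type EF02) can start at sector 34, right behind the GPT itself:
    sgdisk -a1 -n2:34:2047 -t2:EF02 /dev/disk/by-id/scsi-SATA_disk1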
Nukienfaenil, If you're looking to do a root-on-zfs for Ubuntu, check out http://pastebin.com/fa83QrBk
NukienMy script will do it all for you
faenilNukien: cheers, I'll check that out
gkeen_can i check a pools size excluding snapshots?
gkeen_zfs get all tank | grep used :)
DeHackEdUSED includes snapshots. I suggest: zfs list -o space # for the full breakdown
DeHackEdyou'll have to do some math yourself, but the USEDSNAP column is right there
gkeen_DeHackEd: i know, i just pasted the solution i found for it :P
ptx0you can do 'zfs get -H -o value used tank'
DeHackEdsummarizing free space on ZFS is hard. between reservations, quotas, snapshots and raidz it's sometimes hard to quantify
faenilNukien: wow, what a beast
PMTI would mostly expect "referenced" at the root of a pool to be a useful metric for pool space usage minus snapshots
DeHackEdthat only covers the root dataset, which is often recommended to be unused
DeHackEdI think the best option is to sum up the USEDDS column
DeHackEdwhich is equal to REFER except when the dataset in question is a clone
PMTi would expect you to be able to zpool get used [root] and then subtract usedsnap
DeHackEdit's not a recursive quantity though
DeHackEdonly USED and USEDCHILD are recursive, in their own ways
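A sketch of what that looks like in practice; the awk sum assumes the default column order of -o space, where USEDDS is the fifth field:
    zfs list -r -o space tank                 # NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
    zfs list -rHp -o space tank | awk '{sum += $5} END {print sum}'   # total bytes excluding snapshots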
zfsironMann commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-231544340>
PMTah, i meant allocated, not used, for zpool get.
PMTwhich definitely covers the entire pool
DeHackEdthat's much closer to the truth. raidz throws a wrench in the mix but it's calculable
faenilNukien: does it support BIOS? or just UEFI?
faenilline 499 seems to assume that you use EFI
pcdPMT: if you have a previous snapshot, you could do something like that. You'd have to have some evidence that the file hadn't been deleted, and the inode reallocated between the two
pcdPMT: I'm not sure if there's a way to do that in the current ondisk format?
PMTpcd: presumably the object having blocks <= first_snapshot_txg and the same inode would be sufficient?
PMT(non-hole blocks)
PMTHm, I don't know how dedup would interact with that, presuming the metadata information I'm discussing even is stored.
gchristensenHi, I want to experiment with l2arc. on my machine, I think I should see a degradation in performance by adding it (16gb ram, 3x 2tb disks in raidz1, and a 24GB SSD I want to use for the l2arc) so I'd also like to be able to remove it ... is it possible to add and remove the l2arc?
PMTgchristensen: yes
gchristensenPMT: via `zpool add <pool> cache <ssd-device>`, and then `zpool detach <pool> <ssd-device>`, yeah?
ptx0zpool remove
gchristensen*goes to read the manual* thank you!
PMTgchristensen: be careful not to miss the "cache" term before the ssd-device, or you're going to be very sad, since you can't "zpool remove" it if you do
gchristensenheh, thank you. it wouldn't be the end of the world if I had to start fresh... but thank you for the heads up!
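For the record, the add/remove pair looks like this (device path illustrative); the cache keyword on add is the part not to miss:
    zpool add tank cache /dev/disk/by-id/ata-Some_SSD_serial
    zpool remove tank /dev/disk/by-id/ata-Some_SSD_serial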
faenilmmm grub installation fails
faenilfrom inside the chroot
dasjoeNukien: friendly recommendation to put your script in a Github gist, less (none?) ads, better scrolling for me ;)
dasjoeNukien: also, Ubuntu has ssh-import-id, which can import public keys from Launchpad and Github
faenil"failed to get canonical path of <blabla>" :/
dasjoeNukien: POOLNAME=${SYSNAME:0:10} ← Why?
faenilah, nvm
KocaneHey guys , I'm new to ZFS
Kocaneand just created a RAIDZ pool
Kocanecan I see the creation process anywhere?
dasjoeKocane: well, if "zpool create" returned you to the shell it is done
KocaneUsing zfs list or zfs status, it reports as nothing existing?
Kocanehm
dasjoeKocane: check "zpool status"
KocaneThing is, I'm using OMV (openmediavault, freenasesque thing)
Kocaneso it's not done thru shell
PMTKocane: zpool history [poolname]
KocanePMT: So it should list something.. returns no such pool
KocaneWhen I created the raidz thru the omv interface, it said it won't show before it's done creating which "could take some time".
PMTKocane: did you replace [poolname] with your pool's name?
Kocaneye
Kocanepretty sure I just named it "data" :p
KocaneIt's not the "alias" is it?
PMTpcd: my apologies for the possibly-stupid question, but in trying to figure out whether you can write such a script, and reading zdb output, I can't seem to see anywhere that zdb prints information about holes other than implicitly where there are gaps in L0 entries (or the gaps in the ranges of the segment information at the end), and certainly no entries with blk_birth of 0, even with -dddddd -vvvvvv [pool/FS] [objID]. Am I blind, or misunderstanding something?
dasjoeNukien: http://sprunge.us/dIAQ
PMTKocane: zpool history, with no poolname, should print the history of every pool on the system, which presumably is going to be exactly one
Kocaneno pools available
Kocanewhich makes me doubt it's even being created?
PMTit either created it but exported it, or hasn't created it.
dasjoeKocane: then check your logs, and "zpool import" without arguments
Kocanezpool import
Kocaneno pools
Kocanehm
Kocanewhere's zfs logs?
dasjoeNo, OMV logs
Kocaneaight
dasjoeI recommend talking to the OMV guys, too. I don't think many of us here know how they implemented it
KocaneI think you're right
dasjoeAlternatively, just set it up manually. We can help! ;)
Kocanedasjoe, I was thinking about that
KocaneI wouldn't mind it showing up in OMV anyway though
Kocaneshould be importable
KocaneHow long does a RAIDZ creation take?
PMTKocane: approximately no time
dasjoeUsually seconds
KocanePMT yeah, then it's clear something is up
dasjoeAlso, I'd say "don't use raidz, use three-way mirrors or raidz2"
PMTI would generally agree with dasjoe, but I also am growing increasingly paranoid as I get older, and may end up trying to figure out a way to make 12-way mirrors if this continues
KocaneI dunno
KocaneOne parity seems ok
KocaneIll have a backup anyway
dasjoeYou do realize you're out of parity if a single disk fails
KocaneI only got 4 lol
Kocane4x3TB
PMTKocane: if a disk fails completely and you find a parity error on the rebuild, you're SoL.
dasjoeSo you'd have to resilver without any other error happening during the resilver process
PMT(This isn't somehow specific to ZFS, to be clear, just a general remark.)
KocaneYeah I get that. I'm planning on using crashplan though
Kocanein worst case
KocaneDont know how else to get the most space out of my 4 disks
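The two layouts being weighed, for four 3 TB disks (device names illustrative): raidz gives more space, raidz2 survives a second failure during the resilver:
    zpool create data raidz  sda sdb sdc sdd    # ~9 TB usable, one disk of parity
    zpool create data raidz2 sda sdb sdc sdd    # ~6 TB usable, two disks of parity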
PMTKocane: crashplan's remote storage, because backing up your pool onto your pool isn't going to help ;)
KocanePMT: exactly ;)
Kocanethe mount point I set
Kocanedoes this dir need to be created before I make the raidz?
PMTno
KocaneHmm
KocanePMT seems like it wasnt being created cause I specified a mount point
Kocanedoes it default to /mnt?
dasjoeNo, it defaults to /$poolname
Kocaneaye.
Kocane /data isn't a path being used in linux at any other time is it :p?
Kocanedasjoe: when I created the pool, do I need to create "objects" afterwards?
faenilis it expected that mounting an rpool requires -O ?
faenilI do zpool import -R /mnt rpool
faeniland that already mounts some of the mountpoints, then when I try to mount / it fails with "mnt is not empty" (of course) unless I mount it as overlay
faenilis that how it's supposed to work?
faenilthe root / uses "noauto", that's why it's not mounted by the import I guess
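A sketch of the usual dance for a root pool whose / is noauto, assuming the dataset layout from the Ubuntu guide (rpool/ROOT/ubuntu is illustrative):
    zpool import -N -R /mnt rpool      # -N imports without mounting anything
    zfs mount rpool/ROOT/ubuntu        # mount / first, while /mnt is still empty
    zfs mount -a                       # then the remaining datasets
    zfs mount -O rpool/ROOT/ubuntu     # or overlay-mount if /mnt already has mounts under it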
zfsstroobandt commented on commit zfsonlinux/zfs@f74b821a66 - Add `zfs allow` and `zfs unallow` support <https://github.com/zfsonlinux/zfs/commit/f74b821a6696fef9e9953aae05941e99bf83800e#commitcomment-18185330>
zfsJuliaVixen commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-231551875>
Kocanepmt , maybe you can answer me that as well ((((:
pcdPMT: zdb will print holes with a nonzero birth time. A good way to see that is to create a large file and then truncate it to a very small size
pcdPMT: re: your earlier comment: what if the entire file has been rewritten since the previous snapshot? It's still the same file, but all of the block birth times will be greater than the previous snapshot's txg
djsgood lecture by Matt Ahrens ... https://www.youtube.com/watch?v=ptY6-K78McY
djsnot sure if I got the link in here or from somewhere else ;)
DeHackEdI'll watch it later. right now it's time for gamesdonequick to do tasbot. :)
pcdooh, tasbot time
DeHackEdminor trivia, their web site (tas guys) is hosted on a server running zol...
KocaneIs ZFS overkill for home usage?
gotwfno
KocaneI'm considering just going XFS or EXT4 for my 4x3TB setup
Kocanegotwf: after creating the pool, do i create a filesystem object?
gotwfits overkill for any location, depending on your usage ;p
gotwfyes
gotwfwhat is your target objective here?
gotwfLike, a home server/nas? Multi drive Workstation? single drive lappie?
Kocaneyep, home server / nas
Kocanestorage of photos, vids
Kocanemovies n shit
Kocanebut also personal storage for files
gotwfthen zol is great for that
gotwfhow many drives?
Kocane4
Kocaneright now I got a raidz pool
phoxand don't go XFS if you want to keep your data :)
gotwfso then you'd create datasets via e.g. zfs create -various-opts poolname/datasetname
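For example (dataset names and options are illustrative, not required):
    zfs create -o compression=lz4 -o atime=off data/movies
    zfs create -o compression=lz4 data/photos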
KocaneThanks good tip ;)
phoxbecause... XFS
KocaneI'm creating it thru an interface
Kocaneopenmediavault
Kocanecan i create the pool in /mnt ?
Kocanethen filesystem so it's /mnt/data, say
phoxyep if you want, you can make ZFSen mount anywhere you want.
KocaneAight
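The mountpoint is just a dataset property, so it can be given at creation time or changed later:
    zfs create -o mountpoint=/mnt/data data/shared
    zfs set mountpoint=/srv/photos data/photos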
KocaneHmm
KocaneDeleting existing pool gives me
Kocanelabelclear operation failed. Vdev /dev/sdb1 is a member (ACTIVE), of pool "data". To remove label information from this device, export or destroy the pool, or remove /dev/sdb1 from the configuration of this pool and retry the labelclear operation.
phoxso with `zpool destroy poolname`?
zfsmailinglists35 commented on issue zfsonlinux/zfs#4832 - reproductible bug: steps to trigger hang of a pool <https://github.com/zfsonlinux/zfs/issues/4832#issuecomment-231554139>
PMTpcd: oh boy, so zdb won't print zero birth time holes? lovely.
Kocanephox: umount: /data: device is busy.
Kocane (In some cases useful info about processes that use
Kocane the device is found by lsof(8) or fuser(1))
Kocanecannot unmount '/data': umount failed
Kocanecould not destroy 'data': could not unmount datasets
Kocaneforce umount?
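The order that avoids that labelclear complaint, roughly: find what is holding the mount, destroy or export the pool, and only then clear any leftover labels:
    fuser -vm /data                  # or: lsof +D /data, to see what keeps it busy
    zpool destroy data               # unmounts and destroys; add -f if it is still busy
    zpool labelclear -f /dev/sdb1    # only for a disk leaving ZFS entirely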
PMTpcd: I can't find much useful documentation of it to confirm whether I'm insane, but the gen property on the object sure looks like the creation txg of the file.
zfskernelOfTruth opened pull request zfsonlinux/zfs#4835 - [buildbot checkup] Zfs master 29.06.2016+syncfixes+balanced meta data by kernelOfTruth <https://github.com/zfsonlinux/zfs/pull/4835>
pcdPMT: indeed, those are the only holes it doesn't print
PMTI'm somewhat surprised there's no flag to convince zdb to print zero birth time holes.
PMT(I'm also somewhat surprised I couldn't find the code in zdb that excludes those holes when I looked.)
pcdIt's in visit_indirect
pcdif birth_time == 0 return
PMTah, so it is. i suppose i'm going to submit a patch to add a flag to zdb to let it print those, because i really do want to see them.
PMT(if it doesn't get accepted, so be it, but it's IMO nicer than inferring their existence from calculating holes in ranges.)
PMTpcd: i'm almost positive at this point that i have the skeleton of a script to detect this, given the version of zdb i just modified to print birth time 0 holes.
pcdPMT: if there's a way to get the creation time of a file, then I'd believe it's possible given two snapshots
PMTi'm currently testing my observation that the "gen" property on the object really looks like a creation txg for the object. failing that, I'd probably see if I could infer it by taking the crtime property and seeing if the txgs have any knowledge of what the time was when they were written, but that's much messier.
Nukienfaenil, Yah, got a little large. But it does what *I* want, so it's good.
faenil:)
Nukienfaenil, Supports bios, and I think it supports uefi - I don't actually have a spare uefi box to test on
faenilyeah, found out in the meanwhile :)
Nukiendasjoe, Yah yah :) I need to move it to github/gist, one of these days. I'm back home end of July, will do it then.
Nukiendasjoe, Poolname being restricted in size since it's used for LABELs, and they're size-restricted. If the poolname/sysname is too long, it chops off the end of the LABEL leading to conflicts
zfsJuliaVixen commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-231555986>
NukienCould always use just a stock LABEL - I did originally - but then you can get conflicts when testing amongst multiple VMs.
PMTI suppose slow would be understatement of the year, considering how much traversal this script gets to do.
varesaI am running a bunch of VMs on CentOS 7 on a zfs pool. My VMs keep randomly crashing due to "audit backlog exceeded" or after increasing the backlog size now "task <audit|kworker|dmeventd|...>:... blocked for more than 120 seconds
varesaApparently that is usually because of disk-related trouble. Is there anything on ZFS side I could use to debug?
Nukiendasjoe, ssh-import-id. &deity knows how many years doing this shit, and I never knew about that. Sheesh.
pcdPMT: I mean, it's O(size of filesystem)
pcdit'll be faster than a scrub!
pcdwell, maybe, I guess I don't know how good the prefetching is in zdb
Kocaneit's not possible to set user quota per directory, is it?
PMTpcd: ah, but i'm iterating over the FS and collecting inodes as I go, rather than asking zdb to do something like -ddd on the entire fs. i suppose i could just do that too, but that's...a lot of input.
pcdPMT: gotcha. yeah, that makes sense.
pcdalso, I don't have an actual git repo on any machine in my apt, so I can't verify your guess that gen is the file creation time
pcd(browsing code through github is pretty unpleasant)
bunderKocane: if you put the directory in its own dataset, you can set quotas and reservations
PMTother people seem to also think this in their parse code, so if I'm wrong, I'm in good company
Kocanebunder: so like a volume object ?
bunderi guess, i'm not that familiar with lvm or other volume managers
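A sketch of what bunder is describing, plus the per-user variant (names are illustrative):
    zfs create data/users/alice
    zfs set quota=200G data/users/alice          # hard cap on the dataset
    zfs set reservation=50G data/users/alice     # space guaranteed to it
    zfs set userquota@alice=100G data/users      # per-user quota inside a dataset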
PMTheh, i just realized, i definitely couldn't do this without the zdb patch to print birth time 0 holes
gotwfKocane: zol, as an 'advanced modern filesystem', is non-trivial. You should spend some time reading docs before jumping in, e.g. the zfs sysadmin guide
PMTbecause otherwise i couldn't tell the difference between an L1 hole of size X and N L0 holes of size Y
gotwfwhich is probably lurking on snoracle's site somewhere
PMT(I'm not special casing any particular type of hole, I'm just noticing if a hole claims to be birth time zero and doesn't exist in the same form on prior snapshots, which is I think only possible if this bug is occurring)
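A very rough sketch of that comparison; it assumes a zdb patched to print birth-time-0 holes (stock zdb skips them, as discussed above), and the object number and dataset names are illustrative:
    OBJ=1234                                        # object (inode) number of the file, e.g. from ls -i
    zdb -ddddd tank/fs@snap1 $OBJ > /tmp/holes.old
    zdb -ddddd tank/fs@snap2 $OBJ > /tmp/holes.new
    diff /tmp/holes.old /tmp/holes.new              # a birth-0 hole present only in the newer snapshot is suspect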
gotwfalso see, for e.g. http://openzfs.org
varesaVMs running on ZFS storage keep crashing, been told the symptoms are disk issues. How can I debug?
zfsrincebrain commented on issue zfsonlinux/zfs#4809 - Silently corrupted file in snapshots after send/receive <https://github.com/zfsonlinux/zfs/issues/4809#issuecomment-231557085>
varesaShould I be concerned about these DKMS errors while updating? https://paste.esav.fi/raw/ubiciyuvet
varesaI would rather not be missing all my storage after a reboot
DeHackEdvaresa: make sure /lib/modules/$version/ has the spl full set of .ko modules available
DeHackEdshould be under the /extra/ directory
DeHackEdif so it's fine
varesaDeHackEd, will do once the update is finished, thanks
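A quick way to verify that DKMS actually produced modules for the running kernel:
    ls /lib/modules/$(uname -r)/extra/
    find /lib/modules/$(uname -r)/extra -name '*.ko' | grep -E 'spl|zfs'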
zfsJuliaVixen commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-231558418>
varesaDeHackEd, I tried to install zfs&spl for a new kernel I installed, /extra/ is empty: https://paste.esav.fi/raw/ejesohejit
zfsironMann commented on issue zfsonlinux/zfs#4829 - PANIC at fnvpair.c:205:fnvlist_add_nvlist() <https://github.com/zfsonlinux/zfs/issues/4829#issuecomment-231559078>
PMTwell, my parse code for zdb output seems to be working correctly, so that's positive
PMTnow to take the parsed information and apply the logic
PMTpcd: check this out http://pastebin.com/gNBSvnve
zfskernelOfTruth closed pull request zfsonlinux/zfs#4835 - [buildbot checkup] Master 29.06.2016+syncfixes+balanced_meta_data by kernelOfTruth <https://github.com/zfsonlinux/zfs/pull/4835>
pcdPMT: nice
djsPMT: are you working on a script to check for bad holes?
pcddjs: seems like he has one
pcddjs: only works if you're comparing between two snapshots to see if the error was introduced in that time period
Shinigami-Samais hole birth a default option?
DeHackEdnew pools default to all options enabled unless you create with -d or '-o version=28' (or lower)