<Mo> opty, Ke: Because my mobile harddisk is used for sending snapshots only, I have very short autofs timeouts, so it is unmounted most of the time. Some udev scripts create the mapper device for LUKS and also remove it on disconnect. So when unplugging, the btrfs is most probably unmounted, but the LUKS device mapper is still there. But as you say, it should be safe after umount.
<Mo> Trizt: I'm using bcache, that is btrfs-on-LUKS-on-bcache, with writeback enabled, and I'm very happy with that. Now I'm also compiling on btrfs, which was a pain before. However, I have a weekly cronjob mailing me the wear-level count of the caching SSD, as bcache puts a lot of writes on it.
<Mo> What does "ERROR: unexpected header" mean when sending/receiving via btrbk?
<Mo> I have defragmented the latest snapshot on the target, which is used as the parent. To do that I set it to r/w, ran defrag, and set it back to ro.
<Mo> It seems that only the incremental send fails after that; a full send/receive works.
<Ke> yeah, incremental send is expected to fail if you dedup something after you have sent it
<Ke> well, perhaps not fail, but you are messing it up anyway
<Trizt> Mo: do you use the ssd option in your mount?
<Mo> Ke: Hm, that worked some time before... ok, I wasn't happy to defrag the snapshot, but the transfer rate was about 50 kB/s. It isn't much better now, about 300 kB/s for the full send/receive.
<Mo> Trizt: No, I have nossd set. I asked about that; the ssd option isn't always useful even for modern SSDs.
<Mo> Hm, before 4.14... there seems to have been a change about that.
<Mo> Trizt: But more to the point, I was asking which I/O schedulers I should use. Usually I use [kyber] for SSDs and [bfq] for HDDs. The bcache device itself has the "none" scheduler here, but I have still set different schedulers for the caching SSD and the backing HDD. Not sure whether the same scheduler for both would be better.
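For reference, the per-device scheduler split Mo describes is set through sysfs. This is a command sketch only; the device names sdX (caching SSD) and sdY (backing HDD) are placeholders for your own layout, and writing requires root.

```shell
# Inspect the active scheduler (shown in brackets) and the available ones:
cat /sys/block/sdX/queue/scheduler      # e.g. "[none] mq-deadline kyber bfq"

# Switch schedulers per device (sdX = SSD, sdY = HDD are placeholders):
echo kyber | sudo tee /sys/block/sdX/queue/scheduler
echo bfq   | sudo tee /sys/block/sdY/queue/scheduler
```

These settings do not survive a reboot; a udev rule is the usual way to make them persistent.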
<Mo> Trizt: Let's meet on #bcache@OFTC
<Ke> Mo: if you wait until I retire at the Finnish retirement age (never) and no one has fixed that issue by then, I will take a look at it
<Ke> the autofs thing
<rdz> hey all. i sometimes get I/O errors when reading files from a disk connected to a raspberry pi. i checked the disk with an extended self test and it is ok. i reformatted the disk a while ago to btrfs. it was ext4 before and never had i/o errors
<rdz> now i wonder if ext4 simply doesn't perform a checksum test and thus never reports any errors
<rdz> if that is the case, i'm actually happy that btrfs reports logical errors
<rdz> does anybody know whether ext4 performs a checksum test when created with default options?
<multicore> rdz: ext4 doesn't have checksums for data (only metadata)
<multicore> rdz: when you get i/o errors you should check dmesg for the reason
<multicore> rdz: whether it's a csum error or not... also i suggest that you scrub your fs
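The checks multicore suggests can be sketched like this; it is a command sketch to run as root on the actual filesystem, and the mount point /mnt/data is a placeholder.

```shell
# Was it a checksum failure or a plain device I/O error?
dmesg | grep -i btrfs              # look for "csum failed" vs. bare read errors

# Verify all data and metadata checksums on the filesystem:
btrfs scrub start -B /mnt/data     # -B waits and prints statistics at the end
btrfs scrub status /mnt/data       # progress/result if started without -B

# Per-device error counters (persist across reboots until reset):
btrfs device stats /mnt/data
```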
<rdz> multicore, i wanted to create a snapshot, which failed because of an I/O error. then i scrubbed the filesystem and found some files with corrupt blocks (wrong checksums)
<rdz> with ext4 those probably would have gone unnoticed
<rdz> without using btrfs i wouldn't have noticed how unreliable a hard-drive connected to a raspi is
<rdz> i know check back whenever i copy stuff on it.
<rdz> i meant: i _now_ check the consistency of all copied files after writing batches of files
<opty> Mo: you can always use an extra sync to be sure
<opty> Mo: but i think you can't close the mapping while it's in use
<megamaced> hi, i got into a situation where I ran `rm -rf /var/lib/docker` but the snapshots are still visible when I run `btrfs subvolume list /`. If I try to remove one with `btrfs subvolume delete -C /var/lib/docker/btrfs/subvolumes/f39dcb67cb9179c34b5cdbfd38d7a079ab0c26ed9f1131c3b3fa5f88dae474b2` then I get "No such file or directory"
<megamaced> How can I remove those snapshots which no longer exist?
<Knorrie> maybe something still has an open file descriptor to something in it, which prevents cleanup
<opty> megamaced: try lsof?
<megamaced> well `ls -la /var/lib/docker` returns nothing and `lsof /var/lib/docker` returns nothing
<megamaced> ignoring the fact it's Docker: generally in btrfs, if I delete a snapshot from disk the stupid way with rm -rf, how would I then delete the reference to it shown by btrfs subvolume list / ?
<opty> megamaced: can you pastebin the output of "btrfs subvolume list /"?
<opty> hm, seems down
<opty> megamaced: or maybe it's still working
<opty> megamaced: you can try "btrfs subvolume sync /"
<opty> megamaced: or newer progs
<megamaced> will try
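The cleanup opty suggests looks like this in practice; a command sketch for root, with the paths taken from the conversation. A deleted subvolume can stay listed until its removal is committed on disk.

```shell
# See which subvolume entries are still around:
btrfs subvolume list /

# Wait for any pending subvolume deletions to actually finish:
btrfs subvolume sync /

# If a stale subvolume directory still exists, remove it properly
# instead of rm -rf (which cannot delete a subvolume):
# btrfs subvolume delete -C /var/lib/docker/btrfs/subvolumes/<id>
```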
<hojuruku> OK, I've worked out that the optimum IO size for my samsung SSD is 16k (2x 8k pages). But if I make the nodesize bigger in btrfs (say 64k) I'll get better compression, right? What other disadvantages do I get?
<Knorrie> nodesize is metadata
<hojuruku> only for the metadata - and the sector size must match the 4k page size in linux, right?
<hojuruku> so the best size for the node size is 16k, or can i have it larger? does a larger node size allow better compression?
<Knorrie> metadata is not compressed
<hojuruku> so it should match the preferred io size of the drive?
<hojuruku> i had some problems with 64k on a vps, 32k seemed to work well
<hojuruku> ah, it depends on the average file size you are working with
<hojuruku> if most of your files are 16k or larger (like VM images), larger nodesizes are faster.
<hojuruku> based on an average desktop system, what's the best node size - 16k or 64k (or 32k)?
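For context: nodesize is chosen at mkfs time and cannot be changed later. A command sketch (/dev/sdX is a placeholder); 16k is the mkfs.btrfs default, and sectorsize normally matches the 4k page size.

```shell
# Create a filesystem with an explicit node and sector size:
mkfs.btrfs -n 16k -s 4k /dev/sdX

# Verify what was actually written to the superblock:
btrfs inspect-internal dump-super /dev/sdX | grep -E 'nodesize|sectorsize'
```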
<Trizt> Mo: thanks for the reply, I'm sorry I didn't reply, had to rush to work
<hojuruku> yeah, 4k favours write performance, 16k is balanced, and at 64k node size you get faster reads but much slower writes (when you are dealing with a stupid number of files)... is that a layman's explanation of it? it relates to file location lookup times, not the actual throughput
<hojuruku> trying out zstd compression. if you don't use mount options to set the compression level, can you do it with btrfs properties?
<hojuruku> i wonder if btrfs property set {file} compression zstd:6 would work?
<hojuruku> the only way i guess is to use the mount option and copious use of the NOCOMPRESS flag on the filesystem.
<hojuruku> chattr +c compresses new extents of files only. btrfs property set - does that recompress the whole file?
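The per-file knobs being discussed look like this; a command sketch with example paths. Note that neither the property nor chattr rewrites existing data, which is what the question above is getting at.

```shell
# Force a compression algorithm for one file (affects newly written extents):
btrfs property set /srv/file compression zstd
btrfs property get /srv/file compression

# Mark a directory so new files in it get compressed:
chattr +c /srv/dir
lsattr -d /srv/dir

# To recompress data that is already on disk, it has to be rewritten:
# btrfs filesystem defragment -r -czstd /srv/dir
```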
<multicore> hojuruku: nope
<hojuruku> does the updated btrfs-tools' btrfs fi defrag (my live rescue image doesn't have it atm) support compression levels? or is that only available as a mount option on the filesystem?
<multicore> the 4.15 kernel allows setting the zlib compression level, no mention of zstd
<multicore> does it work?
<hojuruku> like i said, i've got old btrfs tools on this live cd image i'm using to move my root filesystem over
<hojuruku> hmm, facebook wrote that code. i hate facebook but i love linux. I'm conflicted.
<multicore> i'm not brave enough to use zstd
<hojuruku> i'm going to try it on my portage tree for starters.
<multicore> not supported by boot loaders, can't mount on older kernels, and too new...
<hojuruku> has anyone made a script that compresses every single file with lzo completely into tmpfs and checks if the compression ratio is good, doing a better job than btrfs at deciding whether to enable it or not? what does btrfs look at? the first 4k or something?
<multicore> hojuruku: the max compression range is 128k, so the compression ratios won't match "normal compression methods"
<multicore> hojuruku: the compression logic is like: if the first part doesn't compress, then compression is disabled after that
<multicore> hojuruku: with compress-force you'll get better results
<multicore> hojuruku: *better compression ratio
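multicore's point about the 128k range can be illustrated outside btrfs: compressing a redundant file in independent 128k chunks (roughly how btrfs compresses per extent, with gzip standing in for zlib) gives a worse total than one whole-file pass, because each chunk starts from scratch.

```shell
#!/bin/sh
# Illustration only, not btrfs itself: whole-file vs. per-128k-chunk compression.
set -e
tmp=$(mktemp -d)
head -c 4096 /dev/urandom > "$tmp/block"                      # 4k of random data
for i in $(seq 256); do cat "$tmp/block"; done > "$tmp/data"  # repeat to 1 MiB
whole=$(gzip -c "$tmp/data" | wc -c)                          # one pass
split -b 131072 "$tmp/data" "$tmp/chunk."                     # 8 x 128k pieces
chunked=0
for c in "$tmp"/chunk.*; do
  chunked=$((chunked + $(gzip -c "$c" | wc -c)))              # each chunk alone
done
echo "whole file: $whole bytes compressed; 128k chunks: $chunked bytes total"
rm -r "$tmp"
```

The chunked total is always noticeably larger here, since every chunk re-stores the 4k pattern and pays its own header overhead.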
<hojuruku> so the first 128k?
<hojuruku> but i don't want to use mount options..... you see, the only way to run a database is to use the NOCOW attribute on folders
<multicore> "If compression is enabled, nodatacow and nodatasum are disabled."
<hojuruku> because the mount options are mutually exclusive. subvolumes on the same partition don't support different compression / nodatacow options... the best way to tune your system is not to use mount options, but to use defrag, btrfs property and chattr to modify the attributes on a per folder / file basis
<hojuruku> multicore, yeah, with mount options ;)
<hojuruku> multicore, but you can disable copy-on-write for, let's say, the /var/lib/mysql folder and the files in it - as long as they are not already created
<hojuruku> without using a mount option.
<hojuruku> then when you fire up mysql for the first time with chattr +C /var/lib/mysql (assuming the directory is empty at this point)
<hojuruku> there will be no COW on the mysql files.
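The NOCOW setup just described can be sketched as below; the path comes from the conversation, the service name is a guess for your distro, and the crucial constraint is that +C only takes effect for files created after the attribute is set.

```shell
# Set No_COW on the (still empty) data directory before the database
# ever writes a file there; existing file data is not converted.
sudo systemctl stop mysql              # service name varies by distro
sudo mkdir -p /var/lib/mysql
sudo chattr +C /var/lib/mysql          # new files inherit the No_COW attribute
lsattr -d /var/lib/mysql               # should show the 'C' flag
sudo systemctl start mysql
```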
<hojuruku> that's the main reason i don't like using mount options for compression, but mount options seem to be the only way to force compression and choose compression levels, which is a bitch
<multicore> are you talking to yourself?
<hojuruku> no, i'm explaining to you why I don't want to use mount options to set compression.
<hojuruku> but i want to be able to force compression for files as well.
<hojuruku> ok, btrfs fi defrag will recompress the whole file I think - or at least attempt to compress it (checking the first 128k, right?) - but what btrfs fi defrag needs is an option to choose the compression level as well for zlib/zstd
<multicore> defrag will compress the whole file, unless there's a nocompress flag there
<hojuruku> multicore, does the compression logic you speak of kick in as well when you are doing a btrfs fi defrag -czlib - or is that equivalent to compress-force=zlib just for those files?
<multicore> there were patches to change the defrag + nocompress behaviour but i don't know the state of those
<hojuruku> by NOCOMPRESS flag you mean btrfs property set <file> compression "", not chattr -c - they are different, right?
<multicore> hojuruku: yes, the property
<multicore> hojuruku: it can be set for example if the fs is mounted with compression and the logic decided that the file isn't compressible
<hojuruku> btrfs fi defrag will add +c to files it compresses, but if it sees the nocompress flag it won't. so chattr +c is like a status variable to show it was previously compressed, and the btrfs property is like stricter policy enforcement. or are they both the same actual variable? chattr +c sets compression to "lzo" if there are no mount options.
<multicore> hojuruku: if you defrag such a file it's skipped unless the patches i mentioned earlier are in use
<hojuruku> chattr is a VFS file attribute, right... and btrfs properties are yet another set of attributes. and there are two different ones for compression. oh, so confusing.
<hojuruku> just when i thought i understood the btrfs kernel faq, it all changes again with compression.
<multicore> hojuruku: btrfs defrag doesn't set the +c attribute?
<hojuruku> What's the precedence of all the options affecting compression? - chattr +c tells the kernel to try to compress it
<hojuruku> btrfs property set <file> compression zlib or lzo = FORCE compression on
<hojuruku> ah, got it now.
<hojuruku> and mounting with compress=lzo sets all new directories to have +c & the "lzo" attribute when they are created, i saw.... but once I boot it i won't have the compress=lzo mount option
<multicore> hojuruku: if you mean compress-force by FORCE, no
<hojuruku> and chattr +c is only for new extents, so it should only really be used on a directory to turn on opportunistic compression
<hojuruku> "Yes. The utility chattr supports setting file attribute c that marks the inode to compress newly written data. Setting the compression property on a file using btrfs property set <file> compression <zlib|lzo|zstd> will force compression to be used on that file using the specified algorithm."
<hojuruku> so btrfs property set forces compression on for a file and will recompress an uncompressed file, or switch compression algorithms
<multicore> it forces the compression algo but it doesn't act like compress-force
<hojuruku> ah, it acts like compress= for that algorithm
<multicore> chattr +c defaults to zlib, and with the property you can set the algo
<hojuruku> so let's say i have a bzip2 file and i set btrfs property set xyzfile compression lzo ----- btrfs won't compress it because there won't be a good compression ratio after the first 128k, right?
<multicore> afaik if you want compress-force the only options are to use the mount option or defrag
<multicore> hojuruku: yes, afaik
<multicore> hojuruku: you can use filefrag -v and start testing
<hojuruku> "will force compression to be used on that file using the specified algorithm." force is in bold on the faq...
<multicore> forcing compression to a certain algo isn't the same as compress-force
<multicore> you can have zlib, lzo, zstd and uncompressed extents in a single file
<hojuruku> so let's say i compressed the whole filesystem with lzo, and i see my portage tree and i wanna squash it more; if i do find /usr/portage -exec btrfs property set {} compression zstd \; - that's only going to affect NEW files added to the portage tree and new extents written to existing files - it won't recompress like defrag would.
<multicore> hojuruku: even if you use compress-force you can have uncompressed extents
<hojuruku> it would be nice if defrag, as well as honoring the nocompress flag on existing files, honored the existing compression algorithm on directories / files when you run it. that is true too, right?
<multicore> hojuruku: because depending on the kernel version, data is only saved compressed if the resulting data is < or <= the uncompressed size
<hojuruku> multicore, so compress-force doesn't really force. you mean if i created a new filesystem and mounted it with compress-force=zlib from the first time - it would still create some uncompressed extents?
<multicore> hojuruku: read ^
<hojuruku> multicore, ah of course... ok, so bzip / compressed archive files - you can't beat that compression - so it will not compress them. wow, i'm starting to understand it. it does have some smarts, so you don't have to tune it manually as much as i thought.
<multicore> hojuruku: if you use compress-force and there's a bzip file, -force tries to compress it but doesn't save it compressed
<multicore> hojuruku: 'compress' would skip it after the "first parts"
<hojuruku> multicore, so compress-force delays its compression ratio checking until the very end of writing a file / extent, to see if the lzo'd or zlibbed data is better than the raw data - or the original file in the case of a copy operation.
<multicore> hojuruku: compress-force always compresses the data but only saves it compressed if < or <=
<hojuruku> if the ratio is better, i mean. ok, i got it now.
<multicore> hojuruku: remember the 128k limit
<hojuruku> i spent like a day reading the paper on btrfs quota groups getting my head around that, like last year, only to find out it didn't work due to bugs in the end.
<hojuruku> at least this works stable as a rock ;)
<multicore> hojuruku: with compress-force it's possible to get compressed and non-compressed extents in a single file
<hojuruku> so if i wanted to make a perl script or something to act like compress-force... here are the rules i must consider:
<hojuruku> 1) read the first 128k of the file into ram.
<hojuruku> 2) compress it & compare sizes -
<hojuruku> 3) if the size is smaller with compression, use btrfs defrag (or some perl module or something that works like defrag - probably does exist) or just xargs-style options to btrfs fi defrag to FORCE the compression on that file.
<hojuruku> 4) check that chattr +c is on for the directory it's in (and not -C) & in testing verify that the btrfs compression prop was set right by btrfs defrag. btrfs property set compression - is only for NEW extents of a file, acting like compress-force. chattr +c will attempt compression. if a directory has btrfs property set compression applied to it - will it act like compress-force for that folder?
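Steps 1-3 above can be sketched in a few lines of shell. This is a rough illustration, not btrfs code: `would_compress` and the 10% threshold are inventions for the sketch, gzip stands in for zlib/zstd, and the defrag call is left commented because it needs a real btrfs mount.

```shell
#!/bin/sh
# Sample the first 128k of a file; only call it compressible if the
# compressed sample is at least ~10% smaller (threshold is made up).
would_compress() {
  sample=$(head -c 131072 "$1" | wc -c)
  comp=$(head -c 131072 "$1" | gzip -c | wc -c)
  [ "$comp" -lt $((sample * 9 / 10)) ]
}

# Demo on two throwaway files: one highly compressible, one not.
head -c 262144 /dev/zero    > /tmp/zeros.bin
head -c 262144 /dev/urandom > /tmp/random.bin
for f in /tmp/zeros.bin /tmp/random.bin; do
  if would_compress "$f"; then
    echo "$f: compressible"
    # btrfs filesystem defragment -czstd "$f"   # on a real btrfs mount
  else
    echo "$f: skip"
  fi
done
```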
<hojuruku> multicore, last sensible question before i drop the topic. btrfs defrag without any -c option will 1) uncompress and defrag the data, OR 2) honor the existing btrfs compression property to force compression per file, and *may* compress if it sees a +c on the file or directory?
<hojuruku> 3) we know btrfs defrag honors the "nocompress" flag....
<hojuruku> btrfs defrag i mean ^^
<multicore> hojuruku: i haven't tested lately so i can't be sure, and there have been changes...
<hojuruku> multicore, it's all good, i'll have to do some more playing.. now to fix up grub and boot into it ;) brb
<multicore> hojuruku: start testing with filefrag -v and compsize
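The two inspection tools multicore names are used like this; a command sketch with example paths. They show what actually landed on disk, which is the only reliable way to settle the defrag/property questions above.

```shell
# Per-extent layout of one file; compressed extents are marked "encoded":
filefrag -v /srv/file

# Disk usage vs. uncompressed size, broken down per algorithm
# (compsize is a separate package, not part of btrfs-progs):
compsize /srv/dir
```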
<hojuruku> roger wilco.
<iranen> what does -v do? how about -r?
<multicore> iranen: my filefrag doesn't have a -r option?
<hojuruku> i might do some experiments with an awk script to get some stats out of my filesystem with different compression options. tune each file to the best compression... if only btrfs defrag honored the existing properties, so you could just run it recursively on your system and everything would work out right once you've tuned everything up.
<multicore> hojuruku: do you have compsize installed?
<hojuruku> i will install it when i boot the live system; like i said, i'm stuck with an older version of btrfs tools (sabayon linux with a 4.14 kernel - but i dumped them for gentoo now)
<hojuruku> i'm on a rescue disk now migrating to btrfs
<hojuruku> zlib has different compression levels... but btrfs just uses a fixed one, right? where the dictionary size = 128k, right?
<multicore> afaik the default for zlib is level 3
<iranen> why no zstd?
<hojuruku> iranen, oh, sabayon's btrfs tools are out of date - i can set the btrfs zstd property but i can't defrag with it yet - at least on this live cdrom, but when i boot into gentoo i can zstd defrag
<hojuruku> i'm going to play around with this, write an awk script, do some experiments and maybe write a blog post about it one day soon.
<obsrwr> any PCIe experts: can I generate TLPs from the CPU with a max payload size >64 bytes?
<obsrwr> s/max payload/payload/g
<Zygo> so... this happened last night: BTRFS error (device dm-28): parent transid verify failed on 8218474790912 wanted 243817 found 244088. no errors in dev stat or scrub
<Zygo> and the extent tree does not contain an entry for that bytenr
<Knorrie> the checksum is only 271 away from the right one
<Knorrie> maybe the data is almost correct
<Zygo> so... a ghost transid failure? does btrfs just ignore these now?
<Knorrie> oh wait
<Knorrie> it's a transid
<Zygo> it's a transid from the future
<Zygo> which means the referring item is old, but somehow its parent transid was OK?
<Zygo> that's the only transid message in the last ~20 hours
<Knorrie> scrub doesn't check those, right?
<Zygo> scrub did count them before
<Zygo> they're "verify_errors"
<Zygo> dev stat will count them too, if the filesystem doesn't go read-only at the same time
<Zygo> but this one is _weird_
<Zygo> the only thing I can think of is that btrfs read the other raid1 mirror copy, which was OK, and said "right, we'll go with that then"
<Zygo> except that I'd expect another log message for that...?