<Knorrie> hm, after the first batch of converting, the fun decreases...
<Knorrie> maybe it was too good to be true :o
<Knorrie> it's between 5 and 10 minutes per metadata chunk now, and more writing
<Knorrie> I modified my script to work on metadata chunks instead and force convert to single; maybe that will help remove the ones with the most empty space first and decrease the rewriting into DUP chunks while converting
<Knorrie> I'll let it run for some time to see what happens. If it becomes a disaster again, I can just throw all of it away again (nice!!)
<zerocool> i just wanted to say, to those of you that work on btrfs
<zerocool> that i love you
<zerocool> for all of your hard work and dedication
<zerocool> and kindness and bravery
<Knorrie> kdave: ^^ :)
<jidar> how do I promote the snapshot I'm currently running to be the new root? I've also lost access to all of my snapshots after using snapper to roll back to an older snapshot
<jidar> basically this issue :
<jidar> I had to mount subvol=0 on /mnt to be able to even look at the "lost" snapshot subvols
<jidar> I guess I'm partially wrong there... mounting the UUID of the device with subvol=.snapshots seems to have fixed it
<Mo> demfloro: But old kernels are still able to read a zlib9-compressed btrfs?
<Mo> Hi, could a balance operation make the btrfs faster in any way?
<multicore> Mo: have you tried defragging metadata?
<Mo> But it's also said that balance does "some defragmentation, but not on a file level..."
<Mo> multicore: What does defragging metadata mean? I have tried to avoid defrag as it would drop all reflinks of my snapshots.
<Mo> So balance would not drop any reflinks?
<multicore> Mo: read the btrfs-filesystem man page; there's a NOTE section under defrag
<Mo> multicore: That note about "Directory arguments without -r..."? Nothing is said about metadata defragmentation.
<Mo> What does btrfs-maintenance BTRFS_BALANCE_DUSAGE="1 5 10 20 30 40 50" mean? For the usage filter I only know single values or ranges.
<multicore> i don't know what the script does but i'm assuming that it first balances with 1 and then 5 ...
<Mo> multicore: Why? Isn't that the same as only balancing with dusage=5?
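Presumably the btrfsmaintenance script runs one balance pass per listed value, each with a higher usage cutoff. A dry-run sketch of that loop (the mountpoint is hypothetical, and the commands are echoed rather than executed so the sketch is safe to run anywhere; drop the echo to balance for real):

```shell
#!/bin/sh
# Dry-run sketch of a stepped balance as btrfsmaintenance presumably
# does it: one balance pass per usage cutoff, cheapest pass first.
# /mnt/data is a hypothetical mountpoint.
BTRFS_BALANCE_DUSAGE="1 5 10 20 30 40 50"
for usage in $BTRFS_BALANCE_DUSAGE; do
    echo "btrfs balance start -dusage=$usage /mnt/data"
done
```

The point of stepping is that the early cheap passes (usage below 1%, then 5%) free almost-empty chunks quickly, so the later, more expensive passes have fewer chunks left to rewrite.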
<Mo> So you mean 'btrfs filesystem defragment -r /' would not break reflinks and only defrag metadata? This is not so clear in the man page about defragment -r: "only certain internal trees... defragment files recursively in given directories"
<multicore> "Directory arguments without -r do not defragment" ... WITHOUT
<Mo> So this is quite safe to try, as it won't drop reflinks. I'll try it and see if that gives me some performance back.
<multicore> Mo: it'll defrag metadata; it's not clear to me whether you'll have to do this for every subvol or not
<multicore> Mo: if you're getting 50KB/s speeds i doubt it'll help
<multicore> Mo: have you tested that your drives are in OK working order?
<multicore> Mo: and do you have quotas in use?
<Mo> No quotas. scrub is fine. btrfs check is fine.
<multicore> well i had this one hdd that did 2KB/s with no errors
<multicore> no ata or smart errors
<Mo> hdparm -tT is also fine on both.
<multicore> you'll have to read the whole drive to make sure
<multicore> tho if the problem is on the source end you should see slow scrub speeds
<Mo> So for defragment without -r you are not sure about subvolumes? So maybe I need that on every subvolume? I'll just try it on the toplevel subvolume.
<multicore> Mo: yeah, i'm not sure about that
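If the metadata defrag does turn out to be per-subvolume, the workaround would be a loop over every subvolume. A dry-run sketch (the mountpoint and subvolume names are made up; commands are echoed rather than executed, drop the echo to run for real):

```shell
#!/bin/sh
# Sketch: run a metadata-only defrag (no -r) against the toplevel
# subvolume and each subvolume below it, in case the ioctl only
# covers the tree it is aimed at. /mnt/usb and the subvolume names
# are hypothetical; 'btrfs subvolume list /mnt/usb' would show the
# real ones.
TOP=/mnt/usb
for sv in . root home snapshots/2018-02-11; do
    echo "btrfs filesystem defragment $TOP/$sv"
done
```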
<Mo> Huh, it returns immediately: "WARNING: directory specified but recursive mode not requested: /mnt/usb/mobiledata/" "WARNING: a directory passed to the defrag ioctl will not process the files"
<multicore> those warnings are because the behaviour with and without -r is confusing
<Mo> multicore: But without -r it did not do anything.
<multicore> how do you know that it didn't do anything?
<Mo> multicore: Defragmenting anything in less than 1s? With 14GB of metadata on a slow rotating USB device?
<Mo> multicore: Ok, if so, I did defrag the metadata now. Another scrub is running, and the speed is fine: 14GB in 2 minutes and still running...
<Mo> Any other idea how to improve the btrfs receive speed?
<Mo> If not, I will need to defrag at least the latest snapshot, accepting the lost reflinks and lost space.
<multicore> Mo: defragging metadata shouldn't take too long, but at <1 sec it most likely didn't do anything/much
<Mo> Running that defrag without -r on my main btrfs takes longer, ok.
<Mo> I guess the metadata on the backup-only btrfs, which only receives snapshots, wasn't fragmented, or was already defragmented by the balance operations I did before.
<multicore> Mo: try doing a btrfs send to null so you'll know if the problem is on the sending or receiving side
<Mo> multicore: Like btrfs send /mnt/btrfs-top-lvl/root > /dev/null ?
<Mo> Send runs smoothly with rates from 10MiB/s to 80MiB/s, nothing weird. So the receiving backup btrfs is the bottleneck.
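The isolation test looks roughly like this; with pv(1) in the pipe the live rate is visible. (The snapshot path is hypothetical, and the commands are echoed rather than executed so the sketch is safe to run anywhere; drop the echo to run for real. The snapshot must be read-only to be sent.)

```shell
#!/bin/sh
# Sketch: measure raw send throughput without involving the
# receiving side at all. /mnt/top/root is a hypothetical
# read-only snapshot.
SNAP=/mnt/top/root
echo "btrfs send $SNAP > /dev/null"
# with pv(1) in the middle you get a live rate display:
echo "btrfs send $SNAP | pv > /dev/null"
```

If this runs fast, the sending filesystem is fine and the slowness is on the receiving end, as it turned out to be here.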
<UukGoblin> is it possible to have de-duplicated files that have a header, i.e. a file stored in an archive (with 0 compression)?
<opty> block level based?
<UukGoblin> but the header will not necessarily be a full block in size, unfortunately
<UukGoblin> so there'd have to be a gap in the block
<multicore> UukGoblin: full block sized?
<UukGoblin> multicore, if a block is, say, I dunno, 4kB, the header might be 247 bytes, so there'd have to be a 3849-byte gap in the block. Is that possible?
<opty> i guess it depends on the archiver
<UukGoblin> no, there will be different-sized headers for sure
<opty> i mean the ability to align
<UukGoblin> yeah, no, I mean, I'm asking if the fs can do it :-)
<multicore> UukGoblin: so in effect you'd want a dedupe block smaller than 4k ?
<UukGoblin> multicore, well, no, I don't want to de-dupe the header. Let me try to explain with an example. If I have a 1 GB .mkv file which is then put inside a .rar with 0 compression (stored), let's say rar adds 247 bytes of header to make a 1GB + 247B .rar file. I'd like to deduplicate all data between the .rar and the .mkv
<UukGoblin> so there'd need to be a non-duplicated block containing the 247 bytes of the header, then possibly that gap I was talking about, followed by 1GB of blocks which are also duplicated in the .mkv
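The gap being described is just padding the header out to the next 4K block boundary. The arithmetic, using the block size and header size from the example above:

```shell
#!/bin/sh
# Padding needed after a header so the payload starts on the next
# 4K block boundary (numbers from the .rar example above).
BLOCK=4096
HEADER=247
GAP=$(( (BLOCK - HEADER % BLOCK) % BLOCK ))
echo "header=$HEADER bytes, pad with $GAP bytes to reach offset $((HEADER + GAP))"
# prints: header=247 bytes, pad with 3849 bytes to reach offset 4096
```

The outer `% BLOCK` handles the edge case of a header that is already an exact multiple of the block size, where no padding is needed.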
<form> UukGoblin: do you want that to be done automatically by a dedupe run, or do you just want to put the file on the fs that way?
<Mo> multicore: After complete defragmentation of the last snapshots on the USB backup btrfs, the next send/receive is really faster. But I lost some space due to dropped reflinks.
<UukGoblin> form, I don't suppose there's any dedupe that can do it right now, so I'm considering writing one myself
<UukGoblin> so let's say I'll "put the file on the fs that way"
<Mo> I now get around 18.4MiB/s on the UAS drive and 1.72MiB/s on the USB2.0 drive. All better than the 50KB/s before.
<form> and do you have single rar files or multichunk?
<UukGoblin> form, multichunk, actually
<UukGoblin> (you know where they're from ;-)
<form> ok, i don't know the answer, because you would have to lay down the single rar files "padded" for alignment on the fs, and i don't know how to do that. but i know how to play a rar'ed mkv without extracting :)
<UukGoblin> there's rarfs on fuse that I know of, but I was wondering if I could improve on that with btrfs dedupe
<form> you can just drop the .rar on vlc. or do it with an unrar|mplayer pipe
<UukGoblin> yeah, I'll be streaming that via dlna, but rarfs on fuse can sort of do it
<multicore> UukGoblin: so you're looking for a sliding window deduper
<UukGoblin> multicore, yes!
<UukGoblin> that's exactly right :-)
<multicore> Zygo: ^ ?
<Ke> does that rar contain the original data at a multiple-of-4K offset?
<Ke> if not, nothing can do this for btrfs
<UukGoblin> Ke, no, assume it doesn't. I know how to do it if it did.
<UukGoblin> ok, that's an OK answer :-)
<Zygo> multicore: did somebody say sliding window deduper? ;)
<Zygo> well, the hard limit is 4K alignment, i.e. the duplicate data must be in units of 4K and aligned to 4K blocks
<Zygo> bees will slice and dice the extents as required to dedupe any 4K-aligned blocks, but there's nothing it can do if one copy of the data is at 4K and the other is at 5K or 7K
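The aligned-blocks-only constraint can be imitated crudely with ordinary tools: hash every 4K-aligned block of two files and count the hashes they share. A toy demonstration on temp files, nothing btrfs-specific (this is an illustration of the matching rule, not how bees is implemented):

```shell
#!/bin/sh
# Toy illustration of 4K-aligned block matching, the only case a
# block-level deduper like bees can act on: file b is file a with
# exactly 4096 bytes prepended, so every block of a reappears in b
# at a 4K-aligned offset and a block-hash comparison finds them all.
# (Prepend 247 bytes instead and the alignment breaks: zero matches.)
dir=$(mktemp -d)
dd if=/dev/urandom of="$dir/a" bs=4096 count=4 2>/dev/null
{ dd if=/dev/zero bs=4096 count=1 2>/dev/null; cat "$dir/a"; } > "$dir/b"

# print one md5 per 4K block of the given file
block_hashes() {
    blocks=$(( $(wc -c < "$1") / 4096 ))
    i=0
    while [ "$i" -lt "$blocks" ]; do
        dd if="$1" bs=4096 skip="$i" count=1 2>/dev/null | md5sum | cut -d' ' -f1
        i=$((i + 1))
    done
}

block_hashes "$dir/a" > "$dir/ha"
block_hashes "$dir/b" > "$dir/hb"
# count blocks of b whose hash also occurs among the blocks of a
shared=$(grep -cxFf "$dir/ha" "$dir/hb")
echo "shared 4K-aligned blocks: $shared"
# prints: shared 4K-aligned blocks: 4
rm -rf "$dir"
```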
<Zygo> oddly enough I get a lot of random hits on git pack files
<Zygo> I'd expect to see a 0.025% match rate on those, but it's more like 10-20%
<Zygo> so who knows what happens in any specific test case
<Zygo> (these git pack files I'm getting all the hits on come from different git gc runs on the same git repo, so they store mostly the same data, but with different deltas inserted in between, leading to different file offsets. Still an insanely high hit rate for not trying to be 4K aligned, though)
<Zygo> one of these days I should build a tool that tells me what other files share blocks with file X, broken down into special cases like "exactly identical", "Y contains X", "X contains Y", "X and Y share blocks at the same offsets", "X and Y share blocks at different offsets", with a percentage breakdown when there's a mix of results
<Zygo> and a special case for 'all instances of Y share a common base name, you should rename file X named "#1152562" to "comic_sans.ttf"'
<Zygo> and then I can go through all my old ext4 lost+found directories and finally dispose of them because I'll have some idea WTF they are :-P
<redfish> hi, btrfs-progs 4.15 on 4.15.1 (arm64) fails with: btrfs: unable to add free space :-17 free-space-cache.c:828: btrfs_add_free_space: BUG_ON `ret == -EEXIST` triggered, value 1. should i report it as a bug?
<redfish> this is 'btrfs check'. also, it succeeds on another drive (with an independent btrfs fs, also 8TB) just fine.
<raynold> ahh it's a wonderful day
<Zygo> Knorrie: I am somewhat impressed with your netapp device
<Zygo> specifically, the part where you can make snapshot/clone devices and then do performance tests on them
<Zygo> if I try that with an LVM snapshot, everything is 95% slower :-P
<Knorrie> Zygo: it's pretty incredible stuff yes
<linduxed> i'm in a liveUSB environment
<linduxed> when i run "uname -a" i get linux 4.2.2-1-ARCH
<linduxed> so it's not the newest
<linduxed> what i'm wondering is whether there's a problem with creating a btrfs filesystem from this USB, considering i'll boot and install the latest kernel right after
<linduxed> i know that newer kernels make use of btrfs partitions in better ways (whatever that means, i'm not sure)
<Knorrie> that depends on the btrfs-progs version more than the kernel
<linduxed> i'll check
<darkling> In general, no, there shouldn't be any problems.
<darkling> You may have to deal with a couple of minor tweaks once you've made it.
<darkling> (Balancing away any extra single chunks is the main one)