KnorrieAnkular: sit in a corner and cry
smurfendrek123Hey guys, when i run sudo btrfs fi du -s / i get this: ERROR: failed to lookup root id: Inappropriate ioctl for device ERROR: cannot check space of '/home/': Operation not permitted
smurfendrek123Well it's this: ERROR: failed to lookup root id: Inappropriate ioctl for device
smurfendrek123ERROR: cannot check space of '/': Operation not permitted
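Those two errors usually mean btrfs fi du is being pointed at a path it cannot resolve as a btrfs subvolume, or that the root id lookup is not allowed there. A minimal sanity check, assuming / really is the btrfs mount point:

    # confirm the path is actually on btrfs
    stat -f --format=%T /
    # expected output: btrfs
    # then run the usage summary as root on the mount point itself
    sudo btrfs filesystem du -s /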
Knorriekdave: https://syrinx.knorrie.org/~knorrie/btrfs/btrfs-nossd.gif see left lower corner... that's what happens when switching to nossd for writing data in the last 12 days :D (subvol removal with nossd takes more than 12x as long so that's still on remount,ssd)
Knorriethis is a chunk-ordered picture, not physical
Knorriein combination with feeding chunks with fragmented freespace to balance, which I think I found a nice algorithm for now
Knorrieunbelievable
Knorrie\:D/
smurfendrek123Hey knorrie did you get in the fedora repo's yet?
Knorriedid you search for it?
smurfendrek123Well what's it called?
Knorriehttp://lmgtfy.com/?q=python+btrfs+fedora
KnorrieI don't know if jorti did new packages already for last weeks update
smurfendrek123Should i get python2-btrfs or python3-btrfs?
smurfendrek123or does it not matter?
Knorrie3
Knorriepython 2 support was removed in the latest version
Knorrieand btrfs-heatmap is a separate package I think, if it's packaged at all
smurfendrek123It's in the repos
smurfendrek123And it's indeed separate
smurfendrek123that's cool
KnorrieI'm doing the debian packages myself now
smurfendrek123Is it a lot of work to submit and maintain?
Knorrieit's a lot of work to get all little details right
smurfendrek123I can't tell the difference between snake and linear
Knorriehttps://github.com/knorrie/btrfs-heatmap/blob/develop/doc/curves.md
Knorrieyes, they're quite similar
Knorrieand the options are there because otherwise users keep complaining they're not there, not because I think it's useful :)
smurfendrek123haha
smurfendrek123Well thanks for making this
smurfendrek123it's quite nice
Knorrieso I added a documentation page to stress the fact that I think it's not useful :D
Knorrieyeah, it's superfun
btrfs601Hi, is there a way to reduce btrfs weight on kernel boot?
Knorrieweight?
btrfs601well it has a big "weight" in my total boot time
Knorrielarger filesystems take longer to mount, that's a known issue
Knorrieno solution for that now
btrfs601large like 1 TB? I admit I have a somewhat complex setup to mount
Knorrieyes
Knorrielarge like, contains many files
btrfs6011 ssd, and raid 1 and 0 on 2 HD
Knorrieduring mount it needs to find information that's scattered all around the same search space as all parts of all files you have
Knorrieso it causes random search read io
Knorriewhich is slow
btrfs601there are some things I don't understand
btrfs601like, why does it request the raid6 modules? I don't have one...
btrfs601is it distro agnostic?
btrfs601on my modules.dep I have kernel/fs/btrfs/btrfs.ko: kernel/crypto/xor.ko kernel/lib/raid6/raid6_pq.ko
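For what it's worth, that modules.dep line is the whole explanation: btrfs.ko is built with the raid5/6 code in it, so it statically depends on the raid6_pq and xor helper modules whether or not the filesystem actually uses those profiles. A quick way to see the dependency:

    # list the modules btrfs.ko depends on
    modinfo -F depends btrfs
    # typically prints something like: raid6_pq,xor (names vary per kernel build)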
btrfs601well thanks anyway I need to go
TomLIs there a way to calculate or extrapolate from a running process the length of time to run btrfs check --init-extent-tree?
potty-nyani have two computers running btrfs across multiple devices in single mode. i keep the data on the two machines synced/mirrored across them. however, one computer reports 5.4 TB of usage and the other 8.1 TB. any idea what might be going on?
TomLHow do you do the mirroring?
potty-nyanrsync
TomLare you certain the mirroring is working?
potty-nyanyeah i run it manually. interestingly the "main" machine reports 5.4 TB and the "backup" machine reports 8.1 TB
TomLwhat transport is rsync using? are you doing rsync over CIFS/samba?
potty-nyanthe main machine only has 6.4 TB of total available space. so something funky is going on.
potty-nyani run it over ssh. i don't use btrfs compression
TomLwhat's the block size of the two file systems?
potty-nyanone sec
TomLbtrfs inspect-internal dump-super <device>, look for sectorsize
potty-nyan4096 on all devices in both computers
TomLOh.
TomLdoes the name/number of files change each time you run rsync?
potty-nyannope
TomLare you using rsync --delete ?
TomL --delete delete extraneous files from dest dirs
potty-nyanusually --delete-before
potty-nyani also prune empty directories with -m
TomLis it possible there are sparse files on the source which become non-sparse on the target?
potty-nyanhow would i go about finding out if that's the case?
TomLcan you run "du <filename>" on each file on the source and then compare to the same on the target?
TomLDepending on the number of files, it might be easier to start by doing a "du" on each directory first, and only drill down to files if they don't compare
potty-nyani'd have to write a script
potty-nyanbut for example the top-level directories all show the same info
TomL... top level?
TomLare you rsync'ing the root folder?!
potty-nyanno, i'm syncing /mnt/media/
potty-nyanthen i have directories underneath
TomLhow many folders in /mnt/media?
potty-nyanlike music, tv, movies, etc.
potty-nyan12
TomLdu <folder>, 12 times on both systems and compare
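A rough sketch of that comparison, assuming the directory layout is identical on both sides, the backup machine is reachable as the (hypothetical) host "backup", and the directory names contain no spaces:

    # print apparent size per top-level directory, local vs remote;
    # mismatching lines show where to drill down further
    for d in /mnt/media/*/; do
        local_size=$(du -sh "$d" | cut -f1)
        remote_size=$(ssh backup du -sh "$d" | cut -f1)
        printf '%s\tlocal=%s\tremote=%s\n' "$d" "$local_size" "$remote_size"
    done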
potty-nyani'm going to try running rsync with the -c flag for each of those 12
TomLmaybe -s
potty-nyan-S?
TomLsummary, so you just get total for the dir
potty-nyanoh, for du
potty-nyanyeah i did du -sh
TomLyes
potty-nyanit's exactly the same
TomLdu output is the same for all 12 folders?
multicore"reports 5.4 TB and the "backup" machine reports 8.1 TB" reported by what?
TomLYea, my next question.
potty-nyandf -hT, btrfs fi show /mnt/media, btrfs fi usage /mnt/media
TomLso there's 2.7TB in the target system that didn't come from the source .... ?
potty-nyani really have no clue
Knorrieyou're using rsync
Knorrieso if you use a recent cp on btrfs, it will reflink the files instead of a full copy, that could also be the reason
Knorriersync undeduplicates that
Knorrieto add something to the long list of guessing things
KnorrieTomL: no, not really, but why would you ever use that?
Knorrieit's like open heart surgery with a blindfold on
potty-nyanwhat can i do to avoid that reflink problem?
Knorriefirst find out if it's the problem, before trying to solve it
potty-nyanhow do i figure out if that's the issue?
potty-nyanit might be because i often move torrents from the downloads folder into proper sorted categories
TomLKnorrie: its a long story, starting with a punctured hardware RAID-6 array
TomLwould a recent stat show multiple links if btrfs linked it? does it look like a hard link?
Knorriereflinks?
TomLwould he have to dump the tree and grep for reflinks somehow, then?
Knorrieor just start over, remove everything on the target system, and then suddenly see that it's still reporting 3TB used and then find the forgotten folder with files
Knorriethis guessing all doesn't really lead anywhere :D
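One way to turn the reflink theory into a measurement rather than a guess: btrfs filesystem du splits usage into exclusive and shared space, so a large "Set shared" number on the source that is missing on the target would account for the gap (paths below assume /mnt/media on both machines):

    # run on both machines and compare the columns
    sudo btrfs filesystem du -s /mnt/media
    # columns: Total / Exclusive / Set shared / Filename
    sudo btrfs filesystem du -s /mnt/media/*/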
potty-nyanit sounds like i shouldn't be using rsync with btrfs
Knorrieif you just want to have a mirror of everything on another fs, and if you don't want to do changes on that mirror, you could also use send/receive
potty-nyanyeah
potty-nyani will look into it
potty-nyanbut right now it's time for lunch :D
TomLyou could even pipe that over ssh, I do that with dd actually to image raw block devices over the network without intermediate storage
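A minimal send/receive sketch for that kind of one-way mirror, assuming /mnt/media is a subvolume, the backup host is reachable as "backup", and the target filesystem is mounted there at /mnt/backup (all names hypothetical):

    # initial full copy: read-only snapshot, streamed over ssh
    btrfs subvolume snapshot -r /mnt/media /mnt/media.snap1
    btrfs send /mnt/media.snap1 | ssh backup btrfs receive /mnt/backup
    # later runs send only the difference against the previous snapshot
    btrfs subvolume snapshot -r /mnt/media /mnt/media.snap2
    btrfs send -p /mnt/media.snap1 /mnt/media.snap2 | ssh backup btrfs receive /mnt/backup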
potty-nyanthanks for the help
TomLif you want bidirectional change updates, you can use unison. and the first time it changes a file in the other direction, if it is reflinked it will get un-deduplicated by the reverse rsync that takes place
TomLthen they'll both be 8.1TB (if that's what's happening)
TomLI transported in person ~35TB of data on 16 drives from Seattle to Chicago. Upon arrival, 2 dead drives and a 3rd one with 6 unrecoverable read errors on the first read. I wound up with 6 punctures in the array.
TomLso after replacing the 3rd drive, I cleared the bad block map and set about identifying bad blocks and rewriting them
specingwhat were you transporting them in, a rollercoaster?
Knorrieheh
TomLan airplane. In a cardboard box with a styrofoam insert, made for shipping drives
TomLwhich was in my carry on
Knorrieand you don't have a copy of it in the place where you came from
TomLyes I do
TomLI'm trying to avoid recopying 35TB across 2,000 miles
Knorrieis the raid a hardware raid controller?
TomLso recovering from this on a filesystem that doesn't do data checksumming is not hard
TomLyou identify the bad blocks by reading through them and then rewrite the blocks that give an I/O error
TomLvoila, fixed
TomLproblem is, btrfs does data checksumming. so even when I find the bad block, I can't rewrite it
Knorriebut you can dd the unmounted fs to devnull
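That read-through is essentially a sequential scan of the raw device; any sector the array can no longer read shows up as an I/O error in dmesg, which tells you what to rewrite. A sketch, with /dev/sdX standing in for the unmounted array device:

    # read the whole device once, discarding the data
    dd if=/dev/sdX of=/dev/null bs=1M conv=noerror status=progress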
TomLI do a write operation which succeeds, but it doesn't actually update
TomLI still get a csum mismatch when I read that block again after doing the write
TomLso I thought fine, I'll do the writes and the init-csum-tree
TomLbut, init-csum-tree crashes with an error in the extent tree
KnorrieI'd go sit in the sun, wait for the next plane back and try again :D
TomLwhat are the odds that one of 6 bad blocks out of hundreds of millions is in the extent tree?
baudotACTION climbs the tree
TomLso then I start init-extent-tree, it's only 12MB or so. how long can it take?
TomL5 days later...
Knorriean extent tree for a 35TB fs is not 12MB
Knorrieand you already lost your csum tree now?
Knorrieor do you keep the old one if it can't complete
TomLThere is a csum tree, tree-stats says it is 33.66MB
TomLIt also says the extent tree is 3.08MB
TomLdid I mention the 35TB is all in one single file? ;)
TomLit's a mysql innodb table
multicoreo_O
TomLWhen I designed this thing, it was 2TB.
TomLThe salespeople have been doing well.
multicoreso how many extents ?
TomLI'm in the process of redesigning into a Cassandra cluster, but meanwhile I have to keep this thing alive somehow
TomLI'm not sure how to get that number
TomLis that the number of clusters in tree-stats?
multicoreTomL: with filefrag
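For reference, once the filesystem can be mounted again, filefrag reports the extent count directly (the InnoDB file path below is just a placeholder):

    # summary line ends with "... N extents found"
    filefrag /var/lib/mysql/bigtable.ibd
    # -v dumps the per-extent layout if needed
    filefrag -v /var/lib/mysql/bigtable.ibd | tail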
TomLit's not currently mounted, btrfs check --init-extent-tree is running
TomLI realize now what I should have done is mount the filesystem with nodatasum and then do the writes to repair the punctures. would that have worked?
TomLcould I then remount without nodatasum, and all would be well? or would the csums still mismatch because while mounted nodatasum, it won't update them
multicorethis is the craziest thing i've heard in a while
TomLwell. If I had reliable zfs-style raid-z in btrfs I wouldn't use the hardware RAID :)
multicorei mean 35TB db in btrfs, performance is next-level bad :)
TomLit's actually pretty good, I can't feel a difference from xfs
TomLI have two copies on xfs, this is the only one on btrfs
TomLactually, I'm thinking that if one of the punctures hadn't landed in the extent tree, init-csum-tree would have worked and that would be that
TomLbut the odds... its like winning the lottery
TomLor losing as the case may be
Knorrie#yolo-ops
TomLheh
TomLadding new tree backref on start 331776000 len 16384 parent 0 root 7 <-- what is the 331776000? is that a count of something? sectors? bytes? extents?
Knorriebtrfs virtual address space I guess
Knorrieah no
Knorriestart/len looks like an extent
Knorrie7 is the csum tree
Knorrielen 16384 is a 16kiB metadata block
TomLwhy would it be doing something with the csum tree while initializing the extent tree? is it adding backrefs into the csum tree, or into the extent tree for the csum tree?
baudotACTION sharpens her claws on the tree
KnorrieI have no idea
KnorrieI never did this
TomL:)
TomLwell, the last time I had to do a mysqldump of the whole thing and re-load it into a remote server, it took six weeks; it was 32TB at the time
TomLthat might be faster than waiting for this, and then finding out it's still screwed
Knorrie8)
TomLmeanwhile replication is paused and I've got data piling up in a disk cache before it gets loaded into the database so that the binlog partitions don't fill up
TomLand there's only one copy of all that
KnorrieI'd say... cancel the whole operation and start over
TomLyea, I think I'm there.
Knorriethis init tree stuff operating on bad blocks is just too fragile
TomLOh, the array doesn't have bad blocks anymore
Knorriecutting legs and arms off trying to rebuild them with skin from the other legs and head
Knorrieetc
Zygothe first priority with bad blocks is to move the data to a device without bad blocks
TomLI mean, I have no problem with that ^ ;)
Knorriehey Zygo
TomLThe device no longer has bad blocks
Zygohey Knorrie
KnorrieZygo: did you see that animated gif?
ZygoACTION watches the gif
Knorriewhat nossd did to my fs
Zygoit...significantly changed drainage patterns, and moved a lake? ;)
Knorriehaha
Knorrieyou need the hilbert vibe
Knorriethe bright white is 100% filled block groups
Knorriefinally
Zygooh, I get it...it makes it snow, so you can't see the group any more, just the lakes ;)
Zygoground, even
Zygofilling in the block groups is probably good
Knorrieno, this is the snow https://syrinx.knorrie.org/~knorrie/btrfs/2966977-237-60-76638922473472.png
darklingIsn't it nearly spring?
Knorrieis this the moment when we get reports on the mailing list of people seeing their filesystem melting down?
Knorrie"btrfs feels so sluggish"
ZygoI've switched test machines over to ssd to see what the long term effects are
KnorrieI'd recommend starting to generate png pictures every day :)
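A daily picture can be as simple as a cron job around btrfs-heatmap; a sketch, with the output location made up and the flag spelling taken from memory rather than the heatmap docs:

    #!/bin/sh
    # e.g. /etc/cron.daily/btrfs-heatmap (hypothetical)
    mkdir -p /var/lib/heatmap
    btrfs-heatmap -o /var/lib/heatmap/$(date +%Y%m%d).png /mountpoint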
Zygothe thing about changing allocation schemes or balance rituals is that the new thing often works well for a few months, then some horrible side-effect occurs
Knorriewhat I found out so far (I think) is that nossd fills up many more free space gaps for data, but it allows 64KiB writes for metadata instead of 2MB writes
Knorriewhat I totally don't get yet is that subvol removal totally explodes with metadata writes, but balance doesn't
Zygothere was a patch going by that did something about bulk deletes and another that changed metadata split behavior
Zygothose might be alternative fixes for the same symptoms
Knorrieyeah, I've been thinking about that, but to do more research I'd need a fs to run a simulated workload instead of testing in production
Zygoe.g. the bulk delete one lets you wipe out all the keys in a page at once, presumably instead of: delete one key, update all the nodes up to the root, delete another key, update all the nodes...
Knorriedoes that happen per key delete?
Knorriehm probably
Knorriewow
Knorriesoon(tm) I'm going to do a netapp level clone again of this fs, and then do an attempt again to enable skinny metadata and balance 400GiB of metadata from DUP to single
Knorrieif I can manage to get that done, it might reduce metadata writes for the extent tree by some 60-70%
Knorriewhich would ease the pain a bit
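For the record, the two steps are roughly the following, assuming /dev/sdX and /mountpoint for the cloned filesystem; the -f on the balance is there because converting metadata from DUP to single reduces redundancy and normally has to be forced:

    # enable skinny metadata extent refs on the unmounted filesystem
    btrfstune -x /dev/sdX
    # then rewrite all metadata block groups as single
    mount /dev/sdX /mountpoint
    btrfs balance start -f -mconvert=single /mountpoint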
Kobazbtrfs filesystem usage / Device size: 238.47GiB Free (estimated): 9.10GiB
KobazError writing to file: No space left on device
KnorrieKobaz: yeah
Kobazyeap
KnorrieZygo: it wouldn't be that bad if all the updates were done in memory, but my filesystem is just pumping out 100MB/s of writes all day, and not using cpu
Kobaz[5282052.847708] WARNING: CPU: 4 PID: 4415 at fs/btrfs/extent-tree.c:3207 btrfs_cross_ref_exist+0xe6/0x100
Kobazgetting a lot of these
KnorrieKobaz: can you pastebin 'btrfs fi show' output?
Kobazhttps://pastebin.com/D4yWNdaZ
Knorrieso, all your raw disk space is allocated to use for data or metadata
Kobazhttps://pastebin.com/Sgk8FHjM
Knorriethe free space left in that allocated space is so fragmented that btrfs is giving up on it, even if there's still 9GiB of it left inside
Kobazi just removed 3 gigs of data
Knorriebut the kernel errors do not look nice
Kobazthey dont
Kobaz4.9.7
KnorrieI haven't seen those before
Knorriecan you pastebin output of grep btrfs</proc/mounts
Kobaz/dev/sde1 / btrfs rw,relatime,ssd,space_cache,autodefrag,subvolid=5,subvol=/ 0 0
Knorrienonetheless, it's almost filled up
Knorrieok, now try mount -o remount,nossd /
Knorrie:D
Knorrieand see if the errors go away
Kobaznossd hmm
Kobazyeap
Kobazthe errors went away
Kobazbut i have new ones
Kobazhttps://pastebin.com/n4MwXbrq
KnorrieHardware name: To Be Filled By O.E.M. To Be Filled By O.E.M.
Knorriehaha
Knorriedo you have anything quota-related enabled?
Kobaznope
Knorriegood
Knorriebut anyway, you have to make a decision about what you want here, running a disk up to almost 100% capacity is not a fun thing with btrfs
Kobaz/dev/sde1 btrfs 239G 228G 9.2G 97% /
Kobazdo de do
Knorriethe nossd seems to allow writes into smaller pieces of free space, instead of giving up
Knorriegiving up = the enospc
Kobazah
ZygoI backed down from filling disks to 99% to filling them to 98%
Kobazhehe
Zygowhich is painful on some of my filesystems because that's ~460GB
Kobazi try not to fill
Kobazbut
Kobazit's my desktop
Kobazit's getting tight
Knorrieget a bigger boat
Kobazi already have a helsen 22
Kobazit's the largest boat my trailblazer can tow
Knorrienice one
Kobazi could proobbaabbbly tow a bigger one, but it's just not a heavy enough vehicle to do highway speeds
Knorriebut boats want to be in the water, they're not for testing your car
Kobazyeaaaaaaaaah
Kobazbuuuuut
Kobazit's like $3500 a year to get a 22 foot slip at the lake
MooingLemurACTION pretends to moo
baudotACTION crouches down and starts stalking MooingLemur
MooingLemurACTION unmoos
Knorriehttps://www.youtube.com/watch?v=K0Wf8h8gUz8
MooingLemurACTION is currently in Denmark.
Knorriehttps://www.youtube.com/watch?v=Q1zIQ7XycwY
TomLSo you can build a boat out of legos.
KnorrieACTION moos at MooingLemur 
baudotACTION crouches down and starts stalking Knorrie
KnorrieACTION slaps baudot 
Knorriepoor old photocamera mic in the windz
Kobazthat's a nice instrument cluster
Kobazthat looks like maybe around the same size
Kobazmaybe a 26 foot
Knorrieyeah I think so
KnorrieI went on a trip with a colleague of mine back then, and his gf and his father (who owns the boat)
Knorrieah
Knorriethere it is https://syrinx.knorrie.org/~knorrie/foto/2012-08-zeilen/route.jpg
Kobazi'm doing a btrfs fi balance start -dusage=100 /
Knorrieand the whole photo report if you go up a level :)
Kobazusage is a bit better now
Kobazand i deleted some more stuff
Knorriethat rewrites all data in your fs
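A gentler variant, for what it's worth: the usage filter restricts the rewrite to block groups below a given fill level, which compacts free space without touching everything; the thresholds here are arbitrary examples:

    # only rewrite block groups that are at most 25%, then 50% used
    btrfs balance start -dusage=25 /
    btrfs balance start -dusage=50 /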
Knorrieflying is also fun https://www.youtube.com/watch?v=6k9bzfb5FXE
Kobazyeah theres a little county airport like, 5 minutes from here
Kobazthat's on my list
KnorrieI know a guy I work with for hacker event organization who has it as a hobby
matthiaskrgr19:19 < matthiaskrgr> [03:30:45] deleting a COW'ed file,does btrfs have to read the entire file? or does it just have to read a bit of metadata (wondering while deleting files seems so much slower than on ext)
matthiaskrgr19:19 < kilobyte> [03:33:52] just metadata, but that can be a lot when the file is massively reflinked
matthiaskrgr19:19 < kilobyte> [03:36:06] file deletion, especially batched (lots of nearby files at once) is somewhat faster than ext4, but only for an oranges-to-oranges comparison
matthiaskrgr19:19 < kilobyte> [03:36:53] with DUP, reflinks, and so on, it's obviously slower
Knorriewhoa
matthiaskrgrhmm
matthiaskrgron my sys it feels like deleting files is a couple of orders of magnitude slower than on my ext4 back then
Kobazbtrfs seems very io-blocking
matthiaskrgrit feels like when I am waiting for things on my pc, 70% of the time it's the hdd and the rest only cpu :(
matthiaskrgrunless I'm compiling something, then of course cpu takes everything
Kobazthe balance start is really blocking a lot of io in my userland apps
darklingKnorrie: That "To be filled by OEM" thing -- I don't think I've ever seen a machine where it _doesn't_ have that.
matthiaskrgrbalance start ?
Kobazbalance in general
matthiaskrgrah
Knorrieyes, when it's moving data it doesn't allow you to also change that data at the same time
Kobazbut like, oddly long periods are blocking
Kobazlike 5-10 seconds
Kobazi could see blocking for a few ms while it reallocates a block or two the app is currently wanting to get at
Knorrie5-10 seconds would be oddly short for reallocating a blockgroup on my fs :D
Kobazon an ssd
Knorrieapples oranges yes
Kobaz250 gigs
Kobazso the blockgroup size is not enormous
Knorrie1GiB
Kobazew yay, i think i found my race condition
Knorrieand?
darklingIt had slipped down the back of the sofa?
matthiaskrgrhmm, what is this btrfs-transacti process
matthiaskrgrmanages disk writes?
Knorriewriting out changes to disk
matthiaskrgrso everything that is written to disk goes through that?
Knorriethat I don't know (I don't think so) but I guess it makes sure the barriers are done right
matthiaskrgrok
matthiaskrgrI'm watching iotop, btrfs-transacti already wrote around 700 mb
matthiaskrgrdisk read speed 10 mb/s :(
Knorrieand why are you doing that?
Knorrieor, what's triggering it
matthiaskrgrmost of it seem to be from chrome
matthiaskrgrstrangely enough
TomLbrowser cache
TomLstop streaming pr0n while compiling, problem solved
TomL;)
matthiaskrgrwell as I said, often I have to wait up to 15 seconds to just have a simple terminal start :|
darklingTomL: Stop compiling while streaming pr0n, surely?
TomLI stand corrected.
darklingBrowsers do write insane amounts of information to disk.
darklingIIRC, Firefox writes something like 1 MB of data simply visiting a web page -- and that's not even the cache.
matthiaskrgrheh
multicoresingle google search can use up to 8MB session store
Kobazdarkling: yeap
matthiaskrgrwell I started simply suspending the browser process when I need IO :(
multicoredefault flush time is 15sec on firefox so things add up
matthiaskrgrbut I don't remember browser being such a pain in the ass 7 years ago or so
TomLwho knows what evil lurks in yon javascript
matthiaskrgrthe fun thing is: due to disk IO and waiting for the cache, pages actually load slower than re-pulling them from the web at 1 megabyte/s
TomLhow big is your browser cache? cleared it lately?
TomLhave you tried pr0n mode, where it doesn't cache on disk?
matthiaskrgrI think chrome limits at 500 MB or so
matthiaskrgrbut I have not found a way to increase that to 5 gigs or so
TomLyea but more would be worse, if filesystem tree-walking is the problem
matthiaskrgrhmm
AL13Nthis explains a lot...
AL13Ni'll try suspending firefox when i need IO
matthiaskrgryeah
matthiaskrgrsaves io and cpu cycles
matthiaskrgrunless you want to use youtube for music or something
AL13Nbefore, when i gets too bad, i just killed firefox and clicked "restore all"
matthiaskrgrhehe yeah
multicoreit doesn't write anything it there aren't any changes
AL13Ni grant you, i have too much open windows and tabs
matthiaskrgrwell
AL13Nthe js on the pages is killing the browser
matthiaskrgrthere might be *some site* that continues loading new ads from the web every 20 seconds via some script which then get cached or something crazy like that :(
AL13N16GB of ram is simply not enough for a desktop (when using firefox)
AL13N*might* ???
AL13Ni'm pretty sure there is quite a few of those
multicorehttps://addons.mozilla.org/en-US/firefox/addon/about-sessionstore/
matthiaskrgrthere were times when systems had 500 MB of ram !! and you could run browser, mail client and a compiler at the same time
matthiaskrgrwtf happened !!
AL13Ndude, i had a 8086 that had high memory! (1MB FTW)
AL13Ngranted, i didn't have btrfs on it
matthiaskrgrI bet it didn't run firefox :P
AL13Nnor firefox
matthiaskrgr^^
frinnstturn off javascript and you still can :)
AL13Nsay, does the bind server have a heavy IO load?
matthiaskrgrimagine how blazing fast a 2010 firefox could be on today's systems
AL13Nlately it's been lagging on my server
AL13Nmatthiaskrgr: you would think it's very fast
matthiaskrgrno? :(
AL13Nmatthiaskrgr: and the rest of the world too (in less than an hour)
AL13N:-)
matthiaskrgris that some kind of quote
AL13Nno sorry, i was referring to the estimated time of hacking your firefox
matthiaskrgroh
AL13Nbad joke
TomLall billionaires should log into their online banking on 2010 firefox *drool*
gehidoreheh
usoAL13N: how do you suspend firefox? run it in a VM?
matthiaskrgruso: if you launch a process using zsh shell, hit ctrl+z
matthiaskrgralternatively, this should work, too: https://www.unixtutorial.org/2014/08/linux-pause-process/
TomLyou can send SIGSTOP, right?
usomatthiaskrgr: ah, start it from a shell and keep it open ... too easy :)
matthiaskrgr^^
TomLthe shell is just sending SIGSTOP
TomLwhen you press ctrl+z
TomLyou can do that yourself with "killall -STOP firefox"
matthiaskrgrbut I'm lazy and ctrl+z and "fg" to resume is just convenient :>
TomLexcept now you need an ugly term window open even though you don't always need it, with an associated piggly shell process
multicorei'm running ff in unprivileged lxc container so lxc-freeze works if needed
TomLwhy not just have the pig running to send the signals, -STOP, -CONT, close the pig
Knorrieoink
matthiaskrgrTomL: well, one term more or less does not really matter when you already have 20 spread over 6 workspaces :P
usoTomL: and SIGCONT to resume ... me wonders how well the browser still works after a resume
TomLthat's all zsh does when you type "fg"
TomLon my system, zsh has VSS of 37MB.
TomLx 20 = 740MB of RAM for shells? gawd
usowhat's that compared to the 3 x 5 GB for the browsers, when you use multiple profiles ;)
TomLbash has ctrl+z & "fg", VSS is only 20MB :P
matthiaskrgrI have 12 gigs of ram for a reason :P
usoha, I like that :D, I get you, evil browsers!
matthiaskrgrobligatory http://downloadmoreram.com/ joke
gehidorehttps://pb.gehidore.net/lBrh almost DONE!!!
matthiaskrgrw00t
matthiaskrgrram seems to compress very well
Sargun1) Ideas about: https://github.com/kdave/btrfs-progs/issues/38
Sargun2) Is there a way to get programmatic output from btrfs?
SargunWhere's the best place to file bugs for btrfs
Sargunhttps://gist.github.com/sargun/48f80e83e4612312a0fa976731667131