<pmow> I realize this is off-topic, but with iscsi mpio do all virtual hosts mount the volume simultaneously? I'm using hyperv<->freenas
<mgolisch> the guests mount nothing
<mgolisch> only the hypervisor does
<sunrunner20> pmow, no. You can only have one client at a time on a ZFS/FreeNAS iSCSI extent
<sunrunner20> not completely true
<sunrunner20> if you make it a zvol or file extent and format it with a cluster-aware filesystem then I think you can have multiple mounters
<mgolisch> and i think you can somehow do that with hyperv too
<mgolisch> not sure how they do it though as ntfs is not a cluster filesystem
<mgolisch> but none of that has anything to do with mpio, no?
<mgolisch> thats about having multiple paths to the same storage
<sunrunner20> not a thing
<pmow> Huh ok.
<pmow> So all of them see the volume but only the one that formatted it has it mounted.
<pmow> Maybe I need a cluster or something. I thought MPIO was the same
<mgolisch> yeah i think you need to enable some cluster stuff to allow mounting a ntfs volume on multiple nodes
<pmow> Sounds good
<pmow> Some folks on spice seem to think shared NTFS is a recipe for disaster
<mgolisch> think its called cluster shared volumes but i have never done anything like that so no idea
<jab416171> ugh why do all of my downloading torrents error out
<pmow> temp storage?
<jab416171> temp storage?
<jab416171> what do you mean pmow
<pmow> if all your torrents error out, could be something they all share - disk to save to
<jab416171> yeah, the disk is a freenas nfs share
<Linuturk> so, how does the Rancher UI work? Where are the docker containers launched? On the host or inside the VM?
<Linuturk> if it's inside the VM where Rancher is running, it seems silly to limit it to one cpu and such a small amount of RAM
<sunrunner20> know what I don't get?
<sunrunner20> how we can have Blender and then something like GIMP
<acoctres> Ok? Your point?
<sunrunner20> blender is a fantastic 3D modeling program. GIMP is a junk photo editor
<sunrunner20> both open source
<acoctres> I still don't get it
<evilbug> would you say optane is a good idea for a freenas box?
<sunrunner20> guess I'll stop trying to explain
<evilbug> say like a massive nas.
<sunrunner20> if its just serving files, even on 10gig optane would be pointless
<sunrunner20> (serving files to a couple people)
<sunrunner20> but you can have a server on gigabit that'd use the heck out of optane, if its running VMs
<evilbug> and with cost in mind it's tons cheaper to get 240gb of ram than optane.
<evilbug> so adding that optane to a 16/32gb ram nas should be dope, yeah?
<sunrunner20> actually not really
<sunrunner20> for read caches (L2ARC) it actually consumes ram. I forget the exact ratio
<sunrunner20> so what ends up happening is as it fills it makes your ARC smaller and smaller. We've seen machines perform absolutely awfully and suddenly perform a LOT better when the L2ARC is removed
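The RAM cost being described here is the per-block header ZFS keeps in ARC for every block cached in L2ARC. A rough back-of-envelope sketch, assuming on the order of 70 bytes per header (the exact size varies by ZFS version) and a pessimistically small average block size:

```shell
# Rough L2ARC header overhead (assumption: ~70 bytes of RAM per
# cached block; the real figure differs between ZFS versions).
l2arc_bytes=$((480 * 1024 * 1024 * 1024))   # e.g. a 480 GiB L2ARC device
avg_block=$((16 * 1024))                     # pessimistic 16 KiB average block
headers=$((l2arc_bytes / avg_block * 70))
echo "~$((headers / 1024 / 1024)) MiB of ARC consumed by L2ARC headers"
```

With small blocks, a large L2ARC can cost gigabytes of ARC, which is how it ends up shrinking the very read cache it was meant to extend on a low-RAM box.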
<sunrunner20> hey DrKK`, you remember the last L2ARC incident on the forum? The guy who was getting like single digit read speeds?
<evilbug> so then optane would be shit for zfs is what you're saying :D
<sunrunner20> just that's not something you'd put in your average home NAS
<sunrunner20> optane actually makes a superb SLOG and L2ARC
<sunrunner20> its super fast, both in raw throughput and latency/IOPS. It's got powerloss protection, and it has terrific endurance
<evilbug> i was thinking more hypothetically.
<evilbug> a hypothetical hardcore nas.
<sunrunner20> then sure. Put one in - after you have 64GB+ RAM
<acoctres> Does it have power loss protection?
<sunrunner20> I have a high end NVMe device in the freenas in my homelab
<sunrunner20> acoctres, afaik its a physical property of 3D XPoint
<sunrunner20> and also afaik all the SAS models have powerloss protection built in
<evilbug> sunrunner20: i'm running a 950 in my desktop :)
<sunrunner20> 960 here
<sunrunner20> I can't believe its like the size of my thumb
<sunrunner20> still, I think I paid as much for 32gb ram as I did for a TB of flash
<sunrunner20> stupid DDR4 "shortage"
<mgolisch> either you have gigantic thumbs or those things are smaller than i imagine
<sunrunner20> might be a bit bigger
<sunrunner20> or at least longer
<evilbug> sunrunner20: man, i paid $55 for 16gb ddr4 2133 ram in 2016 when i put this machine together.
<evilbug> that same set of ram is now $181.
<Conmega> So... How long could it take for the FreeNAS installer to finish starting up after mounting the rootfs from the ISO? It seems to do a SCSI bus crawl to find all devices and I have a JBOD cabinet with 15 2TB SAS drives off an LSI 9207-8e. It "hangs" after the mounting from (insert cd device here), then after a few minutes spews out "Attempt to query device size failed: NOT READY" for each drive
<Conmega> then hangs after that... Been like 15 minutes?
<sunrunner20> but its only 22mm wide
<sunrunner20> only thing I can suggest Conmega is to DC the JBOD
<Conmega> So disconnect, do the install, then bring it back after its installed?
<Conmega> Fair enough, I know I got through the install without the device before. So I suppose I'll give it a try without again then go back to it.
<evilbug> now wouldn't it be a much better idea for a production nas to run off of an ssd than a flash drive?
<mgolisch> thats the same no?
<metalcated> Quick question about idmapd, nfs shares and AD
<Conmega> To be fair this is an odd... configuration you could say... Basically trying to set up FreeNAS in a qemu/kvm with two mellanox 10gb dual ports and the sas controller passed through to it.
<metalcated> I am using nfsv4 and getting that long UID for nfs shares and unable to write any data unless I set 777 on directories
<metalcated> Under AD settings "Idmap backend:" is set to rid, would it make any difference to set that to 'ad'? yes, I have UID/GIDs set using Unix services
<sunrunner20> get used to FreeNAS on bare metal before trying to virtualize
<metalcated> Or if I am way off please tell me
<sunrunner20> metalcated, not a clue
<sunrunner20> evilbug, we say to run off of USB sticks because it doesn't do a lot of writing and you can mirror them
<sunrunner20> and its like $15 for two
<sunrunner20> can't even get a junk SSD for that much
<evilbug> sunrunner20: yeah except personally, for peace of mind really, i'd rather run it off of two ssds.
<evilbug> even if it's 120gb, that's $40/piece.
<ozymandias_> evilbug, freenas runs off of RAM
<ozymandias_> and only reads the ssd when updating/booting
<ozymandias_> and using usb frees up ports/space for storage drives
<ozymandias_> you can install on anything you want, though
<sunrunner20> just beware freenas install uses all three primary partitions
<ozymandias_> using ssd is massive overkill
<evilbug> freebsd has only 3 primary partitions?
<ozymandias_> anything of importance goes on the storage....
<ozymandias_> not on the boot drives
<evilbug> so what about 2 usbs with ssd cache?
<ozymandias_> for what?
<ozymandias_> caching.... what?
<ozymandias_> the boot drives are not written to/read from much
<ozymandias_> you can use ssd in your pools
<evilbug> or no wait i'm thinking about pfsense.
<mrelcee> i'm noticing a lot of old packages in freenas's jails. is there an equivalent of -HEAD for freenas?
<m0nkey_> use ports
<m0nkey_> it'll give you up to date software
<mrelcee> what i really want is to use my poudriere repository to install software but it gives me a complaint about being for the wrong os
<mrelcee> i kinda got sick of managing ports manually
<Conmega> Yup, did a full install and once I add the SAS controller back I get the same result trying to boot freenas, hrm.
<Freenasguy_> So yeah, it was definitely that stale snapshot checkbox, got that checked and cleared up a ton of space.
<Freenasguy_> But now i have another question. Source dataset is 1.8TB, replicated target dataset is 1.1TB
<Freenasguy_> is that normal? to have the replicated be quite a bit smaller than the original?
<Freenasguy_> both have the same compression, lz4
<m0nkey_> stale snapshots?
<sunrunner20> that's what I said
<Freenasguy_> huh, there were a few manual snaps on the source
<Freenasguy_> got those cleared up and they appear to match now
<Freenasguy_> Awesome, this will buy me a ton more space, this issue has been building up for over a year, had about 25k stale snaps
<Freenasguy_> on data that changes a lot
<mybalzitch> oh right, thats what I was gonna go look for
<mybalzitch> stale snapshots
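For anyone doing the same cleanup from the CLI instead of the GUI checkbox, a hedged sketch: the dataset name and snapshot prefix are placeholders, the `command -v` guard skips everything where zfs isn't installed, and the `echo` turns the destroy into a dry run until the listing is verified.

```shell
ds="tank/data"      # placeholder dataset name
prefix="@auto-"     # placeholder auto-snapshot name prefix
if command -v zfs >/dev/null 2>&1; then
  # List snapshots oldest-first, with the space each one pins.
  zfs list -H -t snapshot -o name,used -s creation -r "$ds"
  # Dry-run bulk destroy of the stale ones; drop "echo" once verified.
  zfs list -H -t snapshot -o name -r "$ds" \
    | grep "$prefix" \
    | xargs -n1 echo zfs destroy
fi
```

Destroying thousands of snapshots frees space asynchronously, so reported free space can keep growing for a while after the commands return.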
<sunrunner20> lol mybalzitch
<Conmega> Ok. So it looks like I hang in the boot process when it runs... "camcontrol rescan" or at least I assume its running that or something similar, because I'm able to reproduce it by leaving the JBOD unplugged, then re-plugging after boot and running that.
<mybalzitch> freed up 1% capacity on two volumes!
<sunrunner20> much storage wow
<Conmega> Well actually it didn't hang, I have a console... But disks aren't found by freenas, hrm. Guess I'll try giving my JBOD a power cycle...
<mybalzitch> I'm 1% further away from 90% usage!
<Freenasguy_> I just dropped from 95% usage to 80% on a 35TB pool
<Freenasguy_> and more getting freed up still
<Freenasguy_> soooo many stale snapshots
<mybalzitch> I apparently got 4TB out of it, more space freed up when I ran the command again
<sunrunner20> nice guys
<mybalzitch> all thanks to Freenasguy_
<sunrunner20> I don't have a 35tb volume
<mybalzitch> I have just about 32TiB free, does that count?
<mybalzitch> not on one pool
<mybalzitch> haha jk
<sunrunner20> I'm getting up there in total capacity
<Conmega> Yup, seems to be an issue with my SAS controller / PMC-Sierra SAS expander thats in the JBOD cabinet. Greatttt
<peerce> i've run several BSD and FreeNAS systems with sas expanders and HBAs without any issues
<Conmega> Yea, I'm setting up a VM with ubuntu now to see if it behaves any differently under *nix vs *BSD
<Conmega> I don't think it did when I had them actually set up under Alpine, which I have as the base OS
<peerce> well, a VM, then its the hypervisor thats managing the storage controllers, unless you're doing pci passthrough
<Conmega> pci passthrough with vfio-pci
<Conmega> the issue I'm having is that the devices aren't being told to spin up or initialize
<Conmega> so basically, we can see the disks but they aren't quite alive.
<Conmega> ubuntu gives me:
<Conmega> logical unit not ready, notify (enable spinup) required
<peerce> hmm. what sort of HBA? I've never had to do anything special for the disks on my LSI SAS HBAs to spin up
<AlVal> 11.1 beta ui - how can i set iocage jails to autostart on system boot?
<peerce> the new UI doesn't even SHOW my jails
<AlVal> peerce: it will show iocage jails
<AlVal> the new ui wont show warden jails
<peerce> huh. so can I convert my warden jails to iocage?
<AlVal> but you can make iocage jails and they will show
<AlVal> im not hanging around for some future migration ability that might come
<AlVal> i made new iocage jails and set all my stuff up in them again
<AlVal> then went into the old UI and deleted my old warden jails
<peerce> only jail i need is a mysql server that stores the data from my weatherstation
<peerce> but its working, I don't feel like redoing it
<AlVal> why would anyone bother with their own weatherstation? is it just a fun hobby thing?
<AlVal> i get it if its that
<AlVal> but theres no practical benefit, is there?
<peerce> the weather here varies wildly depending on exactly where you are, temps, rainfall mostly.
<peerce> rainfall just a couple miles apart can be double or half
<apus> iocage is just another "manager" for jails. the basic underlying structure, the "jail system" itself, is the same. so you can just create a new iocage jail with the same configuration you had for your warden "jail configuration". then you could (when the jail is down) destroy the new dataset, rename the old dataset and start your jail with iocage again. shouldn't be any difference. but please wait a few moments before trying something like that,
<apus> perhaps my case was an exception where it worked
<thinkpad> i got a vm running on freenas bhyve. it's debian. i try to ping my freenas host from the vm but get no response
<thinkpad> i only get a response when i reboot freenas
<thinkpad> if i restart the vm after that, i lose all contact with the host.
<thinkpad> anyone have this issue?
<apus> if i were to build a server to use for both storage and virtual machines (>=8 cores), what kind of (used?) xeon hardware would give me the best bang for buck while keeping the power requirements low?
<nostrora> Hi, what do you think about this
<nostrora> Someone can help me to choose the right number of disks i need?
<nostrora> My usage is 4TB for home nas
<outrageous> nostrora: Are you using 4TB drives or are you looking to have a total of 4TB storage?
<nostrora> outrageous: i need ~4TB free/usable storage
<outrageous> nostrora: Are you looking to buy new drives/hardware or are you going for used? How important is the data to you? Are you going to have backups of said data?
<m0nkey_> 5x2TB in RAIDZ2 will give you approx 6TB to work with
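m0nkey_'s figure is just (drives - parity) x per-drive size; a quick sanity check:

```shell
# Usable RAIDZ2 capacity before filesystem overhead:
# (total drives - 2 parity drives) * per-drive size.
drives=5
size_tb=2
usable_tb=$(( (drives - 2) * size_tb ))
echo "${usable_tb} TB raw usable"   # expect somewhat less after metadata/padding
```

Keeping the pool under ~80% full (as noted elsewhere in the channel) shrinks the comfortable working set further, to roughly 4.8 TB here.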
<nostrora> outrageous: 1: new hardware, new rig (my first diy nas with supermicro and freenas). important data will be synced to my different devices (phone, computer, htpc). for example my password file is very important, but it is synced to all my devices so it's not dangerous if a disk fails. backups are on my sync devices
<sunrunner20> nostrora, atoms will work if all you need is file storage
<sunrunner20> if you want to run plex or plugins you might run into CPU problems
<nostrora> sunrunner20: which problem exactly? i want to use nextcloud and some little carddav webdav caldav server
<sunrunner20> also I'd recommend buying double your anticipated usage.
<sunrunner20> lack of raw processing power is what I was referring to
<nostrora> sunrunner20: this cpu is not enough?
<sunrunner20> I uh
<sunrunner20> didn't know they made atoms with that much oomph
<sunrunner20> that should be fine
<nostrora> sunrunner20: and "only" 25 W is pretty good for me
<sunrunner20> TDP is deceiving these days
<sunrunner20> it only pulls remotely that much when its under full load
<nostrora> mirror 4TB WD red is good option?
<sunrunner20> not imho
<Cpuroast> nostrora: Intel's CPUs are pretty much all dial-a-tdp
<Cpuroast> nostrora: they can set whatever artificial power limit they want
<nostrora> sunrunner20: 3*4TB?
<sunrunner20> my desktop with a 1080 and a 110W CPU idles at something like 60W
<Cpuroast> all CPUs idle at practically nothing
<sunrunner20> nostrora, yea. want at least two parity drives for a lot of capacity. Keep in mind ZFS prefers to be kept at <80% full.
<Cpuroast> the TDP is only for OEMs to fit in their desired thermal envelope
<Cpuroast> with their cooling solutions
<sunrunner20> thats the average peak of the cpu
<nostrora> sunrunner20: so you mean RAIDZ is a good option for me?
<sunrunner20> it can actually exceed that power
<sunrunner20> if you've only got 3 drives I'd go with a mirror
<sunrunner20> you'll get insane read IOPS and throughput
<sunrunner20> like 400MB/s and 450 IOPS
<nostrora> sunrunner20: but for a mirror, 2 disks is sufficient, no?
<sunrunner20> two disks fill the technical definition of a mirror
<sunrunner20> but the same "why raid5 dies in year xxxx" applies to simple mirrors
<nostrora> Oh so you mean a mirror with 3 disks, just in case two disks fail at the same time. right?
<nostrora> sunrunner20: It seems to be a rare situation
<sunrunner20> and because of ZFS's nature, reads on a mirror are multiplied by the number of drives. Writes are still 1 drive though
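The read-scaling claim can be sanity-checked with assumed single-disk numbers (roughly 150 MB/s sequential and 150 random IOPS for a 7200 RPM drive; both are assumptions, not measurements from the channel):

```shell
# ZFS mirror scaling: reads are spread over all sides of the mirror,
# while every write must land on every side.
disks=3
per_disk_mbps=150
per_disk_iops=150
read_mbps=$((disks * per_disk_mbps))   # aggregate sequential reads
read_iops=$((disks * per_disk_iops))   # aggregate read IOPS
write_mbps=$per_disk_mbps              # writes stay at single-disk speed
echo "reads ~${read_mbps} MB/s / ~${read_iops} IOPS, writes ~${write_mbps} MB/s"
```

Which lands in the same ballpark as the "400MB/s and 450 IOPS" quoted above for a 3-way mirror.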
<sunrunner20> rare but not unheard of
<sunrunner20> We've had several people come in here with dual failures
<nostrora> What happens if a disk fails? FreeNAS will tell me by mail or something?
<sunrunner20> if you set it up, yes
<Cpuroast> you have to set up notifications
<nostrora> ok thanks
<sunrunner20> usually you set up smartd to run scans
<sunrunner20> now I have to check my 11.1 install
<sunrunner20> I assume you're implying more than configuring the smtp server Cpuroast
<Cpuroast> usually, just setting up smtp is enough
<Cpuroast> as far as I know
<sunrunner20> I'd recommend mailgun
<Cpuroast> and configuring your source e-mail address
<outrageous> I've never had a drive fail so I'm not sure about it, but it certainly spams me when replication fails.
<sunrunner20> LOL and Q_Q outrageous
<nostrora> So i think i'm going to do a mirror with 2*4TB wd red. i think that's sufficient for my usage
<sunrunner20> I have the same problem
<sunrunner20> your risk
<sunrunner20> just be sure to do the proper analysis
<sunrunner20> 8gb would likely be fine
<Cpuroast> for any new install I'd go for 16GB min
<Cpuroast> but maybe, since RAM is a bit pricey at the moment, 8GB and then add another 8GB later
<nostrora> yep :/
<sunrunner20> thats why I made my statement
<sunrunner20> its a pain to even FIND quality DDR4
<sunrunner20> it took me 3 weeks to acquire 16gb of DDR4 from crucial
<Cpuroast> Crucial DDR4 EUDIMM
<sunrunner20> *everybody* was out of stock of the kit I needed
<Cpuroast> can't order directly from the site?
<Cpuroast> sunrunner20: not even single 8GB units?
<Cpuroast> and you just get 2
<sunrunner20> no, 8gb was gone
<sunrunner20> ended up overnighting 2x4gb
<sunrunner20> so now I have 24gb ddr4
<sunrunner20> my pocketbook :(
<nostrora> What should i buy? RDIMM or UDIMM?
<sunrunner20> I'd plug your board into crucial's HCL and buy what it lists
<sunrunner20> oh nm
<sunrunner20> UDIMM most likely
<sunrunner20> read that as EUDIMM vs UDIMM
<sunrunner20> I'd love to be there when some asshole sets one of those things on fire
<outrageous> sunrunner20: The RAM wasn't available on amazon?
<sunrunner20> had to order direct from crucial and from CDW
<mgolisch> fuck why would you have that much junk in your car?
<mgolisch> i'm really lazy and i have all sorts of stuff laying around in my car but mine doesnt look nearly as bad
<sunrunner20> but its a hazard
<sunrunner20> and mine's also not nearly as bad
<sunrunner20> I have a few empty water bottles on the passenger floorboards
<sunrunner20> I should clean those up tomorrow
<m0nkey_> my car is immaculate
<outrageous> I don't waste time on keeping mine immaculate.
<m0nkey_> except for a receipt that's been there for almost a year.
<m0nkey_> and the empty tissue box
<m0nkey_> change in the cup holder
<sunrunner20> mine would be better if I could find a place for a trashcan
<outrageous> Get one of those Apple cans :P
<peerce> nostrora; UDIMM vs RDIMM depends on the mainboard and CPU and possibly how many banks and ranks are populated
<peerce> for instance, I've seen boards that accept udimm as long as no more than 1 row is populated with 1R or 2R dimms, but if more rows or ranks, then you have to use RDIMM
<peerce> nostrora; so you look at the spec, find its supported memory. if its either, you probably have to read the manual to find out the memory rank population rules.
<peerce> this is standard systems integration.
<Conmega> Hey peerce, in reference to your question from last night... <peerce> hmm. what sort of HBA? I've never had to do anything special for the disks on my LSI SAS HBA's to spin up
<Conmega> I have a SAS9207-8e, so a bog standard LSI controller, shouldn't even be too old or unsupported or anything
<peerce> Conmega; running p20 IT firmware?
<peerce> thats a SAS2308, and indeed, I've used those with as many as 50 external SAS drives
<Conmega> I never flashed firmware, I was under the impression that you did not have to flash an HBA if it was just an HBA already, only if it was something like a Dell card meant for some other purpose like raid and you wanted to re-purpose it
<peerce> well, what version is it running? sas2flash -listall
<Conmega> Let me bring up the FreeNAS VM with the JBOD disconnected, had an ubuntu VM up with it for testing.
<peerce> ubuntu should have sas2flash also, you might need to install it
<peerce> firmware version might even be shown in dmesg
<Conmega> Sooo seems quite down-level
<peerce> yeah, thats p15
<peerce> so, you want to get the latest windows/dos zip with the firmware, dig out the 2308IT.bin and the mpt2sas.bin BIOS file, and run like sas2flash -o -f 2308it.bin -b mpt2bios.bin to flash it
<Conmega> cool, I assume you mean the following: 9207-8e_Package_P20_IT_Firmware_BIOS_for_MSDOS_Windows Version: ?
<peerce> that sounds right
<peerce> ok, the firmware in that zip is Firmware\HBA_9207_8e_IT\9207-8e.bin
<peerce> and the BIOS is sasbios_rel\mptsas2.rom
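peerce's procedure, collected into one sketch. The file paths follow the P20 package layout quoted above, the `command -v` guard keeps the snippet from erroring on machines without the tool, and `-o` enables the advanced operations flashing requires:

```shell
# Flash a SAS 9207-8e to P20 IT-mode firmware plus boot BIOS
# (paths per the 9207-8e P20 MSDOS/Windows package named above).
fw="Firmware/HBA_9207_8e_IT/9207-8e.bin"
bios="sasbios_rel/mptsas2.rom"
if command -v sas2flash >/dev/null 2>&1; then
  sas2flash -listall                 # confirm controller + current version
  sas2flash -o -f "$fw" -b "$bios"   # flash firmware, then the boot BIOS
  sas2flash -listall                 # verify it now reports P20
fi
```

A reboot (of the VM, in a passthrough setup like Conmega's) is typically needed before the controller reinitializes cleanly on the new firmware.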
<Conmega> holy moly, that only took me forever to get the files into freeNAS and then flash this card
<Conmega> to say the least I know a hell of a lot more about mounting zfs pools under ubuntu now... and I know that freenas-boot/ROOT/default is the pool for the actual install
<peerce> why not just scp them to /tmp or whatever?
<peerce> or put them on a http server on your LAN, and wget/curl them.
<Conmega> didn't feel like setting up an http server right now / dont have one set up and didn't even think of scp...
<Conmega> though it might not be happy about resetting the card since its in a VM with it passed through... its stuck on `mps0: Reinitializing controller,`
<peerce> scp is about the ONLY way I move files around anymore
<Conmega> seemed to flash properly though
<Conmega> yea, had to reboot the vm but its flashed now
<Conmega> suppose I'll give it a whirl bringing it up with the disks now
<ozymandias_> peerce, in some cases rsync makes slightly more sense
<ozymandias_> handles partial updates better
<mrelcee> i figured out that the iocage jail at cli, and the jail manager in the fancy space age alternate UI on freenas, allow you to make actual freebsd jails... so i'm back in business, able to use my poudriere repository to keep jails updated.. yay
<mrelcee> kinda half assed doing it that way...
<mgolisch> i kinda like the iocage cli
<mgolisch> also it saves all its stuff in zfs attributes so the jail config is totally independent of freenas
<mgolisch> kinda like that
<mrelcee> not complaining about that. just about including the old manager in one side and the new one in the other ui
<mgolisch> also the iocage jails can be updated
<mgolisch> that never worked for me with the warden jails
<mrelcee> i found out googling on how to hack a freebsd jail onto freenas
<mrelcee> on an unrelated site
<mrelcee> anyway I ported over all my apps from a vm and moved them into a jail and got it all running well. saturday well spent
<peerce> i managed to update my 9.3 jail to 11.1, there was one extra 'pkg' command that most howtos neglected to mention. I probably should have written it down
<Conmega> anddd rip, same problem... Attempt to query device size failed: NOT READY, Logical unit not ready... on every drive, uhg
<DrKK`> peerce: You're saying,
<DrKK`> you updated a FreeBSD 9 based jail,
<mrelcee> recovered a few hundred gigs on the vm image (or will once i'm positive one of my precious scripts isn't still locked inside it...)
<DrKK`> to a FreeBSD 11 based jail?
<mrelcee> heh. 9.3 to 11.1, i think I'd build over...
<peerce> hmm, maybe it was a 9.10 jail... but after what most howtos said to do didn't work, I found one that had an extra command up front (can't remember what it was) that fixed things so pkg update (?) worked.
<mrelcee> i noticed that despite saying 11.1-RELEASE in iocage, what I actually got was 11.1-STABLE. but no complaints
<mrelcee> it would be nice if the webui would automagically do the nullfs mounting for you.. I spent a half hour googling the proper procedure for iocage to make it happen manually..
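For the manual nullfs mounting mrelcee mentions, iocage has an `fstab` subcommand; a sketch with placeholder jail and path names (the entry fields mirror fstab(5), and the guard skips it where iocage isn't installed):

```shell
# Add a nullfs mount to an iocage jail from the CLI.
# "myjail" and both paths are placeholders for illustration.
jail="myjail"
if command -v iocage >/dev/null 2>&1; then
  iocage fstab -a "$jail" /mnt/tank/media /media nullfs ro 0 0
  iocage fstab -l "$jail"   # list the jail's fstab entries to confirm
fi
```

The destination path is interpreted relative to the jail's root, so `/media` here ends up inside the jail, not on the host.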
<mrelcee> like in the old setup
<mgolisch> yeah the new ui is probably not finished
<mgolisch> hopefully they add an option to manage that on the individual jails
<mgolisch> i kinda disliked how it was managed in the old ui
<mgolisch> like the mounts would stay there even if you deleted a jail
<mgolisch> thats kinda stupid
<mrelcee> i'm just happy though to have tools that do work
<Cheezehead> Running a resilver on a raidz2 setup with 8x2TB drives. Does a resilvering speed of 206M/s seem typical?
<mrelcee> and to my knowledge i've done nothing hackish that will require fixing when I update freebsd
<peerce> Cheezehead; yeah, sounds about right
<peerce> it might even slow down as it gets towards the end of the drive. remember, its reading 7 and writing one.
<Cheezehead> got a feeling once i get it clean and replicated it's gonna get wiped and set up in mirrors
<mrelcee> i need to order a couple spare drives for my raidz2 just in case one dies. i was planning on it but always something better to spend the money on
<mgolisch> i just bought one, hope thats enough
<mgolisch> but it has not failed yet