imacdonncould anyone tell me what is the correct way to specify the path to the sudo executable? I have sudo_exe in the [defaults] section of ansible.cfg, but that is apparently deprecated, yet I've been unable to find its replacement
agaffneyimacdonn: probably become_exe or something like that
imacdonnagaffney: that wouldn't really make sense, since become is supposed to be a level higher .. perhaps my playbook targets multiple different platforms, each with different mechanisms for "becoming"
mmercerwhy would sudo not be in an expected location in the first place ?
agaffneythen it needs to be set on a per-group basis, assuming you have groups for OS
agaffneyimacdonn: become is generic. the sudo keywords are being deprecated because they are not generic
bcocammercer: cause people do really weird stuff with sudo .. there are many 'substitutes'
imacdonnmmercer: by "expected", I suppose you mean "the first one in $PATH" ... maybe it's installed in a special location that's not in $PATH, or maybe there could be multiple versions accessible, and the first one in $PATH is not the desired one ... presumably there was rationale for the sudo_exe originally
bcocamore likely /opt/sbin/proprietary_sudo_like_exe
mmercerbcoca: true... i guess i just look at it very differently -- having a 'custom' sudo scares the crap out of me
imacdonnagaffney: I get that .. but the path to the sudo executable is sudo-specific ... IMO, that option should NOT be deprecated
bcocammercer: but .. its 'enterprise'!
mmercerACTION buries his head in the sand
Virtual-PotatoSecurity through obscurity!
agaffneyimacdonn: are there also separate config options for the su, doas, pfexec, etc. binary locations? if not, do you really think there should be? also, a global config for that doesn't make sense, since each host could need a different settings
imacdonnagaffney: yes, I think there should be .. I haven't checked if they exist or not
agaffneythat would get excessive, and still wouldn't work on a per-host basis
bcocayou can set ansible_become_exe per host/group/as var
Sheogorath[m]There are config options per host:
Sheogorath[m]just put it into your inventory ;)
agaffneyuse the ansible_become_exe variable
imacdonnhmm, OK, I guess that could be made to work
imacdonnit's not a direct replacement for sudo_exe in ansible.cfg, which makes the deprecation feel a bit awkward
imacdonnthanks for the responses .. I'll ponder a bit
agaffneyyou could stick it in group_vars/all, and that's mostly equivalent to setting it in ansible.cfg
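A sketch of what that group_vars approach might look like (the paths and the pfexec group are hypothetical examples, not from the discussion):

```yaml
# group_vars/all.yml -- roughly equivalent to a global ansible.cfg setting
ansible_become_exe: /opt/sbin/sudo     # hypothetical non-standard sudo path
---
# group_vars/solaris.yml -- a group whose hosts use a different mechanism
ansible_become_method: pfexec
ansible_become_exe: /usr/bin/pfexec    # hypothetical path
```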
imacdonnagaffney: except I have one ansible.cfg for all environments that I manage ... vs. separate group_vars for each
mmercerhmmm... so blocks cannot have dissenting when clauses in them ?
imacdonnsomeone made a fair point that the sudo path could be different in one env vs. another (it's not, in my case, but hypothetically, it could)
bcocablocks are not conditional, you can only have 1 when clause per object, but a when clause on a block will be inherited by the tasks within
bcocaimacdonn: i've found people having a different sudo path per user
mmercerahh, interesting, had not realized that. tnx bcoca
bcocammercer: that said, not sure i parsed your sentence correctly, an example might help
imacdonnfor me, it was convenient to set it once in ansible.cfg and forget about it .. until it got deprecated
mmerceri had tried to group 2 identical tasks that only differ on 1 element each and which condition activates them, figured it would be easy to recognize them that way
bcocawhen is also a list, so you can have multiple entries (implicit and)
mmerceri know, i use that one quite regularly
bcocaeach task can have it's own when
mmerceri wish there was an 'or' list format, lol
bcocathere is! when: a or \
bcoca b
bcocaACTION ducks
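For reference, the two forms being discussed might look like this (task names and variables are made up):

```yaml
- name: implicit AND -- every entry in the when list must be true
  debug:
    msg: "both conditions held"
  when:
    - a is defined
    - b is defined

- name: OR -- written as a single expression instead of a list
  debug:
    msg: "at least one condition held"
  when: a is defined or b is defined
```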
mmercerblock wasnt needed at all, but i figured id use it because it allowed me to visually parse them quickly... unfortunately it didnt work with it defined that way
mmercerahhh, looks like it might not be the block thats the problem after all.... might be a variable not getting set as id expected
bcocayou can set 'when: engineering is defined' on the block, that should not be an issue
bcocabut 'each task' will be skipped, not the block
bcocathough i would
mmerceryeah, thats what i noticed was that it was skipping both tasks
bcocawhen: engineering|default(True) is False
bcoca^ might need a |bool
mmerceri didn't realize we could combine the bool expression like that
gomixACTION o/ abadger1999
abadger1999gomix: ¡Hola!
gomixabadger1999: saludos desde .ve :)
gomixla casa de ansible en latam
bcocaACTION goes to the ansible house to test the craft beers
EverspaceWhat am I doing that's producing a string that says <generator object do_map at 0x7f0904be5518> in this paste?
EverspaceI mangled that paste, but the "gist of it" should be there
abadger1999Everspace: Not sure if this is it but try piping to list
abadger1999 changed_sites: "{{
abadger1999 git_status.results
abadger1999 | select('changed')
abadger1999 | map(attribute='item')
abadger1999 | list
abadger1999 }}"
EverspaceOh there, I was going to say the debug | list just gave me each character in an array :P
abadger1999Don't say thanks yet, it might do the same thing there ;-)
EverspaceThat seemed to do it abadger1999, thanks
EverspaceA little obnoxious though
dur117Could someone recommend a module/technique to copy the contents of a file and append it to another on a remote host? blockinfile would have been perfect, however it tries delegating to the control machine and thus cannot find the src file.
bcocadoes it not have a remote_src option?
dur117I just tried and it didn't complain, maybe I misjudged. it still complains about the block: (I'm using a file lookup), is it possible to delegate that to the remote host?
bcocalookups are always local
bcocause slurp
bcocaor fetch
dur117Thanks, I'll give them a go
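A sketch of the slurp approach for dur117's case (file paths are hypothetical): slurp returns the remote file base64-encoded, so it needs a b64decode before being handed to blockinfile.

```yaml
- name: read the source file from the remote host
  slurp:
    src: /etc/snippet.conf          # hypothetical source file
  register: snippet

- name: append its contents to another file on the same host
  blockinfile:
    path: /etc/target.conf          # hypothetical destination file
    block: "{{ snippet.content | b64decode }}"
```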
mmercerdid something change recently in 2.5 that affects how 'true/True/false/False' are handled? as well as the 'when' conditionals ?
mmercerive used 'true' for the longest time, and now it is apparently not working on 2.5.0a1
fishcookeransible $ sudo cpan XML::Simple
fishcookeri've found
agaffneymmercer: that sounds like something you should create a github issue for
mmerceragaffney: im just curious if something changed in general
mmercerif something did change for it, ill absolutely do it, right now im just trying to get our deploys working again :(
bcocammercer: it should work, but i always use |bool to ensure things
agaffneymmercer: that sounds like the sort of thing that wouldn't change late in the release cycle like this, and not without deprecation warnings going off
mmercerrerunning the play now with -vvvvv and stdout callback set to default to see what info i can glean to try and get more information on where it broke, but it seems to be on a play that definitely worked in the past
agaffneyand double-check with 2.4.x, depending on how far in the past it worked last
mmerceragaffney: it worked as of 2.5.x devel
mmerceri had been running on a 2.5.x devel branch from git, i just updated to 2.5.0a1 a few days ago
mmercer -- as best as i can tell, its skipping the top tasks, which of course explains why the list operation fails
mmercerthats with stdout_callback set explicitly to default
mmercercli includes -e 'new=True'
mmercerits being included via import_tasks with a conditional, and i know that conditional is passing as its including the tasks.... i just don't see why the top level operations don't even seem to be triggering at all
bcoca^ that is probably 'string type'
bcocachanges to typing vars might have affected this, but those have been in devel for over a month
bcocaor 2
mmercerbcoca: quite possible -- i believe i was on a very early 2.5 devel, so its possible that the more current changes are what happened
flowerysongYou should also fix the tasks so that they work when the first two are skipped, which they can't at the moment.
mmercerflowerysong: they *cant* work without that information, it will literally break our entire deployment, it has to fail
mmercerotherwise we wind up with incorrect states in the load balancers
mmercerand that gets even worse xD
flowerysongThen why is it conditional?
mmercerthe conditional is at the block level, the lower two depend on each prior task in order
flowerysongRight. So if the conditional is false, the third task will always fail.
flowerysongWhich is what you should fix.
bcocaconditional affects all 3 tasks
mmercerACTION nods...
mmercerwhich is what i expect?
bcocainstead of new == 'True' < the type might be 'fixed' for new, and True != 'True'
bcocaboolean true is not string true
flowerysongbcoca: mmercer: The loop on the third task references keys that don't exist when the first two tasks are skipped, and loops are evaluated before conditionals.
bcocawhen: (new|default(False))|bool
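Applied to a task, bcoca's expression might look like this (the variable name comes from the discussion):

```yaml
- name: only runs when new is passed in and truthy
  debug:
    msg: "running the guarded task"
  # default(False) covers the undefined case; |bool covers the
  # string-vs-boolean ambiguity of a cli-supplied -e new=true
  when: (new | default(False)) | bool
```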
mmercerwell, i used to have it as 'when: - new is defined\n- new
mmercerbut that doesnt appear to work anymore (and new used to be set to new=true) in the invocation
bcocaflowerysong: yeah, but 2.5 defers the error and waits for conditional resolution
bcocammercer: you were relying on the type being a string, use my expression above, should work in ALL cases as we explicitly cast to boolean
bcocaflowerysong, mmercer but it seems like the 'defer' is not working
bcocawith_items: "{{ (alb_targets|default({}).stdout|default({}) | from_json).TargetHealthDescriptions|default({}) | map(attribute='Target.Id') | list }}"
bcoca^ so that task 'always' works
bcocas/works/doesnt throw syntax errors/
mmercersure, but then i can have a state where the load balancer isnt properly managed ?
bcocanew=false would create that error also
mmercerahh, gotcha
bcocathe change will only prevent the syntax error, not force execution, teh when should still control that
mmercerlooks like i have a fair few bool conditions to correct now =D
mmercerhadnt realized i was relying on them as string
bcocacommon issue, dont know how to document it well, if you want bool, use |bool
bcocaif you want int, use |int
bcocain the end its a lot more predictable
mmercerACTION nods. i recall seeing the jinja mentions of using |bool and similar, i just didnt understand how it impacted things
bcoca^ always at 'consumption point' not on definition
bcocajinja by default creates strings, but that does not mean that vars are internally evaluated as such while constructing the expressions
mmercer"template error while templating string: expected token ')', got '.'. String: {{ (alb_targets|default({}).stdout|default({}) | from_json).TargetHealthDescriptions|default({}) | map(attribute='Target.Id') | list }}
mmerceri dont see the missing )
mmercerit 'reads' correctly... as best as i can tell
mmercerdamnit. its still skipping those tasks.
agaffneymmercer: I think it's complaining about the .stdout. you need to put an extra set of parens around 'alb_targets|default({})'
mmerceri had wondered but wasnt sure =D
mmercerand i dont like the stricter bool type checking xD lol
mmercerso as a cli -e should it be defined like new=True|bool ?
flowerysongNo, you can't pass in typed variables with k=v. You need to use YAML/JSON extra-vars.
agaffneythe key=value format only produces strings, because there's no way to type hint
agaffneyyou can do -e '{ "new": true }' to get an actual bool
mmerceroooh, nice
mmercerill give that a shot.
agaffneyand to be sure, you can always use |bool on consumption
flowerysongmmercer: {{ ((alb_targets | default({})).stdout | default('{}') | from_json).TargetHealthDescriptions|default({}) | map(attribute='Target.Id') | list }}
flowerysongYou were missing parentheses and needed to pass from_json a string instead of a structure.
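Put into a task, flowerysong's corrected expression might look roughly like this (shown here with default([]) on TargetHealthDescriptions so map always receives a list; the variable names come from the paste):

```yaml
# The outer default must be the string '{}' because from_json parses text;
# the remaining defaults guard against the upstream tasks having been skipped.
- name: loop over target IDs even when the registering tasks were skipped
  debug:
    msg: "target {{ item }}"
  with_items: "{{ ((alb_targets | default({})).stdout | default('{}') | from_json).TargetHealthDescriptions | default([]) | map(attribute='Target.Id') | list }}"
```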
mmercerleave it to me to update ansible without changing major or minor versions, and still get caught in major changes that break deployment xD
mmercerheeey, were past that part, yesss
mmercer... but it completely skipped the entire task set :|
mmerceryeah, this makes no sense.... its skipping each of the tasks in the block, even though im using the checks that were recommended, new is defined.... im stumped.
mmercergoing to head home, then ill try and argue with it more when i get there
SaravanakmrHi, basic question - I see a line like this - host_cnt: "{{ groups['maingroup'] | length }}" - so what is the purpose of | here - This is something similar to | in linux command line ??
Saravanakmrso, groups['maingroup'] | -> this just outputs the total elements ??
flowerysongSaravanakmr: The pipe is Jinja syntax for calling a filter. See
Saravanakmrflowerysong, so here we are calling this filter length(count is alias) ?
mmercerdoes not matter how i define it, it is skipping every one of the plays on a *statically* imported include... wth
Saravanakmrflowerysong, thank you :)
mmercerwith import_tasks that have a conditional -- the import is static, but the condition is not evaluated until execution, right ?
bcocathe conditional is NOT evaluated for the import
bcocaits prepended to the imported tasks
mmercerACTION nods.
mmercerwonder if it was that conditional that was failing this whole time, ill know in just a minute, since it appears that one still used the legacy 'string' comparison
mmercerbcoca: so because of them being imported, but not evaluated until after the fact, the tasks would be displayed, but there would be no execution chain, correct?
bcocait would be 'attempted to execute' and skipped if the condition fails
bcocas/fails/is false/
mmercer* nods * and it looks like that was the conditional that was failing
bcocaso the 'tasks are executed' , but the result can be a 'skipped task'
bcocatask execution != module/action execution
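In playbook form, the behaviour bcoca describes is roughly this (the file name is hypothetical):

```yaml
# The import itself is static and always happens at parse time; the when
# is copied onto every imported task and evaluated per task at run time,
# so each task shows up as "skipped" rather than the import vanishing.
- import_tasks: deploy.yml        # hypothetical file
  when: (new | default(False)) | bool
```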
mmercernearly resolved, just have this one left- Unexpected templating type error occurred on ({{ (alb_targets.stdout | from_json).TargetHealthDescriptions | map(attribute='Target.Id') | list }}): expected string or buffer
nitzmahoneAnsible 2.5.0 Beta 1 is now available via PyPI (`pip install ansible==2.5.0b1`) and GitHub. The 2.5 release is now considered feature complete. Please put it through its paces and file a Github issue if you hit any snags. Thanks!
mmercerfun fun..
mmercernitzmahone: is there a page that goes over what 'typing' changes were made during 2.5?
nitzmahone@mmercer: draft 2.5 porting guide at
mmercernow to see if stdout results was one of them
pastulioI am not quite sure how to use the '--vault-id' option in ansible-vault >=2.4. What exactly does the 'id' part do? (the stuff before the @ symbol)
pastulioI know you can use the --vault-id flag multiple times, with multiple files, but I don't know how to use the same file with multiple ids?
FluorHello! I have a situation where i want to get rid of an "expected" failure-message. The red message scares my cow orkers (and me), even though it works fine. Anyone care to take a glance at the example code in ?
spufiinclude a when statement?
Fluorheh. hmm. my mind was stuck on trying things with |default([0]) and such.
kradalby I would like to select a random host from the given group, i try to do ` hosts: "{{ groups.all | random }}" ` but that does not work, giving an empty host list. Does anyone have any suggestions on how i can do this?
pastulioSeems to work fine for me
pastulioSorry, it seems to work only about 50% of the time for me too
bitlanHi, how do i add a new user in a chroot'ed system via ansible? is it possible only via exec ?
kradalbypastulio: for me it works if i use it in debug, but not in the host parameter :S
kradalbyand it acually work if i only have one host in the list, but not more than one:P
pastulioFor me it works about 50% of the time when I put it in hosts: and always in debug (Ansible 2.4)
kradalbyhmm weird
pastulioMaybe you can try to use delegate_to
kradalbyok will read up on that one, thanks
pastulioJust include all hosts and delegate it to a random host
pastulioIt basically delegates a task to a host you specify
pastulioYou should then also add run_once: true
kradalbydo you know if that can be used in roles? or does it have to be used on a task?
Kim^JI have this expression: "{{ item.home_dir | default('/home/' + item) }}/.aws"
Kim^JAm I wrong to assume that when item is a dict and it has the property home_dir, it shouldn't run the default function?
Kim^JBecause right now, it's running the default part even though the item has a property home_dir
pastulioI am not sure kradalby, we use delegate_to on task themselves, but for our own created roles, we just add a variable to the role "<role>_delegate_to" and add this to all the tasks
kradalbypastulio, ok ill try, and if not, that soudns like a reasonable workaround
pastulioBut I may be overcomplicating things for you
kradalbynah, its only three tasks in the play so not to bad
pastulioI'm no expert, so maybe there are better solutions
pastulioOk, you can also use the "block" statement, so you only have to specify the delegation once
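Sketching pastulio's block suggestion (group and task contents are made up). One caveat worth noting: the random filter is re-templated each time it is evaluated, which would match the "works about 50% of the time" symptom seen earlier.

```yaml
- hosts: all
  tasks:
    - block:
        - name: task intended for a single random host
          ping:
      # run the block once, delegated to one host picked from the group;
      # the expression is templated per task, so pinning the choice with a
      # set_fact up front may be needed if tasks must all hit the same host
      delegate_to: "{{ groups['all'] | random }}"
      run_once: true
```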
kassavhello, any expert with json_query?
kassavi did a query and i got this
kassav"file.ear: GROUP1 , GROUP2"
kassavand i want to expand it in two lines
kassavfile.ear: GROUP1
kassavfile.ear: GROUP2
pastulioKim, is it possible to post a bit more of the play so I can test?
rvgatekassav, just make a list?
kassavrvgate: as simple as that yes
Kim^Jpastulio: Sure, hang on a sec.
rvgatekassav, question tho... you want to know the syntax, or you want to do it with an existing var and convert it on runtime ?
rvgatekassav: syntax ->
kassavrvgate: could you please copy that in
kassavi don't have access to gist
Kim^Jpastulio: I'm guessing the problem is the second access of item, but I'd like to not use that at all if .home_dir is defined.
pastulioFor me the code seems to be working as expected
pastulioI get /home/ubuntu and /var/lib/jenkins
pastulioI did change the concat from + to ~ because it was giving me an error
Kim^Jpastulio: Aahhhh thanks!
Kim^J~ did it.
pastulioNo problem :-)
Kim^Jpastulio: Where is this documented?
pastulioThe problem is that 'item' was an object, and not a string
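The fix being described, written out (items are made up to mirror Kim^J's mixed list). The likely reason `+` failed: the argument to default() is evaluated even when home_dir exists, and '/home/' + a dict is a type error, whereas `~` stringifies its operands first.

```yaml
- name: template each user's .aws path from a mixed list
  debug:
    msg: "{{ item.home_dir | default('/home/' ~ item) }}/.aws"
  with_items:
    - ubuntu                               # plain string item
    - { home_dir: /var/lib/jenkins }       # dict item, default() is discarded
```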
survietaminehello, from this slideshow, I don't get what is "idrac" in getdata.yml:
survietamineI don't see any module named idrac (I know what is DELL iDRAC), but in this playbook for the local_action?
survietamine(page 33)
survietamineah, ok, maybe it's the module available here:
kassavrvgate: it's not what i wanted
kassavi actually make a json query that returns a list [a,b,c]
kassavto that i add an external value to get value: [a,b,c]
kassavin a second step i want to expand to get 3 values
kassavvalue:a , value:b, value:c
kassavi'm not sure if it's possible to do that
pastulioSo you just want to update the values in the list?
pastulioJust to be clear, is this in an playbook file or in a jinja template file?
kassavpastulio: a playbook
pastulioSorry it took so long, I was trying to find a better solution
pastulioYou can use inline jinja templating
pastulio"{% for item in your_list %}value:{{ item }} {% endfor %}"
pastulioThat will return a string of "value:a value:b value:c"
pastulioyou can add \n for the newlines
pastuliokassav: I found a better way: {{ somelist | map('regex_replace', '^', 'value:') | list }}
Floflobelhello, I try to add a LDAP user with "user" ansible module but it's doesn't work because the user doesn't exists locally. What's the best method for edit /etc/group and add user to my group ?
kassavpastulio: let me check
mmercerpastulio: --vault-id isnt for the same file with multiple ids, --vault-id is a multi vault secret implementation, and its a way of tying in which vault uses what key...... so if you have say 3 vaults -- all, test, prod -- all can have things that are global in your environment, and test/prod can be your group level vars... you can use both all and test/prod in the same invocation
survietamineFloflobel: the ldap_entry module doesn't do the job?
kassavpastulio: it's better, but i didn't succeed
kassavbecause the value that you make static is already in a loop
Floflobelsurvietamine: no, my ldap users are imported with SSSD, I do not need to get LDAP entries
pastulioThanks mmercer, but I don't think I quite fully understand it. So the part before the '@' is actually the name of the vault file? for example: group_vars/test
pastuliokassav: Sorry, I am not completely following what you are trying to accomplish.
pastulioThe value is a string of "[a,b,c]"?
kassav"[a,b,c]" are a list
kassavand the value is an item that i loop over
kassavitem: [a,b,c]
mmercerpastulio: the part before the @ is the 'identifier', so that ansible has a way of knowing which vault file uses what key, in theory anyway
pastuliommercer: Ok, so it is just meant to isolate the password to use, instead of trying them all one by one?
pastuliokassav: You can just use: with_items: "{{ somelist | map('regex_replace', '^', 'value:') | list }}"
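Put into a task, that filter chain prefixes every element and loops over the result (list contents are made up):

```yaml
# iterates three times, with item set to value:a, value:b, value:c
- name: emit one iteration per prefixed element
  debug:
    msg: "{{ item }}"
  with_items: "{{ ['a', 'b', 'c'] | map('regex_replace', '^', 'value:') | list }}"
```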
pastulioI think I am not fully getting it, could you post the part of the playbook?
MKS2020hello! Is it possible to have personal passwords for ansible-vault already?
mmercerpastulio: prior to --vault-id, you could have multiple passwords, but only *1* could be used at a time, now you can use all of them at once
mmerceralrighty, i give up for the night, will be back tomorrow morning to try to resolve the last issue xD
MKS2020mmercer: is it possible to add new vault passwords by hash? (for example for a new developer)
rvgateMKS2020, like an additional password?
MKS2020rvgate: yes
rvgateMKS2020, i think its limited to 1 password, but not entirely sure... i've only used 1 password for all my ansible managed projects
MKS2020rvgate: 2.4 has the --vault-id option and it could have multiple passwords. Looking for the way to add/revoke vault passwords
masterkorphello everyone
dpazgreetings friends . I'm trying to use ansible in a closed environment which doesn't have pip / setuptools . I've copied the source tarball to the destination machine and installed all the dependent modules under a non standard dir (/usr/local/private/lib/python2.7/site-packages) and configured the PYTHONPATH to this path . when running the ansible bin it says " import ansible.constants as C
dpazImportError: No module named ansible.constants"
dpazwill appreciate some help with getting this to work and some hints on running it from source in a "non standard way"
rvgateMKS2020, if you want to revoke an old one, you can re-encrypt the files/entries with a new password
rvgateMKS2020, you cant really revoke an existing one as it is using the password itself to decrypt it
MKS2020rvgate: yep, looks like my idea to provide everyone own password for vault has no sense because anyway i’ll need revoke all API keys stored in the vault in case of member off-boarding…
MKS2020BTW, is it possible to specify which vault file should be looked-up for encypted variable? i.e. secret_var: “{{secret-var@~/.vault-dev}}” ?
mmercerNope, but prs are welcome
dpazHey I'm trying to use ansible in a closed environment which doesn't have pip / setuptools . I've copied the source tarball to the destination machine and installed all the dependent modules under a non standard dir (/usr/local/private/lib/python2.7/site-packages) and configured the PYTHONPATH to this path . when running the ansible bin it says " import ansible.constants as C
dpazImportError: No module named ansible.constants" .How do I get this to work ?
MKS2020am I right that “multiple vault passwords support” is needed only to have everything in the same vault-file? Did I miss any other benefits of using this feature?
mmercerI think you're misunderstanding how it works
mmercerMultiple vault passwords means you can now have multiple independent vault files, each with its own password, and they can all be used at once now, instead of only vaults with the same password being usable at one time
jhawkesworth_dpaz: you could try running 'source hacking/env-setup' which is really meant for setting up paths for developing ansible, but might do what you need
MKS2020mmercer: Still can’t get why someone would need different vault passwords per environment… Vault-password-file could be a script which points to the correct vault file according to inventory/whatever/. Anyway I can’t use SSO or U2F and clear text passwords must be shared between team members.
dpazjhawkesworth_: thanks ,but if I read correctly it requires pip which I also don't have as well as easy_install
jhawkesworth_dpaz: don't think it does because you can clone the git source and run that script
jhawkesworth_dpaz: but there are a bunch of dependencies that ansible needs, I wouldn't know how you can get them without a connection
dpazjhawkesworth_: git source of easy_install or env-setup ?
jhawkesworth_dpaz: i mean git source of ansible
dpazI've already copied the dependencies to the remote machine
dpazeverything that ansible needs that is
jhawkesworth_ok well above mentioned script might be able to set up the paths you need
dpazproblem is , it's an environment without WAN access and we can not install stuff off the internet . the dependencies were modified by guys in another division in my company so I was able to use those
jhawkesworth_ansible.constants is part of ansible
dpazOK , i'll give it a try
dpazyeah but even if I take the source as is and place it on the machine , cd to bin and exec ansible it gives me that ansible.constants error
jhawkesworth_looks to me like when i run source hacking/env-setup it adds ~/ansible-development/lib folder to PYTHONPATH environment variable
jhawkesworth_so maybe you are just missing that?
jhawkesworth_(where ~/ansible-development is the dir where I have the ansible source code)
dpazyeah could be I"ll give that a try
dpazjhawkesworth_: when installed ,the module ansible.constants should be under /path/to/site-packages? that's where it obtains it from ?
Zhenechdpaz, it's part of ansible. so it'll be in the ansible folder
Zheneche.g. /usr/lib/python2.7/site-packages/ansible/
dpazok I see I have it there (/usr/local/GWS/ansible/lib/python2.7/site-packages/ansible/
dpazand my python path is PYTHONPATH=/usr/local/GWS/ansible/lib/python2.7/site-packages/
dpazbut it still doesn't work . what am I missing Zhenech ?
dpazbtw thanks for the help guys :)
hbfWhat's a good action to use when I don't want an action, just 'notify: ... when: ...' ?
hbfor maybe i should say there's no module i want to run
bartmonhbf, you want to create a callback on some random event/condition in the future?
bartmoni don't believe ansible is made for this
hbfNot random. I want a callback if a variable has a particular value.
bartmonhbf, wait_for might be useful
hbfno, I don't want to wait for anything. I want something like "- shell: true /// notify: foo /// when: bar" - that's kind of right since shell always returns |changed, but it's wrong when doing --check.
bartmonhbf, is there actually a script you want the shell module to run?
hbfNo. I simply want to notify handler foo when bar. But ansible requires me to run some module.
bartmonif it outputs anything you could check if the output (or part of it) satisfies some condition in the `when` clause
zamolxishi all. any idea why 'uri' or 'get_url' are not using my user and password when I try to download a file?,14,30,43
hbf..that is, notify foo _if_ variable bar is true
agaffneyzamolxis: iirc, you'd also get a 401 if the credentials are incorrect
bartmonhbf, i think you should just notify the handler in any case from the play and check the condition in the handler play
hbfzamolxis: Start with - debug: var=sw_user and - debug: var=sw_pass to see that the variables are correct.
zamolxisagaffney: the credentials are correct. wget works great with them
dpazOK so I've placed all my ansible modules and dependencies in /usr/lib/python2.7/dist-packages and set the PYTHONPATH to it . I've installed the ansible deb (dpkg -i --force-deps ansible..) and when running ansible it throws this error : Traceback (most recent call last):
dpaz File "/usr/bin/ansible-playbook", line 40, in <module>
dpaz import ansible.constants as C
dpazImportError: No module named ansible.constants
dpazwhat am I missing ?
agaffneydpaz: why are you manually placing files under python's dist-packages dir, and why do you need to use --force-deps to install the ansible DEB?
hbfbartmon: notify how? "- notify: foo" isn't a valid task, it needs to run something. And no, checking in the handler would be wrong
agaffneyzamolxis: they may be correct in general, but that doesn't mean that you didn't enter them wrong for ansible, or that something hasn't happened to them. use the 'debug' module like hbf suggested to verify
agaffneydpaz: scrolling up and skimming the previous conversation, are you not using the system python? the DEB will only work with the system python, and trying to run ansible from the DEB with a non-standard python will fail to find the ansible library
agaffneydpaz: if you aren't using the system python, you need to install using 'pip' from the alternate python
bartmonhbf, i don't know your specific situation but my opinion is that if you don't actually do any state change in the play, a handler is just a complication. how about cutting out the middleman (the no operation play) by moving the handler play to the play where you want to invoke the handler?
hbfWell, several tasks notify the handler. One of them is this one which checks this variable, which was set in another role. So if I copy the handler into a task, it gets run twice. Though I guess I could replace each 'notify' with a 'register' and run that task if either registered variable |changed.
dpazagaffney: yes that's correct, but I tried with the system python as well and that also didn't work
dpazalso tried running the bin from the source code which produced the same error
agaffneydpaz: supported install methods are RPM/DEB with system python, or using 'pip' with any other python. aside from those, you're kinda on your own, as it's no longer an ansible problem when it doesn't work
agaffneywhen running directly from a code checkout, you need to first run 'source hacking/env-setup'
dpazagaffney: yep , that's true
dpazwas hoping to get some out of scope help but thanks anyway :)
bartmonhbf, maybe use the meta module for a task `- meta: noop` with when and notify clauses
bartmonit's at least portable.
hbfbartmon: neat idea - except it didn't work, ignores changed_when:-( You put me onto another one which does work though, assert: { that: true }.
hbfso, - assert: { that: true } /// changed_when: foo /// notify: bar. A bit ugly, but now I can get on with things
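The workaround hbf landed on, written out as a task (handler and variable names come from the discussion):

```yaml
# a no-op action whose only purpose is to report "changed" when foo is
# true, which in turn triggers the handler
- name: notify handler bar when foo is true, with no real action
  assert: { that: true }
  changed_when: foo | bool
  notify: bar
```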
bartmonhbf, not the prettiest for sure but much cleaner than using an unrelated command or shell
kassavhi again,
kassavi have a sed commands to insert in a playbook
kassavbut it returns syntax errors due to specials chars
agaffneywelcome to quoting/escaping hell!
kassavany idea to deal with that
kassavagaffney: :(
agaffneyuse the 'replace' module
kassavcommands are related to a txt file
agaffneythe 'replace' module operates on files
agaffneyit's different than the regex_replace() jinja filter
kassavis it better to insert commands in a file then make a call?
agaffneyI have no idea what you're asking there
kassavmanually run ./
kassavin ansible
agaffney"better" is a subjective term. it depends on the use case and what you feel like maintaining
kassavit's to escape special chars
agaffneycan you show a gist/pastebin of what you're trying to do and the error you get?
agaffneyin that case, the YAML parser is complaining about the ":" in the line. you need to use quotes around the entire thing if there are colons in the value. another option is using 'shell: >' and nesting the command under that, which avoids one level of quoting problems
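The 'shell: >' form being suggested, sketched with a stand-in command (the actual sed command from the paste is not reproduced here):

```yaml
# a folded block scalar: colons and quotes inside the command no longer
# fight with YAML's own quoting, which removes one layer of escaping
- name: run a command containing YAML-hostile characters
  shell: >
    sed -e 's/pattern: old/pattern: new/' /tmp/example.txt
```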
agaffneykassav: also, what exactly are you trying to do here? why are you stripping out double quotes and "[][]", replacing commas with newlines, removing blank lines and spaces, and adding spaces after colons in this file?
agaffneythat all just seems like a terrible idea in general
kassavagaffney: yes i know, it's getting a file in an yml format
agaffneysed/tr are not appropriate tools for creating/editing YAML :)
NogNeJaperHi, Is it possible to pass variables to an import_playbook statement?
mrproperI was on the Ansible 2.5 loop presentation yesterday. I want to see the example playbook but can’t find it. Can someone point me in the right direction?
BiQhi, anyone has any good ideas how, given host's ipv4 address, I could get the network interface name into a variable in ansible?
BiQdo I need to use command module and grep/awk stuff from "ip addr show" ?
mrproperBiQ: You should be able to pull it using gather_facts
mrproperI’d have to look to confirm, but that seems doable.
robeni think that i can help you BiQ
ingykassav: another solution in that specific case is 's/:/:\ /g', since that means the same as 's/:/: /g' but without a literal space after the ':' in the YAML source
ingyI've been wanting for some time to find a way to not have ': ' be ambiguous in yaml values... (in a future yaml spec version)
ingyI think it's unfortunate (though > gets you pretty far)
BiQhmm. okay, setup module seems to give what I want, from ansible_default_ipv4
BiQmrproper: thanks
mrproperBiQ: I assumed it would be somewhere in there.
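With facts gathered, the default-route interface BiQ wanted is available directly; a sketch:

```yaml
- name: Show the interface that carries the default IPv4 route
  debug:
    msg: "{{ ansible_default_ipv4.interface }} has {{ ansible_default_ipv4.address }}"
```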
pastulioNogNeJaper: Ey blurrekup
NogNeJaper@pastulio VEIGE
pastulioNogNeJaper: What exactly are you trying to pass to the playbook import?
pastulioOtherwise you can add another 'play' and specify your variable in the 'var' section
NogNeJaper@pastulio I have to import 3 different playbooks and pass different variables along with the import (each imported playbook has different set of variables)
cukalHi, I'm using group_vars/windows.yml and winrm but I'm wondering how I can add different users when installing on ACC and PRD environments? Right now I have the acc user configured, but what if I now use another hostfile with nodes that require another user?
cukalI tried creating var files for each environment with the same contents as windows.yml and include them on the cmdline with "-e" but that did not work
kassavcan i include an yml file without removing duplicated keys?
cukalnever mind, I can add them -e ansible_username= on the cmdline, that will do fine
pastulioNogNeJaper: I can't directly find it, but can't you just specify the vars in the playbooks themselves using the 'vars' directive? or are they dynamic variables?
pastulioI guess you could always set an update a fact
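For NogNeJaper's case, import_playbook accepts a vars keyword in recent Ansible, so each import can carry its own variable set (filenames and variables here are placeholders):

```yaml
# site.yml
- import_playbook: webservers.yml
  vars:
    app_env: acceptance

- import_playbook: dbservers.yml
  vars:
    app_env: production
```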
lberkwhat the suggested way to install a glob of local files all at once? I'm trying to get yum to install 35 or so locally built rpms, but it looks like ansible is trying to call yum once for every rpm, which will fail because they all have various interdependencies
lberkI've tried a with_fileglob: variant, as well as a find: paths/patterns (register'ing the result) and then passing that to the yum section with with_items
pastuliolberk: you should not use with_items for yum, you should pass a list to packages
pastulioI am trying to find the page where I have read it before
lberkthanks, but here's my issue, if it's a locally built set of rpm's one of the things I'm testing in staging is that we haven't missed any deps, so it'd be much more beneficial to glob the full list instead of listing individually
lberkalso, the actual directory changes based on the version number (which I have registered vars for), trying to make this all automagic
pastulioYou can still use fileglob to generate the list, just not on the yum module
pastulioYou register the result of the fileglob module and then join the list with into a csv format
lberkhm, do you happen to have an example of that?
lberkjust so I could try
pastulioJust a second
pastulioI have to make it ;-)
lambiekNLbcoca: a few days ago we discussed pfexec. I did some tests and it turns out that in file ansible/playbook/ the line "becomecmd = '%s %s "%s"' % (exe, flags, success_cmd)" should be changed into "becomecmd = '%s %s %s' % (exe, flags, command)" to get pfexec working
pastuliolberk: sorry for the delay, was a bit busy here, but here is the example:
fragloshi, i wrote a playbook for keyboard configuration with the module but the configuration for debian 9 seems to be active only after reboot. what do i have to do to enable the configuration for shell login?
pastuliolberk: basically, I am using find to do the globbing and then using a jinja pipeline to turn the result in to a CSV list of files
lberkpastulio: thanks, taking a look
lberkpastulio++ thank you that worked!
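A sketch of the find-then-join approach pastulio described (the path and version variable are hypothetical):

```yaml
- name: Glob the locally built RPMs
  find:
    paths: "/opt/rpms/{{ build_version }}"
    patterns: '*.rpm'
  register: local_rpms

- name: Install them in a single yum transaction so interdependencies resolve
  yum:
    name: "{{ local_rpms.files | map(attribute='path') | list | join(',') }}"
    state: present
```

Newer versions of the yum module also accept the list directly, without the join.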
agaffneysivel: finally relented, eh?
agaffneysivel: did someone show up to plead their case again, or did you just decide that it had been enough time?
sivelagaffney: it was my intention to remove eventually. No one pleaded since then
sivelthe list has an upper limit, so generally people shouldn't be left there indefinitely
agaffneyheh, you have to unban people to ban new people!
pastuliolberk: Great! No problem :-)
mmercerok.... lets see if i cant get past this last hangup xD
bcocaACTION takes bets
mmercermy money is on.... 'only with help!'
mmercer"msg": "Unexpected templating type error occurred on ({{ (alb_targets.stdout | from_json).TargetHealthDescriptions | map(attribute='Target.Id') | list }}): expected string or buffer"
mmerceri believe thats also one that is likely affected by the 'type' changes, im just not sure how or why
agaffneyit's not obvious which part of that is complaining. just start stripping off parts from the end until it stops complaining, and then you'll know where the problem is :)
mmercerdamnit xD was kind of afraid of that, lol. ok, off to make a test play that does very similar so I don't have to waste time going through this entire sequence =D
bigmyxtrying to make a conditional variable like this: `scm_branch: "{{ curr_branch | search('rel_.*') | ternary(curr_branch, 'master') }}"`, getting error: """ FAILED! => {"msg": "Unexpected templating type error occurred on ({{ curr_branch | search('rel_.*') | ternary(curr_branch, 'master') }}): expected string or buffer"} """
systestI'm trying to write a binary file using b64 encoded var and copy with `content: "{{ test| b64decode }}"` However, that j2 template does not decode correctly. (see ) Any suggestion how to to decode the b64 var, short of shelling out to `base64` ?
agaffneybigmyx: that probably means that 'curr_branch' is not a string
agaffneysystest: what do you mean when you say that b64decode doesn't decode it correctly? how did you encode it in the first place?
bigmyxagaffney: I was trying to define both facts in a same `set_fact` block, probably that was a problem... thanks!
systestagaffney, from a shell with `base64`
agaffneybigmyx: yeah, that won't work. you can't have one fact depend on another in a 'set_fact' block, because the vars aren't actually set until the 'set_fact' task finishes
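A sketch of the split bigmyx needed (the pipe lookup is a stand-in for however curr_branch is really obtained):

```yaml
# The two facts must live in separate set_fact tasks: nothing in a
# set_fact block is defined until the whole task finishes
- set_fact:
    curr_branch: "{{ lookup('pipe', 'git rev-parse --abbrev-ref HEAD') }}"

- set_fact:
    scm_branch: "{{ (curr_branch | search('rel_.*')) | ternary(curr_branch, 'master') }}"
```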
systestI've also done it from python
sivelsystest: that won't work, you cannot write binary via `content`
sivelthe jinja gets evaluated early, meaning you are still sticking binary into the module args
systestsivel, bummer. I can always play games with shell/command. however, is there any way to do it wiath native/stock ansible?
sivelA jinja template might work though, but I haven't tried
systesthmm, that's a thought i..e. template vs copy
sivelput `{{ test| b64decode }}` in a template, and use the template module to put it in place
systestunderstood, I'll give it a shot
sivelI've not tried, but off hand it seems like it could work
systestsivel, no joy but worth the shot. thanks
sivelsystest: might you also be running into issues because of the added line breaks?
agaffneyline breaks generally shouldn't be a problem for base64, since they're not part of the character set. look at PEM format, for example
systestsivel, don't think so. When I do that syntax for other formats or write the raw base64 it works as expected
systeste.g. I can "base64 --decode" the b64 file that gets written in the test snippet and it works fine
geoffthi! I have a host_vars file that contains foo: "[1 , 2 , 3]" and bar: "{{ foo }}". why does it get parsed as a list and cause bar to have type list?
geofftif I make foo invalid JSON e.g. "[1 , 2 , 3", then bar remains a string
systestI'll just put the b64 file on the box and do a command base64 to decode it (with some checks to not do that if it already exists)
agaffneysystest: the real problem is likely that ansible forces everything in the play to utf-8, which is problematic for binary content. the 'copy' module with 'src' is the only "guaranteed" way to make it work
agaffneyas the 'src' file gets directly copied to the target machine using 'scp' (or similar)
systestagaffney, expected I may be tripping over something like that, especially with J2
geofftcan I force it to be a string in both cases? (if I wanted them to be a list in both cases, "{{ foo | from_json }}" correctly gives me an error, but I want it to not be parsed)
systestnot a big deal, I can work around it. thanks all for the suggestions / info
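A sketch of the decode-on-the-target workaround systest settled on (the variable name and paths are placeholders):

```yaml
# Base64 text is plain ASCII, so it survives Ansible's utf-8 handling
- name: Ship the still-encoded payload
  copy:
    content: "{{ blob_b64 }}"
    dest: /tmp/payload.b64

- name: Decode on the target; 'creates' skips the task once the file exists
  shell: base64 --decode /tmp/payload.b64 > /opt/payload.bin
  args:
    creates: /opt/payload.bin
```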
sivelgeofft: due to how ansible and jinja2 work with each other, we turn things that look like python data structures, into python data structures.
agaffneygeofft: that's a side effect of the way ansible converts the string output from jinja back to native python types
geofftoh ugh
geofftbecause jinja is a string templating language and not like an actual data structure templating language right
geofftcan I suppress this somehow?
sivelgeofft: we just got a "native types" PR merged into jinja, but it will be a while before we can fully support that in ansible
bcocastill then look at 'type filters' |string |int |bool, etc
sivelgeofft: I don't know enough about what you are doing to answer you though
bcocageofft: but the filters have to be used 'on consumption' not on definition
geofftso I may be doing something extremely confused along the way here, but
bcocaso bar: "{{foo|string}}"
agaffneygeofft: try this: bar: "{{ foo | string }}"
bcocaor in this case to_json would also work
geofftI'm trying to set something in group_vars/all that comes from {{ hostvars["localhost"]["something"]["stdout"] }}
agaffneythat *should* short-circuit the native type conversion logic
geofftand I don't want stdout parsed for JSON
geofftoh let me give |string a shot
geofftalso maybe I should explain what I'm really trying to do here because probably there's a better way to do this whole thing...
bcocato_json just spoke to your example, also it will convert any strucutre into json string, which is what many want, just not your case
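geofft's host_vars case with the suggested filter applied; as agaffney notes, this should short-circuit the native-type conversion:

```yaml
# host_vars/somehost.yml
foo: "[1 , 2 , 3]"
bar: "{{ foo | string }}"   # stays a string; bare "{{ foo }}" would become a list
```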
geofft$work has a homegrown password manager that involves shelling out, it is slow, and doing password: lookup("pipe") seems to happen once per host and per task
geofftthat password isn't going to change during a playbook, so I'd like to fetch it once and cache it
bcocalookups are evaluated each time, but you can do set_fact to 'save' the value
bcocai.e now: '{{lookup(
geofftright. so I can set a fact on localhost or something, in a play at the very top
bcoca'pipe', 'date' ..
bcocavs set_fact: start: '{{lookup('pipe', 'date' ...
geofftbut then I'm still accessing hostvars.localhost.password everywhere, right? and I wanted to just alias that to password
bcocause first play on localhost to set fact, use vars in 2nd play to create alias
geofftso I'm doing this thing in group_vars/all, but I definitely do not want anyone evaluating the password in case it's JSON, which is how I got here :)
geofftoh, do a play on each host?
geofftthat could work yeah
bcoca mypass: '{{ hostvars[localhost][thevar]
bcocano, just 2 plays, one for localhost, the other for rest
bcocaor you can do all in one play with runonce + delegate_to lcoalhost + dleegate facts
bcocathe vars alias will still work, just not evaluate till after the 'localhost set fact task'
mmercerok, now im confused...
bcocathat is the natural human state
agaffneyembrace the confusion
geofftoooh I found the jinja2.nativetypes PR and it's pretty exciting.
mmercerwhen are they expected to merge that ?
geofftit looks like it's already in jinja 2.10 stable
mmercerhmm - -- error.txt is baffling me... almost seems like a bug, but maybe im misinterpreting
mmerceri gather new instances in the alb (1 instance) -- register that against a fact, i add a new instance to the alb, i attempt to remove the original instance via the fact variable... and during the removal, its trying to execute against *2* hosts.... i expect it to only be operating against the 1 host (and even then, its a local connection operation)
geofftok, {hosts: all, gather_facts: false, tasks: {run_once: true, set_fact: {password: "{{ lookup('pipe'...) }}"}}} is doing what I want
geofftsince it's a lookup I apparently don't even need to mess with delegating to localhost
geofftI still don't totally understand run_once's magic but that's okay :)
geofftand {{ | string }} also works for the other thing I was trying. thanks bcoca and agaffney!
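Putting geofft's final shape together (the fetch-password path is hypothetical):

```yaml
- hosts: all
  gather_facts: false
  tasks:
    - name: Evaluate the slow lookup once; run_once shares the fact with every host
      set_fact:
        password: "{{ lookup('pipe', '/usr/local/bin/fetch-password') | string }}"
      run_once: true
```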
mmercerhrm, no bugs filed against with_items that i saw/see
zamolxisguys, any help here? I'm trying to access a variable from a json output and I don't seem to be able to.,7,21 I want to access the variables under AVAILABLE, for example
zamolxismmercer: you mean using the |to_nice_json?
mmercerzamolxis: from what I saw, you got back a json already ?
zamolxisthe output from the target is already in JSON, yes
mmercerright... so read it in as: (ocm_json_status.stdout | from_json) and continue parsing it as normal
zamolxisok, so msg="{{ ocm_json_status.stdout|from_json }}" would return the exact same output as withoout the from_json filter. msg="{{ ocm_json_status.stdout.AVAILABLE|from_json }}" should work?
flowerysongzamolxis: No, you use from_json to turn the JSON string into a data structure. ocm_json_status.stdout is a string, not a data structure, so it doesn't have an element called 'AVAILABLE'.
zamolxisflowerysong: thanks, man. it worked. I hadn't used this from_json filter to convert data types
mmercertnx flowerysong, i couldnt figure out how to explain it properly so figured one of you guys could =D
flowerysongmmercer: "expected string or buffer" is an error from from_json, if that helps your debugging.
zamolxisI just assumed that this data type is... just like the setup modules's output and anything can be accessed directly, without any conversion
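The pattern flowerysong described, sketched (the command producing the JSON is hypothetical):

```yaml
- name: Capture JSON on stdout
  command: /opt/ocm/bin/status --json
  register: ocm_json_status

- name: Parse the string into a structure, then index into it
  debug:
    msg: "{{ (ocm_json_status.stdout | from_json).AVAILABLE }}"
```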
mmercerflowerysong: well, its weird... its operating on *2* hosts, but it shouldnt be
mmercerthere is only 1 result in the target list that im invoking, so why is it trying to work on 2
mmerceram i misreading it or is it a bug or expected behavior -- in the past it worked as expected, now it has 2 items in the list from the looks of it, which would never work
imcdonaHow do I obtain the network interface names on a host? The only facts that mention interface names are things like "ansible_eth3" which is great assuming I know that the target system has an eth3. At the end of the day I want a list of network devices and associated IP addresses. My thinking is that if I can get a list of network devices I can gather the IP's based on the "ansible_networkinterfacename" tree. Thoughts?
DerDuddleansible_interfaces is that list, isn't it?
imcdonaGeeze! Why couldn't I find that!. Yup. Exactly what I needed
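A sketch of walking that list (note that dashes in interface names become underscores in the per-interface fact names):

```yaml
- name: Dump each interface's facts, including any IPv4 address
  debug:
    var: vars['ansible_' ~ (item | replace('-', '_'))]
  with_items: "{{ ansible_interfaces }}"
```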
rewilliamsHey guys, is it possible to run a powershell script from the ansible control server?
agaffneyrewilliams: it's possible to run anything anywhere that you have an appropriate interpreter for
mmercerlocal action
rewilliams#agaffney , so i guess just install powercli to ansible control server and run local commands ?
boxrickIs there a simple way to make Ansible ignore the an ansible.cfg file for that run, or per playbook?
agaffneyset the ANSIBLE_CONFIG env var to point to an empty/non-existent file
boxrickCool thanks
agaffneythere's no way to do it from within a playbook
agaffneyas that's too late to change the config file
billyBobhi ya'll, in my playbook i need to become_user: blah where can i define the password that corresponds with that account? i tried --ask-become at exexution but i did not get prompted for it
billyBobi will try again...hi ya'll, in my playbook i need to become_user: blah where can i define the password that corresponds with that account? i tried --ask-become at exexution but i did not get prompted for it
agaffneybillyBob: --ask-become isn't a valid option. there's the --ask-become-pass (-K) option, which is probably what you want. also, you need to set 'become: yes' in your play for that 'become_user' to have any effect
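A minimal sketch of what agaffney described (the hosts pattern and user are placeholders); run it with `ansible-playbook play.yml -K` to be prompted for the become password:

```yaml
- hosts: app_servers
  become: yes        # without this, become_user has no effect
  become_user: blah
  tasks:
    - command: whoami   # should report 'blah'
```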
mrproperI am working with the ios_interface module and notice it’s relatively limited in what it can do. It can set some of the physical port characteristics, but not much around VLANs, etc. Is this deliberate (focus on physical) or have the other features just not been configured?
mrproperSame goes for the net_interface module, but I get that is a little harder to code since it’s so OS specific.
mrpropernet_l2_interface, there ya go.
thxffoi am getting an error trying to clone my git repo from ansible tower... peers certificate issuer is not recognized
thxffois there a way to ignore this from tower?
Dan0maNcurious, if you have a nested dictionary in host_vars with one subelement and the different subelement in group_vars, do they merge or does the precedence overwrite replace the lesser dictionary? (man, i hope that question makes sense)
flowerysongThe default behaviour is to overwrite variables completely. There's an option to enable hash merging, but personally I think that's a bad idea.
Dan0maNyeah. i wouldn't want to stray from default too crazily
Dan0maNi'll just rename it
azamatI'm writing a module, does anyone have a quick way to store a var for later use if the module is invoked again?
sivelazamat: typically modules should not be stateful. If the module needs info, then the user should be instructed to provide it. The module could return that data to be registered, and the the user decides what to do with it
finsteror maybe write a custom fact
_KaszpiR_azamat sounds like creating local fact, but that should be outside of module
azamatThis is for a hashi vault, storing the nonce after retrieving the first secret
azamatI could output the nonce and store it as a fact outside the module and then rereference it
azamatbut that just feels dirty
agaffneyazamat: how/where would you store this value?
agaffneyit *is* dirty
agaffneyand "wrong"
azamatagaffney: in memory?
agaffneythat's the kind of thing that should be handled at the playbook level
azamatthe only place secrets should live
agaffneywhere in memory? ansible modules run as a separate process from ansible-playbook (and usually on a different host), and they don't stick around after they perform their task
azamatwe run ansible on the actual host with cloud init
azamatthey provision themselves
agaffneythere's still no way for a module to store anything persistently in memory and automatically retrieve it later
azamatnot persistent
azamatjust for the duration of the play
agaffneyyou can do that sort of thing with an action plugin, but it still doesn't sound like a great idea
agaffneyduration of the play or for eternity...there's still no way
agaffneypersistent == still there after the task completes
azamatif I invoke the module 50 times I need to provide the nonce vault spits out on the first secret request
azamatI can do this outside the module at the playbook level
agaffneythen you need to capture it somehow with a task in your playbook, and then feed it into your module as a param each time that it's invoked
azamatwas just hoping I wouldn't have to do that
sivel'tis how it's done
agaffneymodules are designed to be isolated and ephemeral
agaffneyansible pushes a chunk of code to the remote host, runs it, gets back JSON, and then deletes the code it pushed (if not using pipelining)
rmstarhey guys. i am using password for my hosts. But it fails when it is the first time i am logging into the hosts because of no entry in known_hosts file. Is there a flag that i can add to avoid this problem?
holstarmstar: That would undermine SSH entirely.
rmstarthanks guys :)
rmstarsivel: i will test with that flag :)
sivelsetting host key checking to false, uses trust on first use via `StrictHostKeyChecking no`
rmstarah! that's exactly what i am looking for. thanks
agaffneyephemeral cloud instances + SSH host key checking = PITA (but still a necessary one)
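One way to express sivel's trust-on-first-use setting per group rather than globally (a sketch; the group name is a placeholder):

```yaml
# group_vars/ephemeral_cloud.yml -- auto-accept host keys on first connect
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
```

Alternatively, `host_key_checking = False` in ansible.cfg disables the check for every host.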
lupineyou can get the SSH host key via a trusted mechanism, generally
lupineit's often right there in the same API you get the IP from
lupineoccasionally you have to go via OOB
agaffneyin what cloud provider? the SSH host key is usually generated by the init script the first time that sshd is run in an instance
sivelcrazy security people
trwythagaffney: or you can inject a pre-generated, trusted key
larsksACTION thinks this conversation may be confusing host keys with the public keys used for authentication.
agaffneylarsks: maybe. I know what *I'm* talking about :)
sivelwell, you could generate your own host keys, and have them put in place, not out of the question
trwythACTION didn't see anyone talking about personal keypairs
larsksBut I'm pretty sure that what lupine said earlier about "SSH host keys" was actually not talking about host keys. At least, most cloud providers I'm aware of don't provide host keys via an API, because those are generated, but they do provide authentication keys.
agaffneyif you have the infrastructure to pre-gen SSH host keys, store/distribute them, and ensure they are passed in the user-data of every instance created, then sure you can inject them. most people don't have (and don't want to deal with) that kind of setup
agaffneyor just aren't paranoid enough to justify it :)
agaffneyI've found a good balance of paranoia and apathy
lupinesome do, some don't
lupinethose that don't are a bit awful but you can generally still get it via OOB access
lupineor you can manage the host keys yourself, aye, but it's better if they never leave the machine they're on
agaffneylupine: out of curiosity, how would you do this in EC2 without injecting a SSH host key at launch?
agaffneyyou might get the generated SSH host key fingerprint showing up in the console log, but probably not the value you'd need to stick in known_hosts
trwyththe problem with keys that never leave the host they're on is that when your service gets re-deployed, you get all new host keys... and then everyone who logs into that system has to choose to trust them again
trwythif you have a backup system sufficient to store your most important secrets (and you do, don't you?) then you can store reusable host keys
trwythif you don't do that, you're stuck eating TOFU again and again, but then so's most of the world
agaffneyI only bother with preserving SSH host keys when SSH is customer-facing system, which is not common
lupineI'm not really familiar with ec2
lupine suggests using OOB access and has a wrapped-up way of doing so
lupineyou can also use PKI for host keys, but I'm not aware of anyone actually doing so
agaffneythat just says you can manually verify the new SSH host key fingerprint by checking it against what shows in the console log, which is what I was suggesting above
lupineyeah, that's fine
Tugzhey all. I have some json which im importing into a variable using the from_json filter
agaffneyyou can't pre-populate known_hosts with that, though
lupinesure you can
Tugzhowever some of the map names has a dash in the name
Tugzhow do i access them? seems like ansible freaks out when i try
lupineI mean, not directly
agaffneyTugz: foo['bar-baz']
lupinebut you can use ssh-keyscan and verify that the scanned key is the right one
lupineshame we can't all just rely on SSHFP
Tugzagaffney, i tried that: "{{ creds.['x-networkId'] }}" does not work. however another variable in the same json works: "{{ creds.version }}"
agaffneythere are lots of options, but none of them are particularly good. for most people, paranoia doesn't win our over laziness, and SSH host key verification becomes mostly meaningless
lupineyeah. it's a real pity
agaffneyTugz: {{ creds['x-networkId'] }}
agaffneydon't use the . before [
lupineI had to fight tooth & nail to manage it correctly in a feature at work recently
lupine(i.e. asking the user to verify the fingerprint at all)
lupineand of course, most of the time the user will just click through the verification
agaffneyeven as someone that cares about security, I still mostly just "click through" the SSH host key verification
holstaACTION moves slightly away from agaffney
lupineI go as far as verifying, and asking someone in the know if it changes
agaffneyI'm not particularly concerned about someone MITM'ing my connection to a server that I just created
lupineyay the 1%, etc
agaffneyI'll definitely follow up if a previously accepted host key no longer matches
agaffneybut I never verify on initial connection
lupineI do, unless I know the exact path the traffic is taking and that there's nothing dodgy on it
lupine(my office used to be directly above a datacentre, good times)
agaffneyyou are more paranoid than I
holstaI auto-deploy host keys to a shared known_hosts so people don't have to think about those things most of the time.
lupineI used to think people worried about wide-scale active intercepts were paranoid
lupinebut then we learned it was happening all the time
agaffneyI've done that before, but it often doesn't work for initial connection during provisioning. it works great after the fact
holstaIt's not paranoia. It's risk management and our risk profiles are not identical.
lupinewell, that's the thing about the kind of attacks I'm worrying about here. they affect everyone indiscriminately
agaffneyholsta: I'd argue that it is paranoia, but justified paranoia
holstaACTION does security/risk for a living. May be biased.
lupineI'm not particularly concerned about targeted intercept
holstaAll it takes is helping the 'wrong' journalist or human rights activist.
agaffneyfor most use cases, I think it's perfectly fine to use '-o StrictHostKeyChecking=no' to automatically add new hosts to known_hosts. I'd never suggest completely disabling host key checking or anything like that
lupinethat's the thing, it's not
lupinemost use cases cross the public internet
lupineand that's not safe, even if you're not the subject of a targeted intercept
agaffneyI only suggest that because it's often difficult to actually verify the SSH host key on initial connection
holstaI think the solution is to make it so simple/easy to get host keys securely that it's the obvious best thing to do.
holstaThe less humans have to think, the easier they are to manipulate into secure habits.
lupineshame cloud types don't want to put the effort in
agaffneywe need something like SSH, that will allow a user to connect remotely in a standard way to retrieve the SSH host public key...
agaffneyif it was handled at the cloud layer, there would need to be specific support in the AMI/image, and some side channel for the guest to communicate that info to the host
lupineSSHFP would be fine if only DNS were trustworthy
larsksThere's the monkeysphere project, which uses gpg to authenticate host keys (and user keys).
lupinethere's some work on that, at least, with dns over http
agaffneyor the cloud provider would need to be the one pre-gen'ing and injecting SSH host keys into guests
agaffneyeven with SSHFP, the problem is that at some point you have to trust *something* that you don't have entirely under your control
lupinesure, that's fine as long as who that is can be quantified
lupine"the cloud provider, anyone who has hacked them and anyone who can legally serve them warrants" is nice and understandable
agaffneylet's all trust some random guy named Steve. problem solved!
lupine"anyone along this piece of string, and everyone who can get access, legally or illegally, to it" is not
agaffneyI barely trust myself, much less anyone else
lupinemm, I do try to run things inhouse rather than on cloudy providers too
lupineeven email, which is suffering
agaffneyI gave up running my own email long ago and farmed it out to google
agaffneyI trust-ish them
lupinethe stuff they're honest about is more than enough to send me running
lupineI am not here to be sold to :D
lupinebut it's so difficult, that's a reasonable choice for almost nobody
agaffneyfor me, it was dealing with spam that pushed me over to gmail for my personal domain. trying to keep up with spamassassin and postgrey was more work than I cared to do on my own time
agaffneyand even dealing with Barracuda appliances in the past with work wasn't very impressive
lupineyeah, I still have frequent floods of spam. doesn't help that I've got a wildcard address set up
lupinethe various dnsbls help, but it's a time sink
agaffneyI mostly switched from Gentoo Linux to Ubuntu for my personal stuff around the same time I moved email for my personal domain to gmail, and for much the same reasons. I was tired of having to continuously mess with it
Walexlupine: agaffney: I have "discovered" a pretty much perfect antispam method, but it requires running a DNS server:
jrkohey guys, does anyone know how can i prepend something to every item in a list? example:
agaffneyWalex: you can at least partially achieve that by using that helps you at least track where it was exposed
agaffneyjrko: {{ my_list | map('regex_replace', '^', 'Id: ') | list }}
jrkoagaffney: let me give that a try, thx
lupinewell, who doesn't run their own DNS server?
lupineWalex: it's an interesting idea though. I tend to use <company>@<domain>, but me@<company>.domain would also be shiny
jrkoagaffney: i think my question language was wrong, that does indeed work but creates a single string, and i think i need a key with integer index
jrko"Id": "foo" vs "Id: foo"
mrproperDoes Ansible allow me to do string manipulation? My example is XYZ123-4, allow me to isolate the 2nd through 5th characters and concatenate with another string?
moritzmrproper: I'd try the slice operator that also works in python
mrpropermoritz: Sounds like what we need.
mrpropermoritz: That’s exactly what I needed. Thanks.
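The slice approach moritz pointed at, sketched on mrproper's example (the '-SUFFIX' literal is a placeholder):

```yaml
- set_fact:
    # 'XYZ123-4'[1:5] -> 'YZ12' (2nd through 5th characters, zero-indexed)
    part_code: "{{ 'XYZ123-4'[1:5] ~ '-SUFFIX' }}"
```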
agaffneyjrko: I don't think there's a simple way to prepend the list index to each item in a list
larsksjrko: can you use with_indexed_items to get what you want?
agaffneyjrko: also, it almost looks like you want to create a dict (with numeric keys) from a list, rather than prepend to a string
jrkoyes, it seems i need to convert a list into a dict, not extract the index number itself
agaffneythat's even less easy. jinja (and by extension ansible) isn't great at data manipulation like that
agaffneyyou could write a simple custom filter plugin to do what you want
jrkotrying to use the |combine filter, but not quite getting it
bcocaactually, i thnk a filter that does just that, got added
jrkoor perhaps i'm going about this the wrong way and maybe someone can shed some light on the "targets" parameter of the elb_target_group module:
jrkoit says it expects a list, but the format looks like dict: "Id": "instance-id"
jrkoso to keep things dynamic, i get a list of my target instances, but need to prepend the "Id": key to add any targets
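flowerysong's paste didn't survive, but a json_query expression along these lines produces the list-of-dicts shape the module wants (requires the jmespath Python package on the control node; variable names are placeholders):

```yaml
# Turns ['i-aaa', 'i-bbb'] into [{'Id': 'i-aaa'}, {'Id': 'i-bbb'}]
- set_fact:
    alb_target_list: "{{ instance_ids | json_query('[].{Id: @}') }}"
```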
mmercerbcoca: what would cause a with items loop to disregard the number of items in a variable set ?
agaffneymmercer: can you elaborate on what you mean by that?
mmercerin error.html, the part that fails shows the step attempting to operate on both hosts, the old host (which is not a member of the actual play set), and the new host (which is a member of the play inventory)... the old host is added to the alb_targets fact in order to be operated on independently...
agaffneycan you give a line number or something? :)
agaffneythat's just a large wall of non-colored text
mmercerthe host with the 'error' isnt expected to be operated on, so to speak -- it should be removed from the alb target group, but the host itself shouldnt be getting added to the play or anything, its being handled by a local connection and there is only one host being operated on in the first place...
agaffneymmercer: the task that registers 'alb_targets' only appears to have run on one host, which is the one that succeeds on the later task
mmerceragaffney: exactly
Julius__Howdy. Is there a way to force Ansible to change the backup name/location? I'm trying to update a cron file, want to back it up, but I don't want it backed up in /etc/cron.d...
mmercerthats the problem -- the 'remove target' task should be running against localhost, and removing the 'failed' machine from the aws target group, but theres no reason that the task is attempting to execute against the host... it shouldnt even be in the playset inventory in the first place
agaffneyI don't see anything that would cause the gathering task to run on one host and the task that uses it to run on multiple hosts
mmercerim not even quite sure how to report it as a bug because im not quite sure how to describe it or simplify it, unfortunately
agaffneyJulius__: no, I don't think so. you can always just copy it somewhere yourself before updating it
jrkoflowerysong: nice one! that looks to be it, thank you!
agaffneyflowerysong: I really need to learn the json_query() magic
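For reference, the json_query() approach alluded to above would look roughly like this when applied to mmercer's `alb_targets` data; the JMESPath expression is an assumption based on the structure shown later in the log, and the filter needs the `jmespath` Python library installed on the control node.

```yaml
- set_fact:
    target_ids: "{{ alb_targets.stdout | from_json
                    | json_query('TargetHealthDescriptions[].Target.Id') }}"
```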
agaffneymmercer: what's the difference between the play at the top that doesn't work and the one at the bottom that does? the top one doesn't seem to actually be a play, as there's no 'hosts' and such
mmerceragaffney: ahh. i stripped out a few things to test the most limited set i could think of to see if it was an issue with the '{{ (alb_targets.stdout | from_json).TargetHealthDescriptions | map(attribute='Target.Id') | list }}'
agaffneymmercer: are you only seeing this with 2.5.0[ab], or also with 2.4.x?
mmerceri have not tested 2.5.0b1 yet. there were a few things that forced me to update to 2.5 back in the day, but i cant remember what, unfortunately. currently just on 2.5.0a1
mmercerill update to 2.5.0b1 in a little bit and see if I can reproduce it there still
agaffneyalso check 2.4.x if you can reproduce in 2.5.0b1. if it's only in 2.5.0b1, then definitely create a github issue, even if you can't quite give an isolated test case yet
mmercerwill do
agaffneyyou should probably just test the current tip of the stable-2.5 branch rather than 2.5.0b1
dfedy'all have a good weekend.
mmercerit will probably take a bit before im able to test, as I have to create a safe method to do so... right now theres no 'safe' way for me to do it without it potentially breaking our environment
mmercertip of the stable 2.5?
mmerceris it tagged currently or ?
agaffneyyou can change "remove" tasks to use 'debug' instead and keep the same jinja expression
mmercerahh, thats a good point
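A sketch of that dry-run idea, reusing the exact Jinja expression from mmercer's task so the loop and templating behaviour can be observed without touching the target group:

```yaml
# swap the destructive module for debug, keep the same expression
- debug:
    msg: "would deregister {{ (alb_targets.stdout | from_json).TargetHealthDescriptions
          | map(attribute='Target.Id') | list }}"
```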
agaffneyit's a branch -->
mmercerahh, nice
mmercermissed that above, ok, ill grab that
agaffneyit was created from 'devel' yesterday, and all 2.5.x releases will be cut from there
mmerceri was on when nitz released b1 =D
mmercer( to my own suffering, lol )
geofft... is there a downside to gathering = smart?
agaffneyin most use cases, probably not
agaffneyit just means you'll need to explicitly gather facts if you did something in a previous play to add a new custom fact and want it available
agaffneybut only within the same playbook run where you add it
agaffneyI'm not sure why 'smart' isn't the default, aside from the fact that the option probably didn't exist originally
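The setting being discussed lives in ansible.cfg:

```ini
[defaults]
# skip fact gathering for hosts that already have facts cached
# from earlier in the run
gathering = smart
```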
mmercerbugfixes are backported, features are not, right ?
agaffneyjudgement calls can be made, but that's how it generally works for the stable-x.y branches
Julius__@agaffney Yeah, I was hoping I could avoid adding another task in for that, but that's certainly viable. Thanks!
zoredacheIs there some easy trick to split a string into chunks in jinja2, then merge it again with a delimiter? I have a fact with the value '47ec163c14d15722', I want to insert colons '47ec:163c:14d1:5722'.
agaffneysplitting without a delimiter can be tricky. you could do what you want with regex_replace()
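A sketch of the regex_replace() approach for zoredache's value: match each 4-character chunk that is not at the end of the string and append a colon. Note the quoting: YAML single quotes on the outside keep the `\\1` intact so Jinja passes a real backreference to the regex engine.

```yaml
- debug:
    msg: '{{ "47ec163c14d15722" | regex_replace("(.{4})(?!$)", "\\1:") }}'
    # -> 47ec:163c:14d1:5722
```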
abelurI need some help with using lineinfile with regex and backrefs set to true. My code keeps returning an error "invalid matching group".
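abelur's code isn't shown, but "invalid matching group" usually means the backreference in `line` points at a capture group that `regexp` doesn't define, or that YAML quoting mangled the backslash before lineinfile saw it. A hedged sketch with a hypothetical file and pattern:

```yaml
# hypothetical example: '\1' in line must correspond to a capture
# group in regexp; YAML single quotes keep the backslash literal
- lineinfile:
    path: /etc/example.conf
    regexp: '^(max_connections)\s*=.*$'
    line: '\1 = 100'
    backrefs: yes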
abelurbut this works well in py:
zoredacheagaffney: Thanks. The regex_replace will work. Though it is kinda ugly.